This change removes the internal pad function in favor of a more
optimized paddedAppend function. Unlike pad, which would always allocate
a new slice of the desired size and copy the bytes into it, paddedAppend
only appends the leading padding when necessary and uses the builtin
append to copy the remaining source bytes. pad was also used in
combination with another call to the builtin copy func to copy into a
zeroed byte slice. As the slice is now created using make with an
initial length of zero, this copy can also be removed.
As confirmed by poking the bytes with the unsafe package, gc does not
zero array elements between the len and cap when allocating slices
with make(). In combination with the paddedAppend func, this results
in only a single copy of each byte, with no unnecessary zeroing, when
creating the serialized pubkeys. This has not been tested with other
Go compilers (namely, gccgo and llgo), but the new behavior is still
functionally correct regardless of compiler optimizations.
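A minimal sketch of the idea (the function in the tree may differ
slightly, and the caller snippet below is illustrative):

    // paddedAppend appends src to dst, prepending only as many zero bytes
    // as are needed to reach the requested size.  No intermediate slice is
    // allocated and each source byte is copied exactly once.
    func paddedAppend(size uint, dst, src []byte) []byte {
        for i := 0; i < int(size)-len(src); i++ {
            dst = append(dst, 0)
        }
        return append(dst, src...)
    }

    // Caller side: start with a zero-length slice so nothing is zeroed or
    // copied twice (buffer size and field names are illustrative).
    b := make([]byte, 0, 33)
    b = append(b, format)
    b = paddedAppend(32, b, x.Bytes())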
The TestPad function has been removed as the pad func it tested has
likewise been removed.
ok @davecgh
Ordinarily, getwork will return an error if btcd is not connected to any
other peers. This commit relaxes that requirement when running in
regression test mode since it is useful for development purposes.
While here, also improve the check that returns an error when getwork is
not current so that it is skipped while the best chain height is zero,
since the code never believes it is current at height 0.
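A sketch of the relaxed checks (the config and helper names here are
illustrative rather than the exact identifiers in the handler):

    // Refuse to hand out work with no peers connected, except in
    // regression test mode where mining without peers is useful for
    // development.
    if !cfg.RegressionTest && s.server.ConnectedCount() == 0 {
        return nil, errors.New("btcd is not connected")
    }

    // Only treat the chain as out of date once it is past height 0, since
    // it never reports itself as current at height zero.
    if latestHeight != 0 && !s.server.blockManager.IsCurrent() {
        return nil, errors.New("btcd is not yet current")
    }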
This commit adds a full suite of tests for the new reject message added in
protocol version 70002 to bring the overall test coverage of btcwire back
up to 100%.
Closes #9.
This commit adds the new reject protocol message added to recent versions
of the reference implementation. This message is intended to be used in
response to messages from a remote peer when they are rejected for some
reason, such as blocks being rejected due to not conforming to the chain
rules, transactions double spending inputs, or a version message sent
after one has already been received.
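Roughly, the message names the command being rejected, carries a reject
code and a human-readable reason, and, for block and transaction
rejections, the hash of the rejected item. A sketch of the struct (field
names here are illustrative):

    // RejectCode identifies why a message was rejected (values defined by
    // BIP61, e.g. malformed, invalid, obsolete, duplicate).
    type RejectCode uint8

    // MsgReject is sent in response to a rejected message.
    type MsgReject struct {
        Cmd    string     // command of the rejected message, e.g. "block" or "tx"
        Code   RejectCode // machine-readable reject code
        Reason string     // human-readable reason for the rejection
        Hash   ShaHash    // hash of the rejected block or tx, when applicable
    }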
This is work toward issue #9.
- Correct MsgFilterLoad max payload
- Enforce max flag bytes per merkle block
- Improve and finish tests to include testing all error paths
- Add fast paths for BloomUpdateType
- Convert all byte fields to use read/writeVarBytes
- Style and consistency updates
- README.md and doc.go updates
Closes #12.
- Group the new read/writeVarBytes functions together to be consistent
with the existing code
- Modify the comments on the new read/writeVarBytes to be a little more
descriptive and consistent with existing code
- Use "test payload" for field name in the tests for the
read/writeVarBytes functions which is more accurate
- Remove reserved param from NewAlert since there is no point in having
the caller deal with a reserved param
- Various comment tweaks for clarity and consistency
- Use camel case for function params for consistency
- Move the NewAlert and NewAlertFromPayload functions after the receiver
definitions for code layout consistency
Closes#11.
Along the same lines as the previous commit, the RPCs that return
serialized data structures should use the max protocol version btcd
supports as opposed to the maximum protocol version btcwire supports.
The getinfo RPC should return the max protocol version btcd supports as
opposed to the maximum protocol version btcwire supports. Currently they
are both the same value, so there is no issue. However, they will not
always be the same.
Since the GitHub contributors list only covers a limited period of time,
it's better to also maintain a contributors list in the repository rather
than relying solely on GitHub.
* Introduced common functions readVarBytes and writeVarBytes (see the
sketch after this list).
* Added type Alert which knows how to deserialize
the serialized payload and also serialize itself back.
* Updated MsgAlert BtcEncode/BtcDecode methods to handle the
new Alert.
* Sane limits are placed on variable length fields like SetCancel
and SetSubVer
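A sketch of the pair, assuming the existing readVarInt/writeVarInt
helpers (exact signatures and error text in the tree may differ):

    // readVarBytes reads a variable-length byte array: a varint count
    // followed by that many bytes.  maxAllowed caps the count so a
    // malicious peer cannot force a huge allocation; fieldName is only
    // used to build a descriptive error.
    func readVarBytes(r io.Reader, pver uint32, maxAllowed uint32,
        fieldName string) ([]byte, error) {

        count, err := readVarInt(r, pver)
        if err != nil {
            return nil, err
        }
        if count > uint64(maxAllowed) {
            return nil, fmt.Errorf("%s is larger than the max allowed size "+
                "[count %d, max %d]", fieldName, count, maxAllowed)
        }
        b := make([]byte, count)
        if _, err := io.ReadFull(r, b); err != nil {
            return nil, err
        }
        return b, nil
    }

    // writeVarBytes writes the length of bytes as a varint followed by the
    // bytes themselves.
    func writeVarBytes(w io.Writer, pver uint32, bytes []byte) error {
        if err := writeVarInt(w, pver, uint64(len(bytes))); err != nil {
            return err
        }
        _, err := w.Write(bytes)
        return err
    }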
This commit modifies InfoResult to remove all of the omitempty tags since
the fields should be present in the marshalled JSON regardless of whether
they are zero values.
This commit modifies the field types of GetMiningInfoResult to better match
their underlying types. While the information was all available and
accurate, it's nicer to treat numeric values as proper types instead of
all float64s.
Also, a new networkhashps field has been added for compatibility with the
reference implementation.
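Illustratively, the result type ends up shaped roughly like the struct
below (field names and exact types here are not authoritative); the point
is that integral values use integer types rather than float64, and a
networkhashps field is present:

    type GetMiningInfoResult struct {
        Blocks           int64   `json:"blocks"`
        CurrentBlockSize uint64  `json:"currentblocksize"`
        CurrentBlockTx   uint64  `json:"currentblocktx"`
        Difficulty       float64 `json:"difficulty"`
        Generate         bool    `json:"generate"`
        GenProcLimit     int32   `json:"genproclimit"`
        HashesPerSec     int64   `json:"hashespersec"`
        NetworkHashPS    int64   `json:"networkhashps"`
        PooledTx         uint64  `json:"pooledtx"`
        TestNet          bool    `json:"testnet"`
    }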
This commit updates the block manager's local chain state when a block is
processed by submitting it directly to the block manager, as opposed to
only when it comes from the network.
Also, it modifies the submitblock RPC to use the concurrent safe block
manager process block instead of the unsafe btcchain version.
The combination of these two fixes ensures the internal block manager chain
state is properly synced with the actual btcchain state regardless of how
blocks are added.
This change fixes rescan to include transactions that pay to the
pubkey for a rescanned pubkey hash address. This behavior was lost
when the rescan was optimized for specific types of the
btcutil.Address interface.
ok @davecgh
This commit correctly sets the error in the marshalled reply if it is
already a *btcjson.Error. Previously it would only set the error if it
was not of that type which led to some RPC results showing no error when
they actually had one.
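The fix amounts to handling both cases when filling in the reply's error
field; a sketch (the reply field and the fallback error are illustrative):

    // Use the error directly when it is already a *btcjson.Error,
    // otherwise wrap it so the reply always carries the failure.
    if err != nil {
        if jsonErr, ok := err.(*btcjson.Error); ok {
            reply.Error = jsonErr
        } else {
            reply.Error = &btcjson.Error{
                Code:    btcjson.ErrInternal.Code,
                Message: err.Error(),
            }
        }
    }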
This change overrides the cmd return value of custom registered
methods to always return an unparsableCmd if the marshaling as a
JSON-RPC request succeeded, but the request was an invalid structure
for the custom method.
In Discover, the response was lowercased for comparison. However,
this caused a 404 - Not Found when fetching the URL provided by
the location header if the URL contained uppercase characters.
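The general shape of the fix is to leave the response intact and only
compare case-insensitively; a hypothetical helper illustrating the
pattern (not the actual Discover code):

    // locationURL returns the value of the location header from a raw
    // SSDP response.  Only the header-name comparison is case-insensitive;
    // the URL itself is returned verbatim so its case is preserved.
    func locationURL(response string) string {
        for _, line := range strings.Split(response, "\r\n") {
            name, value, ok := strings.Cut(line, ":")
            if ok && strings.EqualFold(strings.TrimSpace(name), "location") {
                return strings.TrimSpace(value)
            }
        }
        return ""
    }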
ok @owainga
In practice, the races caused by not protecting these quite simply didn't
matter; they couldn't actually cause any damage whatsoever. However, I
am sick of hearing about these essentially false positives whenever
someone runs the race detector (yes, I know the race detector has no
false positives, but this was effectively harmless).
verified to shut the detector up by dhill.
If we switch the Knuth shuffle to the version that swaps each element
with an element between it and the end of the array, then once we have
picked the number of elements we need, they won't change later in the
algorithm. Terminating at that point means we do at most 23% of the
length of the array worth of random swaps.
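A generic sketch of the technique (not the address manager code itself):

    // pickRandom chooses n elements uniformly at random from s using a
    // Fisher-Yates (Knuth) shuffle that swaps index i with a random index
    // in [i, len(s)).  Positions before i never change again, so the
    // shuffle can stop as soon as n positions have been fixed instead of
    // shuffling the whole slice.
    func pickRandom[T any](s []T, n int) []T {
        if n > len(s) {
            n = len(s)
        }
        for i := 0; i < n; i++ {
            j := i + rand.Intn(len(s)-i) // j is in [i, len(s))
            s[i], s[j] = s[j], s[i]
        }
        return s[:n]
    }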
We make ka.na immutable in the address manager. Whenever we would update
the structure, we instead replace it with a new copy. This beats copying
every address once per getaddr command (at the maximum that is just over
23000 addresses we would be copying, compared to at most 2000 copies on a
new getaddr that has all addresses we know with newer dates).
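A minimal sketch of the copy-on-write pattern (type and field names are
illustrative):

    type netAddress struct {
        Timestamp time.Time
        Services  uint64
        IP        net.IP
        Port      uint16
    }

    type knownAddress struct {
        na *netAddress // treated as immutable once published
    }

    // update swaps in a modified copy rather than mutating the shared
    // value, so any caller already holding the old pointer (for example a
    // getaddr response being assembled) keeps a consistent snapshot and no
    // bulk copy of every address is needed.
    func (ka *knownAddress) update(ts time.Time) {
        naCopy := *ka.na
        naCopy.Timestamp = ts
        ka.na = &naCopy
    }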
On unknown inventory types, handleGetDataMsg would loop forever.
After fixing that, if a getdata request only had unknown inventory
types, it would block forever.
ok @davecgh