This commit modifies the RandomUint64 function so that rather than
returning an io.ErrShortBuffer when the system does not have enough
entropy available, it now blocks until it does have enough. This means
that RandomUint64 will now always eventually succeed unless the entropy
source is closed (which only really ever happens when the operating system
is shutting down).
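For illustration, a minimal sketch of the blocking behavior (the helper
name and the use of crypto/rand.Reader here are stand-ins, not the
actual btcwire implementation):

    package main

    import (
        "crypto/rand"
        "encoding/binary"
        "fmt"
        "io"
    )

    // randomUint64 reads 8 bytes from the given entropy source using
    // io.ReadFull, which keeps reading until all of the bytes have
    // arrived rather than giving up on a short read. It only fails if
    // the source itself returns an error (e.g. it has been closed).
    func randomUint64(r io.Reader) (uint64, error) {
        var buf [8]byte
        if _, err := io.ReadFull(r, buf[:]); err != nil {
            return 0, err
        }
        return binary.BigEndian.Uint64(buf[:]), nil
    }

    func main() {
        n, err := randomUint64(rand.Reader)
        if err != nil {
            fmt.Println("entropy source failed:", err)
            return
        }
        fmt.Println(n)
    }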
The tests have also been updated for the change in semantics to maintain
100% coverage.
Closes #23.
This commit adds the new reject protocol message introduced in recent
versions of the reference implementation. This message is intended to
be sent in response to messages from a remote peer that are rejected
for some reason, such as blocks that do not conform to the chain rules,
transactions that double spend inputs, and duplicate version messages
sent after the initial one.
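For context, a rough sketch of what such a reject message carries
(following BIP 61); the type names and code values below are purely
illustrative rather than btcwire's actual identifiers:

    package sketch

    // rejectCode describes why a message was rejected. The values
    // below follow the codes defined by BIP 61.
    type rejectCode uint8

    const (
        rejectMalformed rejectCode = 0x01 // message could not be decoded
        rejectInvalid   rejectCode = 0x10 // e.g. block violates chain rules
        rejectObsolete  rejectCode = 0x11 // e.g. protocol version too old
        rejectDuplicate rejectCode = 0x12 // e.g. duplicate version message
    )

    // msgReject sketches the payload sent in response to a rejected
    // message.
    type msgReject struct {
        Cmd    string     // command of the rejected message, e.g. "block" or "tx"
        Code   rejectCode // machine readable rejection code
        Reason string     // human readable reason
        Hash   [32]byte   // hash of the rejected block/tx, when applicable
    }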
This is work toward issue #9.
- Correct MsgFilterLoad max payload
- Enforce max flag bytes per merkle block
- Improve and finish tests to include testing all error paths
- Add fast paths for BloomUpdateType
- Convert all byte fields to use read/writeVarBytes
- Style and consistency updates
- README.md and doc.go updates
Closes #12.
- Group the new read/writeVarBytes functions together to be consistent
with the existing code
- Modify the comments on the new read/writeVarBytes to be a little more
descriptive and consistent with existing code
- Use "test payload" for field name in the tests for the
read/writeVarBytes functions which is more accurate
- Remove reserved param from NewAlert since there is no point in having
  the caller deal with a reserved param
- Various comment tweaks for clarity and consistency
- Use camel case for function params for consistency
- Move the NewAlert and NewAlertFromPayload functions after the receiver
definitions for code layout consistency
Closes #11.
* Introduced common methods readVarBytes, writeVarBytes.
* Added type Alert which knows how to deserialize
the serialized payload and also serialize itself back.
* Updated MsgAlert BtcEncode/BtcDecode methods to handle the
new Alert.
* Sane limits are placed on variable length fields like SetCancel
and SetSubVer.
This commit exports the VarIntSerializeSize function to provide callers
with an easy method to determine how many bytes it would take to serialize
the passed value as a variable length integer.
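The calculation itself is straightforward since a variable length
integer occupies 1, 3, 5, or 9 bytes depending on its magnitude; a
sketch of the idea (not necessarily the exported function's exact
form):

    package sketch

    // varIntSerializeSize returns the number of bytes required to
    // serialize v as a Bitcoin-style variable length integer: a single
    // byte for values below 0xfd, otherwise a discriminant byte plus a
    // 2, 4, or 8 byte value.
    func varIntSerializeSize(v uint64) int {
        switch {
        case v < 0xfd:
            return 1
        case v <= 0xffff:
            return 3
        case v <= 0xffffffff:
            return 5
        default:
            return 9
        }
    }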
This commit modifies the writeElement function to have a "fast path"
which uses type assertions for all of the types which btcwire writes so
that the more expensive reflection-based binary.Write can be avoided.
Also, all cases that were writing raw ShaHash (32-byte) arrays, which
requires a stack copy, have been changed to simply pass the pointer
instead.
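A simplified sketch of the fast path idea: a type switch handles the
concrete types that are actually written, and only unknown types fall
back to the reflection-based binary.Write (the types shown are just a
small subset):

    package sketch

    import (
        "encoding/binary"
        "io"
    )

    // writeElement serializes the element using a type switch fast
    // path for a handful of the concrete types that are actually
    // written, avoiding the reflection performed by binary.Write.
    // Anything unhandled falls through to the slower path.
    func writeElement(w io.Writer, element interface{}) error {
        var buf [8]byte
        switch e := element.(type) {
        case uint32:
            binary.LittleEndian.PutUint32(buf[:4], e)
            _, err := w.Write(buf[:4])
            return err
        case uint64:
            binary.LittleEndian.PutUint64(buf[:8], e)
            _, err := w.Write(buf[:8])
            return err
        case *[32]byte: // e.g. a *ShaHash; the pointer avoids a stack copy
            _, err := w.Write(e[:])
            return err
        }

        // Fall back to the reflection-based write for everything else.
        return binary.Write(w, binary.LittleEndian, element)
    }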
The following benchmark results show the results for serializing a block header
after these changes:
Before: BenchmarkWriteBlockHeader 500000 5566 ns/op
After: BenchmarkWriteBlockHeader 1000000 991 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
The following benchmark results show the results for deserializing a block header:
Before: BenchmarkReadBlockHeader 500000 5916 ns/op
After: BenchmarkReadBlockHeader 1000000 2078 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkWriteVarStr4 1000000 1114 ns/op
BenchmarkWriteVarStr10 1000000 1352 ns/op
After:
BenchmarkWriteVarStr4 5000000 291 ns/op
BenchmarkWriteVarStr10 10000000 248 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkReadVarStr4 1000000 1698 ns/op
BenchmarkReadVarStr10 1000000 1812 ns/op
After:
BenchmarkReadVarStr4 2000000 853 ns/op
BenchmarkReadVarStr10 5000000 712 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
This commit slightly optimizes the readVarInt function in the case of
multiple-byte variable length integers. It also reduces the amount of
memory garbage it generates.
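A sketch of the kind of multi-byte fast path this describes, assuming
the standard encoding: the remaining bytes are read into a small
fixed-size array and decoded with the endian helpers directly rather
than going through an allocation-heavy interface-based read:

    package sketch

    import (
        "encoding/binary"
        "io"
    )

    // readVarInt decodes a Bitcoin-style variable length integer. The
    // multi-byte cases reuse a small stack buffer and decode it with
    // the binary.LittleEndian helpers, avoiding per-call heap garbage.
    func readVarInt(r io.Reader) (uint64, error) {
        var b [8]byte
        if _, err := io.ReadFull(r, b[:1]); err != nil {
            return 0, err
        }

        switch b[0] {
        case 0xfd:
            if _, err := io.ReadFull(r, b[:2]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint16(b[:2])), nil
        case 0xfe:
            if _, err := io.ReadFull(r, b[:4]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint32(b[:4])), nil
        case 0xff:
            if _, err := io.ReadFull(r, b[:8]); err != nil {
                return 0, err
            }
            return binary.LittleEndian.Uint64(b[:8]), nil
        default:
            return uint64(b[0]), nil
        }
    }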
Before:
BenchmarkReadVarInt1 5000000 386 ns/op
BenchmarkReadVarInt3 5000000 693 ns/op
BenchmarkReadVarInt5 2000000 793 ns/op
BenchmarkReadVarInt9 5000000 709 ns/op
After:
BenchmarkReadVarInt1 5000000 387 ns/op
BenchmarkReadVarInt3 5000000 471 ns/op
BenchmarkReadVarInt5 5000000 575 ns/op
BenchmarkReadVarInt9 5000000 473 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
This commit adds a new function named SerializeSize to the public API for
MsgTx, TxOut, and TxIn which can be used to determine how many bytes the
serialized data would take without having to actually serialize it.
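A sketch of the idea for a transaction output, assuming the usual wire
layout of an 8 byte value followed by a varint-prefixed script (the
names here are illustrative): the size is computed arithmetically
rather than by serializing into a throwaway buffer.

    package sketch

    // txOut is a minimal stand-in for a transaction output.
    type txOut struct {
        Value    int64
        PkScript []byte
    }

    // serializeSize returns the number of bytes the output would
    // occupy on the wire without actually serializing it: an 8 byte
    // value plus the varint-prefixed public key script.
    func (t *txOut) serializeSize() int {
        scriptLen := uint64(len(t.PkScript))

        // Size of the varint length prefix: 1, 3, 5, or 9 bytes
        // depending on the magnitude of the script length.
        prefixSize := 1
        switch {
        case scriptLen >= 0x100000000:
            prefixSize = 9
        case scriptLen >= 0x10000:
            prefixSize = 5
        case scriptLen >= 0xfd:
            prefixSize = 3
        }

        return 8 + prefixSize + len(t.PkScript)
    }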
The following benchmark shows the difference between using the new
function to get the serialize size for a typical transaction and
serializing into a temporary buffer and taking the length of it:
Buffer: BenchmarkTxSerializeSizeBuffer 200000 7050 ns/op
New: BenchmarkTxSerializeSizeNew 100000000 18 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Most variable length integers encode smaller numbers, so this commit
reverses the order of the if checks in the writeVarInt function so that
the smaller, more common values are tested first.
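A sketch of the reordered checks, assuming the standard encoding: the
single byte case, which is by far the most common, is tested first so
typical values take the shortest path.

    package sketch

    import (
        "encoding/binary"
        "io"
    )

    // writeVarInt serializes v as a Bitcoin-style variable length
    // integer. The checks run from smallest to largest so the common
    // small values are handled first.
    func writeVarInt(w io.Writer, v uint64) error {
        var buf [9]byte
        switch {
        case v < 0xfd:
            buf[0] = byte(v)
            _, err := w.Write(buf[:1])
            return err
        case v <= 0xffff:
            buf[0] = 0xfd
            binary.LittleEndian.PutUint16(buf[1:3], uint16(v))
            _, err := w.Write(buf[:3])
            return err
        case v <= 0xffffffff:
            buf[0] = 0xfe
            binary.LittleEndian.PutUint32(buf[1:5], uint32(v))
            _, err := w.Write(buf[:5])
            return err
        default:
            buf[0] = 0xff
            binary.LittleEndian.PutUint64(buf[1:9], v)
            _, err := w.Write(buf[:9])
            return err
        }
    }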
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Several of the bitcoin data structures contain variable length entries,
many of which have well-defined maximum limits. However, there are still
a few cases, such as variable length strings and the number of
transactions, which don't have clearly defined maximum limits. Instead,
they are only
limited by the maximum size of a message.
In order to efficiently decode messages, space is pre-allocated for the
slices which hold these variable length pieces so as to avoid needing
to dynamically grow the backing arrays. Due to this, however, it was
previously possible to claim extremely high slice lengths which exceed
available memory (or maximum allowed slice lengths).
This commit imposes limits on all of these cases based on calculating
the maximum possible number of elements that could fit into a message
and using those as sane upper limits.
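A sketch of the kind of derived limit this describes, using transaction
inputs as an example (the constants are illustrative rather than
btcwire's exact values): the smallest possible serialized element
bounds how many of them can fit in a maximum-size message, and the
claimed count is validated against that bound before any slice is
pre-allocated.

    package sketch

    import "fmt"

    const (
        // maxMessagePayload is the largest allowed message payload
        // (illustrative value: 32 MB).
        maxMessagePayload = 32 * 1024 * 1024

        // minTxInPayload is the smallest possible serialized
        // transaction input: 36 byte previous outpoint + 1 byte script
        // length varint + 4 byte sequence.
        minTxInPayload = 36 + 1 + 4
    )

    // checkedTxInCount validates a claimed input count read from the
    // wire before it is used to pre-allocate a slice. Since each input
    // occupies at least minTxInPayload bytes, no valid message can
    // carry more than maxMessagePayload/minTxInPayload of them.
    func checkedTxInCount(claimed uint64) (int, error) {
        maxTxInPerMessage := uint64(maxMessagePayload / minTxInPayload)
        if claimed > maxTxInPerMessage {
            return 0, fmt.Errorf("too many transaction inputs to fit "+
                "into a max size message [count %d, max %d]", claimed,
                maxTxInPerMessage)
        }
        return int(claimed), nil
    }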
The variable length string case was found (and tests added to hit it) by
drahn@, which prompted an audit to find all of the remaining cases.
It is technically possible for the Read method on a reader to return zero
bytes read with a nil error even though that behavior is "discouraged" by
the interface documentation. This commit switches the read of the first
byte to use io.ReadFull which will always error in this case.
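A sketch of the switch for a single leading byte (the helper name is
illustrative): io.ReadFull only returns once the byte has actually been
read or an error occurs, so a (0, nil) result from the underlying
reader can never be mistaken for a successful read.

    package sketch

    import "io"

    // readDiscriminant reads the single leading byte of a variable
    // length integer. io.ReadFull retries a (0, nil) result from the
    // underlying reader, so the call only returns once the byte has
    // been read or a real error has occurred.
    func readDiscriminant(r io.Reader) (byte, error) {
        var b [1]byte
        if _, err := io.ReadFull(r, b[:]); err != nil {
            return 0, err
        }
        return b[0], nil
    }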