Before: BenchmarkReadTxIn 1000000 2435 ns/op
After: BenchmarkReadTxIn 1000000 1427 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkReadTxOut 500000 4576 ns/op
After: BenchmarkReadTxOut 2000000 871 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkWriteTxOut 500000 4050 ns/op
After: BenchmarkWriteTxOut 10000000 248 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkReadOutPoint 500000 2946 ns/op
After: BenchmarkReadOutPoint 5000000 582 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkWriteOutPoint 500000 2664 ns/op
After: BenchmarkWriteOutPoint 10000000 151 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkWriteVarStr4 1000000 1114 ns/op
BenchmarkWriteVarStr10 1000000 1352 ns/op
After:
BenchmarkWriteVarStr4 5000000 291 ns/op
BenchmarkWriteVarStr10 10000000 248 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkReadVarStr4 1000000 1698 ns/op
BenchmarkReadVarStr10 1000000 1812 ns/op
After:
BenchmarkReadVarStr4 2000000 853 ns/op
BenchmarkReadVarStr10 5000000 712 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
This commit slightly optimizes the readVarInt function in the case of
multiple-byte variable length integers. It also reduces the amount of
memory garbage it generates.
Before:
BenchmarkReadVarInt1 5000000 386 ns/op
BenchmarkReadVarInt3 5000000 693 ns/op
BenchmarkReadVarInt5 2000000 793 ns/op
BenchmarkReadVarInt9 5000000 709 ns/op
After:
BenchmarkReadVarInt1 5000000 387 ns/op
BenchmarkReadVarInt3 5000000 471 ns/op
BenchmarkReadVarInt5 5000000 575 ns/op
BenchmarkReadVarInt9 5000000 473 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
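For reference, the following is a minimal sketch of the general decoding
approach the readVarInt benchmarks exercise: a single fixed-size scratch
array covers both the discriminant byte and any multi-byte payload, so the
common paths avoid per-call heap allocations. It is an illustration under
those assumptions, not the actual btcwire implementation.

    package main

    import (
        "encoding/binary"
        "fmt"
        "io"
        "strings"
    )

    // readVarInt decodes a Bitcoin-style variable length integer. A single
    // stack-allocated scratch array is reused for the discriminant byte and
    // any multi-byte payload, which avoids per-call heap garbage.
    func readVarInt(r io.Reader) (uint64, error) {
        var b [8]byte

        if _, err := io.ReadFull(r, b[:1]); err != nil {
            return 0, err
        }

        switch b[0] {
        case 0xff: // a uint64 follows
            if _, err := io.ReadFull(r, b[:8]); err != nil {
                return 0, err
            }
            return binary.LittleEndian.Uint64(b[:8]), nil
        case 0xfe: // a uint32 follows
            if _, err := io.ReadFull(r, b[:4]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint32(b[:4])), nil
        case 0xfd: // a uint16 follows
            if _, err := io.ReadFull(r, b[:2]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint16(b[:2])), nil
        default: // the discriminant byte itself is the value
            return uint64(b[0]), nil
        }
    }

    func main() {
        v, err := readVarInt(strings.NewReader("\xfd\x01\x02"))
        fmt.Println(v, err) // 513 <nil>
    }

Reading into a stack-allocated array rather than a freshly allocated slice
on every call is the kind of change that keeps the memory garbage down.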
This commit adds tests for the new SerializeSize functions for variable
length integers and transactions (and indirectly transaction inputs and
outputs).
This commit adds a new function named SerializeSize to the public API for
MsgTx, TxOut, and TxIn which can be used to determine how many bytes the
serialized data would take without having to actually serialize it.
The following benchmark shows the difference between using the new
function to get the serialize size for a typical transaction and
serializing into a temporary buffer and taking the length of it:
Buffer: BenchmarkTxSerializeSizeBuffer 200000 7050 ns/op
New: BenchmarkTxSerializeSizeNew 100000000 18 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
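To show what the benchmark compares from a caller's point of view, here is
a hedged usage sketch; the constructor, the BtcEncode signature, and the
import path are assumptions for illustration rather than verified details
of the btcwire API at the time.

    package main

    import (
        "bytes"
        "fmt"

        "github.com/conformal/btcwire" // import path assumed for illustration
    )

    func main() {
        tx := btcwire.NewMsgTx()

        // Old approach: serialize into a throwaway buffer just to learn
        // how long the result is.
        var buf bytes.Buffer
        if err := tx.BtcEncode(&buf, btcwire.ProtocolVersion); err != nil {
            fmt.Println("encode:", err)
            return
        }
        fmt.Println("size via buffer:", buf.Len())

        // New approach: compute the size directly, no serialization work.
        fmt.Println("size via SerializeSize:", tx.SerializeSize())
    }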
Most variable length integers are small numbers, so this commit reverses
the order of the if checks in writeVarInt to assume smaller numbers are
more common.
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
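The following sketch illustrates the reordering described above (not the
actual btcwire code): the sub-0xfd single-byte case, which covers most
values in practice, is tested first so typical calls exit after one
comparison.

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
        "math"
    )

    // writeVarInt encodes a Bitcoin-style variable length integer, testing
    // the small (and most common) cases first.
    func writeVarInt(w io.Writer, val uint64) error {
        var b [9]byte
        switch {
        case val < 0xfd: // single byte, no discriminant: the typical case
            b[0] = byte(val)
            _, err := w.Write(b[:1])
            return err
        case val <= math.MaxUint16:
            b[0] = 0xfd
            binary.LittleEndian.PutUint16(b[1:3], uint16(val))
            _, err := w.Write(b[:3])
            return err
        case val <= math.MaxUint32:
            b[0] = 0xfe
            binary.LittleEndian.PutUint32(b[1:5], uint32(val))
            _, err := w.Write(b[:5])
            return err
        default:
            b[0] = 0xff
            binary.LittleEndian.PutUint64(b[1:9], val)
            _, err := w.Write(b[:9])
            return err
        }
    }

    func main() {
        var buf bytes.Buffer
        _ = writeVarInt(&buf, 300)
        fmt.Printf("%x\n", buf.Bytes()) // fd2c01
    }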
Several of the bitcoin data structures contain variable length entries,
many of which have well-defined maximum limits. However, there are still
a few cases, such as variable length strings and the number of
transactions, which don't have clearly defined maximum limits. Instead
they are only
limited by the maximum size of a message.
In order to efficiently decode messages, space is pre-allocated for the
slices which hold these variable length pieces so as to avoid needing to
dynamically grow the backing arrays. Due to this, however, it was
previously possible to claim extremely high slice lengths which exceed
available memory (or the maximum allowed slice lengths).
This commit imposes limits on all of these cases based on calculating
the maximum possible number of elements that could fit into a message
and using those as sane upper limits.
The variable length string case was found (and tests added to hit it) by
drahn@, which prompted an audit to find all such cases.
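As a sketch of the technique, with illustrative names and numbers rather
than the actual btcwire limits: divide the maximum message payload by the
minimum serialized size of one element to get an upper bound on how many
elements could possibly fit, and reject any announced count above it
before allocating.

    package main

    import "fmt"

    // Illustrative numbers only; the real limits are derived from the
    // actual btcwire message and element sizes.
    const (
        maxMessagePayload = 1024 * 1024 * 32 // assumed maximum message size
        minTxInPayload    = 41               // assumed minimum serialized TxIn size
        maxTxInPerMessage = maxMessagePayload/minTxInPayload + 1
    )

    // checkTxInCount rejects an announced element count before any slice is
    // allocated, so a hostile count cannot force a huge backing array.
    func checkTxInCount(count uint64) error {
        if count > maxTxInPerMessage {
            return fmt.Errorf("too many transaction inputs for message: %d (max %d)",
                count, maxTxInPerMessage)
        }
        return nil
    }

    func main() {
        fmt.Println(checkTxInCount(10))      // <nil>
        fmt.Println(checkTxInCount(1 << 40)) // error: count exceeds the limit
    }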
This commit changes the InvVect_* constants, which are not standard Go
style, to InvType*. In order to preserve backwards compatibility, it
also adds a legacy.go file which maps the old public constant names to the
new ones.
Closes #1.
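A sketch of what such a legacy.go mapping can look like; the specific
constant names and values below are inferred from the description above
rather than copied from the repository.

    package btcwire

    // InvType represents the allowed types of inventory vectors.
    type InvType uint32

    // New, Go-style constant names (values shown for illustration).
    const (
        InvTypeError InvType = 0
        InvTypeTx    InvType = 1
        InvTypeBlock InvType = 2
    )

    // legacy.go: deprecated aliases keep the old identifiers compiling
    // while pointing callers at the new names.
    const (
        InvVect_Error = InvTypeError
        InvVect_Tx    = InvTypeTx
        InvVect_Block = InvTypeBlock
    )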
It is technically possible for the Read method on a reader to return zero
bytes read with a nil error even though that behavior is "discouraged" by
the interface documentation. This commit switches the read of the first
byte to use io.ReadFull which will always error in this case.
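The following self-contained sketch (with a contrived reader, not btcwire
code) shows the difference: a bare Read call can report (0, nil) and look
like success, while io.ReadFull never reports success without filling the
buffer and surfaces the failure instead.

    package main

    import (
        "fmt"
        "io"
    )

    // flakyReader models the "discouraged but legal" behavior: it returns
    // zero bytes with a nil error before hitting EOF. Contrived purely for
    // illustration.
    type flakyReader struct{ calls int }

    func (f *flakyReader) Read(p []byte) (int, error) {
        f.calls++
        if f.calls == 1 {
            return 0, nil // zero bytes read, nil error
        }
        return 0, io.EOF
    }

    func main() {
        var b [1]byte

        // A bare Read call can report (0, nil), which looks like success
        // even though nothing was read.
        r1 := &flakyReader{}
        n, err := r1.Read(b[:])
        fmt.Println(n, err) // 0 <nil>

        // io.ReadFull keeps reading until the buffer is filled or a real
        // error occurs, so the problem is reported here.
        r2 := &flakyReader{}
        n, err = io.ReadFull(r2, b[:])
        fmt.Println(n, err) // 0 EOF
    }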
Several of the messages store the parts that have a variable number of
elements as slices. This commit modifies the code to choose sane defaults
for the backing arrays of these slices so that, when entries are actually
appended, much of the overhead of growing the backing arrays and copying
the data multiple times is avoided.
Along the same lines, when decoding messages, the actual size is known
and the backing array is now pre-allocated instead of dynamically grown,
thereby avoiding some overhead.
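A sketch of the two patterns, with illustrative type and constant names: a
sane default capacity on the encode side, and an exact pre-allocation on
the decode side once the element count has been read (and validated).

    package main

    import "fmt"

    // TxIn stands in for a message element; the real types live in btcwire.
    type TxIn struct{}

    // defaultTxInAlloc is an illustrative "sane default" capacity, chosen
    // so a typical transaction never needs to grow the backing array.
    const defaultTxInAlloc = 15

    func main() {
        // Building a message: start with a reasonable capacity so appends
        // don't repeatedly reallocate and copy the backing array.
        txIns := make([]*TxIn, 0, defaultTxInAlloc)
        for i := 0; i < 5; i++ {
            txIns = append(txIns, &TxIn{})
        }

        // Decoding a message: the element count was just read from the wire
        // (and validated against an upper limit), so allocate exactly once.
        count := uint64(5) // stands in for a decoded varint
        decoded := make([]*TxIn, 0, count)
        for i := uint64(0); i < count; i++ {
            decoded = append(decoded, &TxIn{})
        }

        fmt.Println(len(txIns), cap(txIns), len(decoded), cap(decoded))
    }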
This function is a convenience method to create a new NetAddress from a
net.IP and uint16 port, as opposed to a net.Addr, which must be of type
*net.TCPAddr. This allows callers to support connection types that don't
provide access to a concrete *net.TCPAddr implementation.
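A hedged usage sketch; the function name matches the description above,
but the exact signature (including the services argument), the import
path, and the field names printed are assumptions for illustration.

    package main

    import (
        "fmt"
        "net"

        "github.com/conformal/btcwire" // import path assumed for illustration
    )

    func main() {
        // Previously a caller needed a net.Addr that was concretely a
        // *net.TCPAddr; an IP and port are now enough, which suits proxied
        // or otherwise non-TCP connection types.
        ip := net.ParseIP("127.0.0.1")

        // NOTE: the services argument and the exact signature shown here
        // are assumptions based on the description above.
        na := btcwire.NewNetAddressIPPort(ip, 8333, btcwire.SFNodeNetwork)
        fmt.Println(na.IP, na.Port)
    }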
Both of these depend on the serialized bytes, which are dependent on the
version field in the block/transaction. They must be independent of the
protocol version so there is no need to require it.
This commit introduces two new functions for MsgBlock and MsgTx named
Serialize and Deserialize. The functions provide a stable mechanism for
serializing and deserializing blocks and transactions to and from disk
without having to worry about the protocol version. Instead these
functions use the Version fields in the blocks and transactions.
These new functions differ from BtcEncode and BtcDecode in that the latter
functions are intended to encode/decode blocks and transactions for the
wire, a format which technically can differ depending on the protocol
version and doesn't even really need to be the same format as the stored
data.
Currently, there is no difference between the two, and due to how
intertwined they are in the reference implementation, they may not ever
diverge, but there is a difference and the goal for btcwire is to provide
a stable API that is flexible enough to deal with encoding changes.
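A usage sketch of the intended round trip; the import path is assumed and
the error handling is illustrative, but the point it shows is that neither
call takes a protocol version.

    package main

    import (
        "bytes"
        "fmt"

        "github.com/conformal/btcwire" // import path assumed for illustration
    )

    func main() {
        tx := btcwire.NewMsgTx()

        // Serialize uses the transaction's own Version field, so the stored
        // bytes do not depend on any wire protocol version.
        var buf bytes.Buffer
        if err := tx.Serialize(&buf); err != nil {
            fmt.Println("serialize:", err)
            return
        }

        // Later, e.g. when loading from disk, Deserialize restores it the
        // same way, again without a protocol version argument.
        var loaded btcwire.MsgTx
        if err := loaded.Deserialize(bytes.NewReader(buf.Bytes())); err != nil {
            fmt.Println("deserialize:", err)
            return
        }
        fmt.Println("round-tripped version:", loaded.Version)
    }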
The go vet command complains about untagged struct initializers when
defining a ShaHash directly. This seems to be a limitation where go vet
does not exclude the warning for types which are a constant size byte
array, like it does for normal constant size byte array definitions.
This commit simply modifies the tests to use a constant definition cast to a
ShaHash to overcome the limitation of go vet.
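For illustration, a self-contained example of the pattern; the type and
constant are redefined locally to mirror btcwire, and the hash bytes are
arbitrary example values.

    package main

    import "fmt"

    // HashSize and ShaHash are redefined locally to mirror the btcwire
    // type so the example stands alone.
    const HashSize = 32

    type ShaHash [HashSize]byte

    // Writing the literal directly as ShaHash{0x06, 0xe5, ...} triggers the
    // go vet composite literal warning described above. Defining it as a
    // plain constant size byte array and converting it does not.
    var testHash = ShaHash([HashSize]byte{
        0x06, 0xe5, 0x33, 0xfd, 0x1a, 0xda, 0x86, 0x39,
        0x1f, 0x3f, 0x6c, 0x34, 0x32, 0x04, 0xb0, 0xd2,
        0x78, 0xd4, 0xaa, 0xec, 0x1c, 0x0b, 0x20, 0xaa,
        0x27, 0xba, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
    })

    func main() {
        fmt.Printf("%x\n", testHash[:])
    }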