This commit changes all cases which generate default timestamps via
time.Now to limit the timestamp to one-second precision. The code which
serializes and deserializes timestamps already does this, but it is useful
to make sure defaults don't exceed the precision of the protocol either.
With this change there is less chance that developers using defaults will
end up with structures that have a higher time precision than what will
ultimately be sent across the wire.
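As a rough sketch of the idea (not necessarily the exact code in this
commit), a default timestamp can be limited to one-second precision
like so:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Truncate the current time to one-second precision so a default
        // timestamp never carries more precision than the protocol
        // encodes (Unix seconds).
        ts := time.Unix(time.Now().Unix(), 0)
        fmt.Println(ts)
    }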
This commit adds two new functions named ReadMessageN and WriteMessageN
which return an additional value for the number of bytes read or written,
respectively.
It also adds tests to ensure the number of bytes read and written are the
expected values both for successful reads/writes and unsuccessful ones.
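A hypothetical usage sketch follows; the import path and the exact
signatures shown are assumptions based on the btcwire API of the time:

    package main

    import (
        "bytes"
        "log"

        "github.com/conformal/btcwire"
    )

    func main() {
        var buf bytes.Buffer
        msg := btcwire.NewMsgVerAck()
        pver := uint32(btcwire.ProtocolVersion)

        // WriteMessageN behaves like WriteMessage but also reports the
        // number of bytes written, even when an error occurs partway
        // through.
        n, err := btcwire.WriteMessageN(&buf, msg, pver, btcwire.MainNet)
        if err != nil {
            log.Printf("wrote %d bytes before error: %v", n, err)
            return
        }

        // ReadMessageN similarly reports the number of bytes consumed
        // from the reader.
        n, _, _, err = btcwire.ReadMessageN(&buf, pver, btcwire.MainNet)
        if err != nil {
            log.Printf("read %d bytes before error: %v", n, err)
            return
        }
        log.Printf("read %d bytes", n)
    }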
Closes #6.
This commit introduces two new functions for BlockHeader named Serialize
and Deserialize. The functions provide a stable mechanism for serializing
and deserializing block headers to and from disk. The main benefit here
is deserialization of the header since typically only full blocks are
serialized to disk. Then when a header is needed, only the header portion
of the block is read and deserialized.
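A minimal round-trip sketch, assuming the exported BlockHeader fields
(Version, Timestamp, Bits) and the conformal/btcwire import path:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "time"

        "github.com/conformal/btcwire"
    )

    func main() {
        // Build a header with placeholder values; PrevBlock and
        // MerkleRoot are left as zero hashes for the sake of the example.
        header := btcwire.BlockHeader{
            Version:   1,
            Timestamp: time.Unix(time.Now().Unix(), 0),
            Bits:      0x1d00ffff,
        }

        // Serialize the header into its stable on-disk form.
        var buf bytes.Buffer
        if err := header.Serialize(&buf); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("serialized header is %d bytes\n", buf.Len())

        // Later, only these header bytes need to be read back from disk
        // and deserialized; the rest of the block can stay untouched.
        var decoded btcwire.BlockHeader
        if err := decoded.Deserialize(bytes.NewReader(buf.Bytes())); err != nil {
            log.Fatal(err)
        }
        fmt.Println("decoded version:", decoded.Version)
    }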
This commit removes the TxnCount field from the BlockHeader type and
updates the tests accordingly. Note that this change does not affect the
actual wire protocol encoding in any way.
The reason the field has been removed is that it really doesn't belong
there, even though the wire protocol wiki entry on the official bitcoin wiki
implies it does. The implication is an artifact from the way the
reference implementation serializes headers (MsgHeaders) messages. It
includes the transaction count, which is naturally always 0 for headers,
along with every header. However, in reality, a block header does not
include the transaction count. This can be evidenced by looking at how a
block hash is calculated. It is only up to and including the Nonce field
(a total of 80 bytes).
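For reference, the hashed header is Version (4 bytes) + PrevBlock (32) +
MerkleRoot (32) + Timestamp (4) + Bits (4) + Nonce (4) = 80 bytes.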
From an API standpoint, having the field as part of the BlockHeader type
results in several odd cases.
For example, the transaction count for MsgBlocks (the only place that
actually has a real transaction count since MsgHeaders does not) is
available by taking the len of the Transactions slice. As such, the
extra field in the BlockHeader is useless and could potentially get out
of sync and cause the encode to fail.
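For illustration only (assuming the MsgBlock.Transactions field), a
caller can derive the real count directly from the slice:

    // blockTxCount is a trivial sketch: the real transaction count for a
    // MsgBlock is simply the length of its Transactions slice, so no
    // separate counter field is needed.
    func blockTxCount(block *btcwire.MsgBlock) uint64 {
        return uint64(len(block.Transactions))
    }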
Another example is related to deserializing a block header from the
database in order to serve it in response to a getheaders (MsgGetheaders)
request. If a block header is assumed to have the transaction count as a
part of it, then deserializing a block header not only consumes more than
the 80 bytes that actually comprise the header as stated above, but you
then need to change the transaction count to 0 before sending the headers
(MsgHeaders) message. So, not only are you reading and deserializing more
bytes than needed, but worse, you generally have to make a copy of it so
you can change the transaction count without busting cached headers.
This is part 1 of #13.
This commit adds two new functions named NewMsgGetDataSizeHint and
NewMsgInvSizeHint. These are intended to allow callers which know in
advance how large the inventory lists will grow to provide that
information when creating the message. This in turn avoids the need to
perform several grow operations on the backing array when adding a
large number of inventory vectors.
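A usage sketch, assuming the exported InvList field on MsgInv and
MsgGetData and the conformal/btcwire import path:

    package main

    import (
        "fmt"

        "github.com/conformal/btcwire"
    )

    func main() {
        // If the caller already knows roughly how many inventory vectors
        // the message will carry, the size hint lets the backing array be
        // allocated once instead of being grown repeatedly as vectors are
        // added.
        const expectedVectors = 1000
        inv := btcwire.NewMsgInvSizeHint(expectedVectors)
        getData := btcwire.NewMsgGetDataSizeHint(expectedVectors)

        fmt.Println("inv capacity:", cap(inv.InvList))
        fmt.Println("getdata capacity:", cap(getData.InvList))
    }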
This commit adds tests to ensure the new "fast" paths in readElement and
writeElement work properly, including proper fallback to the slower
reflection-based read/write of the binary package.
This commit modifies the writeElement function to have a "fast path"
which uses type assertions for all of the types btcwire writes so the
more expensive reflection-based binary.Write can be avoided.
It also changes all cases that were writing raw ShaHash (32-byte)
arrays, which requires a stack copy, to simply pass the pointer
instead.
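A simplified illustration of the fast-path idea (not the actual btcwire
implementation) looks roughly like this:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // writeElementSketch tries cheap type assertions for the concrete
    // types that are actually written and only falls back to the
    // reflection-based encoder for anything else.
    func writeElementSketch(w io.Writer, element interface{}) error {
        var scratch [8]byte
        switch e := element.(type) {
        case uint32:
            binary.LittleEndian.PutUint32(scratch[:4], e)
            _, err := w.Write(scratch[:4])
            return err
        case uint64:
            binary.LittleEndian.PutUint64(scratch[:8], e)
            _, err := w.Write(scratch[:8])
            return err
        case *[32]byte: // e.g. a hash passed by pointer to avoid a copy
            _, err := w.Write(e[:])
            return err
        }
        // Slow path: fall back to reflection-based binary.Write.
        return binary.Write(w, binary.LittleEndian, element)
    }

    func main() {
        var buf bytes.Buffer
        if err := writeElementSketch(&buf, uint32(70001)); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("% x\n", buf.Bytes())
    }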
The following benchmarks show the results for serializing a block
header after these changes:
Before: BenchmarkWriteBlockHeader 500000 5566 ns/op
After: BenchmarkWriteBlockHeader 1000000 991 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
The following benchmarks show the results for deserializing a block header:
Before: BenchmarkReadBlockHeader 500000 5916 ns/op
After: BenchmarkReadBlockHeader 1000000 2078 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
The benchmark results for the current commit:
Before: BenchmarkDeserializeTx 500000 4018 ns/op
After: BenchmarkDeserializeTx 500000 3780 ns/op
The cumulative benchmark results since commit b7b700fd5a:
Before: BenchmarkDeserializeTx 200000 10665 ns/op
After: BenchmarkDeserializeTx 500000 3780 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
The benchmark results for the current commit:
Before: BenchmarkSerializeTx 1000000 1233 ns/op
After: BenchmarkSerializeTx 1000000 1083 ns/op
The cumulative benchmark results since commit b7b700fd5a:
Before: BenchmarkSerializeTx 1000000 5437 ns/op
After: BenchmarkSerializeTx 1000000 1083 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkWriteTxIn 5000000 422 ns/op
After: BenchmarkWriteTxIn 5000000 389 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkReadTxIn 1000000 2435 ns/op
After: BenchmarkReadTxIn 1000000 1427 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkReadTxOut 500000 4576 ns/op
After: BenchmarkReadTxOut 2000000 871 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkWriteTxOut 500000 4050 ns/op
After: BenchmarkWriteTxOut 10000000 248 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkReadOutPoint 500000 2946 ns/op
After: BenchmarkReadOutPoint 5000000 582 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before: BenchmarkWriteOutPoint 500000 2664 ns/op
After: BenchmarkWriteOutPoint 10000000 151 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkWriteVarStr4 1000000 1114 ns/op
BenchmarkWriteVarStr10 1000000 1352 ns/op
After:
BenchmarkWriteVarStr4 5000000 291 ns/op
BenchmarkWriteVarStr10 10000000 248 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
Before:
BenchmarkReadVarStr4 1000000 1698 ns/op
BenchmarkReadVarStr10 1000000 1812 ns/op
After:
BenchmarkReadVarStr4 2000000 853 ns/op
BenchmarkReadVarStr10 5000000 712 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.
This commit slightly optimizes the readVarInt function in the case of
multiple-byte variable length integers. It also reduces the amount of
memory garbage it generates.
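For context, the sketch below illustrates the variable-length integer
format readVarInt handles (a one-byte discriminant optionally followed
by a 2-, 4-, or 8-byte little-endian value) and how reusing a single
small buffer keeps garbage down; it is an illustration rather than the
actual btcwire code:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // readVarIntSketch decodes a Bitcoin-style variable-length integer
    // using one reusable stack buffer instead of allocating per read.
    func readVarIntSketch(r io.Reader) (uint64, error) {
        var buf [8]byte
        if _, err := io.ReadFull(r, buf[:1]); err != nil {
            return 0, err
        }
        switch buf[0] {
        case 0xff:
            if _, err := io.ReadFull(r, buf[:8]); err != nil {
                return 0, err
            }
            return binary.LittleEndian.Uint64(buf[:8]), nil
        case 0xfe:
            if _, err := io.ReadFull(r, buf[:4]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint32(buf[:4])), nil
        case 0xfd:
            if _, err := io.ReadFull(r, buf[:2]); err != nil {
                return 0, err
            }
            return uint64(binary.LittleEndian.Uint16(buf[:2])), nil
        default:
            return uint64(buf[0]), nil
        }
    }

    func main() {
        // 0xfd discriminant followed by 0x01f4 (500) little endian.
        v, err := readVarIntSketch(bytes.NewReader([]byte{0xfd, 0xf4, 0x01}))
        fmt.Println(v, err)
    }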
Before:
BenchmarkReadVarInt1 5000000 386 ns/op
BenchmarkReadVarInt3 5000000 693 ns/op
BenchmarkReadVarInt5 2000000 793 ns/op
BenchmarkReadVarInt9 5000000 709 ns/op
After:
BenchmarkReadVarInt1 5000000 387 ns/op
BenchmarkReadVarInt3 5000000 471 ns/op
BenchmarkReadVarInt5 5000000 575 ns/op
BenchmarkReadVarInt9 5000000 473 ns/op
This is part of the ongoing effort to optimize serialization as noted in
conformal/btcd#27.