Only two operations are performed with this data structure: adding to
the back and removing from the front. Because middle inserts and
deletions are never needed, a linked list results in overall worse
performance due to an extra allocation for each element's node, worse
cache locality, and the runtime cost of boxing/unboxing each item
during accesses.
On top of the performance gains, a slice is more type safe since it is a
true generic data structure, making it impossible to insert or access an
element with the wrong type.
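For illustration, a minimal sketch of the pattern (the blockMsg type and
its field are hypothetical, not the actual code):

    package main

    import "fmt"

    // blockMsg is a hypothetical element type used only for illustration.
    type blockMsg struct {
        height int64
    }

    func main() {
        var queue []*blockMsg

        // Add to the back.
        queue = append(queue, &blockMsg{height: 1000})
        queue = append(queue, &blockMsg{height: 1001})

        // Remove from the front.
        next := queue[0]
        queue[0] = nil // Clear the reference so it can be garbage collected.
        queue = queue[1:]

        fmt.Println(next.height, len(queue)) // Prints: 1000 1
    }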
As pointed out in #189, according to the Go documentation, a ticker must
be stopped to release its associated resources. This commit adds a defer
call to stop two tickers that were previously not being stopped, and
changes two others that were already being stopped over to use defer for
consistency.
The other ticker in ScheduleShutdown is replaced, and Stop is already
called on it before the replacement, so it has not been modified.
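The pattern used is the standard one from the time package documentation;
roughly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop() // Release the ticker's resources on return.

        for i := 0; i < 3; i++ {
            t := <-ticker.C
            fmt.Println("tick at", t)
        }
    }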
Closes #189.
ok @jrick
This commit uses the new MedianTimeSource API in btcchain to create a
median time source which is stored in the server and is fed time samples
from all remote nodes that are connected. It also modifies all call sites
which now require the time source to pass it in.
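For a sense of the underlying idea, here is a self-contained sketch, not
the actual btcchain implementation: a median time source collects clock
offsets from remote nodes and adjusts the local time by their median.

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // medianTime is a simplified sketch of a median time source.
    type medianTime struct {
        offsets []time.Duration
    }

    // AddTimeSample records the offset between a remote node's clock and ours.
    func (m *medianTime) AddTimeSample(remote time.Time) {
        m.offsets = append(m.offsets, time.Until(remote))
    }

    // AdjustedTime returns the local time shifted by the median recorded offset.
    func (m *medianTime) AdjustedTime() time.Time {
        if len(m.offsets) == 0 {
            return time.Now()
        }
        sorted := append([]time.Duration(nil), m.offsets...)
        sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
        return time.Now().Add(sorted[len(sorted)/2])
    }

    func main() {
        var mt medianTime
        mt.AddTimeSample(time.Now().Add(2 * time.Second))
        mt.AddTimeSample(time.Now().Add(-1 * time.Second))
        mt.AddTimeSample(time.Now().Add(3 * time.Second))
        fmt.Println(mt.AdjustedTime())
    }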
This ensures we backoff when reconnecting to peers for which we don't
understand the replies, just like we do for peers we fail to connect to.
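A rough sketch of the backoff shape, with illustrative values rather than
the actual ones:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const base = 5 * time.Second
        const max = 5 * time.Minute

        // Double the delay after each failed attempt, capped at max.
        for retries := uint(0); retries < 8; retries++ {
            delay := base << retries
            if delay > max {
                delay = max
            }
            fmt.Println("retry", retries, "after", delay)
        }
    }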
Closes #103.
The sync/atomic package requires alignment of variables used atomically on
ARM. This commit moves the connected and disconnect variables in the peer
struct up so they are aligned.
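A sketch of the fix (the field set is illustrative, not the actual peer
struct): per the sync/atomic documentation, 64-bit values accessed
atomically must be 64-bit aligned on 32-bit platforms, and the first word
of an allocated struct can be relied upon to be so aligned.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // peer mirrors the shape of the fix: fields accessed atomically are
    // placed first so that, in an allocated struct, they are 64-bit
    // aligned even on 32-bit platforms such as ARM.
    type peer struct {
        connected  int64 // Used atomically; must stay at the top.
        disconnect int64 // Used atomically; must stay at the top.

        // Remaining fields have no alignment requirement.
        userAgent string
    }

    func main() {
        p := &peer{}
        atomic.AddInt64(&p.connected, 1)
        fmt.Println(atomic.LoadInt64(&p.connected))
    }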
Fixes #157.
This commit implements reject handling as defined by BIP0061 and bumps the
maximum supported protocol version to 70002 accordingly.
As a part of supporting this, a new error type named RuleError has been
introduced which encapsulates an underlying error that could be one of
the existing TxRuleError or btcchain.RuleError types.
This allows a single high level type assertion to be used to determine if
the block or transaction was rejected due to a rule error or due to an
unexpected error. Meanwhile, an appropriate reject error can be created
from the error by pulling the underlying error out and using it.
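A simplified sketch of the shape of this (the error string and helper are
illustrative):

    package main

    import (
        "errors"
        "fmt"
    )

    // RuleError encapsulates an underlying rule violation so callers can
    // detect the class of failure with a single type assertion.
    type RuleError struct {
        Err error // Underlying TxRuleError or btcchain.RuleError.
    }

    func (e RuleError) Error() string { return e.Err.Error() }

    func processBlock() error {
        return RuleError{Err: errors.New("block timestamp is too far in the future")}
    }

    func main() {
        err := processBlock()
        if rErr, ok := err.(RuleError); ok {
            // Rejected due to a rule violation; a reject message can be
            // built from the underlying error.
            fmt.Println("rule error:", rErr.Err)
            return
        }
        fmt.Println("unexpected error:", err)
    }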
Also, a check for minimum protocol version of 209 has been added.
Closes #133.
The done and wait channels used to throttle outgoing data are being used
as semaphores. As mentioned in the previous commit, it's more efficient
to use a 0-byte type and allow compiler optimizations for the specific use
case.
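That is, the channels become chan struct{}; roughly:

    package main

    import "fmt"

    func main() {
        // The empty struct occupies zero bytes, so a chan struct{} used
        // purely as a semaphore carries no data and lets the compiler
        // optimize for this specific use case.
        done := make(chan struct{}, 1)

        go func() {
            // ... send the queued data to the peer ...
            done <- struct{}{} // Signal completion only.
        }()

        <-done // Block until the worker signals.
        fmt.Println("send complete")
    }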
This commit implements full support for filtering based on the filterload,
filteradd, filterclear, and merkleblock messages introduced by BIP0037.
This allows btcd to work seamlessly with SPV wallets such as BitcoinJ.
Original code by @dajohi. Cleanup, bug fixes, and polish by @davecgh.
These changes are a joint effort between myself and @dajohi.
- Separate IP address range/network code into its own file
- Group all of the RFC range declarations together
- Introduce a new unexported function to simplify the range declarations
- Add comments for all exported functions
- Use consistent variable casing in refactored code
- Add initial doc.go package overview
- Bump serialize interval to 10 minutes
- Correct GroupKey to perform as intended
- Make AddLocalAddress return error instead of just a debug message
- Add tests for AddLocalAddress
- Add tests for GroupKey
- Add tests for GetBestLocalAddress
- Use time.Time to improve readability
- Make address manager code golint clean
- Misc cleanup
- Add test coverage reporting
This commit does just enough to move the address manager into its own
package. Since it was not originally written as a package, it will
require a bit of refactoring and cleanup to turn it into a robust
package with a friendly API.
This commit corrects an issue where the data requested by getdata was not
being properly throttled which could lead to higher than desired memory
usage on large requests.
This commit, along with recent commits to btcnet and btcwire, exposes a new
network that is intended to provide a private network useful for
simulation testing. To that end, it has the special property that it has
no DNS seeds and will actively ignore all addr and getaddr messages. It
will also not try to connect to any nodes other than those specified via
--connect. This allows the network to remain private to the specific
nodes involved in the testing and not simply become another public
testnet.
The network difficulty is also set extremely low like the regression test
network so blocks can be created extremely quickly without requiring a lot
of hashing power.
In practice the races caused by not protecting these quite simply didn't
matter; they couldn't actually cause any damage whatsoever. However, I
am sick of hearing about these essentially false positives whenever
someone runs the race detector (yes, I know the race detector has no
false positives, but this was effectively harmless).
verified to shut the detector up by dhill.
On unknown inventory types, handleGetDataMsg would loop forever.
After fixing that, if a getdata request only had unknown inventory
types, it would block forever.
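A sketch of the shape of the fix (the constants and flow are illustrative,
not the actual handleGetDataMsg code):

    package main

    import "fmt"

    // Illustrative inventory type values, not the btcwire constants.
    const (
        invTypeBlock = iota
        invTypeTx
        invTypeUnknown
    )

    func main() {
        invList := []int{invTypeUnknown, invTypeUnknown}
        doneChan := make(chan struct{}, 1)
        numAdded := 0

        for _, ivType := range invList {
            switch ivType {
            case invTypeBlock, invTypeTx:
                // ... queue the requested data for sending ...
                numAdded++
            default:
                continue // Skip unknown types rather than spinning forever.
            }
        }

        if numAdded == 0 {
            // Nothing was queued, so signal completion directly to avoid
            // blocking the caller forever.
            doneChan <- struct{}{}
        }
        <-doneChan
        fmt.Println("getdata handled")
    }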
ok @davecgh
This commit modifies peers to use a max protocol version that is specified
as a constant in the peer code as opposed to the btcwire.ProtocolVersion
constant.
This allows btcwire to be updated to support new protocol versions without
causing peers to claim they support a protocol version which they actually
don't.
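Conceptually (the version values below are illustrative):

    package main

    import "fmt"

    // maxProtocolVersion pins the highest version this peer code is known
    // to handle, independent of the wire library's own maximum.
    const maxProtocolVersion uint32 = 70002

    // Pretend btcwire was updated ahead of the peer code.
    const wireProtocolVersion uint32 = 70011

    // negotiatedVersion is what the peer should advertise.
    func negotiatedVersion() uint32 {
        if wireProtocolVersion < maxProtocolVersion {
            return wireProtocolVersion
        }
        return maxProtocolVersion
    }

    func main() {
        fmt.Println(negotiatedVersion()) // Prints: 70002
    }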
This commit changes the server byte counters over to use a mutex instead
of the atomic package. The atomic.AddUint64 function requires the struct
fields to be 64-bit aligned on 32-bit platforms. The byte counts are
fields in the server struct and are not 64-bit aligned. While it would be
possible to arrange the fields to be aligned through various means, it
would make the code too fragile for my tastes. I prefer code that doesn't
depend on platform specific alignment.
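A sketch of the approach (simplified from the actual server struct):

    package main

    import (
        "fmt"
        "sync"
    )

    // server keeps its byte counters behind a mutex, so no field
    // alignment requirements apply.
    type server struct {
        bytesMutex    sync.Mutex
        bytesReceived uint64
        bytesSent     uint64
    }

    // AddBytesReceived safely increments the received-byte counter.
    func (s *server) AddBytesReceived(n uint64) {
        s.bytesMutex.Lock()
        s.bytesReceived += n
        s.bytesMutex.Unlock()
    }

    // NetTotals returns a consistent snapshot of both counters.
    func (s *server) NetTotals() (recv, sent uint64) {
        s.bytesMutex.Lock()
        defer s.bytesMutex.Unlock()
        return s.bytesReceived, s.bytesSent
    }

    func main() {
        s := &server{}
        s.AddBytesReceived(1024)
        r, w := s.NetTotals()
        fmt.Println(r, w) // Prints: 1024 0
    }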
Fixes#96.
Previously, the getnettotals RPC was just looping through all of the currently
connected peers to sum the byte counts and returning that. However, the
intention of the getnettotals RPC is to get all bytes since the server was
started, so this logic was not correct.
This commit modifies the code to keep an atomic counter on the server for
bytes read/written and has each peer update the server counters as well as
the per-peer counters.
This commit adds byte counters to each peer using the new btcwire
ReadMessageN and WriteMessageN functions to obtain the number of bytes
read and written, respectively. It also returns those byte counters via
the PeerInfo struct which is used to populate the RPC getpeerinfo reply.
Closes#83.
This commit improves how the headers-first mode works in several ways.
The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and had a few less than ideal
characteristics. This commit improves the headers-first code to resolve
the issues discussed next.
- The previous code only used headers-first mode when starting out from
block height 0 rather than allowing it to work starting at any height
before the final checkpoint. This means if you stopped the chain
download at any point before the final checkpoint and restarted, it
would not resume and you therefore would not have the benefit of the
faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
downloaded and only the final checkpoint was verified. This resulted in
the following issues:
- As the block chain grew, increasingly larger numbers of headers were
downloaded and kept in memory
- If the node serving up the headers was serving an invalid
chain, it wouldn't be detected until downloading a large number of
headers
- When an invalid checkpoint was detected, no action was taken to
recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
merely keeping track of the hashes and heights is enough to prove they
properly link together and checkpoints match
- There was no logging when headers were being downloaded so it could
appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
meant it was possible to inadvertently download the same headers twice
only to throw them away.
This commit resolves these issues with the following changes:
- The current height is now examined at startup and prior to each sync peer
selection to allow it to resume headers-first mode starting from the
known height to the next checkpoint
- All checkpoints are now verified and the headers are only downloaded
from the current known block height up to the next checkpoint. This has
several desirable properties:
- The amount of memory required is bounded by the maximum distance
between two checkpoints rather than the entire length of the chain
- A node serving up an invalid chain is detected very quickly and with
little work
- When an invalid checkpoint is detected, the headers are simply
discarded and the peer is disconnected for serving an invalid chain
- When the sync peer disconnects, all current headers are thrown away
and, due to the new aforementioned resume code, when a new sync peer
is selected, headers-first mode will continue from the last known good
block
- In addition to reduced memory usage from only keeping information about
headers between two checkpoints, the only information now kept in memory
about the headers is the hash and height rather than the entire header
(see the sketch after this list)
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
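A sketch of the reduced per-header bookkeeping mentioned above (the hash
is simplified to a string for illustration):

    package main

    import (
        "container/list"
        "fmt"
    )

    // headerNode records only the height and hash of a downloaded header,
    // which is enough to verify the headers link together and match the
    // next checkpoint.
    type headerNode struct {
        height int64
        hash   string
    }

    func main() {
        headerList := list.New()

        // Headers are appended as they arrive from the sync peer...
        headerList.PushBack(&headerNode{height: 100001, hash: "000000...a1"})
        headerList.PushBack(&headerNode{height: 100002, hash: "000000...b2"})

        // ...and walked front to back when checking against a checkpoint.
        for e := headerList.Front(); e != nil; e = e.Next() {
            node := e.Value.(*headerNode)
            fmt.Println(node.height, node.hash)
        }
    }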