This commit updates btcd to work with the new btcchain APIs which now
accept btcutil.Tx instead of raw btcwire.MsgTx. It also modifies the
transaction memory pool to store btcutil.Tx.
This is part of the ongoing transaction hash optimization effort noted in
conformal/btcd#25.
If we don't hear from a peer for 5 minutes, we disconnect them. To keep
traffic flowing we send a ping every 2 minutes if we have not sent any
other message that should get a reply.
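For illustration, a minimal sketch of the receive-side timing, assuming
hypothetical readMessage, handleMessage, and Disconnect helpers on the peer
(the send side uses a similar timer that queues a ping when nothing needing
a reply has been sent for pingInterval); this is not the actual btcd wiring:

    const (
        pingInterval = 2 * time.Minute // ping if nothing needing a reply was sent
        idleTimeout  = 5 * time.Minute // disconnect if nothing is heard from the peer
    )

    // inHandler reads messages from the remote peer and disconnects it
    // if nothing arrives within idleTimeout.
    func (p *peer) inHandler() {
        idleTimer := time.AfterFunc(idleTimeout, func() { p.Disconnect() })
        defer idleTimer.Stop()

        for {
            msg, err := p.readMessage()
            if err != nil {
                break
            }
            idleTimer.Reset(idleTimeout)
            p.handleMessage(msg)
        }
    }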
If we haven't completed the handshake with a peer, don't send messages that
are not part of the handshake. Additionally, don't queue up invs for sending;
they'll find out soon enough when they ask us what we know.
This commit adds code to properly respond to getdata requests for
transactions by fetching them from the transaction pool. Previously, we
advertised newly available transactions, but the code to respond with the
actual transaction was not written yet.
Also, fix a couple of comments and make the pushTxMsg and pushBlockMsg
functions consistent.
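Roughly, the transaction side of getdata now looks like the sketch below,
where FetchTransaction and QueueMessage stand in for the actual mempool and
peer methods:

    func (s *server) pushTxMsg(p *peer, sha *btcwire.ShaHash) error {
        // Transactions are only advertised while they are in the pool,
        // so a miss simply means we no longer have what the peer asked
        // for.
        tx, err := s.txMemPool.FetchTransaction(sha)
        if err != nil {
            return err
        }
        p.QueueMessage(tx.MsgTx())
        return nil
    }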
This commit is a first pass at improving the logging. It changes a number
of things to improve the readability of the output. The biggest addition
is message summaries for each message type when using the debug logging
level.
There is still more to do here, such as allowing the level of each
subsystem to be independently specified, syslog support, and allowing the
logging level to be changed at run-time.
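As an illustration of the summaries, something along these lines is logged
at the debug level for each message; the fields shown per type are examples
rather than the exact output:

    func messageSummary(msg btcwire.Message) string {
        switch m := msg.(type) {
        case *btcwire.MsgVersion:
            return fmt.Sprintf("agent %s, pver %d, block %d",
                m.UserAgent, m.ProtocolVersion, m.LastBlock)
        case *btcwire.MsgInv:
            return fmt.Sprintf("%d inventory items", len(m.InvList))
        case *btcwire.MsgTx:
            return fmt.Sprintf("%d inputs, %d outputs", len(m.TxIn), len(m.TxOut))
        }
        // No summary for other message types.
        return ""
    }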
The block manager handles inventory messages to know which inventory should
be requested based on what is already known and what is already in flight.
So, this commit adds logic to ask the transaction memory pool if the
transaction is already known before requesting it and tracks pending
requests into an in-flight transaction map owned by the block manager.
It also moves the transaction processing into the block manager so the
in-flight map can be properly cleaned.
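In sketch form, the inv handling becomes something like the following;
requestedTxns, HaveTransaction, and QueueMessage are illustrative names
rather than the exact btcd API:

    func (b *blockManager) handleTxInv(p *peer, iv *btcwire.InvVect) {
        // Skip transactions that are already in the memory pool or that
        // are already in flight from another peer.
        if b.server.txMemPool.HaveTransaction(&iv.Hash) {
            return
        }
        if _, exists := b.requestedTxns[iv.Hash]; exists {
            return
        }

        // Track the request so duplicate invs from other peers are
        // ignored and so the entry can be removed once the transaction
        // has been processed.
        b.requestedTxns[iv.Hash] = true

        gdmsg := btcwire.NewMsgGetData()
        gdmsg.AddInvVect(iv)
        p.QueueMessage(gdmsg)
    }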
Rather than showing all errors from ProcessTransaction as a failure, check
if the error is a TxRuleError, meaning the transaction was rejected as
opposed to something actually going wrong, and log it accordingly.
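For example (a simplified handler; log stands in for the subsystem logger):

    func (s *server) handleTx(tx *btcutil.Tx) {
        err := s.txMemPool.ProcessTransaction(tx)
        if err == nil {
            return
        }
        if _, ok := err.(TxRuleError); ok {
            // Rejected by the mempool rules -- expected, so log quietly.
            log.Debugf("Rejected transaction %v: %v", tx.Sha(), err)
            return
        }
        // Anything else means something actually went wrong.
        log.Errorf("Failed to process transaction %v: %v", tx.Sha(), err)
    }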
This commit is a rather large one which implements transaction pool and
relay according to the protocol rules of the reference implementation.
It makes use of btcchain to ensure the transactions are valid for the
block chain and includes several stricter checks which determine if they
are "standard" or not before admitting them into the pool and relaying
them.
There are still a few TODOs around the stricter rules which determine which
transactions we are willing to mine, but the core checks which are imperative
to operate as a good citizen on the bitcoin network (everything except all of
the "standard" checks, really) are in place.
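A very rough outline of the acceptance flow; the helper names are
illustrative and the real checks are considerably more extensive:

    func (mp *txMemPool) maybeAcceptTransaction(tx *btcutil.Tx) error {
        // Consensus-level validity checks via btcchain.
        if err := btcchain.CheckTransactionSanity(tx); err != nil {
            return err
        }

        // Stricter "standard" policy checks (version, size, script
        // forms, dust outputs, ...) before admitting the tx to the pool
        // and relaying it.
        if err := checkTransactionStandard(tx); err != nil {
            return err
        }

        mp.addTransaction(tx)
        return nil
    }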
Rather than having all of the various places that print a peer figure out
the direction and form the string, centralize it by implementing the
Stringer interface on the peer.
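The gist of it, with illustrative field names:

    // String satisfies the fmt.Stringer interface and is now the single
    // place that formats a peer's address and connection direction.
    func (p *peer) String() string {
        direction := "outbound"
        if p.inbound {
            direction = "inbound"
        }
        return fmt.Sprintf("%s (%s)", p.addr, direction)
    }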
Only log errors for most cases if the peer is persistent (and thus requested).
Only log by default after version exchange, and after losing a peer that had
completed version exchange. Make most other messages debug.
We would occasionally hang for a while during server shutdown; this is due
to an outbound peer waiting on a connection or a sleep. However, we don't
actually need to wait for the peers to finish at all, so just let them
finish.
Secondly, make peer.disconnect and server.shutdown atomic variables so
that checking them from multiple goroutines isn't racy, and clean up
their usage.
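A minimal sketch of the flag handling with sync/atomic; the surrounding
cleanup is omitted and the field names are illustrative:

    type peer struct {
        disconnect int32 // non-zero once the peer has been told to disconnect
        // ...
    }

    func (p *peer) Disconnect() {
        atomic.StoreInt32(&p.disconnect, 1)
        // close the connection and signal the handler goroutines here
    }

    // Disconnected is safe to call from any goroutine without locking.
    func (p *peer) Disconnected() bool {
        return atomic.LoadInt32(&p.disconnect) != 0
    }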
Use this information so that we do not request a block from every peer that
sent us an inv for it, which makes multi-peer operation much quieter and
rather more bandwidth efficient.
In order to remove a number of possible races, we combine the block handler
and sync handler and use one channel for all messages. This ensures that
all messages from a single peer will be received in order. It also
removes the need for a lot of locking between the peer removal code and
the block/inv handlers.
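In sketch form, everything now flows through one channel handled by a single
goroutine; the message types shown are illustrative:

    type blockMsg struct {
        block *btcutil.Block
        peer  *peer
    }

    type invMsg struct {
        inv  *btcwire.MsgInv
        peer *peer
    }

    type donePeerMsg struct {
        peer *peer
    }

    // blockHandler is the only goroutine that touches the chain state,
    // so messages from any given peer are handled strictly in the order
    // they were received.
    func (b *blockManager) blockHandler() {
        for {
            select {
            case m := <-b.msgChan:
                switch msg := m.(type) {
                case *blockMsg:
                    b.handleBlockMsg(msg)
                case *invMsg:
                    b.handleInvMsg(msg)
                case *donePeerMsg:
                    b.handleDonePeerMsg(msg.peer)
                }
            case <-b.quit:
                return
            }
        }
    }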
Implement the bucketing by source group and group using essentially the
same algorithm as the address manager in bitcoind.
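Loosely (this is not the exact bitcoind calculation), a new address lands in
a bucket derived from a hash of its own network group together with the
group of the source that advertised it, so no single source can dominate
every bucket:

    const newBucketCount = 256

    func newBucketIndex(addrGroup, srcGroup string) int {
        // Mixing in the source group limits how many buckets any one
        // source can influence.
        h := sha256.Sum256([]byte(srcGroup + "|" + addrGroup))
        return int(binary.LittleEndian.Uint32(h[:4]) % newBucketCount)
    }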
Fix up the saving of peers.json to use a json format that keeps bucket
metadata.
If we fail to load some of the data, we assume that we have
incomplete information, so we nuke the existing file and reinitialise so
we have a clean slate.
This removes a horrible case of reach-around from peer into the guts of
the block manager to frob the chain. Soon, when we try to deduplicate the
fetching of blocks from multiple peers this will need decisions made in
a central point.
Discussed at length with davec.
This commit adds detection and filtering for back-to-back duplicate
getblocks requests. This is needed because the trigger for requesting
more blocks is receiving an orphan. When the peer is further behind than
the number of blocks advertised via a single inventory message, the same
orphan block will be sent multiple times. When the peer receives the
final inventory message, it too contains the orphan that was previously
sent. This leads to a duplicate getblocks request that must be filtered
to prevent requesting the final series of blocks again.
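The filter boils down to remembering the begin and stop hashes of the
previous getblocks request and skipping an identical back-to-back one; the
field names here are illustrative:

    func (p *peer) PushGetBlocksMsg(locator btcchain.BlockLocator, stopHash *btcwire.ShaHash) error {
        var beginHash *btcwire.ShaHash
        if len(locator) > 0 {
            beginHash = locator[0]
        }

        // Filter duplicate getblocks requests.
        if beginHash != nil && p.prevGetBlocksBegin != nil &&
            p.prevGetBlocksStop != nil &&
            beginHash.IsEqual(p.prevGetBlocksBegin) &&
            stopHash.IsEqual(p.prevGetBlocksStop) {
            return nil
        }

        msg := btcwire.NewMsgGetBlocks(stopHash)
        for _, hash := range locator {
            if err := msg.AddBlockLocatorHash(hash); err != nil {
                return err
            }
        }
        p.QueueMessage(msg)

        // Remember what we just asked for so the next identical request
        // can be suppressed.
        p.prevGetBlocksBegin = beginHash
        p.prevGetBlocksStop = stopHash
        return nil
    }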
- Remove leftover debug log prints
- Increment waitgroup outside of goroutine
- Various comment and log message consistency
- Combine peer setup and newPeer -> newInboundPeer
- Save and load peers.json to/from cfg.DataDir
- Only claim addrmgr needs more addresses when it has less than 1000
- Add warning if unknown peer on orphan block.
Use it to add multiple peer support. We try and keep 8 outbound peers
active at all times.
This address manager is not as complete as the one in bitcoind yet, but
additional functionality is being worked on.
We currently handle (in a similar manner to bitcoind):
- biasing between new and already tried addresses based on number of connected
peers.
- rejection of non-default ports until desperate
- address selection probabilities based on last successful connection and number
of failures (see the sketch below).
- routability checks based on known unroutable subnets.
- only connecting to each network `group' once at any one time.
We currently lack support for:
- tor ``addresses'' (an .onion address encoded in 64 bytes of ip address)
- full state save and restore (we just save a json with the list of known
addresses in it)
- multiple buckets for new and tried addresses selected by a hash of address and
source. The current algorithm functions the same as bitcoind would with only
one bucket for new and tried (making the address cache rather smaller than it
otherwise would be).
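For illustration, the selection weighting mentioned in the list above can be
thought of along these lines; this is a loose sketch rather than the exact
bitcoind formula:

    // chance returns a relative selection weight for a known address.
    func chance(lastAttempt time.Time, failedAttempts int) float64 {
        c := 1.0

        // Strongly deprioritize addresses we attempted very recently.
        if time.Since(lastAttempt) < 10*time.Minute {
            c *= 0.01
        }

        // Each failure reduces the weight, down to a small floor so no
        // address is ever completely excluded.
        for i := 0; i < failedAttempts && i < 8; i++ {
            c *= 0.66
        }
        if c < 0.0002 {
            c = 0.0002
        }
        return c
    }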
This commit adds support for relaying blocks between peers. It keeps
track of inventory that has either already been advertised to remote peers
or advertised by remote peers using a size-limited most recently used
cache. This helps avoid relaying inventory the peer already knows as
much as possible while not allowing rogue peers to eat up arbitrary
amounts of memory with bogus inventory.
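A minimal sketch of such a size-limited most-recently-used map; btcd's
actual cache differs in detail:

    type mruInventoryMap struct {
        invMap  map[btcwire.InvVect]*list.Element // fast existence checks
        invList *list.List                        // most recent at the front
        limit   int
    }

    func newMruInventoryMap(limit int) *mruInventoryMap {
        return &mruInventoryMap{
            invMap:  make(map[btcwire.InvVect]*list.Element),
            invList: list.New(),
            limit:   limit,
        }
    }

    // Exists returns whether the passed inventory is already known.
    func (m *mruInventoryMap) Exists(iv *btcwire.InvVect) bool {
        _, ok := m.invMap[*iv]
        return ok
    }

    // Add records the inventory, evicting the oldest entry when the
    // limit is reached so rogue peers cannot grow the map without bound.
    func (m *mruInventoryMap) Add(iv *btcwire.InvVect) {
        if m.Exists(iv) {
            return
        }
        if m.invList.Len() >= m.limit {
            oldest := m.invList.Back()
            delete(m.invMap, oldest.Value.(btcwire.InvVect))
            m.invList.Remove(oldest)
        }
        m.invMap[*iv] = m.invList.PushFront(*iv)
    }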
This commit reworks the getblocks handling a bit to clean it up and match
the reference implementation handling. In particular, it adds monitoring
for when peers request the final block advertised from a previous
getblocks message and automatically advertises the latest known block
inventory to trigger the peer to send another getblocks message.
When no blocks in the block locator are found, start with the block after
the genesis block. This means the client will start over with the genesis
block if unknown block locators are provided. This mirrors the behavior
in the reference implementation.
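Roughly, picking where to start serving from looks like this;
FetchBlockHeightBySha stands in for the actual database lookup:

    func (s *server) locateStartHeight(locator btcchain.BlockLocator) int64 {
        // Default to the block after the genesis block so that peers
        // providing unknown block locators start over from the
        // beginning.
        startHeight := int64(1)
        for _, hash := range locator {
            if height, err := s.db.FetchBlockHeightBySha(hash); err == nil {
                startHeight = height + 1
                break
            }
        }
        return startHeight
    }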
This commit reworks the getheaders handling a bit to clean it up and match
the reference implementation handling. In particular, in addition to the
normal handling where headers starting after the block locator up to the
stop hash are served, when no locator hashes are provided, the stop hash
acts as a way to specifically request that header. Next, an empty headers
message is sent when no hashes provided by the block locator can be
found. Finally, there was a bug that was limiting the number of headers
that could be requested at once to 500 instead of the expected 2000.
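An outline of the resulting behaviour; fetchHeader and headersAfter are
illustrative helpers rather than the actual btcd functions:

    const maxHeadersPerMsg = 2000 // previously capped at 500 by mistake

    func (s *server) handleGetHeaders(p *peer, msg *btcwire.MsgGetHeaders) {
        headersMsg := btcwire.NewMsgHeaders()

        // With no locator hashes, the stop hash acts as a request for
        // that specific header only.
        if len(msg.BlockLocatorHashes) == 0 {
            if header, err := s.fetchHeader(&msg.HashStop); err == nil {
                headersMsg.AddBlockHeader(header)
            }
            p.QueueMessage(headersMsg)
            return
        }

        // Otherwise serve up to maxHeadersPerMsg headers starting after
        // the block locator up to the stop hash.  headersAfter returns
        // nothing when no locator hash is known, which results in an
        // empty headers message being sent.
        for _, header := range s.headersAfter(msg.BlockLocatorHashes, &msg.HashStop, maxHeadersPerMsg) {
            headersMsg.AddBlockHeader(header)
        }
        p.QueueMessage(headersMsg)
    }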
Rather than only setting the services field for inbound peers, set it for
all peers. This field refers to the remote peer's services regardless of
inbound or outbound.