This commit switches the handleGetHeadersMsg function to make use of the
new FetchBlockHeightBySha and FetchBlockHeaderBySha functions in btcdb.
Also, while here, nuke the header copy which is no longer required due to
the recent btcwire changes.
This commit reduces the initial idle timeout before version negotiation
has happened on a new peer to 30 seconds. Previously it could take 5
minutes due to the general idle timeout.
This commit changes a couple of sections which deal with large lists of
inventory vectors to use the new size hint functions recently added to
btcwire. This allows a bit more efficiency since the size of the list is
known up front and we can therefore avoid dynamically growing the backing
array several times. This also helps avoid a Go bug that leaks memory on
appends and GC churn.
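For reference, a minimal sketch of the kind of preallocation a size hint
enables (the type and names here are illustrative, not the actual btcwire
API):

```go
// Sketch: preallocate the slice when the element count is known up front,
// instead of growing the backing array with repeated appends.
package main

import "fmt"

type invVect struct {
	typ  uint32
	hash [32]byte
}

func buildInventory(count int) []*invVect {
	// With the size hint, the backing array is allocated exactly once.
	inv := make([]*invVect, 0, count)
	for i := 0; i < count; i++ {
		inv = append(inv, &invVect{typ: 1})
	}
	return inv
}

func main() {
	fmt.Println(len(buildInventory(500)))
}
```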
This implements --onion (and --onionuser/--onionpass), which enable a
different proxy to be used to connect to .onion addresses. If no main
proxy is supplied, then no proxy will be used for non-onion addresses.
Additionally, we add --noonion, which blocks connection attempts to .onion
addresses entirely (and avoids using tor for proxy dns lookups).
The --tor option has been superseded and thus removed.
Closes #47
This commit does some housekeeping on peer.go to make the code more
consistent, correct a few comments, and add new comments to explain the
peer data flow. A couple of examples are variables not using the standard
Go style (camelCase) and comments that don't match the style of other
comments.
Instead of one thread that queues and writes, we move to a two queue
model. The queueHandler muxes all the sources of outgoing packets and
drips them to the actual sender. This is done so that a large send
doesn't allow the channels to fill up and cause blockmanager and server
to block, which delays other peers.
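A minimal sketch of the queue/sender split, with illustrative names and
channels standing in for the real peer plumbing:

```go
// Sketch: one goroutine muxes producers into an internal queue and feeds a
// separate sender goroutine, so a slow send does not block the producers.
package main

import "fmt"

type outMsg struct{ payload string }

func queueHandler(fromServer, fromBlockMgr <-chan outMsg, sendQueue chan<- outMsg, done <-chan struct{}) {
	pending := make([]outMsg, 0, 64)
	for {
		// Only offer the head of the queue to the sender when there is one;
		// a nil channel disables that select case.
		var sendCh chan<- outMsg
		var next outMsg
		if len(pending) > 0 {
			sendCh = sendQueue
			next = pending[0]
		}
		select {
		case m := <-fromServer:
			pending = append(pending, m)
		case m := <-fromBlockMgr:
			pending = append(pending, m)
		case sendCh <- next:
			pending = pending[1:]
		case <-done:
			return
		}
	}
}

func main() {
	fromServer := make(chan outMsg)
	fromBlockMgr := make(chan outMsg)
	sendQueue := make(chan outMsg)
	done := make(chan struct{})
	go queueHandler(fromServer, fromBlockMgr, sendQueue, done)
	go func() { fromServer <- outMsg{"ping"}; fromBlockMgr <- outMsg{"inv"} }()
	fmt.Println(<-sendQueue, <-sendQueue)
	close(done)
}
```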
Most messages we handle as is. However, for getdata we do some manual
limiting and pipelining: we queue up three and then load the next
into memory, not sending it until the other packets have been sent. We
may want to change this later to queue the packet *then* wait so that we
don't completely drain the pipe.
A few misc tweaks to avoid deadlocking by ensuring that all channels will
always drain. Mostly this relates to ensuring that we know no more data
will be coming before we drain the channel, and not queueing after we
are marked to disconnect.
Discussed heavily with drahn@ and davec@.
It is not necessary to do all of the transaction validation on
blocks if they have been confirmed to be in the block chain leading
up to the final checkpoint in a given block chain.
This algorithm fetches block headers from the peer, then once it has
established the full blockchain connection, it requests blocks.
Any blocks before the final checkpoint pass true for fastAdd to the
btcchain operation, which causes it to do less validation on the block.
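A rough sketch of the fastAdd decision, with an illustrative checkpoint
height and placeholder types rather than the real btcchain call:

```go
// Sketch: choose reduced validation (fastAdd) for blocks known to be part of
// the chain leading up to the final checkpoint.
package main

import "fmt"

type block struct {
	height int64
	hash   string
}

const finalCheckpointHeight = 216116 // illustrative value

func processBlock(b block) {
	// Blocks up to and including the final checkpoint are already covered by
	// the checkpoint itself, so the expensive checks can be skipped.
	fastAdd := b.height <= finalCheckpointHeight
	fmt.Printf("processing %s at %d (fastAdd=%v)\n", b.hash, b.height, fastAdd)
}

func main() {
	processBlock(block{height: 100000, hash: "0000a1"})
	processBlock(block{height: 300000, hash: "0000b2"})
}
```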
The 'you' address is the one we already set up for the user, so it is
either what we connected to (this will work with tor, etc.), or otherwise
the ip the user connected to us from. We must, however, check to see if it
is the address of the proxy and strip it if so.
For the 'me' address, we use the same address selection for local
addresses as always.
This should mean that we pass our tor address out in the version message
and thus the peers should add us to their addressmanager.
This implements only the bare bones of external ip address selection,
using very similar algorithms and selection methods to bitcoind. Every
address we bind to is recorded (or, if we bind to the wildcard, every
listening address), and one matching the address type of the peer is
selected.
Support for fetching addresses via upnp, external services, or via the
command line are not yet implemented.
Closes #35
Perform the requisite processing on .onion addresses to turn them into the
tor reserved ipv6 region (the same as bitcoind and onioncat). Furthermore,
when printing an ip address, reverse the conversion so we print it
nicely. Base32 as standard is uppercase, but tor and bitcoind seem to
use lowercase, so we force .onion addrs to uppercase first (and back to
lowercase on the reverse).
As a side effect we should now handle dns names on the command line (via tor
if required) and add them to the address manager as necessary.
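A small sketch of the conversion, assuming the usual OnionCat convention of
the fd87:d87e:eb43::/48 prefix followed by the base32-decoded v2 .onion name
(helper names are illustrative):

```go
// Sketch: map a v2 .onion name into the OnionCat IPv6 range (fd87:d87e:eb43::/48),
// the same convention bitcoind uses, and reverse it for display.
package main

import (
	"encoding/base32"
	"fmt"
	"net"
	"strings"
)

// onionCatPrefix is the reserved fd87:d87e:eb43::/48 prefix.
var onionCatPrefix = []byte{0xfd, 0x87, 0xd8, 0x7e, 0xeb, 0x43}

func onionToIP(host string) (net.IP, error) {
	name := strings.TrimSuffix(host, ".onion")
	// tor and bitcoind use lowercase, but standard base32 is uppercase.
	data, err := base32.StdEncoding.DecodeString(strings.ToUpper(name))
	if err != nil || len(data) != 10 {
		return nil, fmt.Errorf("invalid .onion address %q", host)
	}
	return net.IP(append(append([]byte{}, onionCatPrefix...), data...)), nil
}

func ipToOnion(ip net.IP) string {
	// Reverse the conversion and print it nicely in lowercase.
	return strings.ToLower(base32.StdEncoding.EncodeToString(ip[6:16])) + ".onion"
}

func main() {
	ip, err := onionToIP("expyuzz4wqqyqhjn.onion")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip, "->", ipToOnion(ip))
}
```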
The code to send an address messages in batches was previously clearing
all addresses from the existing message after queueing it to be sent.
Since the message is a pointer, this means it was removing the addresses
from the same message that might not have been sent yet (by another
goroutine), which led to a race.
This commit modifies the code to create a new address message for each
batch as intended.
Fixes #58.
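A minimal sketch of the fix, using placeholder types rather than the btcwire
message: each batch gets its own message value, so nothing mutates a pointer
that may still be queued for sending:

```go
// Sketch: build a fresh message per batch instead of reusing (and clearing) a
// shared pointer that another goroutine may still be sending.
package main

import "fmt"

type netAddress struct{ host string }

type msgAddr struct{ addrList []*netAddress }

func pushAddrMsg(addresses []*netAddress, queue chan<- *msgAddr) {
	const maxAddrPerMsg = 1000
	for len(addresses) > 0 {
		n := len(addresses)
		if n > maxAddrPerMsg {
			n = maxAddrPerMsg
		}
		// A new message per batch: the queued pointer is never mutated again.
		queue <- &msgAddr{addrList: addresses[:n:n]}
		addresses = addresses[n:]
	}
}

func main() {
	queue := make(chan *msgAddr, 4)
	addrs := make([]*netAddress, 2500)
	for i := range addrs {
		addrs[i] = &netAddress{host: fmt.Sprintf("10.0.0.%d", i%255)}
	}
	pushAddrMsg(addrs, queue)
	close(queue)
	for m := range queue {
		fmt.Println("batch of", len(m.addrList))
	}
}
```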
Also, make every subsystem within btcd use its own logger instance so each
subsystem can have its own level specified independent of the others.
This is work towards #48.
Outbound we already have the exact same thing set up, and this should
quieten the race detector. Please note that this does *not* cause
problems with the service flags being wrong, since by this point we have
already done everything that would use the service flags from p.na in
addrmanager, and now p.Services is correct.
- Lock the mempool when removing transactions during a notification as
intended
- When generating the inventory vectors to serve on a mempool request,
recheck the memory pool for each hash since it's possible another thread
could have removed an entry after the initial query for available
hashes
- When a block is connected, remove any transactions which are now double
  spends as a result of the newly connected transactions (a sketch of this
  follows below)
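A rough sketch of the double spend removal on block connect, with
placeholder types standing in for the real mempool structures:

```go
// Sketch: when a block connects, evict any pool transaction that spends an
// outpoint now spent by a transaction in that block.
package main

import (
	"fmt"
	"sync"
)

type outPoint struct {
	txid  string
	index uint32
}

type tx struct {
	id     string
	spends []outPoint
}

type mempool struct {
	mtx      sync.Mutex
	pool     map[string]*tx
	outpoint map[outPoint]*tx // outpoint -> pool tx that spends it
}

func (mp *mempool) removeDoubleSpends(blockTx *tx) {
	mp.mtx.Lock()
	defer mp.mtx.Unlock()
	for _, op := range blockTx.spends {
		if conflict, ok := mp.outpoint[op]; ok && conflict.id != blockTx.id {
			delete(mp.pool, conflict.id)
			for _, cop := range conflict.spends {
				delete(mp.outpoint, cop)
			}
			fmt.Println("removed double spend", conflict.id)
		}
	}
}

func main() {
	mp := &mempool{pool: map[string]*tx{}, outpoint: map[outPoint]*tx{}}
	poolTx := &tx{id: "aaa", spends: []outPoint{{"prev", 0}}}
	mp.pool[poolTx.id] = poolTx
	mp.outpoint[poolTx.spends[0]] = poolTx
	mp.removeDoubleSpends(&tx{id: "bbb", spends: []outPoint{{"prev", 0}}})
	fmt.Println("pool size:", len(mp.pool))
}
```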
We have a channel for queries and commands in server, where we pass in
args and the channel to reply on, and let rpcserver use these interfaces
to provide the requisite information.
So far not all of the information is 100% correct: the syncpeer
information needs to be fetched from blockmanager, the subversion isn't
recorded, and the number of bytes sent and received needs to be obtained
from btcwire. The rest should be correct.
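A minimal sketch of the query pattern, with an illustrative request type;
the real server handles more query kinds than shown here:

```go
// Sketch: the server owns its state and answers queries over a channel; the
// rpcserver sends a request carrying its own reply channel.
package main

import "fmt"

type getConnCountMsg struct {
	reply chan int32
}

type server struct {
	query chan interface{}
}

func (s *server) queryHandler(connCount int32) {
	for q := range s.query {
		switch msg := q.(type) {
		case getConnCountMsg:
			msg.reply <- connCount
		}
	}
}

// ConnectedCount is what an rpcserver-style caller would use.
func (s *server) ConnectedCount() int32 {
	reply := make(chan int32)
	s.query <- getConnCountMsg{reply: reply}
	return <-reply
}

func main() {
	s := &server{query: make(chan interface{})}
	go s.queryHandler(8)
	fmt.Println("connected peers:", s.ConnectedCount())
}
```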
This commit updates btcd to work with the new btcchain APIs which now
accept btcutil.Tx instead of raw btcwire.MsgTx. It also modifies the
transaction memory pool to store btcutil.Tx.
This is part of the ongoing transaction hash optimization effort noted in
conformal/btcd#25.
If we don't hear from a peer for 5 minutes, we disconnect them. To keep
traffic flowing we send a ping every 2 minutes if we have not sent any
other message that should get a reply.
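A small sketch of the ping side of this, assuming an out handler that sees
every message we send (names and message kinds are illustrative); a similar
idle timer on the receive side triggers the 5 minute disconnect:

```go
// Sketch: reset a ping timer whenever we send something that expects a reply;
// if the timer fires, send a ping to keep traffic flowing.
package main

import (
	"fmt"
	"time"
)

const pingInterval = 2 * time.Minute

func outHandler(sendQueue <-chan string, sendPing func()) {
	pingTimer := time.AfterFunc(pingInterval, sendPing)
	defer pingTimer.Stop()
	for msg := range sendQueue {
		switch msg {
		case "version", "getblocks", "getheaders", "getdata":
			// These expect a reply, so push the ping back.
			pingTimer.Reset(pingInterval)
		}
		fmt.Println("sent", msg)
	}
}

func main() {
	queue := make(chan string, 2)
	queue <- "version"
	queue <- "inv"
	close(queue)
	outHandler(queue, func() { fmt.Println("sending ping") })
}
```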
If we haven't handshaken with a peer, don't send messages that are not
part of the handshake. Additionally, don't queue up invs for sending; they'll
find out soon enough when they ask us what we know.
This commit adds code to properly respond to getdata requests for
transactions by fetching them from the transaction pool. Previously, we
advertised newly available transactions, but the code to respond with the
actual transaction was not written yet.
Also, fix a couple of comments and make the pushTxMsg and pushBlockMsg
functions consistent.
This commit is a first pass at improving the logging. It changes a number
of things to improve the readability of the output. The biggest addition
is message summaries for each message type when using the debug logging
level.
There is still more to do here, such as allowing the level of each
subsystem to be independently specified, syslog support, and allowing the
logging level to be changed at run-time.
The block manager handles inventory messages to know which inventory should
be requested based on what is already known and what is already in flight.
So, this commit adds logic to ask the transaction memory pool if the
transaction is already known before requesting it, and tracks pending
requests in an in-flight transaction map owned by the block manager.
It also moves the transaction processing into the block manager so the
in-flight map can be properly cleaned.
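A minimal sketch of the in-flight map idea, with placeholder types in place
of the real block manager and mempool:

```go
// Sketch: the block manager consults the mempool and its own in-flight map
// before asking a peer for an advertised transaction.
package main

import "fmt"

type blockManager struct {
	requestedTxns map[string]struct{} // txid -> currently in flight
	haveTx        func(txid string) bool
}

func (bm *blockManager) needTx(txid string) bool {
	if bm.haveTx(txid) {
		return false
	}
	if _, inFlight := bm.requestedTxns[txid]; inFlight {
		return false
	}
	bm.requestedTxns[txid] = struct{}{}
	return true
}

// Once the transaction arrives (or the peer goes away) the entry is removed
// so it can be requested again if needed.
func (bm *blockManager) doneTx(txid string) {
	delete(bm.requestedTxns, txid)
}

func main() {
	mempool := map[string]bool{"known": true}
	bm := &blockManager{
		requestedTxns: make(map[string]struct{}),
		haveTx:        func(id string) bool { return mempool[id] },
	}
	fmt.Println(bm.needTx("known"), bm.needTx("new"), bm.needTx("new"))
	bm.doneTx("new")
	fmt.Println(bm.needTx("new"))
}
```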
Rather than showing all errors from ProcessTransaction as a failure, check
whether the error is a TxRuleError, meaning the transaction was rejected (as
opposed to something actually going wrong), and log it accordingly.
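A small sketch of the error split when logging, using a placeholder
TxRuleError type and the modern errors.As idiom rather than the exact check
used here:

```go
// Sketch: distinguish rule rejections from real failures when logging.
package main

import (
	"errors"
	"fmt"
	"log"
)

// TxRuleError marks a transaction that was rejected by policy or consensus
// rules rather than by an internal failure.
type TxRuleError struct{ Description string }

func (e TxRuleError) Error() string { return e.Description }

func processTransaction(txid string) error {
	return TxRuleError{Description: "transaction already exists"}
}

func main() {
	if err := processTransaction("abc123"); err != nil {
		var ruleErr TxRuleError
		if errors.As(err, &ruleErr) {
			// Rejected by the rules: log as informational, not as a failure.
			log.Printf("rejected transaction abc123: %v", err)
		} else {
			log.Printf("failed to process transaction abc123: %v", err)
		}
	}
	fmt.Println("done")
}
```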
This commit is a rather large one which implements transaction pool and
relay according to the protocol rules of the reference implementation.
It makes use of btcchain to ensure the transactions are valid for the
block chain and includes several stricter checks which determine if they
are "standard" or not before admitting them into the pool and relaying
them.
There are still a few TODOs around the more strict rules which determine
which transactions are willing to be mined, but the core checks which
are imperative to operate as a good citizen on the bitcoin network
(everything except the "standard" checks, really) are in place.
Rather than having all of the various places that print peer figure out
the direction and form the string, centralize it by implementing the
Stringer interface on the peer.
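A minimal sketch of the Stringer approach, with a stripped-down peer type:

```go
// Sketch: let the peer print itself with its direction, so callers just use %v.
package main

import "fmt"

type peer struct {
	addr    string
	inbound bool
}

// String satisfies fmt.Stringer, centralizing the direction formatting.
func (p *peer) String() string {
	direction := "outbound"
	if p.inbound {
		direction = "inbound"
	}
	return fmt.Sprintf("%s (%s)", p.addr, direction)
}

func main() {
	p := &peer{addr: "203.0.113.5:8333", inbound: true}
	fmt.Printf("lost peer %v\n", p)
}
```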
Only log errors for most cases if the peer is persistent (and thus requested).
Only log by default after version exchange, and after losing a peer that had
completed version exchange. Make most other messages debug level.
We would occasionally hang for a while during server shutdown; this is due
to an outbound peer waiting on a connection or a sleep. However, we
don't actually need to wait for the peers to finish at all, so just
let them finish.
Secondly, make peer.disconnect and server.shutdown atomic variables so
that checking them from multiple goroutines isn't a race, and clean up
their usage.
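A small sketch of the atomic flag pattern, using sync/atomic on a
stripped-down peer:

```go
// Sketch: a flag checked from several goroutines kept race-free with sync/atomic.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type peer struct {
	disconnect int32 // only accessed atomically
}

func (p *peer) Disconnect() {
	atomic.StoreInt32(&p.disconnect, 1)
}

func (p *peer) Disconnected() bool {
	return atomic.LoadInt32(&p.disconnect) != 0
}

func main() {
	p := &peer{}
	var wg sync.WaitGroup
	wg.Add(1)
	// Another goroutine can flag the disconnect without racing the readers.
	go func() {
		defer wg.Done()
		p.Disconnect()
	}()
	wg.Wait()
	fmt.Println("disconnected:", p.Disconnected())
}
```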
Use this information so that we do not request a block from every peer that
sent us an inv for it; this makes multi-peer much quieter and rather more
bandwidth efficient.
In order to remove a number of possible races we combine block handling
and the sync handler and use one channel for all messages. This ensures that
all messages from a single peer will be received in order. It also
removes the need for a lot of locking between the peer removal code and
the block/inv handlers.
Implement the bucketing by source group and group using essentially the
same algorithm as the address manager in bitcoind.
Fix up the saving of peers.json to do so in a json format that keeps bucket
metadata.
If we fail to load some of the data we assume that we have
incomplete information, so we nuke the existing file and reinitialise so
we have a clean slate.
This removes a horrible case of reach-around from peer into the guts of
the blockmanager to frob the chain. Soon, when we try to deduplicate the
fetching of blocks from multiple peers, this will need decisions made in
a central point.
Discussed at length with davec.
This commit adds detection and filtering for back-to-back duplicate
getblocks requests. This is needed because the trigger for requesting
more blocks is receiving an orphan. When the peer is further behind than
the number of blocks advertised via a single inventory message, the same
orphan block will be sent multiple times. When the peer receives the
final inventory message, it too contains the orphan that was previously
sent. This leads to a duplicate getblocks request that must be filtered
to prevent requesting the final series of blocks again.
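A rough sketch of the duplicate filter, tracking the previous request's
begin and stop hashes (string hashes and names are illustrative):

```go
// Sketch: remember the previous getblocks request and drop an identical
// back-to-back repeat triggered by the same orphan.
package main

import "fmt"

type getBlocksRequest struct {
	begin string // first block locator hash
	stop  string
}

type peer struct {
	prevGetBlocks *getBlocksRequest
}

func (p *peer) pushGetBlocksMsg(begin, stop string) {
	if p.prevGetBlocks != nil && p.prevGetBlocks.begin == begin &&
		p.prevGetBlocks.stop == stop {
		fmt.Println("filtering duplicate getblocks", begin, stop)
		return
	}
	p.prevGetBlocks = &getBlocksRequest{begin: begin, stop: stop}
	fmt.Println("sending getblocks", begin, stop)
}

func main() {
	p := &peer{}
	p.pushGetBlocksMsg("0000aa", "0")
	p.pushGetBlocksMsg("0000aa", "0") // triggered again by the same orphan
	p.pushGetBlocksMsg("0000bb", "0")
}
```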
- Remove leftover debug log prints
- Increment waitgroup outside of goroutine
- Various comment and log message consistency
- Combine peer setup and newPeer -> newInboundPeer
- Save and load peers.json to/from cfg.DataDir
- Only claim addrmgr needs more addresses when it has less than 1000
- Add warning if unknown peer on orphan block.
Use it to add multiple peer support. We try and keep 8 outbound peers
active at all times.
This address manager is not as complete as the one in bitcoind yet, but
additional functionality is being worked on.
We currently handle (in a similar manner to bitcoind):
- biasing between new and already tried addresses based on number of connected
peers.
- rejection of non-default ports until desperate
- address selection probabilities based on last successful connection and number
of failures.
- routability checks based on known unroutable subnets.
- only connecting to each network `group' once at any one time.
We currently lack support for:
- tor ``addresses'' (an .onion address encoded in 64 bytes of ip address)
- full state save and restore (we just save a json with the list of known
addresses in it)
- multiple buckets for new and tried addresses selected by a hash of address and
source. The current algorithm functions the same as bitcoind would with only
one bucket for new and tried (making the address cache rather smaller than it
otherwise would be).
This commit adds support for relaying blocks between peers. It keeps
track of inventory that has either already been advertised to remote peers
or advertised by remote peers using a size-limited most recently used
cache. This helps avoid relaying inventory the peer already knows as
much as possible while not allowing rogue peers to eat up arbitrary
amounts of memory with bogus inventory.
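A minimal sketch of a size-limited most recently used cache built on
container/list, with string hashes standing in for real inventory vectors:

```go
// Sketch: a size-limited MRU cache of inventory hashes kept per peer, so
// known inventory is not re-relayed while memory stays bounded.
package main

import (
	"container/list"
	"fmt"
)

type mruInventoryMap struct {
	invMap  map[string]*list.Element // hash -> list element
	invList *list.List               // front is most recently used
	limit   int
}

func newMruInventoryMap(limit int) *mruInventoryMap {
	return &mruInventoryMap{
		invMap:  make(map[string]*list.Element),
		invList: list.New(),
		limit:   limit,
	}
}

func (m *mruInventoryMap) Exists(hash string) bool {
	_, ok := m.invMap[hash]
	return ok
}

func (m *mruInventoryMap) Add(hash string) {
	if elem, ok := m.invMap[hash]; ok {
		m.invList.MoveToFront(elem)
		return
	}
	// Evict the least recently used entry once the limit is hit.
	if m.invList.Len() >= m.limit {
		oldest := m.invList.Back()
		delete(m.invMap, oldest.Value.(string))
		m.invList.Remove(oldest)
	}
	m.invMap[hash] = m.invList.PushFront(hash)
}

func main() {
	known := newMruInventoryMap(2)
	known.Add("block-a")
	known.Add("block-b")
	known.Add("block-c") // evicts block-a
	fmt.Println(known.Exists("block-a"), known.Exists("block-c"))
}
```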
This commit reworks the getblocks handling a bit to clean it up and match
the reference implementation handling. In particular, it adds monitoring
for when peers request the final block advertised from a previous
getblocks message and automatically advertises the latest known block
inventory to trigger the peer to send another getblocks message.
When no blocks in the block locator are found, start with the block after
the genesis block. This means the client will start over with the genesis
block if unknown block locators are provided. This mirrors the behavior
in the reference implementation.
This commit reworks the getheaders handling a bit to clean it up and match
the reference implementation handling. In particular, in addition to the
normal handling where headers starting after the block locator up to the
stop hash are served, when no locator hashes are provided, the stop hash
acts as a way to specifically request that header. Next, an empty headers
message is sent when no hashes provided by the block locator can be
found. Finally, there was a bug that was limiting the number of headers
that could be requested at once to 500 instead of the expected 2000.
Rather than only setting the services field for inbound peers, set it for
all peers. This field refers to the remote peer's services regardless of
inbound or outbound.
This commit significantly reworks the fetching code to interop better with
bitcoind. In particular, when an inventory message is sent, and the
remote peer requests the final block, the remote peer sends the current
end of the main chain to signal that there are more blocks to get.
Previously this code was automatically requesting more blocks when the
number of in-flight blocks was under a certain threshold. The original
approach does help alleviate delays in the "request final, wait for
orphan, request more" round trip, but due to the aforementioned mechanism,
it leads to double requests and other subtle issues.
This commit modifies the input message handler so that when a remote peer
sends a block, no further messages from that peer are accepted until the
block has been fully processed and therefore known good or bad. This
helps prevent a malicious peer from queueing up a bunch of bad blocks
before disconnecting (or being disconnected) and wasting memory.
Additionally, this behavior is depended on by at least the block
acceptance test tool as the reference implementation processes blocks in
the same thread and therefore blocks further messages until the block has
been fully processed as well.