-add cached `token` to Contact objects to minimize findValue requests
-remove self_store, always store to remote contacts even if we're the closest known node to the hash
-move the store call and error handling from announceHaveBlob to a smaller function of its own
-track contact failures, last replied, and last requested. use this to provide a 'contact_is_good' property on Contact objects (see the first sketch after this list)
-ensure no duplicate contact objects are created
-remove confusing conflation of node id strings with Contact objects, update docstrings
-move RPC failure tracking to a callback/errback pair in sendRPC (so the contact is only updated once)
-handle seed nodes during the join sequence by setting their node ids after they initially reply to our ping
-name all of the kademlia RPC keyword args, remove confusing **kwargs and dictionary parsing
-add host ip/port to DHT send/receive logging to make the results comprehensible when running many nodes at once
-remove hash_announcer from Node and DiskBlobManager
-remove announcement related functions from DiskBlobManager
-update SQLiteStorage to store announcement times and provide blob hashes needing to be announced
-use dataExpireTimeout from lbrynet.dht.constants for re-announce timing
-use a DeferredSemaphore to bound concurrent blob announcements (sketched after this list)
-add the reactor (clock) and the reactor functions listenUDP, callLater, and resolve as arguments to Node.__init__
-set the reactor clock on LoopingCalls to make them easily testable (sketched after this list)
-convert callLater manage loops to LoopingCalls
-use a LoopingCall rather than a scheduled callLater to run the manage function
-track announce speed
-retry failed store requests up to 3 times
-return a dict of {blob_hash: [storing_node_id]} results from _announce_hashes
-refactor the _refreshRoutingTable inline callbacks
-add and use DeferredLockContextManager (sketched after this list)
-don't trap errback from iterativeFindNode in iterativeAnnounceHaveBlob
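
A minimal sketch of the contact-quality bookkeeping described above; the field names, the one-hour window, and the two-failure threshold are illustrative assumptions rather than the exact lbrynet values:

    import time

    class Contact(object):
        def __init__(self, node_id, address, port):
            self.id = node_id
            self.address = address
            self.port = port
            self.failures = []        # timestamps of failed RPCs (assumed field)
            self.last_replied = None  # timestamp of the last reply (assumed field)
            self.last_requested = None

        def update_last_replied(self):
            self.last_replied = time.time()

        def update_last_requested(self):
            self.last_requested = time.time()

        def update_last_failed(self):
            self.failures.append(time.time())

        @property
        def contact_is_good(self):
            # True if it replied recently, False if it keeps failing,
            # None if we don't know yet
            now = time.time()
            recent_failures = [t for t in self.failures if now - t < 3600]
            if len(recent_failures) >= 2:
                return False
            if self.last_replied is not None and now - self.last_replied < 3600:
                return True
            return None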
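
A sketch of bounding concurrent announcements with Twisted's DeferredSemaphore, which also shows the {blob_hash: [storing_node_id]} result shape; announce_blob and the concurrency limit are assumed names and values:

    from twisted.internet import defer

    MAX_CONCURRENT_ANNOUNCEMENTS = 10  # illustrative limit

    def announce_blobs(blob_hashes, announce_blob):
        # announce_blob(blob_hash) is assumed to return a Deferred that
        # fires with the list of node ids that stored the blob
        sem = defer.DeferredSemaphore(MAX_CONCURRENT_ANNOUNCEMENTS)
        ds = [sem.run(announce_blob, blob_hash) for blob_hash in blob_hashes]
        d = defer.gatherResults(ds)
        # gatherResults preserves input order, so zipping is safe
        d.addCallback(lambda results: dict(zip(blob_hashes, results)))
        return d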
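
The LoopingCall change can be pictured as follows; the Manager class and the 60-second interval are illustrative:

    from twisted.internet import task

    class Manager(object):
        def __init__(self, clock):
            # `clock` is the real reactor in production and a
            # twisted.internet.task.Clock in unit tests
            self._manage_lc = task.LoopingCall(self.manage)
            self._manage_lc.clock = clock

        def start(self):
            return self._manage_lc.start(60, now=False)

        def stop(self):
            if self._manage_lc.running:
                self._manage_lc.stop()

        def manage(self):
            pass  # periodic maintenance work goes here

In a test, `clock = task.Clock(); m = Manager(clock); m.start(); clock.advance(60)` runs manage() deterministically, with no real time passing.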
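
And a guess at the shape of DeferredLockContextManager: a thin wrapper that lets a twisted DeferredLock guard a critical section inside an inlineCallbacks function (the caller yields the Deferred returned by __enter__):

    from twisted.internet import defer

    class DeferredLockContextManager(object):
        def __init__(self, lock):
            self._lock = lock  # a twisted.internet.defer.DeferredLock

        def __enter__(self):
            # returns a Deferred; the caller yields it to hold the lock
            return self._lock.acquire()

        def __exit__(self, exc_type, exc_val, exc_tb):
            self._lock.release()

    # usage inside an inlineCallbacks function:
    #     with DeferredLockContextManager(self._lock) as d:
    #         yield d
    #         ... critical section ...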
This method can be used by other components to check
whether the Node's routing table contains at least one peer.
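
A minimal sketch of such a check, assuming the routing table keeps contacts in k-buckets (the method name and attribute names here are assumptions):

    def hasContacts(self):
        # scan the routing table's k-buckets; report True as soon as
        # any bucket holds at least one contact
        for bucket in self._routingTable._buckets:
            if bucket._contacts:
                return True
        return False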
Signed-off-by: Antonio Quartulli <antonio@mandelbit.com>
In order to attempt to join the DHT several times
(i.e. when the first attempt has failed) we need to
split component initialization from the actual
join operation.
Create node.startNetwork() to initialize the node
and keep the rest in node.joinNetwork()
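
Roughly, the resulting shape looks like the sketch below; the attribute names and the ping call are assumptions used only to illustrate the split:

    from twisted.internet import defer

    class Node(object):
        def __init__(self, reactor, port, protocol):
            self._reactor = reactor
            self._port = port
            self._protocol = protocol
            self._listening = False

        def startNetwork(self):
            # one-time setup: open the UDP port and start the protocol
            if not self._listening:
                self._reactor.listenUDP(self._port, self._protocol)
                self._listening = True

        @defer.inlineCallbacks
        def joinNetwork(self, known_node_addresses):
            # contact the seed nodes; since setup lives in startNetwork,
            # a failed join can simply be attempted again
            for address, port in known_node_addresses:
                yield self._protocol.sendPing(address, port)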
Signed-off-by: Antonio Quartulli <antonio@mandelbit.com>
If a node is returning a peer list for a given blob hash
(whether requested via the CLI or via the DHT) and it is
itself part of the resulting peer list, it will filter
itself out before returning the list.
This makes results across the DHT inconsistent, as
different nodes won't include themselves when
responding to a findValue/findNode query.
Remove this filtering so that the local node ID is always
included when needed.
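
The removed filtering amounts to dropping a line like the following from the findValue/findNode response path (illustrative, not the literal diff):

    # before: the responding node excluded itself from the peer list
    peers = [peer for peer in peers if peer.id != self.node_id]
    # after: peers is returned unmodified, so the answer for a blob hash
    # is the same no matter which node is asked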
Signed-off-by: Antonio Quartulli <antonio@mandelbit.com>