This behavior seems to have been quite racy and broken.
Move nLocalHostNonce into CNode, and check received nonces against all
non-fully-connected nodes. If there's a match, assume we've connected
to ourselves.
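As a rough sketch of that check (simplified, hypothetical types; the real logic lives in CNode and the version-message handler), each outbound connection keeps the nonce it sent, and a received version nonce matching any not-yet-fully-connected connection's nonce means we dialed ourselves:

#include <cstdint>
#include <random>
#include <vector>

// Hypothetical stand-in for CNode: each connection stores its own nonce.
struct Conn {
    uint64_t nLocalHostNonce;     // nonce we sent in our "version" message
    bool fSuccessfullyConnected;  // version/verack handshake finished
};

uint64_t NewNonce() {
    std::random_device rd;
    return (uint64_t(rd()) << 32) | rd();  // fresh nonce per connection
}

// True if the nonce received in a peer's "version" message matches the nonce
// of any of our own not-yet-fully-connected connections, i.e. we have
// connected to ourselves.
bool IsSelfConnection(const std::vector<Conn>& conns, uint64_t nonceReceived) {
    for (const Conn& c : conns) {
        if (!c.fSuccessfullyConnected && c.nLocalHostNonce == nonceReceived)
            return true;
    }
    return false;
}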
Tests if addresses are online or offline by briefly connecting to them. These short-lived connections are referred to as feeler connections. Feeler connections are designed to increase the number of fresh online addresses in tried by selecting and connecting to addresses in new. One feeler connection is attempted on average once every two minutes.
This change was suggested as Countermeasure 4 in
Eclipse Attacks on Bitcoin’s Peer-to-Peer Network, Ethan Heilman,
Alison Kendler, Aviv Zohar, Sharon Goldberg. ePrint Archive Report
2015/263. March 2015.
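An illustrative sketch of such a feeler loop (the addrman hooks below are placeholders, not the actual interface): pick an address from "new" on an exponentially distributed schedule averaging two minutes, attempt a short-lived connection, and move the address into "tried" on success.

#include <chrono>
#include <random>
#include <string>
#include <thread>

// Placeholder hooks; the real logic lives in addrman and the connection code.
std::string SelectNewAddress() { return "198.51.100.7:8333"; }  // from "new"
bool TryShortLivedConnect(const std::string&) { return true; }   // connect, handshake, disconnect
void MarkTried(const std::string&) {}                            // promote to "tried"

void FeelerLoop() {
    std::mt19937_64 rng{std::random_device{}()};
    // On average one feeler attempt every two minutes.
    std::exponential_distribution<double> nextDelaySeconds(1.0 / 120.0);
    for (;;) {
        std::this_thread::sleep_for(
            std::chrono::duration<double>(nextDelaySeconds(rng)));
        const std::string addr = SelectNewAddress();
        if (TryShortLivedConnect(addr))
            MarkTried(addr);  // a fresh, online address ends up in "tried"
    }
}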
1a5a4e6 Randomize name lookup result in ConnectSocketByName (Pieter Wuille)
f9f5cfc Prevent duplicate connections where one is by name and another by ip (Pieter Wuille)
1111b80 Rework addnode behaviour (Pieter Wuille)
6ee7f05 Allow disconnecting a netgroup with only one member in eviction. (Gregory Maxwell)
5d0ca81 Add recently accepted blocks and txn to AttemptToEvictConnection. (Gregory Maxwell)
* Use CNode::addrName to track whether a connection to a name is already open
* A new connection to a previously-connected by-name addednode is only opened when
the previous one closes (even if the name starts resolving to something else)
* At most one connection is opened per addednode (even if the name resolves to multiple)
* Unify the code between ThreadOpenAddedNodeConnections and getaddednodeinfo
* Information about open connections is always returned, and the dns argument becomes a dummy
* An IP address and inbound/outbound are only reported for the (at most 1) open connection
eebc232 test: Add more test vectors for siphash (Wladimir J. van der Laan)
8884830 Use C++11 thread-safe static initializers (Pieter Wuille)
c31b24f Use 64-bit SipHash of netgroups in eviction (Pieter Wuille)
9bf156b Support SipHash with arbitrary byte writes (Pieter Wuille)
053930f Avoid recalculating vchKeyedNetGroup in eviction logic. (Patrick Strateman)
6182d10 Do not increment nAttempts by more than one for every Good connection. (Gregory Maxwell)
c769c4a Avoid counting failed connect attempts when probably offline. (Gregory Maxwell)
This reduces the rate of notfound responses by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe for a node having a txn before
it has been relayed.
If a node is offline, failed outbound connection attempts will crank up
the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent
connections -- though there should be no real effect from this
unless their addnode configuration is later removed.
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
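A minimal sketch of the netgroup condition described above, assuming a simplified view of our current outbound peers (names are illustrative):

#include <set>
#include <string>
#include <vector>

// Simplified view of an outbound peer: its netgroup key (e.g. an IPv4 /16)
// and whether it is a manual -connect/-addnode peer.
struct Outbound {
    std::string netgroup;
    bool fManual;  // manual peers are not counted either way
};

// Only let a failed outbound attempt increment the addrman counter when we
// believe we are actually online, i.e. we have automatic outbound
// connections to at least two distinct netgroups.
bool ShouldCountFailedAttempt(const std::vector<Outbound>& outbound) {
    std::set<std::string> groups;
    for (const Outbound& p : outbound) {
        if (!p.fManual)
            groups.insert(p.netgroup);
    }
    return groups.size() >= 2;
}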
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
5d5e7a0 net: No need to export ConnectNode (Cory Fields)
e9ed620 net: No need to export DumpBanlist (Cory Fields)
8b8f877 net: make Ban/Unban/ClearBan functionality consistent (Cory Fields)
cca221f net: Drop CNodeRef for AttemptToEvictConnection (Cory Fields)
563f375 net: use the exposed GetNodeSignals() rather than g_signals directly (Cory Fields)
9faa490 net: remove unused set (Cory Fields)
52cbce2 net: don't import std namespace (Cory Fields)
1475ecf Fix de-serialization bug where AddrMan is corrupted after exception (EthanHeilman)
* CAddrDB modified so that when de-serialization code throws an exception, Addrman is reset to a clean state
* CAddrDB modified to make unit tests possible
* Regression test created to ensure bug is fixed
* StartNode modified to clear addrman if CAddrDB::Read returns an error code.
This avoids sending more pointless INVs around updates, and
prevents using filter updates to time-tag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, this makes the mempool no longer leak transaction arrival order
information (as the mempool itself is also sorted) -- at least no more than
relay itself leaks it.
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
The "feefilter" p2p message is used to inform other nodes of your mempool min fee which is the feerate that any new transaction must meet to be accepted to your mempool. This will allow them to filter invs to you according to this feerate.
We used to have a trickle node, a node which was chosen in each iteration of
the send loop that was privileged and allowed to send out queued up non-time
critical messages. Since the removal of the fixed sleeps in the network code,
this resulted in fast and attackable treatment of such broadcasts.
This pull request replaces the 3 remaining trickle use cases with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
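A small sketch of the kind of randomized scheduling this implies, assuming exponentially distributed (Poisson-process) intervals; the helper is illustrative rather than the actual scheduler:

#include <chrono>
#include <cstdint>
#include <random>

// Pick the next time at which queued invs/addrs are flushed to a peer.
// Exponentially distributed intervals make the flush times a Poisson process,
// which an attacker cannot easily predict or synchronize with.
std::chrono::microseconds NextSendTime(std::chrono::microseconds now,
                                       std::chrono::seconds averageInterval) {
    static std::mt19937_64 rng{std::random_device{}()};
    std::exponential_distribution<double> delaySeconds(
        1.0 / std::chrono::duration<double>(averageInterval).count());
    return now + std::chrono::microseconds(
        static_cast<int64_t>(delaySeconds(rng) * 1000000));
}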
aa4b0c2 When not filtering blocks, getdata sends more in one test (Pieter Wuille)
d41e44c Actually only use filterInventoryKnown with MSG_TX inventory messages. (Gregory Maxwell)
b6a0da4 Only use filterInventoryKnown with MSG_TX inventory messages. (Patrick Strateman)
6b84935 Rename setInventoryKnown filterInventoryKnown (Patrick Strateman)
e206724 Remove mruset as it is no longer used. (Gregory Maxwell)
ec73ef3 Replace setInventoryKnown with a rolling bloom filter. (Gregory Maxwell)
ebb25f4 Limit setAskFor and retire requested entries only when a getdata returns. (Gregory Maxwell)
5029698 prevent peer flooding request queue for an inv (kazcw)
Mruset setInventoryKnown was reduced to a remarkably small 1000
entries as a side effect of sendbuffer size reductions in 2012.
This removes setInventoryKnown filtering from merkleBlock responses
because false positives there are especially unattractive and
also because I'm not sure if there aren't race conditions around
the relay pool that would cause some transactions there to
be suppressed. (Also, ProcessGetData was accessing
setInventoryKnown without taking the required lock.)
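To illustrate only the "rolling" part of the replacement (the real code uses CRollingBloomFilter, which applies the same generation scheme to a bloom filter, giving constant memory at the cost of a small false-positive rate), a deliberately simplified exact-set version:

#include <cstddef>
#include <cstdint>
#include <unordered_set>

// Two generations of known items; the older generation is forgotten whenever
// the newer one fills up, so memory stays bounded while recently announced
// items are still remembered.
class RollingKnownSet {
public:
    explicit RollingKnownSet(std::size_t perGeneration) : limit(perGeneration) {}

    void insert(uint64_t txid) {
        current.insert(txid);
        if (current.size() >= limit) {  // roll: drop the oldest generation
            previous.swap(current);
            current.clear();
        }
    }
    bool contains(uint64_t txid) const {
        return current.count(txid) || previous.count(txid);
    }

private:
    std::size_t limit;
    std::unordered_set<uint64_t> current, previous;
};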
This replaces using inv messages to announce new blocks, when a peer requests
(via the new "sendheaders" message) that blocks be announced with headers
instead of inv's.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
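A rough sketch of the announcement decision described above, with illustrative names and constants (this is not the actual net-processing code): announce via headers only when the peer asked for it, the list of blocks is small, and the headers would connect to something the peer is believed to have; otherwise fall back to an inv.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Announcement {
    bool useHeaders;               // true: send "headers", false: send "inv"
    std::vector<uint64_t> hashes;  // block hashes (placeholder for uint256)
};

const std::size_t MAX_BLOCKS_TO_ANNOUNCE = 8;  // illustrative cap

// fPreferHeaders: the peer sent "sendheaders".
// blocksToAnnounce: new tip plus any reorged-in ancestors, oldest first.
// peerKnowsParentOfFirst: we believe the peer already has the parent header.
Announcement DecideAnnouncement(bool fPreferHeaders,
                                const std::vector<uint64_t>& blocksToAnnounce,
                                bool peerKnowsParentOfFirst) {
    if (fPreferHeaders &&
        blocksToAnnounce.size() <= MAX_BLOCKS_TO_ANNOUNCE &&
        peerKnowsParentOfFirst) {
        // Headers connect to something the peer already has: announce them
        // directly, saving the getheaders round trip.
        return {true, blocksToAnnounce};
    }
    // Too many blocks, or we cannot connect to the peer's known headers:
    // fall back to a plain inv of the tip.
    std::vector<uint64_t> tip;
    if (!blocksToAnnounce.empty()) tip.push_back(blocksToAnnounce.back());
    return {false, tip};
}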
The setAskFor duplicate elimination was too eager and removed entries
when we still had no getdata response, allowing the peer to keep
INVing and not responding.
mapAlreadyAskedFor does not keep track of which peer has a request queued for a
particular tx. As a result, a peer can blind a node to a tx indefinitely by
sending many invs for the same tx, and then never replying to getdatas for it.
Each inv received will be placed 2 minutes farther back in mapAlreadyAskedFor,
so a short message containing 10 invs would render that tx unavailable for 20
minutes.
This is fixed by disallowing a peer from having more than one entry for a
particular inv in mapAlreadyAskedFor at a time.
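A minimal sketch of that rule (per-node structure and names are illustrative): each peer may hold at most one scheduled entry for a given inv, and entries are retired when the getdata is answered rather than when it is first scheduled.

#include <cstddef>
#include <cstdint>
#include <set>

// Illustrative per-node bookkeeping of which invs this peer already has in
// the global request schedule.
struct NodeAskFor {
    static constexpr std::size_t SETASKFOR_MAX = 50000;
    std::set<uint64_t> setAskFor;  // invs from this peer currently in flight

    // Returns true if the inv may be queued in mapAlreadyAskedFor for this
    // peer; a repeat inv from the same peer is ignored, so one peer can no
    // longer push a transaction ever further into the future.
    bool TryQueue(uint64_t inv) {
        if (setAskFor.size() >= SETASKFOR_MAX) return false;
        return setAskFor.insert(inv).second;  // false if already queued
    }

    // Retire the entry only once the corresponding getdata has been answered
    // (or has timed out), not when the request is first scheduled.
    void OnResponse(uint64_t inv) { setAskFor.erase(inv); }
};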
- Force AUTHCOOKIE size to be 32 bytes: This provides protection against
an attack where a process pretends to be Tor and uses the cookie
authentication method to nab arbitrary files such as the
wallet
- torcontrol logging
- fix cookie auth
- add HASHEDPASSWORD auth, fix fd leak when fwrite() fails
- better error reporting when cookie file is not ok
- better init/shutdown flow
- stop advertising service when disconnected from tor control port
- COOKIE->SAFECOOKIE auth
* -maxuploadtarget can be set in MiB
* if <limit> - ( time-left-in-24h-cycle / 600 * MAX_BLOCK_SIZE ) has been reached, stop serving blocks older than one week and filtered blocks
* no further action is taken once the limit has been reached, so there is no guarantee that the target will not be surpassed
* add outbound limit information to rpc getnettotals
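A sketch of the threshold arithmetic described above, with illustrative names and a hard-coded serialized block size cap (this is not the actual accounting code):

#include <cstdint>

// -maxuploadtarget is given in MiB per 24h cycle; 0 disables the limit.
// Historic (older than one week) and filtered blocks stop being served once
// the remaining budget is smaller than what new-block relay alone could
// still need: (seconds left in cycle / 600) * MAX_BLOCK_SIZE.
bool ShouldServeHistoricBlocks(uint64_t uploadTargetBytes,
                               uint64_t bytesSentThisCycle,
                               uint64_t secondsLeftInCycle) {
    if (uploadTargetBytes == 0) return true;  // no limit configured
    const uint64_t MAX_BLOCK_SIZE = 1000000;  // serialized block cap, bytes
    const uint64_t buffer = secondsLeftInCycle / 600 * MAX_BLOCK_SIZE;
    // Reserve the remaining budget for relaying new blocks.
    return bytesSentThisCycle + buffer < uploadTargetBytes;
}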
7b79cbd limit total length of user agent comments (Pavol Rusnak)
557f8ea implement uacomment config parameter which can add comments to user agent as per BIP-0014 (Pavol Rusnak)
This sets aside a number of connection slots for whitelisted peers,
useful for ensuring your local users and miners can always get in,
even if your limit on inbound connections has already been reached.
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer -- worst case if the bloom filter happens
to be as full as it gets, 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established) and then there is no per-item-remembered memory
allocation.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
This is a simplified re-do of closed pull #3088.
This patch eliminates the privacy and reliability problematic use
of centralized web services for discovering the node's addresses
for advertisement.
The Bitcoin protocol already allows your peers to tell you what
IP they think you have, but this data isn't trustworthy since
they could lie. So the challenge is using it without creating a
DOS vector.
To accomplish this we adopt an approach similar to the one used
by P2Pool: If we're announcing and don't have a better address
discovered (e.g. via UPNP) or configured we just announce to
each peer the address that peer told us. Since peers could
already replace, forge, or drop our address messages this cannot
create a new vulnerability... but if even one of our peers is
giving us a good address we'll eventually make a useful
advertisement.
We also may randomly use the peer-provided address for the
daily rebroadcast even if we otherwise have a seemingly routable
address, just in case we've been misconfigured (e.g. by UPNP).
To avoid privacy problems, we only do these things if discovery
is enabled.
Many changes:
* Do not use 'getblocks', but 'getheaders', and use it to build a headers tree.
* Blocks are fetched in parallel from all available outbound peers, using a
limited moving window. When one peer stalls the movement of the window, it is
disconnected.
* No more orphan blocks. At all. We only ever request a block for which we have
verified the headers, and store it to disk immediately. This means that a
disk-fill attack would require PoW.
* Require protocol version 31800 for every peer (released in December 2010).
* No more syncnode (we sync from everyone we can, though limited to 1 during
initial *headers* sync).
* Introduce some extra named constants, comments and asserts.
- ensures a consistent usage in header files
- also add a blank line after the copyright header where missing
- also remove orphan new-lines at the end of some files
Split up util.cpp/h into:
- string utilities (hex, base32, base64): no internal dependencies, no dependency on boost (apart from foreach)
- money utilities (parsemoney, formatmoney)
- time utilities (gettime*, sleep, format date):
- and the rest (logging, argument parsing, config file parsing)
The latter is basically the environment and OS handling,
and is stripped of all utility functions, so we may want to
rename it to something else than util.cpp/h for clarity (Matt suggested
osinterface).
Breaks dependency of sha256.cpp on all the things pulled in by util.
aa82795 Add detailed network info to getnetworkinfo RPC (Wladimir J. van der Laan)
075cf49 Add GetNetworkName function (Wladimir J. van der Laan)
c91a947 Add IsReachable(net) function (Wladimir J. van der Laan)
60dc8e4 Allow -onlynet=onion to be used (Wladimir J. van der Laan)
Simpler alternative to #4348.
The current setup with closesocket() is strange. It poses
as a compatibility wrapper but adds functionality.
Rename it and make it a documented utility function in netbase.
Code movement only, zero effect on the functionality.
4eedf4f make RandAddSeed() use OPENSSL_cleanse() (Philip Kaufmann)
6354935 move rand functions from util to new random.h/.cpp (Philip Kaufmann)
001a53d add GetRandBytes() as wrapper for RAND_bytes() (Philip Kaufmann)
This adds a -whitelist option to specify subnet ranges from which peers
that connect are whitelisted. In addition, there is a -whitebind option
which works like -bind, except peers connecting to it are also
whitelisted (allowing a separate listen port for trusted connections).
Being whitelisted has two effects (for now):
* They are immune to DoS disconnection/banning.
* Transactions they broadcast (which are valid) are always relayed,
even if they were already in the mempool. This means that a node
can function as a gateway for a local network, and that rebroadcasts
from the local network will work as expected.
Whitelisting replaces the magic exemption localhost had for DoS
disconnection (local addresses are still never banned, though), which
implied hidden service connects (from a localhost Tor node) were
incorrectly immune to DoS disconnection as well. This old
behaviour is removed for that reason, but can be restored by specifying
-whitelist=127.0.0.1 or -whitelist=::1. -whitebind
is safer to use in case non-trusted localhost connections are expected
(like hidden services).
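As an illustration of the subnet matching that -whitelist implies (IPv4 only, hypothetical types; the real code uses CSubNet/CNetAddr and also covers -whitebind and IPv6):

#include <cstdint>
#include <vector>

// Minimal IPv4-only stand-in for a configured -whitelist entry.
struct SubNetV4 {
    uint32_t network;  // host byte order
    uint32_t mask;
};

// A peer whose address falls in any whitelisted range is immune to DoS
// banning and has its valid transactions relayed even if already known.
bool IsWhitelistedRange(uint32_t addr, const std::vector<SubNetV4>& whitelist) {
    for (const SubNetV4& s : whitelist) {
        if ((addr & s.mask) == (s.network & s.mask))
            return true;
    }
    return false;
}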
- remove an unneeded else in ConnectNode()
- make 0 a double and change to 0.0 in ConnectNode()
- rename strDest to pszDest in OpenNetworkConnection()
- remove an unneeded call to our REF() macro in BindListenPort()
- small style cleanups and removal of unneeded new-lines
- add DEFAULT_LISTEN in net.h and use in the code (shared
setting between core and GUI)
Important: This makes it obvious that we need to re-think the
settings/options handling, as GUI settings are processed before
any parameter-interaction (which is mostly important for network
stuff) in AppInit2()!
... instead of after 30 minutes of no sending, for latency measurement
and keep-alive. Also, disconnect if no reply arrives within 20 minutes,
instead of after 90 minutes of inactivity (for peers supporting the 'pong' message).
f0a83fc Use Params().NetworkID() instead of TestNet() from the payment protocol (jtimon)
2871889 net.h was using std namespace through chainparams.h included in protocol.h (jtimon)
c8c52de Replace virtual methods with static attributes, chainparams.h depends on protocol.h instead of the other way around (jtimon)
a3d946e Get rid of TestNet() (jtimon)
6fc0fa6 Add RPCisTestNet chain parameter (jtimon)
cfeb823 Add RequireStandard chain parameter (jtimon)
21913a9 Add AllowMinDifficultyBlocks chain parameter (jtimon)
d754f34 Move majority constants to chainparams (jtimon)
8d26721 Get rid of RegTest() (jtimon)
cb9bd83 Add DefaultCheckMemPool chain parameter (jtimon)
2595b9a Add DefaultMinerThreads chain parameter (jtimon)
bfa9a1a Add MineBlocksOnDemand chain parameter (jtimon)
1712adb Add MiningRequiresPeers chain parameter (jtimon)
Adds two new info query commands that take over information from
hodge-podge `getinfo`.
Also some new information is added:
- `getblockchaininfo`
- `chain`: (string) current chain (main, testnet3, regtest)
- `verificationprogress`: (numeric) estimated verification progress
- `chainwork`
- `getnetworkinfo`
- `localaddresses`: (array) local addresses, from mapLocalHost (fixes #1734)
Amend to d5f1e72. It turns out that BerkeleyDB was including inttypes.h
indirectly, so we cannot fix this with just macros.
Trivial commit: apply the following script to all .cpp and .h files:
# Middle
sed -i 's/"PRIx64"/x/g' "$1"
sed -i 's/"PRIu64"/u/g' "$1"
sed -i 's/"PRId64"/d/g' "$1"
# Initial
sed -i 's/PRIx64"/"x/g' "$1"
sed -i 's/PRIu64"/"u/g' "$1"
sed -i 's/PRId64"/"d/g' "$1"
# Trailing
sed -i 's/"PRIx64/x"/g' "$1"
sed -i 's/"PRIu64/u"/g' "$1"
sed -i 's/"PRId64/d"/g' "$1"
After this commit, `git grep` for PRI.64 should turn up nothing except
the defines in util.h.
As the tinyformat-based formatting system (introduced in b77dfdc) is
type-safe, no special format characters are needed to specify sizes.
Tinyformat can support (ignore) the C99 prefixes such as "ll" but
chokes on MSVC's inttypes.h defines such as "I64X". So don't
include inttypes.h and define our own for compatibility.
(an alternative would be to sweep the entire codebase using sed -i to
get rid of the size specifiers but this has less diff impact)
Add a contrib/devtools/fix-copyright-headers.py script to be able to perform this maintenance task with ease during the rest of the year, every year. Modify contrib/devtools/README.md to document what fix-copyright-headers.py does.
Keep track of which block is being requested (and to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
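A simplified sketch of the per-peer in-flight bookkeeping this describes (illustrative names and limits; stall detection and timeouts are omitted):

#include <cstddef>
#include <cstdint>
#include <map>
#include <set>

struct BlockDownloadState {
    static constexpr std::size_t MAX_BLOCKS_IN_FLIGHT_PER_PEER = 16;  // illustrative

    std::map<uint64_t, int> requestedFrom;       // block hash -> peer id
    std::map<int, std::set<uint64_t>> inFlight;  // peer id -> requested blocks

    // A block is requested from at most one peer at a time, and only if that
    // peer still has spare in-flight slots; this removes duplicate downloads.
    bool CanRequest(uint64_t hash, int peer) const {
        if (requestedFrom.count(hash)) return false;
        auto it = inFlight.find(peer);
        return it == inFlight.end() ||
               it->second.size() < MAX_BLOCKS_IN_FLIGHT_PER_PEER;
    }
    void MarkRequested(uint64_t hash, int peer) {
        requestedFrom[hash] = peer;
        inFlight[peer].insert(hash);
    }
    void MarkReceived(uint64_t hash) {
        auto it = requestedFrom.find(hash);
        if (it == requestedFrom.end()) return;
        inFlight[it->second].erase(hash);
        requestedFrom.erase(it);
    }
};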
This was a leftover from the times in which
peers.dat depended on BDB.
Other functions in db.cpp still depend on BerkeleyDB,
to be able to compile without BDB this (small)
functionality needs to be moved to another file.
Use misc methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
INIT_PROTO_VERSION is the initial version; after a successful version/verack exchange it is increased to the negotiated version.
MIN_PEER_PROTO_VERSION could be a different value to disconnect from peers older than a specified version.
The existing CNode::addrLocal member is revealed to the user,
as an address string, similar to the existing "addr" field.
Instead of showing garbage or an empty string,
the field simply will not appear in the output if the local address is not known yet.
New RPC "ping" command to request ping.
Implemented "pong" message handler.
New "pingtime" field in getpeerinfo, to provide results to user.
New "pingwait" field, to show pings still in flight, to better see newly lagging peers.
This reduces a peer's ability to attack network resources by
using a full bloom filter, but without reducing the usability
of bloom filters. It sets a default match-everything filter
for peers and it generalizes a prior optimization to
cover more cases.
Added explicit include of main.h in init.cpp, changed include of init.h to include of main.h in net.cpp.
Added function registration for net.cpp in init.cpp's network initialization.
Removed protocol.cpp's dependency on main.h.
TODO: Remove main.h include in net.cpp.
This introduces the concept of the 'sync node', which is the one we
asked for missing blocks. In case the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
It seems there were two mechanisms for assessing whether a CNode
was still in use: a refcount and a release timestamp. The latter
seems to have been there for a long time, as a safety mechanism.
However, this timer also keeps CNode objects alive for far longer
than necessary after disconnects, potentially opening up a DoS
window.
This commit removes the timestamp-based mechanism, and replaces
it with an assert(nRefCount >= 0), to verify that the refcounting
is indeed correctly working.
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, ported the BitcoinMiner threads to use their
own boost::thread_group.
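A minimal sketch of the boost::thread_group usage this implies (the thread functions here are hypothetical stand-ins for the real main-loop threads):

#include <boost/thread.hpp>

// Hypothetical stand-ins for the real main-loop thread functions.
void ThreadSocketHandler()  { /* ... */ }
void ThreadMessageHandler() { /* ... */ }

int main() {
    boost::thread_group threadGroup;  // holds all main-loop threads

    // Replaces the vnThreadsRunning[] counters: the group tracks the threads.
    threadGroup.create_thread(&ThreadSocketHandler);
    threadGroup.create_thread(&ThreadMessageHandler);

    // ... run until shutdown is requested ...

    threadGroup.interrupt_all();  // at shutdown, ask the threads to stop
    threadGroup.join_all();       // and wait for them before exiting
    return 0;
}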
This will result in re-requesting invs if we are under heavy inv
load; however, as long as we get no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential for causing large amounts of data to be
sent. In case several hundred blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
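An illustrative sketch of that deferral (the item and send-buffer checks are placeholders): process getdata items until the send buffer fills, keep the remainder queued on the node, and resume on the next message-handling pass.

#include <cstdint>
#include <deque>

// Illustrative per-node getdata queue: items left over when the send buffer
// fills stay queued and are processed again on the next pass.
struct GetDataQueue {
    std::deque<uint64_t> vRecvGetData;  // outstanding getdata items

    template <typename SendItem, typename SendBufferFull>
    void Process(SendItem sendItem, SendBufferFull sendBufferFull) {
        while (!vRecvGetData.empty() && !sendBufferFull()) {
            sendItem(vRecvGetData.front());  // serialize one block/tx reply
            vRecvGetData.pop_front();        // anything left waits for later
        }
    }
};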
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during optimistic write. Drain write queue, before receiving
any more messages.
This avoids needlessly queueing received data, if the remote peer
is not themselves receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
Replaces CNode::vRecv buffer with a vector of CNetMessage's. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected, if message larger than MAX_SIZE
or if CMessageHeader deserialization fails (latter is impossible?).
Previously, code would simply eat garbage data all day long.
* Socket is disconnected, if we fail to find pchMessageStart.
We do not search through garbage, to find pchMessageStart. Each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)