This adds the listening address on which incoming connections are received to the
CNode and CNodeStats structures.
The address is reported in `getpeerinfo`.
This can be useful for distinguishing connections received on different listening ports
(e.g. when using a different listening port for Tor hidden service connections)
or different networks.
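For illustration, the shape of the change looks roughly like this. This is a simplified sketch with placeholder names (`addrLocalBind`, `"addrbind"`), not necessarily those used in the actual commit:

```cpp
#include <string>

// Illustrative sketch only: record the local listening address that
// accepted the peer, and surface it through the node's stats.
struct CNodeStats
{
    std::string addrLocalBind; // listening address that accepted this peer
};

struct CNode
{
    std::string addrLocalBind; // recorded when the connection is accepted

    void CopyStats(CNodeStats& stats) const
    {
        stats.addrLocalBind = addrLocalBind;
    }
};

// getpeerinfo (sketch): emitting the field lets connections arriving on
// different listening ports (e.g. a Tor hidden service port) be told apart:
//   obj.pushKV("addrbind", stats.addrLocalBind);
```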
381a46e Consensus: Policy: MOVEONLY: Move CFeeRate out of the consensus module (Jorge Timón)
330bb5a Consensus: Minimal way to move dust out of consensus (Jorge Timón)
Tree-SHA512: 19a2ea8169afd5a9d3f940d8974e34cfaead153e3ff3068ac82fccdb8694d19d9b45938904ec9e8cd095bd5ca3a0080364da29372f6aaf56b11a6c2ccd6c7a4d
These are (afaik) all long-standing races or concurrent accesses. Going
forward, we can clean these up so that they're not all individual atomic
accesses.
- Reintroduce cs_vRecv to guard receive-specific vars
- Lock vRecv/vSend for CNodeStats
- Make some vars atomic.
- Only set the connection time in CNode's constructor so that it doesn't change
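A minimal sketch of the pattern, using standard C++ types in place of the project's wrappers; member names are illustrative:

```cpp
#include <atomic>
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Simplified model of the locking/atomics split described above.
class CNode
{
public:
    // Receive-specific vars guarded by a dedicated mutex again:
    std::mutex cs_vRecv;
    uint64_t nRecvBytes = 0;                               // under cs_vRecv
    std::map<std::string, uint64_t> mapRecvBytesPerMsgCmd; // under cs_vRecv

    // Simple counters become individual atomics instead of sharing a lock:
    std::atomic<int64_t> nLastSend{0};
    std::atomic<int64_t> nLastRecv{0};

    // Set once in the constructor so it never changes afterwards:
    const int64_t nTimeConnected;

    explicit CNode(int64_t now) : nTimeConnected(now) {}
};
```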
0729102 Net: pass interruptMsgProc as const where possible (Jorge Timón)
fc7f2ff Net: Make CNetMsgMaker more const (Jorge Timón)
d45955f Net: CConnman: Make some methods const (Jorge Timón)
Since ForEach* can be used to send messages to all nodes, the caller may
end up sending a message before the version handshake is complete. To limit
this, filter out these nodes. While we're at it, may as well filter out
disconnected nodes as well.
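For illustration, the filter can be expressed as a small predicate along these lines (a simplified model, not the exact code):

```cpp
#include <atomic>

// Simplified model of the per-node state the filter inspects.
struct CNode
{
    std::atomic<bool> fSuccessfullyConnected{false}; // version handshake done
    std::atomic<bool> fDisconnect{false};            // marked for disconnect
};

// Predicate applied by ForEach* before invoking the caller's callback, so
// callers cannot accidentally message a peer mid-handshake or one that is
// already on its way out.
static bool NodeFullyConnected(const CNode* pnode)
{
    return pnode && pnode->fSuccessfullyConnected && !pnode->fDisconnect;
}
```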
Delete unused methods rather than updating them.
This avoids having some vars set if the version negotiation fails.
Also copy it all into CNode at the same site. nVersion and
fSuccessfullyConnected are set last, as they are the gates for the other vars.
Make them atomic for that reason.
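A simplified model of that ordering, with illustrative field names; the atomics act as release gates for the plain fields written before them:

```cpp
#include <atomic>
#include <cstdint>
#include <string>
#include <utility>

struct CNode
{
    // Plain fields, written before the gates below are opened:
    uint64_t nServices = 0;
    std::string strSubVer;

    // The "gates": other threads only read the fields above after
    // observing these, so they are atomic and assigned last.
    std::atomic<int> nVersion{0};
    std::atomic<bool> fSuccessfullyConnected{false};
};

void OnVersionNegotiated(CNode& node, uint64_t services, std::string subver, int version)
{
    node.nServices = services;            // ordinary writes first
    node.strSubVer = std::move(subver);
    node.nVersion = version;              // gates last
    node.fSuccessfullyConnected = true;
}
```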
376b3c2 Make the cs_sendProcessing a LOCK instead of a TRY_LOCK (Matt Corallo)
d7c58ad Split CNode::cs_vSend: message processing and message sending (Matt Corallo)
cs_vSend is used for two purposes - to lock the data structures used
to queue messages to place on the wire, and to ensure SendMessages is
only called once at a time per node. I believe SendMessages used
to access some of the vSendMsg stuff, but it doesn't anymore, so
these two uses do not need to share a mutex; keeping them on one
also makes deadlocks much more likely.
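A simplified model of the split, with std::mutex standing in for the project's lock types:

```cpp
#include <deque>
#include <mutex>
#include <vector>

// Simplified model of the two-mutex split; names are illustrative.
class CNode
{
public:
    std::mutex cs_vSend;          // guards only the wire queue below
    std::deque<std::vector<unsigned char>> vSendMsg; // under cs_vSend

    std::mutex cs_sendProcessing; // serializes SendMessages() per node
};

void SendMessages(CNode* pnode)
{
    // (elided) decide what to send, then queue the serialized bytes:
    std::lock_guard<std::mutex> lock(pnode->cs_vSend);
    pnode->vSendMsg.push_back(std::vector<unsigned char>{});
}

void MessageHandlerIteration(CNode* pnode)
{
    // Holding a dedicated mutex here no longer blocks the socket thread,
    // which only needs cs_vSend to drain vSendMsg onto the wire.
    std::lock_guard<std::mutex> lock(pnode->cs_sendProcessing);
    SendMessages(pnode);
}
```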
e60360e net: remove cs_vRecvMsg (Cory Fields)
991955e net: add a flag to indicate when a node's send buffer is full (Cory Fields)
c6e8a9b net: add a flag to indicate when a node's process queue is full (Cory Fields)
4d712e3 net: add a new message queue for the message processor (Cory Fields)
c5a8b1b net: rework the way that the messagehandler sleeps (Cory Fields)
c72cc88 net: remove useless comments (Cory Fields)
ef7b5ec net: Add a simple function for waking the message handler (Cory Fields)
f5c36d1 net: record bytes written before notifying the message processor (Cory Fields)
60befa3 net: handle message accounting in ReceiveMsgBytes (Cory Fields)
56212e2 net: set message deserialization version when it's actually time to deserialize (Cory Fields)
0e973d9 net: remove redundant max sendbuffer size check (Cory Fields)
6042587 net: wait until the node is destroyed to delete its recv buffer (Cory Fields)
f6315e0 net: only disconnect if fDisconnect has been set (Cory Fields)
5b4a8ac net: make GetReceiveFloodSize public (Cory Fields)
e5bcd9c net: make vRecvMsg a list so that we can use splice() (Cory Fields)
53ad9a1 net: fix typo causing the wrong receive buffer size (Cory Fields)
vRecvMsg is now only touched by the socket handler thread.
The accounting vars (nRecvBytes/nLastRecv/mapRecvBytesPerMsgCmd) are also
only used by the socket handler thread, with the exception of queries from
rpc/gui. These accesses are not thread-safe, but they never were. This needs to
be addressed separately.
Also, update the comment describing the data flow.
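A sketch of the hand-off that making vRecvMsg a std::list enables (via splice(), per the commit list above); simplified types, illustrative names:

```cpp
#include <list>
#include <mutex>

struct CNetMessage { bool complete = true; };

// vRecvMsg belongs to the socket thread; completed messages are spliced
// (no copies, no allocation) onto the processor's queue under a
// short-lived lock.
struct CNode
{
    std::list<CNetMessage> vRecvMsg;    // socket handler thread only
    std::mutex cs_vProcessMsg;
    std::list<CNetMessage> vProcessMsg; // under cs_vProcessMsg
};

void HandOffCompleteMessages(CNode& node)
{
    // Find the end of the run of complete messages at the front.
    auto it = node.vRecvMsg.begin();
    while (it != node.vRecvMsg.end() && it->complete) ++it;

    std::lock_guard<std::mutex> lock(node.cs_vProcessMsg);
    node.vProcessMsg.splice(node.vProcessMsg.end(), node.vRecvMsg,
                            node.vRecvMsg.begin(), it);
}
```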
Similar to the recv flag, but this one indicates whether or not the node's send
buffer is full.
The socket handler checks the send queue when a new message is added and pauses
if necessary, and possibly unpauses after each message is drained from its buffer.
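A minimal model of the send-side flag, with illustrative names and a placeholder limit:

```cpp
#include <atomic>
#include <cstddef>

struct CNode
{
    std::atomic<bool> fPauseSend{false};
    std::atomic<size_t> nSendSize{0}; // total bytes queued for the wire
};

constexpr size_t SEND_BUFFER_MAX = 1000 * 1000; // placeholder limit

void OnMessageQueued(CNode& node, size_t nBytes)
{
    node.nSendSize += nBytes;
    node.fPauseSend = node.nSendSize > SEND_BUFFER_MAX; // pause if full
}

void OnBytesDrained(CNode& node, size_t nBytes)
{
    node.nSendSize -= nBytes;
    node.fPauseSend = node.nSendSize > SEND_BUFFER_MAX; // possibly unpause
}
```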
Messages are dumped very quickly from the socket handler to the processor, so
it's the depth of the processing queue that's interesting.
The socket handler checks the process queue's size during the brief message
hand-off and pauses if necessary, and the processor possibly unpauses each time
a message is popped off its queue.
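And the receive/processing side, again with illustrative names; the socket handler consults the flag when deciding whether to keep receiving:

```cpp
#include <atomic>
#include <cstddef>

struct CNode
{
    std::atomic<bool> fPauseRecv{false};
    std::atomic<size_t> nProcessQueueSize{0}; // bytes awaiting processing
};

constexpr size_t RECV_FLOOD_SIZE = 1000 * 1000; // placeholder limit

// Socket handler, during the brief message hand-off:
void OnMessagesHandedOff(CNode& node, size_t nBytes)
{
    node.nProcessQueueSize += nBytes;
    node.fPauseRecv = node.nProcessQueueSize > RECV_FLOOD_SIZE;
}

// Processor, each time a message is popped off its queue:
void OnMessagePopped(CNode& node, size_t nBytes)
{
    node.nProcessQueueSize -= nBytes;
    node.fPauseRecv = node.nProcessQueueSize > RECV_FLOOD_SIZE; // may unpause
}

// Socket handler: stop selecting the socket for reads while paused.
bool SocketWantsRecv(const CNode& node)
{
    return !node.fPauseRecv;
}
```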
In order to sleep accurately, the message handler needs to know if _any_ node
has more processing that it should do before the entire thread sleeps.
Rather than returning a value that represents whether ProcessMessages
encountered a message that should trigger a disconnect, interpret the return
value as whether or not that node has more work to do.
Also, use a global fProcessWake value that can be set by other threads,
which takes precedence (for one cycle) over the message handler's decision.
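A sketch of the wake mechanism, modeled on a condition variable plus an atomic wake flag (names are illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex mutexMsgProc;
std::condition_variable condMsgProc;
std::atomic<bool> fMsgProcWake{false};

// Other threads (e.g. the socket handler) call this to wake the processor.
void WakeMessageHandler()
{
    {
        std::lock_guard<std::mutex> lock(mutexMsgProc);
        fMsgProcWake = true;
    }
    condMsgProc.notify_one();
}

// In the message handler, after one pass over all nodes: only sleep if no
// node reported more work and nothing has requested a wake-up.
void MaybeSleep(bool fMoreWork)
{
    std::unique_lock<std::mutex> lock(mutexMsgProc);
    if (!fMoreWork && !fMsgProcWake) {
        condMsgProc.wait_for(lock, std::chrono::milliseconds(100),
                             [] { return fMsgProcWake.load(); });
    }
    fMsgProcWake = false; // the wake request is consumed for one cycle
}
```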
Note that the previous behavior was to only process one message per loop
(except in the case of a bad checksum or invalid header). That was changed in
PR #3180.
The only change here in that regard is that the current node now falls to the
back of the processing queue for the bad checksum/invalid header cases.
Previously addnodes were in competition with outbound connections
for access to the eight outbound slots.
One result of this is that frequently a node with several addnode
configured peers would end up connected to none of them, because
while the addnode loop was in its two minute sleep the automatic
connection logic would fill any free slots with random peers.
This is particularly unwelcome to users trying to maintain links
to specific nodes for fast block relay or similar purposes.
Another result is that a group of nine or more nodes which have
addnode configured towards each other can become partitioned
from the public network.
This commit introduces a new limit of eight connections just for
addnode peers which is not subject to any of the other connection
limitations (including maxconnections).
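A sketch of what the dedicated slots look like, with a minimal stand-in semaphore; the constant and variable names are illustrative:

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>

// Minimal counting semaphore, standing in for the project's CSemaphore.
class CSemaphore
{
    std::mutex m;
    std::condition_variable cv;
    int value;
public:
    explicit CSemaphore(int init) : value(init) {}
    void wait()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return value > 0; });
        --value;
    }
    void post()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            ++value;
        }
        cv.notify_one();
    }
};

static const int MAX_ADDNODE_CONNECTIONS = 8;
std::unique_ptr<CSemaphore> semOutbound; // automatic outbound slots
std::unique_ptr<CSemaphore> semAddnode;  // addnode-only slots

void InitConnectionSemaphores(int nMaxOutbound)
{
    semOutbound.reset(new CSemaphore(nMaxOutbound));
    semAddnode.reset(new CSemaphore(MAX_ADDNODE_CONNECTIONS));
}
// When opening a connection, addnode peers acquire a grant from semAddnode
// rather than semOutbound, so they never compete for the automatic slots.
```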
The choice of eight is sufficient so that under no condition would
a user find themselves connected to fewer addnoded peers than
previously. It is also low enough that users who are confused
about the significance of more connections and have gotten too
copy-and-paste happy will not consume more than twice the slot
usage of a typical user.
Any additional load on the network resulting from this will likely
be offset by a reduction in users applying even more wasteful
workarounds for the prior behavior.
The retry delays are reduced to avoid nodes sitting around without
their added peers up, but are still sufficient to prevent overly
aggressive repeated connections. The reduced delays also make
the system much more responsive to the addnode RPC.
Ban-disconnects are also exempted for peers added via addnode since
the outbound addnode logic ignores bans. Previously it would ban
an addnode peer and then immediately reconnect to it.
A minor change was also made to CSemaphoreGrant so that it is
possible to re-acquire via an object whose grant was moved.
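A sketch of that CSemaphoreGrant behavior, reusing the stand-in CSemaphore from the previous snippet; the key point is that MoveTo() no longer severs the source's link to the semaphore (details are illustrative):

```cpp
class CSemaphoreGrant
{
    CSemaphore* sem = nullptr;
    bool fHaveGrant = false;
public:
    void Acquire()
    {
        if (fHaveGrant) return;
        sem->wait();
        fHaveGrant = true;
    }
    void Release()
    {
        if (!fHaveGrant) return;
        sem->post();
        fHaveGrant = false;
    }
    void MoveTo(CSemaphoreGrant& grant)
    {
        grant.Release();
        grant.sem = sem;
        grant.fHaveGrant = fHaveGrant;
        fHaveGrant = false;
        // Keep 'sem' set (rather than nulling it) so this object can
        // Acquire() again after its grant has been moved away.
    }
    explicit CSemaphoreGrant(CSemaphore& s, bool fDeferred = false) : sem(&s)
    {
        if (!fDeferred) Acquire();
    }
    ~CSemaphoreGrant() { Release(); }
};
```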