6e8c48dc5 Add const to methods that do not modify the object for which it is called (practicalswift)
Previously addnodes were in competition with outbound connections
for access to the eight outbound slots.
One result of this was that a node with several addnode-configured
peers would frequently end up connected to none of them, because
while the addnode loop was in its two-minute sleep the automatic
connection logic would fill any free slots with random peers.
This is particularly unwelcome to users trying to maintain links
to specific nodes for fast block relay or other purposes.
Another result is that a group of nine or more nodes which have
addnode configured towards each other can become partitioned
from the public network.
This commit introduces a new limit of eight connections just for
addnode peers which is not subject to any of the other connection
limitations (including maxconnections).
The choice of eight is sufficient so that under no condition would
a user find themselves connected to fewer addnoded peers than
previously. It is also low enough that users who are confused
about the significance of more connections and have gotten too
copy-and-paste happy will not consume more than twice the slot
usage of a typical user.
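One way to picture the mechanism is the simplified, self-contained
sketch below. The names (CSemaphore, MAX_ADDNODE_CONNECTIONS,
semAddnode, OpenNetworkConnection) are illustrative rather than the
actual net.cpp code: addnode peers draw slots from their own counting
semaphore of size eight, so they never compete with automatic
outbound connections and are not counted against the ordinary limits.

    #include <condition_variable>
    #include <mutex>

    // Simple counting semaphore: wait() takes a slot, post() returns it.
    class CSemaphore
    {
        std::condition_variable condition;
        std::mutex mtx;
        int value;

    public:
        explicit CSemaphore(int init) : value(init) {}

        void wait()
        {
            std::unique_lock<std::mutex> lock(mtx);
            condition.wait(lock, [this] { return value >= 1; });
            --value;
        }

        void post()
        {
            {
                std::lock_guard<std::mutex> lock(mtx);
                ++value;
            }
            condition.notify_one();
        }
    };

    static const int MAX_OUTBOUND_CONNECTIONS = 8; // ordinary automatic outbound slots
    static const int MAX_ADDNODE_CONNECTIONS = 8;  // the new, dedicated addnode slots

    CSemaphore semOutbound(MAX_OUTBOUND_CONNECTIONS);
    CSemaphore semAddnode(MAX_ADDNODE_CONNECTIONS);

    void OpenNetworkConnection(bool fAddnode /* , address, etc. */)
    {
        // Block on the pool matching the connection type: a full outbound
        // pool can no longer starve addnode connections, and vice versa.
        CSemaphore& pool = fAddnode ? semAddnode : semOutbound;
        pool.wait();
        // ... perform the connection; when it closes, return the slot:
        pool.post();
    }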
Any additional load on the network resulting from this will likely
be offset by a reduction in users applying even more wasteful
workarounds for the prior behavior.
The retry delays are reduced to avoid nodes sitting around without
their added peers up, but are still sufficient to prevent overly
aggressive repeated connections. The reduced delays also make
the system much more responsive to the addnode RPC.
Ban-disconnects are also exempted for peers added via addnode,
since the outbound addnode logic ignores bans. Previously it would
ban an addnode peer and then immediately reconnect to it.
A minor change was also made to CSemaphoreGrant so that it is
possible to re-acquire via an object whose grant was moved.
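For illustration, a minimal grant class for the CSemaphore sketched
above (again with simplified names; not the exact Bitcoin Core code).
The relevant detail is that MoveTo() clears the grant flag but leaves
the semaphore pointer in place, so the moved-from object can simply
call Acquire() again to obtain a fresh slot.

    class CSemaphoreGrant
    {
        CSemaphore* sem;
        bool fHaveGrant;

    public:
        CSemaphoreGrant() : sem(NULL), fHaveGrant(false) {}
        explicit CSemaphoreGrant(CSemaphore& s) : sem(&s), fHaveGrant(false) { Acquire(); }
        ~CSemaphoreGrant() { Release(); }

        void Acquire()
        {
            if (fHaveGrant || sem == NULL)
                return;
            sem->wait();
            fHaveGrant = true;
        }

        void Release()
        {
            if (!fHaveGrant)
                return;
            sem->post();
            fHaveGrant = false;
        }

        void MoveTo(CSemaphoreGrant& grant)
        {
            grant.Release();
            grant.sem = sem;
            grant.fHaveGrant = fHaveGrant;
            fHaveGrant = false; // keep 'sem' set, so this object can Acquire() again
        }
    };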
The lockorder potential-deadlock detection works by remembering,
for each lock A that is acquired while another lock B is held, the
pair (A,B), and triggering a warning when (B,A) already exists in
the table.
A and B in the above text are represented by pointers to the
CCriticalSection objects that are acquired. This does mean, however,
that we need to clean up the table entries that refer to any critical
section which is destroyed, as its memory address can potentially be
reused for another, unrelated lock in the future.
Implement this cleanup by remembering the pairs not only in the
forward direction, but also in the backward direction. This allows for
fast iteration over all pairs that use a deleted CCriticalSection in
either the first or the second position.
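A minimal sketch of that bookkeeping follows. The names (lockorders,
invlockorders, RegisterPair, DeleteLock) are illustrative and the
bookkeeping payload is elided: the forward table detects the
(B,A)-after-(A,B) conflict, and the backward table makes it cheap to
find every pair in which a destroyed lock appears in second position.

    #include <map>
    #include <set>
    #include <utility>

    typedef std::pair<void*, void*> LockPair;

    std::map<LockPair, int> lockorders; // forward: (A,B) -> bookkeeping (call sites elided)
    std::set<LockPair> invlockorders;   // backward: (B,A), kept only for cleanup

    void PotentialDeadlockDetected(const LockPair&) { /* report both stacks */ }

    // Called when lock A is acquired while lock B is already held.
    void RegisterPair(void* a, void* b)
    {
        if (lockorders.count(LockPair(b, a)))
            PotentialDeadlockDetected(LockPair(a, b));
        lockorders[LockPair(a, b)] = 0;
        invlockorders.insert(LockPair(b, a));
    }

    // Called from the critical section's destructor: its address may be reused.
    void DeleteLock(void* cs)
    {
        // Erase every forward pair (cs, X) together with its mirror (X, cs).
        std::map<LockPair, int>::iterator it =
            lockorders.lower_bound(LockPair(cs, (void*)0));
        while (it != lockorders.end() && it->first.first == cs) {
            invlockorders.erase(LockPair(it->first.second, it->first.first));
            lockorders.erase(it++);
        }
        // The backward table gives direct access to every pair (X, cs) as well.
        std::set<LockPair>::iterator invit =
            invlockorders.lower_bound(LockPair(cs, (void*)0));
        while (invit != invlockorders.end() && invit->first == cs) {
            lockorders.erase(LockPair(invit->second, invit->first));
            invlockorders.erase(invit++);
        }
    }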
This allows us to use function/variable/class attributes to specify locking
requisites, allowing problems to be detected during static analysis.
This works perfectly with newer Clang versions (tested with 3.3-3.7). For older
versions (tested 3.2), it compiles fine but spews lots of false-positives.
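As a rough illustration of what these attributes look like (the macro,
class, and variable names below are illustrative, not necessarily the
ones introduced by this change): compiling with clang -Wthread-safety
flags any access to an annotated variable without the required lock.

    #include <mutex>

    #define LOCKABLE                      __attribute__((lockable))
    #define GUARDED_BY(x)                 __attribute__((guarded_by(x)))
    #define EXCLUSIVE_LOCK_FUNCTION(...)  __attribute__((exclusive_lock_function(__VA_ARGS__)))
    #define UNLOCK_FUNCTION(...)          __attribute__((unlock_function(__VA_ARGS__)))
    #define EXCLUSIVE_LOCKS_REQUIRED(...) __attribute__((exclusive_locks_required(__VA_ARGS__)))

    // A trivially annotated mutex wrapper so the analyzer knows its
    // lock/unlock semantics.
    class LOCKABLE Mutex
    {
        std::mutex m;
    public:
        void lock() EXCLUSIVE_LOCK_FUNCTION() { m.lock(); }
        void unlock() UNLOCK_FUNCTION() { m.unlock(); }
    };

    Mutex cs_counter;
    int nCounter GUARDED_BY(cs_counter) = 0;

    // Callers must already hold cs_counter; violations are compile-time
    // warnings, not runtime checks.
    void BumpCounter() EXCLUSIVE_LOCKS_REQUIRED(cs_counter)
    {
        ++nCounter;
    }

    void Caller()
    {
        cs_counter.lock();
        BumpCounter();    // fine: cs_counter is held
        cs_counter.unlock();
        // BumpCounter(); // would warn under -Wthread-safety: lock not held
    }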
Rebased by @laanwj:
- update for RPC methods added since 84d13ee: setmocktime,
invalidateblock, reconsiderblock. Only the first, setmocktime,
required a change; the other two are thread safe.
- ensure consistent usage in header files
- also add a blank line after the copyright header where missing
- also remove orphan new-lines at the end of some files
Use misc methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
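For example (variable names hypothetical), the fixed-width types and
format macros now come straight from the standard headers rather than
from project-local typedefs and PRI64* macros:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main()
    {
        int64_t nAmount = -1234567890123LL; // instead of a project-local "int64" typedef
        uint64_t nServices = 0x25ULL;

        // Old style relied on custom macros: printf("%" PRI64d "\n", nAmount);
        // New style uses the standard <inttypes.h> macros:
        printf("amount=%" PRId64 " services=%" PRIx64 "\n", nAmount, nServices);
        return 0;
    }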
o Remove unused Leave and GetLock functions.
o Make Enter and TryEnter private.
o Simplify Enter and TryEnter.
boost::unique_lock doesn't really know whether the
mutex it wraps is locked or not when the defer_lock
option is used.
The boost::recursive_mutex does not expose this
information, so unique_lock can only infer this
knowledge. When taking the lock is deferred, it
simply assumes that the lock is not taken.
boost::unique_lock has the following definition:

    unique_lock(Mutex& m_, defer_lock_t):
        m(&m_), is_locked(false)
    {}

    bool owns_lock() const
    {
        return is_locked;
    }
Thus it is a mistake to check owns_lock() in Enter
and TryEnter: it will always return false there.
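A minimal sketch of the resulting shape (simplified; the real class is
templated and also does the deadlock-detection bookkeeping): because
the wrapped unique_lock is always constructed with defer_lock and is
never locked anywhere else, Enter can lock unconditionally, and
owns_lock() is only meaningful in TryEnter after the try_lock() call.

    #include <boost/thread/locks.hpp>
    #include <boost/thread/recursive_mutex.hpp>

    class CMutexLock
    {
        boost::unique_lock<boost::recursive_mutex> lock;

        void Enter()
        {
            // No owns_lock() test here: with defer_lock it would always be false.
            lock.lock();
        }

        bool TryEnter()
        {
            lock.try_lock();
            return lock.owns_lock(); // meaningful now: reflects the try_lock() result
        }

    public:
        CMutexLock(boost::recursive_mutex& mutexIn, bool fTry = false)
            : lock(mutexIn, boost::defer_lock)
        {
            if (fTry)
                TryEnter();
            else
                Enter();
        }
        // The unique_lock destructor releases the mutex if it is held.
    };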