Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer -- in the worst case, when the bloom filter
is as full as it gets, the chance of missing an address is 1 in 1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help with heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established), and no further per-item memory allocations happen as
addresses are remembered.
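For illustration, here is a minimal, self-contained sketch of the rolling idea
(not the actual CRollingBloomFilter implementation; sizes and hash counts are
arbitrary): two generations of a plain Bloom filter share a fixed allocation,
and the older generation is wiped once the newer one fills up.

```cpp
// Minimal sketch of a two-generation "rolling" Bloom filter (illustrative
// only): memory stays constant because the bit arrays are fixed-size, and
// recently inserted items are still remembered when the older generation
// is wiped.
#include <bitset>
#include <cstdint>
#include <functional>
#include <string>

class RollingBloomSketch {
    static const size_t BITS = 1 << 16;          // fixed memory footprint
    static const size_t GENERATION_SIZE = 5000;  // items per generation (arbitrary)

    std::bitset<BITS> gen[2];  // two generations of filter bits
    size_t inserted = 0;       // items inserted into the current generation
    int current = 0;           // which generation receives new items

    // Derive a bit position from the key using a seeded hash.
    size_t Bit(const std::string& key, uint32_t seed) const {
        return std::hash<std::string>{}(std::to_string(seed) + key) % BITS;
    }

public:
    void insert(const std::string& key) {
        if (inserted >= GENERATION_SIZE) {
            current ^= 1;          // roll over: the oldest generation...
            gen[current].reset();  // ...is forgotten and reused
            inserted = 0;
        }
        for (uint32_t h = 0; h < 5; ++h) gen[current].set(Bit(key, h));
        ++inserted;
    }

    bool contains(const std::string& key) const {
        // False positives are possible; items older than two generations
        // may be forgotten (the accepted trade-off described above).
        for (int g = 0; g < 2; ++g) {
            bool hit = true;
            for (uint32_t h = 0; h < 5; ++h)
                if (!gen[g].test(Bit(key, h))) { hit = false; break; }
            if (hit) return true;
        }
        return false;
    }
};
```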
I plan on testing by restarting a full node with an empty peers.dat,
running for a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
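As a rough sketch of the selection policy described above (hypothetical helper
and field names, not the actual pruning code):

```cpp
// Hypothetical sketch of the pruning policy (not the actual implementation):
// delete the oldest block files until total usage is below the -prune target,
// never touching files needed for the last MIN_BLOCKS_TO_KEEP blocks, and do
// nothing until the chain is longer than nPruneAfterHeight.
#include <cstdint>
#include <set>
#include <vector>

struct BlockFileStats {
    int fileNumber;          // index of the blk?????.dat file
    uint64_t bytes;          // block + undo data stored in this file
    int highestBlockHeight;  // height of the newest block in the file
};

static const int MIN_BLOCKS_TO_KEEP = 288;

std::set<int> SelectFilesToPrune(const std::vector<BlockFileStats>& files,  // oldest first
                                 uint64_t pruneTargetBytes,
                                 int chainTipHeight,
                                 int pruneAfterHeight) {
    std::set<int> toPrune;
    if (chainTipHeight < pruneAfterHeight) return toPrune;  // too early to prune

    uint64_t totalBytes = 0;
    for (const auto& f : files) totalBytes += f.bytes;

    for (const auto& f : files) {
        if (totalBytes <= pruneTargetBytes) break;
        // Keep any file that still holds one of the most recent 288 blocks.
        if (f.highestBlockHeight >= chainTipHeight - MIN_BLOCKS_TO_KEEP) continue;
        toPrune.insert(f.fileNumber);
        totalBytes -= f.bytes;
    }
    return toPrune;
}
```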
Compare the block download timeout to what the timeout would be if calculated
based on current time and current value of nQueuedValidatedHeaders, but
ignoring other in-flight blocks from the same peer. If the calculation based on
present conditions is shorter, then set that to be the time after which we
disconnect the peer for not delivering this block.
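A hedged sketch of that comparison, with illustrative names and parameters
rather than the exact ones used in main.cpp:

```cpp
// Illustrative names and parameters; the real calculation lives in main.cpp.
#include <algorithm>
#include <cstdint>

// Deadline for a block if it were requested at 'now', given how many blocks
// with validated headers are queued ahead of it from the same peer.
int64_t BlockDownloadDeadline(int64_t now, int queuedValidatedHeaders,
                              int64_t baseTimeout, int64_t perBlockTimeout) {
    return now + baseTimeout + perBlockTimeout * queuedValidatedHeaders;
}

// Shorten the existing deadline if a deadline computed from present
// conditions (ignoring other in-flight blocks from this peer) is earlier.
void MaybeShortenDeadline(int64_t& downloadDeadline, int64_t now,
                          int queuedValidatedHeaders,
                          int64_t baseTimeout, int64_t perBlockTimeout) {
    int64_t ifRequestedNow = BlockDownloadDeadline(now, queuedValidatedHeaders,
                                                   baseTimeout, perBlockTimeout);
    downloadDeadline = std::min(downloadDeadline, ifRequestedNow);
}
```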
Some tests in CheckBlockIndex require chainActive.Tip(), but when reindexing, chainActive has not been set on the first call to CheckBlockIndex.
reindex.py starts a node, mines 3 blocks, stops, and reindexes with CheckBlockIndex enabled.
SendMessages will now call getheaders on both inbound and outbound peers,
once the headers chain is close to synced. It will also try downloading
blocks from inbound peers once we're out of initial block download (so
inbound peers will participate in parallel block fetching for the last day
or two of blocks being downloaded).
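A simplified sketch of the two policy changes, using hypothetical names and an
assumed "within a day of now" notion of the headers chain being close to
synced (the real logic is in SendMessages and the block-fetch code):

```cpp
// Hypothetical predicates; the real logic is in SendMessages / block fetching.
#include <cstdint>

struct PeerInfo {
    bool inbound;      // true if the remote side initiated the connection
    bool syncStarted;  // have we already begun header sync with this peer?
};

// getheaders is now sent regardless of connection direction, once our own
// headers chain is close to synced (here: best header within a day of now).
bool ShouldRequestHeaders(const PeerInfo& peer, int64_t bestHeaderTime, int64_t now) {
    const int64_t ONE_DAY = 24 * 60 * 60;
    bool headersNearlySynced = bestHeaderTime > now - ONE_DAY;
    return !peer.syncStarted && headersNearlySynced;  // note: no inbound check anymore
}

// Inbound peers join parallel block fetching once initial block download ends.
bool ShouldFetchBlocksFrom(const PeerInfo& peer, bool initialBlockDownload) {
    return !peer.inbound || !initialBlockDownload;
}
```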
This adds more tests to CheckBlockIndex:
- HAVE_DATA is true iff nTx > 0
- BLOCK_VALID_TRANSACTIONS is true iff nTx > 0
- BLOCK_VALID_TRANSACTIONS is true for a block and all parents iff
nChainTx > 0
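Expressed as simplified assertions over a stand-in index entry (illustrative
types, not the real CBlockIndex; the actual checks walk the whole block index):

```cpp
// Stand-in types; the real checks iterate the whole block index in main.cpp.
#include <cassert>
#include <cstdint>

enum : uint32_t { BLOCK_HAVE_DATA = 1, BLOCK_VALID_TRANSACTIONS = 2 };

struct IndexEntry {
    uint32_t status;        // data / validity flags
    unsigned int nTx;       // transactions in this block (0 = not known yet)
    uint64_t nChainTx;      // cumulative transaction count, 0 if any ancestor unknown
    const IndexEntry* prev; // parent entry, nullptr for genesis
};

void CheckEntry(const IndexEntry& e) {
    // HAVE_DATA is true iff nTx > 0.
    assert(((e.status & BLOCK_HAVE_DATA) != 0) == (e.nTx > 0));
    // BLOCK_VALID_TRANSACTIONS is true iff nTx > 0.
    assert(((e.status & BLOCK_VALID_TRANSACTIONS) != 0) == (e.nTx > 0));
    // nChainTx > 0 iff this block and all of its ancestors have VALID_TRANSACTIONS.
    bool selfAndParentsValid = true;
    for (const IndexEntry* p = &e; p != nullptr; p = p->prev)
        if ((p->status & BLOCK_VALID_TRANSACTIONS) == 0) { selfAndParentsValid = false; break; }
    assert((e.nChainTx > 0) == selfAndParentsValid);
}
```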
This adds a -checkblockindex (defaulting to true for regtest), which occasionally
does a full consistency check for mapBlockIndex, setBlockIndexCandidates, chainActive, and
mapBlocksUnlinked.
Adds a regression test for the wallet's ResendWalletTransactions function, which uses a new, hidden RPC command "resendwallettransactions."
I refactored main's Broadcast signal so it is passed the best-block time, which let me remove a global variable shared between main.cpp and the wallet (nTimeBestReceived).
I also manually tested the "rebroadcast unconfirmed every half hour or so" functionality by:
1. Running bitcoind -connect=0.0.0.0:8333
2. Creating a couple of send-to-self transactions
3. Connecting to a peer using -addnode
4. Waiting a while, monitoring debug.log, until I saw:
```2015-03-23 18:48:10 ResendWalletTransactions: rebroadcast 2 unconfirmed transactions```
One last change: don't bother putting ResendWalletTransactions messages in debug.log unless unconfirmed transactions were actually rebroadcast.
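For reference, a rough, self-contained sketch of the rebroadcast policy the
test exercises (the names, the 30-minute interval handling, and the
"created before the best block" criterion are illustrative assumptions, not
the wallet's exact code):

```cpp
// Rough, self-contained sketch; interval and selection criterion are assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>

struct WalletTx {
    int64_t timeCreated;  // when the wallet created the transaction
    bool confirmed;       // already in a block?
};

int ResendIfNeeded(const std::vector<WalletTx>& txs, int64_t bestBlockTime,
                   int64_t& nextResendTime, int64_t now) {
    if (now < nextResendTime) return 0;  // only attempt every ~30 minutes
    nextResendTime = now + 30 * 60;
    int relayed = 0;
    for (const WalletTx& tx : txs) {
        // Rebroadcast only transactions that are still unconfirmed and were
        // created before the best block we have seen (newer ones may simply
        // not have had a chance to confirm yet).
        if (!tx.confirmed && tx.timeCreated < bestBlockTime) ++relayed;
    }
    if (relayed > 0)  // keep debug.log quiet when nothing was rebroadcast
        std::printf("ResendWalletTransactions: rebroadcast %d unconfirmed transactions\n",
                    relayed);
    return relayed;
}
```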
When re-indexing, there are a few cases where garbage data may be skipped in
the block files. In these cases the indices are correctly written to the index
db; however, the pointer to the next write position in the current block
file is calculated by adding up the sizes of the valid blocks found.
As a result, when the re-index is finished, the index db is correct for all
existing blocks, but the next block will be written to an incorrect offset,
likely overwriting existing blocks.
Rather than using the sum of all valid blocks to determine the next write
position, use the end of the last block written to the file. Don't assume that
the current block is the last one in the file, since they may be read
out-of-order.
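A small sketch of the fix (hypothetical structures): track the next write
offset as the furthest end of any block seen in the file, rather than as a
running sum of block sizes.

```cpp
// Hypothetical structures; the point is the max() instead of a running sum.
#include <algorithm>
#include <cstdint>

struct BlockFilePos {
    uint64_t offset;  // where the block starts within the file
    uint64_t size;    // serialized size of the block
};

// Old (buggy) approach: nextWritePos += size for every valid block found,
// which undercounts whenever garbage between blocks was skipped.
// New approach: the next write position is the end of the furthest block
// actually seen, regardless of the order in which blocks were read.
void UpdateNextWritePos(uint64_t& nextWritePos, const BlockFilePos& block) {
    nextWritePos = std::max(nextWritePos, block.offset + block.size);
}
```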
Normally Bitcoin Core does not display any network-originated strings without
sanitizing or hex-encoding them. This wasn't done for strCommand in many places.
This could be used to play havoc with a terminal displaying the logs,
especially with -printtoconsole in use.
Thanks to Evil-Knievel for reporting this issue.
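A minimal example of the kind of sanitization involved (illustrative only;
Bitcoin Core uses its own helper for this):

```cpp
// Illustrative only; Bitcoin Core has its own string-sanitizing helper.
#include <string>

// Keep only characters from a conservative printable set so a hostile command
// string cannot inject terminal escape sequences into the log output.
std::string SanitizeForLog(const std::string& in) {
    static const std::string safeChars =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789 .,;-_/:?@()";
    std::string out;
    out.reserve(in.size());
    for (char c : in)
        if (safeChars.find(c) != std::string::npos) out.push_back(c);
    return out;
}
```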
The only time a client sends a "getaddr" message is when it
establishes an outbound connection (see ProcessMessage() in
src/main.cpp). Another Bitcoin client is therefore expected to receive a
"getaddr" message only on an inbound connection. Ignoring "getaddr"
requests on outbound connections resolves a potential privacy issue
(and, as noted, such requests normally do not happen anyway).
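Sketched as a simple predicate (hypothetical types; the real check sits in the
message handler):

```cpp
// Hypothetical types; the real check lives in the message handler.
#include <string>

struct Peer {
    bool inbound;  // true if the remote side connected to us
};

bool ShouldAnswerGetAddr(const Peer& peer, const std::string& command) {
    if (command != "getaddr") return true;  // not a getaddr, nothing to filter
    // Only inbound peers have a legitimate reason to ask for addresses;
    // a getaddr arriving on one of our outbound connections is ignored.
    return peer.inbound;
}
```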
Instead, create a separate function that applies the undo operation of a
CTxInUndo object onto a CCoinsViewCache. This method is used from
DisconnectBlock.
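A very simplified sketch of the shape of that refactor, with stand-in types
rather than the real CTxInUndo/CCoinsViewCache:

```cpp
// Stand-in types, not the real CTxInUndo/CCoinsViewCache.
#include <cstdint>
#include <map>

struct Coin { int64_t value; int height; bool coinbase; };  // restored output data
using CoinsView = std::map<uint64_t, Coin>;                 // outpoint id -> coin

struct TxInUndo { Coin coin; };  // what is needed to undo spending one input

// Put the previously spent coin back into the view. Returns false if the
// output unexpectedly already exists, which would indicate inconsistency.
bool ApplyTxInUndoSketch(const TxInUndo& undo, CoinsView& view, uint64_t outpointId) {
    return view.emplace(outpointId, undo.coin).second;
}
```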
Note that this will also require translation changes in Transifex for the key
"A fee higher than %1 is considered an insanely high fee." which is now
"A fee higher than %1 is considered an absurdly high fee."
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
This harmonizes the block fetch timeout with the existing ping timeout
and eliminates a guaranteed eventual failure from congestion collapse
for a network operating right at its limit.
It's unlikely that we wouldn't suffer other failures if we were really
anywhere near the network's limit, and a complete avoidance of congestion
collapse risk requires (I think) an exponential back-off. So this isn't
a major concern, but I think it's also useful for reducing the complexity
of understanding our timeouts.
This will disconnect peers that do not transfer a block in 10 minutes, plus
5 minutes for every previously queued block with validated headers
(accommodating downstream bandwidth down to a few kilobytes per second - below
that the node would have trouble staying synchronized anyway).
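In code form, with the constants taken from the description above:

```cpp
// Constants taken from the description above.
#include <cstdint>

int64_t BlockTransferTimeoutSeconds(int previouslyQueuedValidatedBlocks) {
    const int64_t BASE = 10 * 60;       // 10 minutes for the block itself
    const int64_t PER_QUEUED = 5 * 60;  // plus 5 minutes per queued block
    return BASE + PER_QUEUED * previouslyQueuedValidatedBlocks;
}
// Example: with 2 blocks already queued from the peer, it is disconnected if
// the block has not arrived within 10 + 2 * 5 = 20 minutes.
```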
856e862 namespace: drop most boost namespaces and a few header cleanups (Cory Fields)
9b1ab86 namespace: drop boost::assign altogether here (Cory Fields)
a324199 namespace: remove boost namespace pollution (Cory Fields)
If the uint256() constructor takes a string, uint256(0) will become
dangerous once uint256 no longer takes integers: the literal 0 will go
through std::string(const char*), constructing a string from a NULL
pointer, and the explicit keyword is no help.
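A small stand-alone illustration of the hazard (stand-in class, not uint256
itself):

```cpp
// Stand-in class to show the failure mode; not uint256 itself.
#include <string>

struct Hash256 {
    explicit Hash256(const std::string& hex) : hex_(hex) {}
    std::string hex_;
};

int main() {
    Hash256 ok("00ff");  // fine: constructed from a string literal
    // Hash256 bad(0);   // would compile: 0 -> (const char*)nullptr ->
    //                   // std::string(nullptr), undefined behaviour; the
    //                   // explicit keyword does not stop this conversion.
    (void)ok;
    return 0;
}
```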
Previous behavior, with IsFinalTx() being an IsStandard() rule, was rather
confusing and interfered with testing of protocols that depended on
nLockTime.
Now that the splash screen can be closed, it is possible to request
shutdown during the lengthy verifyDB method (which takes about a minute
on my machine). This change allows us to shut down much sooner.
Github-Pull: #5557
1b178a7 Bugfix: ConnectBlock: In case the genesis block gets in with fJustCheck, behave correctly (Luke Dashjr)
228d238 Make CCoinsViewCache's copy constructor private (Luke Dashjr)
Don't allow immediate inv-driven block downloads if
a peer already has MAX_BLOCKS_IN_TRANSIT_PER_PEER
active downloads. This prevents bogus inv spam from
blowing up the block-transfer tracking data structures.
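Sketched as a predicate (the constant's value is assumed here for
illustration):

```cpp
// The constant's value is assumed here for illustration.
static const int MAX_BLOCKS_IN_TRANSIT_PER_PEER = 16;

// Only honour an inv-triggered block request if the peer still has room
// under the per-peer in-flight limit; excess invs are simply not acted on.
bool CanRequestBlockFromInv(int blocksInFlightFromPeer) {
    return blocksInFlightFromPeer < MAX_BLOCKS_IN_TRANSIT_PER_PEER;
}
```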
This still leaves transactions in the mempool that are potentially
invalid if a reorg has made a coinbase output they spend immature again,
but at least they're not missing inputs entirely.