The partition checking code was using chainActive timestamps
to detect partitioning; with headers-first syncing, it should use
(and with this pull request, does use) pindexBestHeader timestamps.
Fixes issue #6251
Previously, due to an off-by-one error, the wallet ignored
nLockTime-by-height transactions that would be valid in the next block,
even though they are accepted into the mempool. Such transactions
wouldn't show up until confirmed, nor would they be included in the
unconfirmed balance. Similar to the mempool behavior fix in 665bdd3b,
the wallet code was calling IsFinalTx() directly without taking into
account that doing so tells you whether the transaction could have
been mined in the *current* block, rather than the next block.
To fix this, we strip IsFinalTx() of non-consensus-critical
functionality, removing the default arguments, and add CheckFinalTx() to
check if a transaction will be final in the next block.
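A minimal sketch of the new helper, assuming the chain state accessors available in main.cpp at the time (the exact signature in the codebase may differ):
```
// CheckFinalTx() evaluates finality for the *next* block, fixing the
// off-by-one; IsFinalTx() keeps only the consensus-critical check
// against an explicitly supplied height and time.
bool CheckFinalTx(const CTransaction &tx)
{
    AssertLockHeld(cs_main);
    // The next block's height is one past the current tip.
    return IsFinalTx(tx, chainActive.Height() + 1, GetAdjustedTime());
}
```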
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
```
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
```
The -debug=partitioncheck debug.log messages look like:
```
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
```
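For reference, the alert thresholds can be sanity-checked with a quick Poisson computation. This is a standalone sketch (not code from the PR), using the expected 24 blocks per 4-hour window:
```
#include <cmath>
#include <cstdio>

// Poisson pmf, computed in log space for numerical stability.
double PoissonPmf(int k, double lambda) {
    return std::exp(-lambda + k * std::log(lambda) - std::lgamma(k + 1.0));
}

// P(X <= k) for X ~ Poisson(lambda).
double PoissonCdf(int k, double lambda) {
    double p = 0.0;
    for (int i = 0; i <= k; ++i) p += PoissonPmf(i, lambda);
    return p;
}

int main() {
    const double lambda = 24.0;  // expected blocks per 4 hours
    std::printf("P(<=5 blocks)  = %g\n", PoissonCdf(5, lambda));
    std::printf("P(>=48 blocks) = %g\n", 1.0 - PoissonCdf(47, lambda));
}
```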
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep the total space used by those files below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
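For example, pruning at the minimum target can be enabled by starting the node with (illustrative invocation):
```
bitcoind -prune=550
```
Leaving -prune at its default of 0 keeps pruning disabled.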
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
This adds a -checkblockindex option (defaulting to true for regtest), which occasionally
does a full consistency check for mapBlockIndex, setBlockIndexCandidates, chainActive, and
mapBlocksUnlinked.
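Outside of regtest the check is off by default, so it can be enabled explicitly, e.g.:
```
bitcoind -checkblockindex=1
```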
Adds a regression test for the wallet's ResendWalletTransactions function, which uses a new, hidden RPC command "resendwallettransactions."
I refactored main's Broadcast signal so it is passed the best-block time, which let me remove a global variable shared between main.cpp and the wallet (nTimeBestReceived).
I also manually tested the "rebroadcast unconfirmed every half hour or so" functionality by:
1. Running bitcoind -connect=0.0.0.0:8333
2. Creating a couple of send-to-self transactions
3. Connecting to a peer using -addnode
4. Waiting a while, monitoring debug.log, until I saw:
```2015-03-23 18:48:10 ResendWalletTransactions: rebroadcast 2 unconfirmed transactions```
One last change: don't bother putting ResendWalletTransactions messages in debug.log unless unconfirmed transactions were actually rebroadcast.
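A rough sketch of the refactored signal (names taken from the description above; the actual declaration may differ):
```
#include <boost/signals2/signal.hpp>
#include <stdint.h>

struct CMainSignals {
    // The best-block time now travels with the signal, replacing the
    // nTimeBestReceived global previously shared between main.cpp and
    // the wallet.
    boost::signals2::signal<void (int64_t nBestBlockTime)> Broadcast;
};
```
The wallet's handler can then decide locally whether enough time has passed since the best block to rebroadcast its unconfirmed transactions.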
Note that this will also require translation changes in Transifex for the key
"A fee higher than %1 is considered an insanely high fee." which is now
"A fee higher than %1 is considered an absurdly high fee."
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
There are 3 pieces of data that are maintained on disk. The actual block
and undo data, the block index (which can refer to positions on disk),
and the chainstate (which refers to the best block hash).
Earlier, there was no guarantee that blocks were written to disk before
block index entries referring to them were written. This commit introduces
dirty flags for block index data, and delays writing entries until the actual
block data is flushed.
With this stricter ordering in writes, it is now safe to not always flush
after every block, so there is no need for the IsInitialBlockDownload()
check there - instead we just write whenever enough time has passed or
the cache size grows too large. Also updating the wallet's best known block
is delayed until this is done, otherwise the wallet may end up referring to an
unknown block.
In addition, only do a write inside the block processing loop if necessary
(because of cache size exceeded). Otherwise, move the writing to a point
after processing is done, after relaying.
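In rough pseudocode, the write ordering described above looks like this (an illustrative sketch, not the actual flushing code; all functions are hypothetical stand-ins for the steps involved):
```
#include <cstdio>

static void FlushBlockFile()              { std::puts("1. block/undo data"); }
static void WriteDirtyBlockIndexEntries() { std::puts("2. dirty block index"); }
static void WriteChainState()             { std::puts("3. chainstate"); }
static void NotifyWalletBestBlock()       { std::puts("4. wallet best block"); }

void FlushSketch(bool fCacheFull, bool fPeriodicWriteDue)
{
    // No more unconditional per-block flush: only write when the cache
    // grows too large or enough time has passed.
    if (!fCacheFull && !fPeriodicWriteDue) return;

    FlushBlockFile();              // block data before index entries...
    WriteDirtyBlockIndexEntries(); // ...so the index never dangles,
    WriteChainState();             // ...and the chainstate last.
    NotifyWalletBestBlock();       // only now may the wallet learn of it
}
```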
1bea2bb Rename ProcessBlock to ProcessNewBlock to indicate change of behaviour, and document it (Luke Dashjr)
d29a291 Rename RPC_TRANSACTION_* errors to RPC_VERIFY_* and use RPC_VERIFY_ERROR for submitblock (Luke Dashjr)
f877aaa Bugfix: submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock, now that it no longer does the full block validity check (Luke Dashjr)
24e8896 Add CValidationInterface::BlockChecked notification (Luke Dashjr)
85c579e script: add a slew of includes all around and drop includes from script.h (Cory Fields)
db8eb54 script: move ToString and ValueString out of the header (Cory Fields)
e9ca428 script: add ToByteVector() for converting anything with begin/end (Cory Fields)
066e2a1 script: move CScriptID to standard.h and add a ctor for creating them from CScripts (Cory Fields)
Many changes:
* Do not use 'getblocks', but 'getheaders', and use it to build a headers tree.
* Blocks are fetched in parallel from all available outbound peers, using a
limited moving window. When one peer stalls the movement of the window, it is
disconnected (see the sketch after this list).
* No more orphan blocks. At all. We only ever request a block for which we have
verified the headers, and store it to disk immediately. This means that a
disk-fill attack would require PoW.
* Require protocol version 31800 for every peer (released in December 2010).
* No more syncnode (we sync from everyone we can, though limited to 1 during
initial *headers* sync).
* Introduce some extra named constants, comments and asserts.
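To picture the moving window, here is a sketch of the fetch rule, with hypothetical names and a window constant assumed for illustration:
```
#include <cstdio>

static const int BLOCK_DOWNLOAD_WINDOW = 1024; // assumed window size

// A block may only be requested while its height lies within the window
// starting at the first block we have not yet stored; a peer that sits
// on the window's lower edge keeps the whole window from advancing.
bool CanRequestBlock(int nHeight, int nFirstMissingHeight)
{
    return nHeight >= nFirstMissingHeight &&
           nHeight <  nFirstMissingHeight + BLOCK_DOWNLOAD_WINDOW;
}

int main()
{
    const int nFirstMissing = 100000;
    std::printf("height 100500 fetchable: %d\n", CanRequestBlock(100500, nFirstMissing));
    std::printf("height 101500 fetchable: %d\n", CanRequestBlock(101500, nFirstMissing));
}
```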
There is only one message passed to AbortNode() that makes sense to
translate for the user specifically: "Disk space is low". For the others,
show a generic message and refer the user to debug.log for details.
This reduces the number of confusing jargon messages that need translation.
There is no reason to store thousands of orphan transactions;
normally an orphan's parents will either be broadcast or
mined reasonably quickly.
This pull drops the maximum number of orphans from 10,000 down
to 100, and adds a command-line option (-maxorphantx, analogous to
-maxorphanblocks) to override the default.
Thanks to Pieter Wuille for most of the work on this commit.
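A sketch of the resulting cap (hypothetical names and types; the actual eviction helper in main.cpp evicts random entries, which this mimics):
```
#include <cstdlib>
#include <iterator>
#include <map>
#include <vector>

// Hypothetical stand-in for txid -> serialized orphan transaction.
std::map<unsigned long long, std::vector<unsigned char> > mapOrphanSketch;

// Evict random orphans until the pool is at or below the cap.
unsigned int LimitOrphansSketch(unsigned int nMaxOrphans)
{
    unsigned int nEvicted = 0;
    while (mapOrphanSketch.size() > nMaxOrphans) {
        std::map<unsigned long long, std::vector<unsigned char> >::iterator it =
            mapOrphanSketch.begin();
        std::advance(it, std::rand() % mapOrphanSketch.size());
        mapOrphanSketch.erase(it);
        ++nEvicted;
    }
    return nEvicted;
}
```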
I did not fixup the overhaul commit, because a rebase conflicted
with "remove fields of ser_streamplaceholder".
I prefer not to risk making a mistake while resolving it.
The implementation of each class's serialization/deserialization is no longer
passed within a macro. The implementation now lies within a template of the form:
```
template <typename T, typename Stream, typename Operation>
inline static size_t SerializationOp(T thisPtr, Stream& s, Operation ser_action, int nType, int nVersion) {
    size_t nSerSize = 0;
    /* CODE */
    return nSerSize;
}
```
In cases where the code path should depend on whether we are just deserializing
(the old fGetSize, fWrite, fRead flags), an additional clause can be used:
```
bool fRead = boost::is_same<Operation, CSerActionUnserialize>();
```
The IMPLEMENT_SERIALIZE macro will now be a freestanding clause added within the
class body (similar to Qt's Q_OBJECT) to implement GetSerializeSize,
Serialize and Unserialize. These are now wrappers around
the "SerializationOp" template.
- ensures consistent usage in header files
- also adds a blank line after the copyright header where missing
- also removes orphan newlines at the end of some files