The partition checking code was using chainActive timestamps
to detect partitioning; with headers-first syncing, it should use
(and with this pull request, does use) pindexBestHeader timestamps.
Fixes issue #6251
Previously, due to an off-by-one error, the wallet ignored
nLockTime-by-height transactions that would be valid in the next block,
even though they are accepted into the mempool. The transactions
wouldn't show up until confirmed, nor would they be included in the
unconfirmed balance. Similar to the mempool behavior fix in 665bdd3b,
the wallet code was calling IsFinalTx() directly without taking into
account the fact that doing so tells you if the transaction could have
been mined in the *current* block, rather than the next block.
To fix this we strip IsFinalTx() of non-consensus-critical
functionality, removing the default arguments, and add CheckFinalTx() to
check if a transaction will be final in the next block.
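A minimal sketch of the distinction, using simplified stand-in types rather than the real CTransaction and chainActive (illustrative only; see IsFinalTx()/CheckFinalTx() in main.cpp for the actual signatures and locking):
```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for illustration only; not the real Bitcoin Core types.
static const uint32_t LOCKTIME_THRESHOLD = 500000000; // below: block height, at/above: unix time

struct TxIn { uint32_t nSequence; };
struct Tx   { uint32_t nLockTime; std::vector<TxIn> vin; };

// Consensus-level check: could the transaction be included at this height/time?
bool IsFinalTx(const Tx& tx, int nBlockHeight, int64_t nBlockTime)
{
    if (tx.nLockTime == 0)
        return true;
    int64_t lockTarget = (tx.nLockTime < LOCKTIME_THRESHOLD) ? nBlockHeight : nBlockTime;
    if ((int64_t)tx.nLockTime < lockTarget)
        return true;
    // Inputs with final sequence numbers override nLockTime.
    for (const TxIn& txin : tx.vin)
        if (txin.nSequence != 0xffffffff)
            return false;
    return true;
}

// Wallet/mempool-facing check: will the transaction be final in the *next* block?
bool CheckFinalTx(const Tx& tx, int nTipHeight, int64_t nAdjustedTime)
{
    return IsFinalTx(tx, nTipHeight + 1, nAdjustedTime);
}
```
The point being that the wallet-facing check always evaluates finality at tip height + 1, i.e. what the next block could include, which removes the off-by-one.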
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed), -alertnotify will be triggered after 3.5 hours, but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
```
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
```
The -debug=partitioncheck debug.log messages look like:
```
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
```
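For reference, the likelihood line can be read as a Poisson probability with an expected 24 blocks per four hours. A stand-alone sketch of that arithmetic (not the actual ThreadPartitionCheck code):
```cpp
#include <cmath>
#include <cstdio>

// Poisson probability P(X = k) for mean lambda, computed in log space
// so that lambda^k / k! cannot overflow.
double PoissonPmf(int k, double lambda)
{
    double logP = -lambda + k * std::log(lambda) - std::lgamma(k + 1.0);
    return std::exp(logP);
}

int main()
{
    const double BLOCKS_EXPECTED = 24.0;   // 4 hours at one block per 10 minutes
    const int samples[] = {3, 22, 60};
    for (int found : samples)
        std::printf("blocks found: %2d  likelihood: %g\n",
                    found, PoissonPmf(found, BLOCKS_EXPECTED));
    return 0;
}
```
Computed this way, 22 blocks gives roughly the 0.0777702 shown above, while 3 or 60 blocks land far enough into the tails to trip the alert thresholds.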
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
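A minimal sketch of validating the -prune=N argument against these constraints (illustrative only; the constant names and the exact init.cpp wiring are simplified assumptions):
```cpp
#include <cstdint>
#include <stdexcept>

// Illustrative constants; see main.h and chainparams for the real definitions.
static const unsigned int MIN_BLOCKS_TO_KEEP = 288;             // recent blocks always kept
static const uint64_t MIN_PRUNE_TARGET = 550ull * 1024 * 1024;  // 550 MB floor

// Sketch of the startup validation for -prune=N (N given in MB).
// Returns the prune target in bytes; 0 means pruning is disabled.
uint64_t ValidatePruneTarget(int64_t nPruneArgMB)
{
    if (nPruneArgMB < 0)
        throw std::runtime_error("Prune cannot be configured with a negative value.");
    uint64_t nPruneTarget = (uint64_t)nPruneArgMB * 1024 * 1024;
    if (nPruneTarget != 0 && nPruneTarget < MIN_PRUNE_TARGET)
        throw std::runtime_error("Prune configured below the minimum of 550 MB; use a higher number.");
    return nPruneTarget;
}
```
Actual pruning additionally refuses to delete anything until the chain is past nPruneAfterHeight and always keeps the most recent MIN_BLOCKS_TO_KEEP blocks, as listed above.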
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
This adds a -checkblockindex option (defaulting to true for regtest), which occasionally
does a full consistency check for mapBlockIndex, setBlockIndexCandidates, chainActive, and
mapBlocksUnlinked.
Adds a regression test for the wallet's ResendWalletTransactions function, which uses a new, hidden RPC command "resendwallettransactions."
I refactored main's Broadcast signal so it is passed the best-block time, which let me remove a global variable shared between main.cpp and the wallet (nTimeBestReceived).
I also manually tested the "rebroadcast unconfirmed every half hour or so" functionality by:
1. Running bitcoind -connect=0.0.0.0:8333
2. Creating a couple of send-to-self transactions
3. Connecting to a peer using -addnode
4. Waiting a while, monitoring debug.log, until I saw:
```2015-03-23 18:48:10 ResendWalletTransactions: rebroadcast 2 unconfirmed transactions```
One last change: don't bother putting ResendWalletTransactions messages in debug.log unless unconfirmed transactions were actually rebroadcast.
Note that this will also require translation changes in Transifex for the key
"A fee higher than %1 is considered an insanely high fee." which is now
"A fee higher than %1 is considered an absurdly high fee."
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
There are three pieces of data maintained on disk: the actual block
and undo data, the block index (which can refer to positions on disk),
and the chainstate (which refers to the best block hash).
Earlier, there was no guarantee that blocks were written to disk before
block index entries referring to them were written. This commit introduces
dirty flags for block index data, and delays writing entries until the actual
block data is flushed.
With this stricter ordering in writes, it is now safe to not always flush
after every block, so there is no need for the IsInitialBlockDownload()
check there - instead we just write whenever enough time has passed or
the cache size grows too large. Also updating the wallet's best known block
is delayed until this is done, otherwise the wallet may end up referring to an
unknown block.
In addition, only do a write inside the block processing loop if necessary
(because of cache size exceeded). Otherwise, move the writing to a point
after processing is done, after relaying.
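A schematic of the ordering this enforces, with stand-in write steps rather than the real FlushStateToDisk logic:
```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Stand-ins for illustration: dirty index entries are buffered until the block
// data they refer to has been flushed, and the wallet hears about the tip last.
struct IndexEntryStub { int nHeight; };

std::set<IndexEntryStub*> setDirtyBlockIndex;   // index entries changed since the last flush
std::vector<std::string> writeLog;              // records the order in which writes happen

void FlushBlockAndUndoFiles()      { writeLog.push_back("block/undo data"); }
void WriteDirtyBlockIndexEntries() { writeLog.push_back("block index"); setDirtyBlockIndex.clear(); }
void WriteChainstateBestBlock()    { writeLog.push_back("chainstate best block"); }
void NotifyWalletBestBlock()       { writeLog.push_back("wallet best block"); }

void FlushSketch()
{
    FlushBlockAndUndoFiles();        // 1. block data reaches disk first
    WriteDirtyBlockIndexEntries();   // 2. then the index entries that refer to it
    WriteChainstateBestBlock();      // 3. then the chainstate's best block
    NotifyWalletBestBlock();         // 4. wallet last, so it never references an unknown block
}

int main()
{
    FlushSketch();
    for (const std::string& step : writeLog)
        std::cout << step << '\n';
    return 0;
}
```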
1bea2bb Rename ProcessBlock to ProcessNewBlock to indicate change of behaviour, and document it (Luke Dashjr)
d29a291 Rename RPC_TRANSACTION_* errors to RPC_VERIFY_* and use RPC_VERIFY_ERROR for submitblock (Luke Dashjr)
f877aaa Bugfix: submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock, now that it no longer does the full block validity check (Luke Dashjr)
24e8896 Add CValidationInterface::BlockChecked notification (Luke Dashjr)
85c579e script: add a slew of includes all around and drop includes from script.h (Cory Fields)
db8eb54 script: move ToString and ValueString out of the header (Cory Fields)
e9ca428 script: add ToByteVector() for converting anything with begin/end (Cory Fields)
066e2a1 script: move CScriptID to standard.h and add a ctor for creating them from CScripts (Cory Fields)
Many changes:
* Do not use 'getblocks', but 'getheaders', and use it to build a headers tree.
* Blocks are fetched in parallel from all available outbound peers, using a
limited moving window. When one peer stalls the movement of the window, it is
disconnected.
* No more orphan blocks. At all. We only ever request a block for which we have
verified the headers, and store it to disk immediately. This means that a
disk-fill attack would require PoW.
* Require protocol version 31800 for every peer (released in December 2010).
* No more syncnode (we sync from everyone we can, though limited to 1 during
initial *headers* sync).
* Introduce some extra named constants, comments and asserts.
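A rough sketch of the moving-window selection (illustrative only; the window size and per-peer in-flight limit here are assumed values, and the real logic, including stall detection and disconnection, is more involved):
```cpp
#include <map>
#include <vector>

// Assumed constants for illustration.
static const int BLOCK_DOWNLOAD_WINDOW = 1024;        // heights beyond the tip we fetch ahead
static const int MAX_BLOCKS_IN_FLIGHT_PER_PEER = 16;  // per-peer request limit

struct Peer { int id; int nBlocksInFlight; };

// Pick the next heights a peer may fetch: only heights inside the window above
// the last connected block, skipping heights already in flight, while the peer
// still has capacity. A peer that stalls the window's lower edge gets disconnected.
std::vector<int> NextHeightsToRequest(const Peer& peer,
                                      int nLastConnectedHeight,
                                      const std::map<int, int>& mapInFlight /* height -> peer id */)
{
    std::vector<int> heights;
    int capacity = MAX_BLOCKS_IN_FLIGHT_PER_PEER - peer.nBlocksInFlight;
    for (int h = nLastConnectedHeight + 1;
         h <= nLastConnectedHeight + BLOCK_DOWNLOAD_WINDOW && capacity > 0; ++h) {
        if (mapInFlight.count(h))
            continue;                 // someone else is already fetching this height
        heights.push_back(h);
        --capacity;
    }
    return heights;
}
```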
There is only one message passed to AbortNode() that makes sense to
translate for the user specifically: disk space is low. For the others,
show a generic message and refer to debug.log for details.
Reduces the number of confusing jargon translation messages.
There is no reason to store thousands of orphan transactions;
normally an orphan's parents will either be broadcast or
mined reasonably quickly.
This pull drops the maximum number of orphans from 10,000 down
to 100, and adds a command-line option (-maxorphantx) that is
just like -maxorphanblocks to override the default.
Thanks to Pieter Wuille for most of the work on this commit.
I did not fixup the overhaul commit, because a rebase conflicted
with "remove fields of ser_streamplaceholder".
I prefer not to risk making a mistake while resolving it.
The implementation of each class's serialization/deserialization is no longer
passed within a macro. The implementation now lies within a template of the form:
```cpp
template <typename T, typename Stream, typename Operation>
inline static size_t SerializationOp(T thisPtr, Stream& s, Operation ser_action, int nType, int nVersion) {
    size_t nSerSize = 0;
    /* CODE */
    return nSerSize;
}
```
In cases where the codepath should depend on whether or not we are deserializing
(the old fGetSize, fWrite, fRead flags), an additional clause can be used:
```cpp
bool fRead = boost::is_same<Operation, CSerActionUnserialize>();
```
The IMPLEMENT_SERIALIZE macro will now be a freestanding clause added within the
class body (similar to Qt's Q_OBJECT) to implement GetSerializeSize,
Serialize, and Unserialize. These are now wrappers around
the "SerializationOp" template.
- ensures consistent usage in header files
- also adds a blank line after the copyright header where missing
- also removes orphan newlines at the end of some files
Bypassing the main coins cache allows more thorough checking with the same
memory budget.
This has no effect on performance because everything ends up in the child
cache created by VerifyDB itself.
It has bugged me ever since #4675, which effectively reduced the
number of checked blocks to reduce peak memory usage.
- Pass the coinsview to use as argument to VerifyDB
- This also prevents the first `pcoinsTip->Flush()` after VerifyDB from
writing a large slew of unchanged coin records back to the database.
ad49c25 Split up util.cpp/h (Wladimir J. van der Laan)
f841aa2 Move `COIN` and `CENT` to core.h (Wladimir J. van der Laan)
6e5fd00 Move `*Version()` functions to version.h/cpp (Wladimir J. van der Laan)
b4aa769 Move `S_I*` constants and `MSG_NOSIGNAL` to compat.h (Wladimir J. van der Laan)
af8297c Move functions in wallet.h to implementation file (Wladimir J. van der Laan)
651480c move functions in main and net to implementation files (Wladimir J. van der Laan)
610a8c0 Move SetThreadPriority implementation to util.cpp instead of the header (Wladimir J. van der Laan)
f780e65 Remove unused function `ByteReverse` from util.h (Wladimir J. van der Laan)
121d6ad Remove unused `alignup` function from util.h (Wladimir J. van der Laan)
d1e26d4 Move CMedianFilter to timedata.cpp (Wladimir J. van der Laan)
Port over https://github.com/chronokings/huntercoin/pull/19 from
Huntercoin: This implements a new RPC command "getchaintips" that can be
used to find all currently active chain heads. This is similar to the
-printblocktree startup option, but it can be used on a running daemon
via the RPC interface, without restarting.
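The underlying tip-finding idea can be sketched as follows (simplified stand-in for the block index; the real RPC walks mapBlockIndex and reports additional per-tip information):
```cpp
#include <set>
#include <vector>

// Simplified stand-in for CBlockIndex: each entry knows its parent.
struct BlockIndexStub { const BlockIndexStub* pprev; int nHeight; };

// A chain tip is any known block that no other known block builds on.
std::vector<const BlockIndexStub*> FindChainTips(const std::vector<const BlockIndexStub*>& allBlocks)
{
    std::set<const BlockIndexStub*> candidates(allBlocks.begin(), allBlocks.end());
    for (const BlockIndexStub* pindex : allBlocks)
        candidates.erase(pindex->pprev);   // anything that has a child is not a tip
    return std::vector<const BlockIndexStub*>(candidates.begin(), candidates.end());
}
```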
* Replace -benchmark (and the related fBenchmark) with a regular debug option, -debug=bench.
* Increase coverage and granularity of individual block processing steps.
* Add cumulative times.
First and foremost, this defaults to OFF.
This option lets a node consider such transactions non-standard,
meaning they will not be relayed or mined by default, but other miners
are free to mine these as usual.
The wallet now uses the mempool fee estimator with a new
command-line option: -txconfirmtarget (default: 1) instead
of using hard-coded fees or priorities.
A new bitcoind that hasn't seen enough transactions to estimate
will fall back to the old hard-coded minimum priority or
transaction fee.
-paytxfee option overrides -txconfirmtarget.
Relaying and mining code isn't changed.
For Qt, the coin control dialog now uses priority estimates to
label transaction priority (instead of hard-coded constants);
unspent outputs were consistently labeled with a much higher
priority than is justified by the free transactions actually
being accepted into blocks.
I did not implement any GUI for setting -txconfirmtarget; I would
suggest getting rid of the "Pay transaction fee" GUI and replace
it with either "target number of confirmations" or maybe
a "faster confirmation <--> lower fee" slider or select box.
Allows network wallets and other clients to see transactions that respend
a prevout already spent in an unconfirmed transaction in this node's mempool.
Knowledge of an attempted double-spend is of interest to recipients of the
first spend. In some cases, it will allow these recipients to withhold
goods or services upon being alerted of a double-spend that deprives them
of payment.
As before, respends are not added to the mempool.
Anti-Denial-of-Service-Attack provisions:
- Use a bloom filter to relay only one respend per mempool prevout
- Rate-limit respend relays to a default of 100 thousand bytes/minute
- Define tx2.IsEquivalentTo(tx1): equality when scriptSigs are not considered
- Do not relay these equivalent transactions
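A sketch of the scriptSig-blind equivalence test (simplified transaction stand-ins; the real IsEquivalentTo operates on CTransaction):
```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for illustration only.
struct TxIn  { uint64_t prevoutHash; uint32_t prevoutN; std::vector<unsigned char> scriptSig; uint32_t nSequence; };
struct TxOut { int64_t nValue; std::vector<unsigned char> scriptPubKey; };
struct Tx    { int32_t nVersion; std::vector<TxIn> vin; std::vector<TxOut> vout; uint32_t nLockTime; };

// Two transactions are "equivalent" when they are identical once scriptSigs are
// ignored: same version/locktime, same prevouts and sequences, same outputs.
bool IsEquivalentTo(const Tx& a, const Tx& b)
{
    if (a.nVersion != b.nVersion || a.nLockTime != b.nLockTime) return false;
    if (a.vin.size() != b.vin.size() || a.vout.size() != b.vout.size()) return false;
    for (size_t i = 0; i < a.vin.size(); ++i)
        if (a.vin[i].prevoutHash != b.vin[i].prevoutHash ||
            a.vin[i].prevoutN    != b.vin[i].prevoutN    ||
            a.vin[i].nSequence   != b.vin[i].nSequence)
            return false;                       // scriptSig deliberately not compared
    for (size_t i = 0; i < a.vout.size(); ++i)
        if (a.vout[i].nValue != b.vout[i].nValue ||
            a.vout[i].scriptPubKey != b.vout[i].scriptPubKey)
            return false;
    return true;
}
```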
Remove an unused variable declaration in txmempool.cpp.
Relax the AreInputsStandard() tests for P2SH transactions --
allow any Script in a P2SH transaction to be relayed/mined,
as long as it has 15 or fewer signature operations.
Rationale: https://gist.github.com/gavinandresen/88be40c141bc67acb247
I don't have an easy way to test this, but the code changes are
straightforward and I've updated the AreInputsStandard unit tests.
bitcoin-config.h moved, but the old file is likely to still exist when
reconfiguring or switching branches. This would have caused files to not rebuild
correctly, among other strange problems.
Make the path explicit so that the old one cannot be found.
Core libs use config/bitcoin-config.h.
Libs (like crypto) which don't want access to bitcoin's headers continue
to use -Iconfig and #include bitcoin-config.h.
f0a83fc Use Params().NetworkID() instead of TestNet() from the payment protocol (jtimon)
2871889 net.h was using std namespace through chainparams.h included in protocol.h (jtimon)
c8c52de Replace virtual methods with static attributes, chainparams.h depends on protocol.h instead of the other way around (jtimon)
a3d946e Get rid of TestNet() (jtimon)
6fc0fa6 Add RPCisTestNet chain parameter (jtimon)
cfeb823 Add RequireStandard chain parameter (jtimon)
21913a9 Add AllowMinDifficultyBlocks chain parameter (jtimon)
d754f34 Move majority constants to chainparams (jtimon)
8d26721 Get rid of RegTest() (jtimon)
cb9bd83 Add DefaultCheckMemPool chain parameter (jtimon)
2595b9a Add DefaultMinerThreads chain parameter (jtimon)
bfa9a1a Add MineBlocksOnDemand chain parameter (jtimon)
1712adb Add MiningRequiresPeers chain parameter (jtimon)
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets, each holding at most 100 entries, recording the
fees-per-kilobyte paid by those transactions.
(separate buckets are kept for transactions that confirmed because
they are high-priority)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm within X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50th-highest fee entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150th-highest fee entry, etc.
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.
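A stand-alone sketch of the median-over-pooled-buckets arithmetic described above (not the real estimator code; the bucket layout and the 100-entries-per-bucket cap follow the description):
```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// bucket i holds the fees-per-kilobyte of transactions that took i+1 blocks to
// confirm, at most 100 entries each. To estimate for nBlocks, pool every sample,
// sort by fee descending, and pick the rank implied by the description above.
double EstimateFee(const std::vector<std::vector<double> >& buckets, size_t nBlocks)
{
    std::vector<double> samples;
    size_t rank = 0;                                 // 1-based rank into the sorted list
    for (size_t i = 0; i < buckets.size(); ++i) {
        samples.insert(samples.end(), buckets[i].begin(), buckets[i].end());
        if (i + 1 < nBlocks)
            rank += buckets[i].size();               // faster-confirming samples count fully
        else if (i + 1 == nBlocks)
            rank += buckets[i].size() / 2;           // plus the median of the target bucket
    }
    if (samples.empty() || rank == 0)
        return -1.0;                                 // not enough data to estimate
    std::sort(samples.rbegin(), samples.rend());     // highest fee first
    return samples[std::min(rank, samples.size()) - 1];
}

int main()
{
    // 25 full buckets of synthetic, strictly decreasing fees: the 1-block estimate
    // is the 50th-highest fee and the 2-block estimate the 150th-highest,
    // matching the 50/150/... ranks described above.
    std::vector<std::vector<double> > buckets(25, std::vector<double>(100, 0.0));
    for (size_t i = 0; i < buckets.size(); ++i)
        for (size_t j = 0; j < buckets[i].size(); ++j)
            buckets[i][j] = 10000.0 - (i * 100 + j);
    std::printf("1-block estimate: %g\n", EstimateFee(buckets, 1));
    std::printf("2-block estimate: %g\n", EstimateFee(buckets, 2));
    return 0;
}
```
This also shows the monotonicity property: adding slower buckets can only push the chosen rank deeper into the descending-sorted list, so the estimate for N+1 blocks is never higher than the estimate for N.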
f40dbee remove CPubKey::VerifyCompact( ) which is never used (Kamil Domanski)
28b6c1d remove GetMedianTime( ) which is never used (Kamil Domanski)
5bd4adc remove LookupHostNumeric( ) which is never used (Kamil Domanski)
595f691 remove LogException( ) which is never used (Kamil Domanski)
f4057cb remove CTransaction::IsNewerThan which is never used (Kamil Domanski)
0e31e56 remove CWallet::AddReserveKey which is never used (Kamil Domanski)
397668e Deduplicate uint* comparison operator logic (Pieter Wuille)
df9eb5e Move {Get,Set}Compact from bignum to uint256 (Pieter Wuille)
a703150 Add multiplication and division to uint160/uint256 (Pieter Wuille)
4d480c8 Exception instead of assigning 0 in case of wrong vector length (Pieter Wuille)
eb2cbd7 Deduplicate shared code between uint160 and uint256 (Pieter Wuille)
787ee0c Check redeemScript size does not exceed 520 byte limit (Peter Todd)
4d79098 Increase IsStandard() scriptSig length (Peter Todd)
f80cffa Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails (Peter Todd)
6380180 Add rejection of non-null CHECKMULTISIG dummy values (Peter Todd)
29c1749 Let tx (in)valid tests use any SCRIPT_VERIFY flag (Peter Todd)
68f7d1d Create (MANDATORY|STANDARD)_SCRIPT_VERIFY_FLAGS constants (Peter Todd)
Generally useless information: it only updates at connect time, not after
that. Peers can easily lie, and the median filter is not effective in
preventing that.
In the past it was used for progress display in the GUI, but
`Checkpoints::GuessVerificationProgress` provides a better way that is now used.
It was too easy to mislead; peers do lie about it in practice, see issue #4065.
Over RPC, `getpeerinfo` gives the raw per-peer values, which are more
useful.
- introduce DEFAULT_SCRIPTCHECK_THREADS in main.h
- only show values from -"MAX_HW_THREADS" up to 16 for -par, as it
makes no sense to try to leave more "cores free" than the system
supports anyway
- use the new constant in optionsdialog and remove defaults from
.ui file
This resolves a case in which a mismatch could be used to bloat up the
mempool by sending transactions that pay enough fee to relay, but not
to be mined, with the default policies.
5770254 Copyright header updates s/2013/2014 on files whose last git commit was in 2014; adds a contrib/devtools/fix-copyright-headers.py script to perform this maintenance task with ease during the rest of the year, every year; updates contrib/devtools/README.md to document what fix-copyright-headers.py does. (gubatron)
Extend CMerkleTx::GetDepthInMainChain with the concept of
a "conflicted" transaction-- a transaction generated by the wallet
that is not in the main chain or in the mempool, and, therefore,
will likely never be confirmed.
GetDepthInMainChain() now returns -1 for conflicted transactions
(0 for unconfirmed-but-in-the-mempool, and >= 1 for confirmed).
This makes getbalance, getbalance '*', and listunspent all agree when there are
mutated transactions in the wallet.
Before:
listunspent: one 49 BTC output
getbalance: 96 BTC (change counted twice)
getbalance '*': 46 BTC (spends counted twice)
After: all agree, 49 BTC available to spend.
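The resulting return-value convention, sketched with stand-in state flags instead of the real wallet and chain lookups:
```cpp
// Illustrative only: the real code determines the state by looking the
// transaction up in the block index and the mempool.
enum TxChainState { TX_IN_MAIN_CHAIN, TX_IN_MEMPOOL, TX_CONFLICTED };

struct WalletTxStub {
    TxChainState state;
    int nChainDepth;    // number of confirmations when TX_IN_MAIN_CHAIN
};

int GetDepthInMainChain(const WalletTxStub& wtx)
{
    switch (wtx.state) {
    case TX_IN_MAIN_CHAIN: return wtx.nChainDepth;  // >= 1: confirmed
    case TX_IN_MEMPOOL:    return 0;                //    0: unconfirmed but in the mempool
    case TX_CONFLICTED:    return -1;               //   -1: will likely never confirm
    }
    return -1;
}
```
Balance calculations can then simply skip anything with a negative depth, which is what brings getbalance, getbalance '*', and listunspent back into agreement.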
Keep track of which block is being requested (and to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
In case the total number of orphan blocks in memory exceeds a limit
(currently set to 750), a random orphan block (which is not
depended on by another orphan block) is dropped. This means it will
need to be downloaded again, but it won't consume memory until then.
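A sketch of the eviction loop (simplified containers; the real code works with serialized blocks keyed by uint256 hashes):
```cpp
#include <cstdint>
#include <cstdlib>
#include <iterator>
#include <map>
#include <set>

typedef uint64_t BlockHash;                   // stand-in for uint256
static const size_t MAX_ORPHAN_BLOCKS = 750;  // limit described above

// mapOrphans: orphan blocks keyed by their own hash (payload elided here).
// setOrphanParents: hashes that some other orphan lists as its parent; those
// entries are still depended on and must not be evicted.
void LimitOrphanBlocks(std::map<BlockHash, int>& mapOrphans,
                       const std::set<BlockHash>& setOrphanParents)
{
    while (mapOrphans.size() > MAX_ORPHAN_BLOCKS) {
        // Pick a random orphan; retry if another orphan still builds on it.
        std::map<BlockHash, int>::iterator it = mapOrphans.begin();
        std::advance(it, std::rand() % mapOrphans.size());
        if (setOrphanParents.count(it->first))
            continue;
        mapOrphans.erase(it);   // dropped; it can simply be downloaded again later
    }
}
```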
c117d9e Support for error messages and a few more rejection reasons (Luke Dashjr)
14e7ffc Use standard BIP 22 rejection reasons where applicable (Luke Dashjr)
This changes the block processing logic from "try to atomically switch
to a new block" to a continuous "(dis)connect a block, aiming for the
assumed best chain".
This means the smallest atomic operations on the chainstate become
individual block connections or disconnections, instead of entire
reorganizations. It may mean that we try to reorganize to one block,
fail, and then reorganize back to the old block. This is slower, but
doesn't require unbounded RAM.
It also means that a ConnectBlock which fails may no longer be called
from the ProcessBlock which knows which node sent it. To deal with that,
a mapBlockSource is kept, and invalid blocks cause asynchronous "reject"
messages and banning (if necessary).
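A sketch of the mapBlockSource bookkeeping (stand-in types; the real code keys on uint256 block hashes and sends the reject message and any ban through the networking layer once validation reports the result):
```cpp
#include <cstdint>
#include <map>
#include <string>

typedef uint64_t BlockHash;   // stand-in for uint256
typedef int NodeId;

// Remember which peer sent each block, so a failure discovered later
// (asynchronously, while connecting the block) can still be attributed.
std::map<BlockHash, NodeId> mapBlockSource;

void OnBlockReceived(BlockHash hash, NodeId from)
{
    mapBlockSource[hash] = from;
}

// Called once the block's validity is known. Returns the peer to send a
// "reject" to (and possibly ban), or -1 if there is nothing to do.
NodeId OnBlockChecked(BlockHash hash, bool fValid, std::string& rejectReasonOut)
{
    NodeId toPunish = -1;
    std::map<BlockHash, NodeId>::iterator it = mapBlockSource.find(hash);
    if (it != mapBlockSource.end()) {
        if (!fValid) {
            toPunish = it->second;
            rejectReasonOut = "invalid";
        }
        mapBlockSource.erase(it);   // only tracked until the verdict is known
    }
    return toPunish;
}
```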
After discussion with BlueMatt, this appears to be harmless in its
current state, since the variable is always set before it is used. Initialize it
anyway for readability and future safety.
Use miscellaneous methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
As block index entries have a flag for marking invalid blocks, the
'best invalid work' information can be derived from there. In addition,
remove the global from main.h.
This removes a few unused CBlockLocator methods, and moves the
construction and fork-finding logic to CChain (which can do these
more efficiently, as it has a height-indexable chain available).
It also makes CBlockLocator independent from the validation code.
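A sketch of the locator construction that now lives in CChain (heights stand in for block hashes; the real GetLocator emits hashes and also handles starting points that are not on the active chain):
```cpp
#include <vector>

// Block-locator construction over a height-indexed chain: dense near the tip,
// then exponentially sparser steps, always ending at the genesis block. This
// lets a peer find the fork point with O(log n) entries.
std::vector<int> GetLocatorHeights(int nTipHeight)
{
    std::vector<int> heights;
    int nStep = 1;
    for (int h = nTipHeight; h > 0; h -= nStep) {
        heights.push_back(h);
        if (heights.size() > 10)
            nStep *= 2;              // back off exponentially after the first 10 entries
    }
    heights.push_back(0);            // always include genesis
    return heights;
}
```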