- implement find_value() function for UniValue
- replace all Array/Value/Object types with UniValues, remove JSON Spirit to UniValue wrapper
- remove JSON Spirit sources
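As a rough illustration (not taken from the PR itself), `find_value()` is used to look up a key on a UniValue object; the helper function name and include path below follow the UniValue library, while the surrounding function is hypothetical:

```cpp
// Illustrative sketch only: find_value() returns a null UniValue when the key
// is absent. The include path may differ depending on how UniValue is vendored.
#include <univalue.h>

int GetBalanceExample()
{
    UniValue obj(UniValue::VOBJ);
    obj.pushKV("balance", 42);

    const UniValue& v = find_value(obj, "balance");
    return v.isNull() ? 0 : v.get_int();
}
```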
In some corner cases, it may be possible for recent blocks to end up in
the same block file as much older blocks. Previously, the pruning code
would stop looking for files to remove upon first encountering a file
containing a block that cannot be pruned, now it will keep looking for
candidate files until the target is met and all other criteria are
satisfied.
This can result in a noncontiguous set of block files (by number) on
disk, which is fine except during some reindex corner cases, so make
reindex preparation smarter: keep the data we can actually use and
throw away the rest. This allows pruning to work correctly while
downloading any blocks needed during the reindex.
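Conceptually, the change to the pruning scan looks like the sketch below (hypothetical helper names, not the actual FindFilesToPrune code):

```cpp
// Hypothetical sketch: files that cannot be pruned are now skipped rather than
// terminating the scan, so later candidate files are still considered.
for (int fileNumber = 0; fileNumber < nLastBlockFile; fileNumber++) {
    if (nCurrentUsage <= nPruneTarget)
        break;                        // prune target met, stop scanning
    if (!CanPruneFile(fileNumber))    // hypothetical predicate: file still holds blocks we must keep
        continue;                     // previously the scan stopped here; now it keeps looking
    setFilesToPrune.insert(fileNumber);
    nCurrentUsage -= BlockFileSize(fileNumber);  // hypothetical helper
}
```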
Change `read_string` to fail unless the entire input has been
consumed. This avoids unexpected, even dangerous behavior (fixes #6223).
The new JSON parser adapted in #6121 also solves this problem, so in
master this is a temporary fix, but it should be backported to older releases.
Also adds tests for the new behavior.
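For illustration, the intended behavior looks roughly like this (a sketch, not one of the added tests verbatim; assumes the bundled json_spirit headers under src/json/):

```cpp
// Sketch of the new strictness: parsing succeeds only if the whole input is consumed.
#include "json/json_spirit_reader_template.h"
#include <cassert>

void read_string_strictness_example()
{
    json_spirit::Value val;
    assert(json_spirit::read_string(std::string("[1,2,3]"), val));            // whole input consumed: ok
    assert(!json_spirit::read_string(std::string("[1,2,3] trailing"), val));  // leftover input: now fails
}
```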
If/when CTransaction::CURRENT_VERSION is incremented, this will break CChainParams and the miner tests. This fix sets the transaction version explicitly where we depend on the hash value (genesis block, proof of work checks).
aa41bc8 Update help message to match the #4219 change (lpescher)
f60bb5e Update documentation to match the #4219 change (lpescher)
cb87386 Make command line option to show all debugging consistent with similar options (lpescher)
Previously due to an off-by-one error the wallet ignored
nLockTime-by-height transactions that would be valid in the next block
even though they are accepted into the mempool. The transactions
wouldn't show up until confirmed, nor would they be included in the
unconfirmed balance. Similar to the mempool behavior fix in 665bdd3b,
the wallet code was calling IsFinalTx() directly without taking into
account the fact that doing so tells you if the transaction could have
been mined in the *current* block, rather than the next block.
To fix this we strip IsFinalTx() of non-consensus-critical
functionality, removing the default arguments, and add CheckFinalTx() to
check if a transaction will be final in the next block.
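Roughly, the new helper is a thin wrapper that evaluates finality at the next block's height (simplified sketch relative to the real code):

```cpp
// Simplified sketch: CheckFinalTx() asks whether the transaction would be final
// in the *next* block, i.e. at chainActive.Height() + 1, using adjusted time.
bool CheckFinalTx(const CTransaction& tx)
{
    AssertLockHeld(cs_main);
    return IsFinalTx(tx, chainActive.Height() + 1, GetAdjustedTime());
}
```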
Fix two CSubNet constructor problems:
- The use of `/x` where 8 does not divide x was broken, due to a
bit-order issue
- The use of e.g. `1.2.3.4/24` where the netmasked bits in the network
are not 0 was broken. Fix this by explicitly normalizing the network
according to the bitmask.
Also add tests for these cases.
Fixes #6179. Thanks to @jonasschnelli for reporting and initial fix.
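Conceptually the normalization works like the following standalone sketch (hypothetical code, not the actual CSubNet constructor):

```cpp
// Hypothetical sketch of normalizing the network address against its netmask,
// so that e.g. 1.2.3.4/24 is stored as 1.2.3.0/24.
#include <cstdint>

void NormalizeNetwork(uint8_t network[16], const uint8_t netmask[16])
{
    for (int i = 0; i < 16; ++i)
        network[i] &= netmask[i];  // zero out the host bits
}
```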
Don't clear `stopRequested` and `stopWhenEmpty` at the top of
`serviceQueue`, as this results in a race condition: on systems under
heavy load, some of the threads only get scheduled on the CPU when the
other threads have already finished their work. This causes the flags to
be cleared post-hoc and thus those threads to wait forever.
The potential drawback of this change is that the scheduler cannot be
restarted after being stopped (an explicit reset would be needed), but
we don't use this functionality anyway.
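Roughly, the service loop now only reads the stop flags instead of resetting them on entry (simplified sketch, not the actual CScheduler code):

```cpp
// Simplified sketch of the idea: serviceQueue() no longer clears the stop
// flags when a thread enters, so a thread scheduled late still sees a stop
// request issued before it got to run.
void CScheduler::serviceQueue()
{
    boost::unique_lock<boost::mutex> lock(newTaskMutex);
    // previously: stopRequested = false; stopWhenEmpty = false;  (removed)
    while (!shouldStop()) {
        // ... wait for and run queued tasks ...
    }
}
```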
Most people expect a value of 1 to enable everything for a command-line
option. However, to do this for the -debug option you had to type
"-debug=". This has been changed so that "-debug=1", as well as
"-debug=", enables all debug logging.
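For example (illustrative invocations):

```
bitcoind -debug=1      # now enables all debug categories
bitcoind -debug=       # also enables all debug categories (previous behavior)
bitcoind -debug=net    # still enables only the "net" category
```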
To protect privacy, do not use UPnP when a proxy is set. The user may
still specify -listen=1 to listen locally (for a hidden service), so
don't rely on this happening through -listen.
Fixes #2927.
On a busy or slow system, the CScheduler unit test could fail because it
assumed all threads would be done after a couple of milliseconds.
Replace the hard-coded sleep with a call to the CScheduler stop() method,
which cleanly exits the servicing threads once all tasks are completely
finished.
Previously this was cleared only after UnlinkPrunedFiles, but it should really be cleared after FindFilesToPrune, regardless of whether there are any files to be pruned.
86a5f4b Relocate calls to CheckDiskSpace (Alex Morcos)
67708ac Write block index more frequently than cache flushes (Pieter Wuille)
b3ed423 Cache tweak and logging improvements (Pieter Wuille)
fc684ad Use accurate memory for flushing decisions (Pieter Wuille)
046392d Keep track of memory usage in CCoinsViewCache (Pieter Wuille)
540629c Add memusage.h (Pieter Wuille)
8f0947b Increase timeouts in pruning.py and modify warning language. (Alex Morcos)
b89f307 Fix incorrect variable name in FindFilesToPrune (Suhas Daftuar)
This assertion will occur any time that the client quits without
shutting down properly due to an error condition. As the user will
report this error instead of the error that was the root cause, it is
better to remove it.
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
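The likelihood figure corresponds to a Poisson tail probability. A rough sketch of the kind of computation involved (hypothetical function and variable names; assumes boost::math is available):

```cpp
// Rough sketch of the partition-check likelihood: with an expected rate of
// 24 blocks per 4 hours, compute how unlikely the observed count is.
#include <boost/math/distributions/poisson.hpp>

double PartitionLikelihood(int nBlocksObserved, double nExpected = 24.0)
{
    boost::math::poisson_distribution<double> poisson(nExpected);
    if (nBlocksObserved < nExpected)
        return boost::math::cdf(poisson, nBlocksObserved);         // too few blocks found
    return 1.0 - boost::math::cdf(poisson, nBlocksObserved - 1);   // too many blocks found
}
```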
ca5f688 [QT] don't colorize icons on win and mac (Jonas Schnelli)
7247d10 [QT] use alert icon with tooltip insted of "(out of sync)" text (Jonas Schnelli)
51c7c70 [QT] remove frame to avoid double-frame situation in sendcoinsentry.ui (Jonas Schnelli)
2a6b844 [QT] change transaction amount and height in overview page (Jonas Schnelli)
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore - it will be safe, just less fast to
do.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no headers chain branching off before
that point are allowed anymore.
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
- Eliminate transactions which didn't have all inputs available at entry from fee/priority calculations
- Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
- Remove old CMinerPolicyEstimator and CBlockAverage code
- New smartfees.py
- Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
- Add a policyestimator unit test
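The exponentially decaying per-bucket history described above can be pictured with the following heavily simplified sketch (hypothetical types and names, not the actual estimator code):

```cpp
// Heavily simplified sketch: every new block, old observations in a bucket are
// scaled down by a decay factor and the block's new observations are added in.
#include <vector>

struct FeeBucket {
    std::vector<double> confirmedWithin;  // confirmedWithin[n]: decayed count confirmed within n+1 blocks
    double totalSeen;

    explicit FeeBucket(int maxConfirms) : confirmedWithin(maxConfirms, 0.0), totalSeen(0.0) {}

    // newlyConfirmed[n]: of the newly seen transactions, how many confirmed within n+1 blocks.
    void UpdateForNewBlock(const std::vector<int>& newlyConfirmed, int newlySeen, double decay = 0.998)
    {
        totalSeen = totalSeen * decay + newlySeen;
        for (size_t n = 0; n < confirmedWithin.size(); ++n)
            confirmedWithin[n] = confirmedWithin[n] * decay +
                                 (n < newlyConfirmed.size() ? newlyConfirmed[n] : 0);
    }

    // Fraction of transactions from this bucket confirmed within `blocks` blocks.
    double SuccessRate(int blocks) const
    {
        if (totalSeen <= 0.0 || blocks < 1 || (size_t)blocks > confirmedWithin.size())
            return 0.0;
        return confirmedWithin[blocks - 1] / totalSeen;
    }
};
```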
We don't want to erase orphans that still have missing inputs; they should still be tracked as orphans. Also, the transaction that's being accepted can't be an orphan (otherwise it would have previously been accepted), so it doesn't need to be added to the erase queue.
When the internal miner is enabled at the start of a new node, there
is a near-instant assert in TestBlockValidity because it is attempting
to mine a block before the top checkpoint.
Also avoids a data race around vNodes.
03c5687 appropriate response when trying to get a block in pruned mode (Jonas Schnelli)
1b2e555 add autoprune information to RPC "getblockchaininfo" (Jonas Schnelli)
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
a71ab10 QA: add RPC tests for error reporting of "signrawtransaction" (dexX7)
8ac2a4e RPC: show script verification errors in "signrawtransaction" result (dexX7)
If there are any script verification errors when using "signrawtransaction", they are shown in the RPC result:
```
// ...
Result:
{
  "hex" : "value",           (string) The hex-encoded raw transaction with signature(s)
  "complete" : true|false,   (boolean) If the transaction has a complete set of signatures
  "errors" : [               (json array of objects) Script verification errors (if there are any)
    {
      "txid" : "hash",       (string) The hash of the referenced, previous transaction
      "vout" : n,            (numeric) The index of the output to spend and used as input
      "scriptSig" : "hex",   (string) The hex-encoded signature script
      "sequence" : n,        (numeric) Script sequence number
      "error" : "text"       (string) Verification or signing error related to the input
    }
    ,...
  ]
}
```
Connecting the chain can take quite a while.
All the while it is still showing `Loading wallet...`.
Add an init message to inform the user what is happening.
libsecp256k1's API changed, so update key.cpp to use it.
Libsecp256k1 now has explicit context objects, which makes it completely thread-safe.
In turn, keep an explicit context object in key.cpp, which is explicitly initialized
and destroyed. This is not really pretty now, but it's more efficient than the statically
initialized object in key.cpp (which made for example bitcoin-tx slow, as for most of
its calls libsecp256k1 wasn't actually needed).
This also brings in the new blinding support in libsecp256k1. By passing in a random
seed, temporary variables during the elliptic curve computations are altered, in such
a way that if an attacker does not know the blind, observing the internal operations
leaks less information about the keys used. This was implemented by Greg Maxwell.
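Schematically, the explicit context handling with blinding looks like the sketch below (simplified, not the exact key.cpp code; assumes the libsecp256k1 context API names secp256k1_context_create/_randomize/_destroy and Bitcoin's GetRandBytes helper):

```cpp
// Simplified sketch of explicit context management plus randomized blinding.
#include <secp256k1.h>

static secp256k1_context* secp256k1_ctx = NULL;

void ECC_Start()
{
    secp256k1_ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);

    // Pass in a random blinding seed so intermediate EC computations leak
    // less information about the keys in use.
    unsigned char seed[32];
    GetRandBytes(seed, 32);
    secp256k1_context_randomize(secp256k1_ctx, seed);
}

void ECC_Stop()
{
    if (secp256k1_ctx) secp256k1_context_destroy(secp256k1_ctx);
    secp256k1_ctx = NULL;
}
```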
bba2216 RPC test for "#5418 Report missing inputs in sendrawtransaction" (Jonas Schnelli)
de8e801 Report missing inputs in sendrawtransaction (Pieter Wuille)
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer -- worst case, if the bloom filter happens
to be as full as it gets, 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help with heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established), and after that there is no per-item memory
allocation for remembered addresses.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
For when you need to keep track of the last N items
you've seen, and can tolerate some false-positives.
Rebased-by: Pieter Wuille <pieter.wuille@gmail.com>
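A hedged usage sketch (assuming a CRollingBloomFilter(nElements, fpRate) constructor with insert()/contains(), as described above; the surrounding function is hypothetical):

```cpp
// Illustrative sketch: tracking which addresses we have already told a peer
// about with a rolling bloom filter instead of an mruset.
CRollingBloomFilter addrKnown(5000, 0.001);  // remember ~5000 items, 0.1% false-positive rate

void MaybeRelay(const CAddress& addr)
{
    const std::vector<unsigned char> key = addr.GetKey();
    if (addrKnown.contains(key))
        return;               // already sent (or, rarely, a false positive)
    addrKnown.insert(key);
    // ... queue the addr for relay to this peer ...
}
```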
1ec900a Remove broken+useless lock/unlock log prints (Matt Corallo)
352ed22 Add merkle blocks test (Matt Corallo)
59ed61b Add RPC call to generate and verify merkle blocks (Matt Corallo)
30da90d Add CMerkleBlock constructor for tx set + block and an empty one (Matt Corallo)
This commit adds several tests to the script_invalid.json data which
exercise some edge conditions that are not currently being tested.
These are mainly being added to cover several cases that a branch-coverage
analysis of btcd showed are not already being covered, but given that
more tests of edge conditions are always a good thing, I'm contributing
them upstream.
The test which is intended to prove that the script engine is properly
rejecting non-minimally encoded PUSHDATA4 data is using the wrong
opcode and value. The test is using 0x4f, which is OP_1NEGATE instead
of the desired 0x4e, which is OP_PUSHDATA4. Further, the push of data
is intended to be 256 bytes, but the value the test is using is
0x00100000 (4096), instead of the desired 0x00010000 (256).
This commit fixes both issues.
This was found while examining the branch coverage in btcd against only
these tests to help find missing branch coverage.
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
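For example, a node willing to spend roughly 1 GB on block and undo files would be started as follows (or with the equivalent `prune=1000` line in bitcoin.conf):

```
bitcoind -prune=1000
```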
Has parts of @mhearn's #4351.
* allows querying the UTXOs over REST
* same binary inputs and outputs as mentioned in BIP 64
* input format = output format
* various RPC/REST regtests
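Roughly, a query takes the following form (illustrative placeholders for the txid and output index; the exact URI format is documented in the REST interface docs):

```
GET /rest/getutxos/checkmempool/<txid>-<n>.json
```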