This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. For each bucket, it then calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping, for each bucket and each confirmation block count, an exponentially decaying moving history of the percentage of transactions in that bucket that were confirmed within that number of blocks.
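A minimal Python sketch of the per-bucket tracking described above; the class name and decay constant are illustrative only (the real estimator is C++), but it shows the exponentially decaying history idea:

```python
# Illustrative sketch of per-bucket, exponentially decaying confirmation tracking.
# Names and the decay constant are hypothetical; the real estimator is C++ code.

DECAY = 0.998  # per-block decay factor (illustrative)

class FeeBucket:
    def __init__(self, max_confirms=25):
        # confirmed[i] ~ decayed count of txs confirmed within i+1 blocks
        self.confirmed = [0.0] * max_confirms
        self.total = 0.0

    def new_block(self):
        # Apply exponential decay once per block so old data fades out.
        self.total *= DECAY
        self.confirmed = [c * DECAY for c in self.confirmed]

    def record_tx(self, blocks_to_confirm):
        # A tx confirmed in k blocks also counts as confirmed within k+1, k+2, ...
        self.total += 1.0
        for i in range(blocks_to_confirm - 1, len(self.confirmed)):
            self.confirmed[i] += 1.0

    def pct_confirmed_within(self, blocks):
        return self.confirmed[blocks - 1] / self.total if self.total else 0.0
```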
-Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
Does what the old txnmall.sh test did.
Creates an equivalent malleated clone and tests that SyncMetaData
syncs the accounting effects from the original transaction to the
confirmed clone.
Tests error reporting of transaction signing via RPC call "signrawtransaction".
Expected results:
Test 1: create and sign a valid raw transaction with one input:
- 1) The transaction has a complete set of signatures
- 2) No script verification error occurred
Test 2: create and sign a raw transaction with one valid, one invalid and one missing input script:
- 3) The transaction has no complete set of signatures
- 4) Two script verification errors occurred
- 5) Script verification errors have certain properties ("txid", "vout", "scriptSig", "sequence", "error")
- 6) The verification errors refer to the invalid (vin 1) and missing input (vin 2)
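A hedged sketch of how such a check could be driven over RPC (not the actual test); it assumes python-bitcoinrpc's AuthServiceProxy, and the helper name and arguments are placeholders:

```python
# Sketch only: verify the error structure signrawtransaction is expected to
# return for Test 2 (one invalid and one missing input).
from bitcoinrpc.authproxy import AuthServiceProxy

REQUIRED_ERROR_KEYS = ("txid", "vout", "scriptSig", "sequence", "error")

def check_sign_errors(rpc_url, raw_tx, prevtxs, privkeys):
    rpc = AuthServiceProxy(rpc_url)
    result = rpc.signrawtransaction(raw_tx, prevtxs, privkeys)
    assert result["complete"] is False        # 3) no complete set of signatures
    errors = result["errors"]
    assert len(errors) == 2                   # 4) two script verification errors
    for err in errors:                        # 5) each error carries the listed fields
        for key in REQUIRED_ERROR_KEYS:
            assert key in err
    return errors
```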
bba2216 RPC test for "#5418 Report missing inputs in sendrawtransaction" (Jonas Schnelli)
de8e801 Report missing inputs in sendrawtransaction (Pieter Wuille)
Previously, each NodeConnCB had its own lock to synchronize data structures
used by the testing thread and the networking thread, and NodeConn provided a
separate additional lock for synchronizing access to each send buffer. This
commit replaces those locks with a single global lock (mininode_lock) that we
use to synchronize access to all data structures shared by the two threads.
Updates comptool and maxblocksinflight to use the new synchronization
semantics, eliminating previous race conditions within comptool, and re-enables
invalidblockrequest.py in travis.
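A minimal sketch of the pattern, assuming the lock name from this description; the callback class and polling helper shown here are illustrative, not the framework's actual classes:

```python
import threading
import time

# Single global lock guarding every data structure shared between the
# networking thread and the testing thread (name taken from the description).
mininode_lock = threading.RLock()

class ExampleCallbacks:
    def __init__(self):
        self.last_inv = None        # written by the networking thread

    def on_inv(self, conn, message):
        with mininode_lock:         # networking thread takes the global lock
            self.last_inv = message

    def wait_for_inv(self, timeout=10):
        deadline = time.time() + timeout
        while time.time() < deadline:
            with mininode_lock:     # testing thread uses the same lock
                if self.last_inv is not None:
                    return self.last_inv
            time.sleep(0.05)
        raise AssertionError("timed out waiting for inv")
```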
2703412 Fix default binary in p2p tests to use environment variable (Suhas Daftuar)
29bff0e Add some travis debugging for python scripts (Suhas Daftuar)
d76412b Add script manipulation tools for use in mininode testing framework (Suhas Daftuar)
b93974c Add comparison tool test runner, built on mininode (Suhas Daftuar)
6c1d1ba Python p2p testing framework (Suhas Daftuar)
script.py is modified from the code in python-bitcoinlib, and provides tools
for manipulating and creating CScript objects.
bignum.py is a dependency for script.py
script_test.py is an example test that uses the script tools for running a test
that compares the behavior of two nodes, in a comptool-style test, for each of
the test cases in the bitcoin unit test script files, script_valid.json and
script_invalid.json. (This test is very slow to run, but is a proof of concept
for how we can write tests to compare consensus-critical behavior between
different versions of bitcoind.)
bipdersig-p2p.py is another example test in the comptool framework, which tests
deployment of BIP DERSIG for a single node. It uses the script.py tools for
manipulating signatures to be non-DER compliant.
comptool.py creates a tool for running a test suite on top of the mininode p2p
framework. It supports two types of tests: those for which we expect certain
behavior (acceptance or rejection of a block or transaction) and those for
which we are just comparing that the behavior of 2 or more nodes is the same.
blockstore.py defines BlockStore and TxStore, which provide db-backed maps
between block/tx hashes and the corresponding block or tx.
blocktools.py defines utility functions for creating and manipulating blocks
and transactions.
invalidblockrequest.py is an example test in the comptool framework, which
tests the behavior of a single node when sent two different types of invalid
blocks (a block with a duplicated transaction and a block with a bad coinbase
value).
mininode.py provides a framework for connecting to a bitcoin node over the p2p
network. NodeConn is the main object that manages connectivity to a node and
provides callbacks; the interface for those callbacks is defined by NodeConnCB.
Also defined are all the data structures from Bitcoin Core that are passed over the
network (CBlock, CTransaction, etc.), along with de-/serialization functions.
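A hedged usage sketch based on this description; the NodeConn argument order, the NetworkThread helper, and the port are assumptions and may not match the actual mininode.py API:

```python
# Sketch: subclass NodeConnCB to receive callbacks, then open a connection.
from mininode import NodeConn, NodeConnCB, NetworkThread  # assumed exports

class MyCallbacks(NodeConnCB):
    def on_block(self, conn, message):
        # Called by the networking thread when a "block" message arrives.
        print("received a block")

def connect(host="127.0.0.1", port=18444):
    callbacks = MyCallbacks()
    # Assumed argument order: destination address, port, rpc handle, callback object.
    conn = NodeConn(host, port, None, callbacks)
    NetworkThread().start()   # assumed helper that runs the network event loop
    return conn, callbacks
```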
maxblocksinflight.py is an example test using this framework that tests whether
a node is limiting the maximum number of in-flight block requests.
This also adds support to util.py for specifying the binary to use when
starting nodes (for tests that compare the behavior of different bitcoind
versions), and adds maxblocksinflight.py to the pull tester.
1ec900a Remove broken+useless lock/unlock log prints (Matt Corallo)
352ed22 Add merkle blocks test (Matt Corallo)
59ed61b Add RPC call to generate and verify merkle blocks (Matt Corallo)
30da90d Add CMerkleBlock constructor for tx set + block and an empty one (Matt Corallo)
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part off that work.
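An illustrative check of the prune target described above (not pruning.py itself); the datadir layout under regtest/blocks and the small slack allowance are assumptions:

```python
import glob
import os

PRUNE_TARGET_MB = 550   # minimum value of -prune=N per the description

def blocks_dir_size_mb(datadir):
    # Sum the sizes of block (blk*.dat) and undo (rev*.dat) files.
    total = 0
    for pattern in ("blk*.dat", "rev*.dat"):
        for path in glob.glob(os.path.join(datadir, "regtest", "blocks", pattern)):
            total += os.path.getsize(path)
    return total / (1024.0 * 1024.0)

def assert_under_prune_target(datadir, slack_mb=10):
    used = blocks_dir_size_mb(datadir)
    assert used <= PRUNE_TARGET_MB + slack_mb, "prune target exceeded: %.1f MB" % used
```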
Has parts of @mhearn's #4351:
* allows querying the utxos over REST
* same binary inputs and outputs as specified in BIP 64
* input format = output format
* various rpc/rest regtests
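A hedged example of querying UTXOs over REST in JSON form; the URL layout and port are assumptions based on this description and BIP 64, and the txid is supplied by the caller:

```python
import json
import urllib.request

def rest_getutxos(txid, vout, host="127.0.0.1", port=18332):
    # Assumed endpoint shape: /rest/getutxos/<txid>-<vout>.json
    url = "http://%s:%d/rest/getutxos/%s-%d.json" % (host, port, txid, vout)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode())
```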
`--nocleanup` should provide a way to preserve test data, but it should not affect whether nodes are stopped after the test run.
In particular, running RPC tests with `--nocleanup` currently can leave several active `bitcoind` processes that are never terminated properly.
Some tests in CheckBlockIndex require chainActive.Tip(), but when reindexing, chainActive has not been set on the first call to CheckBlockIndex.
reindex.py starts a node, mines 3 blocks, stops, and reindexes with CheckBlockIndex enabled.
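A rough outline of that flow, assuming the RPC test framework's helper names (start_nodes, stop_nodes, wait_bitcoinds) from util.py; the flags and helper signatures are taken on assumption:

```python
from util import start_nodes, stop_nodes, wait_bitcoinds  # assumed helpers

def run_reindex_test(test_dir):
    nodes = start_nodes(1, test_dir)
    nodes[0].setgenerate(True, 3)                 # mine 3 blocks on regtest
    stop_nodes(nodes)
    wait_bitcoinds()
    # Restart with -reindex so CheckBlockIndex runs before chainActive has a tip.
    nodes = start_nodes(1, test_dir, [["-reindex", "-checkblockindex=1"]])
    assert nodes[0].getblockcount() == 3
    stop_nodes(nodes)
    wait_bitcoinds()
```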
Remove reliance on accounting "move" ledger entries. Instead,
create funding transactions (and deal with fee complexities).
Do not rely on broken SyncMetaData. Instead expect double-spend
amount to be debited from the default "" account.
Adds a regression test for the wallet's ResendWalletTransactions function, which uses a new, hidden RPC command "resendwallettransactions."
I refactored main's Broadcast signal so it is passed the best-block time, which let me remove a global variable shared between main.cpp and the wallet (nTimeBestReceived).
I also manually tested the "rebroadcast unconfirmed every half hour or so" functionality by:
1. Running bitcoind -connect=0.0.0.0:8333
2. Creating a couple of send-to-self transactions
3. Connecting to a peer using -addnode
4. Waiting a while, monitoring debug.log, until I saw:
```2015-03-23 18:48:10 ResendWalletTransactions: rebroadcast 2 unconfirmed transactions```
One last change: don't bother putting ResendWalletTransactions messages in debug.log unless unconfirmed transactions were actually rebroadcast.
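A hedged sketch of exercising the hidden RPC; it assumes python-bitcoinrpc and that the call returns the txids it rebroadcast (URL and amount are placeholders):

```python
from bitcoinrpc.authproxy import AuthServiceProxy

def check_resend(rpc_url):
    rpc = AuthServiceProxy(rpc_url)
    # Create an unconfirmed send-to-self transaction.
    txid = rpc.sendtoaddress(rpc.getnewaddress(), 1)
    # Hidden, test-only RPC from this change; assumed to return rebroadcast txids.
    resent = rpc.resendwallettransactions()
    assert txid in resent
    return resent
```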
1d9b378 qa/rpc-tests/wallet: Tests for sendmany (Luke Dashjr)
40a7573 rpcwallet/sendmany: Just take an array of addresses to subtract fees from, rather than an Object with all values being identical (Luke Dashjr)
292623a Subtract fee from amount (Cozz Lovan)
90a43c1 [Qt] Code-movement-only: Format confirmation message in sendcoinsdialog (Cozz Lovan)
34318d7 RPC-test based on invalidateblock for mempool coinbase spends (Gavin Andresen)
7fd6219 Make CTxMemPool::remove more efficient by avoiding recursion (Matt Corallo)
b7b4318 Make CTxMemPool::check more thorough by using CheckInputs (Matt Corallo)
723d12c Remove txn which are invalidated by coinbase maturity during reorg (Matt Corallo)
868d041 Remove coinbase-dependent transactions during reorg. (Matt Corallo)
- the regular REST block request returns the block with full, unfolded tx details
- /rest/block/notxdetails/<HASH> returns the block with transactions represented only by their hashes
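A hedged example contrasting the two forms; the `.json` suffix and port are assumptions, and the block hash is supplied by the caller:

```python
import json
import urllib.request

def rest_block_txs(block_hash, notxdetails=False, host="127.0.0.1", port=18332):
    path = "notxdetails/" if notxdetails else ""
    url = "http://%s:%d/rest/block/%s%s.json" % (host, port, path, block_hash)
    with urllib.request.urlopen(url) as resp:
        block = json.loads(resp.read().decode())
    # With notxdetails, "tx" is expected to hold txid strings rather than
    # fully expanded transaction objects.
    return block["tx"]
```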
Immature coinbase spends are allowed in the memory pool if they can be mined in the next block.
They are not allowed in the memory pool if they cannot be mined in the next block.
This regression test tests those edge cases.
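The rule reduces to a simple height comparison; a small sketch, assuming Bitcoin Core's COINBASE_MATURITY of 100:

```python
COINBASE_MATURITY = 100   # blocks before a coinbase output may be spent

def coinbase_spendable_at(spend_height, coinbase_height):
    # Mirrors the consensus check: spend height minus coinbase height
    # must be at least COINBASE_MATURITY.
    return spend_height - coinbase_height >= COINBASE_MATURITY

def allowed_in_mempool(coinbase_height, chain_tip_height):
    # Accept the spend only if it could be mined in the *next* block.
    return coinbase_spendable_at(chain_tip_height + 1, coinbase_height)
```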
Ported txnmall.sh to Python, and updated to match
recent transaction malleability changes.
I also modified it so it tests both double-spending
confirmed and unconfirmed (only-in-mempool) transactions.
Renamed to txn_doublespend, since that is really what is
being tested. And told the pull-tester to run both
variations on this test.
3c30f27 travis: disable rpc tests for windows until they're not so flaky (Cory Fields)
daf03e7 RPC tests: create initial chain with specific timestamps (Gavin Andresen)
a8b2ce5 regression test only setmocktime RPC call (Gavin Andresen)
There's a brief race here: the process might've already exited and cleaned up
after itself. If that's the case, reading from the pidfile will harmlessly
fail. Keep those quiet.
This adds an undocumented, regtest-only command-line option, -blockversion=N,
to set block.nVersion (intended for regression testing only).
Adds to the "has the rest of the network upgraded to a
block.nVersion we don't understand" code so it calls
-alertnotify when 51 of the last 100 blocks are up-version.
But it alerts only once, not with every subsequent new up-version
block.
And adds a forknotify.py regression test to make sure it works.
Tested using forknotify.py:
Before adding CAlert::Notify, get:
Assertion failed: -alertnotify did not warn of up-version blocks
Before adding code to only alert once:
Assertion failed: -alertnotify excessive warning of up-version blocks
After final code in this pull:
Tests successful
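A small sketch of the alert condition described above; the threshold and once-only latch follow the text, while the current block version shown is illustrative:

```python
CURRENT_VERSION = 2   # highest block.nVersion this node understands (illustrative)
WINDOW = 100
THRESHOLD = 51

def should_alert(recent_block_versions, already_alerted):
    window = recent_block_versions[-WINDOW:]
    upversion = sum(1 for v in window if v > CURRENT_VERSION)
    # Fire -alertnotify once, not for every subsequent up-version block.
    return upversion >= THRESHOLD and not already_alerted
```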
The entire debug log would be huge, and could cause issues for automated tools
like travis. Printing 200 lines is an initial guess at a reasonable number,
more may be required.
Port over https://github.com/chronokings/huntercoin/pull/19 from
Huntercoin: This implements a new RPC command "getchaintips" that can be
used to find all currently active chain heads. This is similar to the
-printblocktree startup option, but it can be used on a running daemon via
the RPC interface, without restarting.
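A hedged example of calling the new RPC via python-bitcoinrpc; the per-tip result fields shown are assumptions, since the description above only says it lists the active chain heads:

```python
from bitcoinrpc.authproxy import AuthServiceProxy

def print_chain_tips(rpc_url):
    rpc = AuthServiceProxy(rpc_url)
    for tip in rpc.getchaintips():
        # Assumed per-tip fields: the tip's height, hash, and branch length.
        print(tip.get("height"), tip.get("hash"), tip.get("branchlen"))
```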
The regtest framework is local, so often there is no need to
discover our external IP. Setting -discover=0 in util.py works
around shutdown hang caused by GetExternalIP waiting in recv().
A user on IRC reported an issue where `getrawchangeaddress`
keeps returning a single address when the keypool is exhausted.
In my opinion this is strange behaviour.
- Change CReserveKey to fail when running out of keys in the keypool.
- Make `getrawchangeaddress` return RPC_WALLET_KEYPOOL_RAN_OUT when
unable to create an address.
- Add a Python RPC test for checking the keypool behaviour in combination
with encrypted wallets.
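A hedged sketch of the behaviour being tested (not the actual test): with an already-encrypted wallet whose keypool holds a single key, the second getrawchangeaddress call should fail with RPC_WALLET_KEYPOOL_RAN_OUT (-12). The passphrase, URL, and keypool setup are placeholders:

```python
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException

def check_keypool_exhaustion(rpc_url, passphrase):
    rpc = AuthServiceProxy(rpc_url)
    rpc.walletpassphrase(passphrase, 10)
    rpc.keypoolrefill(1)          # leave one key in the pool (assumes a freshly flushed pool)
    rpc.walletlock()
    rpc.getrawchangeaddress()     # consumes the last pooled key
    try:
        rpc.getrawchangeaddress() # keypool now empty and wallet locked
    except JSONRPCException as e:
        assert e.error["code"] == -12   # RPC_WALLET_KEYPOOL_RAN_OUT
        return True
    raise AssertionError("expected keypool exhaustion error")
```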
This adds a -whitelist option to specify subnet ranges from which peers
that connect are whitelisted. In addition, there is a -whitebind option
which works like -bind, except peers connecting to it are also
whitelisted (allowing a separate listen port for trusted connections).
Being whitelisted has two effects (for now):
* They are immune to DoS disconnection/banning.
* Transactions they broadcast (which are valid) are always relayed,
even if they were already in the mempool. This means that a node
can function as a gateway for a local network, and that rebroadcasts
from the local network will work as expected.
Whitelisting replaces the magic exemption localhost had for DoS
disconnection (local addresses are still never banned, though), which
implied hidden service connects (from a localhost Tor node) were
incorrectly immune to DoS disconnection as well. This old behaviour is
removed for that reason, but it can be restored by specifying
-whitelist=127.0.0.1 or -whitelist=::1. -whitebind is safer to use when
non-trusted localhost connections are expected (such as hidden services).
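A hedged illustration of wiring these options into the RPC test framework; start_nodes is assumed from util.py, and the subnet, ports, and node roles are placeholders:

```python
from util import start_nodes  # assumed helper from the RPC test framework

def start_gateway_nodes(test_dir):
    extra_args = [
        # Node 0: whitelist the local network, so its peers' valid transactions
        # are always relayed and those peers are never DoS-banned.
        ["-whitelist=192.168.0.0/16"],
        # Node 1: a normal bind plus a separate, trusted (whitelisted) listen port.
        ["-bind=127.0.0.1:18444", "-whitebind=127.0.0.1:18555"],
    ]
    return start_nodes(2, test_dir, extra_args)
```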
This avoids a race condition in which the connection was
made but the version handshake is not completed yet. In that
case transactions won't be broadcasted to a peer yet, and
the nodes will wait forever for their mempools to sync.
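An illustrative guard against that race: wait until every connected peer reports a non-zero protocol version in getpeerinfo (field behaviour assumed) before relying on transaction relay:

```python
import time

def wait_for_handshakes(rpc, timeout=30):
    deadline = time.time() + timeout
    while time.time() < deadline:
        peers = rpc.getpeerinfo()
        # A peer's "version" stays 0 until the version handshake completes (assumption).
        if peers and all(p.get("version", 0) != 0 for p in peers):
            return
        time.sleep(0.5)
    raise AssertionError("version handshake did not complete in time")
```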
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets with at most 100 entries in each recording the
fees-per-kilobyte paid by those transactions.
(separate buckets are kept for transactions that confirmed because
they are high-priority)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm in X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50'th-highest-fee-entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150'th-highest-fee-entry, etc.
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.
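A compact sketch of the estimate exactly as described: pool every sample, sort by fee from highest to lowest, and take the (100*k - 50)'th entry as the fee needed to confirm within k blocks. The bucket contents are whatever the caller supplies:

```python
def estimate_fee(buckets, k):
    """buckets: lists of fee-per-kB samples; k: target number of blocks to confirm."""
    samples = sorted((fee for bucket in buckets for fee in bucket), reverse=True)
    index = 100 * k - 50           # 50th for k=1, 150th for k=2, ... (1-based)
    if index > len(samples):
        return None                # not enough data to estimate
    return samples[index - 1]
```

Because the index grows with k while the list is sorted from highest fee to lowest, the estimate for k blocks is always greater than or equal to the estimate for k+1 blocks, matching the monotonicity property described above.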