merge bitcoin changes

This commit is contained in:
Jimmy Kiselak 2015-04-06 11:43:13 -04:00
commit 1c85278d3e
44 changed files with 1571 additions and 407 deletions

View file

@ -13,6 +13,12 @@
# Connect via a SOCKS5 proxy
#proxy=127.0.0.1:9050
# Bind to given address and always listen on it. Use [host]:port notation for IPv6
#bind=<addr>
# Bind to given address and whitelist peers connecting to it. Use [host]:port notation for IPv6
#whitebind=<addr>
##############################################################
## Quick Primer on addnode vs connect ##
## Let's say for instance you use addnode=4.2.2.4 ##
@ -57,6 +63,10 @@
# server=1 tells Bitcoin-QT and bitcoind to accept JSON-RPC commands
#server=0
# Bind to given address to listen for JSON-RPC connections. Use [host]:port notation for IPv6.
# This option can be specified multiple times (default: bind to all interfaces)
#rpcbind=<addr>
# You must set rpcuser and rpcpassword to secure the JSON-RPC api
#rpcuser=Ulysseys
#rpcpassword=YourSuperGreatPasswordNumber_DO_NOT_USE_THIS_OR_YOU_WILL_GET_ROBBED_385593
@ -94,6 +104,14 @@
#rpcsslcertificatechainfile=server.cert
#rpcsslprivatekeyfile=server.pem
# Transaction Fee Changes in 0.10.0
# Send transactions as zero-fee transactions if possible (default: 0)
#sendfreetransactions=0
# Create transactions that have enough fees (or priority) so they are likely to begin confirmation within n blocks (default: 1).
# This setting is overridden by the -paytxfee option.
#txconfirmtarget=n
# Miscellaneous options

View file

@ -0,0 +1,762 @@
Bitcoin Core version 0.10.0 is now available from:
https://bitcoin.org/bin/0.10.0/
This is a new major version release, bringing both new features and
bug fixes.
Please report bugs using the issue tracker at github:
https://github.com/bitcoin/bitcoin/issues
Upgrading and downgrading
=========================
How to Upgrade
--------------
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or
bitcoind/bitcoin-qt (on Linux).
Downgrading warning
---------------------
Because release 0.10.0 makes use of headers-first synchronization and parallel
block download (see further), the block files and databases are not
backwards-compatible with older versions of Bitcoin Core or other software:
* Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
* The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support.
If you want to be able to downgrade smoothly, make a backup of your entire data
directory. Without this, your node will need to start syncing (or importing from
bootstrap.dat) anew afterwards. It is possible that the data from a completely
synchronised 0.10 node may be usable in older versions as-is, but this is not
supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility.
Notable changes
===============
Faster synchronization
----------------------
Bitcoin Core now uses 'headers-first synchronization'. This means that we first
ask peers for block headers (a total of 27 megabytes, as of December 2014) and
validate those. In a second stage, when the headers have been discovered, we
download the blocks. However, as we already know about the whole chain in
advance, the blocks can be downloaded in parallel from all available peers.
In practice, this means a much faster and more robust synchronization. On
recent hardware with a decent network link, it can be as little as 3 hours
for an initial full synchronization. You may notice slower progress in the
very first few minutes, when headers are still being fetched and verified, but
it should gain speed afterwards.
A few RPCs were added/updated as a result of this:
- `getblockchaininfo` now returns the number of validated headers in addition to
the number of validated blocks.
- `getpeerinfo` lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we
have requested from peers (but haven't received yet) are also listed as
'inflight'.
- A new RPC `getchaintips` lists all known branches of the block chain,
including those we only have headers for.
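For example, a monitoring script can poll these RPCs to follow synchronization
progress. The sketch below assumes the third-party `python-bitcoinrpc` package
and placeholder RPC credentials, neither of which ships with Bitcoin Core:

```python
# Sketch: follow headers-first sync over JSON-RPC (assumes python-bitcoinrpc
# and placeholder credentials; adjust the URL for your node).
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

chain = rpc.getblockchaininfo()
print("headers validated:", chain["headers"])
print("blocks validated: ", chain["blocks"])

# Per-peer view of blocks/headers we have in common, plus in-flight requests.
for peer in rpc.getpeerinfo():
    print(peer["addr"],
          "headers:", peer.get("synced_headers", "n/a"),
          "blocks:", peer.get("synced_blocks", "n/a"),
          "inflight:", peer.get("inflight", []))

# All known branches of the block chain, including headers-only ones.
for tip in rpc.getchaintips():
    print(tip["height"], tip["status"], tip["hash"])
```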
Transaction fee changes
-----------------------
This release automatically estimates how high a transaction fee (or how
high a priority) transactions require to be confirmed quickly. The default
settings will create transactions that confirm quickly; see the new
'txconfirmtarget' setting to control the tradeoff between fees and
confirmation times. Fees are added by default unless the 'sendfreetransactions'
setting is enabled.
Prior releases used hard-coded fees (and priorities), and would
sometimes create transactions that took a very long time to confirm.
Statistics used to estimate fees and priorities are saved in the
data directory in the `fee_estimates.dat` file just before
program shutdown, and are read in at startup.
New command line options for transaction fee changes:
- `-txconfirmtarget=n` : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting
is overridden by the -paytxfee option.
- `-sendfreetransactions` : Send transactions as zero-fee transactions if possible
(default: 0)
New RPC commands for fee estimation:
- `estimatefee nblocks` : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough
transactions have been observed to compute a good estimate.
- `estimatepriority nblocks` : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not
enough free transactions have been observed to compute a good
estimate.
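As an illustration, the new estimators can be queried over JSON-RPC (a sketch
assuming the third-party `python-bitcoinrpc` package and placeholder
credentials):

```python
# Sketch: query fee and priority estimates for several confirmation targets.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

for target in (1, 3, 6, 25):
    fee = rpc.estimatefee(target)         # BTC per 1,000 bytes, or -1
    prio = rpc.estimatepriority(target)   # priority estimate, or -1
    print("within %2d blocks: fee=%s priority=%s" % (target, fee, prio))

# A result of -1 means the node has not yet observed enough transactions
# to produce a good estimate.
```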
RPC access control changes
--------------------------
Subnet matching for the purpose of access control is now done
by matching the binary network address, instead of with string wildcard matching.
For the user this means that `-rpcallowip` takes a subnet specification, which can be
- a single IP address (e.g. `1.2.3.4` or `fe80::0012:3456:789a:bcde`)
- a network/CIDR (e.g. `1.2.3.0/24` or `fe80::0000/64`)
- a network/netmask (e.g. `1.2.3.4/255.255.255.0` or `fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff`)
An arbitrary number of `-rpcallowip` arguments can be given. An incoming connection will be accepted if its origin address
matches one of them.
For example:
| 0.9.x and before | 0.10.x |
|--------------------------------------------|---------------------------------------|
| `-rpcallowip=192.168.1.1` | `-rpcallowip=192.168.1.1` (unchanged) |
| `-rpcallowip=192.168.1.*` | `-rpcallowip=192.168.1.0/24` |
| `-rpcallowip=192.168.*` | `-rpcallowip=192.168.0.0/16` |
| `-rpcallowip=*` (dangerous!) | `-rpcallowip=::/0` (still dangerous!) |
Using wildcards will result in the rule being rejected with the following error in debug.log:
Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24).
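Conceptually, the new behaviour is plain subnet membership: an origin address
is accepted if it falls inside any of the allowed subnets. The sketch below
uses Python's `ipaddress` module purely as an illustration; it is not Bitcoin
Core's implementation:

```python
# Conceptual illustration of binary subnet matching (not Bitcoin Core's code).
import ipaddress

allowed = [
    ipaddress.ip_network("192.168.1.1/32"),          # single address
    ipaddress.ip_network("192.168.1.0/24"),          # network/CIDR
    ipaddress.ip_network("1.2.3.0/255.255.255.0"),   # network/netmask
]

def rpc_allowed(origin):
    addr = ipaddress.ip_address(origin)
    return any(addr in net for net in allowed)

print(rpc_allowed("192.168.1.77"))   # True  (inside 192.168.1.0/24)
print(rpc_allowed("10.0.0.5"))       # False (matches nothing)
```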
REST interface
--------------
A new HTTP API is exposed when running with the `-rest` flag, which allows
unauthenticated access to public node data.
It is served on the same port as RPC, but does not need a password, and uses
plain HTTP instead of JSON-RPC.
Assuming a local RPC server running on port 8332, it is possible to request:
- Blocks: http://localhost:8332/rest/block/*HASH*.*EXT*
- Blocks without transactions: http://localhost:8332/rest/block/notxdetails/*HASH*.*EXT*
- Transactions (requires `-txindex`): http://localhost:8332/rest/tx/*HASH*.*EXT*
In every case, *EXT* can be `bin` (for raw binary data), `hex` (for hex-encoded
binary) or `json`.
For more details, see the `doc/REST-interface.md` document in the repository.
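For example, a block can be fetched in JSON or raw binary form with nothing
more than the Python standard library (a sketch: the node must be started with
`-rest`, and the hash below is the mainnet genesis block):

```python
# Sketch: fetch a block via the unauthenticated REST interface.
import json
import urllib.request

BASE = "http://127.0.0.1:8332/rest"
GENESIS = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"

with urllib.request.urlopen("%s/block/%s.json" % (BASE, GENESIS)) as resp:
    block = json.loads(resp.read().decode())
print(block["hash"], "with", len(block.get("tx", [])), "transactions")

# The same block as raw binary data.
with urllib.request.urlopen("%s/block/%s.bin" % (BASE, GENESIS)) as resp:
    raw = resp.read()
print("raw block size:", len(raw), "bytes")
```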
RPC Server "Warm-Up" Mode
-------------------------
The RPC server is started earlier now, before most of the expensive
initialisations like loading the block index. It is now available almost
immediately after starting the process. However, until all initialisations
are done, it always returns an immediate error with code -28 to all calls.
This new behaviour can be useful for clients to know that a server is already
started and will be available soon (for instance, so that they do not
have to start it themselves).
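A client can simply retry until the -28 error goes away. A minimal sketch,
assuming the third-party `python-bitcoinrpc` package and placeholder
credentials:

```python
# Sketch: wait for the RPC server to leave warm-up mode (error code -28).
import time
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException

def wait_for_node(url, timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            return AuthServiceProxy(url).getblockcount()
        except JSONRPCException as e:
            if e.error.get("code") == -28:   # still warming up
                time.sleep(1)
                continue
            raise                            # a real RPC error
        except IOError:                      # server not listening yet
            time.sleep(1)
    raise RuntimeError("node did not come up within %s seconds" % timeout)

print("block count:", wait_for_node("http://rpcuser:rpcpassword@127.0.0.1:8332"))
```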
Improved signing security
-------------------------
For 0.10 the security of signing against unusual attacks has been
improved by making the signatures constant time and deterministic.
This change is a result of switching signing to use libsecp256k1
instead of OpenSSL. Libsecp256k1 is a cryptographic library
optimized for the curve Bitcoin uses which was created by Bitcoin
Core developer Pieter Wuille.
There exist attacks[1] against most ECC implementations where an
attacker on shared virtual machine hardware could extract a private
key if they could cause a target to sign using the same key hundreds
of times. While using shared hosts and reusing keys are inadvisable
for other reasons, it's a better practice to avoid the exposure.
OpenSSL has code in their source repository for derandomization
and reduction in timing leaks that we've eagerly wanted to use for a
long time, but this functionality has still not made its
way into a released version of OpenSSL. Libsecp256k1 achieves
significantly stronger protection: As far as we're aware this is
the only deployed implementation of constant time signing for
the curve Bitcoin uses and we have reason to believe that
libsecp256k1 is better tested and more thoroughly reviewed
than the implementation in OpenSSL.
[1] https://eprint.iacr.org/2014/161.pdf
Watch-only wallet support
-------------------------
The wallet can now track transactions to and from wallets for which you know
all addresses (or scripts), even without the private keys.
This can be used to track payments without needing the private keys online on a
possibly vulnerable system. In addition, it can help for (manual) construction
of multisig transactions where you are only one of the signers.
One new RPC, `importaddress`, is added which functions similarly to
`importprivkey`, but instead takes an address or script (in hexadecimal) as
argument. After using it, outputs credited to this address or script are
considered to be received, and transactions consuming these outputs will be
considered to be sent.
The following RPCs have optional support for watch-only:
`getbalance`, `listreceivedbyaddress`, `listreceivedbyaccount`,
`listtransactions`, `listaccounts`, `listsinceblock`, `gettransaction`. See the
RPC documentation for those methods for more information.
Compared to using `getrawtransaction`, this mechanism does not require
`-txindex`, scales better, integrates better with the wallet, and is compatible
with future block chain pruning functionality. It does mean that all relevant
addresses need to be added to the wallet before the payment, though.
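For example (a sketch assuming the third-party `python-bitcoinrpc` package,
placeholder credentials, and an arbitrary well-known address):

```python
# Sketch: watch an address without holding its private key.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

watch_addr = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"   # example address only

# The label is arbitrary; rescan=True walks the existing chain for past
# payments and can take a long time on mainnet.
rpc.importaddress(watch_addr, "watchonly-example", True)

# Watch-only amounts only show up when the includeWatchonly flag is set.
print(rpc.getbalance("*", 1, True))
for tx in rpc.listtransactions("*", 10, 0, True):
    print(tx["category"], tx["amount"], tx.get("involvesWatchonly", False))
```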
Consensus library
-----------------
Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library.
The purpose of this library is to make the verification functionality that is
critical to Bitcoin's consensus available to other applications, e.g. to language
bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or
alternative node implementations.
This library is called `libbitcoinconsensus.so` (or, `.dll` for Windows).
Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h).
In its initial version the API includes two functions:
- `bitcoinconsensus_verify_script` verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
- `bitcoinconsensus_version` returns the API version, currently at an experimental `0`
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface
for existing methods should remain stable.
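As an illustration, the library can be loaded from Python with `ctypes`. The
prototype sketched in the comment below is an assumption and should be checked
against `bitcoinconsensus.h` before use:

```python
# Sketch: load libbitcoinconsensus and query its API version.
import ctypes

lib = ctypes.CDLL("libbitcoinconsensus.so")   # ".dll" on Windows

lib.bitcoinconsensus_version.restype = ctypes.c_uint
print("consensus API version:", lib.bitcoinconsensus_version())  # expected: 0

# Assumed prototype for the script verifier (verify against bitcoinconsensus.h):
#   int bitcoinconsensus_verify_script(const unsigned char* scriptPubKey,
#                                      unsigned int scriptPubKeyLen,
#                                      const unsigned char* txTo,
#                                      unsigned int txToLen,
#                                      unsigned int nIn, unsigned int flags,
#                                      bitcoinconsensus_error* err);
lib.bitcoinconsensus_verify_script.restype = ctypes.c_int
```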
Standard script rules relaxed for P2SH addresses
------------------------------------------------
The IsStandard() rules have been almost completely removed for P2SH
redemption scripts, allowing applications to make use of any valid
script type, such as "n-of-m OR y", hash-locked oracle addresses, etc.
While the Bitcoin protocol has always supported these types of script,
actually using them on mainnet has been previously inconvenient as
standard Bitcoin Core nodes wouldn't relay them to miners, nor would
most miners include them in blocks they mined.
bitcoin-tx
----------
It has been observed that many of the RPC functions offered by bitcoind are
"pure functions", and operate independently of the bitcoind wallet. This
included many of the RPC "raw transaction" API functions, such as
createrawtransaction.
bitcoin-tx is a newly introduced command line utility designed to enable easy
manipulation of bitcoin transactions. A summary of its operation may be
obtained via "bitcoin-tx --help". Transactions may be created or signed in a
manner similar to the RPC raw tx API. Transactions may be updated, deleting
inputs or outputs, or appending new inputs and outputs. Custom scripts may be
easily composed using a simple text notation, borrowed from the bitcoin test
suite.
This tool may be used for experimenting with new transaction types, signing
multi-party transactions, and many other uses. Long term, the goal is to
deprecate and remove "pure function" RPC API calls, as those do not require a
server round-trip to execute.
Other utilities "bitcoin-key" and "bitcoin-script" have been proposed, making
key and script operations easily accessible via command line.
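For example, the utility can be driven from a script. The sketch below uses
placeholder txid and address values; confirm the exact command syntax with
"bitcoin-tx --help":

```python
# Sketch: build and decode a raw transaction with bitcoin-tx.
import subprocess

txid = "aa" * 32                                # placeholder input txid
addr = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"     # placeholder destination

# Create a transaction spending output 0 of txid to addr for 0.1 BTC.
raw = subprocess.check_output([
    "bitcoin-tx", "-create",
    "in=%s:0" % txid,
    "outaddr=0.1:%s" % addr,
]).strip().decode()
print("unsigned raw tx:", raw)

# -json prints a decoded view of the transaction instead of raw hex.
print(subprocess.check_output(["bitcoin-tx", "-json", raw]).decode())
```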
Mining and relay policy enhancements
------------------------------------
Bitcoin Core's block templates are now for version 3 blocks only, and any mining
software relying on its `getblocktemplate` must be updated in parallel to use
libblkmaker either version 0.4.2 or any version from 0.5.1 onward.
If you are solo mining, this will affect you the moment you upgrade Bitcoin
Core, which must be done prior to BIP66 achieving its 951/1001 status.
If you are mining with the stratum mining protocol: this does not affect you.
If you are mining with the getblocktemplate protocol to a pool: this will affect
you at the pool operator's discretion, which must be no later than BIP66
achieving its 951/1001 status.
The `prioritisetransaction` RPC method has been added to enable miners to
manipulate the priority of transactions on an individual basis.
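For example (a sketch assuming the third-party `python-bitcoinrpc` package and
placeholder credentials; check `help prioritisetransaction` for the unit of the
fee delta):

```python
# Sketch: bump how the local node ranks a transaction for block templates.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

txid = "aa" * 32   # placeholder transaction id
rpc.prioritisetransaction(txid, 0, 10000)   # no priority change, +10000 fee delta
```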
Bitcoin Core now supports BIP 22 long polling, so mining software can be
notified immediately of new templates rather than having to poll periodically.
Support for BIP 23 block proposals is now available in Bitcoin Core's
`getblocktemplate` method. This enables miners to check the basic validity of
their next block before expending work on it, reducing risks of accidental
hardforks or mining invalid blocks.
Two new options to control mining policy:
- `-datacarrier=0/1` : Relay and mine "data carrier" (OP_RETURN) transactions
if this is 1.
- `-datacarriersize=n` : Maximum size, in bytes, we consider acceptable for
"data carrier" outputs.
The relay policy has changed to more properly implement the desired behavior of not
relaying free (or very low fee) transactions unless they have a priority above the
AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.
BIP 66: strict DER encoding for signatures
------------------------------------------
Bitcoin Core 0.10 implements BIP 66, which introduces block version 3, and a new
consensus rule, which prohibits non-DER signatures. Such transactions have been
non-standard since Bitcoin v0.8.0 (released in February 2013), but were
technically still permitted inside blocks.
This change breaks the dependency on OpenSSL's signature parsing, and is
required if implementations would want to remove all of OpenSSL from the
consensus code.
The same miner-voting mechanism as in BIP 34 is used: when 751 out of a
sequence of 1001 blocks have version number 3 or higher, the new consensus
rule becomes active for those blocks. When 951 out of a sequence of 1001
blocks have version number 3 or higher, it becomes mandatory for all blocks.
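A node operator can check how far the version-3 rollout has progressed with a
small script (a sketch assuming the third-party `python-bitcoinrpc` package and
placeholder credentials; it fetches one block per RPC call, so it is slow but
illustrative):

```python
# Sketch: count version-3-or-higher blocks in the last 1001 blocks.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

tip = rpc.getblockcount()
window = 1001
upgraded = sum(
    1 for h in range(tip - window + 1, tip + 1)
    if rpc.getblock(rpc.getblockhash(h))["version"] >= 3
)

print("%d of the last %d blocks are v3 or higher" % (upgraded, window))
print("new rule active    (>= 751):", upgraded >= 751)
print("new rule mandatory (>= 951):", upgraded >= 951)
```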
Backward compatibility with current mining software is NOT provided, thus miners
should read the first paragraph of "Mining and relay policy enhancements" above.
0.10.0 Change log
=================
Detailed release notes follow. This overview includes changes that affect external
behavior, not code moves, refactors or string updates.
RPC:
- `f923c07` Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
- `b641c9c` Fix addnode "onetry": Connect with OpenNetworkConnection
- `171ca77` estimatefee / estimatepriority RPC methods
- `b750cf1` Remove cli functionality from bitcoind
- `f6984e8` Add "chain" to getmininginfo, improve help in getblockchaininfo
- `99ddc6c` Add nLocalServices info to RPC getinfo
- `cf0c47b` Remove getwork() RPC call
- `2a72d45` prioritisetransaction <txid> <priority delta> <priority tx fee>
- `e44fea5` Add an option `-datacarrier` to allow users to disable relaying/mining data carrier transactions
- `2ec5a3d` Prevent easy RPC memory exhaustion attack
- `d4640d7` Added argument to getbalance to include watchonly addresses and fixed errors in balance calculation
- `83f3543` Added argument to listaccounts to include watchonly addresses
- `952877e` Showing 'involvesWatchonly' property for transactions returned by 'listtransactions' and 'listsinceblock'. It is only appended when the transaction involves a watchonly address
- `d7d5d23` Added argument to listtransactions and listsinceblock to include watchonly addresses
- `f87ba3d` added includeWatchonly argument to 'gettransaction' because it affects balance calculation
- `0fa2f88` added includedWatchonly argument to listreceivedbyaddress/...account
- `6c37f7f` `getrawchangeaddress`: fail when keypool exhausted and wallet locked
- `ff6a7af` getblocktemplate: longpolling support
- `c4a321f` Add peerid to getpeerinfo to allow correlation with the logs
- `1b4568c` Add vout to ListTransactions output
- `b33bd7a` Implement "getchaintips" RPC command to monitor blockchain forks
- `733177e` Remove size limit in RPC client, keep it in server
- `6b5b7cb` Categorize rpc help overview
- `6f2c26a` Closely track mempool byte total. Add "getmempoolinfo" RPC
- `aa82795` Add detailed network info to getnetworkinfo RPC
- `01094bd` Don't reveal whether password is <20 or >20 characters in RPC
- `57153d4` rpc: Compute number of confirmations of a block from block height
- `ff36cbe` getnetworkinfo: export local node's client sub-version string
- `d14d7de` SanitizeString: allow '(' and ')'
- `31d6390` Fixed setaccount accepting foreign address
- `b5ec5fe` update getnetworkinfo help with subversion
- `ad6e601` RPC additions after headers-first
- `33dfbf5` rpc: Fix leveldb iterator leak, and flush before `gettxoutsetinfo`
- `2aa6329` Enable customising node policy for datacarrier data size with a -datacarriersize option
- `f877aaa` submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock
- `e69a587` submitblock: Support for returning specific rejection reasons
- `af82884` Add "warmup mode" for RPC server
- `e2655e0` Add unauthenticated HTTP REST interface to public blockchain data
- `683dc40` Disable SSLv3 (in favor of TLS) for the RPC client and server
- `44b4c0d` signrawtransaction: validate private key
- `9765a50` Implement BIP 23 Block Proposal
- `f9de17e` Add warning comment to getinfo
Command-line options:
- `ee21912` Use netmasks instead of wildcards for IP address matching
- `deb3572` Add `-rpcbind` option to allow binding RPC port on a specific interface
- `96b733e` Add `-version` option to get just the version
- `1569353` Add `-stopafterblockimport` option
- `77cbd46` Let -zapwallettxes recover transaction meta data
- `1c750db` remove -tor compatibility code (only allow -onion)
- `4aaa017` rework help messages for fee-related options
- `4278b1d` Clarify error message when invalid -rpcallowip
- `6b407e4` -datadir is now allowed in config files
- `bdd5b58` Add option `-sysperms` to disable 077 umask (create new files with system default umask)
- `cbe39a3` Add "bitcoin-tx" command line utility and supporting modules
- `dbca89b` Trigger -alertnotify if network is upgrading without you
- `ad96e7c` Make -reindex cope with out-of-order blocks
- `16d5194` Skip reindexed blocks individually
- `ec01243` --tracerpc option for regression tests
- `f654f00` Change -genproclimit default to 1
- `3c77714` Make -proxy set all network types, avoiding a connect leak
- `57be955` Remove -printblock, -printblocktree, and -printblockindex
- `ad3d208` remove -maxorphanblocks config parameter since it is no longer functional
Block and transaction handling:
- `7a0e84d` ProcessGetData(): abort if a block file is missing from disk
- `8c93bf4` LoadBlockIndexDB(): Require block db reindex if any `blk*.dat` files are missing
- `77339e5` Get rid of the static chainMostWork (optimization)
- `4e0eed8` Allow ActivateBestChain to release its lock on cs_main
- `18e7216` Push cs_mains down in ProcessBlock
- `fa126ef` Avoid undefined behavior using CFlatData in CScript serialization
- `7f3b4e9` Relax IsStandard rules for pay-to-script-hash transactions
- `c9a0918` Add a skiplist to the CBlockIndex structure
- `bc42503` Use unordered_map for CCoinsViewCache with salted hash (optimization)
- `d4d3fbd` Do not flush the cache after every block outside of IBD (optimization)
- `ad08d0b` Bugfix: make CCoinsViewMemPool support pruned entries in underlying cache
- `5734d4d` Only remove actualy failed blocks from setBlockIndexValid
- `d70bc52` Rework block processing benchmark code
- `714a3e6` Only keep setBlockIndexValid entries that are possible improvements
- `ea100c7` Reduce maximum coinscache size during verification (reduce memory usage)
- `4fad8e6` Reject transactions with excessive numbers of sigops
- `b0875eb` Allow BatchWrite to destroy its input, reducing copying (optimization)
- `92bb6f2` Bypass reloading blocks from disk (optimization)
- `2e28031` Perform CVerifyDB on pcoinsdbview instead of pcoinsTip (reduce memory usage)
- `ab15b2e` Avoid copying undo data (optimization)
- `341735e` Headers-first synchronization
- `afc32c5` Fix rebuild-chainstate feature and improve its performance
- `e11b2ce` Fix large reorgs
- `ed6d1a2` Keep information about all block files in memory
- `a48f2d6` Abstract context-dependent block checking from acceptance
- `7e615f5` Fixed mempool sync after sending a transaction
- `51ce901` Improve chainstate/blockindex disk writing policy
- `a206950` Introduce separate flushing modes
- `9ec75c5` Add a locking mechanism to IsInitialBlockDownload to ensure it never goes from false to true
- `868d041` Remove coinbase-dependant transactions during reorg
- `723d12c` Remove txn which are invalidated by coinbase maturity during reorg
- `0cb8763` Check against MANDATORY flags prior to accepting to mempool
- `8446262` Reject headers that build on an invalid parent
- `008138c` Bugfix: only track UTXO modification after lookup
P2P protocol and network code:
- `f80cffa` Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails
- `c30329a` Add testnet DNS seed of Alex Kotenko
- `45a4baf` Add testnet DNS seed of Andreas Schildbach
- `f1920e8` Ping automatically every 2 minutes (unconditionally)
- `806fd19` Allocate receive buffers in on the fly
- `6ecf3ed` Display unknown commands received
- `aa81564` Track peers' available blocks
- `caf6150` Use async name resolving to improve net thread responsiveness
- `9f4da19` Use pong receive time rather than processing time
- `0127a9b` remove SOCKS4 support from core and GUI, use SOCKS5
- `40f5cb8` Send rejects and apply DoS scoring for errors in direct block validation
- `dc942e6` Introduce whitelisted peers
- `c994d2e` prevent SOCKET leak in BindListenPort()
- `a60120e` Add built-in seeds for .onion
- `60dc8e4` Allow -onlynet=onion to be used
- `3a56de7` addrman: Do not propagate obviously poor addresses onto the network
- `6050ab6` netbase: Make SOCKS5 negotiation interruptible
- `604ee2a` Remove tx from AlreadyAskedFor list once we receive it, not when we process it
- `efad808` Avoid reject message feedback loops
- `71697f9` Separate protocol versioning from clientversion
- `20a5f61` Don't relay alerts to peers before version negotiation
- `b4ee0bd` Introduce preferred download peers
- `845c86d` Do not use third party services for IP detection
- `12a49ca` Limit the number of new addressses to accumulate
- `35e408f` Regard connection failures as attempt for addrman
- `a3a7317` Introduce 10 minute block download timeout
- `3022e7d` Require sufficent priority for relay of free transactions
- `58fda4d` Update seed IPs, based on bitcoin.sipa.be crawler data
- `18021d0` Remove bitnodes.io from dnsseeds.
Validation:
- `6fd7ef2` Also switch the (unused) verification code to low-s instead of even-s
- `584a358` Do merkle root and txid duplicates check simultaneously
- `217a5c9` When transaction outputs exceed inputs, show the offending amounts so as to aid debugging
- `f74fc9b` Print input index when signature validation fails, to aid debugging
- `6fd59ee` script.h: set_vch() should shift a >32 bit value
- `d752ba8` Add SCRIPT_VERIFY_SIGPUSHONLY (BIP62 rule 2) (test only)
- `698c6ab` Add SCRIPT_VERIFY_MINIMALDATA (BIP62 rules 3 and 4) (test only)
- `ab9edbd` script: create sane error return codes for script validation and remove logging
- `219a147` script: check ScriptError values in script tests
- `0391423` Discourage NOPs reserved for soft-fork upgrades
- `98b135f` Make STRICTENC invalid pubkeys fail the script rather than the opcode
- `307f7d4` Report script evaluation failures in log and reject messages
- `ace39db` consensus: guard against openssl's new strict DER checks
- `12b7c44` Improve robustness of DER recoding code
- `76ce5c8` fail immediately on an empty signature
Build system:
- `f25e3ad` Fix build in OS X 10.9
- `65e8ba4` build: Switch to non-recursive make
- `460b32d` build: fix broken boost chrono check on some platforms
- `9ce0774` build: Fix windows configure when using --with-qt-libdir
- `ea96475` build: Add mention of --disable-wallet to bdb48 error messages
- `1dec09b` depends: add shared dependency builder
- `c101c76` build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix
- `e432a5f` build: add option for reducing exports (v2)
- `6134b43` Fixing condition 'sabotaging' MSVC build
- `af0bd5e` osx: fix signing to make Gatekeeper happy (again)
- `a7d1f03` build: fix dynamic boost check when --with-boost= is used
- `d5fd094` build: fix qt test build when libprotobuf is in a non-standard path
- `2cf5f16` Add libbitcoinconsensus library
- `914868a` build: add a deterministic dmg signer
- `2d375fe` depends: bump openssl to 1.0.1k
- `b7a4ecc` Build: Only check for boost when building code that requires it
Wallet:
- `b33d1f5` Use fee/priority estimates in wallet CreateTransaction
- `4b7b1bb` Sanity checks for estimates
- `c898846` Add support for watch-only addresses
- `d5087d1` Use script matching rather than destination matching for watch-only
- `d88af56` Fee fixes
- `a35b55b` Dont run full check every time we decrypt wallet
- `3a7c348` Fix make_change to not create half-satoshis
- `f606bb9` fix a possible memory leak in CWalletDB::Recover
- `870da77` fix possible memory leaks in CWallet::EncryptWallet
- `ccca27a` Watch-only fixes
- `9b1627d` [Wallet] Reduce minTxFee for transaction creation to 1000 satoshis
- `a53fd41` Deterministic signing
- `15ad0b5` Apply AreSane() checks to the fees from the network
- `11855c1` Enforce minRelayTxFee on wallet created tx and add a maxtxfee option
GUI:
- `c21c74b` osx: Fix missing dock menu with qt5
- `b90711c` Fix Transaction details shows wrong To:
- `516053c` Make links in 'About Bitcoin Core' clickable
- `bdc83e8` Ensure payment request network matches client network
- `65f78a1` Add GUI view of peer information
- `06a91d9` VerifyDB progress reporting
- `fe6bff2` Add BerkeleyDB version info to RPCConsole
- `b917555` PeerTableModel: Fix potential deadlock. #4296
- `dff0e3b` Improve rpc console history behavior
- `95a9383` Remove CENT-fee-rule from coin control completely
- `56b07d2` Allow setting listen via GUI
- `d95ba75` Log messages with type>QtDebugMsg as non-debug
- `8969828` New status bar Unit Display Control and related changes
- `674c070` seed OpenSSL PNRG with Windows event data
- `509f926` Payment request parsing on startup now only changes network if a valid network name is specified
- `acd432b` Prevent balloon-spam after rescan
- `7007402` Implement SI-style (thin space) thoudands separator
- `91cce17` Use fixed-point arithmetic in amount spinbox
- `bdba2dd` Remove an obscure option no-one cares about
- `bd0aa10` Replace the temporary file hack currently used to change Bitcoin-Qt's dock icon (OS X) with a buffer-based solution
- `94e1b9e` Re-work overviewpage UI
- `8bfdc9a` Better looking trayicon
- `b197bf3` disable tray interactions when client model set to 0
- `1c5f0af` Add column Watch-only to transactions list
- `21f139b` Fix tablet crash. closes #4854
- `e84843c` Broken addresses on command line no longer trigger testnet
- `a49f11d` Change splash screen to normal window
- `1f9be98` Disable App Nap on OSX 10.9+
- `27c3e91` Add proxy to options overridden if necessary
- `4bd1185` Allow "emergency" shutdown during startup
- `d52f072` Don't show wallet options in the preferences menu when running with -disablewallet
- `6093aa1` Qt: QProgressBar CPU-Issue workaround
- `0ed9675` [Wallet] Add global boolean whether to send free transactions (default=true)
- `ed3e5e4` [Wallet] Add global boolean whether to pay at least the custom fee (default=true)
- `e7876b2` [Wallet] Prevent user from paying a non-sense fee
- `c1c9d5b` Add Smartfee to GUI
- `e0a25c5` Make askpassphrase dialog behave more sanely
- `94b362d` On close of splashscreen interrupt verifyDB
- `b790d13` English translation update
- `8543b0d` Correct tooltip on address book page
Tests:
- `b41e594` Fix script test handling of empty scripts
- `d3a33fc` Test CHECKMULTISIG with m == 0 and n == 0
- `29c1749` Let tx (in)valid tests use any SCRIPT_VERIFY flag
- `6380180` Add rejection of non-null CHECKMULTISIG dummy values
- `21bf3d2` Add tests for BoostAsioToCNetAddr
- `b5ad5e7` Add Python test for -rpcbind and -rpcallowip
- `9ec0306` Add CODESEPARATOR/FindAndDelete() tests
- `75ebced` Added many rpc wallet tests
- `0193fb8` Allow multiple regression tests to run at once
- `92a6220` Hook up sanity checks
- `3820e01` Extend and move all crypto tests to crypto_tests.cpp
- `3f9a019` added list/get received by address/ account tests
- `a90689f` Remove timing-based signature cache unit test
- `236982c` Add skiplist unit tests
- `f4b00be` Add CChain::GetLocator() unit test
- `b45a6e8` Add test for getblocktemplate longpolling
- `cdf305e` Set -discover=0 in regtest framework
- `ed02282` additional test for OP_SIZE in script_valid.json
- `0072d98` script tests: BOOLAND, BOOLOR decode to integer
- `833ff16` script tests: values that overflow to 0 are true
- `4cac5db` script tests: value with trailing 0x00 is true
- `89101c6` script test: test case for 5-byte bools
- `d2d9dc0` script tests: add tests for CHECKMULTISIG limits
- `d789386` Add "it works" test for bitcoin-tx
- `df4d61e` Add bitcoin-tx tests
- `aa41ac2` Test IsPushOnly() with invalid push
- `6022b5d` Make `script_{valid,invalid}.json` validation flags configurable
- `8138cbe` Add automatic script test generation, and actual checksig tests
- `ed27e53` Add coins_tests with a large randomized CCoinViewCache test
- `9df9cf5` Make SCRIPT_VERIFY_STRICTENC compatible with BIP62
- `dcb9846` Extend getchaintips RPC test
- `554147a` Ensure MINIMALDATA invalid tests can only fail one way
- `dfeec18` Test every numeric-accepting opcode for correct handling of the numeric minimal encoding rule
- `2b62e17` Clearly separate PUSHDATA and numeric argument MINIMALDATA tests
- `16d78bd` Add valid invert of invalid every numeric opcode tests
- `f635269` tests: enable alertnotify test for Windows
- `7a41614` tests: allow rpc-tests to get filenames for bitcoind and bitcoin-cli from the environment
- `5122ea7` tests: fix forknotify.py on windows
- `fa7f8cd` tests: remove old pull-tester scripts
- `7667850` tests: replace the old (unused since Travis) tests with new rpc test scripts
- `f4e0aef` Do signature-s negation inside the tests
- `1837987` Optimize -regtest setgenerate block generation
- `2db4c8a` Fix node ranges in the test framework
- `a8b2ce5` regression test only setmocktime RPC call
- `daf03e7` RPC tests: create initial chain with specific timestamps
- `8656dbb` Port/fix txnmall.sh regression test
- `ca81587` Test the exact order of CHECKMULTISIG sig/pubkey evaluation
- `7357893` Prioritize and display -testsafemode status in UI
- `f321d6b` Add key generation/verification to ECC sanity check
- `132ea9b` miner_tests: Disable checkpoints so they don't fail the subsidy-change test
- `bc6cb41` QA RPC tests: Add tests block block proposals
- `f67a9ce` Use deterministically generated script tests
- `11d7a7d` [RPC] add rpc-test for http keep-alive (persistent connections)
- `34318d7` RPC-test based on invalidateblock for mempool coinbase spends
- `76ec867` Use actually valid transactions for script tests
- `c8589bf` Add actual signature tests
- `e2677d7` Fix smartfees test for change to relay policy
- `263b65e` tests: run sanity checks in tests too
Miscellaneous:
- `122549f` Fix incorrect checkpoint data for testnet3
- `5bd02cf` Log used config file to debug.log on startup
- `68ba85f` Updated Debian example bitcoin.conf with config from wiki + removed some cruft and updated comments
- `e5ee8f0` Remove -beta suffix
- `38405ac` Add comment regarding experimental-use service bits
- `be873f6` Issue warning if collecting RandSeed data failed
- `8ae973c` Allocate more space if necessary in RandSeedAddPerfMon
- `675bcd5` Correct comment for 15-of-15 p2sh script size
- `fda3fed` libsecp256k1 integration
- `2e36866` Show nodeid instead of addresses in log (for anonymity) unless otherwise requested
- `cd01a5e` Enable paranoid corruption checks in LevelDB >= 1.16
- `9365937` Add comment about never updating nTimeOffset past 199 samples
- `403c1bf` contrib: remove getwork-based pyminer (as getwork API call has been removed)
- `0c3e101` contrib: Added systemd .service file in order to help distributions integrate bitcoind
- `0a0878d` doc: Add new DNSseed policy
- `2887bff` Update coding style and add .clang-format
- `5cbda4f` Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope
- `b4a72a7` contrib/linearize: split output files based on new-timestamp-year or max-file-size
- `e982b57` Use explicit fflush() instead of setvbuf()
- `234bfbf` contrib: Add init scripts and docs for Upstart and OpenRC
- `01c2807` Add warning about the merkle-tree algorithm duplicate txid flaw
- `d6712db` Also create pid file in non-daemon mode
- `772ab0e` contrib: use batched JSON-RPC in linarize-hashes (optimization)
- `7ab4358` Update bash-completion for v0.10
- `6e6a36c` contrib: show pull # in prompt for github-merge script
- `5b9f842` Upgrade leveldb to 1.18, make chainstate databases compatible between ARM and x86 (issue #2293)
- `4e7c219` Catch UTXO set read errors and shutdown
- `867c600` Catch LevelDB errors during flush
- `06ca065` Fix CScriptID(const CScript& in) in empty script case
Credits
=======
Thanks to everyone who contributed to this release:
- 21E14
- Adam Weiss
- Aitor Pazos
- Alexander Jeng
- Alex Morcos
- Alon Muroch
- Andreas Schildbach
- Andrew Poelstra
- Andy Alness
- Ashley Holman
- Benedict Chan
- Ben Holden-Crowther
- Bryan Bishop
- BtcDrak
- Christian von Roques
- Clinton Christian
- Cory Fields
- Cozz Lovan
- daniel
- Daniel Kraft
- David Hill
- Derek701
- dexX7
- dllud
- Dominyk Tiller
- Doug
- elichai
- elkingtowa
- ENikS
- Eric Shaw
- Federico Bond
- Francis GASCHET
- Gavin Andresen
- Giuseppe Mazzotta
- Glenn Willen
- Gregory Maxwell
- gubatron
- HarryWu
- himynameismartin
- Huang Le
- Ian Carroll
- imharrywu
- Jameson Lopp
- Janusz Lenar
- JaSK
- Jeff Garzik
- JL2035
- Johnathan Corgan
- Jonas Schnelli
- jtimon
- Julian Haight
- Kamil Domanski
- kazcw
- kevin
- kiwigb
- Kosta Zertsekel
- LongShao007
- Luke Dashjr
- Mark Friedenbach
- Mathy Vanvoorden
- Matt Corallo
- Matthew Bogosian
- Micha
- Michael Ford
- Mike Hearn
- mrbandrews
- mruddy
- ntrgn
- Otto Allmendinger
- paveljanik
- Pavel Vasin
- Peter Todd
- phantomcircuit
- Philip Kaufmann
- Pieter Wuille
- pryds
- randy-waterhouse
- R E Broadley
- Rose Toomey
- Ross Nicoll
- Roy Badami
- Ruben Dario Ponticelli
- Rune K. Svendsen
- Ryan X. Charles
- Saivann
- sandakersmann
- SergioDemianLerner
- shshshsh
- sinetek
- Stuart Cardall
- Suhas Daftuar
- Tawanda Kembo
- Teran McKinney
- tm314159
- Tom Harding
- Trevin Hofmann
- Whit J
- Wladimir J. van der Laan
- Yoichi Hirai
- Zak Wilcox
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).

View file

@ -133,15 +133,22 @@ rm SHA256SUMS
Note: check that SHA256SUMS itself doesn't end up in SHA256SUMS, which is a spurious/nonsensical entry.
- Upload zips and installers, as well as `SHA256SUMS.asc` from last step, to the bitcoin.org server
into `/var/www/bin/bitcoin-core-${VERSION}`
- Update bitcoin.org version
- Make a pull request to add a file named `YYYY-MM-DD-vX.Y.Z.md` with the release notes
to https://github.com/bitcoin/bitcoin.org/tree/master/_releases
([Example for 0.9.2.1](https://raw.githubusercontent.com/bitcoin/bitcoin.org/master/_releases/2014-06-19-v0.9.2.1.md)).
- First, check to see if the Bitcoin.org maintainers have prepared a
release: https://github.com/bitcoin/bitcoin.org/labels/Releases
- After the pull request is merged, the website will automatically show the newest version, as well
as update the OS download links. Ping Saivann in case anything goes wrong
- If they have, it will have previously failed their Travis CI
checks because the final release files weren't uploaded.
Trigger a Travis CI rebuild---if it passes, merge.
- If they have not prepared a release, follow the Bitcoin.org release
instructions: https://github.com/bitcoin/bitcoin.org#release-notes
- After the pull request is merged, the website will automatically show the newest version within 15 minutes, as well
as update the OS download links. Ping @saivann/@harding (saivann/harding on Freenode) in case anything goes wrong
- Announce the release:

View file

@ -16,6 +16,7 @@
# h) node0 should now have 2 unspent outputs; send these to node2 via raw tx broadcast by node1
# i) have node1 mine a block
# j) check balances - node0 should have 0, node2 should have 100
# k) test ResendWalletTransactions - create transactions, startup fourth node, make sure it syncs
#
from test_framework import BitcoinTestFramework
@ -26,7 +27,7 @@ class WalletTest (BitcoinTestFramework):
def setup_chain(self):
print("Initializing test directory "+self.options.tmpdir)
initialize_chain_clean(self.options.tmpdir, 3)
initialize_chain_clean(self.options.tmpdir, 4)
def setup_network(self, split=False):
self.nodes = start_nodes(3, self.options.tmpdir)
@ -132,5 +133,23 @@ class WalletTest (BitcoinTestFramework):
assert_equal(self.nodes[2].getbalance(), Decimal('59.99800000'))
assert_equal(self.nodes[0].getbalance(), Decimal('39.99800000'))
# Test ResendWalletTransactions:
# Create a couple of transactions, then start up a fourth
# node (nodes[3]) and ask nodes[0] to rebroadcast.
# EXPECT: nodes[3] should have those transactions in its mempool.
txid1 = self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 1)
txid2 = self.nodes[1].sendtoaddress(self.nodes[0].getnewaddress(), 1)
sync_mempools(self.nodes)
self.nodes.append(start_node(3, self.options.tmpdir))
connect_nodes_bi(self.nodes, 0, 3)
sync_blocks(self.nodes)
relayed = self.nodes[0].resendwallettransactions()
assert_equal(set(relayed), set([txid1, txid2]))
sync_mempools(self.nodes)
assert(txid1 in self.nodes[3].getrawmempool())
if __name__ == '__main__':
WalletTest ().main ()

View file

@ -87,6 +87,7 @@ BITCOIN_CORE_H = \
coins.h \
compat.h \
compressor.h \
consensus/params.h \
core_io.h \
wallet/db.h \
eccryptoverify.h \

View file

@ -50,6 +50,7 @@ BITCOIN_TESTS =\
test/hash_tests.cpp \
test/key_tests.cpp \
test/main_tests.cpp \
test/mempool_tests.cpp \
test/miner_tests.cpp \
test/mruset_tests.cpp \
test/multisig_tests.cpp \

View file

@ -10,34 +10,27 @@
using namespace std;
int CAddrInfo::GetTriedBucket(const std::vector<unsigned char>& nKey) const
int CAddrInfo::GetTriedBucket(const uint256& nKey) const
{
CDataStream ss1(SER_GETHASH, 0);
std::vector<unsigned char> vchKey = GetKey();
ss1 << nKey << vchKey;
uint64_t hash1 = Hash(ss1.begin(), ss1.end()).GetCheapHash();
CDataStream ss2(SER_GETHASH, 0);
std::vector<unsigned char> vchGroupKey = GetGroup();
ss2 << nKey << vchGroupKey << (hash1 % ADDRMAN_TRIED_BUCKETS_PER_GROUP);
uint64_t hash2 = Hash(ss2.begin(), ss2.end()).GetCheapHash();
uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetKey()).GetHash().GetCheapHash();
uint64_t hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << (hash1 % ADDRMAN_TRIED_BUCKETS_PER_GROUP)).GetHash().GetCheapHash();
return hash2 % ADDRMAN_TRIED_BUCKET_COUNT;
}
int CAddrInfo::GetNewBucket(const std::vector<unsigned char>& nKey, const CNetAddr& src) const
int CAddrInfo::GetNewBucket(const uint256& nKey, const CNetAddr& src) const
{
CDataStream ss1(SER_GETHASH, 0);
std::vector<unsigned char> vchGroupKey = GetGroup();
std::vector<unsigned char> vchSourceGroupKey = src.GetGroup();
ss1 << nKey << vchGroupKey << vchSourceGroupKey;
uint64_t hash1 = Hash(ss1.begin(), ss1.end()).GetCheapHash();
CDataStream ss2(SER_GETHASH, 0);
ss2 << nKey << vchSourceGroupKey << (hash1 % ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP);
uint64_t hash2 = Hash(ss2.begin(), ss2.end()).GetCheapHash();
uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << vchSourceGroupKey).GetHash().GetCheapHash();
uint64_t hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << vchSourceGroupKey << (hash1 % ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP)).GetHash().GetCheapHash();
return hash2 % ADDRMAN_NEW_BUCKET_COUNT;
}
int CAddrInfo::GetBucketPosition(const uint256 &nKey, bool fNew, int nBucket) const
{
uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << (fNew ? 'N' : 'K') << nBucket << GetKey()).GetHash().GetCheapHash();
return hash1 % ADDRMAN_BUCKET_SIZE;
}
bool CAddrInfo::IsTerrible(int64_t nNow) const
{
if (nLastTry && nLastTry >= nNow - 60) // never remove things tried in the last minute
@ -70,8 +63,6 @@ double CAddrInfo::GetChance(int64_t nNow) const
if (nSinceLastTry < 0)
nSinceLastTry = 0;
fChance *= 600.0 / (600.0 + nSinceLastSeen);
// deprioritize very recent attempts away
if (nSinceLastTry < 60 * 10)
fChance *= 0.01;
@ -128,130 +119,81 @@ void CAddrMan::SwapRandom(unsigned int nRndPos1, unsigned int nRndPos2)
vRandom[nRndPos2] = nId1;
}
int CAddrMan::SelectTried(int nKBucket)
void CAddrMan::Delete(int nId)
{
std::vector<int>& vTried = vvTried[nKBucket];
assert(mapInfo.count(nId) != 0);
CAddrInfo& info = mapInfo[nId];
assert(!info.fInTried);
assert(info.nRefCount == 0);
// randomly shuffle the first few elements (using the entire list)
// find the least recently tried among them
int64_t nOldest = -1;
int nOldestPos = -1;
for (unsigned int i = 0; i < ADDRMAN_TRIED_ENTRIES_INSPECT_ON_EVICT && i < vTried.size(); i++) {
int nPos = GetRandInt(vTried.size() - i) + i;
int nTemp = vTried[nPos];
vTried[nPos] = vTried[i];
vTried[i] = nTemp;
assert(nOldest == -1 || mapInfo.count(nTemp) == 1);
if (nOldest == -1 || mapInfo[nTemp].nLastSuccess < mapInfo[nOldest].nLastSuccess) {
nOldest = nTemp;
nOldestPos = nPos;
}
}
return nOldestPos;
}
int CAddrMan::ShrinkNew(int nUBucket)
{
assert(nUBucket >= 0 && (unsigned int)nUBucket < vvNew.size());
std::set<int>& vNew = vvNew[nUBucket];
// first look for deletable items
for (std::set<int>::iterator it = vNew.begin(); it != vNew.end(); it++) {
assert(mapInfo.count(*it));
CAddrInfo& info = mapInfo[*it];
if (info.IsTerrible()) {
if (--info.nRefCount == 0) {
SwapRandom(info.nRandomPos, vRandom.size() - 1);
vRandom.pop_back();
mapAddr.erase(info);
mapInfo.erase(*it);
mapInfo.erase(nId);
nNew--;
}
vNew.erase(it);
return 0;
}
}
// otherwise, select four randomly, and pick the oldest of those to replace
int n[4] = {GetRandInt(vNew.size()), GetRandInt(vNew.size()), GetRandInt(vNew.size()), GetRandInt(vNew.size())};
int nI = 0;
int nOldest = -1;
for (std::set<int>::iterator it = vNew.begin(); it != vNew.end(); it++) {
if (nI == n[0] || nI == n[1] || nI == n[2] || nI == n[3]) {
assert(nOldest == -1 || mapInfo.count(*it) == 1);
if (nOldest == -1 || mapInfo[*it].nTime < mapInfo[nOldest].nTime)
nOldest = *it;
}
nI++;
}
assert(mapInfo.count(nOldest) == 1);
CAddrInfo& info = mapInfo[nOldest];
if (--info.nRefCount == 0) {
SwapRandom(info.nRandomPos, vRandom.size() - 1);
vRandom.pop_back();
mapAddr.erase(info);
mapInfo.erase(nOldest);
nNew--;
}
vNew.erase(nOldest);
return 1;
}
void CAddrMan::MakeTried(CAddrInfo& info, int nId, int nOrigin)
void CAddrMan::ClearNew(int nUBucket, int nUBucketPos)
{
assert(vvNew[nOrigin].count(nId) == 1);
// if there is an entry in the specified bucket, delete it.
if (vvNew[nUBucket][nUBucketPos] != -1) {
int nIdDelete = vvNew[nUBucket][nUBucketPos];
CAddrInfo& infoDelete = mapInfo[nIdDelete];
assert(infoDelete.nRefCount > 0);
infoDelete.nRefCount--;
vvNew[nUBucket][nUBucketPos] = -1;
if (infoDelete.nRefCount == 0) {
Delete(nIdDelete);
}
}
}
void CAddrMan::MakeTried(CAddrInfo& info, int nId)
{
// remove the entry from all new buckets
for (std::vector<std::set<int> >::iterator it = vvNew.begin(); it != vvNew.end(); it++) {
if ((*it).erase(nId))
for (int bucket = 0; bucket < ADDRMAN_NEW_BUCKET_COUNT; bucket++) {
int pos = info.GetBucketPosition(nKey, true, bucket);
if (vvNew[bucket][pos] == nId) {
vvNew[bucket][pos] = -1;
info.nRefCount--;
}
}
nNew--;
assert(info.nRefCount == 0);
// which tried bucket to move the entry to
int nKBucket = info.GetTriedBucket(nKey);
std::vector<int>& vTried = vvTried[nKBucket];
int nKBucketPos = info.GetBucketPosition(nKey, false, nKBucket);
// first check whether there is place to just add it
if (vTried.size() < ADDRMAN_TRIED_BUCKET_SIZE) {
vTried.push_back(nId);
nTried++;
info.fInTried = true;
return;
}
// first make space to add it (the existing tried entry there is moved to new, deleting whatever is there).
if (vvTried[nKBucket][nKBucketPos] != -1) {
// find an item to evict
int nIdEvict = vvTried[nKBucket][nKBucketPos];
assert(mapInfo.count(nIdEvict) == 1);
CAddrInfo& infoOld = mapInfo[nIdEvict];
// otherwise, find an item to evict
int nPos = SelectTried(nKBucket);
// Remove the to-be-evicted item from the tried set.
infoOld.fInTried = false;
vvTried[nKBucket][nKBucketPos] = -1;
nTried--;
// find which new bucket it belongs to
assert(mapInfo.count(vTried[nPos]) == 1);
int nUBucket = mapInfo[vTried[nPos]].GetNewBucket(nKey);
std::set<int>& vNew = vvNew[nUBucket];
int nUBucket = infoOld.GetNewBucket(nKey);
int nUBucketPos = infoOld.GetBucketPosition(nKey, true, nUBucket);
ClearNew(nUBucket, nUBucketPos);
assert(vvNew[nUBucket][nUBucketPos] == -1);
// remove the to-be-replaced tried entry from the tried set
CAddrInfo& infoOld = mapInfo[vTried[nPos]];
infoOld.fInTried = false;
// Enter it into the new set again.
infoOld.nRefCount = 1;
// do not update nTried, as we are going to move something else there immediately
// check whether there is place in that one,
if (vNew.size() < ADDRMAN_NEW_BUCKET_SIZE) {
// if so, move it back there
vNew.insert(vTried[nPos]);
} else {
// otherwise, move it to the new bucket nId came from (there is certainly place there)
vvNew[nOrigin].insert(vTried[nPos]);
}
vvNew[nUBucket][nUBucketPos] = nIdEvict;
nNew++;
}
assert(vvTried[nKBucket][nKBucketPos] == -1);
vTried[nPos] = nId;
// we just overwrote an entry in vTried; no need to update nTried
vvTried[nKBucket][nKBucketPos] = nId;
nTried++;
info.fInTried = true;
return;
}
void CAddrMan::Good_(const CService& addr, int64_t nTime)
@ -281,12 +223,12 @@ void CAddrMan::Good_(const CService& addr, int64_t nTime)
return;
// find a bucket it is in now
int nRnd = GetRandInt(vvNew.size());
int nRnd = GetRandInt(ADDRMAN_NEW_BUCKET_COUNT);
int nUBucket = -1;
for (unsigned int n = 0; n < vvNew.size(); n++) {
int nB = (n + nRnd) % vvNew.size();
std::set<int>& vNew = vvNew[nB];
if (vNew.count(nId)) {
for (unsigned int n = 0; n < ADDRMAN_NEW_BUCKET_COUNT; n++) {
int nB = (n + nRnd) % ADDRMAN_NEW_BUCKET_COUNT;
int nBpos = info.GetBucketPosition(nKey, true, nB);
if (vvNew[nB][nBpos] == nId) {
nUBucket = nB;
break;
}
@ -300,7 +242,7 @@ void CAddrMan::Good_(const CService& addr, int64_t nTime)
LogPrint("addrman", "Moving %s to tried\n", addr.ToString());
// move nId to the tried tables
MakeTried(info, nId, nUBucket);
MakeTried(info, nId);
}
bool CAddrMan::Add_(const CAddress& addr, const CNetAddr& source, int64_t nTimePenalty)
@ -348,12 +290,25 @@ bool CAddrMan::Add_(const CAddress& addr, const CNetAddr& source, int64_t nTimeP
}
int nUBucket = pinfo->GetNewBucket(nKey, source);
std::set<int>& vNew = vvNew[nUBucket];
if (!vNew.count(nId)) {
int nUBucketPos = pinfo->GetBucketPosition(nKey, true, nUBucket);
if (vvNew[nUBucket][nUBucketPos] != nId) {
bool fInsert = vvNew[nUBucket][nUBucketPos] == -1;
if (!fInsert) {
CAddrInfo& infoExisting = mapInfo[vvNew[nUBucket][nUBucketPos]];
if (infoExisting.IsTerrible() || (infoExisting.nRefCount > 1 && pinfo->nRefCount == 0)) {
// Overwrite the existing new table entry.
fInsert = true;
}
}
if (fInsert) {
ClearNew(nUBucket, nUBucketPos);
pinfo->nRefCount++;
if (vNew.size() == ADDRMAN_NEW_BUCKET_SIZE)
ShrinkNew(nUBucket);
vvNew[nUBucket].insert(nId);
vvNew[nUBucket][nUBucketPos] = nId;
} else {
if (pinfo->nRefCount == 0) {
Delete(nId);
}
}
}
return fNew;
}
@ -377,24 +332,23 @@ void CAddrMan::Attempt_(const CService& addr, int64_t nTime)
info.nAttempts++;
}
CAddress CAddrMan::Select_(int nUnkBias)
CAddress CAddrMan::Select_()
{
if (size() == 0)
return CAddress();
double nCorTried = sqrt(nTried) * (100.0 - nUnkBias);
double nCorNew = sqrt(nNew) * nUnkBias;
if ((nCorTried + nCorNew) * GetRandInt(1 << 30) / (1 << 30) < nCorTried) {
// Use a 50% chance for choosing between tried and new table entries.
if (nTried > 0 && (nNew == 0 || GetRandInt(2) == 0)) {
// use a tried node
double fChanceFactor = 1.0;
while (1) {
int nKBucket = GetRandInt(vvTried.size());
std::vector<int>& vTried = vvTried[nKBucket];
if (vTried.size() == 0)
int nKBucket = GetRandInt(ADDRMAN_TRIED_BUCKET_COUNT);
int nKBucketPos = GetRandInt(ADDRMAN_BUCKET_SIZE);
if (vvTried[nKBucket][nKBucketPos] == -1)
continue;
int nPos = GetRandInt(vTried.size());
assert(mapInfo.count(vTried[nPos]) == 1);
CAddrInfo& info = mapInfo[vTried[nPos]];
int nId = vvTried[nKBucket][nKBucketPos];
assert(mapInfo.count(nId) == 1);
CAddrInfo& info = mapInfo[nId];
if (GetRandInt(1 << 30) < fChanceFactor * info.GetChance() * (1 << 30))
return info;
fChanceFactor *= 1.2;
@ -403,16 +357,13 @@ CAddress CAddrMan::Select_(int nUnkBias)
// use a new node
double fChanceFactor = 1.0;
while (1) {
int nUBucket = GetRandInt(vvNew.size());
std::set<int>& vNew = vvNew[nUBucket];
if (vNew.size() == 0)
int nUBucket = GetRandInt(ADDRMAN_NEW_BUCKET_COUNT);
int nUBucketPos = GetRandInt(ADDRMAN_BUCKET_SIZE);
if (vvNew[nUBucket][nUBucketPos] == -1)
continue;
int nPos = GetRandInt(vNew.size());
std::set<int>::iterator it = vNew.begin();
while (nPos--)
it++;
assert(mapInfo.count(*it) == 1);
CAddrInfo& info = mapInfo[*it];
int nId = vvNew[nUBucket][nUBucketPos];
assert(mapInfo.count(nId) == 1);
CAddrInfo& info = mapInfo[nId];
if (GetRandInt(1 << 30) < fChanceFactor * info.GetChance() * (1 << 30))
return info;
fChanceFactor *= 1.2;
@ -460,22 +411,30 @@ int CAddrMan::Check_()
if (mapNew.size() != nNew)
return -10;
for (int n = 0; n < vvTried.size(); n++) {
std::vector<int>& vTried = vvTried[n];
for (std::vector<int>::iterator it = vTried.begin(); it != vTried.end(); it++) {
if (!setTried.count(*it))
for (int n = 0; n < ADDRMAN_TRIED_BUCKET_COUNT; n++) {
for (int i = 0; i < ADDRMAN_BUCKET_SIZE; i++) {
if (vvTried[n][i] != -1) {
if (!setTried.count(vvTried[n][i]))
return -11;
setTried.erase(*it);
if (mapInfo[vvTried[n][i]].GetTriedBucket(nKey) != n)
return -17;
if (mapInfo[vvTried[n][i]].GetBucketPosition(nKey, false, n) != i)
return -18;
setTried.erase(vvTried[n][i]);
}
}
}
for (int n = 0; n < vvNew.size(); n++) {
std::set<int>& vNew = vvNew[n];
for (std::set<int>::iterator it = vNew.begin(); it != vNew.end(); it++) {
if (!mapNew.count(*it))
for (int n = 0; n < ADDRMAN_NEW_BUCKET_COUNT; n++) {
for (int i = 0; i < ADDRMAN_BUCKET_SIZE; i++) {
if (vvNew[n][i] != -1) {
if (!mapNew.count(vvNew[n][i]))
return -12;
if (--mapNew[*it] == 0)
mapNew.erase(*it);
if (mapInfo[vvNew[n][i]].GetBucketPosition(nKey, true, n) != i)
return -19;
if (--mapNew[vvNew[n][i]] == 0)
mapNew.erase(vvNew[n][i]);
}
}
}
@ -483,6 +442,8 @@ int CAddrMan::Check_()
return -13;
if (mapNew.size())
return -15;
if (nKey.IsNull())
return -16;
return 0;
}

View file

@ -79,17 +79,20 @@ public:
}
//! Calculate in which "tried" bucket this entry belongs
int GetTriedBucket(const std::vector<unsigned char> &nKey) const;
int GetTriedBucket(const uint256 &nKey) const;
//! Calculate in which "new" bucket this entry belongs, given a certain source
int GetNewBucket(const std::vector<unsigned char> &nKey, const CNetAddr& src) const;
int GetNewBucket(const uint256 &nKey, const CNetAddr& src) const;
//! Calculate in which "new" bucket this entry belongs, using its default source
int GetNewBucket(const std::vector<unsigned char> &nKey) const
int GetNewBucket(const uint256 &nKey) const
{
return GetNewBucket(nKey, source);
}
//! Calculate in which position of a bucket to store this entry.
int GetBucketPosition(const uint256 &nKey, bool fNew, int nBucket) const;
//! Determine whether the statistics about this entry are bad enough so that it can just be deleted
bool IsTerrible(int64_t nNow = GetAdjustedTime()) const;
@ -106,15 +109,15 @@ public:
*
* To that end:
* * Addresses are organized into buckets.
* * Address that have not yet been tried go into 256 "new" buckets.
* * Based on the address range (/16 for IPv4) of source of the information, 32 buckets are selected at random
* * Address that have not yet been tried go into 1024 "new" buckets.
* * Based on the address range (/16 for IPv4) of source of the information, 64 buckets are selected at random
* * The actual bucket is chosen from one of these, based on the range the address itself is located.
* * One single address can occur in up to 4 different buckets, to increase selection chances for addresses that
* * One single address can occur in up to 8 different buckets, to increase selection chances for addresses that
* are seen frequently. The chance for increasing this multiplicity decreases exponentially.
* * When adding a new address to a full bucket, a randomly chosen entry (with a bias favoring less recently seen
* ones) is removed from it first.
* * Addresses of nodes that are known to be accessible go into 64 "tried" buckets.
* * Each address range selects at random 4 of these buckets.
* * Addresses of nodes that are known to be accessible go into 256 "tried" buckets.
* * Each address range selects at random 8 of these buckets.
* * The actual bucket is chosen from one of these, based on the full address.
* * When adding a new good address to a full bucket, a randomly chosen entry (with a bias favoring less recently
* tried ones) is evicted from it, back to the "new" buckets.
@ -125,28 +128,22 @@ public:
*/
//! total number of buckets for tried addresses
#define ADDRMAN_TRIED_BUCKET_COUNT 64
//! maximum allowed number of entries in buckets for tried addresses
#define ADDRMAN_TRIED_BUCKET_SIZE 64
#define ADDRMAN_TRIED_BUCKET_COUNT 256
//! total number of buckets for new addresses
#define ADDRMAN_NEW_BUCKET_COUNT 256
#define ADDRMAN_NEW_BUCKET_COUNT 1024
//! maximum allowed number of entries in buckets for new addresses
#define ADDRMAN_NEW_BUCKET_SIZE 64
//! maximum allowed number of entries in buckets for new and tried addresses
#define ADDRMAN_BUCKET_SIZE 64
//! over how many buckets entries with tried addresses from a single group (/16 for IPv4) are spread
#define ADDRMAN_TRIED_BUCKETS_PER_GROUP 4
#define ADDRMAN_TRIED_BUCKETS_PER_GROUP 8
//! over how many buckets entries with new addresses originating from a single group are spread
#define ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP 32
#define ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP 64
//! in how many buckets for entries with new addresses a single address may occur
#define ADDRMAN_NEW_BUCKETS_PER_ADDRESS 4
//! how many entries in a bucket with tried addresses are inspected, when selecting one to replace
#define ADDRMAN_TRIED_ENTRIES_INSPECT_ON_EVICT 4
#define ADDRMAN_NEW_BUCKETS_PER_ADDRESS 8
//! how old addresses can maximally be
#define ADDRMAN_HORIZON_DAYS 30
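
Illustrative aside: the comment block above describes how the enlarged tables are indexed, and a minimal sketch can make that concrete. The helper below is purely illustrative; the real CAddrMan code derives buckets from salted SHA256 hashes over the serialized nKey, address group and address, not from std::hash, and every name here (MixHash, secretKey, addrGroup) is made up for the example.

// Illustrative sketch only, not the actual CAddrMan hashing code: shows how a
// per-node secret key plus address data can be folded into a (bucket, slot)
// pair under the 0.10 layout of 256 "tried" buckets of 64 entries each.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

static const int TRIED_BUCKET_COUNT = 256; // mirrors ADDRMAN_TRIED_BUCKET_COUNT
static const int BUCKET_SIZE = 64;         // mirrors ADDRMAN_BUCKET_SIZE

// Hypothetical helper: mix the secret key with address-derived strings.
static uint64_t MixHash(const std::string& key, const std::string& a, const std::string& b)
{
    std::hash<std::string> h;
    return h(key + "|" + a + "|" + b);
}

int main()
{
    std::string secretKey = "per-node-random-key";  // stands in for nKey
    std::string addrGroup = "93.184.0.0/16";        // /16 group of the address
    std::string addr      = "93.184.216.34:8333";

    // The bucket depends on the key and the address group, the slot on the key
    // and the full address, so a peer controlling one /16 cannot pick which of
    // the 256 buckets (or 64 slots) its addresses land in.
    int bucket = MixHash(secretKey, addrGroup, "tried-bucket") % TRIED_BUCKET_COUNT;
    int slot   = MixHash(secretKey, addr, "bucket-position") % BUCKET_SIZE;

    std::cout << "tried bucket " << bucket << ", slot " << slot << std::endl;
    return 0;
}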
@ -176,7 +173,7 @@ private:
mutable CCriticalSection cs;
//! secret key to randomize bucket select with
std::vector<unsigned char> nKey;
uint256 nKey;
//! last used nId
int nIdCount;
@ -194,13 +191,13 @@ private:
int nTried;
//! list of "tried" buckets
std::vector<std::vector<int> > vvTried;
int vvTried[ADDRMAN_TRIED_BUCKET_COUNT][ADDRMAN_BUCKET_SIZE];
//! number of (unique) "new" entries
int nNew;
//! list of "new" buckets
std::vector<std::set<int> > vvNew;
int vvNew[ADDRMAN_NEW_BUCKET_COUNT][ADDRMAN_BUCKET_SIZE];
protected:
@ -214,17 +211,14 @@ protected:
//! Swap two elements in vRandom.
void SwapRandom(unsigned int nRandomPos1, unsigned int nRandomPos2);
//! Return position in given bucket to replace.
int SelectTried(int nKBucket);
//! Remove an element from a "new" bucket.
//! This is the only place where actual deletions occur.
//! Elements are never deleted while in the "tried" table, only possibly evicted back to the "new" table.
int ShrinkNew(int nUBucket);
//! Move an entry from the "new" table(s) to the "tried" table
//! @pre vvUnkown[nOrigin].count(nId) != 0
void MakeTried(CAddrInfo& info, int nId, int nOrigin);
void MakeTried(CAddrInfo& info, int nId);
//! Delete an entry. It must not be in tried, and have refcount 0.
void Delete(int nId);
//! Clear a position in a "new" table. This is the only place where entries are actually deleted.
void ClearNew(int nUBucket, int nUBucketPos);
//! Mark an entry "good", possibly moving it from "new" to "tried".
void Good_(const CService &addr, int64_t nTime);
@ -237,7 +231,7 @@ protected:
//! Select an address to connect to.
//! nUnkBias determines how much to favor new addresses over tried ones (min=0, max=100)
CAddress Select_(int nUnkBias);
CAddress Select_();
#ifdef DEBUG_ADDRMAN
//! Perform consistency check. Returns an error code or zero.
@ -253,17 +247,21 @@ protected:
public:
/**
* serialized format:
* * version byte (currently 0)
* * nKey
* * version byte (currently 1)
* * 0x20 + nKey (serialized as if it were a vector, for backward compatibility)
* * nNew
* * nTried
* * number of "new" buckets
* * number of "new" buckets XOR 2**30
* * all nNew addrinfos in vvNew
* * all nTried addrinfos in vvTried
* * for each bucket:
* * number of elements
* * for each element: index
*
* 2**30 is xorred with the number of buckets to make addrman deserializer v0 detect it
* as incompatible. This is necessary because it did not check the version number on
* deserialization.
*
* Notice that vvTried, mapAddr and vRandom are never encoded explicitly;
* they are instead reconstructed from the other information.
*
@ -275,132 +273,190 @@ public:
*
* We don't use ADD_SERIALIZE_METHODS since the serialization and deserialization code has
* very little in common.
*
*/
template<typename Stream>
void Serialize(Stream &s, int nType, int nVersionDummy) const
{
LOCK(cs);
unsigned char nVersion = 0;
unsigned char nVersion = 1;
s << nVersion;
s << ((unsigned char)32);
s << nKey;
s << nNew;
s << nTried;
int nUBuckets = ADDRMAN_NEW_BUCKET_COUNT;
int nUBuckets = ADDRMAN_NEW_BUCKET_COUNT ^ (1 << 30);
s << nUBuckets;
std::map<int, int> mapUnkIds;
int nIds = 0;
for (std::map<int, CAddrInfo>::const_iterator it = mapInfo.begin(); it != mapInfo.end(); it++) {
if (nIds == nNew) break; // this means nNew was wrong, oh ow
mapUnkIds[(*it).first] = nIds;
const CAddrInfo &info = (*it).second;
if (info.nRefCount) {
assert(nIds != nNew); // this means nNew was wrong, oh ow
s << info;
nIds++;
}
}
nIds = 0;
for (std::map<int, CAddrInfo>::const_iterator it = mapInfo.begin(); it != mapInfo.end(); it++) {
if (nIds == nTried) break; // this means nTried was wrong, oh ow
const CAddrInfo &info = (*it).second;
if (info.fInTried) {
assert(nIds != nTried); // this means nTried was wrong, oh ow
s << info;
nIds++;
}
}
for (std::vector<std::set<int> >::const_iterator it = vvNew.begin(); it != vvNew.end(); it++) {
const std::set<int> &vNew = (*it);
int nSize = vNew.size();
for (int bucket = 0; bucket < ADDRMAN_NEW_BUCKET_COUNT; bucket++) {
int nSize = 0;
for (int i = 0; i < ADDRMAN_BUCKET_SIZE; i++) {
if (vvNew[bucket][i] != -1)
nSize++;
}
s << nSize;
for (std::set<int>::const_iterator it2 = vNew.begin(); it2 != vNew.end(); it2++) {
int nIndex = mapUnkIds[*it2];
for (int i = 0; i < ADDRMAN_BUCKET_SIZE; i++) {
if (vvNew[bucket][i] != -1) {
int nIndex = mapUnkIds[vvNew[bucket][i]];
s << nIndex;
}
}
}
}
template<typename Stream>
void Unserialize(Stream& s, int nType, int nVersionDummy)
{
LOCK(cs);
Clear();
unsigned char nVersion;
s >> nVersion;
unsigned char nKeySize;
s >> nKeySize;
if (nKeySize != 32) throw std::ios_base::failure("Incorrect keysize in addrman deserialization");
s >> nKey;
s >> nNew;
s >> nTried;
int nUBuckets = 0;
s >> nUBuckets;
nIdCount = 0;
mapInfo.clear();
mapAddr.clear();
vRandom.clear();
vvTried = std::vector<std::vector<int> >(ADDRMAN_TRIED_BUCKET_COUNT, std::vector<int>(0));
vvNew = std::vector<std::set<int> >(ADDRMAN_NEW_BUCKET_COUNT, std::set<int>());
if (nVersion != 0) {
nUBuckets ^= (1 << 30);
}
// Deserialize entries from the new table.
for (int n = 0; n < nNew; n++) {
CAddrInfo &info = mapInfo[n];
s >> info;
mapAddr[info] = n;
info.nRandomPos = vRandom.size();
vRandom.push_back(n);
if (nUBuckets != ADDRMAN_NEW_BUCKET_COUNT) {
vvNew[info.GetNewBucket(nKey)].insert(n);
if (nVersion != 1 || nUBuckets != ADDRMAN_NEW_BUCKET_COUNT) {
// In case the new table data cannot be used (nVersion unknown, or bucket count wrong),
// immediately try to give them a reference based on their primary source address.
int nUBucket = info.GetNewBucket(nKey);
int nUBucketPos = info.GetBucketPosition(nKey, true, nUBucket);
if (vvNew[nUBucket][nUBucketPos] == -1) {
vvNew[nUBucket][nUBucketPos] = n;
info.nRefCount++;
}
}
}
nIdCount = nNew;
// Deserialize entries from the tried table.
int nLost = 0;
for (int n = 0; n < nTried; n++) {
CAddrInfo info;
s >> info;
std::vector<int> &vTried = vvTried[info.GetTriedBucket(nKey)];
if (vTried.size() < ADDRMAN_TRIED_BUCKET_SIZE) {
int nKBucket = info.GetTriedBucket(nKey);
int nKBucketPos = info.GetBucketPosition(nKey, false, nKBucket);
if (vvTried[nKBucket][nKBucketPos] == -1) {
info.nRandomPos = vRandom.size();
info.fInTried = true;
vRandom.push_back(nIdCount);
mapInfo[nIdCount] = info;
mapAddr[info] = nIdCount;
vTried.push_back(nIdCount);
vvTried[nKBucket][nKBucketPos] = nIdCount;
nIdCount++;
} else {
nLost++;
}
}
nTried -= nLost;
for (int b = 0; b < nUBuckets; b++) {
std::set<int> &vNew = vvNew[b];
// Deserialize positions in the new table (if possible).
for (int bucket = 0; bucket < nUBuckets; bucket++) {
int nSize = 0;
s >> nSize;
for (int n = 0; n < nSize; n++) {
int nIndex = 0;
s >> nIndex;
if (nIndex >= 0 && nIndex < nNew) {
CAddrInfo &info = mapInfo[nIndex];
if (nUBuckets == ADDRMAN_NEW_BUCKET_COUNT && info.nRefCount < ADDRMAN_NEW_BUCKETS_PER_ADDRESS) {
int nUBucketPos = info.GetBucketPosition(nKey, true, bucket);
if (nVersion == 1 && nUBuckets == ADDRMAN_NEW_BUCKET_COUNT && vvNew[bucket][nUBucketPos] == -1 && info.nRefCount < ADDRMAN_NEW_BUCKETS_PER_ADDRESS) {
info.nRefCount++;
vNew.insert(nIndex);
vvNew[bucket][nUBucketPos] = nIndex;
}
}
}
}
// Prune new entries with refcount 0 (as a result of collisions).
int nLostUnk = 0;
for (std::map<int, CAddrInfo>::const_iterator it = mapInfo.begin(); it != mapInfo.end(); ) {
if (it->second.fInTried == false && it->second.nRefCount == 0) {
std::map<int, CAddrInfo>::const_iterator itCopy = it++;
Delete(itCopy->first);
nLostUnk++;
} else {
it++;
}
}
if (nLost + nLostUnk > 0) {
LogPrint("addrman", "addrman lost %i new and %i tried addresses due to collisions\n", nLostUnk, nLost);
}
Check();
}
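
To make the incompatibility guard above concrete, here is a small worked aside (not part of the patch) showing what XORing the bucket count with 2**30 does to the serialized value:

// Worked illustration only: why XORing the bucket count with 2^30 makes a
// version-0 reader choke on version-1 data instead of silently misreading it.
#include <cstdio>

int main()
{
    int nUBuckets = 1024;                 // ADDRMAN_NEW_BUCKET_COUNT in 0.10
    int nOnDisk   = nUBuckets ^ (1 << 30);

    std::printf("written to disk: %d\n", nOnDisk);               // 1073742848
    std::printf("recovered count: %d\n", nOnDisk ^ (1 << 30));   // 1024
    // A v0 deserializer never checked the version byte; it would take the huge
    // value at face value, try to read that many buckets and fail, which is
    // exactly the incompatibility signal the comment above describes.
    return 0;
}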
unsigned int GetSerializeSize(int nType, int nVersion) const
{
return (CSizeComputer(nType, nVersion) << *this).size();
}
CAddrMan() : vRandom(0), vvTried(ADDRMAN_TRIED_BUCKET_COUNT, std::vector<int>(0)), vvNew(ADDRMAN_NEW_BUCKET_COUNT, std::set<int>())
void Clear()
{
nKey.resize(32);
GetRandBytes(&nKey[0], 32);
std::vector<int>().swap(vRandom);
nKey = GetRandHash();
for (size_t bucket = 0; bucket < ADDRMAN_NEW_BUCKET_COUNT; bucket++) {
for (size_t entry = 0; entry < ADDRMAN_BUCKET_SIZE; entry++) {
vvNew[bucket][entry] = -1;
}
}
for (size_t bucket = 0; bucket < ADDRMAN_TRIED_BUCKET_COUNT; bucket++) {
for (size_t entry = 0; entry < ADDRMAN_BUCKET_SIZE; entry++) {
vvTried[bucket][entry] = -1;
}
}
nIdCount = 0;
nTried = 0;
nNew = 0;
}
CAddrMan()
{
Clear();
}
~CAddrMan()
{
nKey.SetNull();
}
//! Return the number of (unique) addresses in all tables.
int size()
{
@ -477,13 +533,13 @@ public:
* Choose an address to connect to.
* nUnkBias determines how much "new" entries are favored over "tried" ones (0-100).
*/
CAddress Select(int nUnkBias = 50)
CAddress Select()
{
CAddress addrRet;
{
LOCK(cs);
Check();
addrRet = Select_(nUnkBias);
addrRet = Select_();
Check();
}
return addrRet;


@ -106,6 +106,14 @@ class CMainParams : public CChainParams {
public:
CMainParams() {
strNetworkID = "newcc";
consensus.nSubsidyHalvingInterval = 2100000;
consensus.nMajorityEnforceBlockUpgrade = 750;
consensus.nMajorityRejectBlockOutdated = 950;
consensus.nMajorityWindow = 1000;
consensus.powLimit = ~arith_uint256(0) >> 1;
consensus.nPowTargetTimespan = 30 * 60 * 12; // 6 hours (upstream Bitcoin uses 14 * 24 * 60 * 60, i.e. two weeks)
consensus.nPowTargetSpacing = 30;
consensus.fPowAllowMinDifficultyBlocks = false;
/**
* The message start string is designed to be unlikely to occur in normal data.
* The characters are rarely used upper ASCII, not valid as UTF-8, and produce
@ -117,14 +125,7 @@ public:
pchMessageStart[3] = 0xd9;
vAlertPubKey = ParseHex("04fc9702847840aaf195de8442ebecedf5b095cdbb9bc716bda9110971b28a49e0ead8564ff0db22209e0374782c093bb899692d524e9d6a6956e7c5ecbcd68284");
nDefaultPort = 8333;
bnProofOfWorkLimit = ~arith_uint256(0) >> 1;
nSubsidyHalvingInterval = 2100000;
nEnforceBlockUpgradeMajority = 750;
nRejectBlockOutdatedMajority = 950;
nToCheckBlockUpgradeMajority = 1000;
nMinerThreads = 2;
nTargetTimespan = 30 * 60 * 12; // 6 hours (upstream Bitcoin uses 14 * 24 * 60 * 60, i.e. two weeks)
nTargetSpacing = 30;
/**
* Build the genesis block. Note that the output of the genesis coinbase cannot
@ -168,10 +169,10 @@ public:
}
}*/
hashGenesisBlock = genesis.GetHash();
consensus.hashGenesisBlock = genesis.GetHash();
//printf("%s\n", hashGenesisBlock.GetHex().c_str());
//assert(hashGenesisBlock == uint256("0x000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"));
assert(hashGenesisBlock == uint256S("0x00ef6ded2b610fc5e4f06d187d12136bd5fba7b932fa0b66bf353c7c1648ec9c"));
//assert(consensus.hashGenesisBlock == uint256S("0x000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"));
assert(consensus.hashGenesisBlock == uint256S("0x00ef6ded2b610fc5e4f06d187d12136bd5fba7b932fa0b66bf353c7c1648ec9c"));
//printf("%s\n", genesis.hashMerkleRoot.GetHex().c_str());
//assert(genesis.hashMerkleRoot == uint256("0x4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"));
assert(genesis.hashMerkleRoot == uint256S("0xa7d51d407092059a2beeffab22e65d6176cfb3c33b93515109480aa7c81c9141"));
@ -180,7 +181,6 @@ public:
//vSeeds.push_back(CDNSSeedData("bluematt.me", "dnsseed.bluematt.me"));
//vSeeds.push_back(CDNSSeedData("dashjr.org", "dnsseed.bitcoin.dashjr.org"));
//vSeeds.push_back(CDNSSeedData("bitcoinstats.com", "seed.bitcoinstats.com"));
//vSeeds.push_back(CDNSSeedData("bitnodes.io", "seed.bitnodes.io"));
//vSeeds.push_back(CDNSSeedData("xf2.org", "bitseed.xf2.org"));
vSeeds.clear();
@ -196,8 +196,7 @@ public:
fRequireRPCPassword = true;
fMiningRequiresPeers = true;
fDefaultCheckMemPool = false;
fAllowMinDifficultyBlocks = false;
fDefaultConsistencyChecks = false;
fRequireStandard = true;
fMineBlocksOnDemand = false;
fTestnetToBeDeprecatedFieldRPC = false;
@ -217,24 +216,23 @@ class CTestNetParams : public CMainParams {
public:
CTestNetParams() {
strNetworkID = "test";
consensus.nMajorityEnforceBlockUpgrade = 51;
consensus.nMajorityRejectBlockOutdated = 75;
consensus.nMajorityWindow = 100;
consensus.fPowAllowMinDifficultyBlocks = true;
pchMessageStart[0] = 0x0b;
pchMessageStart[1] = 0x11;
pchMessageStart[2] = 0x09;
pchMessageStart[3] = 0x07;
vAlertPubKey = ParseHex("04302390343f91cc401d56d68b123028bf52e5fca1939df127f63c6467cdf9c8e2c14b61104cf817d0b780da337893ecc4aaff1309e536162dabbdb45200ca2b0a");
nDefaultPort = 18333;
nEnforceBlockUpgradeMajority = 51;
nRejectBlockOutdatedMajority = 75;
nToCheckBlockUpgradeMajority = 100;
nMinerThreads = 0;
nTargetTimespan = 14 * 24 * 60 * 60; //! two weeks
nTargetSpacing = 10 * 60;
//! Modify the testnet genesis block so the timestamp is valid for a later start.
genesis.nTime = 1296688602;
genesis.nNonce = 414098458;
hashGenesisBlock = genesis.GetHash();
//assert(hashGenesisBlock == uint256("0x000000000933ea01ad0ee984209779baaec3ced90fa3f408719526f8d77f4943"));
consensus.hashGenesisBlock = genesis.GetHash();
//assert(consensus.hashGenesisBlock == uint256("0x000000000933ea01ad0ee984209779baaec3ced90fa3f408719526f8d77f4943"));
vFixedSeeds.clear();
vSeeds.clear();
@ -253,8 +251,7 @@ public:
fRequireRPCPassword = true;
fMiningRequiresPeers = true;
fDefaultCheckMemPool = false;
fAllowMinDifficultyBlocks = true;
fDefaultConsistencyChecks = false;
fRequireStandard = false;
fMineBlocksOnDemand = false;
fTestnetToBeDeprecatedFieldRPC = true;
@ -273,32 +270,29 @@ class CRegTestParams : public CTestNetParams {
public:
CRegTestParams() {
strNetworkID = "regtest";
consensus.nSubsidyHalvingInterval = 150;
consensus.nMajorityEnforceBlockUpgrade = 750;
consensus.nMajorityRejectBlockOutdated = 950;
consensus.nMajorityWindow = 1000;
consensus.powLimit = ~arith_uint256(0) >> 1;
pchMessageStart[0] = 0xfa;
pchMessageStart[1] = 0xbf;
pchMessageStart[2] = 0xb5;
pchMessageStart[3] = 0xda;
nSubsidyHalvingInterval = 150;
nEnforceBlockUpgradeMajority = 750;
nRejectBlockOutdatedMajority = 950;
nToCheckBlockUpgradeMajority = 1000;
nMinerThreads = 1;
nTargetTimespan = 14 * 24 * 60 * 60; //! two weeks
nTargetSpacing = 10 * 60;
bnProofOfWorkLimit = ~arith_uint256(0) >> 1;
genesis.nTime = 1296688602;
genesis.nBits = 0x207fffff;
genesis.nNonce = 2;
hashGenesisBlock = genesis.GetHash();
consensus.hashGenesisBlock = genesis.GetHash();
nDefaultPort = 18444;
//assert(hashGenesisBlock == uint256("0x0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206"));
//assert(consensus.hashGenesisBlock == uint256("0x0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206"));
vFixedSeeds.clear(); //! Regtest mode doesn't have any fixed seeds.
vSeeds.clear(); //! Regtest mode doesn't have any DNS seeds.
fRequireRPCPassword = false;
fMiningRequiresPeers = false;
fDefaultCheckMemPool = true;
fAllowMinDifficultyBlocks = true;
fDefaultConsistencyChecks = true;
fRequireStandard = false;
fMineBlocksOnDemand = true;
fTestnetToBeDeprecatedFieldRPC = false;


@ -6,11 +6,12 @@
#ifndef BITCOIN_CHAINPARAMS_H
#define BITCOIN_CHAINPARAMS_H
#include "arith_uint256.h"
#include "chainparamsbase.h"
#include "checkpoints.h"
#include "consensus/params.h"
#include "primitives/block.h"
#include "protocol.h"
#include "arith_uint256.h"
#include <vector>
@ -39,16 +40,16 @@ public:
MAX_BASE58_TYPES
};
const uint256& HashGenesisBlock() const { return hashGenesisBlock; }
const Consensus::Params& GetConsensus() const { return consensus; }
const uint256& HashGenesisBlock() const { return consensus.hashGenesisBlock; }
const CMessageHeader::MessageStartChars& MessageStart() const { return pchMessageStart; }
const std::vector<unsigned char>& AlertKey() const { return vAlertPubKey; }
int GetDefaultPort() const { return nDefaultPort; }
const arith_uint256& ProofOfWorkLimit() const { return bnProofOfWorkLimit; }
int SubsidyHalvingInterval() const { return nSubsidyHalvingInterval; }
/** Used to check majorities for block version upgrade */
int EnforceBlockUpgradeMajority() const { return nEnforceBlockUpgradeMajority; }
int RejectBlockOutdatedMajority() const { return nRejectBlockOutdatedMajority; }
int ToCheckBlockUpgradeMajority() const { return nToCheckBlockUpgradeMajority; }
const arith_uint256& ProofOfWorkLimit() const { return consensus.powLimit; }
int SubsidyHalvingInterval() const { return consensus.nSubsidyHalvingInterval; }
int EnforceBlockUpgradeMajority() const { return consensus.nMajorityEnforceBlockUpgrade; }
int RejectBlockOutdatedMajority() const { return consensus.nMajorityRejectBlockOutdated; }
int ToCheckBlockUpgradeMajority() const { return consensus.nMajorityWindow; }
/** Used if GenerateBitcoins is called with a negative number of threads */
int DefaultMinerThreads() const { return nMinerThreads; }
@ -56,15 +57,15 @@ public:
bool RequireRPCPassword() const { return fRequireRPCPassword; }
/** Make miner wait to have peers to avoid wasting work */
bool MiningRequiresPeers() const { return fMiningRequiresPeers; }
/** Default value for -checkmempool argument */
bool DefaultCheckMemPool() const { return fDefaultCheckMemPool; }
/** Default value for -checkmempool and -checkblockindex argument */
bool DefaultConsistencyChecks() const { return fDefaultConsistencyChecks; }
/** Allow mining of a min-difficulty block */
bool AllowMinDifficultyBlocks() const { return fAllowMinDifficultyBlocks; }
bool AllowMinDifficultyBlocks() const { return consensus.fPowAllowMinDifficultyBlocks; }
/** Make standard checks */
bool RequireStandard() const { return fRequireStandard; }
int64_t TargetTimespan() const { return nTargetTimespan; }
int64_t TargetSpacing() const { return nTargetSpacing; }
int64_t DifficultyAdjustmentInterval() const { return nTargetTimespan / nTargetSpacing; }
int64_t TargetTimespan() const { return consensus.nPowTargetTimespan; }
int64_t TargetSpacing() const { return consensus.nPowTargetSpacing; }
int64_t DifficultyAdjustmentInterval() const { return consensus.nPowTargetTimespan / consensus.nPowTargetSpacing; }
/** Make miner stop after a block is found. In RPC, don't return until nGenProcLimit blocks are generated */
bool MineBlocksOnDemand() const { return fMineBlocksOnDemand; }
/** In the future use NetworkIDString() for RPC fields */
@ -78,18 +79,11 @@ public:
protected:
CChainParams() {}
uint256 hashGenesisBlock;
Consensus::Params consensus;
CMessageHeader::MessageStartChars pchMessageStart;
//! Raw pub key bytes for the broadcast alert signing key.
std::vector<unsigned char> vAlertPubKey;
int nDefaultPort;
arith_uint256 bnProofOfWorkLimit;
int nSubsidyHalvingInterval;
int nEnforceBlockUpgradeMajority;
int nRejectBlockOutdatedMajority;
int nToCheckBlockUpgradeMajority;
int64_t nTargetTimespan;
int64_t nTargetSpacing;
int nMinerThreads;
std::vector<CDNSSeedData> vSeeds;
std::vector<unsigned char> base58Prefixes[MAX_BASE58_TYPES];
@ -98,8 +92,7 @@ protected:
std::vector<CAddress> vFixedSeeds;
bool fRequireRPCPassword;
bool fMiningRequiresPeers;
bool fDefaultCheckMemPool;
bool fAllowMinDifficultyBlocks;
bool fDefaultConsistencyChecks;
bool fRequireStandard;
bool fMineBlocksOnDemand;
bool fTestnetToBeDeprecatedFieldRPC;

src/consensus/params.h (new file, 32 lines)

@ -0,0 +1,32 @@
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2014 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_CONSENSUS_CONSENSUS_PARAMS_H
#define BITCOIN_CONSENSUS_CONSENSUS_PARAMS_H
#include "arith_uint256.h"
#include "uint256.h"
namespace Consensus {
/**
* Parameters that influence chain consensus.
*/
struct Params {
uint256 hashGenesisBlock;
int nSubsidyHalvingInterval;
/** Used to check majorities for block version upgrade */
int nMajorityEnforceBlockUpgrade;
int nMajorityRejectBlockOutdated;
int nMajorityWindow;
/** Proof of work parameters */
arith_uint256 powLimit;
bool fPowAllowMinDifficultyBlocks;
int64_t nPowTargetSpacing;
int64_t nPowTargetTimespan;
int64_t DifficultyAdjustmentInterval() const { return nPowTargetTimespan / nPowTargetSpacing; }
};
} // namespace Consensus
#endif // BITCOIN_CONSENSUS_CONSENSUS_PARAMS_H
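
A minimal usage sketch (an aside, assuming only the accessors shown in this commit): consensus rules now live in one struct that callers fetch through the active chain parameters instead of through scattered CChainParams getters.

// Sketch only; assumes Params()/GetConsensus() as introduced in this commit.
#include "chainparams.h"
#include "consensus/params.h"
#include "util.h"

void LogDifficultyRules()
{
    const Consensus::Params& params = Params().GetConsensus();

    // For mainnet above: 30 * 60 * 12 s window / 30 s spacing = 720 blocks.
    int64_t nInterval = params.DifficultyAdjustmentInterval();

    LogPrintf("retarget every %d blocks, min-difficulty blocks %s\n",
              nInterval,
              params.fPowAllowMinDifficultyBlocks ? "allowed" : "not allowed");
}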


@ -699,8 +699,9 @@ bool AppInit2(boost::thread_group& threadGroup)
if (GetBoolArg("-benchmark", false))
InitWarning(_("Warning: Unsupported argument -benchmark ignored, use -debug=bench."));
// Checkmempool defaults to true in regtest mode
mempool.setSanityCheck(GetBoolArg("-checkmempool", Params().DefaultCheckMemPool()));
// Checkmempool and checkblockindex default to true in regtest mode
mempool.setSanityCheck(GetBoolArg("-checkmempool", Params().DefaultConsistencyChecks()));
fCheckBlockIndex = GetBoolArg("-checkblockindex", Params().DefaultConsistencyChecks());
Checkpoints::fEnabled = GetBoolArg("-checkpoints", true);
// -par=0 means autodetect, but nScriptCheckThreads==0 means no concurrency


@ -55,6 +55,7 @@ bool fImporting = false;
bool fReindex = false;
bool fTxIndex = false;
bool fIsBareMultisigStd = true;
bool fCheckBlockIndex = false;
unsigned int nCoinCacheSize = 5000;
/** Fees smaller than this (in satoshi) are considered zero fee (for relaying and mining) */
@ -76,6 +77,7 @@ void EraseOrphansFor(NodeId peer);
* and going backwards.
*/
static bool IsSuperMajority(int minVersion, const CBlockIndex* pstart, unsigned int nRequired);
static void CheckBlockIndex();
/** Constant stuff for coinbase transactions we create: */
CScript COINBASE_FLAGS;
@ -87,7 +89,7 @@ namespace {
struct CBlockIndexWorkComparator
{
bool operator()(CBlockIndex *pa, CBlockIndex *pb) {
bool operator()(CBlockIndex *pa, CBlockIndex *pb) const {
// First sort by most total work, ...
if (pa->nChainWork > pb->nChainWork) return false;
if (pa->nChainWork < pb->nChainWork) return true;
@ -109,8 +111,8 @@ namespace {
CBlockIndex *pindexBestInvalid;
/**
* The set of all CBlockIndex entries with BLOCK_VALID_TRANSACTIONS or better that are at least
* as good as our current tip. Entries may be failed, though.
* The set of all CBlockIndex entries with BLOCK_VALID_TRANSACTIONS (for itself and all ancestors) and
* as good as our current tip or better. Entries may be failed, though.
*/
set<CBlockIndex*, CBlockIndexWorkComparator> setBlockIndexCandidates;
/** Number of nodes with fSyncStarted. */
@ -1162,7 +1164,7 @@ bool ReadBlockFromDisk(CBlock& block, const CDiskBlockPos& pos)
}
// Check the header
if (!CheckProofOfWork(block.GetHash(), block.nBits))
if (!CheckProofOfWork(block.GetHash(), block.nBits, Params().GetConsensus()))
return error("ReadBlockFromDisk: Errors in block header at %s", pos.ToString());
return true;
@ -2371,6 +2373,7 @@ bool ActivateBestChain(CValidationState &state, CBlock *pblock) {
uiInterface.NotifyBlockTip(hashNewTip);
}
} while(pindexMostWork != chainActive.Tip());
CheckBlockIndex();
// Write changes periodically to disk, after relay.
if (!FlushStateToDisk(state, FLUSH_STATE_PERIODIC)) {
@ -2507,7 +2510,9 @@ bool ReceivedBlockTransactions(const CBlock &block, CValidationState& state, CBl
CBlockIndex *pindex = queue.front();
queue.pop_front();
pindex->nChainTx = (pindex->pprev ? pindex->pprev->nChainTx : 0) + pindex->nTx;
if (chainActive.Tip() == NULL || !setBlockIndexCandidates.value_comp()(pindex, chainActive.Tip())) {
setBlockIndexCandidates.insert(pindex);
}
std::pair<std::multimap<CBlockIndex*, CBlockIndex*>::iterator, std::multimap<CBlockIndex*, CBlockIndex*>::iterator> range = mapBlocksUnlinked.equal_range(pindex);
while (range.first != range.second) {
std::multimap<CBlockIndex*, CBlockIndex*>::iterator it = range.first;
@ -2607,7 +2612,7 @@ bool FindUndoPos(CValidationState &state, int nFile, CDiskBlockPos &pos, unsigne
bool CheckBlockHeader(const CBlockHeader& block, CValidationState& state, bool fCheckPOW)
{
// Check proof of work matches claimed amount
if (fCheckPOW && !CheckProofOfWork(block.GetHash(), block.nBits))
if (fCheckPOW && !CheckProofOfWork(block.GetHash(), block.nBits, Params().GetConsensus()))
return state.DoS(50, error("CheckBlockHeader(): proof of work failed"),
REJECT_INVALID, "high-hash");
@ -2690,7 +2695,7 @@ bool ContextualCheckBlockHeader(const CBlockHeader& block, CValidationState& sta
int nHeight = pindexPrev->nHeight+1;
// Check proof of work
if ((block.nBits != GetNextWorkRequired(pindexPrev, &block)))
if (block.nBits != GetNextWorkRequired(pindexPrev, &block, Params().GetConsensus()))
return state.DoS(100, error("%s: incorrect proof of work", __func__),
REJECT_INVALID, "bad-diffbits");
@ -2870,6 +2875,7 @@ bool ProcessNewBlock(CValidationState &state, CNode* pfrom, CBlock* pblock, CDis
if (pindex && pfrom) {
mapBlockSource[pindex->GetBlockHash()] = pfrom->GetId();
}
CheckBlockIndex();
if (!ret)
return error("%s: AcceptBlock FAILED", __func__);
}
@ -3360,6 +3366,136 @@ bool LoadExternalBlockFile(FILE* fileIn, CDiskBlockPos *dbp)
return nLoaded > 0;
}
void static CheckBlockIndex()
{
if (!fCheckBlockIndex) {
return;
}
LOCK(cs_main);
// Build forward-pointing map of the entire block tree.
std::multimap<CBlockIndex*,CBlockIndex*> forward;
for (BlockMap::iterator it = mapBlockIndex.begin(); it != mapBlockIndex.end(); it++) {
forward.insert(std::make_pair(it->second->pprev, it->second));
}
assert(forward.size() == mapBlockIndex.size());
std::pair<std::multimap<CBlockIndex*,CBlockIndex*>::iterator,std::multimap<CBlockIndex*,CBlockIndex*>::iterator> rangeGenesis = forward.equal_range(NULL);
CBlockIndex *pindex = rangeGenesis.first->second;
rangeGenesis.first++;
assert(rangeGenesis.first == rangeGenesis.second); // There is only one index entry with parent NULL.
// Iterate over the entire block tree, using depth-first search.
// Along the way, remember whether there are blocks on the path from genesis
// block being explored which are the first to have certain properties.
size_t nNodes = 0;
int nHeight = 0;
CBlockIndex* pindexFirstInvalid = NULL; // Oldest ancestor of pindex which is invalid.
CBlockIndex* pindexFirstMissing = NULL; // Oldest ancestor of pindex which does not have BLOCK_HAVE_DATA.
CBlockIndex* pindexFirstNotTreeValid = NULL; // Oldest ancestor of pindex which does not have BLOCK_VALID_TREE (regardless of being valid or not).
CBlockIndex* pindexFirstNotChainValid = NULL; // Oldest ancestor of pindex which does not have BLOCK_VALID_CHAIN (regardless of being valid or not).
CBlockIndex* pindexFirstNotScriptsValid = NULL; // Oldest ancestor of pindex which does not have BLOCK_VALID_SCRIPTS (regardless of being valid or not).
while (pindex != NULL) {
nNodes++;
if (pindexFirstInvalid == NULL && pindex->nStatus & BLOCK_FAILED_VALID) pindexFirstInvalid = pindex;
if (pindexFirstMissing == NULL && !(pindex->nStatus & BLOCK_HAVE_DATA)) pindexFirstMissing = pindex;
if (pindex->pprev != NULL && pindexFirstNotTreeValid == NULL && (pindex->nStatus & BLOCK_VALID_MASK) < BLOCK_VALID_TREE) pindexFirstNotTreeValid = pindex;
if (pindex->pprev != NULL && pindexFirstNotChainValid == NULL && (pindex->nStatus & BLOCK_VALID_MASK) < BLOCK_VALID_CHAIN) pindexFirstNotChainValid = pindex;
if (pindex->pprev != NULL && pindexFirstNotScriptsValid == NULL && (pindex->nStatus & BLOCK_VALID_MASK) < BLOCK_VALID_SCRIPTS) pindexFirstNotScriptsValid = pindex;
// Begin: actual consistency checks.
if (pindex->pprev == NULL) {
// Genesis block checks.
assert(pindex->GetBlockHash() == Params().HashGenesisBlock()); // Genesis block's hash must match.
assert(pindex == chainActive.Genesis()); // The current active chain's genesis block must be this block.
}
assert((pindexFirstMissing != NULL) == (pindex->nChainTx == 0)); // nChainTx == 0 is used to signal that all parent block's transaction data is available.
assert(pindex->nHeight == nHeight); // nHeight must be consistent.
assert(pindex->pprev == NULL || pindex->nChainWork >= pindex->pprev->nChainWork); // For every block except the genesis block, the chainwork must be larger than the parent's.
assert(nHeight < 2 || (pindex->pskip && (pindex->pskip->nHeight < nHeight))); // The pskip pointer must point back for all but the first 2 blocks.
assert(pindexFirstNotTreeValid == NULL); // All mapBlockIndex entries must at least be TREE valid
if ((pindex->nStatus & BLOCK_VALID_MASK) >= BLOCK_VALID_TREE) assert(pindexFirstNotTreeValid == NULL); // TREE valid implies all parents are TREE valid
if ((pindex->nStatus & BLOCK_VALID_MASK) >= BLOCK_VALID_CHAIN) assert(pindexFirstNotChainValid == NULL); // CHAIN valid implies all parents are CHAIN valid
if ((pindex->nStatus & BLOCK_VALID_MASK) >= BLOCK_VALID_SCRIPTS) assert(pindexFirstNotScriptsValid == NULL); // SCRIPTS valid implies all parents are SCRIPTS valid
if (pindexFirstInvalid == NULL) {
// Checks for not-invalid blocks.
assert((pindex->nStatus & BLOCK_FAILED_MASK) == 0); // The failed mask cannot be set for blocks without invalid parents.
}
if (!CBlockIndexWorkComparator()(pindex, chainActive.Tip()) && pindexFirstMissing == NULL) {
if (pindexFirstInvalid == NULL) { // If this block sorts at least as good as the current tip and is valid, it must be in setBlockIndexCandidates.
assert(setBlockIndexCandidates.count(pindex));
}
} else { // If this block sorts worse than the current tip, it cannot be in setBlockIndexCandidates.
assert(setBlockIndexCandidates.count(pindex) == 0);
}
// Check whether this block is in mapBlocksUnlinked.
std::pair<std::multimap<CBlockIndex*,CBlockIndex*>::iterator,std::multimap<CBlockIndex*,CBlockIndex*>::iterator> rangeUnlinked = mapBlocksUnlinked.equal_range(pindex->pprev);
bool foundInUnlinked = false;
while (rangeUnlinked.first != rangeUnlinked.second) {
assert(rangeUnlinked.first->first == pindex->pprev);
if (rangeUnlinked.first->second == pindex) {
foundInUnlinked = true;
break;
}
rangeUnlinked.first++;
}
if (pindex->pprev && pindex->nStatus & BLOCK_HAVE_DATA && pindexFirstMissing != NULL) {
if (pindexFirstInvalid == NULL) { // If this block has block data available, some parent doesn't, and has no invalid parents, it must be in mapBlocksUnlinked.
assert(foundInUnlinked);
}
} else { // If this block does not have block data available, or all parents do, it cannot be in mapBlocksUnlinked.
assert(!foundInUnlinked);
}
// assert(pindex->GetBlockHash() == pindex->GetBlockHeader().GetHash()); // Perhaps too slow
// End: actual consistency checks.
// Try descending into the first subnode.
std::pair<std::multimap<CBlockIndex*,CBlockIndex*>::iterator,std::multimap<CBlockIndex*,CBlockIndex*>::iterator> range = forward.equal_range(pindex);
if (range.first != range.second) {
// A subnode was found.
pindex = range.first->second;
nHeight++;
continue;
}
// This is a leaf node.
// Move upwards until we reach a node of which we have not yet visited the last child.
while (pindex) {
// We are going to either move to a parent or a sibling of pindex.
// If pindex was the first with a certain property, unset the corresponding variable.
if (pindex == pindexFirstInvalid) pindexFirstInvalid = NULL;
if (pindex == pindexFirstMissing) pindexFirstMissing = NULL;
if (pindex == pindexFirstNotTreeValid) pindexFirstNotTreeValid = NULL;
if (pindex == pindexFirstNotChainValid) pindexFirstNotChainValid = NULL;
if (pindex == pindexFirstNotScriptsValid) pindexFirstNotScriptsValid = NULL;
// Find our parent.
CBlockIndex* pindexPar = pindex->pprev;
// Find which child we just visited.
std::pair<std::multimap<CBlockIndex*,CBlockIndex*>::iterator,std::multimap<CBlockIndex*,CBlockIndex*>::iterator> rangePar = forward.equal_range(pindexPar);
while (rangePar.first->second != pindex) {
assert(rangePar.first != rangePar.second); // Our parent must have at least the node we're coming from as child.
rangePar.first++;
}
// Proceed to the next one.
rangePar.first++;
if (rangePar.first != rangePar.second) {
// Move to the sibling.
pindex = rangePar.first->second;
break;
} else {
// Move up further.
pindex = pindexPar;
nHeight--;
continue;
}
}
}
// Check that we actually traversed the entire map.
assert(nNodes == forward.size());
}
//////////////////////////////////////////////////////////////////////////////
//
// CAlert
@ -4118,6 +4254,8 @@ bool static ProcessMessage(CNode* pfrom, string strCommand, CDataStream& vRecv,
LogPrint("net", "more getheaders (%d) to end to peer=%d (startheight:%d)\n", pindexLast->nHeight, pfrom->id, pfrom->nStartingHeight);
pfrom->PushMessage("getheaders", chainActive.GetLocator(pindexLast), uint256());
}
CheckBlockIndex();
}
else if (strCommand == "block" && !fImporting && !fReindex) // Ignore blocks received while importing
@ -4622,7 +4760,7 @@ bool SendMessages(CNode* pto, bool fSendTrickle)
// transactions become unconfirmed and spams other nodes.
if (!fReindex && !fImporting && !IsInitialBlockDownload())
{
GetMainSignals().Broadcast();
GetMainSignals().Broadcast(nTimeBestReceived);
}
//


@ -18,7 +18,6 @@
#include "primitives/transaction.h"
#include "ncctrie.h"
#include "net.h"
#include "pow.h"
#include "script/script.h"
#include "script/sigcache.h"
#include "script/standard.h"
@ -117,7 +116,6 @@ extern BlockMap mapBlockIndex;
extern uint64_t nLastBlockTx;
extern uint64_t nLastBlockSize;
extern const std::string strMessageMagic;
extern int64_t nTimeBestReceived;
extern CWaitableCriticalSection csBestBlock;
extern CConditionVariable cvBlockChange;
extern bool fImporting;
@ -125,6 +123,7 @@ extern bool fReindex;
extern int nScriptCheckThreads;
extern bool fTxIndex;
extern bool fIsBareMultisigStd;
extern bool fCheckBlockIndex;
extern unsigned int nCoinCacheSize;
extern CFeeRate minRelayTxFee;
@ -169,7 +168,12 @@ bool LoadBlockIndex();
void UnloadBlockIndex();
/** Process protocol messages received from a given node */
bool ProcessMessages(CNode* pfrom);
/** Send queued protocol messages to be sent to a given node */
/**
* Send queued protocol messages to be sent to a given node.
*
* @param[in] pto The node which we are sending messages to.
* @param[in] fSendTrickle When true send the trickled data, otherwise trickle the data until true.
*/
bool SendMessages(CNode* pto, bool fSendTrickle);
/** Run an instance of the script checking thread */
void ThreadScriptCheck();


@ -86,7 +86,7 @@ void UpdateTime(CBlockHeader* pblock, const CBlockIndex* pindexPrev)
// Updating time can change work required on testnet:
if (Params().AllowMinDifficultyBlocks())
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock);
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock, Params().GetConsensus());
}
CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
@ -384,7 +384,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
// Fill in header
pblock->hashPrevBlock = pindexPrev->GetBlockHash();
UpdateTime(pblock, pindexPrev);
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock);
pblock->nBits = GetNextWorkRequired(pindexPrev, pblock, Params().GetConsensus());
pblock->nNonce = 0;
pblocktemplate->vTxSigOps[0] = GetLegacySigOpCount(pblock->vtx[0]);
CNCCTrieQueueUndo dummyundo;


@ -1221,8 +1221,7 @@ void ThreadOpenConnections()
int nTries = 0;
while (true)
{
// use an nUnkBias between 10 (no outgoing connections) and 90 (8 outgoing connections)
CAddress addr = addrman.Select(10 + min(nOutbound,8)*10);
CAddress addr = addrman.Select();
// if we selected an invalid address, restart
if (!addr.IsValid() || setConnected.count(addr.GetGroup()) || IsLocal(addr))
@ -1406,7 +1405,7 @@ void ThreadMessageHandler()
{
TRY_LOCK(pnode->cs_vSend, lockSend);
if (lockSend)
g_signals.SendMessages(pnode, pnode == pnodeTrickle);
g_signals.SendMessages(pnode, pnode == pnodeTrickle || pnode->fWhitelisted);
}
boost::this_thread::interruption_point();
}


@ -7,34 +7,33 @@
#include "arith_uint256.h"
#include "chain.h"
#include "chainparams.h"
#include "primitives/block.h"
#include "uint256.h"
#include "util.h"
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock)
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{
unsigned int nProofOfWorkLimit = Params().ProofOfWorkLimit().GetCompact();
unsigned int nProofOfWorkLimit = params.powLimit.GetCompact();
// Genesis block
if (pindexLast == NULL)
return nProofOfWorkLimit;
// Only change once per difficulty adjustment interval
if ((pindexLast->nHeight+1) % Params().DifficultyAdjustmentInterval() != 0)
if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0)
{
if (Params().AllowMinDifficultyBlocks())
if (params.fPowAllowMinDifficultyBlocks)
{
// Special difficulty rule for testnet:
// If the new block's timestamp is more than twice the target spacing
// ahead of the previous block, then allow mining of a min-difficulty block.
if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + Params().TargetSpacing()*2)
if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
return nProofOfWorkLimit;
else
{
// Return the last non-special-min-difficulty-rules-block
const CBlockIndex* pindex = pindexLast;
while (pindex->pprev && pindex->nHeight % Params().DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
pindex = pindex->pprev;
return pindex->nBits;
}
@ -44,22 +43,22 @@ unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHead
// Go back by what we want to be one difficulty adjustment interval's worth of blocks
const CBlockIndex* pindexFirst = pindexLast;
for (int i = 0; pindexFirst && i < Params().DifficultyAdjustmentInterval()-1; i++)
for (int i = 0; pindexFirst && i < params.DifficultyAdjustmentInterval()-1; i++)
pindexFirst = pindexFirst->pprev;
assert(pindexFirst);
return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime());
return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime(), params);
}
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime)
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{
// Limit adjustment step
int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
LogPrintf(" nActualTimespan = %d before bounds\n", nActualTimespan);
if (nActualTimespan < Params().TargetTimespan()/4)
nActualTimespan = Params().TargetTimespan()/4;
if (nActualTimespan > Params().TargetTimespan()*4)
nActualTimespan = Params().TargetTimespan()*4;
if (nActualTimespan < params.nPowTargetTimespan/4)
nActualTimespan = params.nPowTargetTimespan/4;
if (nActualTimespan > params.nPowTargetTimespan*4)
nActualTimespan = params.nPowTargetTimespan*4;
// Retarget
arith_uint256 bnNew;
@ -67,21 +66,21 @@ unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nF
bnNew.SetCompact(pindexLast->nBits);
bnOld = bnNew;
bnNew *= nActualTimespan;
bnNew /= Params().TargetTimespan();
bnNew /= params.nPowTargetTimespan;
if (bnNew > Params().ProofOfWorkLimit())
bnNew = Params().ProofOfWorkLimit();
if (bnNew > params.powLimit)
bnNew = params.powLimit;
/// debug print
LogPrintf("GetNextWorkRequired RETARGET\n");
LogPrintf("Params().TargetTimespan() = %d nActualTimespan = %d\n", Params().TargetTimespan(), nActualTimespan);
LogPrintf("params.nPowTargetTimespan = %d nActualTimespan = %d\n", params.nPowTargetTimespan, nActualTimespan);
LogPrintf("Before: %08x %s\n", pindexLast->nBits, bnOld.ToString());
LogPrintf("After: %08x %s\n", bnNew.GetCompact(), bnNew.ToString());
return bnNew.GetCompact();
}
bool CheckProofOfWork(uint256 hash, unsigned int nBits)
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
bool fNegative;
bool fOverflow;
@ -90,7 +89,7 @@ bool CheckProofOfWork(uint256 hash, unsigned int nBits)
bnTarget.SetCompact(nBits, &fNegative, &fOverflow);
// Check range
if (fNegative || bnTarget == 0 || fOverflow || bnTarget > Params().ProofOfWorkLimit())
if (fNegative || bnTarget == 0 || fOverflow || bnTarget > params.powLimit)
return error("CheckProofOfWork(): nBits below minimum work");
// Check proof of work matches claimed amount


@ -6,6 +6,8 @@
#ifndef BITCOIN_POW_H
#define BITCOIN_POW_H
#include "consensus/params.h"
#include <stdint.h>
class CBlockHeader;
@ -13,11 +15,11 @@ class CBlockIndex;
class uint256;
class arith_uint256;
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock);
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime);
unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params&);
unsigned int CalculateNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params&);
/** Check whether a block hash satisfies the proof-of-work requirement specified by nBits */
bool CheckProofOfWork(uint256 hash, unsigned int nBits);
bool CheckProofOfWork(uint256 hash, unsigned int nBits, const Consensus::Params&);
arith_uint256 GetBlockProof(const CBlockIndex& block);
#endif // BITCOIN_POW_H
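
The retarget arithmetic itself is unchanged by threading Consensus::Params through: the old target is scaled by actual/expected timespan, clamped to a factor of four either way. A worked aside with made-up timings follows (the real code operates on arith_uint256 compact targets, not doubles):

// Worked sketch of the clamp in CalculateNextWorkRequired, using invented numbers.
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
    const int64_t nPowTargetTimespan = 30 * 60 * 12; // 21600 s, as in CMainParams above
    int64_t nActualTimespan = 4000;                  // hypothetical: blocks arrived fast

    // Limit the adjustment step to a factor of 4 in either direction.
    nActualTimespan = std::max(nActualTimespan, nPowTargetTimespan / 4);
    nActualTimespan = std::min(nActualTimespan, nPowTargetTimespan * 4);

    // New target = old target * actual / expected; a smaller target is harder.
    double scale = double(nActualTimespan) / double(nPowTargetTimespan);
    std::printf("clamped timespan %lld s, target scaled by %.3f\n",
                (long long)nActualTimespan, scale);  // 5400 s, 0.250
    return 0;
}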


@ -67,7 +67,14 @@ public:
/** nServices flags */
enum {
// NODE_NETWORK means that the node is capable of serving the block chain. It is currently
// set by all Bitcoin Core nodes, and is unset by SPV clients or other peers that just want
// network services but don't provide them.
NODE_NETWORK = (1 << 0),
// NODE_GETUTXO means the node is capable of responding to the getutxo protocol request.
// Bitcoin Core does not support this but a patch set called Bitcoin XT does.
// See BIP 64 for details on how this is implemented.
NODE_GETUTXO = (1 << 1),
// Bits 24-31 are reserved for temporary experiments. Just pick a bit that
// isn't getting used, or one not being used much, and notify the
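
The expanded comments spell out what each service bit means; peers advertise these bits in nServices and other code checks them with a bitwise AND. A small stand-alone sketch (the services variable and the main function are invented for illustration; the flag values mirror the enum above):

// Sketch only: testing advertised service bits.
#include <cstdint>
#include <cstdio>

enum {
    NODE_NETWORK = (1 << 0), // full node, serves the block chain
    NODE_GETUTXO = (1 << 1)  // answers BIP 64 getutxo queries (Bitcoin XT, not Core)
};

int main()
{
    uint64_t services = NODE_NETWORK; // what a typical full node would advertise

    std::printf("serves blocks:  %s\n", (services & NODE_NETWORK) ? "yes" : "no");
    std::printf("serves getutxo: %s\n", (services & NODE_GETUTXO) ? "yes" : "no");
    return 0;
}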


@ -859,18 +859,18 @@ void BitcoinGUI::closeEvent(QCloseEvent *event)
}
#ifdef ENABLE_WALLET
void BitcoinGUI::incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address)
void BitcoinGUI::incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address, const QString& label)
{
// On new transaction, make an info balloon
QString msg = tr("Date: %1\n").arg(date) +
tr("Amount: %1\n").arg(BitcoinUnits::formatWithUnit(unit, amount, true)) +
tr("Type: %1\n").arg(type);
if (!label.isEmpty())
msg += tr("Label: %1\n").arg(label);
else if (!address.isEmpty())
msg += tr("Address: %1\n").arg(address);
message((amount)<0 ? tr("Sent transaction") : tr("Incoming transaction"),
tr("Date: %1\n"
"Amount: %2\n"
"Type: %3\n"
"Address: %4\n")
.arg(date)
.arg(BitcoinUnits::formatWithUnit(unit, amount, true))
.arg(type)
.arg(address), CClientUIInterface::MSG_INFORMATION);
msg, CClientUIInterface::MSG_INFORMATION);
}
#endif // ENABLE_WALLET


@ -165,7 +165,7 @@ public slots:
bool handlePaymentRequest(const SendCoinsRecipient& recipient);
/** Show incoming transaction notification for new transactions. */
void incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address);
void incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address, const QString& label);
#endif // ENABLE_WALLET
private slots:


@ -878,10 +878,13 @@ QString formatServicesStr(quint64 mask)
switch (check)
{
case NODE_NETWORK:
strList.append(QObject::tr("NETWORK"));
strList.append("NETWORK");
break;
case NODE_GETUTXO:
strList.append("GETUTXO");
break;
default:
strList.append(QString("%1[%2]").arg(QObject::tr("UNKNOWN")).arg(check));
strList.append(QString("%1[%2]").arg("UNKNOWN").arg(check));
}
}
}


@ -115,7 +115,7 @@ PeerTableModel::PeerTableModel(ClientModel *parent) :
clientModel(parent),
timer(0)
{
columns << tr("Address/Hostname") << tr("User Agent") << tr("Ping Time");
columns << tr("Node/Service") << tr("User Agent") << tr("Ping Time");
priv = new PeerTablePriv();
// default to unsorted
priv->sortColumn = -1;


@ -357,7 +357,6 @@ void RPCConsole::clear()
ui->messagesWidget->document()->setDefaultStyleSheet(
"table { }"
"td.time { color: #808080; padding-top: 3px; } "
"td.message { font-family: monospace; font-size: 12px; } " // Todo: Remove fixed font-size
"td.cmd-request { color: #006060; } "
"td.cmd-error { color: red; } "
"b { color: #006060; } "


@ -6,6 +6,7 @@
#include "config/bitcoin-config.h"
#endif
#include "util.h"
#include "uritests.h"
#ifdef ENABLE_WALLET
@ -27,6 +28,7 @@ Q_IMPORT_PLUGIN(qkrcodecs)
// This is all you need to run all the tests
int main(int argc, char *argv[])
{
SetupEnvironment();
bool fInvalid = false;
// Don't remove this, it's needed to access


@ -227,7 +227,7 @@ TransactionTableModel::TransactionTableModel(CWallet* wallet, WalletModel *paren
priv(new TransactionTablePriv(wallet, this)),
fProcessingQueuedTransactions(false)
{
columns << QString() << QString() << tr("Date") << tr("Type") << tr("Address") << BitcoinUnits::getAmountColumnTitle(walletModel->getOptionsModel()->getDisplayUnit());
columns << QString() << QString() << tr("Date") << tr("Type") << tr("Label") << BitcoinUnits::getAmountColumnTitle(walletModel->getOptionsModel()->getDisplayUnit());
priv->refreshWallet();
connect(walletModel->getOptionsModel(), SIGNAL(displayUnitChanged(int)), this, SLOT(updateDisplayUnit()));
@ -626,7 +626,7 @@ QVariant TransactionTableModel::headerData(int section, Qt::Orientation orientat
case Watchonly:
return tr("Whether or not a watch-only address is involved in this transaction.");
case ToAddress:
return tr("Destination address of transaction.");
return tr("User-defined intent/purpose of the transaction.");
case Amount:
return tr("Amount removed from or added to balance.");
}


@ -93,7 +93,7 @@ void WalletView::setBitcoinGUI(BitcoinGUI *gui)
connect(this, SIGNAL(encryptionStatusChanged(int)), gui, SLOT(setEncryptionStatus(int)));
// Pass through transaction notifications
connect(this, SIGNAL(incomingTransaction(QString,int,CAmount,QString,QString)), gui, SLOT(incomingTransaction(QString,int,CAmount,QString,QString)));
connect(this, SIGNAL(incomingTransaction(QString,int,CAmount,QString,QString,QString)), gui, SLOT(incomingTransaction(QString,int,CAmount,QString,QString,QString)));
}
}
@ -149,9 +149,11 @@ void WalletView::processNewTransaction(const QModelIndex& parent, int start, int
QString date = ttm->index(start, TransactionTableModel::Date, parent).data().toString();
qint64 amount = ttm->index(start, TransactionTableModel::Amount, parent).data(Qt::EditRole).toULongLong();
QString type = ttm->index(start, TransactionTableModel::Type, parent).data().toString();
QString address = ttm->index(start, TransactionTableModel::ToAddress, parent).data().toString();
QModelIndex index = ttm->index(start, 0, parent);
QString address = ttm->data(index, TransactionTableModel::AddressRole).toString();
QString label = ttm->data(index, TransactionTableModel::LabelRole).toString();
emit incomingTransaction(date, walletModel->getOptionsModel()->getDisplayUnit(), amount, type, address);
emit incomingTransaction(date, walletModel->getOptionsModel()->getDisplayUnit(), amount, type, address, label);
}
void WalletView::gotoOverviewPage()


@ -113,7 +113,7 @@ signals:
/** Encryption status of wallet changed */
void encryptionStatusChanged(int status);
/** Notify that a new transaction appeared */
void incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address);
void incomingTransaction(const QString& date, int unit, const CAmount& amount, const QString& type, const QString& address, const QString& label);
};
#endif // BITCOIN_QT_WALLETVIEW_H


@ -181,7 +181,7 @@ Value setgenerate(const Array& params, bool fHelp)
LOCK(cs_main);
IncrementExtraNonce(pblock, chainActive.Tip(), nExtraNonce);
}
while (!CheckProofOfWork(pblock->GetHash(), pblock->nBits)) {
while (!CheckProofOfWork(pblock->GetHash(), pblock->nBits, Params().GetConsensus())) {
// Yes, there is a chance every nonce could fail to satisfy the -regtest
// target -- 1 in 2^(2^32). That ain't gonna happen.
++pblock->nNonce;


@ -333,6 +333,9 @@ static const CRPCCommand vRPCCommands[] =
{ "hidden", "invalidateblock", &invalidateblock, true, false },
{ "hidden", "reconsiderblock", &reconsiderblock, true, false },
{ "hidden", "setmocktime", &setmocktime, true, false },
#ifdef ENABLE_WALLET
{ "hidden", "resendwallettransactions", &resendwallettransactions, true, true },
#endif
#ifdef ENABLE_WALLET
/* Wallet */


@ -211,6 +211,7 @@ extern json_spirit::Value getwalletinfo(const json_spirit::Array& params, bool f
extern json_spirit::Value getblockchaininfo(const json_spirit::Array& params, bool fHelp);
extern json_spirit::Value getnetworkinfo(const json_spirit::Array& params, bool fHelp);
extern json_spirit::Value setmocktime(const json_spirit::Array& params, bool fHelp);
extern json_spirit::Value resendwallettransactions(const json_spirit::Array& params, bool fHelp);
extern json_spirit::Value getrawtransaction(const json_spirit::Array& params, bool fHelp); // in rcprawtransaction.cpp
extern json_spirit::Value listunspent(const json_spirit::Array& params, bool fHelp);

src/test/mempool_tests.cpp (new file, 104 lines)

@ -0,0 +1,104 @@
// Copyright (c) 2011-2014 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include "main.h"
#include "txmempool.h"
#include "util.h"
#include "test/test_bitcoin.h"
#include <boost/test/unit_test.hpp>
#include <list>
BOOST_FIXTURE_TEST_SUITE(mempool_tests, TestingSetup)
BOOST_AUTO_TEST_CASE(MempoolRemoveTest)
{
// Test CTxMemPool::remove functionality
// Parent transaction with three children,
// and three grand-children:
CMutableTransaction txParent;
txParent.vin.resize(1);
txParent.vin[0].scriptSig = CScript() << OP_11;
txParent.vout.resize(3);
for (int i = 0; i < 3; i++)
{
txParent.vout[i].scriptPubKey = CScript() << OP_11 << OP_EQUAL;
txParent.vout[i].nValue = 33000LL;
}
CMutableTransaction txChild[3];
for (int i = 0; i < 3; i++)
{
txChild[i].vin.resize(1);
txChild[i].vin[0].scriptSig = CScript() << OP_11;
txChild[i].vin[0].prevout.hash = txParent.GetHash();
txChild[i].vin[0].prevout.n = i;
txChild[i].vout.resize(1);
txChild[i].vout[0].scriptPubKey = CScript() << OP_11 << OP_EQUAL;
txChild[i].vout[0].nValue = 11000LL;
}
CMutableTransaction txGrandChild[3];
for (int i = 0; i < 3; i++)
{
txGrandChild[i].vin.resize(1);
txGrandChild[i].vin[0].scriptSig = CScript() << OP_11;
txGrandChild[i].vin[0].prevout.hash = txChild[i].GetHash();
txGrandChild[i].vin[0].prevout.n = 0;
txGrandChild[i].vout.resize(1);
txGrandChild[i].vout[0].scriptPubKey = CScript() << OP_11 << OP_EQUAL;
txGrandChild[i].vout[0].nValue = 11000LL;
}
CTxMemPool testPool(CFeeRate(0));
std::list<CTransaction> removed;
// Nothing in pool, remove should do nothing:
testPool.remove(txParent, removed, true);
BOOST_CHECK_EQUAL(removed.size(), 0);
// Just the parent:
testPool.addUnchecked(txParent.GetHash(), CTxMemPoolEntry(txParent, 0, 0, 0.0, 1));
testPool.remove(txParent, removed, true);
BOOST_CHECK_EQUAL(removed.size(), 1);
removed.clear();
// Parent, children, grandchildren:
testPool.addUnchecked(txParent.GetHash(), CTxMemPoolEntry(txParent, 0, 0, 0.0, 1));
for (int i = 0; i < 3; i++)
{
testPool.addUnchecked(txChild[i].GetHash(), CTxMemPoolEntry(txChild[i], 0, 0, 0.0, 1));
testPool.addUnchecked(txGrandChild[i].GetHash(), CTxMemPoolEntry(txGrandChild[i], 0, 0, 0.0, 1));
}
// Remove Child[0], GrandChild[0] should be removed:
testPool.remove(txChild[0], removed, true);
BOOST_CHECK_EQUAL(removed.size(), 2);
removed.clear();
// ... make sure grandchild and child are gone:
testPool.remove(txGrandChild[0], removed, true);
BOOST_CHECK_EQUAL(removed.size(), 0);
testPool.remove(txChild[0], removed, true);
BOOST_CHECK_EQUAL(removed.size(), 0);
// Remove parent, all children/grandchildren should go:
testPool.remove(txParent, removed, true);
BOOST_CHECK_EQUAL(removed.size(), 5);
BOOST_CHECK_EQUAL(testPool.size(), 0);
removed.clear();
// Add children and grandchildren, but NOT the parent (simulate the parent being in a block)
for (int i = 0; i < 3; i++)
{
testPool.addUnchecked(txChild[i].GetHash(), CTxMemPoolEntry(txChild[i], 0, 0, 0.0, 1));
testPool.addUnchecked(txGrandChild[i].GetHash(), CTxMemPoolEntry(txGrandChild[i], 0, 0, 0.0, 1));
}
// Now remove the parent, as might happen if a block-re-org occurs but the parent cannot be
// put into the mempool (maybe because it is non-standard):
testPool.remove(txParent, removed, true);
BOOST_CHECK_EQUAL(removed.size(), 6);
BOOST_CHECK_EQUAL(testPool.size(), 0);
removed.clear();
}
BOOST_AUTO_TEST_SUITE_END()


@ -74,7 +74,7 @@ bool CreateBlock(CBlockTemplate* pblocktemplate, bool f = false)
for (int i = 0; ; ++i)
{
pblock->nNonce = i;
if (CheckProofOfWork(pblock->GetHash(), pblock->nBits))
if (CheckProofOfWork(pblock->GetHash(), pblock->nBits, Params().GetConsensus()))
break;
}
CValidationState state;


@ -18,13 +18,14 @@ BOOST_FIXTURE_TEST_SUITE(pow_tests, BasicTestingSetup)
/*BOOST_AUTO_TEST_CASE(get_next_work)
{
SelectParams(CBaseChainParams::MAIN);
const Consensus::Params& params = Params().GetConsensus();
int64_t nLastRetargetTime = 1261130161; // Block #30240
CBlockIndex pindexLast;
pindexLast.nHeight = 32255;
pindexLast.nTime = 1262152739; // Block #32255
pindexLast.nBits = 0x1d00ffff;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime), 0x1d00d86a);
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, params), 0x1d00d86a);
}
*/
@ -34,13 +35,14 @@ BOOST_FIXTURE_TEST_SUITE(pow_tests, BasicTestingSetup)
BOOST_AUTO_TEST_CASE(get_next_work_pow_limit)
{
SelectParams(CBaseChainParams::MAIN);
const Consensus::Params& params = Params().GetConsensus();
int64_t nLastRetargetTime = 1231006505; // Block #0
CBlockIndex pindexLast;
pindexLast.nHeight = 2015;
pindexLast.nTime = 1233061996; // Block #2015
pindexLast.nBits = 0x1d00ffff;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime), 0x1d00ffff);
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, params), 0x1d00ffff);
}
*/
@ -49,13 +51,14 @@ BOOST_AUTO_TEST_CASE(get_next_work_pow_limit)
/*BOOST_AUTO_TEST_CASE(get_next_work_lower_limit_actual)
{
SelectParams(CBaseChainParams::MAIN);
const Consensus::Params& params = Params().GetConsensus();
int64_t nLastRetargetTime = 1279008237; // Block #66528
CBlockIndex pindexLast;
pindexLast.nHeight = 68543;
pindexLast.nTime = 1279297671; // Block #68543
pindexLast.nBits = 0x1c05a3f4;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime), 0x1c0168fd);
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, params), 0x1c0168fd);
}
*/
@ -63,12 +66,14 @@ BOOST_AUTO_TEST_CASE(get_next_work_pow_limit)
BOOST_AUTO_TEST_CASE(get_next_work_upper_limit_actual)
{
SelectParams(CBaseChainParams::MAIN);
const Consensus::Params& params = Params().GetConsensus();
int64_t nLastRetargetTime = 1263163443; // NOTE: Not an actual block time
CBlockIndex pindexLast;
pindexLast.nHeight = 46367;
pindexLast.nTime = 1269211443; // Block #46367
pindexLast.nBits = 0x1c387f6f;
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime), 0x1d00e1fd);
BOOST_CHECK_EQUAL(CalculateNextWorkRequired(&pindexLast, nLastRetargetTime, params), 0x1d00e1fd);
}
BOOST_AUTO_TEST_SUITE_END()


@ -28,7 +28,9 @@ extern void noui_connect();
BasicTestingSetup::BasicTestingSetup()
{
SetupEnvironment();
fPrintToDebugLog = false; // don't want to write to debug.log file
fCheckBlockIndex = true;
SelectParams(CBaseChainParams::MAIN);
}
BasicTestingSetup::~BasicTestingSetup()


@ -5,6 +5,8 @@
#include "txdb.h"
#include "chainparams.h"
#include "hash.h"
#include "main.h"
#include "pow.h"
#include "uint256.h"
@ -224,7 +226,7 @@ bool CBlockTreeDB::LoadBlockIndexGuts()
pindexNew->nStatus = diskindex.nStatus;
pindexNew->nTx = diskindex.nTx;
if (!CheckProofOfWork(pindexNew->GetBlockHash(), pindexNew->nBits))
if (!CheckProofOfWork(pindexNew->GetBlockHash(), pindexNew->nBits, Params().GetConsensus()))
return error("LoadBlockIndex(): CheckProofOfWork failed: %s", pindexNew->ToString());
pcursor->Next();


@ -16,7 +16,7 @@
class CBlockFileInfo;
class CBlockIndex;
class CDiskTxPos;
struct CDiskTxPos;
class uint256;
//! -dbcache default (MiB)


@ -444,6 +444,18 @@ void CTxMemPool::remove(const CTransaction &origTx, std::list<CTransaction>& rem
LOCK(cs);
std::deque<uint256> txToRemove;
txToRemove.push_back(origTx.GetHash());
if (fRecursive && !mapTx.count(origTx.GetHash())) {
// If recursively removing but origTx isn't in the mempool
// be sure to remove any children that are in the pool. This can
// happen during chain re-orgs if origTx isn't re-accepted into
// the mempool for any reason.
for (unsigned int i = 0; i < origTx.vout.size(); i++) {
std::map<COutPoint, CInPoint>::iterator it = mapNextTx.find(COutPoint(origTx.GetHash(), i));
if (it == mapNextTx.end())
continue;
txToRemove.push_back(it->second.ptx->GetHash());
}
}
while (!txToRemove.empty())
{
uint256 hash = txToRemove.front();
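
Editorial note: the new fRecursive branch above seeds the removal queue with any in-pool children of origTx, so descendants do not linger in the mempool when origTx itself was never re-accepted after a re-org. A minimal, self-contained sketch of that walk using standard containers — the names and types here are simplified stand-ins, not the real CTxMemPool structures:

    #include <deque>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    // Simplified stand-ins: an outpoint is (txid, output index), and spentBy
    // maps each outpoint to the txid of the in-pool child spending it.
    typedef std::pair<std::string, unsigned int> Outpoint;

    std::set<std::string> RemoveRecursive(const std::string& origTxid,
                                          unsigned int origOutputs,
                                          const std::map<Outpoint, std::string>& spentBy,
                                          std::set<std::string>& pool)
    {
        std::deque<std::string> toRemove;
        toRemove.push_back(origTxid);

        // If origTxid is no longer in the pool, still queue any children that
        // spend its outputs (mirrors the fRecursive case added above).
        if (!pool.count(origTxid)) {
            for (unsigned int i = 0; i < origOutputs; i++) {
                std::map<Outpoint, std::string>::const_iterator it =
                    spentBy.find(Outpoint(origTxid, i));
                if (it != spentBy.end())
                    toRemove.push_back(it->second);
            }
        }

        std::set<std::string> removed;
        while (!toRemove.empty()) {
            std::string txid = toRemove.front();
            toRemove.pop_front();
            if (pool.erase(txid))
                removed.insert(txid);
            // In the real mempool, children of txid would be queued here as well.
        }
        return removed;
    }

    int main()
    {
        std::set<std::string> pool;
        pool.insert("child");                       // parent fell out of the pool
        std::map<Outpoint, std::string> spentBy;
        spentBy[Outpoint("parent", 0)] = "child";   // child spends parent's output 0

        std::set<std::string> removed = RemoveRecursive("parent", 1, spentBy, pool);
        for (std::set<std::string>::const_iterator it = removed.begin(); it != removed.end(); ++it)
            std::cout << "removed " << *it << "\n";
        return 0;
    }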

View file

@ -723,18 +723,19 @@ void RenameThread(const char* name)
void SetupEnvironment()
{
std::locale loc("C");
// On most POSIX systems (e.g. Linux, but not BSD) the environment's locale
// may be invalid, in which case the "C" locale is used as fallback.
#if !defined(WIN32) && !defined(MAC_OSX) && !defined(__FreeBSD__) && !defined(__OpenBSD__)
try {
std::locale(""); // Raises a runtime error if current locale is invalid
loc = std::locale(""); // Raises a runtime error if current locale is invalid
} catch (const std::runtime_error&) {
std::locale::global(std::locale("C"));
setenv("LC_ALL", "C", 1);
}
#endif
// The path locale is lazy initialized and to avoid deinitialization errors
// in multithreading environments, it is set explicitly by the main thread.
boost::filesystem::path::imbue(std::locale());
boost::filesystem::path::imbue(loc);
}
void SetThreadPriority(int nPriority)
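
Editorial note: the fix above keeps the validated locale in `loc` and imbues boost::filesystem's path locale with that same object, rather than re-reading the global locale. A minimal sketch of the fallback pattern on its own — it assumes Boost.Filesystem is available at build/link time, and only the locale handling is shown:

    #include <cstdlib>
    #include <locale>
    #include <stdexcept>

    #include <boost/filesystem/path.hpp>

    static void SetupLocaleSketch()
    {
        std::locale loc("C");
        try {
            loc = std::locale(""); // throws std::runtime_error if the environment locale is invalid
        } catch (const std::runtime_error&) {
            std::locale::global(std::locale("C"));
    #ifndef _WIN32
            setenv("LC_ALL", "C", 1);
    #endif
        }
        // Imbue the path locale with the locale object we actually validated,
        // instead of re-reading a possibly different global locale.
        boost::filesystem::path::imbue(loc);
    }

    int main()
    {
        SetupLocaleSketch();
        return 0;
    }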

View file

@ -18,13 +18,13 @@ void RegisterValidationInterface(CValidationInterface* pwalletIn) {
g_signals.UpdatedTransaction.connect(boost::bind(&CValidationInterface::UpdatedTransaction, pwalletIn, _1));
g_signals.SetBestChain.connect(boost::bind(&CValidationInterface::SetBestChain, pwalletIn, _1));
g_signals.Inventory.connect(boost::bind(&CValidationInterface::Inventory, pwalletIn, _1));
g_signals.Broadcast.connect(boost::bind(&CValidationInterface::ResendWalletTransactions, pwalletIn));
g_signals.Broadcast.connect(boost::bind(&CValidationInterface::ResendWalletTransactions, pwalletIn, _1));
g_signals.BlockChecked.connect(boost::bind(&CValidationInterface::BlockChecked, pwalletIn, _1, _2));
}
void UnregisterValidationInterface(CValidationInterface* pwalletIn) {
g_signals.BlockChecked.disconnect(boost::bind(&CValidationInterface::BlockChecked, pwalletIn, _1, _2));
g_signals.Broadcast.disconnect(boost::bind(&CValidationInterface::ResendWalletTransactions, pwalletIn));
g_signals.Broadcast.disconnect(boost::bind(&CValidationInterface::ResendWalletTransactions, pwalletIn, _1));
g_signals.Inventory.disconnect(boost::bind(&CValidationInterface::Inventory, pwalletIn, _1));
g_signals.SetBestChain.disconnect(boost::bind(&CValidationInterface::SetBestChain, pwalletIn, _1));
g_signals.UpdatedTransaction.disconnect(boost::bind(&CValidationInterface::UpdatedTransaction, pwalletIn, _1));

View file

@ -9,7 +9,7 @@
#include <boost/signals2/signal.hpp>
class CBlock;
class CBlockLocator;
struct CBlockLocator;
class CTransaction;
class CValidationInterface;
class CValidationState;
@ -33,7 +33,7 @@ protected:
virtual void SetBestChain(const CBlockLocator &locator) {};
virtual void UpdatedTransaction(const uint256 &hash) {};
virtual void Inventory(const uint256 &hash) {};
virtual void ResendWalletTransactions() {};
virtual void ResendWalletTransactions(int64_t nBestBlockTime) {};
virtual void BlockChecked(const CBlock&, const CValidationState&) {};
friend void ::RegisterValidationInterface(CValidationInterface*);
friend void ::UnregisterValidationInterface(CValidationInterface*);
@ -52,7 +52,7 @@ struct CMainSignals {
/** Notifies listeners about an inventory item being seen on the network. */
boost::signals2::signal<void (const uint256 &)> Inventory;
/** Tells listeners to broadcast their data. */
boost::signals2::signal<void ()> Broadcast;
boost::signals2::signal<void (int64_t nBestBlockTime)> Broadcast;
/** Notifies listeners of a block validation result */
boost::signals2::signal<void (const CBlock&, const CValidationState&)> BlockChecked;
};
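
Editorial note: the Broadcast signal now carries the best block's timestamp, so subscribers such as the wallet can decide what to rebroadcast relative to it. A self-contained sketch of the same boost::signals2 pattern — the interface and subscriber below are illustrative stand-ins, not the real CValidationInterface:

    #include <cstdint>
    #include <iostream>

    #include <boost/bind.hpp>
    #include <boost/signals2/signal.hpp>

    struct BroadcastSignals {
        // Same shape as the Broadcast signal after this change.
        boost::signals2::signal<void (int64_t nBestBlockTime)> Broadcast;
    };

    struct WalletLikeListener {
        // Stand-in for CValidationInterface::ResendWalletTransactions.
        void ResendWalletTransactions(int64_t nBestBlockTime)
        {
            std::cout << "rebroadcast relative to best block time " << nBestBlockTime << "\n";
        }
    };

    int main()
    {
        BroadcastSignals signals;
        WalletLikeListener listener;

        // The extra _1 placeholder forwards the signal argument, as in
        // RegisterValidationInterface above.
        signals.Broadcast.connect(
            boost::bind(&WalletLikeListener::ResendWalletTransactions, &listener, _1));

        signals.Broadcast(1428000000); // fire with an illustrative block time
        return 0;
    }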

View file

@ -2485,3 +2485,25 @@ Value getwalletinfo(const Array& params, bool fHelp)
obj.push_back(Pair("unlocked_until", nWalletUnlockTime));
return obj;
}
Value resendwallettransactions(const Array& params, bool fHelp)
{
if (fHelp || params.size() != 0)
throw runtime_error(
"resendwallettransactions\n"
"Immediately re-broadcast unconfirmed wallet transactions to all peers.\n"
"Intended only for testing; the wallet code periodically re-broadcasts\n"
"automatically.\n"
"Returns array of transaction ids that were re-broadcast.\n"
);
LOCK2(cs_main, pwalletMain->cs_wallet);
std::vector<uint256> txids = pwalletMain->ResendWalletTransactionsBefore(GetTime());
Array result;
BOOST_FOREACH(const uint256& txid, txids)
{
result.push_back(txid.ToString());
}
return result;
}
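
Editorial note: with server=1 enabled, this new call can be exercised from the command line; an illustrative invocation (the output is the JSON array of rebroadcast txids described in the help text above):

    bitcoin-cli resendwallettransactions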

View file

@ -1119,15 +1119,17 @@ void CWallet::ReacceptWalletTransactions()
}
}
void CWalletTx::RelayWalletTransaction()
bool CWalletTx::RelayWalletTransaction()
{
if (!IsCoinBase())
{
if (GetDepthInMainChain() == 0) {
LogPrintf("Relaying wtx %s\n", GetHash().ToString());
RelayTransaction((CTransaction)*this);
return true;
}
}
return false;
}
set<uint256> CWalletTx::GetConflicts() const
@ -1329,7 +1331,31 @@ bool CWalletTx::IsTrusted() const
return true;
}
void CWallet::ResendWalletTransactions()
std::vector<uint256> CWallet::ResendWalletTransactionsBefore(int64_t nTime)
{
std::vector<uint256> result;
LOCK(cs_wallet);
// Sort them in chronological order
multimap<unsigned int, CWalletTx*> mapSorted;
BOOST_FOREACH(PAIRTYPE(const uint256, CWalletTx)& item, mapWallet)
{
CWalletTx& wtx = item.second;
// Don't rebroadcast if newer than nTime:
if (wtx.nTimeReceived > nTime)
continue;
mapSorted.insert(make_pair(wtx.nTimeReceived, &wtx));
}
BOOST_FOREACH(PAIRTYPE(const unsigned int, CWalletTx*)& item, mapSorted)
{
CWalletTx& wtx = *item.second;
if (wtx.RelayWalletTransaction())
result.push_back(wtx.GetHash());
}
return result;
}
void CWallet::ResendWalletTransactions(int64_t nBestBlockTime)
{
// Do this infrequently and randomly to avoid giving away
// that these are our transactions.
@ -1341,30 +1367,15 @@ void CWallet::ResendWalletTransactions()
return;
// Only do it if there's been a new block since last time
if (nTimeBestReceived < nLastResend)
if (nBestBlockTime < nLastResend)
return;
nLastResend = GetTime();
// Rebroadcast any of our txes that aren't in a block yet
LogPrintf("ResendWalletTransactions()\n");
{
LOCK(cs_wallet);
// Sort them in chronological order
multimap<unsigned int, CWalletTx*> mapSorted;
BOOST_FOREACH(PAIRTYPE(const uint256, CWalletTx)& item, mapWallet)
{
CWalletTx& wtx = item.second;
// Don't rebroadcast until it's had plenty of time that
// it should have gotten in already by now.
if (nTimeBestReceived - (int64_t)wtx.nTimeReceived > 5 * 60)
mapSorted.insert(make_pair(wtx.nTimeReceived, &wtx));
}
BOOST_FOREACH(PAIRTYPE(const unsigned int, CWalletTx*)& item, mapSorted)
{
CWalletTx& wtx = *item.second;
wtx.RelayWalletTransaction();
}
}
// Rebroadcast unconfirmed txes older than 5 minutes before the last
// block was found:
std::vector<uint256> relayed = ResendWalletTransactionsBefore(nBestBlockTime-5*60);
if (!relayed.empty())
LogPrintf("%s: rebroadcast %u unconfirmed transactions\n", __func__, relayed.size());
}
/** @} */ // end of mapWallet
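
Editorial note: the refactor above splits the logic in two: ResendWalletTransactionsBefore(nTime) relays every unconfirmed wallet transaction received at or before the cutoff, and ResendWalletTransactions(nBestBlockTime) calls it with a cutoff five minutes before the last block. A self-contained sketch of that cutoff selection over plain data — the types and names below are simplified stand-ins, not the real wallet structures:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Each entry: receive time -> txid, kept sorted chronologically like mapSorted.
    std::vector<std::string> ResendBeforeSketch(
        const std::multimap<int64_t, std::string>& receivedTxs, int64_t nTime)
    {
        std::vector<std::string> result;
        for (const auto& entry : receivedTxs) {
            if (entry.first > nTime)
                continue; // received after the cutoff: skip, like the wallet loop above
            result.push_back(entry.second); // the wallet would relay the transaction here
        }
        return result;
    }

    int main()
    {
        int64_t nBestBlockTime = 1428000000; // illustrative timestamp of the last block
        std::multimap<int64_t, std::string> received = {
            {nBestBlockTime - 600, "old-unconfirmed-tx"},
            {nBestBlockTime - 60,  "recent-tx"},
        };

        // Mirror ResendWalletTransactions: only rebroadcast what was received
        // more than five minutes before the last block was found.
        for (const auto& txid : ResendBeforeSketch(received, nBestBlockTime - 5 * 60))
            std::cout << "rebroadcast " << txid << "\n";
        return 0;
    }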

View file

@ -381,7 +381,7 @@ public:
int64_t GetTxTime() const;
int GetRequestCount() const;
void RelayWalletTransaction();
bool RelayWalletTransaction();
std::set<uint256> GetConflicts() const;
};
@ -614,7 +614,8 @@ public:
void EraseFromWallet(const uint256 &hash);
int ScanForWalletTransactions(CBlockIndex* pindexStart, bool fUpdate = false);
void ReacceptWalletTransactions();
void ResendWalletTransactions();
void ResendWalletTransactions(int64_t nBestBlockTime);
std::vector<uint256> ResendWalletTransactionsBefore(int64_t nTime);
CAmount GetBalance() const;
CAmount GetUnconfirmedBalance() const;
CAmount GetImmatureBalance() const;