9ec75c5 Add a locking mechanism to IsInitialBlockDownload to ensure it never goes from false to true. (Ruben Dario Ponticelli)
a2d0fc6 Fix IsInitialBlockDownload which was broken by headers first. (Ruben Dario Ponticelli)
There are 3 pieces of data that are maintained on disk: the actual block
and undo data, the block index (which can refer to positions on disk),
and the chainstate (which refers to the best block hash).
Earlier, there was no guarantee that blocks were written to disk before
block index entries referring to them were written. This commit introduces
dirty flags for block index data, and delays writing entries until the actual
block data is flushed.
With this stricter ordering in writes, it is now safe to not always flush
after every block, so there is no need for the IsInitialBlockDownload()
check there - instead we just write whenever enough time has passed or
the cache size grows too large. Updating the wallet's best known block is
also delayed until this is done; otherwise the wallet may end up referring to
an unknown block.
In addition, only do a write inside the block processing loop if necessary
(because of cache size exceeded). Otherwise, move the writing to a point
after processing is done, after relaying.
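For illustration, a minimal sketch of the resulting flush policy (the function name and thresholds here are made up, not the actual implementation):

    #include <cstddef>
    #include <ctime>

    // Illustrative thresholds; the real values and names differ.
    static const std::time_t FLUSH_INTERVAL = 60 * 60;      // flush at least hourly
    static const std::size_t MAX_CACHE_BYTES = 100u << 20;  // rough cache budget

    // Decide whether dirty block index entries and the chainstate should be
    // written now, instead of unconditionally after every block.
    bool ShouldFlushState(std::time_t now, std::time_t lastFlush, std::size_t cacheBytes)
    {
        if (cacheBytes > MAX_CACHE_BYTES) return true;     // cache grew too large
        if (now - lastFlush > FLUSH_INTERVAL) return true; // enough time has passed
        return false;
    }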
As in a real-world situation, a safe-mode test should also be visible in the
UI. A safe-mode test is mostly relevant for developers, so it
should not be overwritten by a warning about a pre-release test build.
Previously, AcceptBlockHeader did not check the header (in particular
PoW). This made the client accept invalid-PoW-headers from peers in
headers-first sync.
Previously transactions were only tested against the
STANDARD_SCRIPT_VERIFY_FLAGS prior to mempool acceptance, so any bugs in
those flags that allowed actually-invalid transactions to pass would
result in allowing invalid transactions into the mempool. Fortunately
there is a second check in CreateNewBlock() that would prevent those
transactions from being mined, resulting in an invalid block, however
this could still be exploited as a DoS attack.
This is a simplified re-do of closed pull #3088.
This patch eliminates the use of centralized web services for discovering
the node's addresses for advertisement, which was problematic for both
privacy and reliability.
The Bitcoin protocol already allows your peers to tell you what
IP they think you have, but this data isn't trustworthy since
they could lie. So the challenge is using it without creating a
DoS vector.
To accomplish this we adopt an approach similar to the one used
by P2Pool: If we're announcing and don't have a better address
discovered (e.g. via UPNP) or configured we just announce to
each peer the address that peer told us. Since peers could
already replace, forge, or drop our address messages this cannot
create a new vulnerability... but if even one of our peers is
giving us a good address we'll eventually make a useful
advertisement.
We also may randomly use the peer-provided address for the
daily rebroadcast even if we otherwise have a seemingly routable
address, just in case we've been misconfigured (e.g. by UPNP).
To avoid privacy problems, we only do these things if discovery
is enabled.
50b43fd Be a bit more verbose during -loadblock if we already have blocks (Matt Corallo)
8375e22 Fix -loadblock after shutdown during IBD (Matt Corallo)
4ead850 Fix for crash during block download (Matt Corallo)
1bea2bb Rename ProcessBlock to ProcessNewBlock to indicate change of behaviour, and document it (Luke Dashjr)
d29a291 Rename RPC_TRANSACTION_* errors to RPC_VERIFY_* and use RPC_VERIFY_ERROR for submitblock (Luke Dashjr)
f877aaa Bugfix: submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock, now that it no longer does the full block validity check (Luke Dashjr)
24e8896 Add CValidationInterface::BlockChecked notification (Luke Dashjr)
a873823 CAutoFile: Explicit Get() and remove unused methods (Wladimir J. van der Laan)
fef24ca Add IsNull() to class CAutoFile and remove operator ! (Ruben Dario Ponticeli)
Previous refactorings broke the ability to rebuild the chainstate by deleting the chainstate
directory, resulting in an incorrect "Incorrect or no genesis block found" error message. Fix
that.
Also, improve the performance of ActivateBestBlockStep by using the skiplist to only discover
a few potential blocks to connect at a time, instead of all blocks forever - as we likely bail
out after connecting a single one anyway.
Instead of skipping to the last reindexed block in each file (which could
jump over processed out-of-order blocks), just skip each already processed
block individually.
Remember out-of-order block headers along with disk positions. This is
likely the simplest and least-impact way to make -reindex work with
headers first.
Based on top of #4468.
Many changes:
* Do not use 'getblocks', but 'getheaders', and use it to build a headers tree.
* Blocks are fetched in parallel from all available outbound peers, using a
limited moving window. When one peer stalls the movement of the window, it is
disconnected.
* No more orphan blocks. At all. We only ever request a block for which we have
verified the headers, and store it to disk immediately. This means that a
disk-fill attack would require PoW.
* Require protocol version 31800 for every peer (released in December 2010).
* No more syncnode (we sync from everyone we can, though limited to 1 during
initial *headers* sync).
* Introduce some extra named constants, comments and asserts.
This adds a regtest-only, undocumented (for regression testing only)
command-line option -blockversion=N to set block.nVersion.
Adds to the "has the rest of the network upgraded to a
block.nVersion we don't understand" code so it calls
-alertnotify when 51 of the last 100 blocks are up-version.
But it only alerts once, not with every subsequent new, upversion
block.
And adds a forknotify.py regression test to make sure it works.
Tested using forknotify.py:
Before adding CAlert::Notify, get:
Assertion failed: -alertnotify did not warn of up-version blocks
Before adding code to only alert once:
Assertion failed: -alertnotify excessive warning of up-version blocks
After final code in this pull:
Tests successful
7c70438 Get rid of the dummy CCoinsViewCache constructor arg (Pieter Wuille)
ed27e53 Add coins_tests with a large randomized CCoinViewCache test. (Pieter Wuille)
058b08c Do not keep fully spent but unwritten CCoins entries cached. (Pieter Wuille)
c9d1a81 Get rid of CCoinsView's SetCoins and SetBestBlock. (Pieter Wuille)
f28aec0 Use ModifyCoins instead of mutable GetCoins. (Pieter Wuille)
e790c37 Replace SCRIPT_VERIFY_NOCACHE by flag directly to checker (Pieter Wuille)
5c1e798 Make signature cache optional (Pieter Wuille)
c7829ea Abstract out SignatureChecker (Pieter Wuille)
938bcce CAutoFile: make file private (Philip Kaufmann)
0c35486 CBufferedFile: add explicit close function (Philip Kaufmann)
c9fb27d CBufferedFile: convert into a non-refcounted RAII wrapper (Philip Kaufmann)
There is only one message passed to AbortNode() that makes sense to
translate to the user specifically: Disk space is low. For the others
show a generic message and refer to debug.log for details.
Reduces the number of confusing jargon translation messages.
- it now takes over the passed file descriptor and closes it in the
destructor
- this fixes a leak in LoadExternalBlockFile(), where an exception could
cause the file not to get closed
- disallow copies (like recently added for CAutoFile)
- make nType and nVersion private
4705902 Avoid introducing a virtual into CChainParams (Wladimir J. van der Laan)
5e2e7fc Suggested corrections on comments, variable names. Also new test case testing the PoW skip in UNITTEST. (SergioDemianLerner)
a25fd6b Switch testing framework from MAIN to new UNITTEST network (SergioDemianLerner)
f74fc9b Print input index when signature validation fails, to aid debugging. (Mark Friedenbach)
217a5c9 When transaction outputs exceed inputs, show the offending amounts so as to aid debugging. (Mark Friedenbach)
Move the txid duplicates check into BuildMerkleTree, where it can be done
much more efficiently (without needing to build a full txid set to detect
duplicates).
The previous version (using the std::set<uint256> to detect duplicates) was
also slightly too weak. A block mined with actual duplicate transactions
(which is invalid, due to the inputs of the duplicated transactions being
seen as double spends) would trigger the duplicates logic, resulting in the
block not being stored on disk, and rerequested. This change fixes that by
only triggering in the case of duplicated transactions that can actually
result in an identical merkle root.
Instead of storing CCoins entries directly in CCoinsMap, store a CCoinsCacheEntry
which additionally keeps track of whether a particular entry is:
* dirty: potentially different from its parent view.
* fresh: the parent view is known to not have a non-pruned version.
This allows us to skip non-dirty cache entries when pushing batches of changes up,
and to remove CCoins entries about transactions that are fully spent before the
parent cache learns about them.
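A rough sketch of the idea (simplified stand-in types; the real CCoinsCacheEntry and BatchWrite differ in detail):

    #include <cstdint>
    #include <map>

    // Simplified stand-ins for uint256 and CCoins.
    using TxId = uint64_t;
    struct Coins { bool fullySpent = false; /* outputs, height, ... */ };

    struct CacheEntry {
        Coins coins;
        bool dirty = false;  // potentially different from the parent view
        bool fresh = false;  // parent is known to not have an unpruned version
    };

    using CoinsMap = std::map<TxId, CacheEntry>;

    // Push queued modifications up to a parent map, skipping clean entries and
    // dropping entries the parent never needs to learn about.
    void BatchWriteSketch(CoinsMap& parent, CoinsMap& child)
    {
        for (auto it = child.begin(); it != child.end(); ++it) {
            if (!it->second.dirty)
                continue;                          // nothing changed: skip
            if (it->second.fresh && it->second.coins.fullySpent)
                continue;                          // created and spent entirely below the parent
            CacheEntry& dest = parent[it->first];
            dest.coins = it->second.coins;
            dest.dirty = true;                     // parent now differs from ITS parent
            // propagation of 'fresh' is omitted here for brevity
        }
        child.clear();
    }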
All direct modifications are now done through ModifyCoins, and BatchWrite is
used for pushing batches of queued modifications up, so we don't need the
low-level SetCoins and SetBestBlock anymore in the top-level CCoinsView class.
Replace the mutable non-copying GetCoins method with a ModifyCoins, which
returns an encapsulated iterator, so we can keep track of concurrent
modifications (as iterators can be invalidated by those) and run cleanup
code after a modification is finished.
This also removes the overloading of the 'GetCoins' name.
There is no reason to store thousands of orphan transactions;
normally an orphan's parents will either be broadcast or
mined reasonably quickly.
This pull drops the maximum number of orphans from 10,000 down
to 100, and adds a command-line option (-maxorphantx) that is
just like -maxorphanblocks to override the default.
Prevent denial-of-service attacks by banning
peers that send us invalid orphan transactions
and only storing orphan transactions given to
us by a peer while the peer is connected.
2c2cc5d Remove some unnecessary c_strs() in logging and the GUI (Philip Kaufmann)
f7d0a86 netbase: Use .data() instead of .c_str() on binary string (Wladimir J. van der Laan)
The efficient version of CCoinsViewCache::GetCoins only works for known-to-exist
cache entries, requiring a separate HaveCoins call beforehand. This is
inefficient as both perform a hashtable lookup.
Replace the non-mutable GetCoins with AccessCoins, which returns a potentially-NULL
pointer. This also decreases the overloading of GetCoins.
Also replace some copying (inefficient) GetCoins calls with equivalent AccessCoins,
decreasing the copying.
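A simplified illustration of the two access patterns (hypothetical Cache/Coins types standing in for CCoinsViewCache/CCoins):

    #include <cstdint>
    #include <map>

    using TxId = uint64_t;
    struct Coins { /* unspent outputs, height, ... */ };

    class Cache {
        std::map<TxId, Coins> entries;
    public:
        // Old pattern: HaveCoins() + GetCoins() -> two lookups plus a copy.
        bool HaveCoins(const TxId& id) const { return entries.count(id) != 0; }
        bool GetCoins(const TxId& id, Coins& out) const {
            auto it = entries.find(id);
            if (it == entries.end()) return false;
            out = it->second;                     // copies the entry
            return true;
        }
        // New pattern: one lookup, no copy; returns NULL when not present.
        const Coins* AccessCoins(const TxId& id) const {
            auto it = entries.find(id);
            return it == entries.end() ? nullptr : &it->second;
        }
    };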
Bypassing the main coins cache allows more thorough checking with the same
memory budget.
This has no effect on performance because everything ends up in the child
cache created by VerifyDB itself.
It has bugged me ever since #4675, which effectively reduced the
number of checked blocks to reduce peak memory usage.
- Pass the coinsview to use as argument to VerifyDB
- This also avoids that the first `pcoinsTip->Flush()` after VerifyDB
writes a large slew of unchanged coin records back to the database.
ad49c25 Split up util.cpp/h (Wladimir J. van der Laan)
f841aa2 Move `COIN` and `CENT` to core.h (Wladimir J. van der Laan)
6e5fd00 Move `*Version()` functions to version.h/cpp (Wladimir J. van der Laan)
b4aa769 Move `S_I*` constants and `MSG_NOSIGNAL` to compat.h (Wladimir J. van der Laan)
af8297c Move functions in wallet.h to implementation file (Wladimir J. van der Laan)
651480c move functions in main and net to implementation files (Wladimir J. van der Laan)
610a8c0 Move SetThreadPriority implementation to util.cpp instead of the header (Wladimir J. van der Laan)
f780e65 Remove unused function `ByteReverse` from util.h (Wladimir J. van der Laan)
121d6ad Remove unused `alignup` function from util.h (Wladimir J. van der Laan)
d1e26d4 Move CMedianFilter to timedata.cpp (Wladimir J. van der Laan)
Split up util.cpp/h into:
- string utilities (hex, base32, base64): no internal dependencies, no dependency on boost (apart from foreach)
- money utilities (parsemoney, formatmoney)
- time utilities (gettime*, sleep, format date)
- and the rest (logging, argument parsing, config file parsing)
The latter is basically the environment and OS handling,
and is stripped of all utility functions, so we may want to
rename it to something other than util.cpp/h for clarity (Matt suggested
osinterface).
Breaks dependency of sha256.cpp on all the things pulled in by util.
Due to growing coinsviewcaches, the memory usage with checklevel=3
(and standard settings for dbcache) could be up to 500MiB on a
64-bit system. This is about twice the peak during reindexing,
unnecessarily extending bitcoind's memory envelope.
This commit reduces the maximum total size of the caches used during
verification to just nCoinCacheSize, which should be the limit.
Remove the 'state' and 'exceptmask' from serialize.h's stream implementations,
as well as related methods.
As exceptmask always included 'failbit', and setstate was always called with
bits = failbit, all it did was immediately raise an exception. Get rid of
those variables, and replace the setstate with direct exception throwing
(which also removes some dead code).
As a result, good() is never reached after a failure (there are only 2
calls, one of which is in tests), and can just be replaced by !eof().
fail(), clear(n) and exceptions() are just never called. Delete them.
The only other method of logging remote addresses is via
-logips=1 -debug=net
which increases the logged activity by 100x or more.
Github-Pull: #4608
Amended-By: Wladimir J. van der Laan <laanwj@gmail.com>
Port over https://github.com/chronokings/huntercoin/pull/19 from
Huntercoin: This implements a new RPC command "getchaintips" that can be
used to find all currently active chain heads. This is similar to the
-printblocktree startup option, but it can be used without restarting
just via the RPC interface on a running daemon.
* Replace -benchmark (and the related fBenchmark) with a regular debug option, -debug=bench.
* Increase coverage and granularity of individual block processing steps.
* Add cumulative times.
First and foremost, this defaults to OFF.
This option lets a node consider such transactions non-standard,
meaning they will not be relayed or mined by default, but other miners
are free to mine these as usual.
4eedf4f make RandAddSeed() use OPENSSL_cleanse() (Philip Kaufmann)
6354935 move rand functions from util to new random.h/.cpp (Philip Kaufmann)
001a53d add GetRandBytes() as wrapper for RAND_bytes() (Philip Kaufmann)
This adds a -whitelist option to specify subnet ranges from which peers
that connect are whitelisted. In addition, there is a -whitebind option
which works like -bind, except peers connecting to it are also
whitelisted (allowing a separate listen port for trusted connections).
Being whitelisted has two effects (for now):
* They are immune to DoS disconnection/banning.
* Transactions they broadcast (which are valid) are always relayed,
even if they were already in the mempool. This means that a node
can function as a gateway for a local network, and that rebroadcasts
from the local network will work as expected.
Whitelisting replaces the magic exemption localhost had for DoS
disconnection (local addresses are still never banned, though), which
implied hidden service connects (from a localhost Tor node) were
incorrectly immune to DoS disconnection as well. This old
behaviour is removed for that reason, but can be restored by specifying
-whitelist=127.0.0.1 or -whitelist=::1. -whitebind
is safer to use in case non-trusted localhost connections are expected
(like hidden services).
- add a small wrapper in util around RAND_bytes() and replace with
GetRandBytes() in the code to log errors from calling RAND_bytes()
- remove OpenSSL header rand.h where no longer needed
75f51f2a introduced asynchronous processing for blocks, where reject messages
and DoS scoring could be applied outside of ProcessBlock, because block
validation may happen later.
However, some types of errors are still detected immediately (in particular,
CheckBlock violations), which need to be acted on after ProcessBlock returns.
The wallet now uses the mempool fee estimator with a new
command-line option: -txconfirmtarget (default: 1) instead
of using hard-coded fees or priorities.
A new bitcoind that hasn't seen enough transactions to estimate
will fall back to the old hard-coded minimum priority or
transaction fee.
-paytxfee option overrides -txconfirmtarget.
Relaying and mining code isn't changed.
For Qt, the coin control dialog now uses priority estimates to
label transaction priority (instead of hard-coded constants);
unspent outputs were consistently labeled with a much higher
priority than is justified by the free transactions actually
being accepted into blocks.
I did not implement any GUI for setting -txconfirmtarget; I would
suggest getting rid of the "Pay transaction fee" GUI and replace
it with either "target number of confirmations" or maybe
a "faster confirmation <--> lower fee" slider or select box.
The original comment forgets to account for the script push, which will
need an OP_PUSHDATA2 + 2 bytes for the 513 script bytes.
props davecgh
fixes #4224
Allows network wallets and other clients to see transactions that respend
a prevout already spent in an unconfirmed transaction in this node's mempool.
Knowledge of an attempted double-spend is of interest to recipients of the
first spend. In some cases, it will allow these recipients to withhold
goods or services upon being alerted of a double-spend that deprives them
of payment.
As before, respends are not added to the mempool.
Anti-Denial-of-Service-Attack provisions:
- Use a bloom filter to relay only one respend per mempool prevout
- Rate-limit respend relays to a default of 100 thousand bytes/minute
- Define tx2.IsEquivalentTo(tx1): equality when scriptSigs are not considered
- Do not relay these equivalent transactions
Remove an unused variable declaration in txmempool.cpp.
Relax the AreInputsStandard() tests for P2SH transactions --
allow any Script in a P2SH transaction to be relayed/mined,
as long as it has 15 or fewer signature operations.
Rationale: https://gist.github.com/gavinandresen/88be40c141bc67acb247
I don't have an easy way to test this, but the code changes are
straightforward and I've updated the AreInputsStandard unit tests.
... instead of after 30 minutes of no sending, for latency measurement
and keep-alive. Also, disconnect if no reply arrives within 20 minutes,
instead of 90 minutes of inactivity (for peers supporting the 'pong' message).
f0a83fc Use Params().NetworkID() instead of TestNet() from the payment protocol (jtimon)
2871889 net.h was using std namespace through chainparams.h included in protocol.h (jtimon)
c8c52de Replace virtual methods with static attributes, chainparams.h depends on protocol.h instead of the other way around (jtimon)
a3d946e Get rid of TestNet() (jtimon)
6fc0fa6 Add RPCisTestNet chain parameter (jtimon)
cfeb823 Add RequireStandard chain parameter (jtimon)
21913a9 Add AllowMinDifficultyBlocks chain parameter (jtimon)
d754f34 Move majority constants to chainparams (jtimon)
8d26721 Get rid of RegTest() (jtimon)
cb9bd83 Add DefaultCheckMemPool chain parameter (jtimon)
2595b9a Add DefaultMinerThreads chain parameter (jtimon)
bfa9a1a Add MineBlocksOnDemand chain parameter (jtimon)
1712adb Add MiningRequiresPeers chain parameter (jtimon)
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets with at most 100 entries each, recording the
fees-per-kilobyte paid by those transactions.
(separate buckets are kept for transactions that confirmed because
they are high-priority)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm in X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50'th-highest-fee-entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150'th-highest-fee-entry, etc.
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.
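A toy illustration of the median-over-all-buckets estimate described above (function and container names are mine; the real estimator also tracks priority buckets, caps per-block contributions, and persists its state):

    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <vector>

    // buckets[i] holds fee-per-kilobyte samples of transactions that took
    // i+1 blocks to confirm (i = 0..N-1, N = 25 above). nBlocksTarget >= 1.
    int64_t EstimateFeeSketch(const std::vector<std::vector<int64_t>>& buckets,
                              int nBlocksTarget)
    {
        std::vector<int64_t> samples;
        for (const auto& b : buckets)
            samples.insert(samples.end(), b.begin(), b.end());
        if (samples.empty()) return -1;           // not enough data yet

        std::sort(samples.begin(), samples.end(), std::greater<int64_t>());

        // With 25 full buckets of 100 entries: target 1 -> 50th highest fee,
        // target 2 -> 150th, etc. Clamp so a sparse history still answers.
        std::size_t rank = static_cast<std::size_t>(nBlocksTarget) * 100 - 50;
        if (rank > samples.size()) rank = samples.size();
        return samples[rank - 1];
    }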
Use CFeeRate instead of an int64_t for quantities that are
fee-per-size.
Helps prevent unit-conversion mismatches between the wallet,
relaying, and mining code.
f40dbee remove CPubKey::VerifyCompact( ) which is never used (Kamil Domanski)
28b6c1d remove GetMedianTime( ) which is never used (Kamil Domanski)
5bd4adc remove LookupHostNumeric( ) which is never used (Kamil Domanski)
595f691 remove LogException( ) which is never used (Kamil Domanski)
f4057cb remove CTransaction::IsNewerThan which is never used (Kamil Domanski)
0e31e56 remove CWallet::AddReserveKey which is never used (Kamil Domanski)
8c93bf4 LoadBlockIndexDB(): Require block db reindex if any blk*.dat files are missing. (Ashley Holman)
7a0e84d ProcessGetData(): abort if a block file is missing from disk (Ashley Holman)
397668e Deduplicate uint* comparison operator logic (Pieter Wuille)
df9eb5e Move {Get,Set}Compact from bignum to uint256 (Pieter Wuille)
a703150 Add multiplication and division to uint160/uint256 (Pieter Wuille)
4d480c8 Exception instead of assigning 0 in case of wrong vector length (Pieter Wuille)
eb2cbd7 Deduplicate shared code between uint160 and uint256 (Pieter Wuille)
787ee0c Check redeemScript size does not exceed 520 byte limit (Peter Todd)
4d79098 Increase IsStandard() scriptSig length (Peter Todd)
f80cffa Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails (Peter Todd)
6380180 Add rejection of non-null CHECKMULTISIG dummy values (Peter Todd)
29c1749 Let tx (in)valid tests use any SCRIPT_VERIFY flag (Peter Todd)
68f7d1d Create (MANDATORY|STANDARD)_SCRIPT_VERIFY_FLAGS constants (Peter Todd)
Removes the limits on number of pubkeys for P2SH CHECKMULTISIG outputs.
Previously with the 500 byte scriptSig limit there were odd restrictions
where even a 1-of-12 P2SH could be spent in a standard transaction (1),
yet multisig scriptPubKeys requiring more signatures quickly ran out of
scriptSig space.
From a "stuff-data-in-the-blockchain" point of view not much has changed
as with the prior commit now only allowing the dummy value to be null
the newly allowed scriptSig space can only be used for signatures. In
any case, just using more outputs is trivial and doesn't cost much.
1) See 779b519480d8c5346de6e635119c7ee772e97ec872240c45e558f582a37b4b73
Mined by BTC Guild.
Size specifiers are no longer needed now that we use typesafe tinyformat
for string formatting, instead of the system's sprintf.
No functional changes.
This continues the work in #3735.
Generally useless information. Only updates on connect time, not after
that. Peers can easily lie and the median filter is not effective in
preventing that.
In the past it was used for progress display in the GUI but
`CheckPoints::guessVerificationProgress` provides a better way that is now used.
It was too easy to mislead it. Peers do lie about it in practice, see issue #4065.
From the RPC, `getpeerinfo` gives the peer raw values, which are more
useful.
- In wallet and GUI code LOCK cs_main as well as cs_wallet when
necessary
- In main.cpp SendMessages move the TRY_LOCK(cs_main) up, to encompass the call
to IsInitialBlockDownload.
- Make ActivateBestChain, AddToBlockIndex, IsInitialBlockDownload,
InitBlockIndex acquire the cs_main lock
Fixes #3997
All functions that use chainActive but do not acquire the cs_main
lock themselves need to be called with the cs_main lock held.
This commit adds assertions to all externally callable functions
that use chainActive or chainMostWork.
This will flag usages when built with -DDEBUG_LOCKORDER.
PrintBlockTree output was broken starting from e010af70.
Everything appears on one line.
PrintWallet() added the newline after a block, but this functionality
was removed and no newline was added.
Seemingly, no one noticed. Add a newline after the block information
to fix this.
This resolves a case in which a mismatch could be used to bloat up the
mempool by sending transactions that pay enough fee to relay, but not
to be mined, with the default policies.
Amend to d5f1e72. It turns out that BerkeleyDB was including inttypes.h
indirectly, so we cannot fix this with just macros.
Trivial commit: apply the following script to all .cpp and .h files:
# Middle
sed -i 's/"PRIx64"/x/g' "$1"
sed -i 's/"PRIu64"/u/g' "$1"
sed -i 's/"PRId64"/d/g' "$1"
# Initial
sed -i 's/PRIx64"/"x/g' "$1"
sed -i 's/PRIu64"/"u/g' "$1"
sed -i 's/PRId64"/"d/g' "$1"
# Trailing
sed -i 's/"PRIx64/x"/g' "$1"
sed -i 's/"PRIu64/u"/g' "$1"
sed -i 's/"PRId64/d"/g' "$1"
After this commit, `git grep` for PRI.64 should turn up nothing except
the defines in util.h.
As the tinyformat-based formatting system (introduced in b77dfdc) is
type-safe, no special format characters are needed to specify sizes.
Tinyformat can support (ignore) the C99 prefixes such as "ll" but
chokes on MSVC's inttypes.h defines prefixes such as "I64X". So don't
include inttypes.h and define our own for compatibility.
(an alternative would be to sweep the entire codebase using sed -i to
get rid of the size specifiers but this has less diff impact)
5770254 Copyright header updates s/2013/2014 on files whose last git commit was done in 2014. contrib/devtools/fix-copyright-headers.py script to be able to perform this maintenance task with ease during the rest of the year, every year. Modifications to contrib/devtools/README.md to document what fix-copyright-headers.py does. (gubatron)
Extend CMerkleTx::GetDepthInMainChain with the concept of
a "conflicted" transaction-- a transaction generated by the wallet
that is not in the main chain or in the mempool, and, therefore,
will likely never be confirmed.
GetDepthInMainChain() now returns -1 for conflicted transactions
(0 for unconfirmed-but-in-the-mempool, and >=1 for confirmed).
This makes getbalance, getbalance '*', and listunspent all agree when there are
mutated transactions in the wallet.
Before:
listunspent: one 49BTC output
getbalance: 96 BTC (change counted twice)
getbalance '*': 46 BTC (spends counted twice)
After: all agree, 49 BTC available to spend.
Keep track of which block is being requested (and to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
In case the total number of orphan blocks in memory exceeds a limit
(currently set to 750), a random orphan block (which is not
depended on by another orphan block) is dropped. This means it will
need to be downloaded again, but it won't consume memory until then.
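A sketch of that eviction rule (container names and types are illustrative; maintaining setOrphanParents when orphans are added is assumed and not shown):

    #include <cstdint>
    #include <cstdlib>
    #include <iterator>
    #include <map>
    #include <set>

    using BlockHash = uint64_t;
    struct OrphanBlock { BlockHash hashPrev; /* serialized block ... */ };

    static const std::size_t MAX_ORPHAN_BLOCKS = 750;
    std::map<BlockHash, OrphanBlock> mapOrphans;   // orphan hash -> block
    std::multiset<BlockHash> setOrphanParents;     // hashPrev of every orphan

    // Drop random orphans that no other orphan builds on until we are back
    // under the limit; they can always be re-downloaded later.
    void LimitOrphanBlocksSketch()
    {
        while (mapOrphans.size() > MAX_ORPHAN_BLOCKS) {
            auto it = mapOrphans.begin();
            std::advance(it, std::rand() % mapOrphans.size()); // random candidate
            if (setOrphanParents.count(it->first) != 0)
                continue;                          // another orphan depends on it: retry
            auto pit = setOrphanParents.find(it->second.hashPrev);
            if (pit != setOrphanParents.end()) setOrphanParents.erase(pit);
            mapOrphans.erase(it);
        }
    }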
c117d9e Support for error messages and a few more rejection reasons (Luke Dashjr)
14e7ffc Use standard BIP 22 rejection reasons where applicable (Luke Dashjr)
This changes the block processing logic from "try to atomically switch
to a new block" to a continuous "(dis)connect a block, aiming for the
assumed best chain".
This means the smallest atomic operations on the chainstate become
individual block connections or disconnections, instead of entire
reorganizations. It may mean that we try to reorganize to one block,
fail, and rereorganize again to the old block. This is slower, but
doesn't require unbounded RAM.
It also means that a ConnectBlock which fails may be no longer called
from the ProcessBlock which knows which node sent it. To deal with that,
a mapBlockSource is kept, and invalid blocks cause asynchronous "reject"
messages and banning (if necessary).
Previously CreateNewBlock() didn't take into account the fact that
IsFinalTx() without any arguments tests if the transaction is considered
final in the *current* block, when both those functions really needed to
know if the transaction would be final in the *next* block.
Additionally the UI had a similar misunderstanding.
Also adds some basic tests to check that CreateNewBlock() is in fact
mining nLockTime-using transactions correctly.
Thanks to Wladimir J. van der Laan for rebase.
After the tinyformat switch sprintf() family functions support passing
actual std::string objects.
Remove unnecessary c_str calls (236 of them) in logging and formatting.
012ca1c LoadWallet: acquire cs_wallet mutex before clearing setKeyPool (Wladimir J. van der Laan)
9569168 Document cs_wallet lock and add AssertLockHeld (Wladimir J. van der Laan)
19a5676 Use mutex pointer instead of name for AssertLockHeld (Wladimir J. van der Laan)
There were quite a few places where assert() was used with side effects,
making operation with NDEBUG non-functional. This commit fixes all the
cases I know about, but also adds an #error on NDEBUG because the code
is untested without assertions and may still have vulnerabilities if
used without assert.
This dead code can be resurrected from git history if
transaction replacement is ever implemented. Keeping
dead code in the source is a bad idea, because it implies
it was tested and worked at some point, which is not true.
The last fee drop was by 5x (from 50k satoshis to 10k satoshis)
in the 0.8.2 release which was about 6 months ago.
The current fee (assuming a $500 exchange rate) is about 5 US
cents. The new fee after this patch is 0.5 cents.
Miners who prefer the higher fees are obviously still able to
use the command line flags to override this setting. Miners who
choose to create smaller blocks will select the highest-fee paying
transactions anyway.
This would hopefully be the last manual adjustment ever required
before floating fees become normal.
Use misc methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
caca6aa Make some globals in main non-public. (Pieter Wuille)
85eb2ce Do not use the redundant BestInvalidWork record in the block database. (Pieter Wuille)
As block index entries have a flag for marking invalid blocks, the
'best invalid work' information can be derived from there. In addition,
remove the global from main.h
- re-work -debug help message text
- make -debug log every debugging information again (even all categories)
- remove unneeded fDebug checks in front of LogPrint()/qDebug(), as that
check is done in LogPrintf() when category is != NULL (true for all
LogPrint() calls)
- remove fDebug ONLY in code which is NOT performance-critical
- harmonize addrman category name
- deprecate -debugnet usage, should be used via -debug=net and remove the
corresponding global
Instead of explicitly testing for the presence of any output, and
dealing with this case specially, just interpret it as an empty
CCoins.
The case previously caught using the HaveCoins check is now handled
by the generic outs != outsBlock test.
This required some code movement (what was CWalletTx::AcceptToMemoryPool
doing in main?), and adding a few explicit includes that used to be
implicit through init.h.
INIT_PROTO_VERSION is the initial version; after a successful version/verack it is increased to the negotiated version.
MIN_PEER_PROTO_VERSION could be a different value to disconnect from peers older than a specified version.
Changes the response to the 'mempool' command so that if
the memory pool has more than MAX_INV_SZ transactions (50,000)
it will respond with multiple 'inv' messages.
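The batching itself is straightforward, roughly as below (names are illustrative; 'pushInv' stands in for queuing one 'inv' message to the requesting peer):

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <vector>

    using TxId = uint64_t;
    static const std::size_t MAX_INV_SZ = 50000;  // protocol limit per 'inv' message

    // Answer a 'mempool' request, splitting the reply across several 'inv'
    // messages when the pool holds more than MAX_INV_SZ transactions.
    void ReplyMempoolSketch(const std::vector<TxId>& poolTxIds,
                            const std::function<void(const std::vector<TxId>&)>& pushInv)
    {
        std::vector<TxId> batch;
        batch.reserve(MAX_INV_SZ);
        for (const TxId& id : poolTxIds) {
            batch.push_back(id);
            if (batch.size() == MAX_INV_SZ) {
                pushInv(batch);
                batch.clear();
            }
        }
        if (!batch.empty())
            pushInv(batch);
    }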
SendMessages() tries to acquire a cs_main lock now, but this isn't necessary
for much of its functionality. Move those parts out of the locked section,
so they can always be performed, and we hold cs_main for a shorter time.
This removes a few unused CBlockLocator methods, and moves the
construction and fork-finding logic to CChain (which can do these
more efficiently, as it has a height-indexable chain available).
It also makes CBlockLocator independent from the validation code.
971bb3e Added ping time measurement. New RPC "ping" command to request ping. Implemented "pong" message handler. New "pingtime" field in getpeerinfo, to provide results to user. New "pingwait" field, to show pings still in flight, to better see newly lagging peers. (Josh Lehan)
Changes the maximum size of a free transaction that will be created
from 10,000 bytes to 1,000 bytes.
The idea behind this change is to make the free transaction area
available to a greater number of people; with the default 27K-per-block,
just three very-large very-high-priority transactions could fill the space.
Remove the (relay/mempool) rule that all outputs of free transactions
must be greater than 0.01 XBT. Dust spam is now taken care of by making
dusty outputs non-standard.
New RPC "ping" command to request ping.
Implemented "pong" message handler.
New "pingtime" field in getpeerinfo, to provide results to user.
New "pingwait" field, to show pings still in flight, to better see newly lagging peers.
- rename URL into URI in paymentserver where correct
- add some missing Qt-coding-stuff in paymentserver
- change QSpinBox to QLineEdit as base for BitcoinAmountField in .ui files
(as this is the result when converting the BAF back into base)
- remove some c_str() and replace with QString::fromStdString()
- remove several new-lines
- remove unneeded spaces
- indentation fixes
This also makes negative transaction versions non-standard.
This avoids an issue triggered in block 256818 where transactions with
negative version numbers were incorrectly serialized into the UTXO set.
On restart nodes detect the inconsistency and refuse to start so long as
a block with these transactions is inside the self-consistency check
window, logging "coin database inconsistencies found". The software
recommends reindexing, but reindexing does not correct the problem.
This should be fixed by changing the chainstate serialization, but
working around it seems harmless for now because the version is not
used by any network rule currently.
A patch-free workaround is to start with -checklevel=2, which skips
the consistency checks, but the IsStandard change is important for
miners in order to protect unpatched nodes.
There have been several incidents where mainnet experimentation with
raw transactions resulted in insane fees. This is hard to prevent
in the raw transaction api because the inputs may not be known.
Since sending doesn't work if the inputs aren't known, we can catch
it there.
This rejects fees greater than 10000 * nMinRelayTxFee or 1 BTC with the
defaults, and can be overridden with a bool at the RPC.
We're not seeing large reorgs that would justify waiting a large
amount past the rule required maturity, and the extra three
hours is just a nuisance. Take one more block to at least give
the 100th block time to propagate.
This reduces a peer's ability to attack network resources by
using a full bloom filter, but without reducing the usability
of bloom filters. It sets a default match everything filter
for peers and it generalizes a prior optimization to
cover more cases.
The length of vectors, maps, sets, etc are serialized using
Write/ReadCompactSize -- which, unfortunately, do not use a
unique encoding.
So deserializing and then re-serializing a transaction (for example)
can give you different bits than you started with. That doesn't
cause any problems that we are aware of, but it is exactly the type
of subtle mismatch that can lead to exploits.
With this pull, reading a non-canonical CompactSize throws an
exception, which means nodes will ignore 'tx' or 'block' or
other messages that are not properly encoded.
Please check my logic... but this change is safe with respect to
causing a network split. Old clients that receive
non-canonically-encoded transactions or blocks deserialize
them into CTransaction/CBlock structures in memory, and then
re-serialize them before relaying them to peers.
And please check my logic with respect to causing a blockchain
split: there are no CompactSize fields in the block header, so
the block hash is always canonical. The merkle root in the block
header is computed on a vector<CTransaction>, so
any non-canonical encoding of the transactions in 'tx' or 'block'
messages is erased as they are read into memory by old clients,
and does not affect the block hash. And, as noted above, old
clients re-serialize (with canonical encoding) 'tx' and 'block'
messages before relaying to peers.
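A sketch of the canonical-decoding rule (reading from a plain byte buffer rather than a CDataStream; the function name is mine):

    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // Read a CompactSize from data starting at pos, enforcing that the shortest
    // possible encoding was used; throws on a non-canonical or truncated value.
    uint64_t ReadCompactSizeCanonical(const std::vector<uint8_t>& data, std::size_t& pos)
    {
        auto need = [&](std::size_t n) {
            if (pos + n > data.size()) throw std::runtime_error("truncated CompactSize");
        };
        need(1);
        uint8_t first = data[pos++];
        if (first < 253) return first;                 // 0..252: single byte

        uint64_t value = 0;
        std::size_t width = (first == 253) ? 2 : (first == 254) ? 4 : 8;
        need(width);
        for (std::size_t i = 0; i < width; ++i)        // little-endian payload
            value |= uint64_t(data[pos++]) << (8 * i);

        if (first == 253 && value < 253)          throw std::runtime_error("non-canonical CompactSize");
        if (first == 254 && value <= 0xFFFFu)     throw std::runtime_error("non-canonical CompactSize");
        if (first == 255 && value <= 0xFFFFFFFFu) throw std::runtime_error("non-canonical CompactSize");
        return value;
    }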
Orphan transactions were stored as a CDataStream pointer;
this changes the mapOrphanTransactions data structures to
store orphans as a CTransaction.
This also fixes CVE-2013-4627 by always re-serializing
transactions before relaying them.
The new class is accessed via the Params() method and holds
most things that vary between main, test and regtest networks.
The regtest mode has two purposes, one is to run the
bitcoind/bitcoinj comparison tool which compares two separate
implementations of the Bitcoin protocol looking for divergence.
The other is that when run, you get a local node which can mine
a single block instantly, which is highly convenient for testing
apps during development as there's no need to wait 10 minutes for
a block on the testnet.
This (nearly) doesn't change fee rules at all:
* To make it into the fee transaction area, the dPriority comparison
changed from < to <=
* We now just ignore transactions > MAX_BLOCK_SIZE/4 instead of
doing some calculations to require increasingly large fees as
size increases.
Removed AreInputsStandard from CTransaction, made it a regular function in main.
Moved CTransaction::GetOutputFor to CCoinsViewCache.
Moved GetLegacySigOpCount and GetP2SHSigOpCount out of CTransaction into regular functions in main.
Moved GetValueIn and HaveInputs from CTransaction into CCoinsViewCache.
Moved AllowFree, ClientCheckInputs, CheckInputs, UpdateCoins, and CheckTransaction out of CTransaction and into main.
Moved IsStandard and IsFinal out of CTransaction and put them in main as IsStandardTx and IsFinalTx. Moved GetValueOut out of CTransaction into main. Moved CTxIn, CTxOut, and CTransaction into core.
Added minimum fee parameter to CTxOut::IsDust() temporarily until CTransaction is moved to core.h so that CTxOut needn't know about CTransaction.
- explicitly set the default of all GetBoolArg() calls
- rework getarg_test.cpp and util_tests.cpp to cover this change
- some indentation fixes
- move macdockiconhandler.h include in bitcoin.cpp to the "our headers"
section
Write bestblock records in wallets:
* Every 20160 blocks synced, no matter what (before: none during IBD)
* Every 144 blocks after IBD (before: for every block, slow)
* When creating a new wallet
* At shutdown
This should result in far fewer spurious rescans.
Remove the pnext pointer in CBlockIndex, and replace it with a
vBlockIndexByHeight vector (no effect on memory usage). pnext can
now be replaced by vBlockIndexByHeight[nHeight+1], and
FindBlockByHeight becomes constant-time.
This also means the entire mapBlockIndex structure and the block
index entries in it become purely blocktree-related data, and
independent from the currently active chain, potentially allowing
them to be protected by separate mutexes in the future.
Every block index entry currently requires a separately-allocated
CBigNum. By replacing them with uint256, it's just 32 bytes extra
in CBlockIndex itself.
This should save us a few megabytes in RAM, and less allocation
overhead.
This introduces the concept of the 'sync node', which is the one we
asked for missing blocks. In case the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
As these were not updated when 'backporting' the 225430 checkpoint
into head.
Additionally, report verification progress in debug.log, and
tweak the sigcheck-verification-speed-factor a bit.
Two reasons for this change:
1. Need to always use boost::thread's sleep, even on Windows, so the
sleeps can be interrupted (prior code used Windows' built-in Sleep).
2. I always forgot what units the old Sleep took.
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, ported the BitcoinMiner threads to use its
own boost::thread_group.
There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential for causing large amounts of data to be
sent. In case several hundreds of blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
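Roughly the idea, with simplified types (the real code keeps the pending items on the CNode and checks the actual send buffer size):

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <functional>

    struct Inv { uint64_t hash; int type; };   // simplified inventory entry

    struct NodeState {
        std::deque<Inv> pendingGetData;        // getdata items not yet served
    };

    // Serve queued getdata items until the peer's send buffer is full; whatever
    // is left stays on the node and is picked up on a later processing pass.
    // 'sendItem' stands in for pushing one block/tx message and returns its size.
    void ProcessGetDataSketch(NodeState& node, std::size_t& sendBufferBytes,
                              std::size_t maxSendBuffer,
                              const std::function<std::size_t(const Inv&)>& sendItem)
    {
        while (!node.pendingGetData.empty() && sendBufferBytes < maxSendBuffer) {
            sendBufferBytes += sendItem(node.pendingGetData.front());
            node.pendingGetData.pop_front();
        }
    }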
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
Replaces CNode::vRecv buffer with a vector of CNetMessage's. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected, if message larger than MAX_SIZE
or if CMessageHeader deserialization fails (latter is impossible?).
Previously, code would simply eat garbage data all day long.
* Socket is disconnected, if we fail to find pchMessageStart.
We do not search through garbage, to find pchMessageStart. Each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
- remove an unneeded MODAL flag, as MSG_ERROR sets MODAL
- re-order an if-clause in main to have bool checks before a function call
- fix some log messages that used wrong function names
- make a log message use a correct ellipsis
- remove some unneeded spaces, brackets and line-breaks
- fix style for adding files in the Qt project
Extremely large transactions with lots of inputs can cost the network
almost as much to process as they cost the sender in fees.
We would never create transactions larger than 100K big; this change
makes transactions larger than 100K non-standard, so they are not
relayed/mined by default. This is most important for miners that might
create blocks larger than 250K big, who could be vulnerable to a
make-your-blocks-so-expensive-to-verify-they-get-orphaned attack.
At least one service that accepted zero-confirmation transactions
was vulnerable because an attacker could send a transaction
with a lock time far in the future, and then have plenty of time in
which to get a double-spend mined (perhaps from a miner who wasn't
on the network when the first transaction was broadcast).
That is a variation on the "Finney attack". We still don't
recommend anybody accept 0-confirmation transactions as final
payment for anything. This change keeps non-final transactions
from appearing in the wallet, and, assuming most of the network
accepts this change, will prevent them from being relayed until
they are final.
* Pass txid's to CCoinsView functions by reference instead of by value
* Add a method to swap CCoins, and use it in some places to avoid an
allocating copy + destruct.
* Optimize CCoinsViewCache::FetchCoins to do only a single search
through the backing map.
This actually simplifies some SPV code, as they can keep track of
a filtered block and its txn before accepting both in one step.
The previous argument was that SPV nodes should handle the txn the
same as any other free txn and then mark them as connected to a
block when they get the filtered block itself. However, it now
appears that SPV nodes will need to put in more effort to verify
loose txn than they would to verify txn in blocks, thus making it
more appropriate to send the txn after the filtered block.
By specifying -txindex when initializing the database, a txid-to-diskpos
index is maintained in the blktree database. This database is used to
help answering getrawtransaction() RPC queries, when enabled.
Changing the -txindex value requires a -reindex; the client will abort
at startup if the database and the specified -txindex mismatch.
Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Fixes issue #2178 : attacker could penny-flood with invalid-signature
transactions to deduce which addresses belonged to your node.
I'm committing this early for code review; I still need to write up
a test plan.
Executive summary of fix: check all transactions received from the network
for penny-flood rate-limiting before adding to the memory pool. But do NOT
ratelimit transactions added to the memory pool:
- because of blockchain reorgs
- stored in the wallet and added at startup
- sent from the GUI or one of the send* RPC commands (CWallet::CommitTransaction)
The limit-free-transactions code really should be a method on CNode, with
counters per-peer. But that is a bigger change for another day.
Client (SPV) mode never got implemented entirely, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures to be encapsulated in a class implementing a standard interface,
and then writing an alternate implementation with SPV semantics.
Since block validation happens in parallel, multiple threads may be
accessing the signature cache simultaneously. To prevent contention:
* Turn the signature cache lock into a shared mutex
* Make reading from the cache only acquire a shared lock
* Let block validations not store their results in the cache
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After every processed transaction, its signature actions are
pushed to a CScriptCheckQueue, which maintains a queue and some
synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
this queue,
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
- some users reported it as weird that the estimated block count could be
lower than our own node's block number (which is indeed true and not good)
- this pull adds a new default behaviour, which displays our own block
number as estimated block number, if own >= est. block count
- the pull raises space for nodes block counts in cPeerBlockCounts to 8 to
be more accurate
- also removes a redundant setNumBlocks() call in RPCConsole and moves
initialisation of numBlocksAtStartup in ClientModel, where it belongs
-checklevel gets a new meaning:
0: verify blocks can be read from disk (like before)
1: verify (contextless) block validity (like before)
2: verify undo files can be read and have good checksums
3: verify coin database is consistent with the last few blocks
(close to level 6 before)
4: verify all validity rules of the last few blocks
Level 3 is the new default, as it's reasonably fast. As level 3 and
4 are implemented using an in-memory rollback of the database, they
are limited to as many blocks as possible without exceeding the
limits set by -dbcache. The default of -dbcache=25 allows for some
150-200 blocks to be rolled back.
In case an error is found, the application quits with a message
instructing the user to restart with -reindex. Better instructions,
and automatic recovery (when possible) or automatic reindexing are
left as future work.
When the coin database is out of date with the block database, the
best block in it is automatically switched to. This reconnection
process can take time, so allow it to be interrupted.
This also stops block connection as soon as shutdown is requested,
leading to a faster shutdown.
This problem is like earth (mostly harmless). After/during a
-reindex, it means the statistics about the last block file
reported in debug.log are always of blk00000.dat instead of the
last file. Apart from that, it means a few more database entries
need to be read when finding a file to append to the first time.
- even if we are allowed to fail pre-allocating, it's better to check
for sufficient space before calling AllocateFileRange() and if we
are out of disk space return with error()
- the above change allows us to remove the CheckDiskSpace() check
in CBlock::AcceptBlock()
In case a reorganisation fails, the internal state could become
inconsistent (memory only). Previously, a cache per block connect
or disconnect action was used, so blocks could not be applied in
a partial way. Extend this to a cache for the entire reorganisation,
making it atomic entirely. This also simplifies the code a bit.
- fix ThreadSafeMessageBox always displaying the error icon
- allow to specify MSG_ERROR / MSG_WARNING or MSG_INFORMATION without a
custom caption / title
- allow to specify CClientUIInterface::ICON_ERROR / ICON_WARNING and
ICON_INFORMATION (which is default) as message box icon
- remove CClientUIInterface::OK from ThreadSafeMessageBox-calls, as
the OK button will be set as default, if none is specified
- prepend "Bitcoin - " to used captions
- rename BitcoinGUI::error() -> BitcoinGUI::message() and add function
documentation
- change all style parameters and enum flags to unsigned
- update code to use that new API
- update Client- and WalletModel to use new BitcoinGUI::message() and
rename the classes error() method into message()
- include the possibility to supply the wanted icon for messages from
Client- and WalletModel via "style" parameter
When a transaction A is in the memory pool, while a transaction B
(which shares an input with A) gets accepted into a block, A was
kept forever in the memory pool.
This commit adds a CTxMemPool::removeConflicts method, which
removes transactions that conflict with a given transaction, and
all their children.
This results in less transactions in the memory pool, and faster
construction of new blocks.
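In outline, the bookkeeping looks roughly like this (simplified types and container layout; the real CTxMemPool tracks spent outpoints in mapNextTx with COutPoint/CInPoint and holds its own lock):

    #include <cstdint>
    #include <map>
    #include <vector>

    using TxId = uint64_t;
    struct OutPoint {
        TxId txid; uint32_t n;
        bool operator<(const OutPoint& o) const {
            return txid < o.txid || (txid == o.txid && n < o.n);
        }
    };
    struct Tx { TxId id; std::vector<OutPoint> vin; uint32_t nOutputs; };

    std::map<TxId, Tx> mapTx;            // the pool's transactions
    std::map<OutPoint, TxId> mapNextTx;  // which pool tx spends each outpoint

    // Remove a transaction and, first, everything in the pool spending one of
    // its outputs (tx is taken by value so its map entry can be erased safely).
    void RemoveRecursiveSketch(Tx tx)
    {
        for (uint32_t n = 0; n < tx.nOutputs; ++n) {
            auto it = mapNextTx.find(OutPoint{tx.id, n});
            if (it == mapNextTx.end()) continue;
            auto child = mapTx.find(it->second);
            if (child != mapTx.end()) RemoveRecursiveSketch(child->second);
        }
        for (const OutPoint& in : tx.vin) mapNextTx.erase(in);
        mapTx.erase(tx.id);
    }

    // Called when 'confirmed' was included in a block: evict any pool
    // transaction spending one of the same inputs, and all its descendants.
    void RemoveConflictsSketch(const Tx& confirmed)
    {
        for (const OutPoint& in : confirmed.vin) {
            auto it = mapNextTx.find(in);
            if (it == mapNextTx.end()) continue;
            auto conflict = mapTx.find(it->second);
            if (conflict != mapTx.end() && conflict->second.id != confirmed.id)
                RemoveRecursiveSketch(conflict->second);
        }
    }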
These flags select features to be enabled/disabled during script
evaluation/checking, instead of several booleans passed along.
Currently these flags are defined:
* SCRIPT_VERIFY_P2SH: enable BIP16-style subscript evaluation
* SCRIPT_VERIFY_STRICTENC: enforce strict adherence to pubkey/sig encoding standards.
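The pattern is an ordinary bit-flag set passed down to the script evaluator, along these lines (bit values illustrative):

    // Illustrative flag definitions; each feature maps onto one bit.
    enum ScriptVerifyFlags : unsigned int {
        SCRIPT_VERIFY_NONE      = 0,
        SCRIPT_VERIFY_P2SH      = (1U << 0),  // evaluate BIP16-style subscripts
        SCRIPT_VERIFY_STRICTENC = (1U << 1),  // strict pubkey/signature encodings
    };

    // Callers combine the flags they want instead of passing several booleans:
    //   VerifyScript(scriptSig, scriptPubKey, tx, nIn,
    //                SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_STRICTENC);
    inline bool HasFlag(unsigned int flags, ScriptVerifyFlags f)
    {
        return (flags & f) != 0;
    }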
- remove an unwanted ";" at the end of the ~CCoinsView() destructor
- in FindBlockPos() and FindUndoPos() only call fclose() if the file is open
- fix an error string in the CBlockUndo class
Flushes the blktree/ and coins/ databases, and reindexes the
block chain files, as if their contents were loaded via -loadblock.
Based on earlier work by Jeff Garzik.
Implements #1948
- Add macro `CLIENT_VERSION_IS_RELEASE` to clientversion.h
- When running a prerelease (the above macro is `false`):
- In UI, show an orange warning bar at the top. This will be used for other
warnings (and alerts) as well, instead of the status bar.
- For `bitcoind`, show the warning in the "errors" field in `getinfo`
response.
CreateNewBlock was reading pindexBest at the start before taking the lock
so it was possible to have the block content not match the prevheader
and this can also trigger a newly added assert in ConnectBlock.
I noticed this during a code review after twobitcoins reported that ab91bf39
(BIP30 for all blocks) could cause a null dereference on a modified node
that mined during the IBD, or on testnet when it reached heights 91842 and
91880 due to CreateNewBlock calling ConnectBlock with pindex->phashBlock NULL.
- remove uiInterface.InitMessage() calls from ThreadImport(), as Qt
doesn't like them getting called outside of its main thread and because the
thread will continue to run after the GUI was loaded
Split off CBlockTreeDB and CCoinsViewDB into txdb-*.{cpp,h} files,
implemented by either LevelDB or BDB.
Based on code from earlier commits by Mike Hearn in his leveldb
branch.
To prevent excessive copying of CCoins in and out of the CCoinsView
implementations, introduce a GetCoins() function in CCoinsViewCache
which returns a direct reference. The block validation and connection
logic is updated to require caching CCoinsViews, and exploits the
GetCoins() function heavily.
Use CBlock's vMerkleTree to cache transaction hashes, and pass them
along as argument in more function calls. During initial block download,
this results in every transaction's hash being computed only once.
During the initial block download (or -loadblock), delay connection
of new blocks a bit, and perform them in a single action. This reduces
the load on the database engine, as subsequent blocks often update an
earlier block's transaction already.
This switches bitcoin's transaction/block verification logic to use a
"coin database", which contains all unredeemed transaction output scripts,
amounts and heights.
The name ultraprune comes from the fact that instead of a full transaction
index, we only (need to) keep an index with unspent outputs. For now, the
blocks themselves are kept as usual, although they are only necessary for
serving, rescanning and reorganizing.
The basic datastructures are CCoins (representing the coins of a single
transaction), and CCoinsView (representing a state of the coins database).
There are several implementations for CCoinsView. A dummy, one backed by
the coins database (coins.dat), one backed by the memory pool, and one
that adds a cache on top of it. FetchInputs, ConnectInputs, ConnectBlock,
DisconnectBlock, ... now operate on a generic CCoinsView.
The block switching logic now builds a single cached CCoinsView with
changes to be committed to the database before any changes are made.
This means no uncommitted changes are ever read from the database, and
should ease the transition to another database layer which does not
support transactions (but does support atomic writes), like LevelDB.
For the getrawtransaction() RPC call, access to a txid-to-disk index
would be preferable. As this index is not necessary or even useful
for any other part of the implementation, it is not provided. Instead,
getrawtransaction() uses the coin database to find the block height,
and then scans that block to find the requested transaction. This is
slow, but should suffice for debug purposes.
Introduce a AllocateFileRange() function in util, which wipes or
at least allocates a given range of a file. It can be overridden
by more efficient OS-dependent versions if necessary.
Block and undo files are now allocated in chunks of 16 and 1 MiB,
respectively.
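A sketch of the portable fallback such a helper can use, writing zero-filled chunks to reserve the range; OS-specific variants (e.g. posix_fallocate) can replace it, and the function name here is illustrative:

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative fallback: wipe/reserve [offset, offset+length) with zeros.
void AllocateFileRangeSketch(FILE *file, unsigned int offset, unsigned int length)
{
    static const char buf[65536] = {};
    fseek(file, offset, SEEK_SET);
    while (length > 0) {
        unsigned int now = std::min<unsigned int>(sizeof(buf), length);
        fwrite(buf, 1, now, file);
        length -= now;
    }
}
```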
Change the block storage layer again, this time storing blocks across
multiple files, tracked by txindex.dat database entries. The file
format is exactly the same as the earlier blk00001.dat, but with
smaller files (128 MiB for now).
The database entries track how many bytes each block file already
uses, how many blocks are in it, which range of heights is present
and which range of dates.
Special serializer/deserializer for amount values. It is optimized for
values which have few non-zero digits in decimal representation. Most
amounts currently in the txout set take only 1 or 2 bytes to
represent.
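The rough idea behind the encoding, as a hedged sketch: factor out trailing decimal zeros and store them as an exponent, so round satoshi amounts become small integers that a variable-length integer serializer stores in one or two bytes. The exact packing below is illustrative:

```cpp
#include <stdint.h>

// Illustrative compression: 0 maps to 0; otherwise strip up to 9 trailing
// decimal zeros into an exponent e, keep the remaining digits, and pack
// everything into one integer. A round amount like 10,000,000 satoshi
// (0.1 BTC) ends up as a very small value.
uint64_t CompressAmountSketch(uint64_t n)
{
    if (n == 0)
        return 0;
    int e = 0;
    while ((n % 10) == 0 && e < 9) {   // factor out powers of ten
        n /= 10;
        e++;
    }
    if (e < 9) {
        int d = n % 10;                // last non-zero digit, 1..9
        n /= 10;
        return 1 + (n * 9 + d - 1) * 10 + e;
    }
    return 1 + (n - 1) * 10 + 9;
}
```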
These commands are a leftover from send-to-IP transactions, which were
removed a long time ago.
Also removes CNode::mapRequests and CNode::PushRequests, as these were
only used for the mentioned commands.
Matt pointed out some time ago that there existed a minor DOS
attack where a node in its initial block download could be wedged
by an overwrite attack in a fork created between checkpoints, before
the time when BIP30 was enforced. Now that the BIP30 activation timestamp
is irreversibly in the past, the check can be more aggressive and apply to
all blocks except the two historic violations.
Hard-code a special nId=max int alert, to be broadcast if the
alert key is ever compromised. It applies to all versions, never
expires, cancels all previous alerts, and has a fixed message:
URGENT: Alert key compromised, upgrade required
Variations are not allowed (ignored), so an attacker with
the private key cannot broadcast empty-message nId=max alerts.
This fixes two alert system vulnerabilities found by
Sergio Lerner; you could send peers unlimited numbers
of invalid alert messages to try to fill up their
debug.log with messages and/or keep their CPU busy
checking signatures.
Fixed by disconnecting/banning peers if they send 10 or more
bad (invalid/expired/cancelled) alerts.
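A hedged sketch of the mitigation's shape, assuming it is routed through the existing peer-misbehavior machinery; the function and member names here are era-style assumptions, not copied code:

```cpp
// Illustrative: each bad (invalid/expired/cancelled) alert costs the sending
// peer misbehavior points; after 10 such alerts the peer reaches the ban
// threshold and is disconnected.
void HandleAlertMessageSketch(CNode *pfrom, const CAlert &alert)
{
    if (alert.ProcessAlert()) {
        // valid alert: relay it to our other peers
    } else {
        pfrom->Misbehaving(10);   // 10 points x 10 bad alerts = ban threshold
    }
}
```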
If 950 of the last 1,000 blocks are nVersion=2, reject nVersion=1
(or zero, but no bitcoin release has created block.nVersion=0) blocks
-- 75 of the last 100 on testnet3.
This rule is being put in place now so that we don't have to go
through another "express support" process to get what we really
want, which is for every single new block to include the block height
in the coinbase.
"Version 2" blocks are blocks that have nVersion=2 and
have the block height as the first item in their coinbase.
Block-height-in-the-coinbase is strictly enforced when
version=2 blocks are a supermajority in the block chain
(750 of the last 1,000 blocks on main net, 51 of 100 for
testnet). This does not affect old clients/miners at all,
which will continue producing nVersion=1 blocks, and
which will continue to be valid.
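A sketch of the supermajority check this implies, counting versions over a trailing window of block index entries; the function name is illustrative, with thresholds as given above:

```cpp
// Illustrative: walk back over the last nToCheck blocks starting at pstart
// and count how many have at least minVersion; reaching nRequired means the
// rule kicks in (e.g. nRequired=950, nToCheck=1000 to reject nVersion=1
// blocks on mainnet; 75 of 100 on testnet3).
bool IsSuperMajoritySketch(int minVersion, const CBlockIndex *pstart,
                           unsigned int nRequired, unsigned int nToCheck)
{
    unsigned int nFound = 0;
    for (unsigned int i = 0; i < nToCheck && nFound < nRequired && pstart != NULL; i++) {
        if (pstart->nVersion >= minVersion)
            nFound++;
        pstart = pstart->pprev;
    }
    return nFound >= nRequired;
}
```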
- If the requested height is in the first half of the chain, start at the genesis block and walk up, rather than starting from the tip
- Cache the last lookup and use it as a reference point if it is close to the next request, so that linear lookups stay fast (a sketch follows below)
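A sketch of what the lookup can look like with both tweaks applied; the globals (pindexGenesisBlock, pindexBest, nBestHeight) and the pprev/pnext links reflect the code of that era, while the cached pointer name is illustrative:

```cpp
#include <cstdlib>

// Illustrative: pick the closer of genesis/tip/last-result as the starting
// point, then walk the chain to the requested height and remember the result.
// Assumes 0 <= nHeight <= nBestHeight.
static CBlockIndex *pindexLastLookup = NULL;   // cache of the previous result

CBlockIndex *FindBlockByHeightSketch(int nHeight)
{
    // Start from whichever end of the chain is closer...
    CBlockIndex *pindex = (nHeight < nBestHeight / 2) ? pindexGenesisBlock : pindexBest;
    // ...unless the previous lookup is closer still (fast for in-order scans).
    if (pindexLastLookup &&
        abs(nHeight - pindexLastLookup->nHeight) < abs(nHeight - pindex->nHeight))
        pindex = pindexLastLookup;
    while (pindex->nHeight > nHeight)
        pindex = pindex->pprev;
    while (pindex->nHeight < nHeight)
        pindex = pindex->pnext;
    pindexLastLookup = pindex;
    return pindex;
}
```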
- ensure warnings always start with "Warning:" and that the first
character after ":" is uppercase
- ensure the first sentence in warnings ends with an "!"
- remove unneeded spaces from Warning-strings
- add missing Warning-string translation
- remove a "\n" and replace with untranslatable "<br><br>"
The new bytes are based on "11" to appeal to Gavin's 11 fetish.
This breaks existing testnet3 nodes as the blockchain files
are also versioned. To upgrade a node, delete everything
except wallet.dat from your .bitcoin/testnet3 folder.
Modify CreateNewBlock so that instead of processing all transactions
in priority order, it fills the first 27,000 bytes of the block in
priority order and then processes the rest in fee-per-kilobyte
order.
This is the first, minimal step towards a better fee-handling
system for both miners and end-users; this patch should be easy
to backport to old versions of Bitcoin, and accomplishes the
most important goal -- allowing users to "buy their way in" to blocks
using transaction fees.
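A hedged sketch of the selection order described above; the real CreateNewBlock also tracks dependencies, sigop limits and fee policy, and every name here is illustrative:

```cpp
#include <algorithm>
#include <vector>

// Illustrative per-transaction data the selection cares about.
struct TxCandidate {
    double dPriority;     // coin-age based priority
    double dFeePerKB;     // fee per kilobyte
    unsigned int nSize;   // serialized size in bytes
};

static const unsigned int PRIORITY_SPACE = 27000;  // bytes filled in priority order first

std::vector<TxCandidate> SelectForBlockSketch(std::vector<TxCandidate> pool,
                                              unsigned int nMaxBlockSize)
{
    std::vector<TxCandidate> vSelected;
    unsigned int nBlockSize = 0;
    bool fSortedByFee = false;

    // Phase 1 ordering: highest priority first.
    std::sort(pool.begin(), pool.end(),
              [](const TxCandidate &a, const TxCandidate &b) { return a.dPriority > b.dPriority; });

    for (size_t i = 0; i < pool.size(); i++) {
        // Once the 27K priority area is used up, re-sort the remaining
        // candidates by fee-per-kilobyte and keep filling the block.
        if (!fSortedByFee && nBlockSize >= PRIORITY_SPACE) {
            std::sort(pool.begin() + i, pool.end(),
                      [](const TxCandidate &a, const TxCandidate &b) { return a.dFeePerKB > b.dFeePerKB; });
            fSortedByFee = true;
        }
        if (nBlockSize + pool[i].nSize > nMaxBlockSize)
            continue;
        vSelected.push_back(pool[i]);
        nBlockSize += pool[i].nSize;
    }
    return vSelected;
}
```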
* Fix wrong thread name for wallet *relocking* thread
- Was named the unlocking thread
* Use consistent naming
Signed-off-by: Giel van Schijndel <me@mortis.eu>
NOTE: These thread names are visible in gdb when using 'info threads'.
Additionally both 'top' and 'ps' show these names *unless* told to
display the command-line instead of task name.
Signed-off-by: Giel van Schijndel <me@mortis.eu>
Adds CBlock::CURRENT_VERSION and CTransaction::CURRENT_VERSION
constants, and makes non-CURRENT_VERSION transactions nonstandard.
This will help make future upgrades smoother.
Prior to this change, each TX typically generated 3+ debug messages:
askfor tx 8644cc97480ba1537214 0
sending getdata: tx 8644cc97480ba1537214
askfor tx 8644cc97480ba1537214 1339640761000000
askfor tx 8644cc97480ba1537214 1339640881000000
CTxMemPool::accept() : accepted 8644cc9748 (poolsz 6857)
After this change, there is only one message for each valid TX received:
CTxMemPool::accept() : accepted 22a73c5d8c (poolsz 42)
and two messages for each orphan tx received:
ERROR: FetchInputs() : 673dc195aa mempool Tx prev not found 1e439346fc
stored orphan tx 673dc195aa (mapsz 19)
The -debugnet option, or its superset -debug, will restore the full debug
output.
- Signals now go directly from the core to WalletModel/ClientModel.
- WalletModel subscribes to signals on CWallet: Prepares for multi-wallet support, by no longer assuming an implicit global wallet.
- Gets rid of noui.cpp; the few lines that were left are merged into init.cpp
- Rename wxXXX message flags to MF_XXX, to make them UI indifferent.
- ThreadSafeMessageBox no longer returns the value `4`, which was never used; it has been converted to void.
Gets rid of `MainFrameRepaint` in favor of specific update functions that tell the UI exactly what changed.
This improves the efficiency of various handlers. Also fixes problems with mined transactions not showing up until restart.
The following notifications were added:
- `NotifyBlocksChanged`: Block chain changed.
- `NotifyKeyStoreStatusChanged`: Wallet status (encrypted, locked) changed.
- `NotifyAddressBookChanged`: Address book entry changed.
- `NotifyTransactionChanged`: Wallet transaction added, removed or updated.
- `NotifyNumConnectionsChanged`: Number of connections changed.
- `NotifyAlertChanged`: New, updated or cancelled alert. As this finally makes it possible for the UI to know when a new alert has arrived, it can be shown as an OS notification.
These notifications could also be useful for RPC clients. However, currently, they are ignored in bitcoind (in noui.cpp).
Also brings back polling with a timer for numBlocks in ClientModel. This value updates so frequently during initial block download that the number of signals clogs the UI thread and causes heavy CPU usage. And after initial block download, the value changes so rarely that a delay of half a second until the UI updates is unnoticeable.
If Reorganize() fails, then its caller, CBlock::SetBestChain(),
will call TxnAbort().
Redundant TxnAbort() calls are harmless. The second will return an
error return value, with no other side effects. TxnAbort() return
values are generally never checked. The impact is nil.
Loop over all inputs doing inexpensive validity checks first,
and then loop over them a second time doing expensive signature
checks. This helps prevent possible CPU exhaustion attacks
where an attacker tries to make a victim waste time checking
signatures for invalid transactions.
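Schematically, the resulting structure looks something like the sketch below; it is not the actual connect-inputs code, and the function name is illustrative:

```cpp
// Illustrative two-pass structure: all cheap per-input checks run before any
// signature is verified, so obviously-invalid transactions cannot make us
// burn CPU on ECDSA.
bool CheckInputsSketch(const CTransaction &tx, CCoinsViewCache &view)
{
    // Pass 1: inexpensive checks (inputs exist, not already spent, coinbase
    // maturity, value ranges). Reject here without touching any crypto.
    for (unsigned int i = 0; i < tx.vin.size(); i++) {
        if (!view.HaveCoins(tx.vin[i].prevout.hash))
            return false;           // missing or fully spent input
        // ... other cheap validity checks ...
    }

    // Pass 2: expensive signature verification, only reached once every
    // input has already passed the cheap checks above.
    for (unsigned int i = 0; i < tx.vin.size(); i++) {
        // if (!VerifySignature(...)) return false;   // ECDSA check
    }
    return true;
}
```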
Remove orphan transactions from memory once
all of their parent transactions have been received
and the orphan is still not valid.
Thanks to Sergio Demian Lerner for suggesting this fix.
Old log message:
storing orphan tx df2244f6bc
New log message:
storing orphan tx df2244f6bc (mapsz 51)
Also trim a bit of trailing whitespace in main.cpp.
Immediately issue a "getblocks", instead of a "getdata" (which will
trigger the relevant "inv" to be sent anyway), and only do so when
the previous set of invs led us into a known and attached part of
the block tree.
This prevents an undefined operation in main.cpp, when shifting the hash value
left by 32 bits.
Shifting a signed int left into the sign bit is undefined in C++11.
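A generic example of the problem and the usual fix, not the exact expression that was changed:

```cpp
#include <stdint.h>

uint64_t CombineWords(int hi, uint32_t lo)
{
    // Undefined behaviour: "hi << 32" shifts a 32-bit signed int by its full
    // width, and shifting a signed value into the sign bit is undefined too.
    // uint64_t bad = (hi << 32) | lo;

    // Well-defined: convert to a wide unsigned type before shifting.
    return ((uint64_t)(uint32_t)hi << 32) | (uint64_t)lo;
}
```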
Introduce a boolean variable for each "network" (ipv4, ipv6, tor, i2p),
and track whether we are likely to be able to connect to it. Addresses in
"addr" messages outside of our network get limited relaying and are not
stored in addrman.
This will make bitcoin relay valid routable IPv6 addresses, and when
USE_IPV6 is enabled, listen on IPv6 interfaces and attempt connections
to IPv6 addresses.
FetchInputs already logs failures internally. This commit also makes the
logging consistent with the other FetchInputs call sites.
Prior to this commit, two log lines were logged for one condition:
ERROR: FetchInputs() : de15fde415 mempool Tx prev not found a2c75da227
ERROR: CTxMemPool::accept() : FetchInputs failed de15fde415
After this commit, only one line is logged:
ERROR: FetchInputs() : e0507ab2c7 mempool Tx prev not found 9a620262cd
Previously, a single TX would trigger two log lines in quick succession,
addUnchecked(): size 152
CTxMemPool::accept() : accepted c4cfdd48b7
After this change, only one log line is used:
CTxMemPool::accept() : accepted 98885e65db (poolsz 26)