Amend to d5f1e72. It turns out that BerkeleyDB was including inttypes.h
indirectly, so we cannot fix this with just macros.
Trivial commit: apply the following script to all .cpp and .h files:
# Middle
sed -i 's/"PRIx64"/x/g' "$1"
sed -i 's/"PRIu64"/u/g' "$1"
sed -i 's/"PRId64"/d/g' "$1"
# Initial
sed -i 's/PRIx64"/"x/g' "$1"
sed -i 's/PRIu64"/"u/g' "$1"
sed -i 's/PRId64"/"d/g' "$1"
# Trailing
sed -i 's/"PRIx64/x"/g' "$1"
sed -i 's/"PRIu64/u"/g' "$1"
sed -i 's/"PRId64/d"/g' "$1"
After this commit, `git grep` for PRI.64 should turn up nothing except
the defines in util.h.
Keep track of which block is being requested (and to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
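This is roughly the shape of the per-peer bookkeeping it implies. A toy sketch only: the real bookkeeping lives in the networking code in main.cpp, and the names, container, and per-peer limit value below are illustrative assumptions.
```
#include <chrono>
#include <deque>
#include <string>

// Per-peer record of outstanding block requests (illustrative, not the
// actual data structures).
struct BlockInFlight {
    std::string hash;                                  // block we asked this peer for
    std::chrono::steady_clock::time_point requested;   // when we asked for it
};

struct PeerDownloadState {
    static const size_t MAX_BLOCKS_IN_FLIGHT = 16;     // per-peer cap (value is an assumption)
    std::deque<BlockInFlight> inFlight;

    // Only request more blocks from this peer while it is under the cap.
    bool CanRequestMore() const { return inFlight.size() < MAX_BLOCKS_IN_FLIGHT; }

    // A download counts as stalled when the oldest outstanding request has
    // been pending longer than the timeout; the caller would then disconnect.
    bool IsStalled(std::chrono::seconds timeout) const {
        return !inFlight.empty() &&
               std::chrono::steady_clock::now() - inFlight.front().requested > timeout;
    }
};
```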
Previously CreateNewBlock() didn't take into account the fact that
IsFinalTx() without any arguments tests if the transaction is considered
final in the *current* block, when both those functions really needed to
know if the transaction would be final in the *next* block.
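For clarity, a sketch of the distinction; the signature below is an approximation of the function at the time, not a verbatim copy.
```
#include <cstdint>

class CTransaction;  // from the codebase

// With the default arguments this answers "is the transaction final in the
// *current* block?". Block creation (and the UI) need the *next* block
// instead, i.e. roughly IsFinalTx(tx, pindexPrev->nHeight + 1, pblock->nTime)
// rather than the bare IsFinalTx(tx).
bool IsFinalTx(const CTransaction& tx, int nBlockHeight = 0, int64_t nBlockTime = 0);
```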
Additionally the UI had a similar misunderstanding.
Also adds some basic tests to check that CreateNewBlock() is in fact
mining nLockTime-using transactions correctly.
Thanks to Wladimir J. van der Laan for rebase.
Unit tests would fail if compiled with -DDEBUG_LOCKORDER (AssertLockHeld()
would fail; AssertLockHeld() relies on the DEBUG_LOCKORDER code to keep
track of locks held).
Fixed by LOCK'ing the wallet mutex in the unit tests that manipulate the
wallet.
Unit tests for uint256.h. The file uint160_tests.cpp is no longer
needed. The ad-hoc tests which were in uint256.h are also no longer
needed. The new tests achieve 100% coverage.
Instead, use an exception object to check whether the string returned by what() on the raised exception matches the string returned by what() on the expected exception instance.
This way, we do not need to list all the different possible explanatory strings for different platforms in the test code, which keeps it simple. (The idea is by Cory Fields.)
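A minimal sketch of the approach, assuming Boost.Test's BOOST_CHECK_EXCEPTION; the helper and the throwing stand-in below are illustrative, not the actual test code.
```
#include <boost/test/unit_test.hpp>
#include <cstring>
#include <ios>

// Compare what() of the caught exception against what() of an expected
// instance, so any platform-specific suffix is appended to both sides.
static bool SameWhat(const std::ios_base::failure& caught,
                     const std::ios_base::failure& expected)
{
    return std::strcmp(caught.what(), expected.what()) == 0;
}

// Stand-in for code such as ReadCompactSize() that throws on bad input.
static void ThrowNonCanonical()
{
    throw std::ios_base::failure("non-canonical ReadCompactSize()");
}

BOOST_AUTO_TEST_CASE(noncanonical_sketch)
{
    const std::ios_base::failure expected("non-canonical ReadCompactSize()");
    BOOST_CHECK_EXCEPTION(ThrowNonCanonical(), std::ios_base::failure,
        [&expected](const std::ios_base::failure& caught) {
            return SameWhat(caught, expected);
        });
}
```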
Before the fix, there were 6 errors such as:
serialize_tests.cpp:77: error in "noncanonical": incorrect exception std::ios_base::failure is caught
It turns out that ex.what() returns the following string instead of "non-canonical ReadCompactSize()":
"non-canonical ReadCompactSize(): unspecified iostream_category error"
After the fix, the unit tests passed.
The tests were run using Apple LLVM v5.0 on OS X 10.9; the errors happened because different compilers produce different error messages.
Output of g++ --version on my development environment:
```
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix
```
`-logtodebugger` is a strange, obscure, WIN32-only (mostly MSVC) thing.
Let's clean up the options a bit and get rid of it.
test_bitcoin was using fLogToDebugger as a way to prevent logging to
debug.log. For this, add a boolean (not exposed as option) fLogToDebugLog that
defaults to true and is disabled in the tests.
Use a fixed script instead of a CReserveKey from the wallet.
This does not affect the functionality or result of the tests as they never
check the state of the wallet in the first place.
Remove unnecessary dependencies for bitcoin-cli
(leveldb, berkeleydb, wallet, RPC server)
Build system changes:
- split libbitcoin.a into libbitcoin_common.a, libbitcoin_server.a and
libbitcoin_cli.a
Code changes (movement only):
- split up HelpMessage into HelpMessage in init.cpp and HelpMessageCli
in rpcclient.cpp
- move uiInterface from init.cpp to util.cpp
Split bitcoinrpc up into
- rpcserver: bitcoind RPC server
- rpcclient: bitcoin-cli RPC client
- rpcprotocol: shared common HTTP/JSON-RPC protocol code
One step towards making bitcoin-cli independent from the rest
of the code, and thus a smaller executable that doesn't have to
be linked against leveldb.
This commit only does code movement, there are no functional changes.
The last fee drop was by 5x (from 50k satoshis to 10k satoshis)
in the 0.8.2 release which was about 6 months ago.
The current fee is (assuming a $500 exchange rate) about 5 US cents.
The new fee after this patch is 0.5 cents.
Miners who prefer the higher fees are obviously still able to
use the command line flags to override this setting. Miners who
choose to create smaller blocks will select the highest-fee paying
transactions anyway.
This would hopefully be the last manual adjustment ever required
before floating fees become normal.
I regenerated the alert test data; now alerts are tested
against a protocol version way above the current protocol
version.
So we won't have to regenerate them every time we bump
PROTOCOL_VERSION in the future.
Use miscellaneous methods to avoid unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
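As an illustration of the stdint.h/inttypes.h replacements (the function, variable, and format string are examples only):
```
#include <cstdio>
#include <stdint.h>    // int64_t etc. instead of the old "typedef long long int64"
#include <inttypes.h>  // PRId64 / PRIu64 / PRIx64 instead of the util.h PRI64d / PRI64u / PRI64x

void PrintHeight(int64_t nHeight)
{
    // Old style:  printf("height=%"PRI64d"\n", nHeight);
    std::printf("height=%" PRId64 "\n", nHeight);
}
```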
0056095 Show short scriptPubKeys correctly (Peter Todd)
22de68d Relay OP_RETURN TxOut as standard transaction type (Peter Todd)
Signed-off-by: Gavin Andresen <gavinandresen@gmail.com>
Changed CDataStream::GetAndClear() to use the most obvious get-and-clear
instead of a tricky swap().
Added a unit test for CDataStream insert/erase/GetAndClear.
Note: GetAndClear() is not performance critical, it is used only
by the send-a-message-to-the-network code. Bug was not noticed
before now because the send-a-message code never erased from the
stream.
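A toy illustration of the straightforward get-and-clear; the real class is CDataStream with an internal read position, reduced here to a byte vector plus an offset.
```
#include <vector>

// Simplified stand-in for CDataStream, just enough to show the idea.
struct ToyDataStream {
    std::vector<char> vch;
    size_t nReadPos = 0;

    // Append the unread remainder to 'out', then empty the stream. Unlike the
    // old swap()-based trick, this stays correct even after bytes have been
    // erased or consumed from the front of the stream.
    void GetAndClear(std::vector<char>& out) {
        out.insert(out.end(), vch.begin() + nReadPos, vch.end());
        vch.clear();
        nReadPos = 0;
    }
};
```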
The class template base_uint had its own private lookup table in every
instantiation. Removing it saves 256 bytes per instantiation.
The result is not spectacular, as bitcoin-qt has only shrunk by about 1 KB,
but it is still a valid improvement.
Also, I have replaced a for loop with a memset() call.
Made CBigNum::SetHex() use the new HexDigit() function.
Signed-off-by: Olivier Langlois <olivier@olivierlanglois.net>
Just-in-case sanity test for JSON spirit and AmountFromValue.
Also update rpc_format_monetary_values test to use ValueFromAmount,
so that ValueFromAmount is also tested.
Instead of building a full copy of a CTransaction being signed, and
then modifying bits and pieces until its fits the form necessary
for computing the signature hash, use a wrapper serializer that
only serializes the necessary bits on-the-fly.
This makes it easier to see which data is actually being hashed,
reduces load on the heap, and also marginally improves performance
(around 3-4us/sigcheck here). The performance improvements are much
larger for large transactions, though.
The old implementation of SignatureHash is moved to a unit test,
to test whether the old and new algorithm result in the same value
for randomly-constructed transactions.
This change moves test data into the binaries rather than reading them from
the disk at runtime.
Advantages:
- Tests become distributable
- Cross-compile friendly. Build on one machine and execute in an arbitrary
location on another.
- Easier testing for backports. Users can verify that tests pass without having
to track down corresponding test data.
- More trustworthy test results and easier quality assurance as tests make
fewer assumptions about their environment.
- Tests could theoretically run at client/daemon startup and exit on failure.
Disadvantages:
- Requires the 'hexdump' build dependency. This is a standard BSD tool that should
be usable everywhere. It is likely already installed on all build-machines.
- Tests can no longer be fudged after build by altering test-data.
It seems this was forgotten about when IsPushOnly() and the unit tests were
written. A particular oddity is that OP_RESERVED doesn't count towards
the >201 opcode limit, unlike every other named opcode.
To fix a minor malleability found by Sergio Lerner (reported here:
https://bitcointalk.org/index.php?topic=8392.msg1245898#msg1245898)
The problem is that if (R,S) is a valid ECDSA signature for a given
message and public key, (R,-S) is also valid. Modulo N (the order
of the secp256k1 curve), this means that both (R,S) and (R,N-S) are
valid. Given that N is odd, S and N-S have a different lowest bit.
We solve the problem by forcing signatures to have an even S value,
excluding one of the alternatives.
This commit just changes the signing code to always produce even S
values, and adds a verification mode to check it. This code is not
enabled anywhere yet. Existing tests in key_tests.cpp verify that
the produced signatures are still valid.
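A sketch of the normalization step on the signing side, assuming the pre-1.1.0 OpenSSL ECDSA_SIG struct with directly accessible r/s members that the code used at the time; the function name is illustrative.
```
#include <openssl/bn.h>
#include <openssl/ec.h>
#include <openssl/ecdsa.h>

// If the S component of a freshly created signature is odd, replace it with
// order - S: the other equally valid solution, which is guaranteed to be even
// because the order of secp256k1 is odd.
bool MakeSignatureSEven(ECDSA_SIG* sig, const EC_GROUP* group)
{
    BN_CTX* ctx = BN_CTX_new();
    BIGNUM* order = BN_new();
    if (!ctx || !order) return false;
    EC_GROUP_get_order(group, order, ctx);
    if (BN_is_odd(sig->s))
        BN_sub(sig->s, order, sig->s);   // S := N - S, flips the parity
    BN_free(order);
    BN_CTX_free(ctx);
    return true;
}
```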
The lengths of vectors, maps, sets, etc. are serialized using
Write/ReadCompactSize -- which, unfortunately, do not use a
unique encoding.
So deserializing and then re-serializing a transaction (for example)
can give you different bits than you started with. That doesn't
cause any problems that we are aware of, but it is exactly the type
of subtle mismatch that can lead to exploits.
With this pull, reading a non-canonical CompactSize throws an
exception, which means nodes will ignore 'tx' or 'block' or
other messages that are not properly encoded.
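A sketch of the canonical decoding rule, written against a plain byte buffer rather than CDataStream. The thresholds follow the CompactSize format (1-byte form below 253, then 0xFD + uint16, 0xFE + uint32, 0xFF + uint64): each longer form is rejected if its value would also have fit in a shorter one.
```
#include <cstdint>
#include <stdexcept>
#include <vector>

uint64_t ReadCompactSizeCanonical(const std::vector<unsigned char>& buf, size_t& pos)
{
    // Read nbytes little-endian bytes from buf at pos.
    auto readLE = [&](int nbytes) -> uint64_t {
        if (pos + nbytes > buf.size()) throw std::runtime_error("CompactSize: truncated");
        uint64_t v = 0;
        for (int i = 0; i < nbytes; ++i) v |= uint64_t(buf[pos + i]) << (8 * i);
        pos += nbytes;
        return v;
    };
    if (pos >= buf.size()) throw std::runtime_error("CompactSize: truncated");
    const unsigned char first = buf[pos++];
    if (first < 253) return first;                 // 1-byte form
    uint64_t v;
    if (first == 253) {                            // 0xFD + uint16
        v = readLE(2);
        if (v < 253) throw std::runtime_error("non-canonical ReadCompactSize()");
    } else if (first == 254) {                     // 0xFE + uint32
        v = readLE(4);
        if (v <= 0xFFFFu) throw std::runtime_error("non-canonical ReadCompactSize()");
    } else {                                       // 0xFF + uint64
        v = readLE(8);
        if (v <= 0xFFFFFFFFu) throw std::runtime_error("non-canonical ReadCompactSize()");
    }
    return v;
}
```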
Please check my logic... but this change is safe with respect to
causing a network split. Old clients that receive
non-canonically-encoded transactions or blocks deserialize
them into CTransaction/CBlock structures in memory, and then
re-serialize them before relaying them to peers.
And please check my logic with respect to causing a blockchain
split: there are no CompactSize fields in the block header, so
the block hash is always canonical. The merkle root in the block
header is computed on a vector<CTransaction>, so
any non-canonical encoding of the transactions in 'tx' or 'block'
messages is erased as they are read into memory by old clients,
and does not affect the block hash. And, as noted above, old
clients re-serialize (with canonical encoding) 'tx' and 'block'
messages before relaying to peers.
Fixes issue#2838; this is a tweaked version of pull#2845 that
should not leak the length of the password and is more generic,
in case we run into other situations where we need
timing-attack-resistant comparisons.
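A sketch of a timing-attack-resistant comparison in this spirit (not necessarily the exact helper that was added): every byte is examined regardless of where the first mismatch occurs, so the running time does not leak how much of the secret matched, nor the secret's length.
```
#include <cstddef>
#include <string>

bool TimingResistantEqualSketch(const std::string& a, const std::string& b)
{
    if (b.empty()) return a.empty();
    // Accumulate differences instead of returning at the first mismatch.
    size_t accumulator = a.size() ^ b.size();
    for (size_t i = 0; i < a.size(); ++i)
        accumulator |= static_cast<unsigned char>(a[i]) ^
                       static_cast<unsigned char>(b[i % b.size()]);
    return accumulator == 0;
}
```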
Orphan transactions were stored as a CDataStream pointer;
this changes the mapOrphanTransactions data structures to
store orphans as a CTransaction.
This also fixes CVE-2013-4627 by always re-serializing
transactions before relaying them.
The new class is accessed via the Params() method and holds
most things that vary between main, test and regtest networks.
The regtest mode has two purposes, one is to run the
bitcoind/bitcoinj comparison tool which compares two separate
implementations of the Bitcoin protocol looking for divergence.
The other is that when run, you get a local node which can mine
a single block instantly, which is highly convenient for testing
apps during development as there's no need to wait 10 minutes for
a block on the testnet.
Removed AreInputsStandard from CTransaction, made it a regular function in main.
Moved CTransaction::GetOutputFor to CCoinsViewCache.
Moved GetLegacySigOpCount and GetP2SHSigOpCount out of CTransaction into regular functions in main.
Moved GetValueIn and HaveInputs from CTransaction into CCoinsViewCache.
Moved AllowFree, ClientCheckInputs, CheckInputs, UpdateCoins, and CheckTransaction out of CTransaction and into main.
Moved IsStandard and IsFinal out of CTransaction and put them in main as IsStandardTx and IsFinalTx.
Moved GetValueOut out of CTransaction into main.
Moved CTxIn, CTxOut, and CTransaction into core.
Added minimum fee parameter to CTxOut::IsDust() temporarily until CTransaction is moved to core.h so that CTxOut needn't know about CTransaction.
- explicitly set the default of all GetBoolArg() calls
- rework getarg_test.cpp and util_tests.cpp to cover this change
- some indentation fixes
- move macdockiconhandler.h include in bitcoin.cpp to the "our headers"
section
This fixes test_bitcoin failures on openbsd reported by dhill on IRC.
On some systems rand() is a simple LCG over 2^31 and so it produces
an even-odd sequence. ApproximateBestSubset was only using the least
significant bit and so every run of the iterative solver would be the
same for some inputs, resulting in some pretty dumb decisions.
Using something other than the least significant bit would paper over
the issue but who knows what other way a system's rand() might get us
here. Instead we use an internal RNG with a period of something like
2^60 which is well behaved. This also makes it possible to make the
selection deterministic for the tests, if we wanted to implement that.
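One well-known construction with roughly that period is a pair of 16-bit multiply-with-carry generators combined into 32-bit outputs. The sketch below uses the classic Marsaglia constants and fixed seeds; it is an illustration of the idea rather than the exact code added.
```
#include <cstdint>

// Two multiply-with-carry generators; combined period is around 2^60 and,
// unlike a bare LCG rand(), the low bits are well behaved. Fixed seeds also
// make coin selection reproducible for tests.
static uint32_t rng_z = 11;
static uint32_t rng_w = 11;

static inline uint32_t InsecureRand()
{
    rng_z = 36969 * (rng_z & 0xFFFF) + (rng_z >> 16);
    rng_w = 18000 * (rng_w & 0xFFFF) + (rng_w >> 16);
    return (rng_w << 16) + rng_z;
}

// The iterative solver can then take an unbiased coin flip from any bit,
// e.g. (InsecureRand() & 1), without the even-odd pattern of rand().
```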
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After every processed transaction, its signature actions are
pushed to a CScriptCheckQueue, which maintains a queue and some
synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
this queue.
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
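A simplified sketch of the queueing scheme described above, using C++11 threads and std::function checks in place of the real CScriptCheck objects; all names here are illustrative.
```
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

class CheckQueueSketch {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<bool()>> checks;  // queued, not-yet-run checks
    size_t nInFlight = 0;                      // checks popped but still running
    bool fAllOk = true;
    bool fQuit = false;

public:
    // Block-connection code: store a check instead of running it immediately.
    void Add(std::function<bool()> check)
    {
        std::lock_guard<std::mutex> lock(m);
        checks.push(std::move(check));
        cv.notify_one();
    }

    // Main loop, run by each worker thread (fMaster = false) and by Wait()
    // on the block-connecting thread (fMaster = true).
    bool Loop(bool fMaster)
    {
        std::unique_lock<std::mutex> lock(m);
        while (true) {
            if (!checks.empty()) {
                std::function<bool()> check = std::move(checks.front());
                checks.pop();
                ++nInFlight;
                lock.unlock();
                bool fOk = check();                 // run the check unlocked
                lock.lock();
                --nInFlight;
                if (!fOk) fAllOk = false;
                if (checks.empty() && nInFlight == 0)
                    cv.notify_all();                // wake a waiting master
            } else if (fMaster && nInFlight == 0) {
                bool fRet = fAllOk;                 // everything verified
                fAllOk = true;                      // reset for the next block
                return fRet;
            } else if (fQuit) {
                return fAllOk;
            } else {
                cv.wait(lock);                      // idle until more work arrives
            }
        }
    }

    // Called once all transactions have been processed: help drain the queue,
    // then report whether every queued check succeeded.
    bool Wait() { return Loop(true); }

    void Stop()
    {
        std::lock_guard<std::mutex> lock(m);
        fQuit = true;
        cv.notify_all();
    }
};
```
Worker threads would each run Loop(false) in their own thread; the block-connection code calls Add() for every signature check and Wait() once all transactions have been processed.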
These flags select features to be enabled/disabled during script
evaluation/checking, instead of several booleans passed along.
Currently these flags are defined:
* SCRIPT_VERIFY_P2SH: enable BIP16-style subscript evaluation
* SCRIPT_VERIFY_STRICTENC: enforce strict adherence to pubkey/sig encoding standards.
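A sketch of the flag style this describes; the two flag names come from the list above, while the exact numeric values are an assumption.
```
// Script verification flags combined into a single bitmask instead of a
// growing list of boolean parameters.
enum {
    SCRIPT_VERIFY_NONE      = 0,
    SCRIPT_VERIFY_P2SH      = (1U << 0),  // BIP16-style subscript evaluation
    SCRIPT_VERIFY_STRICTENC = (1U << 1),  // strict pubkey/signature encodings
};

// Callers then pass e.g.:
//   unsigned int flags = SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_STRICTENC;
```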
Flushes the blktree/ and coins/ databases, and reindexes the
block chain files, as if their contents were loaded via -loadblock.
Based on earlier work by Jeff Garzik.
signrawtransaction was unable to sign pay-to-script-hash inputs
when given the list of private keys to use. With this commit
you can provide the p2sh redemption script in the list of
inputs.
To prevent excessive copying of CCoins in and out of the CCoinsView
implementations, introduce a GetCoins() function in CCoinsViewCache
which returns a direct reference. The block validation and connection
logic is updated to require caching CCoinsViews, and exploits the
GetCoins() function heavily.
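A toy contrast between the copy-out pattern and the reference-returning GetCoins() described here; these are simplified stand-ins, not the real CCoins/CCoinsViewCache.
```
#include <map>
#include <string>

struct Coins { long long value = 0; bool spent = false; };

class CoinsCache {
    std::map<std::string, Coins> cache;
public:
    // Old style: copy the entry out, modify it, then write the copy back.
    bool GetCoinsCopy(const std::string& txid, Coins& out) const {
        std::map<std::string, Coins>::const_iterator it = cache.find(txid);
        if (it == cache.end()) return false;
        out = it->second;          // full copy
        return true;
    }
    // New style: hand out a direct reference into the cache, so callers can
    // modify the coins in place without any copying.
    Coins& GetCoins(const std::string& txid) {
        return cache[txid];        // creates an entry in this toy if missing
    }
};
```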
This switches bitcoin's transaction/block verification logic to use a
"coin database", which contains all unredeemed transaction output scripts,
amounts and heights.
The name ultraprune comes from the fact that instead of a full transaction
index, we only (need to) keep an index with unspent outputs. For now, the
blocks themselves are kept as usual, although they are only necessary for
serving, rescanning and reorganizing.
The basic datastructures are CCoins (representing the coins of a single
transaction), and CCoinsView (representing a state of the coins database).
There are several implementations for CCoinsView. A dummy, one backed by
the coins database (coins.dat), one backed by the memory pool, and one
that adds a cache on top of it. FetchInputs, ConnectInputs, ConnectBlock,
DisconnectBlock, ... now operate on a generic CCoinsView.
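The shape of that abstraction, roughly; method names and signatures are an approximation, not the exact interface.
```
class uint256;   // transaction id type from the codebase
class CCoins;    // per-transaction unspent outputs (scripts, amounts, height)

// Abstract view of the coins database; the dummy, database-backed, mempool
// and caching implementations all derive from something like this.
class CCoinsView {
public:
    virtual bool GetCoins(const uint256& txid, CCoins& coins) = 0;        // read a tx's coins
    virtual bool SetCoins(const uint256& txid, const CCoins& coins) = 0;  // write them back
    virtual bool HaveCoins(const uint256& txid) = 0;                      // existence check
    virtual ~CCoinsView() {}
};
```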
The block switching logic now builds a single cached CCoinsView with
changes to be committed to the database before any changes are made.
This means no uncommitted changes are ever read from the database, and
should ease the transition to another database layer which does not
support transactions (but does support atomic writes), like LevelDB.
For the getrawtransaction() RPC call, access to a txid-to-disk index
would be preferable. As this index is not necessary or even useful
for any other part of the implementation, it is not provided. Instead,
getrawtransaction() uses the coin database to find the block height,
and then scans that block to find the requested transaction. This is
slow, but should suffice for debug purposes.
Special serializer/deserializer for amount values. It is optimized for
values which have few non-zero digits in decimal representation. Most
amounts currently in the txout set take only 1 or 2 bytes to
represent.
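A sketch of a compressor in this spirit: strip trailing decimal zeros into a small exponent and pack the remaining digits, so round satoshi amounts become very small numbers. It is written from the description above and the pair round-trips, but it is an illustration rather than a verbatim copy of the serializer.
```
#include <cstdint>

// n = 0 maps to 0. Otherwise factor n as d * 10^e (d not divisible by 10,
// 0 <= e <= 9) and pack (d, e) into one integer. For example 5'000'000'000
// satoshis (50 BTC) compresses to 50, which the variable-length integer
// encoder stores in a single byte.
uint64_t CompressAmountSketch(uint64_t n)
{
    if (n == 0) return 0;
    int e = 0;
    while ((n % 10) == 0 && e < 9) { n /= 10; ++e; }
    if (e < 9) {
        int d = n % 10;        // last non-zero decimal digit, 1..9
        n /= 10;
        return 1 + (n * 9 + d - 1) * 10 + e;
    }
    return 1 + (n - 1) * 10 + 9;
}

uint64_t DecompressAmountSketch(uint64_t x)
{
    if (x == 0) return 0;
    --x;
    int e = x % 10;            // recover the exponent
    x /= 10;
    uint64_t n;
    if (e < 9) {
        int d = (x % 9) + 1;   // recover the last non-zero digit
        x /= 9;
        n = x * 10 + d;
    } else {
        n = x + 1;
    }
    while (e--) n *= 10;       // re-append the stripped zeros
    return n;
}
```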
Variable-length integers: bytes are an MSB base-128 encoding of the number.
The high bit in each byte signifies whether another digit follows. To make
the encoding one-to-one, one is subtracted from all but the last digit.
Thus, the byte sequence a[] with length len, where all but the last byte
has bit 128 set, encodes the number:
(a[len-1] & 0x7F) + sum(i=1..len-1, 128^i*((a[len-i-1] & 0x7F)+1))
Properties:
* Very small (0-127: 1 byte, 128-16511: 2 bytes, 16512-2113663: 3 bytes)
* Every integer has exactly one encoding
* Encoding does not depend on size of original integer type
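A sketch of the encoder/decoder pair for this format, written against a plain byte vector rather than the stream classes.
```
#include <cstdint>
#include <vector>

void WriteVarInt(std::vector<unsigned char>& out, uint64_t n)
{
    unsigned char tmp[10];   // enough base-128 digits for a 64-bit value
    int len = 0;
    while (true) {
        // Continuation bit on every digit except the lowest one.
        tmp[len] = (n & 0x7F) | (len ? 0x80 : 0x00);
        if (n <= 0x7F) break;
        n = (n >> 7) - 1;    // the "minus one" that makes the encoding one-to-one
        len++;
    }
    do {
        out.push_back(tmp[len]);   // emit most significant digit first
    } while (len--);
}

uint64_t ReadVarInt(const std::vector<unsigned char>& in, size_t& pos)
{
    uint64_t n = 0;
    while (true) {
        unsigned char ch = in.at(pos++);
        n = (n << 7) | (ch & 0x7F);
        if (ch & 0x80) n++;        // undo the subtraction for continued digits
        else return n;
    }
}
```
For example, 127 encodes as 0x7F, 128 as 0x80 0x00, and 16511 as 0xFF 0x7F, matching the byte counts listed above.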