Pull request #948 introduced a fix for nodes stuck on a long side branch
of the main chain. The fix was non-functional, however: the additional
getdata request was created in a first step of processing, but then dropped
in a second step because it was considered redundant. This commit fixes it
by sending the request directly.
Works for wallet transactions, memory-pool transactions and block chain
transactions.
Available for all:
* txid
* version
* locktime
* size
* coinbase/inputs/outputs
* confirmations
Available only for wallet transactions:
* amount
* fee
* details
* blockindex
Available for wallet transactions and block chain transactions:
* blockhash
* time
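As a rough illustration of the field availability above (the struct below is a sketch, not the actual RPC code):

    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct TxInfo {
        // Available for all transactions:
        std::string txid;
        int32_t version;
        uint32_t locktime;
        std::size_t size;
        bool coinbase;
        std::vector<std::string> inputs;
        std::vector<std::string> outputs;
        int confirmations;

        // Only meaningful for wallet transactions:
        int64_t amount;
        int64_t fee;
        std::string details;
        int blockindex;

        // Wallet and block chain transactions (not memory-pool ones):
        std::string blockhash;
        int64_t time;
    };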
This commit removes the dependency of serialize.h on PROTOCOL_VERSION,
and makes this parameter required instead of implicit. This is much saner,
as it makes obvious which places are influenced when a version number
changes.
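A minimal sketch of the idea (the names below are illustrative, not the actual serialize.h interface):

    #include <cstdint>
    #include <ostream>

    struct Example {
        int32_t nValue;

        template <typename Stream>
        void Serialize(Stream& s, int nType, int nVersion) const {
            // The caller decides which serialization version applies;
            // there is no implicit PROTOCOL_VERSION default any more.
            (void)nType;
            if (nVersion >= 1)
                s.write(reinterpret_cast<const char*>(&nValue), sizeof(nValue));
        }
    };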
Conflict:
* cs_main in ProcessMessages() (before calling ProcessMessage)
* cs_vSend in CNode::BeginMessage
versus:
* cs_vSend in ThreadMessageHandler2 (before calling SendMessages)
* cs_main in SendMessages
Even though cs_vSend is a try_lock, if it succeeds simultaneously with
the locking of cs_main in ProcessMessages(), it could cause a deadlock.
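For illustration only (std::mutex stand-ins, not the actual code), the conflicting lock orders look like this:

    // Thread A takes cs_main then cs_vSend; thread B takes cs_vSend (try_lock)
    // then cs_main. If B's try_lock succeeds just as A holds cs_main, each
    // thread ends up waiting on the lock the other one holds.
    #include <mutex>

    std::mutex cs_main, cs_vSend;

    void ThreadA() {                       // ProcessMessages-like order
        std::lock_guard<std::mutex> lock_main(cs_main);
        std::lock_guard<std::mutex> lock_send(cs_vSend);    // BeginMessage
    }

    void ThreadB() {                       // ThreadMessageHandler2-like order
        std::unique_lock<std::mutex> lock_send(cs_vSend, std::try_to_lock);
        if (lock_send.owns_lock()) {
            std::lock_guard<std::mutex> lock_main(cs_main); // SendMessages
        }
    }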
Open the database once per "tx" message, rather than multiple times,
when orphan transactions are present.
As a side effect, a now-unused CTransaction::AcceptToMemoryPool()
variant is removed.
The reference miner exists for testnet-in-a-box type situations, and as a
reference. We don't care enough about highly optimized internal
mining to keep workarounds like this.
Add a pong message that is sent in reply to a ping. It echoes back a nonce
field that is now added to the ping message. Send a nonce of zero in ping
messages.
Original author: Mike Hearn @ Google
Modified Mike's change to introduce a mild form of protocol documentation in
version.h.
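A simplified sketch of the exchange (the Message type and handler below are stand-ins for the real networking code):

    #include <cstdint>
    #include <iostream>
    #include <string>

    struct Message { std::string command; uint64_t nonce; };

    // A pong simply echoes back the nonce carried by the incoming ping.
    Message HandlePing(const Message& ping) {
        return Message{"pong", ping.nonce};
    }

    int main() {
        Message ping{"ping", 0};           // this client sends a nonce of zero
        Message pong = HandlePing(ping);
        std::cout << pong.command << " " << pong.nonce << std::endl;
        return 0;
    }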
Where possible, use boost::filesystem::path instead of std::string or
char* for filenames. This avoids a lot of manual string tinkering, in
favor of path::operator/.
GetDataDir is also reworked significantly: it now only keeps two cached
directory names (the network-specific data dir, and the root data dir),
which are decided through a parameter instead of pre-initialized global
variables.
Finally, remove the "upgrade from 0.1.5" case where a debug.log in the
current directory has to be removed.
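A small illustration of the path composition (the directory layout shown is an assumption, not the actual GetDataDir logic):

    #include <boost/filesystem.hpp>
    #include <iostream>

    namespace fs = boost::filesystem;

    // Simplified stand-in: pick the network-specific or the root data dir.
    fs::path GetDataDir(bool fNetSpecific) {
        fs::path pathRoot = fs::path(".") / ".bitcoin";        // illustrative
        return fNetSpecific ? pathRoot / "testnet" : pathRoot;
    }

    int main() {
        // path::operator/ replaces manual concatenation of separators.
        fs::path pathWallet = GetDataDir(false) / "wallet.dat";
        std::cout << pathWallet.string() << std::endl;
        return 0;
    }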
All client version information is moved to version.cpp, which optionally
(-DHAVE_BUILD_INFO) includes build.h. build.h is automatically generated
on supporting platforms via contrib/genbuild.sh, using git describe.
The git export-subst attribute is used to put the commit id statically
in version.cpp inside generated archives, and this value is used if no
build.h is present.
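Roughly, the fallback chain in the version translation unit looks like this (macro and identifier names are illustrative):

    #ifdef HAVE_BUILD_INFO
    #include "build.h"              // generated via contrib/genbuild.sh (git describe)
    #endif

    #ifndef BUILD_DESC
    // In released archives, git export-subst has substituted a static commit
    // id into this file; otherwise fall back to a generic placeholder.
    #define BUILD_DESC "v0.0.0-unknown"
    #endif

    const char* const CLIENT_BUILD = BUILD_DESC;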
The gitian descriptors are modified to use git archive instead of a
copy, to create the src/ directory in the output. This way,
src/src/version.cpp will contain the static commit id. To prevent
gitian builds from getting the "-dirty" marker in their git-describe
generated identifiers, no touching of files or running sed on the
makefile is performed anymore. This does not seem to influence
determinism.
- rename wxMessageBox, remove redundant arguments to noui/qtui calls
- also, add flag to force blocking, modal dialog box for disk space warning etc
- clarify function naming
- no more special MessageBox needed from AppInit2, as window object is created before calling AppInit2
In cases of very large reorganisations (hundreds of blocks), a situation
may appear where an 'inv' is sent as response to a 'getblocks', but the
last block mentioned in the inv is already known to the receiver node.
However, the supplying node uses a request for this last block as a
trigger to send the rest of the inv blocks. If it never comes, the block
chain download is stuck.
This commit makes the receiver node always request the last inv'ed block,
even if it is already known, to prevent this problem.
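Sketched below with stand-in types and helpers (not the actual inv handling code): the last entry of a block "inv" is requested even when it is already known.

    #include <cstddef>
    #include <set>
    #include <vector>

    typedef int BlockHash;                      // stand-in for uint256
    std::set<BlockHash> setKnownBlocks;         // stand-in for the local block index

    void RequestBlock(BlockHash hash) {         // stand-in for the getdata/getblocks push
        (void)hash;
    }

    void ProcessBlockInv(const std::vector<BlockHash>& vInv) {
        for (std::size_t i = 0; i < vInv.size(); ++i) {
            bool fAlreadyHave = setKnownBlocks.count(vInv[i]) > 0;
            bool fLast = (i + 1 == vInv.size());
            // Requesting the last block even when it is known re-triggers the
            // supplying node to send the rest of its inventory.
            if (!fAlreadyHave || fLast)
                RequestBlock(vInv[i]);
        }
    }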
Sometimes a new block arrives in a new chain that was already the
best valid one, but wasn't marked that way. This happens for example
when network rules change to recover after a fork.
In this case, it is not necessary to do the entire reorganisation
inside a single db commit. These can become huge, and exceed the
objects/lockers limits in bdb. This patch limits the blocks the
actual reorganisation is applied to, and adds the next blocks
afterwards in separate db transactions.
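In outline, with stand-in types for the database layer (not the actual CTxDB interface):

    #include <cstddef>
    #include <vector>

    struct Block {};                            // stand-in block type
    struct DbTxn {                              // stand-in database transaction
        void Commit() {}
    };
    void Reorganize(DbTxn&, Block*) {}          // stand-in: reorg up to a pivot block
    void ConnectBlock(DbTxn&, Block*) {}        // stand-in: connect one more block

    // Apply the reorganisation itself within one bounded db transaction, then
    // connect the remaining new-chain blocks in separate transactions, so a
    // single commit never exceeds bdb's objects/lockers limits.
    void SetBestChainLimited(const std::vector<Block*>& vNewChain, std::size_t nMaxDepth) {
        if (vNewChain.empty() || nMaxDepth == 0)
            return;
        std::size_t nCut = vNewChain.size() < nMaxDepth ? vNewChain.size() : nMaxDepth;
        {
            DbTxn txn;
            Reorganize(txn, vNewChain[nCut - 1]);   // bounded reorganisation
            txn.Commit();
        }
        for (std::size_t i = nCut; i < vNewChain.size(); ++i) {
            DbTxn txn;                              // one commit per extra block
            ConnectBlock(txn, vNewChain[i]);
            txn.Commit();
        }
    }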
Introduce the following network rule:
* a block is not valid if it contains a transaction whose hash
already exists in the block chain, unless all that transaction's
outputs were already spent before said block.
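In pseudocode-like form (the helpers below are stand-ins for the real chain and transaction-index lookups):

    #include <cstddef>
    #include <string>
    #include <vector>

    struct Tx { std::string txid; };

    // Stand-ins for the real lookups.
    bool ChainContainsTx(const std::string& txid) { (void)txid; return false; }
    bool AllOutputsSpentBefore(const std::string& txid) { (void)txid; return true; }

    // A block is invalid if it contains a transaction whose hash already
    // exists in the chain, unless every output of that earlier transaction
    // was already spent before this block.
    bool CheckNoUnspentDuplicates(const std::vector<Tx>& vtx) {
        for (std::size_t i = 0; i < vtx.size(); ++i)
            if (ChainContainsTx(vtx[i].txid) && !AllOutputsSpentBefore(vtx[i].txid))
                return false;
        return true;
    }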
Warning: this is effectively a network rule change, with potential
risk for forking the block chain. Leaving this unfixed carries the
same risk however, for attackers that can cause a reorganisation
in part of the network.
Thanks to Russell O'Connor and Ben Reeves.
Doing so would allow an attack on old nodes, which would relay a
standard transaction spending a BIP16 output in an invalid way,
until it reached a new node, which would disconnect the relaying peer.
Reported by makomk on IRC.
Design goals:
* Only keep a limited number of addresses around, so that addr.dat does not grow without bound.
* Keep the address tables in-memory, and occasionally write the table to addr.dat.
* Make sure no (localized) attacker can fill the entire table with his nodes/addresses.
See comments in addrman.h for more detailed information.
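A toy illustration of the third goal (the bucketing scheme below is a simplification of what addrman.h describes, not the actual implementation):

    #include <cstddef>
    #include <functional>
    #include <set>
    #include <string>
    #include <vector>

    // Addresses go into a bounded number of bounded buckets, and the bucket is
    // chosen from the group of the address's source, so a single (localized)
    // attacker can only ever occupy a few buckets.
    class ToyAddrTable {
        static const std::size_t nBuckets = 256;
        static const std::size_t nBucketSize = 64;
        std::vector<std::set<std::string> > vBuckets;
    public:
        ToyAddrTable() : vBuckets(nBuckets) {}
        void Add(const std::string& addr, const std::string& sourceGroup) {
            std::size_t nBucket = std::hash<std::string>()(sourceGroup) % nBuckets;
            if (vBuckets[nBucket].size() < nBucketSize)  // bounded per bucket,
                vBuckets[nBucket].insert(addr);          // so the table cannot grow without bound
        }
    };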
This also avoids flushing setAddrKnown until 24 hours have passed,
and avoids contacting the external IP services when not listening.
Advertising non-listening nodes is just addr message spam.
It doesn't help the network; in fact it hurts the network,
and it also hurts users' privacy.
Advertising far-out-of-sync nodes doesn't help the network either:
they can't even forward (most) transactions, and it wastes nodes'
outbound slots.
Allow mining of min-difficulty blocks if 20 minutes have gone by without mining a regular-difficulty block.
Normal rules apply every 2016 blocks, though, so there may be a very-slow-to-confirm block at the difficulty-adjustment blocks.
This also removes an unneeded sigops-per-byte check when accepting transactions to the memory pool (unneeded assuming only standard transactions are being accepted), and it counts P2SH sigops only after the switchover date.
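A sketch of the 20-minute rule described above (the constants and interface are simplified, not the actual GetNextWorkRequired code):

    #include <cstdint>

    // After 20 minutes without a block, a minimum-difficulty block is allowed,
    // but every 2016th block still follows the normal retargeting rules.
    unsigned int GetNextWorkRequiredTestnet(int64_t nLastBlockTime,
                                            int64_t nNewBlockTime,
                                            unsigned int nLastBits,
                                            int nHeight) {
        const int64_t nTargetSpacing = 10 * 60;            // 10-minute target
        const unsigned int nProofOfWorkLimit = 0x1d00ffff; // minimum difficulty
        bool fRetargetBlock = (nHeight % 2016 == 0);
        if (!fRetargetBlock && nNewBlockTime > nLastBlockTime + nTargetSpacing * 2)
            return nProofOfWorkLimit;   // 20 minutes elapsed: min difficulty allowed
        return nLastBits;               // otherwise keep the previous target here
    }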
This introduces CNetAddr and CService, respectively wrapping an
(IPv6) IP address and an IP+port combination. This functionality used
to be part of CAddress, which also contains network flags and
connection attempt information. These extra fields are however not
always necessary.
These classes, along with logic for creating connections and doing
name lookups, are moved to netbase.{h,cpp}, which does not depend on
headers.h.
Furthermore, CNetAddr is mostly IPv6-ready, though IPv6
functionality is not yet enabled for the application itself.
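In outline (the members shown are illustrative, not the full class definitions):

    #include <cstdint>
    #include <cstring>

    class CNetAddr {                   // just an (IPv6-capable) IP address
    protected:
        unsigned char ip[16];
    public:
        CNetAddr() { std::memset(ip, 0, sizeof(ip)); }
    };

    class CService : public CNetAddr { // an IP address plus a port
    protected:
        uint16_t port;
    public:
        explicit CService(uint16_t portIn = 0) : port(portIn) {}
    };

    // CAddress (not shown) layers the network flags and connection-attempt
    // bookkeeping on top of CService, since those fields are not always needed.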
so it takes a flag for how to interpret OP_EVAL.
Also increased IsStandard size of scriptSigs to 500 bytes, so
a 3-of-3 multisig transaction IsStandard.
OP_EVAL is a new opcode that evaluates an item on the stack as a script.
It enables a new type of bitcoin address that needs an arbitrarily
complex script to redeem.
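A sketch of the OP_EVAL case (the interpreter below is a stand-in, not the real EvalScript): the top stack item is popped and evaluated as a script against the same stack.

    #include <vector>

    typedef std::vector<unsigned char> valtype;

    // Stand-in for the real script interpreter (always "succeeds" here).
    bool EvalScript(std::vector<valtype>& stack, const valtype& script) {
        (void)stack; (void)script;
        return true;
    }

    bool ExecOpEval(std::vector<valtype>& stack) {
        if (stack.empty())
            return false;                     // nothing to evaluate
        valtype vchScript = stack.back();
        stack.pop_back();
        // Recursively evaluate the popped item as a script.
        return EvalScript(stack, vchScript);
    }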
During the rushed transition from 0.01 BTC to 0.0005 BTC fees, we took the
approach of dropping the relay and block-inclusion fee to 0.0005 BTC
immediately, and only delayed adjusting the sending fee for the next release.
Afterward, the relay fee was lowered to 0.0001 BTC to avoid having the same
problem in the future. However, the block inclusion code was left setting
fForRelay to true! This fixes that, so the lower 0.0001 BTC allowance is (as
intended) only permitted for real relaying.
getmemorypool [data]
If [data] is not specified, returns data needed to construct a block to work on:
"version" : block version
"previousblockhash" : hash of current highest block
"transactions" : contents of non-coinbase transactions that should be included in the next block
"coinbasevalue" : maximum allowable input to coinbase transaction, including the generation award and transaction fees
"time" : timestamp appropriate for next block
"bits" : compressed target of next block
If [data] is specified, tries to solve the block and returns true if it was successful.
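For orientation only (an illustrative stand-in type, not the actual RPC implementation), the no-argument reply carries fields shaped roughly like this:

    #include <cstdint>
    #include <string>
    #include <vector>

    struct MemoryPoolWork {
        int32_t nVersion;                        // "version"
        std::string strPrevBlockHash;            // "previousblockhash"
        std::vector<std::string> vTransactions;  // "transactions" (non-coinbase)
        int64_t nCoinbaseValue;                  // "coinbasevalue": subsidy + fees
        int64_t nTime;                           // "time"
        uint32_t nBits;                          // "bits": compressed target
    };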
Collapsed multiple wallet mutexes to a single cs_wallet, to avoid deadlocks with wallet methods that acquired locks in different order.
Also change master RPC call handler to acquire cs_main and cs_wallet locks before executing RPC calls; requiring each RPC call to acquire the right set of locks in the right order was too error-prone.
Explicitly make these global variables less-global to reduce the maximum
scope of this global state.
In my experience global variables tend to be a major source of bugs. As
such, the less accessible they are, the less likely they are to be the
source of a bug.
Signed-off-by: Giel van Schijndel <me@mortis.eu>