288d85d Get rid of CTxMempool::lookup() entirely (Pieter Wuille)
c2a4724 Optimization: use usec in expiration and reuse nNow (Pieter Wuille)
e9b4780 Optimization: don't check the mempool at all if no mempool req ever (Pieter Wuille)
dbfb426 Optimize the relay map to use shared_ptr's (Pieter Wuille)
8d39d7a Switch CTransaction storage in mempool to std::shared_ptr (Pieter Wuille)
1b9e6d3 Add support for unique_ptr and shared_ptr to memusage (Pieter Wuille)
3d3602f Add RPC test for the p2p mempool command in conjunction with disabled bloomfilters (Jonas Schnelli)
beceac9 Disable the mempool P2P command when bloom filters disabled (Peter Todd)
We send a newly-accepted peer a 1000-entry addr message, and then only use
vAddrToSend for small messages. Deallocate vAddrToSend after it's been used for
the big message to save about 40 kB per connected inbound peer.
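A minimal sketch of the deallocation trick, assuming the usual swap idiom; `CAddress` here is a stand-in and the surrounding send-queue handling is elided:

```cpp
#include <string>
#include <vector>

struct CAddress { std::string ip; };  // illustrative stand-in for the real CAddress

// Once the initial ~1000-entry addr message has been flushed, release the send
// queue's allocation; later addr relay only ever appends a handful of entries.
void ReleaseAddrSendAllocation(std::vector<CAddress>& vAddrToSend)
{
    // Swapping with an empty temporary frees the old capacity immediately;
    // shrink_to_fit() would only be a non-binding request.
    std::vector<CAddress>().swap(vAddrToSend);
}
```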
- BIP9DeploymentInfo struct for static deployment info
- VersionBitsDeploymentInfo: Avoid C++11ism by commenting parameter names
- getblocktemplate: Make sure to set deployments in the version if it is LOCKED_IN
- In this commit, all rules are considered required for clients to support
* Switch mapRelay to use shared_ptr<CTransaction>
* Switch the relay code to copy mempool shared_ptr's, rather than copying
the transaction itself.
* Change vRelayExpiration to store mapRelay iterators rather than hashes
(smaller and faster).
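A hedged sketch of the resulting data-structure shape; `CTransaction`/`uint256` are stand-ins and the expiry interval is illustrative:

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <memory>
#include <string>
#include <utility>

struct CTransaction { /* ... */ };   // stand-in
using uint256 = std::string;         // stand-in for the real uint256

// Relay memory: txid -> shared ownership of the transaction (shared with the
// mempool), plus an expiration queue that stores map iterators, not hashes.
std::map<uint256, std::shared_ptr<const CTransaction>> mapRelay;
std::deque<std::pair<int64_t, decltype(mapRelay)::iterator>> vRelayExpiration;

void AddToRelay(const uint256& txid, std::shared_ptr<const CTransaction> tx, int64_t nNow)
{
    // Expire old entries; erasing through the stored iterator avoids a second lookup.
    while (!vRelayExpiration.empty() && vRelayExpiration.front().first < nNow) {
        mapRelay.erase(vRelayExpiration.front().second);
        vRelayExpiration.pop_front();
    }
    auto ret = mapRelay.emplace(txid, std::move(tx));
    if (ret.second)
        vRelayExpiration.emplace_back(nNow + 15 * 60 * 1000000LL, ret.first); // ~15 min in usec
}
```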
Optimistically test the latch bool before taking the lock.
For all IsInitialBlockDownload calls after the first to return false,
this avoids the need to lock cs_main.
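A minimal sketch of the latch pattern, with a std::mutex standing in for cs_main and the real IBD criteria stubbed out:

```cpp
#include <atomic>
#include <mutex>

std::mutex cs_main;                      // stand-in for Bitcoin's cs_main
std::atomic<bool> latchToFalse{false};   // flipped once IBD has completed

// Stand-in for the real checks (chain work, tip age), done under cs_main.
bool CheckInitialBlockDownloadCriteria() { return false; }

bool IsInitialBlockDownload()
{
    // Optimistically test the latch first: once we have left IBD we never
    // re-enter it, so every later call can return without touching cs_main.
    if (latchToFalse.load(std::memory_order_relaxed))
        return false;

    std::lock_guard<std::mutex> lock(cs_main);
    if (latchToFalse.load(std::memory_order_relaxed))
        return false;
    if (CheckInitialBlockDownloadCriteria())
        return true;
    latchToFalse.store(true, std::memory_order_relaxed);
    return false;
}
```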
Saves about 10% of application memory usage once the mempool warms up. Since the
mempool is DynamicUsage-regulated, this will translate to a larger mempool in
the same amount of space.
Map value type: eliminate the vin index; no users of the map need to know which
input of the transaction is spending the prevout.
Map key type: replace the COutPoint with a pointer to a COutPoint. A COutPoint
is 36 bytes, but each COutPoint is accessible from the same map entry's value.
A trivial DereferencingComparator functor allows indirect map keys, but the
resulting syntax is misleading: `map.find(&outpoint)`. Implement an indirectmap
that acts as a wrapper to a map that uses a DereferencingComparator, supporting
a syntax that accurately reflects the container's semantics: inserts and
iterators use pointers since they store pointers and need them to remain
constant and dereferenceable, but lookup functions take const references.
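A minimal sketch of the comparator and wrapper, trimmed to insert/find; the real indirectmap forwards more of the std::map interface:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Order by the pointed-to values, so the map behaves as if keyed by *key.
template <typename T>
struct DereferencingComparator {
    bool operator()(const T* a, const T* b) const { return *a < *b; }
};

template <typename K, typename V>
class indirectmap {
    using base = std::map<const K*, V, DereferencingComparator<K>>;
    base m;
public:
    using const_iterator = typename base::const_iterator;
    // Insert takes a pointer: the map stores it, so the caller must keep the key
    // alive and at a fixed address for as long as the entry exists.
    std::pair<typename base::iterator, bool> insert(const std::pair<const K*, V>& value) {
        return m.insert(value);
    }
    // Lookup takes a const reference and reads like a by-value find.
    const_iterator find(const K& key) const { return m.find(&key); }
    const_iterator end() const { return m.end(); }
};

int main()
{
    int stored_key = 42;                 // stand-in for a COutPoint owned elsewhere
    indirectmap<int, const char*> im;
    im.insert({&stored_key, "spender"});
    int probe = 42;                      // a temporary key is fine for lookups
    assert(im.find(probe) != im.end());
}
```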
This reduces the rate of notfound replies by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe for a node having a txn before
it has been relayed.
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
b4d24e1 Report reindexing progress in GUI (Pieter Wuille)
d3d7547 Add -reindex-chainstate that does not rebuild block index (Pieter Wuille)
fb8fad1 Optimize ActivateBestChain for long chains (Pieter Wuille)
316623f Switch reindexing to AcceptBlock in-loop and ActivateBestChain afterwards (Pieter Wuille)
d253ec4 Make ProcessNewBlock dbp const and update comment (Pieter Wuille)
The current logic for syncing headers may lead to lots of duplicate
getheaders requests being sent: If a new block arrives while the node
is in headers sync, it will send getheaders in response to the block
announcement. When the headers arrive, the message will be of maximum
size, and so a follow-up request will be sent, all in addition
to the existing headers sync. This creates a second "chain" of
getheaders requests. If more blocks arrive, this may even lead to
arbitrarily many parallel chains of redundant requests.
This patch changes the behaviour to only request more headers after a
maximum-sized message when it contained at least one unknown header.
This avoids sustaining parallel chains of redundant requests.
Note that this patch avoids the issues raised in the discussion of
https://github.com/bitcoin/bitcoin/pull/6821: There is no risk of the
node being permanently blocked. At the latest when a new block arrives
this will trigger a new getheaders request and restart syncing.
b559914 Move bloom and feerate filtering to just prior to tx sending. (Gregory Maxwell)
4578215 Return mempool queries in dependency order (Pieter Wuille)
ed70683 Handle mempool requests in send loop, subject to trickle (Pieter Wuille)
dc13dcd Split up and optimize transaction and block inv queues (Pieter Wuille)
f2d3ba7 Eliminate TX trickle bypass, sort TX invs for privacy and priority. (Gregory Maxwell)
ActivateBestChain uses chainActive after releasing the lock; reorder operations
to move all access to synchronized object into existing LOCK(cs_main) block.
This avoids sending more pointless INVs around updates, and
prevents using filter updates to time-tag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, the mempool command no longer leaks transaction arrival-order
information (as the mempool itself is also sorted), at least no more than
relay itself leaks it.
Previously we would assert that if every block in vBlockHashesToAnnounce is in
chainActive, then the blocks to be announced must connect. However, there are
edge cases where this assumption could be violated (e.g. using invalidateblock /
reconsiderblock), so just check for this case and revert to inv-announcement
instead.
Previously Bitcoin would send 1/4 of transactions out to all peers
instantly. This causes high overhead because it makes >80% of
INVs size 1. Doing so harms privacy, because it limits the
amount of source obscurity a transaction can receive.
These randomized broadcasts also disobeyed transaction dependencies
and required use of the orphan pool. Because the orphan pool is
so small, this led to poor propagation of dependent transactions.
When the bypass wasn't in effect, transactions were sent in the
order they were received. This avoided creating orphans but
undermined privacy fairly significantly.
This commit:
* Eliminates the bypass. The bypass is replaced by halving the
  average delay for outbound peers.
* Sorts candidate transactions for INV by their topological
  depth, then by their feerate (then hash), removing the
  information leakage and providing priority service to
  higher-fee transactions (see the comparator sketch after this list).
* Limits the number of transactions sent in a single INV to
  7 tx/sec (and twice that for outbound); this limits the
  harm of low-fee transaction floods and gives faster relay
  service to higher-fee transactions. The 7 sounds lower
  than it really is because received advertisements need
  not be sent, and because the aggregate rate is multiplied
  by the number of peers.
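A hedged sketch of the comparator and per-message cap; `TxInvCandidate` and the exact ordering keys are simplifications of the mempool-based sort:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Minimal stand-in for what the sort needs to know about a queued tx inv.
struct TxInvCandidate {
    std::string hash;     // txid (stand-in for uint256)
    int nAncestorDepth;   // topological depth: parents must be announced first
    int64_t nFeeRate;     // satoshis per kB
};

// Lower depth first (dependencies before dependents), then higher feerate,
// then hash as a deterministic tie-breaker.
bool CompareInvPriority(const TxInvCandidate& a, const TxInvCandidate& b)
{
    if (a.nAncestorDepth != b.nAncestorDepth) return a.nAncestorDepth < b.nAncestorDepth;
    if (a.nFeeRate != b.nFeeRate) return a.nFeeRate > b.nFeeRate;
    return a.hash < b.hash;
}

// Pick at most nMaxPerTrickle candidates for the next INV message.
std::vector<TxInvCandidate> SelectInvBatch(std::vector<TxInvCandidate> pending, size_t nMaxPerTrickle)
{
    std::sort(pending.begin(), pending.end(), CompareInvPriority);
    if (pending.size() > nMaxPerTrickle) pending.resize(nMaxPerTrickle);
    return pending;
}
```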
Break the circular dependency between main and txdb by:
- Moving `CBlockFileInfo` from `main.h` to `chain.h`. I think this makes
sense, as the other block-file stuff is there too.
- Moving `CDiskTxPos` from `main.h` to `txdb.h`. This type seems
specific to txdb.
- Pass a functor `insertBlockIndex` to `LoadBlockIndexGuts`. This leaves
it up to the caller how to insert block indices.
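A hedged sketch of the functor hand-off (with a simplified mapBlockIndex holding values rather than pointers):

```cpp
#include <functional>
#include <map>
#include <string>

using uint256 = std::string;                      // stand-in
struct CBlockIndex { uint256 hashBlock; /* ... */ };

// txdb side: only knows how to ask for an index entry, not how it is stored.
bool LoadBlockIndexGuts(std::function<CBlockIndex*(const uint256&)> insertBlockIndex)
{
    // For every record read from disk (elided), resolve its hash to an index object:
    uint256 someHash = "00ab...";                 // illustrative
    CBlockIndex* pindex = insertBlockIndex(someHash);
    return pindex != nullptr;
}

// main side: supplies the functor that inserts into its own mapBlockIndex.
std::map<uint256, CBlockIndex> mapBlockIndex;

CBlockIndex* InsertBlockIndex(const uint256& hash)
{
    auto it = mapBlockIndex.emplace(hash, CBlockIndex{hash}).first;
    return &it->second;
}

bool LoadBlockIndex()
{
    return LoadBlockIndexGuts(InsertBlockIndex);
}
```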
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
Currently, we're keeping a timeout for each requested block, starting
from when it is requested, with a correction factor for the number of
blocks in the queue.
That's unnecessarily complicated and inaccurate.
As peers process block requests in order, we can make the timeout for each
block start counting only when all previous ones have been received, and
have a correction based on the number of peers, rather than the total number
of blocks.
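A hedged sketch of the new rule; the constants and names here are illustrative, not the exact values used:

```cpp
#include <cstdint>
#include <deque>

// Illustrative constants: one base interval, plus half an interval per peer
// that is also downloading validated blocks.
static const double BLOCK_DOWNLOAD_TIMEOUT_BASE = 1.0;
static const double BLOCK_DOWNLOAD_TIMEOUT_PER_PEER = 0.5;

struct QueuedBlock { int64_t nTimeInFlight; };   // when this block became first in line

// Only the block at the front of a peer's in-flight queue can time out: peers
// serve getdata in order, so later blocks cannot arrive before earlier ones.
bool BlockDownloadTimedOut(const std::deque<QueuedBlock>& vBlocksInFlight,
                           int64_t nNowUsec, int64_t nBaseIntervalUsec,
                           int nPeersWithValidatedDownloads)
{
    if (vBlocksInFlight.empty()) return false;
    double nIntervals = BLOCK_DOWNLOAD_TIMEOUT_BASE +
                        BLOCK_DOWNLOAD_TIMEOUT_PER_PEER * nPeersWithValidatedDownloads;
    return vBlocksInFlight.front().nTimeInFlight <
           nNowUsec - static_cast<int64_t>(nBaseIntervalUsec * nIntervals);
}
```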
Two-line patch to make it possible to shut down bitcoind cleanly during
the initial ActivateBestChain.
Fixes #6459 (among other complaints).
To reproduce:
- shutdown bitcoind
- copy chainstate
- start bitcoind
- let the chain sync a bit
- shutdown bitcoind
- copy back old chainstate
- start bitcoind
- bitcoind will catch up with all blocks during Init()
(the `boost::this_thread::interruption_point` / `ShutdownRequested()`
dance is ugly, this should be refactored all over bitcoind at some point
when moving from boost::threads to c++11 threads, but it works...)
The "feefilter" p2p message is used to inform other nodes of your mempool min fee which is the feerate that any new transaction must meet to be accepted to your mempool. This will allow them to filter invs to you according to this feerate.
This implements caching of ancestor state for each mempool entry, similar to
descendant tracking, but also including caching of sigops-with-ancestors (as that
metric will be helpful to future code that implements better transaction
selection in CreateNewBlock).
The work limit served to prevent the descendant walking algorithm from doing
too much work by marking the parent transaction as dirty. However, to implement
ancestor tracking, it's not possible to similarly mark those descendant
transactions as dirty without having to calculate them to begin with.
This commit removes the work limit altogether. With appropriate
chain limits (-limitdescendantcount) the concern about doing too much
work inside this function should be mitigated.
Continues "Make logging for validation optional" from #6519.
The idea there was to remove all ERROR logging of rejected transactions,
and move it to a single message in the 'mempoolrej' debug class, which logs
the state message (and debug info). The superfluous ERRORs in the log
"terrify" users, see for example issue #5794.
Unfortunately a lot of new logging was introduced in #6871 (RBF) and
#7287 (misc refactoring). This pull updates that new code.
SequenceLocks functions are used to evaluate sequence lock times or heights per BIP 68.
The majority of this code is copied from maaku in #6312
Further credit: btcdrak, sipa, NicolasDorier
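For reference, a sketch of the per-input nSequence decoding defined by BIP 68 (constants as in the BIP; the surrounding lock aggregation is elided):

```cpp
#include <cstdint>

// Constants as specified in BIP 68.
static const uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = (1u << 31);
static const uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG    = (1u << 22);
static const uint32_t SEQUENCE_LOCKTIME_MASK         = 0x0000ffff;
static const int SEQUENCE_LOCKTIME_GRANULARITY       = 9;   // time locks are in units of 2^9 = 512 seconds

struct RelativeLock {
    bool fEnforced;    // false if the disable flag is set or the tx version is < 2
    bool fTimeBased;   // true: units of 512 seconds; false: blocks
    uint32_t nValue;   // the masked 16-bit lock value
};

// Decode one input's nSequence into its BIP 68 relative lock.
RelativeLock DecodeSequence(uint32_t nSequence, int32_t nTxVersion)
{
    RelativeLock lock{};
    if (nTxVersion < 2 || (nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG)) {
        lock.fEnforced = false;
        return lock;
    }
    lock.fEnforced = true;
    lock.fTimeBased = (nSequence & SEQUENCE_LOCKTIME_TYPE_FLAG) != 0;
    lock.nValue = nSequence & SEQUENCE_LOCKTIME_MASK;
    // A time-based lock requires waiting (nValue << SEQUENCE_LOCKTIME_GRANULARITY) seconds.
    return lock;
}
```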
fad6244 ATMP: make nAbsurdFee const (MarcoFalke)
fa762d0 [wallet.h] Remove main.h include (MarcoFalke)
fa79db2 Move maxTxFee out of mempool (MarcoFalke)
ProcessNewBlock would return failure early if CheckBlock failed, before
calling AcceptBlock. AcceptBlock also calls CheckBlock, and upon failure
would update mapBlockIndex to indicate that a block had failed. By returning
early in ProcessNewBlock, we were not marking blocks that fail a check in
CheckBlock as permanently failed, and thus would continue to re-request and
reprocess them.
Also renames whitelistalwaysrelay.
Nodes relay all transactions from whitelisted peers; this
gets in the way of some useful reasons for whitelisting
peers, for example bypassing bandwidth limitations.
The purpose of this forced relaying is for specialized gateway
applications where a node is being used as a P2P connection
filter and multiplexer, but where you don't want it getting
in the way of (re-)broadcast.
This change makes it configurable with whitelistforcerelay.
"permit" is currently used to configure transaction filtering, whereas replacement is more to do with the memory pool state than the transaction itself.
Add a configuration option `-permitrbf` to set transaction replacement policy
for the mempool.
Enabling it will enable (opt-in) RBF, disabling it will refuse all
conflicting transactions.
If a new transaction would cause limitfreerelay
to be exceeded, it should not be accepted
into the memory pool, and the byte counter
should be updated only after the fact.
After discussion in #7164 I think this is better.
Max tip age was introduced in #5987 to make it possible to run
testnet-in-a-box. But associating this behavior with the testnet chain
is wrong conceptually, as it is not needed in normal usage.
We should aim to make testnet test the software as-is.
Replace it with a (debug) option `-maxtipage`, which can be
specified only in that specific case.
We used to have a trickle node: a node chosen in each iteration of
the send loop that was privileged and allowed to send out queued-up
non-time-critical messages. Since the removal of the fixed sleeps in the
network code, this resulted in fast and attackable treatment of such broadcasts.
This pull request replaces the 3 remaining trickle use cases with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
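A minimal sketch of drawing randomized send times from an exponential distribution (a Poisson process), with illustrative averages and outbound peers getting half the inbound delay:

```cpp
#include <chrono>
#include <cstdint>
#include <random>

// Exponentially distributed gaps make the next flush moment unpredictable to
// an observer, unlike a fixed trickle schedule.
int64_t NextInvSendTimeUsec(int64_t nNowUsec, double avgIntervalSeconds)
{
    static std::mt19937_64 rng{std::random_device{}()};
    std::exponential_distribution<double> dist(1.0 / avgIntervalSeconds);
    return nNowUsec + static_cast<int64_t>(dist(rng) * 1e6);
}

int main()
{
    using namespace std::chrono;
    int64_t now = duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
    double inboundAvg = 5.0;                                          // illustrative average, in seconds
    int64_t nextInbound = NextInvSendTimeUsec(now, inboundAvg);
    int64_t nextOutbound = NextInvSendTimeUsec(now, inboundAvg / 2);  // halved delay for outbound
    (void)nextInbound; (void)nextOutbound;
}
```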
- Avoids string typos (by making the compiler check)
- Makes it easier to grep for handling/generation of a certain message type
- Allows referring directly to documentation by following the symbol in an IDE
- Move list of valid message types to protocol.cpp:
protocol.cpp is a more appropriate place for this, and having
the array there makes it easier to keep things consistent.
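A sketch of the pattern (a handful of message types only, to show the shape of the declaration/definition split and the validity table):

```cpp
// protocol.h (sketch): message type names as named constants, not string literals.
namespace NetMsgType {
extern const char* VERSION;
extern const char* VERACK;
extern const char* INV;
extern const char* GETHEADERS;
}

// protocol.cpp (sketch): the single definition point, next to the list of
// valid types so the two stay consistent in one file.
namespace NetMsgType {
const char* VERSION = "version";
const char* VERACK = "verack";
const char* INV = "inv";
const char* GETHEADERS = "getheaders";
}

static const char* const allNetMessageTypes[] = {
    NetMsgType::VERSION,
    NetMsgType::VERACK,
    NetMsgType::INV,
    NetMsgType::GETHEADERS,
};
```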
Mempool requests use a fair amount of bandwidth when the mempool is large;
disconnecting peers that use them follows the same logic as disconnecting
peers fetching historical blocks.
aa4b0c2 When not filtering blocks, getdata sends more in one test (Pieter Wuille)
d41e44c Actually only use filterInventoryKnown with MSG_TX inventory messages. (Gregory Maxwell)
b6a0da4 Only use filterInventoryKnown with MSG_TX inventory messages. (Patrick Strateman)
6b84935 Rename setInventoryKnown filterInventoryKnown (Patrick Strateman)
e206724 Remove mruset as it is no longer used. (Gregory Maxwell)
ec73ef3 Replace setInventoryKnown with a rolling bloom filter. (Gregory Maxwell)
One test in AcceptToMemoryPool was to compare a transaction's fee
against the value returned by GetMinRelayFee. This value was zero for
all small transactions. For larger transactions (between
DEFAULT_BLOCK_PRIORITY_SIZE and MAX_STANDARD_TX_SIZE), this function
was preventing low fee transactions from ever being accepted.
With this function removed, we will now allow transactions in that range
with fees (including modifications via PrioritiseTransaction) below
the minRelayTxFee, provided that they have sufficient priority.
Store the sum of legacy and P2SH sig op counts. This is calculated in AcceptToMemoryPool and storing it saves redoing the expensive calculation in block template creation.
But keep translating them in the GUI.
This necessarily requires duplication of a few messages.
Alternative take on #7134, that keeps the translations from being wiped.
Also document GetWarnings() input argument.
Fixes #5895.
ebb25f4 Limit setAskFor and retire requested entries only when a getdata returns. (Gregory Maxwell)
5029698 prevent peer flooding request queue for an inv (kazcw)
Mruset setInventoryKnown was reduced to a remarkably small 1000
entries as a side effect of sendbuffer size reductions in 2012.
This removes setInventoryKnown filtering from merkleBlock responses
because false positives there are especially unattractive and
also because I'm not sure if there aren't race conditions around
the relay pool that would cause some transactions there to
be suppressed. (Also, ProcessGetData was accessing
setInventoryKnown without taking the required lock.)
This replaces using inv messages to announce new blocks, when a peer requests
(via the new "sendheaders" message) that blocks be announced with headers
instead of inv's.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
For each 'bit' in the filter we really maintain 2 bits, which store either:
0: not set
1-3: set in generation N
After (nElements / 2) insertions, we switch to a new generation, and wipe
entries which already had the new generation number, effectively switching
from the last 1.5 * nElements set to the last 1.0 * nElements set.
This is 25% more space efficient than the previous implementation, and can
(at peak) store 1.5 times the requested amount of history (though only
1.0 times the requested history is guaranteed).
The existing unit tests should be sufficient.
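A simplified sketch of the generation mechanics, using a whole byte per slot and a single hash position per element (the real filter packs 2 bits per slot and hashes each element several times):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

class RollingFilterSketch {
    std::vector<uint8_t> data;          // 0 = not set, 1-3 = set in that generation
    uint8_t nGeneration = 1;
    uint32_t nInsertsThisGeneration = 0;
    uint32_t nInsertsPerGeneration;     // nElements / 2

public:
    RollingFilterSketch(uint32_t nElements, size_t nSlots)
        : data(nSlots, 0), nInsertsPerGeneration(nElements / 2) {}

    void insert(const std::string& key)
    {
        if (nInsertsThisGeneration == nInsertsPerGeneration) {
            // Switch generation and wipe slots already carrying the new number:
            // history rolls from the last 1.5*nElements inserts to the last 1.0*nElements.
            nInsertsThisGeneration = 0;
            nGeneration = (nGeneration % 3) + 1;    // cycle 1 -> 2 -> 3 -> 1
            for (uint8_t& slot : data)
                if (slot == nGeneration) slot = 0;
        }
        nInsertsThisGeneration++;
        data[std::hash<std::string>{}(key) % data.size()] = nGeneration;
    }

    bool contains(const std::string& key) const
    {
        return data[std::hash<std::string>{}(key) % data.size()] != 0;
    }
};
```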
This switches the Merkle tree logic for blocks to one that runs in constant (small) space.
The old code is moved to tests, and a new test is added that for various combinations of
block sizes, transaction positions to compute a branch for, and mutations:
* Verifies that the old code and new code agree for the Merkle root.
* Verifies that the old code and new code agree for the Merkle branch.
* Verifies that the computed Merkle branch is valid.
* Verifies that mutations don't change the Merkle root.
* Verifies that mutations are correctly detected.
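A hedged sketch of the constant-space accumulation, with a toy 64-bit combine function standing in for the double-SHA256 parent hash; the real code also handles branch computation and mutation detection:

```cpp
#include <cstdint>
#include <vector>

// Toy stand-in for hashing two concatenated child hashes; illustrative only.
static uint64_t Combine(uint64_t a, uint64_t b)
{
    uint64_t x = a ^ (b + 0x9e3779b97f4a7c15ULL + (a << 6) + (a >> 2));
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
    return x;
}

// inner[i] holds the pending subtree hash at level i; the set bits of 'count'
// say which levels currently hold one, so space stays O(log n).
uint64_t MerkleRoot(const std::vector<uint64_t>& leaves)
{
    if (leaves.empty()) return 0;
    std::vector<uint64_t> inner(32, 0);
    uint32_t count = 0;
    for (uint64_t h : leaves) {
        count++;
        int level = 0;
        // Every zero bit below the lowest set bit of 'count' marks a completed
        // pair that must be folded into the new leaf.
        while (!(count & (1u << level))) {
            h = Combine(inner[level], h);
            level++;
        }
        inner[level] = h;
    }
    // Final sweep over the rightmost branch: an odd subtree is hashed with
    // itself (the duplicate-last rule) to climb one level, then folded into
    // any pending subtrees above it.
    int level = 0;
    while (!(count & (1u << level))) level++;
    uint64_t h = inner[level];
    while (count != (1u << level)) {
        h = Combine(h, h);                    // duplicate the odd subtree
        count += (1u << level);
        level++;
        while (!(count & (1u << level))) {
            h = Combine(inner[level], h);
            level++;
        }
    }
    return h;
}
```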
This makes sure that retransmits by a whitelisted peer also actually
result in a retransmit.
Further, this changes the logic to never relay in case we would assign
a DoS score, as we expect to get DoS banned ourselves as a result.
Previously peers which implement a protocol version less than NO_BLOOM_VERSION
would not be disconnected for sending a filter command, regardless of the
peerbloomfilter option.
Many node operators do not wish to provide expensive bloom filtering for SPV
clients; previously they had to cherry-pick the commit which enabled the
disconnect logic.
The default should remain false until a sufficient percent of SPV clients
have updated.
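A hedged sketch of the gate described above; the enforcement flag, penalty values, and helper names are assumptions, while NO_BLOOM_VERSION follows BIP 111:

```cpp
// Illustrative per-peer state; the real code keys off CNode and Misbehaving().
struct PeerSketch {
    int nVersion;
    bool fDisconnect = false;
    int nMisbehavior = 0;
};

static const int NO_BLOOM_VERSION = 70011;

// Called when a filterload/filteradd/filterclear arrives and local bloom
// service is disabled: newer peers get a ban score, older (pre-BIP111) peers
// are disconnected only if the operator opted in (default off for now).
void HandleFilterCommand(PeerSketch& peer, bool fPeerBloomFilters, bool fEnforceDisconnect)
{
    if (fPeerBloomFilters)
        return;                          // bloom service offered; nothing to enforce
    if (peer.nVersion >= NO_BLOOM_VERSION)
        peer.nMisbehavior += 100;        // the peer should know better
    else if (fEnforceDisconnect)
        peer.fDisconnect = true;
}
```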