* Switch mapRelay to use shared_ptr<CTransaction>
* Switch the relay code to copy mempool shared_ptrs, rather than copying
the transaction itself.
* Change vRelayExpiration to store mapRelay iterators rather than hashes
(smaller and faster). These changes are sketched below.
Saves about 10% of application memory usage once the mempool warms up. Since the
mempool is DynamicUsage-regulated, this will translate to a larger mempool in
the same amount of space.
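A compile-standalone sketch of the resulting layout, using stand-in types where the real code would use Bitcoin Core's own (CTransaction, uint256); treat it as an illustration of the data structures, not the actual relay code:

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Stand-ins so this sketch builds on its own; the real code uses the project's
// CTransaction (primitives/transaction.h) and uint256 (uint256.h).
struct CTransaction {};
using uint256 = std::string;

// Relay entries share ownership of the transaction with the mempool via
// shared_ptr instead of holding their own full copy of the transaction.
using MapRelay = std::map<uint256, std::shared_ptr<const CTransaction>>;
MapRelay mapRelay;

// Expiration entries store an iterator into mapRelay rather than repeating the
// 32-byte txid; std::map iterators stay valid until their element is erased,
// so expiring is a direct erase with no second lookup.
std::deque<std::pair<int64_t, MapRelay::iterator>> vRelayExpiration;

// Drop relay entries whose deadline has passed.
void ExpireRelay(int64_t nNow)
{
    while (!vRelayExpiration.empty() && vRelayExpiration.front().first < nNow) {
        mapRelay.erase(vRelayExpiration.front().second);
        vRelayExpiration.pop_front();
    }
}
```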
Map value type: eliminate the vin index; no users of the map need to know which
input of the transaction is spending the prevout.
Map key type: replace the COutPoint with a pointer to a COutPoint. A COutPoint
is 36 bytes, but each COutPoint is accessible from the same map entry's value.
A trivial DereferencingComparator functor allows indirect map keys, but the
resulting syntax is misleading: `map.find(&outpoint)`. Implement an indirectmap
that acts as a wrapper around a map that uses a DereferencingComparator, supporting
a syntax that accurately reflects the container's semantics: inserts and
iterators use pointers, since they store pointers and need them to remain
constant and dereferenceable, while lookup functions take const references.
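A condensed sketch of the two pieces described above; the member set and signatures are trimmed, so treat it as illustrative rather than the exact class:

```cpp
#include <map>
#include <utility>

// Compare what the pointers point at, not the pointer values themselves.
template <typename T>
struct DereferencingComparator {
    bool operator()(const T a, const T b) const { return *a < *b; }
};

// indirectmap: stores pointer keys internally but exposes lookups that take
// const references, so call sites read as find(outpoint), not find(&outpoint).
template <class K, class T>
class indirectmap {
    typedef std::map<const K*, T, DereferencingComparator<const K*>> base;
    base m;

public:
    typedef typename base::iterator iterator;
    typedef typename base::const_iterator const_iterator;
    typedef typename base::size_type size_type;

    // Inserts take a pointer: the caller must keep the pointed-to key alive
    // and at a stable address (here, a COutPoint reachable from the value).
    std::pair<iterator, bool> insert(const K* key, const T& value) {
        return m.insert(std::make_pair(key, value));
    }

    // Lookups take references, reflecting that they only compare by value.
    iterator find(const K& key) { return m.find(&key); }
    const_iterator find(const K& key) const { return m.find(&key); }
    size_type count(const K& key) const { return m.count(&key); }
    size_type erase(const K& key) { return m.erase(&key); }
    size_type size() const { return m.size(); }
};
```

In the mempool this backs the prevout-to-spender index (mapNextTx), where each key pointer refers to a COutPoint stored inside the mapped transaction's inputs, which is why the pointed-to keys remain valid for the life of the entry.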
This reduces the rate of notfound responses by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe whether a node has a transaction before
it has been relayed.
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
b4d24e1 Report reindexing progress in GUI (Pieter Wuille)
d3d7547 Add -reindex-chainstate that does not rebuild block index (Pieter Wuille)
fb8fad1 Optimize ActivateBestChain for long chains (Pieter Wuille)
316623f Switch reindexing to AcceptBlock in-loop and ActivateBestChain afterwards (Pieter Wuille)
d253ec4 Make ProcessNewBlock dbp const and update comment (Pieter Wuille)
The current logic for syncing headers may lead to lots of duplicate
getheaders requests being sent: If a new block arrives while the node
is in headers sync, it will send getheaders in response to the block
announcement. When the headers arrive, the message will be of maximum
size, so a follow-up request will be sent, all of that in addition
to the existing headers sync. This will create a second "chain" of
getheaders requests. If more blocks arrive, this may even lead to
arbitrarily many parallel chains of redundant requests.
This patch changes the behaviour to only request more headers after a
maximum-sized message when it contained at least one unknown header.
This avoids sustaining parallel chains of redundant requests.
Note that this patch avoids the issues raised in the discussion of
https://github.com/bitcoin/bitcoin/pull/6821: There is no risk of the
node being permanently blocked. At the latest when a new block arrives
this will trigger a new getheaders request and restart syncing.
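Roughly, the new continuation condition in the headers handler looks like the following (a sketch using that era's identifiers; `hasNewHeaders` stands in for whatever the patch names its flag):

```cpp
// Before accepting the batch into the block index, remember whether any of
// the received headers was previously unknown to us.
bool hasNewHeaders = false;
for (const CBlockHeader& header : headers) {
    if (mapBlockIndex.count(header.GetHash()) == 0) {
        hasNewHeaders = true;
        break;
    }
}

// ... headers are validated and connected here ...

// Only keep extending this getheaders "chain" when a full-sized batch actually
// taught us something new; otherwise the existing sync (or the next block
// announcement) will continue it, so redundant parallel chains die out.
if (headers.size() == MAX_HEADERS_RESULTS && hasNewHeaders) {
    pfrom->PushMessage(NetMsgType::GETHEADERS,
                       chainActive.GetLocator(pindexLast), uint256());
}
```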
b559914 Move bloom and feerate filtering to just prior to tx sending. (Gregory Maxwell)
4578215 Return mempool queries in dependency order (Pieter Wuille)
ed70683 Handle mempool requests in send loop, subject to trickle (Pieter Wuille)
dc13dcd Split up and optimize transaction and block inv queues (Pieter Wuille)
f2d3ba7 Eliminate TX trickle bypass, sort TX invs for privacy and priority. (Gregory Maxwell)
ActivateBestChain uses chainActive after releasing the lock; reorder operations
to move all access to the synchronized object into the existing LOCK(cs_main) block.
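The locking pattern of the fix, as a standalone sketch (std::mutex stands in for cs_main and the LOCK macro, and the chain-state types are reduced to stubs):

```cpp
#include <mutex>

// Stubs for the sketch; in Bitcoin Core these are cs_main, CBlockIndex,
// chainActive and IsInitialBlockDownload().
std::mutex cs_main;
struct CBlockIndex {};
struct Chain { CBlockIndex tip; CBlockIndex* Tip() { return &tip; } } chainActive;
bool IsInitialBlockDownload() { return false; }
void NotifyTip(CBlockIndex*) {}

// Copy everything read from cs_main-protected state (the active chain, the
// IBD flag) while the lock is held, and act only on those copies afterwards.
void ActivateBestChainSketch()
{
    CBlockIndex* pindexNewTip = nullptr;
    bool fInitialDownload = false;
    {
        std::lock_guard<std::mutex> lock(cs_main);
        // ... connect the best chain step here ...
        pindexNewTip = chainActive.Tip();
        fInitialDownload = IsInitialBlockDownload();
    }
    // cs_main is released: no further access to chainActive below this point.
    if (!fInitialDownload) NotifyTip(pindexNewTip);
}
```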
This avoids sending more pointless INVs around filter updates, and
prevents using filter updates to timetag transactions.
Also adds locking for fRelayTxes.
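In the send loop, the per-transaction checks end up sitting immediately before the hash is queued into the outgoing INV, roughly like this (identifiers follow that era's naming; an illustrative fragment, not the exact diff):

```cpp
// Drain the peer's pending transaction announcements at trickle time.
for (const uint256& hash : vInvTx) {
    // Skip anything that is no longer in the mempool.
    auto txinfo = mempool.info(hash);
    if (!txinfo.tx) continue;
    // Feerate filter: honour the peer's most recent "feefilter" value.
    if (filterrate && txinfo.feeRate.GetFeePerK() < filterrate) continue;
    // Bloom filter: only announce what the peer's filter matches.
    if (pto->pfilter && !pto->pfilter->IsRelevantAndUpdate(*txinfo.tx)) continue;
    pto->filterInventoryKnown.insert(hash);
    vInv.push_back(CInv(MSG_TX, hash));
}
```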
By eliminating queued entries from the mempool response and responding only at
trickle time, the mempool no longer leaks transaction arrival order
information (as the mempool itself is also sorted), or at least no more than
relay itself leaks it.
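The queued `mempool` request is then answered from the same place, only when the trickle timer fires, and from a snapshot that is already in dependency order (again an illustrative fragment):

```cpp
// Serve a pending "mempool" request only at trickle time, from a topologically
// sorted snapshot, through the same bloom/feerate filters as normal relay.
if (fSendTrickle && pto->fSendMempool) {
    pto->fSendMempool = false;
    std::vector<TxMempoolInfo> vtxinfo = mempool.infoAll();  // dependency order
    for (const auto& txinfo : vtxinfo) {
        const uint256& hash = txinfo.tx->GetHash();
        pto->setInventoryTxToSend.erase(hash);  // covered by this response
        // (bloom and feerate checks as in the relay path go here)
        pto->filterInventoryKnown.insert(hash);
        vInv.push_back(CInv(MSG_TX, hash));
    }
}
```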
Previously we would assert that if every block in vBlockHashesToAnnounce is in
chainActive, then the blocks to be announced must connect. However, there are
edge cases where this assumption could be violated (e.g. using invalidateblock /
reconsiderblock), so just check for this case and revert to inv-announcement
instead.
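The check, roughly as it appears in the block-announcement code (sketch; comments condensed):

```cpp
// While walking the blocks queued for announcement via "headers":
// consecutive entries must connect; if they don't (possible after
// invalidateblock / reconsiderblock added the tip more than once),
// fall back to announcing with an inv instead of a headers message.
if (pBestIndex != nullptr && pindex->pprev != pBestIndex) {
    fRevertToInv = true;
    break;
}
pBestIndex = pindex;
```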
Previously Bitcoin would send 1/4 of transactions out to all peers
instantly. This caused high overhead because it made >80% of
INVs size 1, and it harmed privacy because it limited the
amount of source obscurity a transaction could receive.
These randomized broadcasts also disobeyed transaction dependencies
and required use of the orphan pool. Because the orphan pool is
so small, this led to poor propagation for dependent transactions.
When the bypass wasn't in effect, transactions were sent in the
order they were received. This avoided creating orphans but
undermined privacy fairly significantly.
This commit:

* Eliminates the bypass. The bypass is replaced by halving the
average delay for outbound peers.
* Sorts candidate transactions for INV by their topological
depth, then by their feerate, then hash (sketched below), removing the
information leakage and providing priority service to
higher-fee transactions.
* Limits the number of transactions sent in a single INV to
7 tx/sec (and twice that for outbound); this limits the
harm of low-fee transaction floods and gives faster relay
service to higher-fee transactions. The 7 sounds lower
than it really is because received advertisements need
not be sent, and because the aggregate rate is multiplied
by the number of peers.
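Roughly how the ordering and the rate cap are expressed (a sketch: the comparator is used to heapify the peer's pending announcements, and the constants are the ones implied by the numbers above):

```cpp
// Ordering: fewest unconfirmed ancestors (topological depth) first, then
// higher feerate, then hash; CompareDepthAndScore is the mempool helper
// introduced for this purpose.
struct CompareInvMempoolOrder {
    CTxMemPool* mp;
    explicit CompareInvMempoolOrder(CTxMemPool* _mempool) : mp(_mempool) {}
    bool operator()(std::set<uint256>::iterator a, std::set<uint256>::iterator b)
    {
        // std::make_heap surfaces the "largest" element first, so the
        // comparison is inverted: what should be sent first compares greatest.
        return mp->CompareDepthAndScore(*b, *a);
    }
};

// Rate cap: stop queueing announcements once this trickle's budget is spent.
// Outbound peers get roughly twice the effective rate because their average
// trickle delay is halved, not because this constant changes.
static const unsigned int INVENTORY_BROADCAST_INTERVAL = 5;  // avg seconds between trickles
static const unsigned int INVENTORY_BROADCAST_MAX = 7 * INVENTORY_BROADCAST_INTERVAL;
```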