ProcessNewBlock would return failure early if CheckBlock failed, before
calling AcceptBlock. AcceptBlock also calls CheckBlock, and on failure
updates mapBlockIndex to mark the block as failed. By returning early in
ProcessNewBlock, we were not marking blocks that fail a check in
CheckBlock as permanently failed, and so would continue to re-request
and reprocess them.
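Below is a simplified, self-contained model of that control flow (hypothetical types and names, not the actual Core code) showing why skipping AcceptBlock leaves the failure unrecorded:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical stand-ins for the real data structures.
struct BlockStatus { bool failed = false; };
static std::unordered_map<std::string, BlockStatus> mapBlockIndex;

// Context-free sanity checks (proof of work, merkle root, ...); stubbed here.
static bool CheckBlock(const std::string& /*hash*/) { return true; }

// AcceptBlock re-runs CheckBlock and, on failure, records it on the index so
// the block is treated as permanently invalid and never re-requested.
static bool AcceptBlock(const std::string& hash) {
    if (!CheckBlock(hash)) {
        mapBlockIndex[hash].failed = true;  // record permanent failure
        return false;
    }
    // ... contextual checks, store the block ...
    return true;
}

static bool ProcessNewBlock(const std::string& hash) {
    // Buggy pattern: returning here on CheckBlock failure skips AcceptBlock,
    // so the failure above is never recorded and the block keeps being
    // re-requested:
    //     if (!CheckBlock(hash)) return false;
    return AcceptBlock(hash);  // let AcceptBlock run the check and record failure
}
```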
Also renames whitelistalwaysrelay.
Nodes relay all transactions from whitelisted peers. This gets in the
way of some useful reasons for whitelisting peers, for example
bypassing bandwidth limitations.
The purpose of this forced relaying is for specialized gateway
applications where a node is being used as a P2P connection
filter and multiplexer, but where you don't want it getting
in the way of (re-)broadcast.
This change makes it configurable with whitelistforcerelay.
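A minimal sketch of the relay decision this option controls, using hypothetical names (the real code paths in Core are more involved):

```cpp
// Hypothetical, simplified model of forced relaying for whitelisted peers.
struct Peer { bool whitelisted = false; };

// A transaction accepted into the mempool is relayed as usual. One that was
// rejected (conflict, low fee, ...) is relayed anyway only when it came from a
// whitelisted peer *and* the operator enabled the forced-relay behaviour.
bool ShouldRelay(bool acceptedToMempool, const Peer& from, bool whitelistForceRelay) {
    if (acceptedToMempool) return true;
    return from.whitelisted && whitelistForceRelay;
}
```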
"permit" is currently used to configure transaction filtering, whereas replacement is more to do with the memory pool state than the transaction itself.
Add a configuration option `-permitrbf` to set the transaction replacement
policy for the mempool.
Enabling it enables (opt-in) RBF; disabling it refuses all
conflicting transactions.
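For reference, a self-contained sketch of the opt-in signalling rule from BIP 125 and how a `-permitrbf`-style switch could gate it (illustrative names, not the actual mempool code):

```cpp
#include <cstdint>
#include <vector>

struct TxIn { uint32_t nSequence; };
struct Tx { std::vector<TxIn> vin; };

// BIP 125: a transaction signals that it may be replaced if at least one of
// its inputs has nSequence below 0xfffffffe.
bool SignalsOptInRBF(const Tx& tx) {
    for (const TxIn& in : tx.vin) {
        if (in.nSequence < 0xfffffffeU) return true;
    }
    return false;
}

// With replacement disabled, every conflicting transaction is refused; with it
// enabled, only conflicts whose in-mempool counterpart signalled opt-in RBF
// may proceed to the further replacement checks (fee rules etc.).
bool MayConsiderReplacement(const Tx& existingMempoolTx, bool permitrbf) {
    return permitrbf && SignalsOptInRBF(existingMempoolTx);
}
```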
If a new transaction would cause limitfreerelay
to be exceeded, it should not be accepted
into the memory pool, and the byte counter
should only be updated once the transaction
has actually been accepted.
After discussion in #7164 I think this is better.
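A self-contained sketch of that ordering, loosely modelled on the decaying free-relay byte counter (class and constant names are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <ctime>

// Exponentially decaying counter of "free" transaction bytes. The check runs
// first; the counter is charged only if the transaction is accepted.
class FreeRelayLimiter {
public:
    explicit FreeRelayLimiter(double limitBytes) : limitBytes_(limitBytes) {}

    bool TryAccept(std::size_t txBytes, std::time_t now) {
        // Decay with roughly a 10-minute time constant (Core's counter uses a
        // similar exponential decay).
        count_ *= std::pow(1.0 - 1.0 / 600.0, double(now - last_));
        last_ = now;
        if (count_ + txBytes >= limitBytes_) {
            return false;        // would exceed the limit: reject, do not charge
        }
        count_ += txBytes;       // charge only after the fact, i.e. on acceptance
        return true;
    }

private:
    double limitBytes_;
    double count_ = 0.0;
    std::time_t last_ = 0;
};
```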
Max tip age was introduced in #5987 to make it possible to run
testnet-in-a-box. But associating this behavior with the testnet chain
is conceptually wrong, as it is not needed in normal usage; we should
aim to make testnet test the software as-is.
Replace it with a (debug) option `-maxtipage`, which can be specified
explicitly in the cases where this behavior is needed.
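A minimal illustration of what the tip-age threshold gates (simplified; the 24-hour default shown here is an assumption, and the real check is part of the initial-block-download logic):

```cpp
#include <cstdint>
#include <ctime>

// Assumed default of 24 hours; overridable via a -maxtipage style option.
static const int64_t DEFAULT_MAX_TIP_AGE = 24 * 60 * 60;

// A node whose best block is older than the max tip age considers itself to
// still be syncing and behaves accordingly. Lowering the value is only useful
// for test setups such as testnet-in-a-box, where blocks carry old timestamps.
bool TipIsTooOld(int64_t bestBlockTime, int64_t maxTipAge = DEFAULT_MAX_TIP_AGE) {
    return bestBlockTime < std::time(nullptr) - maxTipAge;
}
```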
We used to have a trickle node: a node chosen in each iteration of
the send loop that was privileged and allowed to send out queued-up
non-time-critical messages. Since the removal of the fixed sleeps in the
network code, this resulted in fast but attackable treatment of such broadcasts.
This pull request replaces the 3 remaining trickle use cases with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
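A small sketch of the per-peer randomized scheduling idea (hypothetical helper, not the actual implementation):

```cpp
#include <chrono>
#include <random>

// Instead of one globally chosen trickle node, give every peer its own
// randomly jittered "next send" time for queued invs/addrs, so the timing of
// a broadcast can no longer be predicted or forced by an attacker.
using Clock = std::chrono::steady_clock;

// averageInterval must be at least one second.
Clock::time_point NextRelaySendTime(Clock::time_point now,
                                    std::chrono::seconds averageInterval,
                                    std::mt19937_64& rng) {
    // Uniform jitter in [0, 2 * averageInterval): the mean matches the
    // configured interval while individual delays stay unpredictable.
    std::uniform_int_distribution<int64_t> jitter(0, 2 * averageInterval.count() - 1);
    return now + std::chrono::duration_cast<Clock::duration>(
                     std::chrono::seconds(jitter(rng)));
}

// Send loop sketch: flush a peer's queued transaction invs only once
// Clock::now() >= that peer's scheduled time, then schedule the next one.
```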
- Avoids string typos (by making the compiler check)
- Makes it easier to grep for handling/generation of a certain message type
- Makes it possible to go directly to the documentation by following the symbol in an IDE
- Move list of valid message types to protocol.cpp:
protocol.cpp is a more appropriate place for this, and having
the array there makes it easier to keep things consistent (a minimal
sketch of the pattern follows).
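An abbreviated sketch of that pattern (only a few message types shown, with the header and source parts condensed into one block):

```cpp
// protocol.h: one named constant per message type, so the compiler catches
// typos and an IDE can jump straight to the symbol and its documentation.
namespace NetMsgType {
extern const char* INV;
extern const char* TX;
extern const char* BLOCK;
} // namespace NetMsgType

// protocol.cpp: the single place that defines the strings and the list of
// all valid message types, keeping the two consistent.
namespace NetMsgType {
const char* INV = "inv";
const char* TX = "tx";
const char* BLOCK = "block";
} // namespace NetMsgType

static const char* allNetMessageTypes[] = {
    NetMsgType::INV,
    NetMsgType::TX,
    NetMsgType::BLOCK,
};
```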
Mempool requests use a fair amount of bandwidth when the mempool is large;
disconnecting peers that use them follows the same logic as disconnecting
peers fetching historical blocks.
aa4b0c2 When not filtering blocks, getdata sends more in one test (Pieter Wuille)
d41e44c Actually only use filterInventoryKnown with MSG_TX inventory messages. (Gregory Maxwell)
b6a0da4 Only use filterInventoryKnown with MSG_TX inventory messages. (Patrick Strateman)
6b84935 Rename setInventoryKnown filterInventoryKnown (Patrick Strateman)
e206724 Remove mruset as it is no longer used. (Gregory Maxwell)
ec73ef3 Replace setInventoryKnown with a rolling bloom filter. (Gregory Maxwell)
One test in AcceptToMemoryPool was to compare a transaction's fee
against the value returned by GetMinRelayFee. This value was zero for
all small transactions. For larger transactions (between
DEFAULT_BLOCK_PRIORITY_SIZE and MAX_STANDARD_TX_SIZE), this function
was preventing low fee transactions from ever being accepted.
With this function removed, we will now allow transactions in that range
with fees (including modifications via PrioritiseTransaction) below
the minRelayTxFee, provided that they have sufficient priority.
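A rough, illustrative model of the resulting acceptance rule for transactions in that size range (names simplified; the real AcceptToMemoryPool applies many more checks):

```cpp
#include <cstdint>

// A transaction is acceptable if it pays at least the minimum relay fee for
// its size, or, failing that, if its coin-age priority is high enough.
// The fee may include deltas applied via PrioritiseTransaction.
bool PassesFeeOrPriority(int64_t modifiedFee, int64_t minRelayFeeForSize,
                         double priority, double priorityThreshold) {
    if (modifiedFee >= minRelayFeeForSize) return true;
    return priority >= priorityThreshold;  // low fee, but sufficient priority
}
```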
Store the sum of legacy and P2SH sig op counts. This is calculated in AcceptToMemoryPool, and storing it saves redoing the expensive calculation during block template creation.
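A minimal sketch of the caching idea (illustrative struct, not the actual CTxMemPoolEntry definition):

```cpp
#include <cstdint>

// Cache the sigop count computed once during mempool acceptance so that block
// template creation can reuse it instead of re-walking every input script.
struct MemPoolEntrySketch {
    int64_t fee = 0;
    unsigned int txSize = 0;
    unsigned int sigOpCount = 0;  // legacy + P2SH sigops, set in AcceptToMemoryPool
};

// During block template assembly, just read the cached value.
unsigned int SigOpsForTemplate(const MemPoolEntrySketch& entry) {
    return entry.sigOpCount;
}
```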