6182d10 Do not increment nAttempts by more than one for every Good connection. (Gregory Maxwell)
c769c4a Avoid counting failed connect attempts when probably offline. (Gregory Maxwell)
This reduces the rate of notfounds by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe for a node having a transaction
before it has been relayed.
If a node is offline, failed outbound connection attempts will crank up
the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent
connections -- though there should be no real effect from this
unless their addnode configuration is later removed.
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
5d5e7a0 net: No need to export ConnectNode (Cory Fields)
e9ed620 net: No need to export DumpBanlist (Cory Fields)
8b8f877 net: make Ban/Unban/ClearBan functionality consistent (Cory Fields)
cca221f net: Drop CNodeRef for AttemptToEvictConnection (Cory Fields)
563f375 net: use the exposed GetNodeSignals() rather than g_signals directly (Cory Fields)
9faa490 net: remove unused set (Cory Fields)
52cbce2 net: don't import std namespace (Cory Fields)
1475ecf Fix de-serialization bug where AddrMan is corrupted after exception (EthanHeilman)
* CAddrDB modified so that when de-serialization code throws an exception Addrman is reset to a clean state
* CAddrDB modified to make unit tests possible
* Regression test created to ensure the bug is fixed
* StartNode modified to clear addrman if CAddrDB::Read returns an error code.
This avoids sending more pointless INVs around updates, and
prevents using filter updates to timetag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, this makes the mempool no longer leak transaction arrival order
information (as the mempool itself is also sorted) -- at least no more than
relay itself leaks it.
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
The "feefilter" p2p message is used to inform other nodes of your mempool min fee, which is the feerate that any new transaction must meet to be accepted to your mempool. This allows them to filter invs to you according to this feerate.
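As a minimal sketch of the sender-side check this enables (the `Peer` and `TxToAnnounce` types below are hypothetical stand-ins, not the real Bitcoin Core structures):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-ins for illustration; not the actual Bitcoin Core types.
struct TxToAnnounce {
    int64_t nFeePaid;   // fees paid, in satoshis
    size_t nSize;       // serialized size in bytes
};

struct Peer {
    int64_t nMinFeeFilter = 0;  // satoshis per 1000 bytes, from the peer's "feefilter"
};

// Only queue an inv for this peer if the transaction's feerate meets the
// minimum the peer advertised via "feefilter".
bool ShouldAnnounce(const Peer& peer, const TxToAnnounce& tx) {
    const int64_t feePerK = tx.nFeePaid * 1000 / static_cast<int64_t>(tx.nSize);
    return feePerK >= peer.nMinFeeFilter;
}
```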
We used to have a trickle node: a node chosen in each iteration of the
send loop that was privileged and allowed to send out queued-up
non-time-critical messages. Since the removal of the fixed sleeps in the
network code, this resulted in fast and attackable treatment of such
broadcasts.
This pull request replaces the 3 remaining trickle use cases with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
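A sketch of how such a random delay might be scheduled per peer; the exponential distribution and the helper name are illustrative assumptions, not lifted from the pull request:

```cpp
#include <chrono>
#include <random>

// Pick the next time a peer's queued announcements are flushed, using an
// exponentially distributed (Poisson-style) delay instead of a fixed trickle
// turn. The average delay is a tunable parameter, not a value from the source.
std::chrono::steady_clock::time_point NextAnnouncementTime(
    std::chrono::steady_clock::time_point now,
    double average_delay_seconds,
    std::mt19937_64& rng)
{
    std::exponential_distribution<double> dist(1.0 / average_delay_seconds);
    const std::chrono::duration<double> delay(dist(rng));
    return now + std::chrono::duration_cast<std::chrono::steady_clock::duration>(delay);
}
```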
aa4b0c2 When not filtering blocks, getdata sends more in one test (Pieter Wuille)
d41e44c Actually only use filterInventoryKnown with MSG_TX inventory messages. (Gregory Maxwell)
b6a0da4 Only use filterInventoryKnown with MSG_TX inventory messages. (Patrick Strateman)
6b84935 Rename setInventoryKnown filterInventoryKnown (Patrick Strateman)
e206724 Remove mruset as it is no longer used. (Gregory Maxwell)
ec73ef3 Replace setInventoryKnown with a rolling bloom filter. (Gregory Maxwell)
ebb25f4 Limit setAskFor and retire requested entries only when a getdata returns. (Gregory Maxwell)
5029698 prevent peer flooding request queue for an inv (kazcw)
Mruset setInventoryKnown was reduced to a remarkably small 1000
entries as a side effect of sendbuffer size reductions in 2012.
This removes setInventoryKnown filtering from merkleBlock responses
because false positives there are especially unattractive and
also because I'm not sure if there aren't race conditions around
the relay pool that would cause some transactions there to
be suppressed. (Also, ProcessGetData was accessing
setInventoryKnown without taking the required lock.)
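For illustration, a heavily simplified sketch of the "rolling" idea behind the replacement (this is not CRollingBloomFilter): two fixed-size probabilistic filters are alternated so old entries age out while recent ones stay queryable.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch only. The real filter hashes fixed-size data and tunes
// its bit count to a target false-positive rate; the string-hash shortcut
// here just keeps the example short.
class RollingFilterSketch {
public:
    RollingFilterSketch(size_t num_bits, size_t max_entries)
        : bits_(num_bits), max_entries_(max_entries),
          filters_(2, std::vector<bool>(num_bits, false)) {}

    void insert(const std::string& key) {
        if (count_ == max_entries_) {
            active_ = 1 - active_;                        // roll over to the older filter
            std::fill(filters_[active_].begin(), filters_[active_].end(), false);
            count_ = 0;
        }
        for (uint32_t seed = 0; seed < kHashes; ++seed)
            filters_[active_][Slot(key, seed)] = true;
        ++count_;
    }

    bool contains(const std::string& key) const {
        for (const auto& f : filters_) {
            bool hit = true;
            for (uint32_t seed = 0; seed < kHashes && hit; ++seed)
                hit = f[Slot(key, seed)];
            if (hit) return true;                         // may be a false positive
        }
        return false;
    }

private:
    static constexpr uint32_t kHashes = 3;
    size_t Slot(const std::string& key, uint32_t seed) const {
        return std::hash<std::string>{}(key + char('A' + seed)) % bits_;
    }
    size_t bits_, max_entries_, count_ = 0, active_ = 0;
    std::vector<std::vector<bool>> filters_;
};
```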
This replaces using inv messages to announce new blocks, when a peer requests
(via the new "sendheaders" message) that blocks be announced with headers
instead of inv's.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
The setAskFor duplicate elimination was too eager and removed entries
when we still had no getdata response, allowing the peer to keep
INVing and not responding.
mapAlreadyAskedFor does not keep track of which peer has a request queued for a
particular tx. As a result, a peer can blind a node to a tx indefinitely by
sending many invs for the same tx, and then never replying to getdatas for it.
Each inv received will be placed 2 minutes farther back in mapAlreadyAskedFor,
so a short message containing 10 invs would render that tx unavailable for 20
minutes.
This is fixed by disallowing a peer from having more than one entry for a
particular inv in mapAlreadyAskedFor at a time.
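A sketch of the core of the fix, with illustrative names and types rather than the actual data structures: track which (tx, peer) pairs already have a queued request, so repeated invs from one peer cannot push the same request further and further back.

```cpp
#include <cstdint>
#include <set>
#include <utility>

using NodeId = int64_t;
using TxId = uint64_t;   // stand-in for a 256-bit txid

class RequestTracker {
public:
    // Returns true if this inv should be scheduled; false if this peer already
    // has an outstanding request queued for this tx.
    bool AddInv(NodeId peer, TxId tx) {
        return queued_.insert({tx, peer}).second;
    }

    // Called when the getdata is answered (or times out), so the peer may
    // announce the tx again later.
    void Forget(NodeId peer, TxId tx) {
        queued_.erase({tx, peer});
    }

private:
    std::set<std::pair<TxId, NodeId>> queued_;
};
```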
- Force AUTHCOOKIE size to be 32 bytes: This provides protection against
an attack where a process pretends to be Tor and uses the cookie
authentication method to nab arbitrary files such as the
wallet
- torcontrol logging
- fix cookie auth
- add HASHEDPASSWORD auth, fix fd leak when fwrite() fails
- better error reporting when cookie file is not ok
- better init/shutdown flow
- stop advertising service when disconnected from tor control port
- COOKIE->SAFECOOKIE auth
* -maxuploadtarget can be set in MiB
* if <limit> - ( time-left-in-24h-cycle / 600 * MAX_BLOCK_SIZE ) has been reached, stop serving blocks older than one week and filtered blocks (see the sketch below)
* no action if the limit has been reached; there is no guarantee that the target will not be surpassed
* add outbound limit information to rpc getnettotals
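A sketch of the cutoff arithmetic in the second bullet above; names and types are illustrative:

```cpp
#include <cstdint>

// The node keeps enough of its 24h upload budget to serve newly found blocks
// (one MAX_BLOCK_SIZE block per ~600 seconds) and stops serving week-old or
// filtered blocks once the rest of the budget has been used up.
bool ShouldServeHistoricalBlocks(uint64_t upload_target_bytes,
                                 uint64_t bytes_sent_this_cycle,
                                 uint64_t seconds_left_in_cycle,
                                 uint64_t max_block_size)
{
    const uint64_t reserved_for_new_blocks = seconds_left_in_cycle / 600 * max_block_size;
    if (reserved_for_new_blocks >= upload_target_bytes) return false;
    return bytes_sent_this_cycle < upload_target_bytes - reserved_for_new_blocks;
}
```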
7b79cbd limit total length of user agent comments (Pavol Rusnak)
557f8ea implement uacomment config parameter which can add comments to user agent as per BIP-0014 (Pavol Rusnak)
This sets aside a number of connection slots for whitelisted peers,
useful for ensuring your local users and miners can always get in,
even if your limit on inbound connections has already been reached.
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer -- worst case if the bloom filter happens
to be as full as it gets, 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established) and then there is no per-item-remembered memory
allocation.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
This is a simplified re-do of closed pull #3088.
This patch eliminates the use of centralized web services for
discovering the node's addresses for advertisement, which was
problematic for both privacy and reliability.
The Bitcoin protocol already allows your peers to tell you what
IP they think you have, but this data isn't trustworthy since
they could lie. So the challenge is using it without creating a
DOS vector.
To accomplish this we adopt an approach similar to the one used
by P2Pool: If we're announcing and don't have a better address
discovered (e.g. via UPNP) or configured, we just announce to
each peer the address that peer told us. Since peers could
already replace, forge, or drop our address messages, this cannot
create a new vulnerability... but if even one of our peers is
giving us a good address we'll eventually make a useful
advertisement.
We also may randomly use the peer-provided address for the
daily rebroadcast even if we otherwise have a seemingly routable
address, just in case we've been misconfigured (e.g. by UPNP).
To avoid privacy problems, we only do these things if discovery
is enabled.
Many changes:
* Do not use 'getblocks', but 'getheaders', and use it to build a headers tree.
* Blocks are fetched in parallel from all available outbound peers, using a
limited moving window. When one peer stalls the movement of the window, it is
disconnected.
* No more orphan blocks. At all. We only ever request a block for which we have
verified the headers, and store it to disk immediately. This means that a
disk-fill attack would require PoW.
* Require protocol version 31800 for every peer (released in December 2010).
* No more syncnode (we sync from everyone we can, though limited to 1 during
initial *headers* sync).
* Introduce some extra named constants, comments and asserts.
- ensures a consistent usage in header files
- also add a blank line after the copyright header where missing
- also remove orphan new-lines at the end of some files
Split up util.cpp/h into:
- string utilities (hex, base32, base64): no internal dependencies, no dependency on boost (apart from foreach)
- money utilities (parsemoney, formatmoney)
- time utilities (gettime*, sleep, format date)
- and the rest (logging, argument parsing, config file parsing)
The latter is basically the environment and OS handling,
and is stripped of all utility functions, so we may want to
rename it to something other than util.cpp/h for clarity (Matt suggested
osinterface).
Breaks dependency of sha256.cpp on all the things pulled in by util.
aa82795 Add detailed network info to getnetworkinfo RPC (Wladimir J. van der Laan)
075cf49 Add GetNetworkName function (Wladimir J. van der Laan)
c91a947 Add IsReachable(net) function (Wladimir J. van der Laan)
60dc8e4 Allow -onlynet=onion to be used (Wladimir J. van der Laan)
Simpler alternative to #4348.
The current setup with closesocket() is strange. It poses
as a compatibility wrapper but adds functionality.
Rename it and make it a documented utility function in netbase.
Code movement only, zero effect on the functionality.
4eedf4f make RandAddSeed() use OPENSSL_cleanse() (Philip Kaufmann)
6354935 move rand functions from util to new random.h/.cpp (Philip Kaufmann)
001a53d add GetRandBytes() as wrapper for RAND_bytes() (Philip Kaufmann)
This adds a -whitelist option to specify subnet ranges from which peers
that connect are whitelisted. In addition, there is a -whitebind option
which works like -bind, except peers connecting to it are also
whitelisted (allowing a separate listen port for trusted connections).
Being whitelisted has two effects (for now):
* They are immune to DoS disconnection/banning.
* Transactions they broadcast (which are valid) are always relayed,
even if they were already in the mempool. This means that a node
can function as a gateway for a local network, and that rebroadcasts
from the local network will work as expected.
Whitelisting replaces the magic exemption localhost had for DoS
disconnection (local addresses are still never banned, though), which
implied hidden service connects (from a localhost Tor node) were
incorrectly immune to DoS disconnection as well. This old
behaviour is removed for that reason, but can be restored by
specifying -whitelist=127.0.0.1 or -whitelist=::1. -whitebind
is safer to use in case non-trusted localhost connections are expected
(like hidden services).
- remove an unneeded else in ConnectNode()
- make 0 a double and change to 0.0 in ConnectNode()
- rename strDest to pszDest in OpenNetworkConnection()
- remove an unneeded call to our REF() macro in BindListenPort()
- small style cleanups and removal of unneeded new-lines
- add DEFAULT_LISTEN in net.h and use in the code (shared
setting between core and GUI)
Important: This makes it obvious that we need to re-think the
settings/options handling, as GUI settings are processed before
any parameter-interaction (which is mostly important for network
stuff) in AppInit2()!
... instead of after 30 minutes of no sending, for latency measurement
and keep-alive. Also, disconnect if no reply arrives within 20 minutes,
instead of 90 minutes of inactivity (for peers supporting the 'pong' message).
f0a83fc Use Params().NetworkID() instead of TestNet() from the payment protocol (jtimon)
2871889 net.h was using std namespace through chainparams.h included in protocol.h (jtimon)
c8c52de Replace virtual methods with static attributes, chainparams.h depends on protocol.h instead of the other way around (jtimon)
a3d946e Get rid of TestNet() (jtimon)
6fc0fa6 Add RPCisTestNet chain parameter (jtimon)
cfeb823 Add RequireStandard chain parameter (jtimon)
21913a9 Add AllowMinDifficultyBlocks chain parameter (jtimon)
d754f34 Move majority constants to chainparams (jtimon)
8d26721 Get rid of RegTest() (jtimon)
cb9bd83 Add DefaultCheckMemPool chain parameter (jtimon)
2595b9a Add DefaultMinerThreads chain parameter (jtimon)
bfa9a1a Add MineBlocksOnDemand chain parameter (jtimon)
1712adb Add MiningRequiresPeers chain parameter (jtimon)
Adds two new info query commands that take over information from
hodge-podge `getinfo`.
Also some new information is added:
- `getblockchaininfo`
- `chain`: (string) current chain (main, testnet3, regtest)
- `verificationprogress`: (numeric) estimated verification progress
- `chainwork`
- `getnetworkinfo`
- `localaddresses`: (array) local addresses, from mapLocalHost (fixes #1734)
Amend to d5f1e72. It turns out that BerkeleyDB was including inttypes.h
indirectly, so we cannot fix this with just macros.
Trivial commit: apply the following script to all .cpp and .h files:
# Middle
sed -i 's/"PRIx64"/x/g' "$1"
sed -i 's/"PRIu64"/u/g' "$1"
sed -i 's/"PRId64"/d/g' "$1"
# Initial
sed -i 's/PRIx64"/"x/g' "$1"
sed -i 's/PRIu64"/"u/g' "$1"
sed -i 's/PRId64"/"d/g' "$1"
# Trailing
sed -i 's/"PRIx64/x"/g' "$1"
sed -i 's/"PRIu64/u"/g' "$1"
sed -i 's/"PRId64/d"/g' "$1"
After this commit, `git grep` for PRI.64 should turn up nothing except
the defines in util.h.
As the tinyformat-based formatting system (introduced in b77dfdc) is
type-safe, no special format characters are needed to specify sizes.
Tinyformat can support (ignore) the C99 prefixes such as "ll" but
chokes on the prefixes defined in MSVC's inttypes.h, such as "I64X". So don't
include inttypes.h and define our own for compatibility.
(an alternative would be to sweep the entire codebase using sed -i to
get rid of the size specifiers but this has less diff impact)
Add a contrib/devtools/fix-copyright-headers.py script to be able to perform this maintenance task with ease during the rest of the year, every year. Modify contrib/devtools/README.md to document what fix-copyright-headers.py does.
Keep track of which block is being requested (and to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
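A sketch of what per-peer in-flight tracking with stall detection can look like; the limit and timeout constants are assumptions for illustration, not the actual values:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <utility>

// Illustrative names, not the actual code: cap how many blocks one peer may
// have outstanding and remember when each request was made, so a stalling
// peer can be detected and disconnected.
using BlockHash = uint64_t;                               // stand-in for a 256-bit hash
constexpr size_t MAX_BLOCKS_IN_FLIGHT_PER_PEER = 16;      // assumed limit
constexpr std::chrono::seconds STALL_TIMEOUT{120};        // assumed timeout

struct PeerDownloadState {
    std::deque<std::pair<BlockHash, std::chrono::steady_clock::time_point>> in_flight;
};

bool CanRequestBlock(const PeerDownloadState& peer) {
    return peer.in_flight.size() < MAX_BLOCKS_IN_FLIGHT_PER_PEER;
}

bool IsStalling(const PeerDownloadState& peer,
                std::chrono::steady_clock::time_point now) {
    return !peer.in_flight.empty() &&
           now - peer.in_flight.front().second > STALL_TIMEOUT;
}
```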
This was a leftover from the times in which
peers.dat depended on BDB.
Other functions in db.cpp still depend on BerkeleyDB;
to be able to compile without BDB, this (small)
functionality needs to be moved to another file.
Use miscellaneous methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
INIT_PROTO_VERSION is the initial version; after a successful version/verack it is increased to a negotiated version.
MIN_PEER_PROTO_VERSION can be set to a different value to disconnect from peers older than a specified version.
The existing CNode::addrLocal member is revealed to the user,
as an address string, similar to the existing "addr" field.
Instead of showing garbage or an empty string,
it simply will not appear in the output if the local address is not yet known.
New RPC "ping" command to request ping.
Implemented "pong" message handler.
New "pingtime" field in getpeerinfo, to provide results to user.
New "pingwait" field, to show pings still in flight, to better see newly lagging peers.
This reduces a peer's ability to attack network resources by
using a full bloom filter, but without reducing the usability
of bloom filters. It sets a default match everything filter
for peers and it generalizes a prior optimization to
cover more cases.
Added explicit include of main.h in init.cpp, changed include of init.h to include of main.h in net.cpp.
Added function registration for net.cpp in init.cpp's network initialization.
Removed protocol.cpp's dependency on main.h.
TODO: Remove main.h include in net.cpp.
This introduces the concept of the 'sync node', which is the one we
asked for missing blocks. In case the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
It seems there were two mechanisms for assessing whether a CNode
was still in use: a refcount and a release timestamp. The latter
seems to have been there for a long time, as a safety mechanism.
However, this timer also keeps CNode objects alive for far longer
than necessary after disconnects, potentially opening up a DoS
window.
This commit removes the timestamp-based mechanism, and replaces
it with an assert(nRefCount >= 0), to verify that the refcounting
is indeed correctly working.
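A minimal sketch of the resulting refcount-only lifetime check (illustrative, not the actual CNode code):

```cpp
#include <cassert>

// With the release timestamp gone, the reference count alone decides whether
// the object may be deleted; a negative count would indicate a bug.
class NodeRefSketch {
public:
    void AddRef()  { ++nRefCount; }
    void Release() { --nRefCount; assert(nRefCount >= 0); }
    bool InUse() const { return nRefCount > 0; }
private:
    int nRefCount = 0;
};
```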
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, ported the BitcoinMiner threads to use their
own boost::thread_group.
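A sketch of the boost::thread_group pattern described above, with placeholder worker functions:

```cpp
#include <boost/thread.hpp>

// Placeholder workers; the real main-loop threads do the actual work.
void ThreadSocketHandler() { /* ... */ }
void ThreadMessageHandler() { /* ... */ }

int main() {
    boost::thread_group threadGroup;          // owns all main-loop threads
    threadGroup.create_thread(&ThreadSocketHandler);
    threadGroup.create_thread(&ThreadMessageHandler);

    // ... run until shutdown is requested ...

    threadGroup.interrupt_all();              // ask interruption-aware threads to stop
    threadGroup.join_all();                   // wait for them, replacing vnThreadsRunning[] polling
    return 0;
}
```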
This will result in re-requesting invs if we are under heavy inv
load; however, as long as we get no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential for causing large amounts of data to be
sent. In case several hundred blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
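A sketch of the deferral idea with illustrative names (not the actual CNode members):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

using InvItem = uint64_t;  // stand-in for a (type, hash) inventory entry

struct NodeSketch {
    std::deque<InvItem> vRecvGetData;   // getdata items not yet answered
    size_t nSendSize = 0;               // current send buffer usage
};

// Serve queued getdata items until the send buffer is full; anything left
// over is answered on a later pass through the message loop.
void ProcessGetDataSketch(NodeSketch& node, size_t send_buffer_limit) {
    while (!node.vRecvGetData.empty() && node.nSendSize < send_buffer_limit) {
        const InvItem item = node.vRecvGetData.front();
        node.vRecvGetData.pop_front();
        (void)item;  // a real implementation would push the block/tx here, growing nSendSize
    }
}
```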
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during the optimistic write. Drain the write queue before receiving
any more messages.
This avoids needlessly queueing received data, if the remote peer
is not themselves receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
Replaces CNode::vRecv buffer with a vector of CNetMessage's. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected if the message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter is impossible?).
Previously, code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage, to find pchMessageStart. Each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
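A heavily simplified sketch of the two-phase header/data parse (not the real CNetMessage; the 24-byte header and the length offset are illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

struct NetMessageSketch {
    static constexpr size_t HEADER_SIZE = 24;   // magic + command + length + checksum
    std::vector<unsigned char> header;
    std::vector<unsigned char> data;
    uint32_t payload_len = 0;

    bool HeaderComplete() const { return header.size() == HEADER_SIZE; }
    bool Complete() const { return HeaderComplete() && data.size() == payload_len; }

    // Consume bytes from the raw socket stream; returns how many were used.
    // The socket thread calls this repeatedly; the message thread only sees
    // messages for which Complete() is true.
    size_t Read(const unsigned char* buf, size_t len) {
        size_t used = 0;
        if (!HeaderComplete()) {
            const size_t take = std::min(len, HEADER_SIZE - header.size());
            header.insert(header.end(), buf, buf + take);
            used += take;
            if (!HeaderComplete()) return used;
            // The payload length sits at a fixed offset in the header
            // (offset 16 here is illustrative, not the real layout).
            std::memcpy(&payload_len, header.data() + 16, sizeof(payload_len));
            data.reserve(payload_len);          // size the buffer once, precisely
        }
        const size_t take = std::min(len - used, payload_len - data.size());
        data.insert(data.end(), buf + used, buf + used + take);
        return used + take;
    }
};
```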
Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Client (SPV) mode never got implemented entirely, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures to be encapsulated in a class implementing a standard interface,
and then writing an alternate implementation with SPV semantics.
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After each processed transaction, its signature actions are
pushed to a CScriptCheckQueue, which maintains a queue and some
synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
this queue.
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
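A small sketch of the queue-and-workers pattern described above; the real queue differs (its workers, for instance, wait on a condition variable instead of being spawned per block):

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class CheckQueueSketch {
public:
    // Called while transactions are being processed.
    void Add(std::function<bool()> check) {
        std::lock_guard<std::mutex> lock(m_);
        checks_.push(std::move(check));
    }

    // Run queued checks until none are left; false means some check failed.
    bool Drain() {
        for (;;) {
            std::function<bool()> check;
            {
                std::lock_guard<std::mutex> lock(m_);
                if (checks_.empty()) break;
                check = std::move(checks_.front());
                checks_.pop();
            }
            if (!check()) ok_ = false;
        }
        return ok_;
    }

    // Called once per block by the validating thread: spawn workers, help
    // drain the queue, then wait for everyone before the block continues.
    bool Wait(unsigned num_workers) {
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < num_workers; ++i)
            workers.emplace_back([this] { Drain(); });
        const bool result = Drain();        // the main thread helps out too
        for (auto& t : workers) t.join();
        return result && ok_;
    }

private:
    std::mutex m_;
    std::queue<std::function<bool()>> checks_;
    std::atomic<bool> ok_{true};
};
```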
These commands are a leftover from send-to-IP transactions, which have been
removed a long time ago.
Also removes CNode::mapRequests and CNode::PushRequests, as these were
only used for the mentioned commands.
Prior to this change, each TX typically generated 3+ debug messages,
askfor tx 8644cc97480ba1537214 0
sending getdata: tx 8644cc97480ba1537214
askfor tx 8644cc97480ba1537214 1339640761000000
askfor tx 8644cc97480ba1537214 1339640881000000
CTxMemPool::accept() : accepted 8644cc9748 (poolsz 6857)
After this change, there is only one message for each valid TX received
CTxMemPool::accept() : accepted 22a73c5d8c (poolsz 42)
and two messages for each orphan tx received
ERROR: FetchInputs() : 673dc195aa mempool Tx prev not found 1e439346fc
stored orphan tx 673dc195aa (mapsz 19)
The -debugnet option, or its superset -debug, will restore the full debug
output.
Introduce a boolean variable for each "network" (ipv4, ipv6, tor, i2p),
and track whether we are likely to be able to connect to it. Addresses in
"addr" messages outside of our network get limited relaying and are not
stored in addrman.
Change internal HTTP JSON-RPC server from single-threaded to
thread-per-connection model. The IP filter list is applied prior to starting
the thread, which then processes the RPC.
A mutex covers the entire RPC operation, because not all RPC operations are
thread-safe.
[minor modifications by jgarzik, to make change upstream-ready]
-externalip=<ip> can be used to explicitly set the public IP address
of your node. -discover=0 can be used to disable the automatic public
IP discovery system.
This commit removes the dependency of serialize.h on PROTOCOL_VERSION,
and makes this parameter required instead of implicit. This is much saner,
as it makes the places where changing a version number can have an
influence obvious.
Design goals:
* Only keep a limited number of addresses around, so that addr.dat does not grow without bound.
* Keep the address tables in-memory, and occasionally write the table to addr.dat.
* Make sure no (localized) attacker can fill the entire table with his nodes/addresses.
See comments in addrman.h for more detailed information.
This turns on most gcc warnings, and removes some unused variables and other code that triggers warnings.
Exceptions are:
-Wno-sign-compare : triggered by lots of comparisons of signed integer to foo.size(), which is unsigned.
-Wno-char-subscripts : triggered by the convert-to-hex functions (I may fix this in a future commit).
This introduces CNetAddr and CService, respectively wrapping an
(IPv6) IP address and an IP+port combination. This functionality used
to be part of CAddress, which also contains network flags and
connection attempt information. These extra fields are however not
always necessary.
These classes, along with logic for creating connections and doing
name lookups, are moved to netbase.{h,cpp}, which does not depend on
headers.h.
Furthermore, CNetAddr is mostly IPv6-ready, though IPv6
functionality is not yet enabled for the application itself.
Replaced all occurrences of #if* __WXMSW__ with WIN32,
and all occurrences of __WXMAC_OSX__ with MAC_OSX, and made
sure those are defined appropriately in the makefile and bitcoin-qt.pro.
This commit does *not* and should not modify *any* code, it only moves
it from net.h and splits it across protocol.cpp and protocol.hpp.
Signed-off-by: Giel van Schijndel <me@mortis.eu>
Move CMessageHeader from net.h to protocol.[ch]pp, with the
implementation in the .cpp compilation unit (compiling once is enough).
This commit does *not* and should not modify *any* code, it only moves
it from net.h and splits it across protocol.cpp and protocol.hpp.
Indentation changes aside, the closest thing to a modification of code is
the addition of the 'TODO' comment (the execution of which requires code
modifications and thus doesn't belong in this commit).
Signed-off-by: Giel van Schijndel <me@mortis.eu>
Explicitly make these global variables less-global to reduce the maximum
scope of this global state.
In my experience global variables tend to be a major source of bugs. As
such the less accessible they are the less likely they are to be the
source of a bug.
Signed-off-by: Giel van Schijndel <me@mortis.eu>
Regarding https://bitcointalk.org/index.php?topic=28022.0
main.cpp has: "char pchMessageStart[4] = { 0xf9, 0xbe, 0xb4, 0xd9 };"
Per discussion on the linked thread, leaving the signedness of
pchMessageStart unspecified is unsafe for values > 0x80. This patch specifies
'unsigned char' in main.cpp and net.h.
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
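A tiny example of why the signedness matters on platforms where plain char is signed:

```cpp
#include <cstdio>

int main() {
    char plain_magic = 0xf9;               // implementation-defined; typically -7 where char is signed
    unsigned char unsigned_magic = 0xf9;

    std::printf("%d\n", plain_magic == 0xf9);     // prints 0 where char is signed
    std::printf("%d\n", unsigned_magic == 0xf9);  // prints 1
    return 0;
}
```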
Introduce SendBufferSize() and ReceiveBufferSize(), and limit
the blocks sent as response to the "getblocks" message to
half of the active send buffer size.
Use non-blocking connects, and a select() call to wait a predefined
time (5s by default, but configurable with -timeout) for either
success or failure. This allows many more connections to be tried
per time unit.
Based on a patch by phantomcircuit.
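A POSIX-only sketch of the non-blocking connect technique (error handling trimmed; not the actual ConnectSocket code):

```cpp
#include <cerrno>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

// Put the socket in non-blocking mode, start the connect, then wait up to
// timeout_secs for it to become writable and check SO_ERROR for the result.
bool ConnectWithTimeout(int sock, const sockaddr* addr, socklen_t addrlen,
                        int timeout_secs)
{
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    if (connect(sock, addr, addrlen) == 0)
        return true;                                   // connected immediately
    if (errno != EINPROGRESS)
        return false;                                  // immediate failure

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    timeval tv{timeout_secs, 0};
    if (select(sock + 1, nullptr, &wfds, nullptr, &tv) <= 0)
        return false;                                  // timeout or select error

    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
    return err == 0;                                   // 0 means the connect succeeded
}
```

On success the socket can be switched back to blocking mode if the rest of the code expects it.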
* A new option -dns is introduced that enables name lookups in
-connect and -addnode, which is not enabled by default,
as it may be considered a security issue.
* A Lookup function is added that supports retrieving one or
more addresses based on a host name
* CAddress constructors (optionally) support name lookups.
* The different places in the source code that did name lookups
are refactored to use NameLookup or CAddress instead (dns seeding,
irc server lookup, getexternalip, ...).
* Removed ToStringLog() from CAddress, and switched to ToString(),
since it was empty.
There is no internal modification of any file in this commit.
Files are moved into directories according to established standards in
source code distribution; these directories contain:
src - Files that are used in constructing the executable binaries,
but are not installed.
doc - Files in HTML and text format that document usage, quirks of
the implementation, and contributor checklists.
locale - Files that contain human language translation of strings
used in the program
contrib - Files contributed from distributions or other third parties,
          implementing scripts and auxiliary programs