Commit graph

2385 commits

Author SHA1 Message Date
Chris Martin 0ddd10add6 Remove extra backtick in README.md 2016-09-13 15:10:48 -04:00
Marco Peereboom 5e93b1664e config: Replace log outputs with fmt.Fprintln.
This corrects the case where any errors that might have happened when
creating the config file were not being displayed due to the logging
system not being initialized yet.

While here, consistently use fmt.Fprint{f,ln} throughout the loadConfig
function even after the logging system is initialized since it will
help prevent copy/paste errors such as the one that induced this change
to begin with.
2016-09-12 02:21:37 -05:00
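
A minimal sketch of the pattern described in the commit above, assuming a
simplified loadConfig: before the logging subsystem exists, failures are
reported directly on stderr via fmt.Fprintf/Fprintln.

    package main

    import (
        "fmt"
        "os"
    )

    // createDefaultConfigFile stands in for the real config file creation logic.
    func createDefaultConfigFile(path string) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        return f.Close()
    }

    // loadConfig is a simplified stand-in; the logging system is not
    // initialized yet at this point, so errors go straight to stderr.
    func loadConfig(path string) error {
        if err := createDefaultConfigFile(path); err != nil {
            fmt.Fprintf(os.Stderr, "Error creating a default config file: %v\n", err)
            return err
        }
        return nil
    }

    func main() {
        if err := loadConfig("btcd.conf"); err != nil {
            os.Exit(1)
        }
    }
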
Dave Collins 2ef82e7db3
mempool: Improve tx input standard checks.
This changes the transaction input standardness checks as follows:

- Allow any script in a pay-to-script-hash transaction to be relayed and
  mined so long as it has no more than 15 signature operations
- Remove the obsolete checks which naively calculated the number of
  expected inputs in favor of the more accurate ScriptVerifyCleanStack
  and ScriptVerifySigPushOnly functionality of the script engine that was
  added after these checks.
2016-08-24 23:16:23 -05:00
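
A hedged sketch of the relaxed standardness rule, not the actual btcd code:
a pay-to-script-hash input is accepted so long as its signature-operation
count (computed elsewhere by the script engine) stays within the limit named
above.

    package policysketch

    import "fmt"

    // maxStandardP2SHSigOps mirrors the limit quoted in the commit message.
    const maxStandardP2SHSigOps = 15

    // checkP2SHSigOps is an illustrative helper: any redeem script is treated
    // as standard for relay and mining as long as it has no more than
    // maxStandardP2SHSigOps signature operations.
    func checkP2SHSigOps(numSigOps int) error {
        if numSigOps > maxStandardP2SHSigOps {
            return fmt.Errorf("transaction input has %d signature operations "+
                "which is more than the allowed max of %d", numSigOps,
                maxStandardP2SHSigOps)
        }
        return nil
    }
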
Olaoluwa Osuntokun c6d50b7abf
rpctest: ensure the main harness rejects non-standard transactions
This commit modifies the current set of integration tests to ensure
that the main harness always rejects non-standard transactions.

With this in place, even though we’re using simnet parameters, we
ensure that transaction acceptance/validation is identical to that of
main net.
2016-08-24 15:43:35 -07:00
Olaoluwa Osuntokun 815ded348e
config: introduce new flags to accept/reject non-std transactions
This commit adds two new cli flags: one for accepting non-std
transactions, and the other for rejecting non-std transactions.

The two flags are rejected when used together. Config parsing is
set up such that the desired policy expressed via the config always
overrides the default policy for a particular chain.

The doc.go files and the sample-btcd.conf file have been updated to document
the new flags exposing further policy control.
2016-08-24 15:43:26 -07:00
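
A rough sketch of the mutual-exclusion and override behaviour described
above; the flag names and the default are illustrative rather than the exact
ones added by the commit.

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        acceptNonStd := flag.Bool("acceptnonstd", false, "relay/mine non-standard transactions")
        rejectNonStd := flag.Bool("rejectnonstd", false, "reject non-standard transactions")
        flag.Parse()

        // The two flags may not be used together.
        if *acceptNonStd && *rejectNonStd {
            fmt.Fprintln(os.Stderr, "acceptnonstd and rejectnonstd may not be used at the same time")
            os.Exit(1)
        }

        // Start from the chain's default policy and let an explicit flag
        // override it.
        relayNonStd := false // hypothetical default for the active chain
        switch {
        case *acceptNonStd:
            relayNonStd = true
        case *rejectNonStd:
            relayNonStd = false
        }
        fmt.Println("relay non-standard transactions:", relayNonStd)
    }
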
Olaoluwa Osuntokun dc5486a579
mempool: add non-standard tx relaying to policy config 2016-08-24 15:43:22 -07:00
Dave Collins 15bace88dc
mempool: Add basic test harness infrastructure.
This adds a basic test harness infrastructure for the mempool package
which aims to make writing tests for it much easier.

The harness provides functionality for creating and signing transactions
as well as a fake chain that provides utxos for use in generating valid
transactions and allows an arbitrary chain height to be set.  In order
to simplify transaction creation, a single signing key and payment
address is used throughout and a convenience type for spendable outputs
is provided.

The harness is initialized with a spendable coinbase output by default
and the fake chain height set to the maturity height needed to ensure
the provided output is in fact spendable as well as a policy that is
suitable for testing.

Since tests are in the same package and each harness provides a unique
pool and fake chain instance, the tests can safely reach into the pool
policy, or any other state, and change it for a given harness without
affecting the others.

In order to be able to make use of the existing blockchain.Viewpoint
type, a Clone method has been added to the UtxoEntry type which allows the
fake chain instance to keep a single view with the actual available
unspent utxos while the mempool ends up fetching a subset of the view
with the specifically requested entries cloned.

To demo the harness, this also contains a couple of tests which make use
of it:

- TestSimpleOrphanChain -- Ensures an entire chain of orphans is
  properly accepted and connects up when the missing parent transaction
  is added
- TestOrphanRejects -- Ensures orphans are actually rejected when the
  flag on ProcessTransactions is set to reject them
2016-08-24 10:04:40 -05:00
David Hill a109bea3f1 mempool: unexport the mutex
callers should not need to lock/unlock the mempool themselves.
2016-08-23 14:59:48 -04:00
Dave Collins 641182b2ad
mempool: Break dependency on chain instance.
This modifies the config for the new mempool package such that it takes
a callback function to obtain the best chain height instead of requiring
a fully initialized blockchain.BlockChain instance.

This will make it much easier to test the mempool since the tests will
be able to provide their own height function to test various
functionality without having to create and manipulate full blocks and chain
instances.
2016-08-23 12:29:45 -05:00
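
A hedged sketch of the shape of the change: the pool's config carries a
best-height callback rather than a blockchain.BlockChain instance, so tests
can plug in a trivial closure. Type and field names follow the description
rather than the package itself.

    package mempoolsketch

    // Config carries only the field relevant to this change; the real
    // mempool.Config has many more options.
    type Config struct {
        // BestHeight returns the height of the current best chain.
        BestHeight func() int32
    }

    // TxPool is a minimal placeholder for the pool type.
    type TxPool struct {
        cfg Config
    }

    func New(cfg *Config) *TxPool {
        return &TxPool{cfg: *cfg}
    }

    // isMature shows pool logic consulting the callback instead of a fully
    // initialized blockchain instance.
    func (mp *TxPool) isMature(originHeight, maturity int32) bool {
        return mp.cfg.BestHeight()-originHeight >= maturity
    }

In a test, BestHeight can simply return a variable the test controls.
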
Dave Collins c57c18c8e2
blockchain: Add median time to state snapshot.
This adds a new field to the best chain state snapshot for the
calculated past median time as returned by the calcPastMedianTime
function.  This is useful since it provides fast access to it without
having to acquire the chain lock which is needed to recalculate it.

This will ultimately allow the associated exported function to be
removed since it only exists to be able to calculate this exact value,
however this commit only introduces the new field in order to keep the
changes minimal.
2016-08-23 01:29:11 -05:00
Olaoluwa Osuntokun bfe2ba4191
rpctest: prevent leaking processes in tests due to panics
This commit adds a `defer` statement at the top of `TestRpcServer`
which will attempt a `recover` which tears down all active harnesses in
the event that one of the tests causes a panic in the main goroutine.

Before this commit, if a buggy test caused a panic while all integration
tests were being executed, then any active harnesses would fail to be
properly torn down. This would cause the running btcd processes to be
leaked, possibly interfering with future test runs until the process was
manually killed. This commit fixes such behavior.

In order to aid in debugging, when a test panics, the test number is
printed out along with a full stack-trace from the start of the test to
the panic point.
2016-08-22 17:09:18 -07:00
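
The pattern in question is roughly the following generic sketch
(tearDownAll and the simulated panic are stand-ins, not the real test code):

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    // tearDownAll stands in for shutting down every active harness and its
    // btcd process.
    func tearDownAll() {
        fmt.Println("tearing down all active harnesses")
    }

    func runTests() {
        defer func() {
            // If a buggy test panics on the main goroutine, still tear down
            // the spawned processes and report where the panic happened.
            if r := recover(); r != nil {
                tearDownAll()
                fmt.Printf("recovered from panic: %v\n%s", r, debug.Stack())
            }
        }()

        // Individual test cases would run here; a panic in any of them is
        // caught by the deferred recover above.
        panic("simulated buggy test")
    }

    func main() {
        runTests()
    }
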
Dave Collins 6cf60b5fae
rpctest: Correct several issues in tests and joins.
This corrects several issues with the new rpctest package.

The following is an overview of the issues that have been corrected:

- The JoinNodes code was incorrect for both mempool and blocks.
  - For mempool it was only checking the first node against itself
  - For blocks it was only testing the height instead of hash and height
    which means the chains could be different and it would claim they're
    synced
- The test for mempool joins was inaccurate in a few ways:
  - It was generating separate chains such that the generated
    transaction was invalid (an orphan) on one chain, but not
    the other
  - Mempools are not automatically synced when nodes are connected and
    there is a large random delay before any transaction rebroadcast
    happens, so it can't be relied on for the purposes of this test
- The test for block joins was generating two independent chains of the
  same height with the same difficulty and was only passing due to the
  aforementioned bug in JoinNodes
- All of the ConnectNode calls were connecting the main harness outbound
  to the local test harness instances
  - This is not correct because ConnectNode makes the outbound
    connection persistent, which means once the local test harness is
    gone, it would keep trying to connect for the remainder of the tests
    to a node that is never coming back and instead ends up connecting to
    an independent test harness.
2016-08-21 15:10:23 -05:00
jadeblaquiere 47ced81d44 don't spin forever looking for peers (#724) 2016-08-20 22:11:12 -05:00
Olaoluwa Osuntokun 7c4b169faa
build: install btcd binary within Travis container
This commit modifies the existing Travis configuration to also install
the btcd binary within the container’s PATH.

This change is necessary in order for tests written against the new
rpctest package to work properly within the Travis testing environment.
2016-08-19 17:43:42 -05:00
Olaoluwa Osuntokun c3d5371615
build: update dependencies required by rpctest 2016-08-19 17:42:25 -05:00
Olaoluwa Osuntokun 2d86fbface
btcd: add basic RPC tests using the rpctest package
This commit introduces a new file, rpcserver_test.go, dedicated to
integration tests for btcd using the new rpctest package.

The tests are driven by a TestMain instance which first creates a single
main harness intended to be re-used across test instances.
Afterwards, all registered RPC tests are executed, with proper clean up
performed regardless of whether the tests pass.

The following RPC calls are exercised by the initial set of tests added:
    * getbestblock
    * getblockcount
    * getblockhash
2016-08-19 17:41:37 -05:00
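
In outline, the TestMain flow looks something like this sketch (the
setup/teardown helpers are placeholders, not the functions in
rpcserver_test.go):

    // rpcserver_test.go (sketch)
    package main

    import (
        "fmt"
        "os"
        "testing"
    )

    // setUpMainHarness and tearDownMainHarness are placeholders for creating
    // and destroying the shared rpctest harness.
    func setUpMainHarness() error { return nil }
    func tearDownMainHarness()    {}

    func TestMain(m *testing.M) {
        if err := setUpMainHarness(); err != nil {
            fmt.Fprintln(os.Stderr, "unable to set up main harness:", err)
            os.Exit(1)
        }

        // Run every registered RPC test against the shared harness.
        code := m.Run()

        // Clean up regardless of whether the tests passed.
        tearDownMainHarness()
        os.Exit(code)
    }
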
Olaoluwa Osuntokun b86df0ba91
rpctest: create new rpctest package
This commit adds a new package (rpctest) which provides functionality
for writing automated black box tests to exercise the RPC interface.

An instance of a rpctest consists of an active btcd process running in
(typically) --simnet mode, a btcrpcclient instance connected to said
node, and finally an embedded in-memory wallet instance (the memWallet)
which manages any coinbase outputs created by the mining btcd
node.

As part of the SetUp process for an RPC test, a test author can
optionally have a test blockchain created. The second argument
to SetUp dictates the number of mature coinbase outputs desired. The
btcd process will then be directed to generate a test chain of length:
100 + numMatureOutputs.

The embedded memWallet instance acts as a minimal, simple wallet for
each Harness instance. The memWallet itself is a BIP 32 HD wallet
capable of creating new addresses, creating fully signed transactions,
creating+broadcasting a transaction paying to an arbitrary set of
outputs, and querying the currently confirmed balance.

In order to test various scenarios of blocks containing arbitrary
transactions, one can use the Generate rpc call via the exposed
btcrpcclient connected to the active btcd node. Additionally, the
Harness also exposes a secondary block generation API allowing callers
to create blocks with a set of hand-selected transactions, and an
arbitrary BlockVersion or Timestamp.

After execution of test logic TearDown should be called, allowing the
test instance to clean up created temporary directories, and shut down
the running processes.

Running multiple concurrent rpctest.Harness instances is supported in
order to allow test authors to exercise complex scenarios. As a
result, the primary interface to create and initialize an
rpctest.Harness instance is concurrent safe, with shared package-level
private global variables protected by a sync.Mutex.

Fixes #116.
2016-08-19 17:37:08 -05:00
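
A hedged usage sketch of the package; the constructor signature and method
calls below are assumptions based on the description above rather than a
copy of the rpctest API.

    package main

    import (
        "log"

        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/rpctest"
    )

    func main() {
        // Spin up a btcd process on simnet together with the embedded
        // memWallet.  The extra arguments are assumed to be optional here.
        harness, err := rpctest.New(&chaincfg.SimNetParams, nil, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Request a test chain with 25 mature coinbase outputs, i.e. a chain
        // of length 100 + 25, as described in the commit message.
        if err := harness.SetUp(true, 25); err != nil {
            log.Fatal(err)
        }
        defer harness.TearDown()

        // From here the exposed btcrpcclient instance can drive the node
        // (e.g. the Generate call) and the memWallet can create, sign, and
        // broadcast transactions.
    }
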
Dave Collins 763f731c5c wire: Lower MaxUserAgentLen to 256. (#744)
The protocol was silently upgraded in Core some time ago to enforce a
limit of 256 for the user agent in version messages.  This updates wire
to coincide with that change and consequently includes a wire version
bump to 0.4.1.
2016-08-19 12:40:24 -05:00
Dave Collins 7fac099bee mempool: Refactor mempool code to its own package. (#737)
This does the minimum work necessary to refactor the mempool code into
its own package.  The idea is that separating this code into its own
package will greatly improve its testability, allow independent
benchmarking and profiling, and open up some interesting opportunities
for future development related to the memory pool.

There are likely some areas related to policy that could be further
refactored, however it is better to do that in future commits in order
to keep the changeset as small as possible during this refactor.

Overview of the major changes:

- Create the new package
- Move several files into the new package:
  - mempool.go -> mempool/mempool.go
  - mempoolerror.go -> mempool/error.go
  - policy.go -> mempool/policy.go
  - policy_test.go -> mempool/policy_test.go
- Update mempool logging to use the new mempool package logger
- Rename mempoolPolicy to Policy (so it's now mempool.Policy)
- Rename mempoolConfig to Config (so it's now mempool.Config)
- Rename mempoolTxDesc to TxDesc (so it's now mempool.TxDesc)
- Rename txMemPool to TxPool (so it's now mempool.TxPool)
- Move defaultBlockPrioritySize to the new package and export it
- Export DefaultMinRelayTxFee from the mempool package
- Export the CalcPriority function from the mempool package
- Introduce a new RawMempoolVerbose function on the TxPool and update
  the RPC server to use it
- Update all references to the mempool to use the package.
- Add a skeleton README.md
2016-08-19 11:08:37 -05:00
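
A hedged caller-side sketch using the newly exported names; only a
representative policy field is shown, and the assumption is that a Config
populated this minimally is enough for illustration.

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/mempool"
    )

    func main() {
        // mempoolPolicy/mempoolConfig/txMemPool are now mempool.Policy,
        // mempool.Config, and mempool.TxPool respectively.
        cfg := mempool.Config{
            Policy: mempool.Policy{
                MinRelayTxFee: mempool.DefaultMinRelayTxFee,
            },
        }
        txPool := mempool.New(&cfg)
        fmt.Println("transactions in pool:", txPool.Count())
    }
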
Dave Collins 87b3756c8c server: Remove superfluous check in OnMemPool. (#736)
This reduces the mempool lock contention by removing an unnecessary
check when responding to a "mempool" request.

In particular, the code first gets a list of all transactions from the
mempool and then iterates them in order to construct the inventory
vectors and apply bloom filtering if it is enabled.  Since it is
possible that the transaction was removed from the mempool by another
thread while that list is being iterated, the code was checking if each
transaction was still in the mempool.  This is a pointless check because
the transaction might still be removed at any point after the check
anyways.  For example, it might be removed after the mempool response
has been sent to the remote peer or even while the loop is still
iterating.
2016-08-19 11:04:16 -05:00
David Hill 05ab7141d3 travis: Add go 1.7 and drop go 1.5 support. (#740) 2016-08-18 11:06:26 -05:00
Janus Troelsen 4a5223266c docs: Add chainhash to README.md (#739) 2016-08-17 10:41:50 -05:00
Dave Collins cee207c64c txscript: Expose AddOps on ScriptBuilder. (#734)
This exposes a new function on the ScriptBuilder type named AddOps that
allows multiple opcodes to be added via a single call and adds tests to
exercise the new function.

Finally, it updates a couple of places in the signing code that were
abusing the interface by setting its private script directly to use the
new public function instead.
2016-08-12 19:29:28 -05:00
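
For instance, a standard pay-to-pubkey-hash script can now be built by
queuing several opcodes per call (a brief usage sketch; the 20-byte hash is
a placeholder):

    package main

    import (
        "fmt"
        "log"

        "github.com/btcsuite/btcd/txscript"
    )

    func main() {
        // Queue multiple opcodes with single AddOps calls instead of
        // repeated AddOp calls or poking at the builder's private script.
        builder := txscript.NewScriptBuilder()
        builder.AddOps([]byte{txscript.OP_DUP, txscript.OP_HASH160})
        builder.AddData(make([]byte, 20)) // placeholder pubkey hash
        builder.AddOps([]byte{txscript.OP_EQUALVERIFY, txscript.OP_CHECKSIG})

        script, err := builder.Script()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("script: %x\n", script)
    }
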
Waldir Pimenta fb9b640ef2 add license title (#717)
It's not strictly required, but it's useful metadata, and part of the recommended license template text (see http://choosealicense.com/licenses/isc/ and https://opensource.org/licenses/isc-license)
2016-08-11 16:58:15 -05:00
Dave Collins 044a11c9fc btcd: Simplify shutdown signal handling logic. (#733)
This rewrites the shutdown logic to simplify the shutdown signalling.
All cleanup is now run from deferred functions in the main function and
channels are used to signal shutdown either from OS signals or from
other subsystems such as the RPC server and windows service controller.

The RPC server has been modified to use a new channel for signalling
shutdown that is exposed via the RequestedProcessShutdown function
instead of directly calling Stop on the server as it previously did.

Finally, it adds a few checks for early termination during the main
start sequence so the process can be stopped without starting all the
subsystems if desired.

This is a backport of the equivalent logic from Decred with a few slight
modifications.  Credits go to @jrick.
2016-08-11 13:39:23 -05:00
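
In outline, the signalling works roughly like the following sketch (names
are generic stand-ins, not btcd's):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    // interruptListener returns a channel that is closed on an OS signal or
    // when another subsystem (e.g. the RPC server) requests shutdown.
    func interruptListener(requestShutdown <-chan struct{}) <-chan struct{} {
        done := make(chan struct{})
        go func() {
            sigs := make(chan os.Signal, 1)
            signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
            select {
            case s := <-sigs:
                fmt.Println("received signal:", s)
            case <-requestShutdown:
                fmt.Println("shutdown requested by subsystem")
            }
            close(done)
        }()
        return done
    }

    func main() {
        // All cleanup hangs off deferred calls in main, so any return path
        // tears the process down in an orderly fashion.
        defer fmt.Println("cleanup: subsystems stopped")

        rpcShutdown := make(chan struct{}) // e.g. exposed via RequestedProcessShutdown
        interrupted := interruptListener(rpcShutdown)

        // ... start subsystems here, checking for early termination ...
        <-interrupted
    }
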
Dave Collins a7b35d9f9e chaincfg/blockchain: Parameterize more chain consts. (#732)
This moves several of the chain constants to the Params struct in the
chaincfg package which is intended for that purpose.  This is mostly a
backport of the same modifications made in Decred along with a few
additional things cleaned up.

The following is an overview of the changes:

- Comment all fields in the Params struct definition
- Add locals to BlockChain instance for the calculated values based on
  the provided chain params
- Rename the following param fields:
  - SubsidyHalvingInterval -> SubsidyReductionInterval
  - ResetMinDifficulty -> ReduceMinDifficulty
- Add new Param fields:
  - CoinbaseMaturity
  - TargetTimePerBlock
  - TargetTimespan
  - BlocksPerRetarget
  - RetargetAdjustmentFactor
  - MinDiffReductionTime
2016-08-10 16:02:23 -05:00
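
As a rough illustration only, a subset of the fields named above, with
values loosely following Bitcoin testnet rather than copied from chaincfg:

    package chaincfgsketch

    import "time"

    // Params sketches a handful of the parameterized fields; the real
    // chaincfg.Params struct carries many more.
    type Params struct {
        SubsidyReductionInterval int32
        ReduceMinDifficulty      bool
        CoinbaseMaturity         uint16
        TargetTimePerBlock       time.Duration
        TargetTimespan           time.Duration
        RetargetAdjustmentFactor int64
        MinDiffReductionTime     time.Duration
    }

    // ExampleParams uses illustrative values in the spirit of a test network.
    var ExampleParams = Params{
        SubsidyReductionInterval: 210000,
        ReduceMinDifficulty:      true,
        CoinbaseMaturity:         100,
        TargetTimePerBlock:       10 * time.Minute,
        TargetTimespan:           14 * 24 * time.Hour,
        RetargetAdjustmentFactor: 4,
        MinDiffReductionTime:     20 * time.Minute,
    }
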
Dave Collins bd4e64d1d4 chainhash: Abstract hash logic to new package. (#729)
This is mostly a backport of some of the same modifications made in
Decred along with a few additional things cleaned up.  In particular,
this updates the code to make use of the new chainhash package.

Also, since this required API changes anyways and the hash algorithm is
no longer tied specifically to SHA, all other functions throughout the
code base which had "Sha" in their name have been changed to Hash so
they are not incorrectly implying the hash algorithm.

The following is an overview of the changes:

- Remove the wire.ShaHash type
- Update all references to wire.ShaHash to the new chainhash.Hash type
- Rename the following functions and update all references:
  - wire.BlockHeader.BlockSha -> BlockHash
  - wire.MsgBlock.BlockSha -> BlockHash
  - wire.MsgBlock.TxShas -> TxHashes
  - wire.MsgTx.TxSha -> TxHash
  - blockchain.ShaHashToBig -> HashToBig
  - peer.ShaFunc -> peer.HashFunc
- Rename all variables that included sha in their name to include hash
  instead
- Update for function name changes in other dependent packages such as
  btcutil
- Update copyright dates on all modified files
- Update glide.lock file to use the required version of btcutil
2016-08-08 14:04:33 -05:00
Dave Collins b6b1e55d1e TravisCI: Set vendor experiment variable earlier. (#731)
This sets the GO15VENDOREXPERIMENT environment variable before glide is
installed so vendoring is used when installing it as well.
2016-08-08 13:31:17 -05:00
Dave Collins 711f33450c chainhash: Implement a new chainhash package. (#730)
This is a backport of the chainhash package made in Decred along with a
few additional things cleaned up, finished test coverage, and rewording
of some documentation to make it more generic.

In particular, the new package provides the definition of the hash type
and associated hashing functions which will allow the rest of the code to be
agnostic to the specific hash algorithm.

This only implements the package and does not change any of the code
base over to use it.
2016-08-08 12:05:51 -05:00
Dave Collins d406d9e52b wire: Consolidate tests into the wire pkg. (#728)
Putting the test code in the same package makes it easier for forks
since they don't have to change the import paths as much and it also
gets rid of the need for internal_test.go to bridge.

This same thing should probably be done for the majority of the code
base.
2016-08-08 11:42:54 -05:00
David Hill 6e644855f5 Bump travis to use the latest golang - 1.6.3 (#723) 2016-07-25 13:48:19 -05:00
David Hill 61a15f6f1b blockchain: optimize HaveBlock (#720)
If a block is known to exist in the memory chain or database then
there is no need to check the orphan pool.
2016-07-25 10:16:57 -05:00
Dave Collins 00ebb9d14d blockchain: Associate time src with chain instance.
Rather than requiring the caller to pass in the median time source on
ProcessBlock and IsCurrent, modify the Config struct to include the
median time source and associate it with the chain instance when it is
created.

This is being done because both the ProcessBlock and IsCurrent functions
already require access to the blockchain state, it is a little bit safer
to ensure the time source matches the chain instance state, it
simplifies the caller logic, and it also allows its use within the logic
of the blockchain package itself, which will be required by upcoming
rule change warning logic that is part of BIP9.
2016-07-14 13:10:47 -05:00
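
A hedged caller-side sketch; the Config field names follow the description
and the btcd packages of the time, but should be treated as assumptions.

    package chainsetup

    import (
        "github.com/btcsuite/btcd/blockchain"
        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/database"
    )

    // newChain associates the median time source with the chain instance at
    // creation time instead of passing it to ProcessBlock and IsCurrent.
    func newChain(db database.DB) (*blockchain.BlockChain, error) {
        return blockchain.New(&blockchain.Config{
            DB:          db,
            ChainParams: &chaincfg.MainNetParams,
            TimeSource:  blockchain.NewMedianTime(),
        })
    }
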
Jonathan Gillham 1ffc3dc18d peer: Fix logging of connected peer. 2016-06-24 13:39:50 +01:00
Jonathan Gillham f3d759d783 peer: Extract protocol negotiation from main read and write code paths.
This allows cleaner separation of the half-duplex version negotiation from the fully duplex message passing between peers.
2016-06-24 13:18:33 +01:00
Jonathan Gillham 777ccdade3 peer: Remove error return from Connect. 2016-06-24 13:12:01 +01:00
Jonathan Gillham 5cbd1f85bf peer: Remove potential race when calling Connect. 2016-06-24 13:12:01 +01:00
David Hill 7de7bddba9 peer: use atomics instead of mutexes (#670) 2016-06-20 14:34:21 -05:00
Hector Jusforgues ff4ada0b0e Add automatic RPC configuration. 2016-06-03 21:14:15 -05:00
Dave Collins 6229e35835 wire: Further reduce transaction allocs.
This commit drastically reduces the number of allocations needed to
deserialize a transaction and its scripts by using the combination of a
free list for initially deserializing the individual scripts along with
copying them into a single contiguous byte slice after the final size is
known and modifying each script in the transaction to point to its
location within the contiguous blob.

The end result is only a single allocation that holds all of the scripts
for a transaction regardless of the total number of scripts it has.

The script free list allows a maximum of 12,500 items with each buffer
being 512 bytes.  This implies it will have a peak usage of 6.1MB.  The
values were chosen based on profiling data and a desire to allow at
least 100 scripts per transaction to be simultaneously deserialized by
125 peers.

Also, while optimizing, decode directly into the existing previous
outpoint structure of each transaction input in order to avoid the extra
allocation per input that is otherwise caused when the local variable
escapes to the heap.

The following is a before and after comparison of the allocations
with the benchmarks that did not change removed:

benchmark              old allocs     new allocs     delta
-----------------------------------------------------------
ReadTxOut              1              0              -100.00%
ReadTxIn               2              0              -100.00%
DeserializeTxSmall     7              5              -28.57%
DeserializeTxLarge     11146          6              -99.95%
2016-06-03 17:09:14 -05:00
Dave Collins 2adfb3b56a wire: Reduce allocs with contiguous slices.
The current code involves a ton of small allocations which is harsh on
the garbage collector and in turn causes a lot of additional runtime
overhead both in terms of additional memory and processing time.

In order to improve the situation, this drastically reduces the number
of allocations by creating contiguous slices of objects and
deserializing into them.  Since the final data structures consist of
slices of pointers to the objects, they are constructed by pointing them
into the appropriate offset of the contiguous slice.

This could be improved upon even further by converting all of the data
structures provided by the wire package to be slices of contiguous objects
directly, however that would be a major breaking API change and would
end up requiring updating a lot more code in every caller.  I do think
that ultimately the API should be changed, but the changes in this
commit already makes a massive difference and it doesn't require
touching any of the callers, so it is a good place to begin.

The following is a before and after comparison of the allocations
with the benchmarks that did not change removed:

benchmark              old allocs     new allocs     delta
-----------------------------------------------------------
DeserializeTxLarge     16715          11146          -33.32%
DecodeGetHeaders       501            2              -99.60%
DecodeHeaders          2001           2              -99.90%
DecodeGetBlocks        501            2              -99.60%
DecodeAddr             3001           2002           -33.29%
DecodeInv              50003          3              -99.99%
DecodeNotFound         50002          3              -99.99%
DecodeMerkleBlock      107            3              -97.20%
2016-06-03 17:08:31 -05:00
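
The general trick is sketched below with a toy type rather than wire's
actual structures: allocate one contiguous backing slice and have the slice
of pointers index into it.

    package main

    import "fmt"

    // item is a toy stand-in for a deserialized wire object such as a TxIn.
    type item struct {
        value uint64
    }

    // readItems "decodes" count items using one contiguous allocation for the
    // objects plus one for the pointer slice, instead of a separate
    // allocation per object.
    func readItems(count int) []*item {
        backing := make([]item, count)
        items := make([]*item, count)
        for i := range backing {
            backing[i].value = uint64(i) // decode into the contiguous slot
            items[i] = &backing[i]       // point into the backing array
        }
        return items
    }

    func main() {
        items := readItems(4)
        fmt.Println(len(items), items[2].value)
    }
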
Dave Collins 5de5b7354c wire: Avoid allocation on timestamp decodes.
Since the protocol encodes timestamps differently depending on the
message, the code currently decodes into a local variable and then
converts it to a time.Time.  However, this causes an allocation due to
the local having to escape to the heap in order for the readElement
function to write to it.

So, in order to avoid that, this introduces two new types for a
timestamp named uint32Time and int64Time that are encoded as the
respective type on the read.  When calling the readElements function,
the time.Time field in the message is cast to a pointer of the
appropriate type which effectively allows the allocations to be avoided.

The following is a before and after comparison of the allocations
with the benchmarks that did not change removed:

benchmark              old allocs     new allocs     delta
----------------------------------------------------------------------
ReadBlockHeader        1              0              -100.00%
DecodeHeaders          4001           2001           -49.99%
DecodeAddr             4001           3001           -24.99%
DecodeMerkleBlock      108            107            -0.93%
2016-06-03 17:08:31 -05:00
Dave Collins f68cd7422d wire: Reduce allocs with a binary free list.
This introduces a new binary free list which provides a concurrent safe
list of unused buffers for the purpose of serializing and deserializing
primitive integers to their raw binary bytes.

For convenience, the type also provides functions for each of the
primitive unsigned integers that automatically obtain a buffer from the
free list, perform the necessary binary conversion, read from or write
to the given io.Reader or io.Writer, and return the buffer to the free
list.

A global instance of the type has been introduced with a maximum number
of 1024 items. Since each buffer is 8 bytes, it will consume a maximum
of 8KB.  Theoretically, this value would only allow up to 1024 peers
simultaneously reading and writing without having to resort to burdening
the garbage collector with additional allocations.  However, due to the
fact the code is designed in such a way that the buffers are quickly
used and returned to the free list, in practice it can support much more
than 1024 peers without involving the garbage collector since it is
highly unlikely every peer would need a buffer at the exact same time.

The following is a before and after comparison of the allocations
with the benchmarks that did not change removed:

benchmark              old allocs     new allocs     delta
-------------------------------------------------------------
WriteVarInt1           1              0              -100.00%
WriteVarInt3           1              0              -100.00%
WriteVarInt5           1              0              -100.00%
WriteVarInt9           1              0              -100.00%
ReadVarInt1            1              0              -100.00%
ReadVarInt3            1              0              -100.00%
ReadVarInt5            1              0              -100.00%
ReadVarInt9            1              0              -100.00%
ReadVarStr4            3              2              -33.33%
ReadVarStr10           3              2              -33.33%
WriteVarStr4           2              1              -50.00%
WriteVarStr10          2              1              -50.00%
ReadOutPoint           1              0              -100.00%
WriteOutPoint          1              0              -100.00%
ReadTxOut              3              1              -66.67%
WriteTxOut             2              0              -100.00%
ReadTxIn               5              2              -60.00%
WriteTxIn              3              0              -100.00%
DeserializeTxSmall     15             7              -53.33%
DeserializeTxLarge     33428          16715          -50.00%
SerializeTx            8              0              -100.00%
ReadBlockHeader        7              1              -85.71%
WriteBlockHeader       10             4              -60.00%
DecodeGetHeaders       1004           501            -50.10%
DecodeHeaders          18002          4001           -77.77%
DecodeGetBlocks        1004           501            -50.10%
DecodeAddr             9002           4001           -55.55%
DecodeInv              150005         50003          -66.67%
DecodeNotFound         150004         50002          -66.67%
DecodeMerkleBlock      222            108            -51.35%
TxSha                  10             2              -80.00%
2016-06-03 17:08:31 -05:00
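
The free list amounts to a buffered channel of fixed-size byte slices; a
simplified sketch (not the wire implementation) follows.

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // binaryFreeList is a concurrent-safe list of 8-byte buffers: Borrow takes
    // a buffer if one is available or allocates a fresh one, and Return gives
    // it back unless the list is already full.
    type binaryFreeList chan []byte

    func (l binaryFreeList) Borrow() []byte {
        select {
        case buf := <-l:
            return buf
        default:
            return make([]byte, 8)
        }
    }

    func (l binaryFreeList) Return(buf []byte) {
        select {
        case l <- buf:
        default: // list is full; let the GC reclaim the buffer
        }
    }

    // Uint32 reads a little-endian uint32 using a free-list buffer so the
    // read itself performs no allocation once the list is warm.
    func (l binaryFreeList) Uint32(r io.Reader) (uint32, error) {
        buf := l.Borrow()
        defer l.Return(buf)
        if _, err := io.ReadFull(r, buf[:4]); err != nil {
            return 0, err
        }
        return binary.LittleEndian.Uint32(buf[:4]), nil
    }

    func main() {
        freeList := make(binaryFreeList, 1024) // 1024 buffers * 8 bytes = 8KB max
        v, err := freeList.Uint32(bytes.NewReader([]byte{0x2a, 0, 0, 0}))
        fmt.Println(v, err) // 42 <nil>
    }
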
Dave Collins dc83f4ee6a addblock: Add support for generating indexes.
This adds two new flags, --txindex and --addrindex, to the addblock
utility which mirror the flags on btcd.  They serve to specify that
the transaction index and/or address index, respectively, should be
built while importing from the bootstrap file.

This is technically not 100% required since btcd will build the indexes
on the first load (when enabled) if they aren't already built; however,
it is much faster to build the indexes as the blocks are being validated
(particularly for the address index), so this makes the capability
available.
2016-06-03 17:06:16 -05:00
Mawueli Kofi Adzoe 7f07fb1093 txscript: Fix typo. (#700)
* Fix tiny typo. Bump copyright year.
* Clarify documentation.
2016-05-22 23:23:20 -05:00
Mawueli Kofi Adzoe e8e2167a1a Bump up copyright. Reflect recent update. (#699) 2016-05-22 23:22:42 -05:00
Steven Roose 24e41c843b Update installation instructions using Glide (#698)
The main README.md file had newer installation instructions that use Glide, while these did not.
2016-05-21 17:30:26 -05:00
Nathan Bass f893558d78 Minor correction to GenerateSharedSecret documentation. (#696) 2016-05-14 22:56:29 -05:00
Dave Collins 2554caee59 build: Convert project to use glide. (#689)
This converts the project to allow btcd to be used with the glide
package manager in order to provide stable and reproducible builds
without the user having to jump through all of the hoops as they do
today.

It consists of adding a glide.yaml file which identifies the project
dependencies and locations along with a glide.lock file which contains
the complete dependency tree pinned to specific versions.  Glide uses
these files to download the packages (or updates) to a local vendor
directory and checkout the correct pinned versions.  The go tool, in
turn, is used to build/install btcd and will use the pinned versions in
the vendor directory.

This also updates TravisCI to build using glide, removes some of the
exceptions in the lint checks which are no longer required, and updates
the README.md with the new instructions needed to build the project with
glide.
2016-05-06 10:47:53 -05:00
Dave Collins 128366734f main: Limit garbage collection percentage. (#686)
This reduces the target ratio of freshly allocated data to live data to
10% in order to limit excessive overallocations by the garbage collector
during data bursts such as processing complex blocks or rapidly
receiving a lot of large transactions.
2016-05-05 14:16:58 -05:00
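
The knob in question is the runtime's GC target percentage; a minimal
sketch:

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // Trigger garbage collection once freshly allocated data reaches 10%
        // of the live data from the previous cycle (the default is 100%).
        previous := debug.SetGCPercent(10)
        fmt.Println("previous GC percent:", previous)
    }
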