Compare commits

...

117 commits

Author SHA1 Message Date
Brannon King
71fc94b1de
Merge pull request #410 from mjovanc/master 2021-11-26 18:37:08 -05:00
Marcus Cvjeticanin
927876e0c0 Changing download path artifactory URL for Boost 2021-11-26 20:52:04 +01:00
Alex Grin
73893d2f6b
Update README.md 2021-05-03 18:05:10 -04:00
Brannon King
7afc4c418a scale the cache change down a bit 2021-04-15 23:35:41 -04:00
Brannon King
db7e0b59e4 backport the rest of the depends adjustments from v17.4 2021-04-15 23:27:32 -04:00
Brannon King
1236c13a00 tweaked cache and minwork 2021-04-15 21:46:45 -04:00
Brannon King
a35385a5c0 backport wakeup fix 2021-04-15 21:15:40 -04:00
Brannon King
60a3d11df2 backport glib version fix, lsn_reset fix 2021-04-15 21:15:16 -04:00
Brannon King
cd7c2961dc rolling version 2021-04-15 10:13:33 -04:00
Alex Grintsvayg
03b1287359
Merge branch 'dnsseeds'
* dnsseeds:
  add a couple dns seeds
2021-04-13 10:12:14 -04:00
Alex Grintsvayg
131bcc8c9d
add a couple dns seeds 2021-04-12 13:49:13 -04:00
Alex Grintsvayg
29cc82db45
update github issue template 2021-04-05 10:07:20 -04:00
Brannon King
be118de19a raised default max tx fee 2019-11-26 15:16:59 -07:00
Brannon King
bf8cb69987 worked around an off-by-1 issue on the normalization fork block 2019-11-26 15:16:59 -07:00
Brannon King
37d177178f changed flush to have min height
don't flush blocks on regtest
2019-11-26 15:16:59 -07:00
gahag
f05b5973ae Implement basic log rotation (closes #211) (#344)
* Remove shrinkdebugfile flag (#211)

To implement a basic form of log rotation, two instances of the log file are to
be adopted: one for the current execution, and one for the previous
execution. On startup, if the log file exists, it will be renamed into the old
log file. This implies the deprecation and removal of the log shrink flag, since
the log is no longer forever growing.

* Implement log backup

To implement a basic form of log rotation, two instances of the log file are to
be adopted: one for the current execution, and one for the previous
execution. On startup, if the log file exists, it is renamed into the old
log file. This means that you should always have logs for the last 2 executions.

closes #211
2019-10-29 14:21:04 -06:00
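A minimal shell sketch of the rotation scheme described in the commit message above; the actual behavior lives inside the daemon's logging code, and the data directory and log file name here are the defaults mentioned elsewhere in this changeset:

```sh
# On every startup, keep the previous run's log as debug.log.old and let the
# new run write a fresh debug.log, so only the last two executions are retained.
LOGDIR="$HOME/.lbrycrd"   # default data directory on Unix (assumption)
if [ -f "$LOGDIR/debug.log" ]; then
    mv -f "$LOGDIR/debug.log" "$LOGDIR/debug.log.old"
fi
```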
Aditya J Karia
4a3c2e6504 Docs: Updated README.md files with TOC and best practices in Mar… (#339)
* Added table of contents to README.md

* Restructured headings and contents to follow Markdown best Practices
2019-10-29 14:20:05 -06:00
Bharat Raghunathan
8010681915 Update hyperlinks in README (#326) 2019-10-29 14:11:49 -06:00
GwanYeong Kim
6fe70b58ce Fix 'Use $(...) notation instead of legacy backticked ....' issue in shell script 2019-10-29 14:05:48 -06:00
Eric Brian Anil
51ec0a92f7 MIT License badge
Added the MIT license badge that redirects to the license page
2019-10-21 07:37:04 -06:00
Thomas Zarebczan
d3a8722ea8
Merge pull request #340 from addy1510/master
Fix typo in README.md
2019-10-17 21:06:08 -04:00
addy1510
ab08f6b35e
Fix typo in README.md 2019-10-18 06:33:42 +05:30
Brannon King
76e3d8861c error w/o segwit after fork 2019-10-11 14:16:38 -06:00
Brannon King
b16356b927
added segwit instructions 2019-10-10 10:50:02 -06:00
Brannon King
1bbedef565 ensure we don't return witness data in the transaction w/o segwit rule 2019-10-09 21:53:07 -06:00
Jeremy Kauffman
e5f049e5ed mention mailing list on README 2019-10-02 18:01:46 -06:00
Brannon King
9dee2eee5f windows compilation fix round 2 2019-09-30 13:58:11 -06:00
Brannon King
785471745c fixed compilation on Windows 2019-09-30 13:22:10 -06:00
Brannon King
7afebb02f4 Use memory mapped file for claim data allocations
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>

match previous serialization


tweaks


added check for RC's data
2019-09-30 12:21:55 -06:00
Brannon King
8932d90a9e rolled version, added info to chaintips 2019-09-27 09:26:26 -06:00
Brannon King
96fdb05689 added helper method, enabled signing 2019-09-26 12:29:40 -06:00
Brannon King
9149e371ed fixed fee calc issue, removed segwit enable fallback 2019-09-25 13:42:34 -06:00
Anthony Fieroni
12ab16f50e Show bid and sequence as optional parameters
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-19 10:59:15 -06:00
Anthony Fieroni
c185b49ede Show claimtrie help in cli rpc
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-19 10:59:15 -06:00
Brannon King
d05eee35e8 updated go-live heights 2019-09-18 13:53:07 -06:00
lbrynaut
4df4136c5d Fix unit tests after recent breaks. 2019-09-17 09:46:44 -06:00
Brannon King
b7cdd9f2a0 added staked totals to getwalletinfo 2019-09-16 16:04:57 -06:00
Brannon King
ffe828b1d9 rolled version, fix txindex_test, other tweaks 2019-09-16 14:54:05 -06:00
Brannon King
71bd612c4a Fix broken test on previous checkin 2019-09-16 14:49:35 -06:00
Brannon King
f176db058e change rpc enablement mechanism 2019-09-13 16:18:36 -06:00
Brannon King
b434864f18 initial commit of metadata on supports 2019-09-13 16:18:36 -06:00
Brannon King
4da4ab1995 restored nonstandard output on getrawtransaction
attempting to make index pointer problems more obvious


reverted strip in Solver


synced subtype
2019-09-13 16:16:08 -06:00
Brannon King
920a9b09dc proposed fix for issue 242 2019-09-13 16:16:08 -06:00
Brannon King
8b98a9a4e9 Organize unit tests by type
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>

moved a few more tests out
2019-09-11 09:21:45 -06:00
Anthony Fieroni
476eae3e93 Match network id to regtest
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-10 11:54:04 -06:00
Anthony Fieroni
6575fb9534 Show correct script ops in asm
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-10 10:02:16 -06:00
Brannon King
1583082acc added support for tips in RPC, minor cleanup 2019-09-06 14:03:38 -06:00
Anthony Fieroni
f71d6a4263 Introduce pending amount, the value when claim and its supports got valid
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-06 14:03:38 -06:00
Anthony Fieroni
d2b78a7fc3 Make optional claimid and amount in supportclaim
Return destination claim address as well

Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-06 14:03:38 -06:00
Anthony Fieroni
a98288aa80 Logic fixes, unit test
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-06 14:03:38 -06:00
Anthony Fieroni
3a0b4232a5 Add bid, sequence like rpc methods
Reuse a bunch of rpc help texts
2019-09-06 14:03:38 -06:00
Anthony Fieroni
5bdbc9e0d6 Split help and rpc methods
Use constants for field names

Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-06 14:03:38 -06:00
Anthony Fieroni
9d4ef899a6 Unify claimtrie rpc methods
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-09-06 14:03:38 -06:00
Brannon King
d0e8374636 more thoroughly tested tx signing 2019-09-05 14:17:22 -06:00
Brannon King
d2b26da3e8 added parameter for claim db cache size 2019-08-30 15:19:09 -06:00
Brannon King
59faea9815 added unit test for signing claims 2019-08-30 13:54:19 -06:00
Brannon King
ce6be620a5 setting go-live heights 2019-08-30 13:52:32 -06:00
Anthony Fieroni
580f6e20eb Implement binary tree hash algorithm
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-08-30 13:52:32 -06:00
Brannon King
7d93edf776 fix osx unit test failure 2019-08-30 09:22:58 -06:00
Brannon King
b3c5b1e88d simplified claim stripping, removed TX_CLAIM 2019-08-29 14:49:54 -06:00
Brannon King
a84c196916 reduced max open files on levelDB 2019-08-29 08:57:45 -06:00
Brannon King
b52f47f273 removed shared_ptr on TData, set minWork 2019-08-29 08:57:45 -06:00
Anthony Fieroni
e9d37a8c81 Trying to minimize disk reads / writes
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-08-29 08:57:45 -06:00
Brannon King
d96253cd40 attempted optimization of dbwrapper
it will only work for single-threaded access
2019-08-29 08:57:45 -06:00
Brannon King
4a5d310fd3 separate claim from children storage 2019-08-29 08:57:45 -06:00
Brannon King
c6e267e970 optimized a little 2019-08-29 08:57:45 -06:00
Brannon King
7b5ae24bea first pass at not loading full claimtrie into RAM
tweaks
2019-08-29 08:57:45 -06:00
lbrynaut
92037d786a Detect "claim" type transactions.
Add code to enable a hardfork into witness support, in addition to
possible BIP9 fiddling.
Fix a bug in abandonclaim and abandonsupport that burns coins on
abandon, rather than sending to the intended destination.
2019-08-28 13:35:04 -06:00
lbrynaut
494c48875d Fix unit tests. 2019-08-28 13:35:04 -06:00
Brannon King
fc0da99894 updated to support using bech32 addresses with claim ops 2019-08-28 13:35:04 -06:00
Brannon King
20e96d1233 fix unit test crash on OSX
pulled in some fixes from v18
2019-08-08 10:11:09 -06:00
Brannon King
43214bc6d2 removed superfluous fRequireTakeoverHeights 2019-07-29 10:25:41 -06:00
AlessandroSpallina
15b61996b5 fix lbrycrd-cli command typo 2019-07-29 09:54:33 -06:00
Brannon King
328ee12e8b made a new "claims" logging category (off by default) 2019-07-29 09:47:33 -06:00
Brannon King
6259378466 renamed some of the cache fields 2019-07-29 09:23:56 -06:00
Anthony Fieroni
e5c8b6b8ff Better use copies on iterate claim and support re-add
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-29 09:23:56 -06:00
Anthony Fieroni
c02b04f120 A bit more cleanup
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-29 09:23:56 -06:00
Anthony Fieroni
216cc51825 Code refactor
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-29 09:23:56 -06:00
Brannon King
b1aa5e04e1 revert regtest expiration change 2019-07-22 14:27:24 -06:00
Anthony Fieroni
0947307e14 Fix consensus
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-22 14:27:24 -06:00
Anthony Fieroni
966e7386d9 Fix expiration fork usage
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-22 14:27:24 -06:00
Brannon King
05381f2f5b stop overwriting releases when tags are rebased
and stop putting timestamps in the zips


reverted csv, segwit numbers


apparently the overwrite is necessary


going to do releases manually
2019-07-22 13:37:11 -06:00
Brannon King
61a024e182 moved cmake build to a subfolder 2019-07-19 12:21:59 -06:00
Brannon King
cf42fa1566 upped versions used 2019-07-19 12:21:59 -06:00
Anthony Fieroni
64f228ebbb Use sed instead of patch
Prefer system wide packages

Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-19 12:21:59 -06:00
Anthony Fieroni
cd772f57ff Fix icu and boost builds
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-19 12:21:59 -06:00
Anthony Fieroni
559319048b Fix openssl configure
Do not search icu when boost is found system wide
Add cmake variables for tests, wallet and bench options

Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-19 12:21:59 -06:00
Anthony Fieroni
d432936543 Add cmake build system
Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
2019-07-19 12:21:59 -06:00
Brannon King
3fa6588657 added yiimp pool instructions 2019-07-19 12:21:27 -06:00
Brannon King
63d72460c0 fix unavailable param on darwin 2019-07-19 12:21:07 -06:00
Brannon King
141f1400dc fix crash on cli help
fix bech32 prefix


bumped version


improve trie read RAM use, fix a few compiler warnings


open segwit window until Jan 2020


work around Windows ICU build issue


upped the soft fork thresh length to a week


open testnet soft forks window


clarifying segwit to be manually enabled


same for testnet
2019-07-19 12:21:07 -06:00
Brannon King
90afa72147 eliminated fuzzer test on osx 2019-07-02 08:44:33 -06:00
Brannon King
3d9e8f595f changed unit test to deterministic rand 2019-07-02 00:15:27 -06:00
Brannon King
cce0fde619 updated build docker file to include libc++abi 2019-07-01 23:39:24 -06:00
Brannon King
41417bf469 bumped version 2019-07-01 15:46:28 -06:00
Brannon King
09e2ba2d68 post-merge fixes 2019-07-01 14:44:28 -06:00
lbrynaut
c18f0ed8ea Add a test for locktime transactions. 2019-07-01 14:44:28 -06:00
Brannon King
9a67b514c9 flattening prefix trie work
put getclaimsintrie back as deprecated


added test for adding a lot of data to the claimtrie


updated unit test to dodge expiration fork
2019-07-01 14:44:28 -06:00
Brannon King
8d955fdd22 added a test for putting a lot of data into the claimtrie
updated to dodge expiration fork
2019-07-01 14:44:28 -06:00
Brannon King
24269e177f support coinbasetxn capability in getblocktemplate 2019-07-01 14:44:28 -06:00
Brannon King
36ccf3e903 fix multi-build mkdir conflict 2019-07-01 14:44:28 -06:00
Brannon King
c751b27e54 upped the default validation period 2019-07-01 14:44:28 -06:00
Brannon King
f2462c74b3 restored the current "depends" and friends
fix windows test run


unit test round 2


attempting to fix ccache use on darwin


made ccache optional, no longer pulls clang on darwin build


fixing darwin build from Dockerfile


fixed missing nproc on OSX


updated readme to include regtest example, build examples


fix QT unit tests


made -j get passed down, added build.sh
2019-07-01 14:44:28 -06:00
Brannon King
4a4c091c55 fixed ancestors not all in claim trie on packageFees condition 2019-07-01 14:44:28 -06:00
Brannon King
7eddd19521 made cache match legacy_master, removed my bad assert in undo 2019-07-01 14:44:28 -06:00
Brannon King
9cb4064adb added claimtrie field back to getblocktemplate
I also included a test to ensure that we don't forget it next time
2019-07-01 14:44:28 -06:00
Brannon King
5a393f5b14 Undo compatibility (#281)
* added test for claimname RPC
2019-07-01 14:44:28 -06:00
lbrynaut
ffe68d634f Fix a bug that treats all claims as our own wallet txs. 2019-07-01 14:44:28 -06:00
Brannon King
e560834489 allow rest/block/height.json
changes from review, added integration test
2019-07-01 14:44:28 -06:00
Brannon King
141e583cce code reuse between miner & validator
originally from BvbFan
2019-07-01 14:44:28 -06:00
Brannon King
c7cabf0e96 pulled in a few minor keepers from the other rebase branch 2019-07-01 14:43:59 -06:00
Brannon King
21f065aff9 fixed small claim names coming out as numeric 2019-07-01 14:43:59 -06:00
Brannon King
40a3668c97 fixed slow-running unit tests 2019-07-01 14:43:59 -06:00
lbrynaut
a08967a572 Rebase lbry on to Bitcoin 0.17.
This contains significant rebase / merge / testing work by Naut
<lbrynaut@protonmail.com>, Anthony Fieroni <bvbfan@abv.bg> and Brannon
King <countprimes@gmail.com>.
2019-07-01 14:43:59 -06:00
Wladimir J. van der Laan
c165df198d
Merge #16163: 0.17.2 backport: build with -fstack-reuse=none and test case
b5a4abeca2 Add test for GCC bug 90348 (Pieter Wuille)
05fb9f7fbb build with -fstack-reuse=none (MarcoFalke)

Pull request description:

  Backports:

  * [build with -fstack-reuse=none](https://github.com/bitcoin/bitcoin/pull/15983)
  * [Add test for GCC bug 90348](https://github.com/bitcoin/bitcoin/pull/15985)

  b5a4abeca2 has been modified to replace the `setup_common.h` with `test_bitcoin.h` include.

ACKs for commit b5a4ab:
  Empact:
    ACK b5a4abeca2 by review of the linked PRs, the GCC bug and option, and visual inspection/comparison of the ported code
  laanwj:
    Code review + relevancy for backport ACK b5a4abeca2

Tree-SHA512: cdfdc6e2f208e8dc6a8a86cd7a7ed0f2a6f96604a0663efc970f580f693c1975353341fa8434b23de3cb681e03c6918e3342178752ed595d16a0ec50db913266
2019-06-13 12:57:51 +02:00
Pieter Wuille
b5a4abeca2
Add test for GCC bug 90348
Github-Pull: #15985
Rebased-From: 58e291cfad
2019-06-07 10:21:26 +02:00
MarcoFalke
05fb9f7fbb
build with -fstack-reuse=none
Github-Pull: #15983
Rebased-From: faf38bc056
2019-06-07 10:07:36 +02:00
243 changed files with 32296 additions and 3630 deletions

View file

@@ -1,8 +1,8 @@
<!-- This issue tracker is only for technical issues related to Bitcoin Core.
<!-- This issue tracker is only for technical issues related to lbrycrd (the LBRY blockchain).
General bitcoin questions and/or support requests are best directed to the Bitcoin StackExchange at https://bitcoin.stackexchange.com.
General questions and/or support requests are best directed to the community chat at https://chat.lbry.org.
For reporting security issues, please read instructions at https://bitcoincore.org/en/contact/.
For reporting security issues, please email security@lbry.com.
If the node is "stuck" during sync or giving "block checksum mismatch" errors, please ensure your hardware is stable by running memtest and observe CPU temperature with a load-test tool such as linpack before creating an issue! -->
@@ -13,7 +13,7 @@ If the node is "stuck" during sync or giving "block checksum mismatch" errors, p
<!--- How reliably can you reproduce the issue, what are the steps to do so? -->
<!-- What version of Bitcoin Core are you using, where did you get it (website, self-compiled, etc)? -->
<!-- What version of lbrycrd are you using, where did you get it (website, self-compiled, etc)? -->
<!-- What type of machine are you observing the error on (OS/CPU and disk type)? -->

21
.gitignore vendored
View file

@@ -1,13 +1,14 @@
*.tar.gz
*.exe
src/bitcoin
src/bitcoind
src/bitcoin-cli
src/bitcoin-tx
src/test/test_bitcoin
src/test/test_bitcoin_fuzzy
src/qt/test/test_bitcoin-qt
src/lbrycrd
src/lbrycrdd
src/lbrycrd-cli
src/lbrycrd-tx
src/test/test_lbrycrd
src/test/test_lbrycrd_fuzzy
src/qt/lbrycrd-qt
src/qt/test/test_lbrycrd-qt
# autoreconf
Makefile.in
@@ -61,7 +62,6 @@ src/qt/bitcoin-qt.includes
*.pyc
*.o
*.o-*
*.patch
*.a
*.pb.cc
*.pb.h
@@ -116,3 +116,8 @@ test/cache/*
libbitcoinconsensus.pc
contrib/devtools/split-debug.sh
.idea
cmake-build-*/
compile_commands\.json

View file

@@ -1,164 +1,88 @@
dist: trusty
os: linux
language: minimal
filter_secrets: false
cache:
ccache: true
directories:
- depends/built
- depends/sdk-sources
- $HOME/.ccache
- ${HOME}/ccache
stages:
- lint
- build
- test
env:
global:
- MAKEJOBS=-j3
- RUN_TESTS=false
- RUN_BENCH=false # Set to true for any one job that has debug enabled, to quickly check bench is not crashing or hitting assertions
- DOCKER_NAME_TAG=ubuntu:18.04
- LC_ALL=C.UTF-8
- BOOST_TEST_RANDOM=1$TRAVIS_BUILD_ID
- CCACHE_SIZE=100M
- CCACHE_TEMPDIR=/tmp/.ccache-temp
- CCACHE_COMPRESS=1
- CCACHE_DIR=$HOME/.ccache
- BASE_OUTDIR=$TRAVIS_BUILD_DIR/out
- SDK_URL=https://bitcoincore.org/depends-sources/sdks
- WINEDEBUG=fixme-all
- DOCKER_PACKAGES="build-essential libtool autotools-dev automake pkg-config bsdmainutils curl git ca-certificates ccache"
before_install:
- export PATH=$(echo $PATH | tr ':' "\n" | sed '/\/opt\/python/d' | tr "\n" ":" | sed "s|::|:|g")
- BEGIN_FOLD () { echo ""; CURRENT_FOLD_NAME=$1; echo "travis_fold:start:${CURRENT_FOLD_NAME}"; }
- END_FOLD () { RET=$?; echo "travis_fold:end:${CURRENT_FOLD_NAME}"; return $RET; }
install:
- travis_retry docker pull $DOCKER_NAME_TAG
- env | grep -E '^(CCACHE_|WINEDEBUG|LC_ALL|BOOST_TEST_RANDOM|CONFIG_SHELL)' | tee /tmp/env
- if [[ $HOST = *-mingw32 ]]; then DOCKER_ADMIN="--cap-add SYS_ADMIN"; fi
- DOCKER_ID=$(docker run $DOCKER_ADMIN -idt --mount type=bind,src=$TRAVIS_BUILD_DIR,dst=$TRAVIS_BUILD_DIR --mount type=bind,src=$CCACHE_DIR,dst=$CCACHE_DIR -w $TRAVIS_BUILD_DIR --env-file /tmp/env $DOCKER_NAME_TAG)
- DOCKER_EXEC () { docker exec $DOCKER_ID bash -c "cd $PWD && $*"; }
- if [ -n "$DPKG_ADD_ARCH" ]; then DOCKER_EXEC dpkg --add-architecture "$DPKG_ADD_ARCH" ; fi
- travis_retry DOCKER_EXEC apt-get update
- travis_retry DOCKER_EXEC apt-get install --no-install-recommends --no-upgrade -qq $PACKAGES $DOCKER_PACKAGES
before_script:
- DOCKER_EXEC echo \> \$HOME/.bitcoin # Make sure default datadir does not exist and is never read by creating a dummy file
- mkdir -p depends/SDKs depends/sdk-sources
- if [ -n "$OSX_SDK" -a ! -f depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz ]; then curl --location --fail $SDK_URL/MacOSX${OSX_SDK}.sdk.tar.gz -o depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz; fi
- if [ -n "$OSX_SDK" -a -f depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz ]; then tar -C depends/SDKs -xf depends/sdk-sources/MacOSX${OSX_SDK}.sdk.tar.gz; fi
- if [[ $HOST = *-mingw32 ]]; then DOCKER_EXEC update-alternatives --set $HOST-g++ \$\(which $HOST-g++-posix\); fi
- if [ -z "$NO_DEPENDS" ]; then DOCKER_EXEC CONFIG_SHELL= make $MAKEJOBS -C depends HOST=$HOST $DEP_OPTS; fi
script:
- export TRAVIS_COMMIT_LOG=`git log --format=fuller -1`
- OUTDIR=$BASE_OUTDIR/$TRAVIS_PULL_REQUEST/$TRAVIS_JOB_NUMBER-$HOST
- BITCOIN_CONFIG_ALL="--disable-dependency-tracking --prefix=$TRAVIS_BUILD_DIR/depends/$HOST --bindir=$OUTDIR/bin --libdir=$OUTDIR/lib"
- if [ -z "$NO_DEPENDS" ]; then DOCKER_EXEC ccache --max-size=$CCACHE_SIZE; fi
- BEGIN_FOLD autogen; test -n "$CONFIG_SHELL" && DOCKER_EXEC "$CONFIG_SHELL" -c "./autogen.sh" || DOCKER_EXEC ./autogen.sh; END_FOLD
- mkdir build && cd build
- BEGIN_FOLD configure; DOCKER_EXEC ../configure --cache-file=config.cache $BITCOIN_CONFIG_ALL $BITCOIN_CONFIG || ( cat config.log && false); END_FOLD
- BEGIN_FOLD distdir; DOCKER_EXEC make distdir VERSION=$HOST; END_FOLD
- cd bitcoin-$HOST
- BEGIN_FOLD configure; DOCKER_EXEC ./configure --cache-file=../config.cache $BITCOIN_CONFIG_ALL $BITCOIN_CONFIG || ( cat config.log && false); END_FOLD
- BEGIN_FOLD build; DOCKER_EXEC make $MAKEJOBS $GOAL || ( echo "Build failure. Verbose build follows." && DOCKER_EXEC make $GOAL V=1 ; false ); END_FOLD
- if [ "$RUN_TESTS" = "true" ]; then BEGIN_FOLD unit-tests; DOCKER_EXEC LD_LIBRARY_PATH=$TRAVIS_BUILD_DIR/depends/$HOST/lib make $MAKEJOBS check VERBOSE=1; END_FOLD; fi
- if [ "$RUN_BENCH" = "true" ]; then BEGIN_FOLD bench; DOCKER_EXEC LD_LIBRARY_PATH=$TRAVIS_BUILD_DIR/depends/$HOST/lib $OUTDIR/bin/bench_bitcoin -scaling=0.001 ; END_FOLD; fi
- if [ "$TRAVIS_EVENT_TYPE" = "cron" ]; then extended="--extended --exclude feature_pruning,feature_dbcrash"; fi
- if [ "$RUN_TESTS" = "true" ]; then BEGIN_FOLD functional-tests; DOCKER_EXEC test/functional/test_runner.py --combinedlogslen=4000 --coverage --quiet --failfast ${extended}; END_FOLD; fi
after_script:
- echo $TRAVIS_COMMIT_RANGE
- echo $TRAVIS_COMMIT_LOG
jobs:
include:
# ARM
- stage: test
env: >-
HOST=arm-linux-gnueabihf
PACKAGES="g++-arm-linux-gnueabihf"
DEP_OPTS="NO_QT=1"
GOAL="install"
BITCOIN_CONFIG="--enable-glibc-back-compat --enable-reduce-exports"
# Win32
- stage: test
env: >-
HOST=i686-w64-mingw32
DPKG_ADD_ARCH="i386"
DEP_OPTS="NO_QT=1"
PACKAGES="python3 nsis g++-mingw-w64-i686 wine-binfmt wine32"
RUN_TESTS=true
GOAL="install"
BITCOIN_CONFIG="--enable-reduce-exports"
# Win64
- stage: test
env: >-
HOST=x86_64-w64-mingw32
DEP_OPTS="NO_QT=1"
PACKAGES="python3 nsis g++-mingw-w64-x86-64 wine-binfmt wine64"
RUN_TESTS=true
GOAL="install"
BITCOIN_CONFIG="--enable-reduce-exports"
# 32-bit + dash
- stage: test
env: >-
HOST=i686-pc-linux-gnu
PACKAGES="g++-multilib python3-zmq"
DEP_OPTS="NO_QT=1"
RUN_TESTS=true
GOAL="install"
BITCOIN_CONFIG="--enable-zmq --enable-glibc-back-compat --enable-reduce-exports LDFLAGS=-static-libstdc++"
CONFIG_SHELL="/bin/dash"
# x86_64 Linux (uses qt5 dev package instead of depends Qt to speed up build and avoid timeout)
- stage: test
env: >-
HOST=x86_64-unknown-linux-gnu
PACKAGES="python3-zmq qtbase5-dev qttools5-dev-tools protobuf-compiler libdbus-1-dev libharfbuzz-dev libprotobuf-dev"
DEP_OPTS="NO_QT=1 NO_UPNP=1 DEBUG=1 ALLOW_HOST_PACKAGES=1"
RUN_TESTS=true
RUN_BENCH=true
GOAL="install"
BITCOIN_CONFIG="--enable-zmq --with-gui=qt5 --enable-glibc-back-compat --enable-reduce-exports --enable-debug CXXFLAGS=\"-g0 -O2\""
# x86_64 Linux (Qt5 & system libs)
- stage: test
env: >-
HOST=x86_64-unknown-linux-gnu
PACKAGES="python3-zmq qtbase5-dev qttools5-dev-tools libssl1.0-dev libevent-dev bsdmainutils libboost-system-dev libboost-filesystem-dev libboost-chrono-dev libboost-test-dev libboost-thread-dev libdb5.3++-dev libminiupnpc-dev libzmq3-dev libprotobuf-dev protobuf-compiler libqrencode-dev"
NO_DEPENDS=1
RUN_TESTS=true
GOAL="install"
BITCOIN_CONFIG="--enable-zmq --with-incompatible-bdb --enable-glibc-back-compat --enable-reduce-exports --with-gui=qt5 CPPFLAGS=-DDEBUG_LOCKORDER"
# x86_64 Linux, No wallet
- stage: test
env: >-
HOST=x86_64-unknown-linux-gnu
PACKAGES="python3"
DEP_OPTS="NO_WALLET=1"
RUN_TESTS=true
GOAL="install"
BITCOIN_CONFIG="--enable-glibc-back-compat --enable-reduce-exports"
# Cross-Mac
- stage: test
env: >-
HOST=x86_64-apple-darwin14
PACKAGES="cmake imagemagick libcap-dev librsvg2-bin libz-dev libbz2-dev libtiff-tools python-dev python3-setuptools-git"
OSX_SDK=10.11
GOAL="all deploy"
BITCOIN_CONFIG="--enable-gui --enable-reduce-exports --enable-werror"
- stage: lint
env:
cache: false
language: python
python: '3.6'
- &build-template
stage: build
name: linux
env: NAME=linux DOCKER_IMAGE=lbry/build_lbrycrd_gcc EXT=
os: linux
dist: xenial
language: minimal
services:
- docker
install:
- travis_retry pip install flake8==3.5.0
before_script:
- git fetch --unshallow
- mkdir -p ${HOME}/ccache
- docker pull $DOCKER_IMAGE
script:
- if [ "$TRAVIS_EVENT_TYPE" = "pull_request" ]; then test/lint/commit-script-check.sh $TRAVIS_COMMIT_RANGE; fi
- test/lint/git-subtree-check.sh src/crypto/ctaes
- test/lint/git-subtree-check.sh src/secp256k1
- test/lint/git-subtree-check.sh src/univalue
- test/lint/git-subtree-check.sh src/leveldb
- test/lint/check-doc.py
- test/lint/check-rpc-mappings.py .
- test/lint/lint-all.sh
- if [ "$TRAVIS_REPO_SLUG" = "bitcoin/bitcoin" -a "$TRAVIS_EVENT_TYPE" = "cron" ]; then
while read LINE; do travis_retry gpg --keyserver hkp://subset.pool.sks-keyservers.net --recv-keys $LINE; done < contrib/verify-commits/trusted-keys &&
travis_wait 50 contrib/verify-commits/verify-commits.py;
fi
- echo "build..."
- docker run -v "$(pwd):/lbrycrd" -v "${HOME}/ccache:/ccache" -w /lbrycrd -e CCACHE_DIR=/ccache ${DOCKER_IMAGE} packaging/build_${NAME}_64bit.sh
before_deploy:
- mkdir -p dist
- sudo zip -Xj dist/lbrycrd-${NAME}.zip src/lbrycrdd${EXT} src/lbrycrd-cli${EXT} src/lbrycrd-tx${EXT}
- sudo zip -Xj dist/lbrycrd-${NAME}-test.zip src/test/test_lbrycrd${EXT} src/test/test_lbrycrd_fuzzy${EXT}
- sha256sum dist/lbrycrd-${NAME}.zip
- sha256sum dist/lbrycrd-${NAME}-test.zip
deploy:
- provider: s3
access_key_id: AKIAICKFHNTR5RITASAQ
secret_access_key:
secure: Qfgs8vGnEUvgiZNP2S9zY8qHEzaDOceF/XTv32jRBOISWfTqOTE56DZbOp8WKHPAqn0dx04jKA1NfV9f06sXU1NVbiJ2VYISo6XAk0n3RBJL3/mhNxvut/zM2DHkFPljWTkWEColS0ZyA3m4eUyJvAw/i+mOBT/zDD/oIlS5Uo5l/x3LmF9fYBuei0ucwSQeNOr2wCMIl+pXrIU7B3lEzXh1asayW6A9y7DOqMLnrSQ7TLlSssbnhuhDVpFx0xxX/U2NPraotbGKdo3wwMbms/lluBe60I/LsDNp9/SZXMDXh2YLGUImr97octwpdzIMjF+kU7QAZJzM7grz8PU9+MQh2V5sn6Xsww2x4PdkmHGz/2FMzhrCrlPf5JCaPBH49G+w4/29HYmMrlimOOVx4qXCpQ/XtWWne/d6MF0qqT6JhdPuD9ohmTpxcHRkCe2fxUw6Yn3dj+/+YoCywAcwcBm5jLpAotmWoCmmcnm9rvB7bIuPPZAjJUZViCnyvwY4Tj3Fb+sOuK4b/O5D2+cuS+WgQRkN/RspYlXrXTIh8Efv/yhW5L9WdzG1OExJDw2hX5VTccRRgIKZxZp80U2eYqn2M07+1nU+ShX4kgiSon46k5cfacLgzLKWEyCxWSSTbsYcwRxvDEjtYy4wxAYx8+J3dgQPs/opDXoQTJMjud0=
bucket: build.lbry.io
upload-dir: lbrycrd/${TRAVIS_BRANCH}
acl: public_read
local_dir: dist
skip_cleanup: true
on:
repo: lbryio/lbrycrd
all_branches: true
- <<: *build-template
name: windows
env: NAME=windows DOCKER_IMAGE=lbry/build_lbrycrd EXT=.exe
- <<: *build-template
name: osx
env: NAME=darwin DOCKER_IMAGE=lbry/build_lbrycrd EXT=
before_install:
- mkdir -p ./depends/SDKs && pushd depends/SDKs && curl -C - ${MAC_OS_SDK} | tar --skip-old-files -xJ && popd
- &test-template
stage: test
env: NAME=linux
os: linux
dist: xenial
language: minimal
git:
clone: false
install:
- mkdir -p testrun && cd testrun
- curl http://build.lbry.io/lbrycrd/${TRAVIS_BRANCH}/lbrycrd-${NAME}-test.zip -o temp.zip
- unzip temp.zip
script: TRIEHASH_FUZZER_BLOCKS=1000 ./test_lbrycrd
- <<: *test-template
# os: windows # doesn't support secrets at the moment
os: linux
dist: xenial
env: NAME=windows
services:
- docker
script:
- docker pull lbry/wine
- docker run -v "$(pwd):/test" -e "WINEDEBUG=-all" -e "TRIEHASH_FUZZER_BLOCKS=1000" -it lbry/wine wine "/test/test_lbrycrd.exe"
- <<: *test-template
os: osx
osx_image: xcode8.3
env: NAME=darwin

15
LICENSE Normal file
View file

@@ -0,0 +1,15 @@
The MIT License (MIT)
Copyright (c) 2015-2019 LBRY Inc
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View file

@@ -25,7 +25,7 @@ BITCOIN_WIN_INSTALLER=$(PACKAGE)-$(PACKAGE_VERSION)-win$(WINDOWS_BITS)-setup$(EX
empty :=
space := $(empty) $(empty)
OSX_APP=Bitcoin-Qt.app
OSX_APP=LBRYcrd-Qt.app
OSX_VOLNAME = $(subst $(space),-,$(PACKAGE_NAME))
OSX_DMG = $(OSX_VOLNAME).dmg
OSX_BACKGROUND_SVG=background.svg

265
README.md
View file

@@ -1,76 +1,239 @@
Bitcoin Core integration/staging tree
=====================================
# LBRYcrd - The LBRY blockchain
[![Build Status](https://travis-ci.org/bitcoin/bitcoin.svg?branch=master)](https://travis-ci.org/bitcoin/bitcoin)
[![Build Status](https://travis-ci.org/lbryio/lbrycrd.svg?branch=master)](https://travis-ci.org/lbryio/lbrycrd)
[![MIT licensed](https://img.shields.io/dub/l/vibe-d.svg?style=flat)](https://github.com/lbryio/lbry-desktop/blob/master/LICENSE)
https://bitcoincore.org
LBRYcrd uses a blockchain similar to bitcoin's to implement an index and payment system for content on the LBRY network. It is a fork of [bitcoin core](https://github.com/bitcoin/bitcoin). In addition to the libraries used by bitcoin, LBRYcrd also uses [icu4c](https://github.com/unicode-org/icu/tree/master/icu4c).
What is Bitcoin?
----------------
Please read the [lbry.tech overview](https://lbry.tech/overview) for a general understanding of the LBRY pieces. From there you could read the [LBRY spec](https://spec.lbry.com/) for specifics on the data in the blockchain.
Bitcoin is an experimental digital currency that enables instant payments to
anyone, anywhere in the world. Bitcoin uses peer-to-peer technology to operate
with no central authority: managing transactions and issuing money are carried
out collectively by the network. Bitcoin Core is the name of open source
software which enables the use of this currency.
## Table of Contents
For more information, as well as an immediately useable, binary version of
the Bitcoin Core software, see https://bitcoincore.org/en/download/, or read the
[original whitepaper](https://bitcoincore.org/bitcoin.pdf).
1. [Installation](#installation)
2. [Usage](#usage)
1. [Examples](#examples)
2. [Data directory](#data-directory)
3. [Running from Source](#running-from-source)
1. [Ubuntu with pulled static dependencies](#ubuntu-with-pulled-static-dependencies)
2. [Ubuntu with local shared dependencies](#ubuntu-with-local-shared-dependencies)
3. [MacOS (cross-compiled)](<#macos-(cross-compiled)>)
4. [MacOS with local shared dependencies](#macos-with-local-shared-dependencies)
5. [Windows (cross-compiled)](<#windows-(cross-compiled)>)
6. [Use with CLion](#use-with-clion)
4. [Contributing](#contributing)
- [Testnet](#testnet)
5. [Mailing List](#mailing-list)
6. [License](#license)
7. [Security](#security)
8. [Contact](#contact)
License
-------
## Installation
Bitcoin Core is released under the terms of the MIT license. See [COPYING](COPYING) for more
information or see https://opensource.org/licenses/MIT.
Latest binaries are available from https://github.com/lbryio/lbrycrd/releases. There is no installation procedure; the CLI binaries will run as-is and will have any uncommon dependencies statically linked into the binary. The QT GUI is not supported. LBRYcrd is distributed as a collection of executable files; traditional installers are not provided.
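As a quick-start sketch (the archive name mirrors the `lbrycrd-linux.zip` artifact produced by the CI configuration later in this changeset; the asset attached to a particular release may be named differently):

```sh
# Unpack a downloaded release archive and run the node directly; there is
# nothing to install.
unzip lbrycrd-linux.zip -d lbrycrd
cd lbrycrd
./lbrycrdd -server -daemon   # start the node in the background
./lbrycrd-cli -getinfo       # basic information about the running node
./lbrycrd-cli stop           # shut the node down again
```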
Development Process
-------------------
## Usage
The `lbrycrdd` executable will start a LBRYcrd node and connect you to the LBRYcrd network. Use the `lbrycrd-cli` executable
to interact with lbrycrdd through the command line. Command-line help for both executables is available through
the "--help" flag (e.g. `lbrycrdd --help`). Examples:
#### Examples
Run `./lbrycrdd -server -daemon` to start lbrycrdd in the background.
Run `./lbrycrd-cli -getinfo` to check for some basic information about your LBRYcrd node.
Run `./lbrycrd-cli help` to get a list of all commands that you can run. To get help on specific commands run `./lbrycrd-cli [command_name] help`
Test locally:
```sh
./lbrycrdd -server -regtest -txindex # run this in its own window
./lbrycrd-cli -regtest generate 120 # mine 20 spendable coins
./lbrycrd-cli -regtest claimname my_name deadbeef 1 # hold a name claim with 1 coin
./lbrycrd-cli -regtest generate 1 # get that claim into the block
./lbrycrd-cli -regtest listnameclaims # show owned claims
./lbrycrd-cli -regtest getclaimsforname my_name # show claims under that name
./lbrycrd-cli -regtest stop # kill lbrycrdd
rm -fr ~/.lbrycrd/regtest/ # destroy regtest data
```
For further understanding of a "regtest" setup, see the local stack setup instructions here: https://lbry.tech/resources/regtest-setup
The CLI help is also browsable online at https://lbry.tech/api/blockchain
#### Data directory
Lbrycrdd will use the below default data directories (changeable with -datadir):
```sh
Windows: %APPDATA%\lbrycrd
Mac: ~/Library/Application Support/lbrycrd
Unix: ~/.lbrycrd
```
The data directory contains various things such as your default wallet (wallet.dat), debug logs (debug.log), and blockchain data. You can optionally create a configuration file lbrycrd.conf in the default data directory which will be used by default when running lbrycrdd.
For a list of configuration parameters, run `./lbrycrdd --help`. Below is a sample lbrycrd.conf to enable JSON RPC server on lbrycrdd.
```sh
rpcuser=lbry
rpcpassword=xyz123456790
daemon=1
server=1
txindex=1
```
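With that configuration in place, the JSON-RPC server can be exercised from the command line. A hedged example using `curl` and the credentials from the sample above (9245 is assumed here as the default mainnet RPC port; adjust `rpcport` and credentials to match your own configuration):

```sh
# Call the getblockcount RPC over HTTP JSON-RPC.
curl --user lbry:xyz123456790 \
     --data-binary '{"jsonrpc": "1.0", "id": "readme", "method": "getblockcount", "params": []}' \
     -H 'content-type: text/plain;' \
     http://127.0.0.1:9245/
```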
## Running from Source
The easiest way to compile is to utilize the Docker image that contains the necessary compilers: lbry/build_lbrycrd. This will allow you to reproduce the build as made on our build servers. In this sample we map a local lbrycrd folder and a local ccache folder inside the image:
```sh
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd
docker run -v "$(pwd):/lbrycrd" --rm -v "${HOME}/ccache:/ccache" -w /lbrycrd -e CCACHE_DIR=/ccache lbry/build_lbrycrd packaging/build_linux_64bit.sh
```
Some examples of compiling directly:
#### Ubuntu with pulled static dependencies
```sh
sudo apt install build-essential git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd
./packaging/build_linux_64bit.sh
./src/test/test_lbrycrd
```
Other Linux distros would be similar. The build shell script is fairly trivial; take a peek at its contents.
#### Ubuntu with local shared dependencies
Note: using untested dependencies may lead to conflicting results.
```sh
sudo add-apt-repository ppa:bitcoin/bitcoin
sudo apt-get update
sudo apt-get install libdb4.8-dev libdb4.8++-dev libicu-dev libssl-dev libevent-dev \
build-essential git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates \
libboost-system-dev libboost-filesystem-dev libboost-chrono-dev libboost-test-dev libboost-thread-dev
# optionally include libminiupnpc-dev libzmq3-dev
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd
./autogen.sh
./configure --enable-static --disable-shared --with-pic --without-gui CXXFLAGS="-O3 -march=native"
make -j$(nproc)
./src/lbrycrdd -server ...
```
#### MacOS (cross-compiled)
```sh
sudo apt-get install clang llvm git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates \
libboost-system-dev libboost-filesystem-dev libboost-chrono-dev libboost-test-dev libboost-thread-dev
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd
# download MacOS SDK from your favorite source
mkdir depends/SDKs
tar ... extract SDK to depends/SDKs/MacOSX10.11.sdk
./packaging/build_darwin_64bit.sh
```
Look in packaging/build_darwin_64bit.sh for further understanding.
#### MacOS with local shared dependencies
```sh
brew install boost berkeley-db@4 icu4c libevent
# fix conflict with gawk pulled first:
brew reinstall readline
brew reinstall gawk
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd/depends
make NO_QT=1
cd ..
./autogen.sh
CONFIG_SITE=$(pwd)/depends/x86_64-apple-darwin15.6.0/share/config.site ./configure --enable-static --disable-shared --with-pic --without-gui --enable-reduce-exports CXXFLAGS=-O2
make -j$(sysctl -n hw.ncpu)
```
#### Windows (cross-compiled)
Compiling on MS Windows (outside of WSL) is not supported. The Windows build is cross-compiled from Linux like so:
```sh
sudo apt-get install build-essential git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates \
g++-mingw-w64-x86-64 mingw-w64-x86-64-dev
update-alternatives --set x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix
git clone https://github.com/lbryio/lbrycrd.git
cd lbrycrd
./packaging/build_windows_64bit.sh
```
If you encounter any errors, please check `doc/build-*.md` for further instructions. If you're still stuck, [create an issue](https://github.com/lbryio/lbrycrd/issues/new) with the output of that command, your system info, and any other information you think might be helpful. The scripts in the packaging folder are simple and will shed extra light on the build process as needed.
#### Use with CLion
CLion has not traditionally supported Autotools projects, although some progress on that is now in the works. We do include a cmake build file for compiling lbrycrd. See contrib/cmake. Alas, CLion doesn't support external projects in cmake, so that particular approach is also insufficient. CLion does support "compile_commands.json" projects. Fortunately, this can be easily generated for lbrycrd like so:
```sh
pip install --user compiledb
./autogen.sh && ./configure --enable-static=no --enable-shared --with-pic --without-gui CXXFLAGS="-O0 -g" CFLAGS="-O0 -g" # or whatever normal lbrycrd config
compiledb make -j10
```
Then open the newly generated compile_commands.json file as a project in CLion. Debugging is supported if you compiled with `-g`. To enable that you will need to create a target in CLion by going to File -> Settings -> Build -> Custom Build Targets. Add an empty target with your choice of name. From there you can go to "Edit Configurations", typically found in a drop-down at the top of the editor. Add a Custom Build Application, select your new target, select the compiled file (i.e. test_lbrycrd or lbrycrdd, etc), and then add any necessary command line parameters. Ensure that there is nothing in the "Before launch" section.
## Contributing
Contributions to this project are welcome, encouraged, and compensated. For more details, see [https://lbry.tech/contribute](https://lbry.tech/contribute)
We follow the same coding guidelines as documented by Bitcoin Core, see [here](/doc/developer-notes.md). To run an automated code formatting check, try:
`git diff -U0 master -- '*.h' '*.cpp' | ./contrib/devtools/clang-format-diff.py -p1`. This will check any commits not on master for proper code formatting.
We try to avoid altering parts of the code that are inherited from Bitcoin Core unless absolutely necessary. This will make it easier to merge changes from Bitcoin Core. If commits are expected not to be merged upstream (i.e. we broke up a commit from Bitcoin Core in order to use a single feature in it), the commit message must contain the string "NOT FOR UPSTREAM MERGE".
The `master` branch is regularly built and tested, but is not guaranteed to be
completely stable. [Tags](https://github.com/bitcoin/bitcoin/tags) are created
regularly to indicate new official, stable release versions of Bitcoin Core.
The contribution workflow is described in [CONTRIBUTING.md](CONTRIBUTING.md).
Testing
-------
completely stable. [Releases](https://github.com/lbryio/lbrycrd/releases) are created
regularly to indicate new official, stable release versions.
Testing and code review is the bottleneck for development; we get more pull
requests than we can review and test on short notice. Please be patient and help out by testing
other people's pull requests, and remember this is a security-critical project where any mistake might cost people
lots of money.
lots of money. Developers are strongly encouraged to write [unit tests](/src/test/README.md) for new code and to
submit new unit tests for old code. Unit tests are compiled by default and can be run with `src/test/test_lbrycrd`
### Automated Testing
The Travis CI system makes sure that every pull request is built, and that unit and sanity tests are automatically run. See https://travis-ci.org/lbryio/lbrycrd
Developers are strongly encouraged to write [unit tests](src/test/README.md) for new code, and to
submit new unit tests for old code. Unit tests can be compiled and run
(assuming they weren't disabled in configure) with: `make check`. Further details on running
and extending unit tests can be found in [/src/test/README.md](/src/test/README.md).
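For example, the standalone test binary accepts the usual Boost.Test options, so a single suite can be run in isolation; the suite name below is only illustrative:

```sh
# Run everything, then re-run just one suite with verbose per-suite logging.
./src/test/test_lbrycrd
./src/test/test_lbrycrd --run_test=util_tests --log_level=test_suite
```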
### Testnet
There are also [regression and integration tests](/test), written
in Python, that are run automatically on the build server.
These tests can be run (if the [test dependencies](/test) are installed) with: `test/functional/test_runner.py`
Testnet is maintained for testing purposes and can be accessed using the command `./lbrycrdd -testnet`. If you would like to obtain testnet credits, please contact brannon@lbry.com or grin@lbry.com.
The Travis CI system makes sure that every pull request is built for Windows, Linux, and macOS, and that unit/sanity tests are run automatically.
It is easy to solo mine on testnet. (It's easy on mainnet too, but much harder to win.) For instructions see [SGMiner](https://github.com/lbryio/sgminer-gm) and [Mining Contributions](https://github.com/lbryio/lbrycrd/tree/master/contrib/mining)
### Manual Quality Assurance (QA) Testing
## Mailing List
Changes should be tested by somebody other than the developer who wrote the
code. This is especially important for large or high-risk changes. It is useful
to add a test plan to the pull request description if testing the changes is
not straightforward.
We maintain a mailing list for notifications of upgrades, security issues, and soft/hard forks. To join, visit [https://lbry.com/forklist](https://lbry.com/forklist).
Translations
------------
## License
Changes to translations as well as new translations can be submitted to
[Bitcoin Core's Transifex page](https://www.transifex.com/projects/p/bitcoin/).
This project is MIT licensed. For the full license, see [LICENSE](LICENSE).
Translations are periodically pulled from Transifex and merged into the git repository. See the
[translation process](doc/translation_process.md) for details on how this works.
## Security
**Important**: We do not accept translation changes as GitHub pull requests because the next
pull from Transifex would automatically overwrite them again.
We take security seriously. Please contact [security@lbry.com](mailto:security@lbry.com) regarding any security issues.
Our PGP key is [here](https://lbry.com/faq/pgp-key) if you need it.
Translators should also subscribe to the [mailing list](https://groups.google.com/forum/#!forum/bitcoin-translators).
## Contact
The primary contact for this project is [@BrannonKing](https://github.com/BrannonKing) (brannon@lbry.com)

View file

@@ -7,7 +7,7 @@ export LC_ALL=C
set -e
srcdir="$(dirname $0)"
cd "$srcdir"
if [ -z ${LIBTOOLIZE} ] && GLIBTOOLIZE="`which glibtoolize 2>/dev/null`"; then
if [ -z ${LIBTOOLIZE} ] && GLIBTOOLIZE="$(which glibtoolize 2>/dev/null)"; then
LIBTOOLIZE="${GLIBTOOLIZE}"
export LIBTOOLIZE
fi

View file

@@ -0,0 +1,120 @@
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_boost_locale.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_BOOST_LOCALE
#
# DESCRIPTION
#
# Test for System library from the Boost C++ libraries. The macro requires
# a preceding call to AX_BOOST_BASE. Further documentation is available at
# <http://randspringer.de/boost/index.html>.
#
# This macro calls:
#
# AC_SUBST(BOOST_LOCALE_LIB)
#
# And sets:
#
# HAVE_BOOST_LOCALE
#
# LICENSE
#
# Copyright (c) 2012 Xiyue Deng <manphiz@gmail.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 2
AC_DEFUN([AX_BOOST_LOCALE],
[
AC_ARG_WITH([boost-locale],
AS_HELP_STRING([--with-boost-locale@<:@=special-lib@:>@],
[use the Locale library from boost - it is possible to specify a certain library for the linker
e.g. --with-boost-locale=boost_locale-gcc-mt ]),
[
if test "$withval" = "no"; then
want_boost="no"
elif test "$withval" = "yes"; then
want_boost="yes"
ax_boost_user_locale_lib=""
else
want_boost="yes"
ax_boost_user_locale_lib="$withval"
fi
],
[want_boost="yes"]
)
if test "x$want_boost" = "xyes"; then
AC_REQUIRE([AC_PROG_CC])
AC_REQUIRE([AC_CANONICAL_BUILD])
CPPFLAGS_SAVED="$CPPFLAGS"
CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
export CPPFLAGS
LDFLAGS_SAVED="$LDFLAGS"
LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
export LDFLAGS
AC_CACHE_CHECK(whether the Boost::Locale library is available,
ax_cv_boost_locale,
[AC_LANG_PUSH([C++])
CXXFLAGS_SAVE=$CXXFLAGS
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[@%:@include <boost/locale.hpp>]],
[[boost::locale::generator gen;
std::locale::global(gen(""));]])],
ax_cv_boost_locale=yes, ax_cv_boost_locale=no)
CXXFLAGS=$CXXFLAGS_SAVE
AC_LANG_POP([C++])
])
if test "x$ax_cv_boost_locale" = "xyes"; then
AC_SUBST(BOOST_CPPFLAGS)
AC_DEFINE(HAVE_BOOST_LOCALE,,[define if the Boost::Locale library is available])
BOOSTLIBDIR=`echo $BOOST_LDFLAGS | sed -e 's/@<:@^\/@:>@*//'`
LDFLAGS_SAVE=$LDFLAGS
if test "x$ax_boost_user_locale_lib" = "x"; then
for libextension in `ls $BOOSTLIBDIR/libboost_locale*.so* $BOOSTLIBDIR/libboost_locale*.dylib* $BOOSTLIBDIR/libboost_locale*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^lib\(boost_locale.*\)\.so.*$;\1;' -e 's;^lib\(boost_locale.*\)\.dylib.*$;\1;' -e 's;^lib\(boost_locale.*\)\.a.*$;\1;'` ; do
ax_lib=${libextension}
AC_CHECK_LIB($ax_lib, exit,
[BOOST_LOCALE_LIB="-l$ax_lib"; AC_SUBST(BOOST_LOCALE_LIB) link_locale="yes"; break],
[link_locale="no"])
done
if test "x$link_locale" != "xyes"; then
for libextension in `ls $BOOSTLIBDIR/boost_locale*.dll* $BOOSTLIBDIR/boost_locale*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^\(boost_locale.*\)\.dll.*$;\1;' -e 's;^\(boost_locale.*\)\.a.*$;\1;'` ; do
ax_lib=${libextension}
AC_CHECK_LIB($ax_lib, exit,
[BOOST_LOCALE_LIB="-l$ax_lib"; AC_SUBST(BOOST_LOCALE_LIB) link_locale="yes"; break],
[link_locale="no"])
done
fi
else
for ax_lib in $ax_boost_user_locale_lib boost_locale-$ax_boost_user_locale_lib; do
AC_CHECK_LIB($ax_lib, exit,
[BOOST_LOCALE_LIB="-l$ax_lib"; AC_SUBST(BOOST_LOCALE_LIB) link_locale="yes"; break],
[link_locale="no"])
done
fi
if test "x$ax_lib" = "x"; then
AC_MSG_ERROR(Could not find a version of the library!)
fi
if test "x$link_locale" = "xno"; then
AC_MSG_ERROR(Could not link against $ax_lib !)
fi
fi
CPPFLAGS="$CPPFLAGS_SAVED"
LDFLAGS="$LDFLAGS_SAVED"
fi
])

82
build.sh Executable file
View file

@@ -0,0 +1,82 @@
#!/usr/bin/env bash
set -o pipefail
function HELP {
echo "Use this command to build lbrycrd."
echo "Dependencies will be pulled and built first."
echo "Use autogen & configure directly to avoid this and use system shared libraries instead."
echo
echo "Optional arguments:"
echo "-jN: number of parallel build jobs"
echo "-q: compile the QT GUI (not working at present)"
echo "-d: force a rebuild of dependencies"
echo "-u: run the unit tests when done"
echo "-g: include debug symbols"
echo "-h: show help"
exit 1
}
REBUILD_DEPENDENCIES=false
RUN_UNIT_TESTS=false
COMPILE_WITH_DEBUG=false
DO_NOT_COMPILE_THE_GUI="NO_QT=1"
WITH_COMPILE_THE_GUI=no
if test -z $PARALLEL_JOBS; then
PARALLEL_JOBS=$(expr $(getconf _NPROCESSORS_ONLN) / 2 + 1)
fi
while getopts j:qdugh FLAG; do
case ${FLAG} in
j)
PARALLEL_JOBS=$OPTARG
;;
q)
DO_NOT_COMPILE_THE_GUI=
WITH_COMPILE_THE_GUI=qt5
;;
g)
COMPILE_WITH_DEBUG=true
;;
u)
RUN_UNIT_TESTS=true
;;
d)
REBUILD_DEPENDENCIES=true
;;
h)
HELP
;;
\?)
HELP
;;
esac
done
echo "Compiling with ${PARALLEL_JOBS} jobs in parallel."
BUILD_FLAGS=(CXXFLAGS="-O3 -march=native")
if test "$COMPILE_WITH_DEBUG" = true; then
BUILD_FLAGS=(--with-debug CXXFLAGS="-Og -g")
fi
cd depends
if test "$REBUILD_DEPENDENCIES" = true; then
make clean
fi
make -j${PARALLEL_JOBS} ${DO_NOT_COMPILE_THE_GUI} V=1
cd ..
LC_ALL=C autoreconf --install
CONFIG_SITE=$(pwd)/depends/$($(pwd)/depends/config.guess)/share/config.site ./configure --enable-reduce-exports \
--enable-static --disable-shared --with-pic --with-gui=${WITH_COMPILE_THE_GUI} "${BUILD_FLAGS[@]}"
if test $? -eq 0; then
make -j${PARALLEL_JOBS}
fi
if test $? -eq 0 && "$RUN_UNIT_TESTS" = true; then
./src/test/test_lbrycrd
fi

View file

@@ -2,22 +2,22 @@ dnl require autoconf 2.60 (AS_ECHO/AS_ECHO_N)
AC_PREREQ([2.60])
define(_CLIENT_VERSION_MAJOR, 0)
define(_CLIENT_VERSION_MINOR, 17)
define(_CLIENT_VERSION_REVISION, 1)
define(_CLIENT_VERSION_BUILD, 0)
define(_CLIENT_VERSION_REVISION, 3)
define(_CLIENT_VERSION_BUILD, 3)
define(_CLIENT_VERSION_IS_RELEASE, true)
define(_COPYRIGHT_YEAR, 2018)
define(_COPYRIGHT_YEAR, 2021)
define(_COPYRIGHT_HOLDERS,[The %s developers])
define(_COPYRIGHT_HOLDERS_SUBSTITUTION,[[Bitcoin Core]])
AC_INIT([Bitcoin Core],[_CLIENT_VERSION_MAJOR._CLIENT_VERSION_MINOR._CLIENT_VERSION_REVISION],[https://github.com/bitcoin/bitcoin/issues],[bitcoin],[https://bitcoincore.org/])
define(_COPYRIGHT_HOLDERS_SUBSTITUTION,[[LBRYcrd Core]])
AC_INIT([LBRYcrd Core],[_CLIENT_VERSION_MAJOR._CLIENT_VERSION_MINOR._CLIENT_VERSION_REVISION],[https://github.com/lbryio/lbrycrd/issues],[lbrycrd],[https://lbry.com/])
AC_CONFIG_SRCDIR([src/validation.cpp])
AC_CONFIG_HEADERS([src/config/bitcoin-config.h])
AC_CONFIG_AUX_DIR([build-aux])
AC_CONFIG_MACRO_DIR([build-aux/m4])
BITCOIN_DAEMON_NAME=bitcoind
BITCOIN_GUI_NAME=bitcoin-qt
BITCOIN_CLI_NAME=bitcoin-cli
BITCOIN_TX_NAME=bitcoin-tx
BITCOIN_DAEMON_NAME=lbrycrdd
BITCOIN_GUI_NAME=lbrycrd-qt
BITCOIN_CLI_NAME=lbrycrd-cli
BITCOIN_TX_NAME=lbrycrd-tx
dnl Unless the user specified ARFLAGS, force it to be cr
AC_ARG_VAR(ARFLAGS, [Flags for the archiver, defaults to <cr> if not set])
@@ -146,6 +146,12 @@ AC_ARG_WITH([qrencode],
[use_qr=$withval],
[use_qr=auto])
AC_ARG_WITH([icu],
[AS_HELP_STRING([--with-icu],
[Required ICU root path])],
[ICU_PREFIX=$withval],
[ICU_PREFIX=auto])
AC_ARG_ENABLE([hardening],
[AS_HELP_STRING([--disable-hardening],
[do not attempt to harden the resulting executables (default is to harden when possible)])],
@@ -395,7 +401,7 @@ CPPFLAGS="$CPPFLAGS -DHAVE_BUILD_INFO -D__STDC_FORMAT_MACROS"
AC_ARG_WITH([utils],
[AS_HELP_STRING([--with-utils],
[build bitcoin-cli bitcoin-tx (default=yes)])],
[build lbrycrd-cli lbrycrd-tx (default=yes)])],
[build_bitcoin_utils=$withval],
[build_bitcoin_utils=yes])
@@ -640,6 +646,10 @@ if test x$ac_cv_sys_large_files != x &&
CPPFLAGS="$CPPFLAGS -D_LARGE_FILES=$ac_cv_sys_large_files"
fi
AS_IF([test x$enable_static != x && test x$LDFLAGS != xdarwin], [
# darwin should be using -stdlib=libc++ (and may need a -static instead)
AX_CHECK_LINK_FLAG([[-static-libstdc++]], [LDFLAGS="$LDFLAGS -static-libstdc++"])
])
AX_CHECK_LINK_FLAG([[-Wl,--large-address-aware]], [LDFLAGS="$LDFLAGS -Wl,--large-address-aware"])
AX_GCC_FUNC_ATTRIBUTE([visibility])
@@ -693,10 +703,12 @@ if test x$TARGET_OS != xwindows; then
AX_CHECK_COMPILE_FLAG([-fPIC],[PIC_FLAGS="-fPIC"])
fi
# All versions of gcc that we commonly use for building are subject to bug
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90348. To work around that, set
# -fstack-reuse=none for all gcc builds. (Only gcc understands this flag)
AX_CHECK_COMPILE_FLAG([-fstack-reuse=none],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -fstack-reuse=none"])
if test x$use_hardening != xno; then
use_hardening=yes
AX_CHECK_COMPILE_FLAG([-Wstack-protector],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -Wstack-protector"])
AX_CHECK_COMPILE_FLAG([-fstack-protector-all],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -fstack-protector-all"])
AX_CHECK_PREPROC_FLAG([-D_FORTIFY_SOURCE=2],[
AX_CHECK_PREPROC_FLAG([-U_FORTIFY_SOURCE],[
@@ -716,6 +728,10 @@ if test x$use_hardening != xno; then
*mingw*)
AC_CHECK_LIB([ssp], [main],, AC_MSG_ERROR(libssp missing))
;;
*)
AX_CHECK_COMPILE_FLAG([-Wstack-protector],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -Wstack-protector"])
AX_CHECK_COMPILE_FLAG([-fstack-protector-all],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -fstack-protector-all"])
;;
esac
fi
@@ -779,7 +795,7 @@ AC_LINK_IFELSE([AC_LANG_SOURCE([
)
TEMP_LDFLAGS="$LDFLAGS"
LDFLAGS="$TEMP_LDFLAGS $PTHREAD_CFLAGS"
LDFLAGS="$TEMP_LDFLAGS $PTHREAD_LIBS"
AC_MSG_CHECKING([for thread_local support])
AC_LINK_IFELSE([AC_LANG_SOURCE([
#include <thread>
@@ -895,6 +911,7 @@ AX_BOOST_SYSTEM
AX_BOOST_FILESYSTEM
AX_BOOST_THREAD
AX_BOOST_CHRONO
AX_BOOST_LOCALE
dnl Boost 1.56 through 1.62 allow using std::atomic instead of its own atomic
dnl counter implementations. In 1.63 and later the std::atomic approach is default.
@@ -961,8 +978,7 @@ fi
if test x$use_boost = xyes; then
BOOST_LIBS="$BOOST_LDFLAGS $BOOST_SYSTEM_LIB $BOOST_FILESYSTEM_LIB $BOOST_THREAD_LIB $BOOST_CHRONO_LIB"
BOOST_LIBS="$BOOST_LDFLAGS $BOOST_SYSTEM_LIB $BOOST_FILESYSTEM_LIB $BOOST_LOCALE_LIB $BOOST_THREAD_LIB $BOOST_CHRONO_LIB"
dnl If boost (prior to 1.57) was built without c++11, it emulated scoped enums
dnl using c++98 constructs. Unfortunately, this implementation detail leaked into
@@ -1052,6 +1068,26 @@ fi
fi
# the plan for dealing with ICU:
# if the user specifies an ICU prefix, use that one.
# if the user did not specify an ICU prefix but did specify a general prefix use that one.
# otherwise use pkg_config if it's available.
# well, actually, things seem to work fine without this fallback to pkg_config so we'll leave that out for now.
# note: in order to use AC_CHECK_LIB we have to override CPPFLAGS and LDFLAGS
# however, we don't want to keep those overridden after our checks;
# we want to rely on ICU_CPPFLAGS and ICU_LIBS after that
# to further complicate matters there are at least three different naming conventions for ICU libraries
# to simplify things we'll just check one from each convention
AS_IF([test "x${prefix}" != "xNONE" && test "x$ICU_PREFIX" == "xauto"], [
ICU_PREFIX="${prefix}"
])
AS_IF([test "x$ICU_PREFIX" != "xauto"], [
ICU_CPPFLAGS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig pkg-config icu-io icu-uc icu-i18n --cflags)"
ICU_LIBS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig pkg-config icu-io icu-uc icu-i18n --libs)"
])
if test x$use_pkgconfig = xyes; then
: dnl
m4_ifdef(
@@ -1079,9 +1115,15 @@ if test x$use_pkgconfig = xyes; then
else
AC_DEFINE_UNQUOTED([ENABLE_ZMQ],[0],[Define to 1 to enable ZMQ functions])
fi
if test "x$ICU_PREFIX" == "xauto"; then
PKG_CHECK_MODULES([ICU], [icu-io, icu-uc, icu-i18n])
fi
]
)
else
else # probably compiling on Windows or cross-compiling for it:
AC_MSG_NOTICE([Configuring for MinGW])
AC_CHECK_HEADER([openssl/crypto.h],,AC_MSG_ERROR(libcrypto headers missing))
AC_CHECK_LIB([crypto], [main],CRYPTO_LIBS=-lcrypto, AC_MSG_ERROR(libcrypto missing))
@@ -1126,6 +1168,9 @@ else
fi
fi
AC_MSG_NOTICE([Using ICU_CPPFLAGS="$ICU_CPPFLAGS"])
AC_MSG_NOTICE([Using ICU_LIBS="$ICU_LIBS"])
save_CXXFLAGS="${CXXFLAGS}"
CXXFLAGS="${CXXFLAGS} ${CRYPTO_CFLAGS} ${SSL_CFLAGS}"
AC_CHECK_DECLS([EVP_MD_CTX_new],,,[AC_INCLUDES_DEFAULT
@@ -1187,7 +1232,7 @@ AC_MSG_CHECKING([whether to build bitcoind])
AM_CONDITIONAL([BUILD_BITCOIND], [test x$build_bitcoind = xyes])
AC_MSG_RESULT($build_bitcoind)
AC_MSG_CHECKING([whether to build utils (bitcoin-cli bitcoin-tx)])
AC_MSG_CHECKING([whether to build utils (lbrycrd-cli lbrycrd-tx)])
AM_CONDITIONAL([BUILD_BITCOIN_UTILS], [test x$build_bitcoin_utils = xyes])
AC_MSG_RESULT($build_bitcoin_utils)
@@ -1288,7 +1333,7 @@ if test x$bitcoin_enable_qt != xno; then
AC_MSG_WARN("xgettext is required to update qt translations")
fi
AC_MSG_CHECKING([whether to build test_bitcoin-qt])
AC_MSG_CHECKING([whether to build test_lbrycrd-qt])
if test x$use_gui_tests$bitcoin_enable_qt_test = xyesyes; then
AC_MSG_RESULT([yes])
BUILD_TEST_QT="yes"
@@ -1299,7 +1344,7 @@ fi
AM_CONDITIONAL([ENABLE_ZMQ], [test "x$use_zmq" = "xyes"])
AC_MSG_CHECKING([whether to build test_bitcoin])
AC_MSG_CHECKING([whether to build test_lbrycrd])
if test x$use_tests = xyes; then
AC_MSG_RESULT([yes])
BUILD_TEST="yes"
@ -1385,11 +1430,14 @@ AC_SUBST(LIBTOOL_APP_LDFLAGS)
AC_SUBST(USE_UPNP)
AC_SUBST(USE_QRCODE)
AC_SUBST(BOOST_LIBS)
AC_SUBST(ICU_CPPFLAGS)
AC_SUBST(ICU_LIBS)
AC_SUBST(TESTDEFS)
AC_SUBST(LEVELDB_TARGET_FLAGS)
AC_SUBST(MINIUPNPC_CPPFLAGS)
AC_SUBST(MINIUPNPC_LIBS)
AC_SUBST(CRYPTO_LIBS)
AC_SUBST(SSL_CFLAGS)
AC_SUBST(SSL_LIBS)
AC_SUBST(EVENT_LIBS)
AC_SUBST(EVENT_PTHREADS_LIBS)
@ -1431,7 +1479,7 @@ if test x$need_bundled_univalue = xyes; then
AC_CONFIG_SUBDIRS([src/univalue])
fi
ac_configure_args="${ac_configure_args} --disable-shared --with-pic --with-bignum=no --enable-module-recovery --disable-jni"
ac_configure_args="${ac_configure_args} --enable-static --disable-shared --with-pic --with-bignum=no --enable-module-recovery --disable-jni"
AC_CONFIG_SUBDIRS([src/secp256k1])
AC_OUTPUT


@ -0,0 +1,185 @@
cmake_minimum_required(VERSION 3.10)
project(lbrycrd)
set(CMAKE_CXX_STANDARD 11)
include(cmake/CPM.cmake)
include(ExternalProject)
set(OPTIONS "" CACHE STRING "lbrycrdd configure options")
set(CPPFLAGS "" CACHE STRING "lbrycrdd compiler options")
set(LDFLAGS "" CACHE STRING "lbrycrdd linker options")
set(DISABLE_TESTS OFF CACHE BOOL "compilation without tests")
set(DISABLE_WALLET OFF CACHE BOOL "compilation without wallet support")
set(DISABLE_BENCH OFF CACHE BOOL "compilation without bench support")
if(NOT ${CPM_USE_LOCAL_PACKAGES})
set(OPTIONS "${OPTIONS} --enable-static --disable-shared")
else()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake")
endif()
set(OPTIONS "--without-gui ${OPTIONS} --with-pic")
if (${DISABLE_TESTS})
set(OPTIONS "${OPTIONS} --disable-tests")
endif()
if (${DISABLE_WALLET})
set(OPTIONS "${OPTIONS} --disable-wallet")
endif()
if (${DISABLE_BENCH})
set(OPTIONS "${OPTIONS} --disable-bench")
endif()
string(TOLOWER ${CMAKE_SYSTEM_NAME}-${CMAKE_SYSTEM_PROCESSOR} ARCH)
CPMAddPackage(
NAME OpenSSL
GITHUB_REPOSITORY openssl/openssl
VERSION 1.0.2
GIT_TAG OpenSSL_1_0_2r
DOWNLOAD_ONLY TRUE
)
if(OpenSSL_ADDED)
ExternalProject_Add(OpenSSL
PREFIX openssl
SOURCE_DIR ${OpenSSL_SOURCE_DIR}
CONFIGURE_COMMAND ${OpenSSL_SOURCE_DIR}/Configure ${ARCH} no-shared no-dso no-engines -fPIC --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} OpenSSL)
ExternalProject_Get_Property(OpenSSL INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
set(OPENSSL_CPPFLAGS "CPPFLAGS=-I${INSTALL_DIR}/include")
set(OPENSSL_LDFLAGS "LDFLAGS=-L${INSTALL_DIR}/lib")
endif(OpenSSL_ADDED)
CPMAddPackage(
NAME Libevent
GITHUB_REPOSITORY libevent/libevent
VERSION 2.1.8
GIT_TAG release-2.1.8-stable
DOWNLOAD_ONLY TRUE
)
if(Libevent_ADDED)
ExternalProject_Add(Libevent
PREFIX libevent
DEPENDS ${DEPENDS}
SOURCE_DIR ${Libevent_SOURCE_DIR}
CONFIGURE_COMMAND ${Libevent_SOURCE_DIR}/autogen.sh
&& ${Libevent_SOURCE_DIR}/configure ${OPENSSL_CPPFLAGS} --enable-cxx --disable-shared --with-pic ${OPENSSL_LDFLAGS} --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} Libevent)
ExternalProject_Get_Property(Libevent INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
endif(Libevent_ADDED)
if(NOT ${DISABLE_WALLET})
CPMAddPackage(
NAME BerkeleyDB
VERSION 4.8.30
URL https://download.oracle.com/berkeley-db/db-4.8.30.NC.zip
URL_HASH SHA256=43ecd76886992ea416fdadc54b7f2b83ef249d9a6964bd07708ccae42d0226ce
DOWNLOAD_ONLY TRUE
)
if(NOT ${BerkeleyDB_VERSION} VERSION_LESS "5.0")
set(OPTIONS "${OPTIONS} --with-incompatible-bdb")
endif()
if(BerkeleyDB_ADDED)
ExternalProject_Add(BerkeleyDB
PREFIX bdb
SOURCE_DIR ${BerkeleyDB_SOURCE_DIR}
PATCH_COMMAND sed -i "s/__atomic_compare_exchange/__atomic_compare_exchange_db/" ${BerkeleyDB_SOURCE_DIR}/dbinc/atomic.h
CONFIGURE_COMMAND ${BerkeleyDB_SOURCE_DIR}/dist/configure --enable-cxx --disable-shared --with-pic --prefix=<INSTALL_DIR>
)
set(DEPENDS ${DEPENDS} BerkeleyDB)
ExternalProject_Get_Property(BerkeleyDB INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
endif(BerkeleyDB_ADDED)
endif()
set(BOOST_LIBS chrono,filesystem,system,locale,thread)
string(REPLACE "," ";" BOOST_COMPONENTS ${BOOST_LIBS})
if(NOT ${DISABLE_TESTS})
set(BOOST_LIBS ${BOOST_LIBS},test)
set(BOOST_COMPONENTS ${BOOST_COMPONENTS};unit_test_framework)
endif()
CPMAddPackage(
NAME Boost
GITHUB_REPOSITORY boostorg/boost
VERSION 1.64.0
COMPONENTS ${BOOST_COMPONENTS}
GIT_TAG boost-1.69.0
GIT_SUBMODULES libs/* tools/*
DOWNLOAD_ONLY TRUE
)
# if boost is found system wide we expect to be compiled against icu, so we can skip it
if(Boost_ADDED)
CPMAddPackage(
NAME ICU
GITHUB_REPOSITORY unicode-org/icu
VERSION 63.2
GIT_TAG release-63-2
DOWNLOAD_ONLY TRUE
)
if(ICU_ADDED)
ExternalProject_Add(ICU
PREFIX icu
SOURCE_DIR ${ICU_SOURCE_DIR}
CONFIGURE_COMMAND ${ICU_SOURCE_DIR}/icu4c/source/configure --disable-extras --disable-strict --enable-static
--disable-shared --disable-tests --disable-samples --disable-dyload --disable-layoutex CFLAGS=-fPIC CPPFLAGS=-fPIC --prefix=<INSTALL_DIR>
)
set(DEPENDS ${DEPENDS} ICU)
ExternalProject_Get_Property(ICU INSTALL_DIR)
set(ICU_PATH ${INSTALL_DIR})
set(OPTIONS "${OPTIONS} --with-icu=${ICU_PATH}")
set(LDFLAGS "${LDFLAGS} -L${ICU_PATH}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${ICU_PATH}/include")
endif(ICU_ADDED)
ExternalProject_Add(Boost
PREFIX boost
DEPENDS ${DEPENDS}
SOURCE_DIR ${Boost_SOURCE_DIR}
CONFIGURE_COMMAND ${Boost_SOURCE_DIR}/bootstrap.sh --with-icu=${ICU_PATH} --with-libraries=${BOOST_LIBS} && ${Boost_SOURCE_DIR}/b2 headers
BUILD_COMMAND ${Boost_SOURCE_DIR}/b2 install threading=multi -sNO_BZIP2=1 -sNO_ZLIB=1 link=static linkflags="-L${ICU_PATH}/lib -licuio -licuuc -licudata -licui18n" cxxflags=-fPIC boost.locale.iconv=off boost.locale.posix=off boost.locale.icu=on boost.locale.std=off -sICU_PATH=${ICU_PATH} --prefix=<INSTALL_DIR>
INSTALL_COMMAND ""
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} Boost)
ExternalProject_Get_Property(Boost INSTALL_DIR)
set(OPTIONS "${OPTIONS} --with-boost=${INSTALL_DIR}")
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
set_property(DIRECTORY PROPERTY ADDITIONAL_MAKE_CLEAN_FILES ${Boost_SOURCE_DIR}/bin.v2)
endif(Boost_ADDED)
set(CPPFLAGS "${CPPFLAGS} -Wno-parentheses -Wno-unused-local-typedefs -Wno-deprecated -Wno-implicit-fallthrough -Wno-unused-parameter")
separate_arguments(OPTIONS)
ExternalProject_Add(lbrycrdd
PREFIX lbrycrdd
DEPENDS ${DEPENDS}
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../..
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/../../autogen.sh
&& ${CMAKE_CURRENT_SOURCE_DIR}/../../configure ${OPTIONS} CPPFLAGS=${CPPFLAGS} LDFLAGS=${LDFLAGS} --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
BUILD_ALWAYS 1
)


@ -0,0 +1,210 @@
# TheLartians/CPM - A simple Git dependency manager
# =================================================
# See https://github.com/TheLartians/CPM for usage and update instructions.
#
# MIT License
# -----------
#[[
Copyright (c) 2019 Lars Melchior
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
]]
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
set(CURRENT_CPM_VERSION 0.11.1)
if(CPM_DIRECTORY)
if(NOT ${CPM_DIRECTORY} MATCHES ${CMAKE_CURRENT_LIST_DIR})
if (${CPM_VERSION} VERSION_LESS ${CURRENT_CPM_VERSION})
CPM_HANDLE_OLD_VERSION(${CURRENT_CPM_VERSION})
endif()
return()
endif()
endif()
set(CPM_VERSION ${CURRENT_CPM_VERSION} CACHE INTERNAL "")
set(CPM_DIRECTORY ${CMAKE_CURRENT_LIST_DIR} CACHE INTERNAL "")
set(CPM_PACKAGES "" CACHE INTERNAL "")
option(CPM_USE_LOCAL_PACKAGES "Use locally installed packages (find_package)" ON)
option(CPM_LOCAL_PACKAGES_ONLY "Use only locally installed packages" OFF)
include(FetchContent)
include(CMakeParseArguments)
# Initialize logging prefix
if(NOT CPM_INDENT)
set(CPM_INDENT "CPM:")
endif()
# The main workhorse of CPM
function(CPMAddPackage)
set(oneValueArgs
NAME
VERSION
GIT_TAG
DOWNLOAD_ONLY
GITHUB_REPOSITORY
GITLAB_REPOSITORY
)
set(multiValueArgs
OPTIONS
COMPONENTS
)
cmake_parse_arguments(CPM_ARGS "" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(${CPM_USE_LOCAL_PACKAGES} OR ${CPM_LOCAL_PACKAGES_ONLY})
find_package(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION} OPTIONAL_COMPONENTS ${CPM_ARGS_COMPONENTS} QUIET)
if(${CPM_ARGS_NAME}_FOUND)
message(STATUS "CPM: adding local package ${CPM_ARGS_NAME}@${${CPM_ARGS_NAME}_VERSION}")
set(${CPM_ARGS_NAME}_VERSION "${${CPM_ARGS_NAME}_VERSION}" PARENT_SCOPE)
return()
endif()
if(${CPM_LOCAL_PACKAGES_ONLY})
message(SEND_ERROR "CPM: ${CPM_ARGS_NAME} not found via find_package(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION})")
endif()
endif()
if (NOT CPM_ARGS_VERSION)
set(CPM_ARGS_VERSION 0)
endif()
if (NOT CPM_ARGS_GIT_TAG)
set(CPM_ARGS_GIT_TAG v${CPM_ARGS_VERSION})
endif()
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_TAG ${CPM_ARGS_GIT_TAG})
if(CPM_ARGS_DOWNLOAD_ONLY)
set(DOWNLOAD_ONLY ${CPM_ARGS_DOWNLOAD_ONLY})
else()
set(DOWNLOAD_ONLY NO)
endif()
if (CPM_ARGS_GITHUB_REPOSITORY)
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_REPOSITORY "https://github.com/${CPM_ARGS_GITHUB_REPOSITORY}.git")
endif()
if (CPM_ARGS_GITLAB_REPOSITORY)
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_REPOSITORY "https://gitlab.com/${CPM_ARGS_GITLAB_REPOSITORY}.git")
endif()
if (${CPM_ARGS_NAME} IN_LIST CPM_PACKAGES)
CPM_GET_PACKAGE_VERSION(${CPM_ARGS_NAME})
if(${CPM_PACKAGE_VERSION} VERSION_LESS ${CPM_ARGS_VERSION})
message(WARNING "${CPM_INDENT} requires a newer version of ${CPM_ARGS_NAME} (${CPM_ARGS_VERSION}) than currently included (${CPM_PACKAGE_VERSION}).")
endif()
if (CPM_ARGS_OPTIONS)
foreach(OPTION ${CPM_ARGS_OPTIONS})
CPM_PARSE_OPTION(${OPTION})
if(NOT "${${OPTION_KEY}}" STREQUAL ${OPTION_VALUE})
message(WARNING "${CPM_INDENT} ignoring package option for ${CPM_ARGS_NAME}: ${OPTION_KEY} = ${OPTION_VALUE} (${${OPTION_KEY}})")
endif()
endforeach()
endif()
CPM_FETCH_PACKAGE(${CPM_ARGS_NAME} ${DOWNLOAD_ONLY})
CPMGetProperties(${CPM_ARGS_NAME})
set(${CPM_ARGS_NAME}_VERSION ${CPM_ARGS_VERSION} PARENT_SCOPE)
set(${CPM_ARGS_NAME}_SOURCE_DIR "${${CPM_ARGS_NAME}_SOURCE_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_BINARY_DIR "${${CPM_ARGS_NAME}_BINARY_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_ADDED NO PARENT_SCOPE)
return()
endif()
CPMRegisterPackage(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION})
if (CPM_ARGS_OPTIONS)
foreach(OPTION ${CPM_ARGS_OPTIONS})
CPM_PARSE_OPTION(${OPTION})
set(${OPTION_KEY} ${OPTION_VALUE} CACHE INTERNAL "")
endforeach()
endif()
CPM_DECLARE_PACKAGE(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION} ${CPM_ARGS_GIT_TAG} "${CPM_ARGS_UNPARSED_ARGUMENTS}")
CPM_FETCH_PACKAGE(${CPM_ARGS_NAME} ${DOWNLOAD_ONLY})
CPMGetProperties(${CPM_ARGS_NAME})
set(${CPM_ARGS_NAME}_VERSION ${CPM_ARGS_VERSION} PARENT_SCOPE)
set(${CPM_ARGS_NAME}_SOURCE_DIR "${${CPM_ARGS_NAME}_SOURCE_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_BINARY_DIR "${${CPM_ARGS_NAME}_BINARY_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_ADDED YES PARENT_SCOPE)
endfunction()
function (CPM_DECLARE_PACKAGE PACKAGE VERSION GIT_TAG)
message(STATUS "${CPM_INDENT} adding package ${PACKAGE}@${VERSION} (${GIT_TAG})")
FetchContent_Declare(
${PACKAGE}
${ARGN}
)
endfunction()
function (CPM_FETCH_PACKAGE PACKAGE DOWNLOAD_ONLY)
set(CPM_OLD_INDENT "${CPM_INDENT}")
set(CPM_INDENT "${CPM_INDENT} ${PACKAGE}:")
if(${DOWNLOAD_ONLY})
if(NOT "${PACKAGE}_POPULATED")
FetchContent_Populate(${PACKAGE})
endif()
else()
FetchContent_MakeAvailable(${PACKAGE})
endif()
set(CPM_INDENT "${CPM_OLD_INDENT}")
endfunction()
function (CPMGetProperties PACKAGE)
FetchContent_GetProperties(${PACKAGE})
string(TOLOWER ${PACKAGE} lpackage)
set(${PACKAGE}_SOURCE_DIR "${${lpackage}_SOURCE_DIR}" PARENT_SCOPE)
set(${PACKAGE}_BINARY_DIR "${${lpackage}_BINARY_DIR}" PARENT_SCOPE)
endfunction()
function(CPMRegisterPackage PACKAGE VERSION)
list(APPEND CPM_PACKAGES ${PACKAGE})
set(CPM_PACKAGES ${CPM_PACKAGES} CACHE INTERNAL "")
set("CPM_PACKAGE_${PACKAGE}_VERSION" ${VERSION} CACHE INTERNAL "")
endfunction()
function(CPM_GET_PACKAGE_VERSION PACKAGE)
set(CPM_PACKAGE_VERSION "${CPM_PACKAGE_${PACKAGE}_VERSION}" PARENT_SCOPE)
endfunction()
function(CPM_PARSE_OPTION OPTION)
string(REGEX MATCH "^[^ ]+" OPTION_KEY ${OPTION})
string(LENGTH ${OPTION_KEY} OPTION_KEY_LENGTH)
math(EXPR OPTION_KEY_LENGTH "${OPTION_KEY_LENGTH}+1")
string(SUBSTRING ${OPTION} "${OPTION_KEY_LENGTH}" "-1" OPTION_VALUE)
set(OPTION_KEY "${OPTION_KEY}" PARENT_SCOPE)
set(OPTION_VALUE "${OPTION_VALUE}" PARENT_SCOPE)
endfunction()
function (CPM_HANDLE_OLD_VERSION NEW_CPM_VERSION)
message(AUTHOR_WARNING "${CPM_INDENT} \
A dependency is using a more recent CPM (${NEW_CPM_VERSION}) than the current project (${CPM_VERSION}). \
It is recommended to upgrade CPM to the most recent version. \
See https://github.com/TheLartians/CPM for more information."
)
endfunction()


@ -0,0 +1,171 @@
# Author: sum01 <sum01@protonmail.com>
# Git: https://github.com/sum01/FindBerkeleyDB
# Read the README.md for the full info.
# NOTE: If Berkeley DB ever gets a Pkg-config ".pc" file, add pkg_check_modules() here
# Check the environment variables and use them as hints if they aren't empty
if(NOT "$ENV{BERKELEYDB_ROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{BERKELEYDB_ROOT}")
elseif(NOT "$ENV{Berkeleydb_ROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{Berkeleydb_ROOT}")
elseif(NOT "$ENV{BERKELEYDBROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{BERKELEYDBROOT}")
else()
# Set just in case, as it's used regardless if it's empty or not
set(_BERKELEYDB_HINTS "")
endif()
# Allow user to pass a path instead of guessing
if(BerkeleyDB_ROOT_DIR)
set(_BERKELEYDB_PATHS "${BerkeleyDB_ROOT_DIR}")
elseif(CMAKE_SYSTEM_NAME MATCHES ".*[wW]indows.*")
# MATCHES is used to work on any devices with windows in the name
# Shameless copy-paste from FindOpenSSL.cmake v3.8
file(TO_CMAKE_PATH "$ENV{PROGRAMFILES}" _programfiles)
list(APPEND _BERKELEYDB_HINTS "${_programfiles}")
# There are actually production release and version numbers in the file path.
# For example, if they're on v6.2.32: C:/Program Files/Oracle/Berkeley DB 12cR1 6.2.32/
# But this still works to find it, so I'm guessing it can accept partial path matches.
foreach(_TARGET_BERKELEYDB_PATH "Oracle/Berkeley DB" "Berkeley DB")
list(APPEND _BERKELEYDB_PATHS
"${_programfiles}/${_TARGET_BERKELEYDB_PATH}"
"C:/Program Files (x86)/${_TARGET_BERKELEYDB_PATH}"
"C:/Program Files/${_TARGET_BERKELEYDB_PATH}"
"C:/${_TARGET_BERKELEYDB_PATH}"
)
endforeach()
else()
# Paths for anything other than Windows
# Cellar/berkeley-db is for macOS from homebrew installation
list(APPEND _BERKELEYDB_PATHS
"/usr"
"/usr/local"
"/usr/local/Cellar/berkeley-db"
"/opt"
"/opt/local"
)
endif()
# Find includes path
find_path(BerkeleyDB_INCLUDE_DIRS
NAMES "db.h"
HINTS ${_BERKELEYDB_HINTS}
PATH_SUFFIXES "include" "includes"
PATHS ${_BERKELEYDB_PATHS}
)
# Check that the db.h header exists, read it into a variable, and fail if it's missing
if(BerkeleyDB_INCLUDE_DIRS)
# Read the version file db.h into a variable
file(READ "${BerkeleyDB_INCLUDE_DIRS}/db.h" _BERKELEYDB_DB_HEADER)
# Parse the DB version into variables to be used in the lib names
string(REGEX REPLACE ".*DB_VERSION_MAJOR ([0-9]+).*" "\\1" BerkeleyDB_VERSION_MAJOR "${_BERKELEYDB_DB_HEADER}")
string(REGEX REPLACE ".*DB_VERSION_MINOR ([0-9]+).*" "\\1" BerkeleyDB_VERSION_MINOR "${_BERKELEYDB_DB_HEADER}")
# Patch version example on non-crypto installs: x.x.xNC
string(REGEX REPLACE ".*DB_VERSION_PATCH ([0-9]+(NC)?).*" "\\1" BerkeleyDB_VERSION_PATCH "${_BERKELEYDB_DB_HEADER}")
else()
if(BerkeleyDB_FIND_REQUIRED)
# If the find_package(BerkeleyDB REQUIRED) was used, fail since we couldn't find the header
message(FATAL_ERROR "Failed to find Berkeley DB's header file \"db.h\"! Try setting \"BerkeleyDB_ROOT_DIR\" when initiating Cmake.")
elseif(NOT BerkeleyDB_FIND_QUIETLY)
message(WARNING "Failed to find Berkeley DB's header file \"db.h\"! Try setting \"BerkeleyDB_ROOT_DIR\" when initiating Cmake.")
endif()
# Set some garbage values to the versions since we didn't find a file to read
set(BerkeleyDB_VERSION_MAJOR "0")
set(BerkeleyDB_VERSION_MINOR "0")
set(BerkeleyDB_VERSION_PATCH "0")
endif()
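# Illustrative usage (the path below is an assumption, e.g. a Homebrew install on macOS):
#   cmake -DBerkeleyDB_ROOT_DIR=/usr/local/Cellar/berkeley-db/4.8.30 ..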
# The actual returned/output version variable (the others can be used if needed)
set(BerkeleyDB_VERSION "${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}.${BerkeleyDB_VERSION_PATCH}")
# Finds the target library for berkeley db, since they all follow the same naming conventions
macro(_berkeleydb_get_lib _BERKELEYDB_OUTPUT_VARNAME _TARGET_BERKELEYDB_LIB)
# Different systems sometimes have a version in the lib name...
# and some have a dash or underscore before the versions.
# CMake recommends putting unversioned names before versioned names
find_library(${_BERKELEYDB_OUTPUT_VARNAME}
NAMES
"${_TARGET_BERKELEYDB_LIB}"
"lib${_TARGET_BERKELEYDB_LIB}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}"
HINTS ${_BERKELEYDB_HINTS}
PATH_SUFFIXES
"lib"
"lib64"
"libs"
"libs64"
PATHS ${_BERKELEYDB_PATHS}
)
# If the library was found, add it to our list of libraries
if(${_BERKELEYDB_OUTPUT_VARNAME})
# If found, append to our libraries variable
# The ${{}} is because the first expands to target the real variable, the second expands the variable's contents...
# and the real variable's contents is the path to the lib. Thus, it appends the path of the lib to BerkeleyDB_LIBRARIES.
list(APPEND BerkeleyDB_LIBRARIES "${${_BERKELEYDB_OUTPUT_VARNAME}}")
endif()
endmacro()
# Find and set the paths of the specific library to the variable
_berkeleydb_get_lib(BerkeleyDB_LIBRARY "db")
# NOTE: Windows doesn't have a db_cxx lib, but instead compiles the cxx code into the "db" lib
_berkeleydb_get_lib(BerkeleyDB_Cxx_LIBRARY "db_cxx")
# NOTE: I don't think Linux/Unix gets an SQL lib
_berkeleydb_get_lib(BerkeleyDB_Sql_LIBRARY "db_sql")
_berkeleydb_get_lib(BerkeleyDB_Stl_LIBRARY "db_stl")
# Needed for find_package_handle_standard_args()
include(FindPackageHandleStandardArgs)
# Fails if required vars aren't found, or if the version doesn't meet specifications.
find_package_handle_standard_args(BerkeleyDB
FOUND_VAR BerkeleyDB_FOUND
REQUIRED_VARS
BerkeleyDB_INCLUDE_DIRS
BerkeleyDB_LIBRARY
BerkeleyDB_LIBRARIES
VERSION_VAR BerkeleyDB_VERSION
)
# Only show the variables in the GUI if they click "advanced".
# Does nothing when using the CLI
mark_as_advanced(FORCE
BerkeleyDB_FOUND
BerkeleyDB_INCLUDE_DIRS
BerkeleyDB_LIBRARIES
BerkeleyDB_VERSION
BerkeleyDB_VERSION_MAJOR
BerkeleyDB_VERSION_MINOR
BerkeleyDB_VERSION_PATCH
BerkeleyDB_LIBRARY
BerkeleyDB_Cxx_LIBRARY
BerkeleyDB_Stl_LIBRARY
BerkeleyDB_Sql_LIBRARY
)
# Create an imported lib for easy linking by external projects
if(BerkeleyDB_FOUND AND BerkeleyDB_LIBRARIES AND NOT TARGET Oracle::BerkeleyDB)
add_library(Oracle::BerkeleyDB UNKNOWN IMPORTED)
set_target_properties(Oracle::BerkeleyDB PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${BerkeleyDB_INCLUDE_DIRS}"
IMPORTED_LOCATION "${BerkeleyDB_LIBRARY}"
INTERFACE_LINK_LIBRARIES "${BerkeleyDB_LIBRARIES}"
)
endif()
include(FindPackageMessage)
# A message that tells the user what includes/libs were found, and obeys the QUIET command.
find_package_message(BerkeleyDB
"Found BerkeleyDB libraries: ${BerkeleyDB_LIBRARIES}"
"[${BerkeleyDB_LIBRARIES}[${BerkeleyDB_INCLUDE_DIRS}]]"
)


@ -0,0 +1,97 @@
# - Try to find libevent
#.rst
# FindLibevent
# ------------
#
# Find Libevent include directories and libraries. Invoke as::
#
# find_package(Libevent
# [version] [EXACT] # Minimum or exact version
# [REQUIRED] # Fail if Libevent is not found
# [COMPONENT <C>...]) # Libraries to look for
#
# Valid components are one or more of:: libevent core extra pthreads openssl.
# Note that 'libevent' contains both core and extra. You must specify one of
# them for the other components.
#
# This module will define the following variables::
#
# LIBEVENT_FOUND - True if headers and requested libraries were found
# LIBEVENT_INCLUDE_DIRS - Libevent include directories
# LIBEVENT_LIBRARIES - Libevent libraries to be linked
# LIBEVENT_<C>_FOUND - Component <C> was found (<C> is uppercase)
# LIBEVENT_<C>_LIBRARY - Library to be linked for Libevent component <C>.
find_package(PkgConfig QUIET)
pkg_check_modules(PC_LIBEVENT QUIET libevent)
# Look for the Libevent 2.0 or 1.4 headers
find_path(LIBEVENT_INCLUDE_DIR
NAMES
event2/event-config.h
event-config.h
HINTS
${PC_LIBEVENT_INCLUDE_DIRS}
)
if(LIBEVENT_INCLUDE_DIR)
set(_version_regex "^#define[ \t]+_EVENT_VERSION[ \t]+\"([^\"]+)\".*")
if(EXISTS "${LIBEVENT_INCLUDE_DIR}/event2/event-config.h")
# Libevent 2.0
file(STRINGS "${LIBEVENT_INCLUDE_DIR}/event2/event-config.h"
LIBEVENT_VERSION REGEX "${_version_regex}")
if("${LIBEVENT_VERSION}" STREQUAL "")
set(LIBEVENT_VERSION ${PC_LIBEVENT_VERSION})
endif()
else()
# Libevent 1.4
file(STRINGS "${LIBEVENT_INCLUDE_DIR}/event-config.h"
LIBEVENT_VERSION REGEX "${_version_regex}")
endif()
string(REGEX REPLACE "${_version_regex}" "\\1"
LIBEVENT_VERSION "${LIBEVENT_VERSION}")
unset(_version_regex)
endif()
set(_LIBEVENT_REQUIRED_VARS)
foreach(COMPONENT ${Libevent_FIND_COMPONENTS})
set(_LIBEVENT_LIBNAME libevent)
# Note: compare two variables to avoid a CMP0054 policy warning
if(COMPONENT STREQUAL _LIBEVENT_LIBNAME)
set(_LIBEVENT_LIBNAME event)
else()
set(_LIBEVENT_LIBNAME "event_${COMPONENT}")
endif()
string(TOUPPER "${COMPONENT}" COMPONENT_UPPER)
find_library(LIBEVENT_${COMPONENT_UPPER}_LIBRARY
NAMES ${_LIBEVENT_LIBNAME}
HINTS ${PC_LIBEVENT_LIBRARY_DIRS}
)
if(LIBEVENT_${COMPONENT_UPPER}_LIBRARY)
set(Libevent_${COMPONENT}_FOUND 1)
endif()
list(APPEND _LIBEVENT_REQUIRED_VARS LIBEVENT_${COMPONENT_UPPER}_LIBRARY)
endforeach()
unset(_LIBEVENT_LIBNAME)
include(FindPackageHandleStandardArgs)
# handle the QUIETLY and REQUIRED arguments and set LIBEVENT_FOUND to TRUE
# if all listed variables are TRUE and the requested version matches.
find_package_handle_standard_args(Libevent REQUIRED_VARS
${_LIBEVENT_REQUIRED_VARS}
LIBEVENT_INCLUDE_DIR
VERSION_VAR LIBEVENT_VERSION
HANDLE_COMPONENTS)
if(LIBEVENT_FOUND)
set(LIBEVENT_INCLUDE_DIRS ${LIBEVENT_INCLUDE_DIR})
set(LIBEVENT_LIBRARIES)
foreach(COMPONENT ${Libevent_FIND_COMPONENTS})
string(TOUPPER "${COMPONENT}" COMPONENT_UPPER)
list(APPEND LIBEVENT_LIBRARIES ${LIBEVENT_${COMPONENT_UPPER}_LIBRARY})
set(LIBEVENT_${COMPONENT_UPPER}_FOUND ${Libevent_${COMPONENT}_FOUND})
endforeach()
endif()
mark_as_advanced(LIBEVENT_INCLUDE_DIR ${_LIBEVENT_REQUIRED_VARS})
unset(_LIBEVENT_REQUIRED_VARS)


@ -0,0 +1,291 @@
#!/usr/bin/env python3
import re
import subprocess as sp
import sys
import json
import urllib.request as req
import jsonschema
re_full = re.compile(r'(?P<name>^.*?$)(?P<desc>.*?)(^Argument.*?$(?P<args>.*?))?(^Result[^\n,]*?:\s*$(?P<resl>.*?))?(^Exampl.*?$(?P<exmp>.*))?', re.DOTALL | re.MULTILINE)
re_argline = re.compile(r'^("?)(?P<name>\w.*?)\1(\s*:.+?,?\s*)?\s+\((?P<type>.*?)\)\s*(?P<desc>.*?)\s*$', re.DOTALL)
def get_obj_from_dirty_text(full_object: str):
lines = full_object.splitlines()
lefts = []
i = 0
while i < len(lines):
idx = lines[i].find('(')
left = lines[i][0:idx].strip() if idx >= 0 else lines[i]
left = left.rstrip('.') # handling , ...
left = left.strip()
left = left.rstrip(',')
lefts.append(left)
while idx >= 0 and i < len(lines) - 1:
idx2 = len(re.match(r'^\s*', lines[i + 1]).group())
if idx2 > idx:
lines[i] += lines.pop(i + 1)[idx2 - 1:]
else:
break
i += 1
ret = None
try:
property_stack = []
object_stack = []
name_stack = []
last_name = None
for i in range(0, len(lines)):
left = lefts[i]
if not left:
continue
line = lines[i].strip()
arg_parsed = re_argline.fullmatch(line)
property_refined_type = 'object'
if arg_parsed is not None:
property_name, property_type, property_desc = arg_parsed.group('name', 'type', 'desc')
property_refined_type, property_required, property_child = get_type(property_type, None)
if property_refined_type != 'array' and property_refined_type != 'object':
property_stack[-1][property_name] = {
'type': property_refined_type,
'description': property_desc
}
else:
last_name = property_name
elif len(left) > 1:
match = re.match(r'^(\[)?"(?P<name>\w.*?)"(\])?.*', left)
if match is not None:
last_name = match.group('name')
if match.group(1) is not None and match.group(3) is not None:
left = '['
property_refined_type = 'string'
if 'string' not in line:
raise NotImplementedError('Not implemented: ' + line)
if left.endswith('['):
object_stack.append({'type': 'array', 'items': {'type': property_refined_type}})
property_stack.append({})
name_stack.append(last_name)
elif left.endswith('{'):
object_stack.append({'type': 'object'})
property_stack.append({})
name_stack.append(last_name)
elif (left.endswith(']') and '[' not in left) or (left.endswith('}') and '{' not in left):
obj = object_stack.pop()
prop = property_stack.pop()
name = name_stack.pop()
if len(prop) > 0:
if 'items' in obj:
obj['items']['properties'] = prop
else:
obj['properties'] = prop
if len(property_stack) > 0:
if 'items' in object_stack[-1]:
object_stack[-1]['items']['type'] = obj['type']
if len(prop) > 0:
object_stack[-1]['items']['properties'] = prop
else:
if name is None:
raise RuntimeError('Not expected')
property_stack[-1][name] = obj
else:
ret = obj
if ret is not None:
if i + 1 < len(lines) - 1:
print('WARNING: unparsable data...', file=sys.stderr)
lines = lines[i+1:]
if not lines[0]:
lines = lines[1:]
nret = get_obj_from_dirty_text("\n".join(lines))
if not nret:
nret = get_obj_from_dirty_text("\n".join(lines[1:]))
if nret:
ret.update(nret)
return ret
except Exception as e:
print('Exception: ' + str(e), file=sys.stderr)
print('Unable to cope with: ' + '\n'.join(lines), file=sys.stderr)
return None
def get_type(arg_type: str, full_line: str):
if arg_type is None:
return 'string', True, None
required = 'required' in arg_type or 'optional' not in arg_type
arg_type = arg_type.lower()
if 'array' in arg_type:
return 'array', required, None
if 'numeric' in arg_type or 'number' in arg_type:
return 'number', required, None
if 'bool' in arg_type:
return 'boolean', required, None
if 'string' in arg_type:
return 'string', required, None
if 'object' in arg_type:
properties = get_obj_from_dirty_text(full_line) if full_line is not None else None
return 'object', required, properties
if arg_type.startswith('optional'):
return 'optional', required, None
if arg_type.startswith('json'):
return 'json', required, None
print('Unable to derive type from: ' + arg_type, file=sys.stderr)
return None, False, None
def get_default(arg_refined_type: str, arg_type: str):
if 'default=' in arg_type:
if 'number' in arg_refined_type:
return int(re.match('.*default=([^,)]+)', arg_type).group(1))
if 'string' in arg_refined_type:
return re.match('.*default=([^,)]+)', arg_type).group(1)
if 'boolean' in arg_refined_type:
if 'default=true' in arg_type:
return True
if 'default=false' in arg_type:
return False
raise NotImplementedError('Not implemented: ' + arg_type)
if 'array' in arg_type:
raise NotImplementedError('Not implemented: ' + arg_type)
return None
def parse_single_argument(line: str):
if line:
line = line.strip()
if not line or line.startswith('None'):
return None, None, False
arg_parsed = re_argline.fullmatch(line)
if arg_parsed is None:
if line.startswith('{') or line.startswith('['):
return get_obj_from_dirty_text(line), None, True
else:
print("Unparsable argument: " + line, file=sys.stderr)
descriptor = {
'type': 'array' if line.startswith('[') else 'object',
'description': line,
}
return descriptor, None, True
arg_name, arg_type, arg_desc = arg_parsed.group('name', 'type', 'desc')
if not arg_type:
raise NotImplementedError('Not implemented: ' + arg_type)
arg_refined_type, arg_required, arg_properties = get_type(arg_type, arg_desc)
if arg_properties is not None:
return arg_properties, arg_name, arg_required
arg_refined_default = get_default(arg_refined_type, arg_type)
arg_desc = re.sub(r'\s+', ' ', arg_desc.strip()) \
if arg_desc and arg_refined_type != 'object' and arg_refined_type != 'array' \
else arg_desc.strip() if arg_desc else ''
descriptor = {
'type': arg_refined_type,
'description': arg_desc,
}
if arg_refined_default is not None:
descriptor['default'] = arg_refined_default
return descriptor, arg_name, arg_required
def parse_params(args: str):
arguments = {}
requireds = []
if args:
for line in re.split(r'\s*\d+\.\s+', args, flags=re.DOTALL):
descriptor, name, required = parse_single_argument(line)
if descriptor is None:
continue
if required:
requireds.append(name)
arguments[name] = descriptor
return arguments, requireds
def get_api(section_name: str, command: str, command_help: str):
parsed = re_full.fullmatch(command_help)
if parsed is None:
raise RuntimeError('Unable to resolve help format for ' + command)
name, desc, args, resl, exmp = parsed.group('name', 'desc', 'args', 'resl', 'exmp')
properties, required = parse_params(args)
result_descriptor, result_name, result_required = parse_single_argument(resl)
desc = re.sub('\s+', ' ', desc.strip()) if desc else name
example_array = exmp.splitlines() if exmp else []
ret = {
'summary': desc,
'description': example_array,
'tags': [section_name],
'params': {
'type': 'object',
'properties': properties,
'required': required
},
}
if result_descriptor is not None:
ret['result'] = result_descriptor
return ret
def write_api():
if len(sys.argv) < 2:
print("Missing required argument: <path to CLI tool>", file=sys.stderr)
sys.exit(1)
cli_tool = sys.argv[1]
result = sp.run([cli_tool, "help"], stdout=sp.PIPE, universal_newlines=True)
commands = result.stdout
sections = re.split('^==\s*(.*?)\s*==$', commands, flags=re.MULTILINE)
methods = {}
for section in sections:
if not section:
continue
lines = section.splitlines()
if len(lines) == 1:
section_name = lines[0]
continue
for command in sorted(lines[1:]):
if not command:
continue
command = command.split(' ')[0]
result = sp.run([cli_tool, "help", command], stdout=sp.PIPE, universal_newlines=True)
methods[command] = get_api(section_name, command, result.stdout)
version = sp.run([cli_tool, "--version"], stdout=sp.PIPE, universal_newlines=True)
wrapper = {
'$schema': 'https://rawgit.com/mzernetsch/jrgen/master/jrgen-spec.schema.json',
'jrgen': '1.1',
'jsonrpc': '1.0', # see https://github.com/bitcoin/bitcoin/pull/12435
'info': {
'title': 'lbrycrd RPC API',
'version': version.stdout.strip(),
'description': []
},
'definitions': {}, # for items used in $ref further down
'methods': methods,
}
schema = req.urlopen(wrapper['$schema']).read().decode('utf-8')
try:
jsonschema.validate(wrapper, schema)
except Exception as e:
print('From schema validation: ' + str(e), file=sys.stderr)
print(json.dumps(wrapper, indent=4))
if __name__ == '__main__':
write_api()
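# Hypothetical invocation (the script and output file names here are illustrative only):
#   python3 generate_json_api.py src/lbrycrd-cli > lbrycrd_api_spec.json
# The single positional argument is the path to the CLI binary; the jrgen spec is printed to stdout.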


@ -0,0 +1,134 @@
#!/usr/bin/env python3
import re
import subprocess as sp
import sys
import json
re_full = re.compile(r'(?P<name>^.*?$)(?P<desc>.*?)(^Argument.*?$(?P<args>.*?))?(^Result[^\n]*?:\s*$(?P<resl>.*?))?(^Exampl.*?$(?P<exmp>.*))?', re.DOTALL | re.MULTILINE)
re_argline = re.compile(r'^("?)(?P<name>.*?)\1\s+\((?P<type>.*?)\)\s*(?P<desc>.*)$', re.DOTALL)
def get_type(arg_type, full_line):
if arg_type is None:
return 'string'
arg_type = arg_type.lower().split(',')[0].strip()
if 'numeric' in arg_type:
return 'number'
if 'bool' in arg_type:
return 'boolean'
if 'array' in arg_type:
return 'array'
if 'object' in arg_type:
return 'object'
supported_types = ['number', 'string', 'object', 'array', 'optional']
if arg_type in supported_types:
return arg_type
print("get_type: WARNING", arg_type, "is not supported type", file=sys.stderr)
return arg_type
def parse_params(args):
arguments = []
if args:
for line in re.split(r'\s*\d+\.\s+', args, flags=re.DOTALL):
if not line or not line.strip() or line.strip().startswith('None'):
continue
arg_parsed = re_argline.fullmatch(line)
if arg_parsed is None:
continue
arg_name, arg_type, arg_desc = arg_parsed.group('name', 'type', 'desc')
if not arg_type:
raise Exception('Not implemented: ' + arg_type)
arg_required = 'required' in arg_type or 'optional' not in arg_type
arg_refined_type = get_type(arg_type, line)
arg_desc = re.sub('\s+', ' ', arg_desc.strip()) if arg_desc else []
arguments.append({
'name': arg_name,
'type': arg_refined_type,
'description': arg_desc,
'is_required': arg_required
})
return arguments
def process_examples(examples: str):
if not examples:
return []
examples = examples.strip()
splits = examples.split('\n')
result = []
inner = {}
for s in splits:
if not s:
continue
if '> curl' in s:
inner['curl'] = s.strip()
elif '> lbrycrd' in s:
inner['cli'] = s.strip()
else:
if 'title' in inner:
result.append(inner)
inner = {}
inner['title'] = s.strip()
result.append(inner)
return result
def get_api(section_name, command, command_help):
parsed = re_full.fullmatch(command_help)
if parsed is None:
raise Exception('Unable to resolve help format for ' + command)
name, desc, args, resl, exmp = parsed.group('name', 'desc', 'args', 'resl', 'exmp')
arguments = parse_params(args)
cmd_desc = re.sub('\s+', ' ', desc.strip()) if desc else ''
examp_desc = process_examples(exmp)
cmd_resl = resl.strip() if resl else None
ret = {
'name': command,
'namespace': section_name,
'description': cmd_desc,
'arguments': arguments,
'examples': examp_desc
}
if cmd_resl is not None:
ret['returns'] = cmd_resl
return ret
def write_api():
if len(sys.argv) < 2:
print("Missing required argument: <path to CLI tool>", file=sys.stderr)
sys.exit(1)
cli_tool = sys.argv[1]
result = sp.run([cli_tool, "help"], stdout=sp.PIPE, universal_newlines=True)
commands = result.stdout
sections = re.split('^==\s*(.*?)\s*==$', commands, flags=re.MULTILINE)
apis = []
for section in sections:
if not section:
continue
lines = section.splitlines()
if len(lines) == 1:
section_name = lines[0]
continue
for command in sorted(lines[1:]):
if not command:
continue
command = command.split(' ')[0]
result = sp.run([cli_tool, "help", command], stdout=sp.PIPE, universal_newlines=True)
apis.append(get_api(section_name, command, result.stdout))
print(json.dumps(apis, indent=4))
if __name__ == '__main__':
write_api()

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -36,13 +36,13 @@ if [ -z "${CODESIGN_ALLOCATE}" ]; then
fi
find ${TEMPDIR} -name "*.sign" | while read i; do
SIZE=`stat -c %s "${i}"`
TARGET_FILE="`echo "${i}" | sed 's/\.sign$//'`"
SIZE=$(stat -c %s "${i}")
TARGET_FILE="$(echo "${i}" | sed 's/\.sign$//')"
echo "Allocating space for the signature of size ${SIZE} in ${TARGET_FILE}"
${CODESIGN_ALLOCATE} -i "${TARGET_FILE}" -a ${ARCH} ${SIZE} -o "${i}.tmp"
OFFSET=`${PAGESTUFF} "${i}.tmp" -p | tail -2 | grep offset | sed 's/[^0-9]*//g'`
OFFSET=$(${PAGESTUFF} "${i}.tmp" -p | tail -2 | grep offset | sed 's/[^0-9]*//g')
if [ -z ${QUIET} ]; then
echo "Attaching signature at offset ${OFFSET}"
fi


@ -27,19 +27,19 @@ ${CODESIGN} -f --file-list ${TEMPLIST} "$@" "${BUNDLE}"
grep -v CodeResources < "${TEMPLIST}" | while read i; do
TARGETFILE="${BUNDLE}/`echo "${i}" | sed "s|.*${BUNDLE}/||"`"
SIZE=`pagestuff "$i" -p | tail -2 | grep size | sed 's/[^0-9]*//g'`
OFFSET=`pagestuff "$i" -p | tail -2 | grep offset | sed 's/[^0-9]*//g'`
SIZE=$(pagestuff "$i" -p | tail -2 | grep size | sed 's/[^0-9]*//g')
OFFSET=$(pagestuff "$i" -p | tail -2 | grep offset | sed 's/[^0-9]*//g')
SIGNFILE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}.sign"
DIRNAME="`dirname "${SIGNFILE}"`"
DIRNAME="$(dirname "${SIGNFILE}")"
mkdir -p "${DIRNAME}"
echo "Adding detached signature for: ${TARGETFILE}. Size: ${SIZE}. Offset: ${OFFSET}"
dd if="$i" of="${SIGNFILE}" bs=1 skip=${OFFSET} count=${SIZE} 2>/dev/null
done
grep CodeResources < "${TEMPLIST}" | while read i; do
TARGETFILE="${BUNDLE}/`echo "${i}" | sed "s|.*${BUNDLE}/||"`"
TARGETFILE="${BUNDLE}/$(echo "${i}" | sed "s|.*${BUNDLE}/||")"
RESOURCE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}"
DIRNAME="`dirname "${RESOURCE}"`"
DIRNAME="$(dirname "${RESOURCE}")"
mkdir -p "${DIRNAME}"
echo "Adding resource for: \"${TARGETFILE}\""
cp "${i}" "${RESOURCE}"

57
contrib/mining/README.md Normal file

@ -0,0 +1,57 @@
## Stratum Server Instructions
In simple terms, the stratum protocol distributes crypto mining work to multiple miners. Mining pools typically run a stratum endpoint that the various miners communicate with.
Please refer to other web sources for more information about mining pools or the stratum protocol.
When mining LBC, you can solo mine directly to an instance of a full node (using the node's wallet). Or you can mine as part of a pool.
You can host your own pool or use one of the many hosted LBC pools. See https://miningpoolstats.stream/lbry
This document refers to Yiimp, a derivative of Yaamp, as found here: https://github.com/tpruvot/yiimp.git .
Please refer to the instructions there as well. Yiimp has supported LBRY mining for several years.
Yiimp consists of two pieces: the web GUI for pool management (written in PHP) and the Stratum server (written in C++). The two communicate by polling a shared MySQL (or MariaDB) database.
The web GUI and configuration of the pooling rewards, fees, etc. are out of scope here.
To help you with running Yiimp, we have created two docker images: one for the DB and one for the Yiimp Stratum Server. (See the subfolders.)
Use of the Docker images is optional; you can refer to other Yiimp and MySQL documentation for running it without Docker.
If you are using your own database instance, you will need to import the Yiimp SQL files to establish the yaamp database.
See https://github.com/tpruvot/yiimp/tree/next/sql .
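As a rough sketch of such a manual import (assuming the yiimp repository is cloned locally and a MySQL/MariaDB server is already running; adjust credentials as needed):
```
git clone https://github.com/tpruvot/yiimp.git
mysql -uroot -p -e "CREATE DATABASE yaamp;"
zcat yiimp/sql/2016-04-03-yaamp.sql.gz | mysql -uroot -p yaamp
for f in yiimp/sql/*.sql; do mysql -uroot -p yaamp < "$f"; done
```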
### Sample Usage Steps:
#### 1. Run the full lbrycrd node:
```
./lbrycrdd -testnet -rpcuser=ruser -rpcpassword=rpswd -deprecatedrpc=validateaddress -deprecatedrpc=accounts -daemon
```
The included deprecated RPCs are required for compatibility with Yiimp.
It will need to be caught up to the current block before it is ready.
Remove `-testnet` for the real deal.
#### 2. Run and initialize the database:
```
docker run -d -e MYSQL_ROOT_PASSWORD=patofpaq -e MYSQL_DATABASE=yaamp --network host --name db lbry/yiimp_db
docker exec -it db mysql -uroot -ppatofpaq
use yaamp;
delete from coins;
insert into coins(name, symbol, symbol2, algo, enable, auto_ready, rpcuser, rpcpasswd, rpchost, rpcport, rpccurl, rpcencoding, hasgetinfo, hassubmitblock, usememorypool, usesegwit, auxpow)
values('Local LBRY Instance', 'LBC', 'LBC', 'lbry', 1, 1, 'ruser', 'rpswd', '127.0.0.1', 19245, 1, 'utf-8', 0, 1, 0, 0, 0);
exit
```
Use port 19245 for testnet, port 9245 for main. Set usesegwit to 1 after the segwit fork is enabled on December 11, 2019.
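A sketch of that later change, reusing the container name and credentials from above:
```
docker exec db mysql -uroot -ppatofpaq yaamp -e "UPDATE coins SET usesegwit=1 WHERE symbol='LBC';"
```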
#### 3. Run the stratum server:
```
docker run --network host -d lbry/yiimp_stratum
```
Alternatively, to get more output or to see how it's invoked directly:
```
docker run --network host -it lbry/yiimp_stratum bash
cat config/lbry.conf
./stratum config/lbry
```
When testing with an ASIC, you may need to change the TCP server address in that lbry.conf file to an external IP address.
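One way to do that (the 127.0.0.1 value is whatever the image build substituted; replace it with your machine's external IP before starting stratum):
```
docker run --network host -it lbry/yiimp_stratum bash
vi config/lbry.conf   # change the TCP server address from 127.0.0.1 to the external IP
./stratum config/lbry
```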
#### 4. Connect sgminer to it:
```
sgminer -k lbry -o stratum+tcp://127.0.0.1:3334/ -D -T -O mn824Su1wX7ip8WcNYzXwwWqvBvkeWGRo6:x
```
The username there is the account to receive payments from the pool. The password is unused. Tested with https://github.com/lbryio/sgminer-gm.
You can use whatever miner you prefer.
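If you need a fresh payout address from the wallet of the node started in step 1, one option (flags mirror step 1; any address your wallet controls works) is:
```
./lbrycrd-cli -testnet -rpcuser=ruser -rpcpassword=rpswd getnewaddress
```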


@ -0,0 +1,34 @@
FROM mariadb:10.1-bionic
ARG REPOSITORY=https://github.com/tpruvot/yiimp.git
ENV BUILD_DEPS \
ca-certificates \
git
COPY init-db.sh /docker-entrypoint-initdb.d/
RUN apt-get update \
&& apt-get install -y --no-install-recommends ${BUILD_DEPS} \
&& git clone --progress ${REPOSITORY} ~/yiimp \
&& mkdir /tmp/sql \
&& mv ~/yiimp/sql/2016-04-03-yaamp.sql.gz /tmp/sql/0000-00-00-initial.sql.gz \
&& cp ~/yiimp/sql/*.sql /tmp/sql \
&& apt-get purge -y --auto-remove ${BUILD_DEPS} \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf ~/yiimp
EXPOSE 3306
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
decription="yiimp_db" \
version="1.0" \
org.label-schema.name="yiimp_db" \
org.label-schema.description="Use this to run a compatible MariaDB for yiimp's stratum server" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/yiimp_db yiimp_db"


@ -0,0 +1,10 @@
#!/bin/bash
for f in /tmp/sql/*; do
case "$f" in
*.sql) echo "$0: running $f"; "${mysql[@]}" --force < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done


@ -0,0 +1,54 @@
FROM alpine:3.7
ARG REPOSITORY=https://github.com/tpruvot/yiimp.git
ENV BUILD_DEPS \
build-base \
git
ENV RUN_DEPS \
curl-dev \
gmp-dev \
mariadb-dev \
libssh2-dev \
curl
RUN apk update \
&& apk add --no-cache ${BUILD_DEPS} \
&& apk add --no-cache ${RUN_DEPS} \
&& git clone --progress ${REPOSITORY} ~/yiimp \
&& sed -i 's/ulong/uint64_t/g' ~/yiimp/stratum/algos/rainforest.c \
&& find ~/yiimp -name '*akefile' -exec sed -i 's/-march=native//g' {} + \
&& make -C ~/yiimp/stratum/iniparser \
&& make -C ~/yiimp/stratum \
&& mkdir /var/stratum /var/stratum/config \
&& cp ~/yiimp/stratum/run.sh /var/stratum \
&& cp ~/yiimp/stratum/config/run.sh /var/stratum/config \
&& cp ~/yiimp/stratum/stratum /var/stratum \
&& cp ~/yiimp/stratum/config.sample/lbry.conf /var/stratum/config \
&& sed -i 's/yaamp.com/127.0.0.1/g' /var/stratum/config/lbry.conf \
&& sed -i 's/yaampdb/127.0.0.1/g' /var/stratum/config/lbry.conf \
&& rm -rf ~/yiimp \
&& apk del ${BUILD_DEPS} \
&& rm -rf /var/cache/apk/*
RUN apk add --no-cache bash
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
decription="yiimp_stratum" \
version="1.0" \
org.label-schema.name="yiimp_stratum" \
org.label-schema.description="Use this to run yiimp's stratum server in lbry mode" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/yiimp_stratum yiimp_stratum"
WORKDIR /var/stratum
CMD ["./stratum", "config/lbry"]
EXPOSE 3334


@ -23,7 +23,7 @@ TIMESERVER=http://timestamp.comodoca.com
CERTFILE="win-codesign.cert"
mkdir -p "${OUTSUBDIR}"
basename -a `ls -1 "${SRCDIR}"/*-unsigned.exe` | while read UNSIGNED; do
basename -a $(ls -1 "${SRCDIR}"/*-unsigned.exe) | while read UNSIGNED; do
echo Signing "${UNSIGNED}"
"${OSSLSIGNCODE}" sign -certs "${CERTFILE}" -t "${TIMESERVER}" -in "${SRCDIR}/${UNSIGNED}" -out "${WORKDIR}/${UNSIGNED}" "$@"
"${OSSLSIGNCODE}" extract-signature -pem -in "${WORKDIR}/${UNSIGNED}" -out "${OUTSUBDIR}/${UNSIGNED}.pem" && rm "${WORKDIR}/${UNSIGNED}"

2
depends/.gitignore vendored

@ -9,4 +9,4 @@ mips*
arm*
aarch64*
riscv32*
riscv64*
riscv64*


@ -122,8 +122,8 @@ $(host_prefix)/.stamp_$(final_build_id): $(native_packages) $(packages)
$(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_build_id)
$(AT)@mkdir -p $(@D)
$(AT)sed -e 's|@HOST@|$(host)|' \
-e 's|@CC@|$(toolchain_path)$(host_CC)|' \
-e 's|@CXX@|$(toolchain_path)$(host_CXX)|' \
-e 's|@CC@|$(host_CC)|' \
-e 's|@CXX@|$(host_CXX)|' \
-e 's|@AR@|$(toolchain_path)$(host_AR)|' \
-e 's|@RANLIB@|$(toolchain_path)$(host_RANLIB)|' \
-e 's|@NM@|$(toolchain_path)$(host_NM)|' \
@ -131,7 +131,7 @@ $(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_
-e 's|@build_os@|$(build_os)|' \
-e 's|@host_os@|$(host_os)|' \
-e 's|@CFLAGS@|$(strip $(host_CFLAGS) $(host_$(release_type)_CFLAGS))|' \
-e 's|@CXXFLAGS@|$(strip $(host_CXXFLAGS) $(host_$(release_type)_CXXFLAGS))|' \
-e 's|@CXXFLAGS@|$(strip -pipe $(host_$(release_type)_CXXFLAGS))|' \
-e 's|@CPPFLAGS@|$(strip $(host_CPPFLAGS) $(host_$(release_type)_CPPFLAGS))|' \
-e 's|@LDFLAGS@|$(strip $(host_LDFLAGS) $(host_$(release_type)_LDFLAGS))|' \
-e 's|@allow_host_packages@|$(ALLOW_HOST_PACKAGES)|' \


@ -49,7 +49,7 @@ For linux RISC-V 64-bit cross compilation (there are no packages for 32-bit):
sudo apt-get install curl g++-riscv64-linux-gnu binutils-riscv64-linux-gnu
RISC-V known issue: gcc-7.3.0 and gcc-7.3.1 result in a broken `test_bitcoin` executable (see https://github.com/bitcoin/bitcoin/pull/13543),
RISC-V known issue: gcc-7.3.0 and gcc-7.3.1 result in a broken `test_lbrycrd` executable (see https://github.com/bitcoin/bitcoin/pull/13543),
this is apparently fixed in gcc-8.1.0.
Dependency Options:
@ -67,7 +67,7 @@ The following can be set when running make: make FOO=bar
BUILD_ID_SALT: Optional salt to use when generating build package ids
If some packages are not built, for example `make NO_WALLET=1`, the appropriate
options will be passed to bitcoin's configure. In this case, `--disable-wallet`.
options will be passed to lbrycrd's configure. In this case, `--disable-wallet`.
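A minimal sketch of that flow on a 64-bit Linux host (the depends output directory name is an assumption for illustration):
```
cd depends && make NO_WALLET=1
cd ..
./autogen.sh
./configure --prefix=$PWD/depends/x86_64-pc-linux-gnu   # the generated config.site supplies --disable-wallet for you
```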
Additional targets:


@ -174,7 +174,7 @@ $($(1)_preprocessed): | $($(1)_dependencies) $($(1)_extracted)
$(AT)echo Preprocessing $(1)...
$(AT)mkdir -p $$(@D) $($(1)_patch_dir)
$(AT)$(foreach patch,$($(1)_patches),cd $(PATCHES_PATH)/$(1); cp $(patch) $($(1)_patch_dir) ;)
$(AT)cd $$(@D); $(call $(1)_preprocess_cmds, $(1))
$(AT)+cd $$(@D); $(call $(1)_preprocess_cmds, $(1))
$(AT)touch $$@
$($(1)_configured): | $($(1)_preprocessed)
$(AT)echo Configuring $(1)...
@ -190,7 +190,7 @@ $($(1)_built): | $($(1)_configured)
$($(1)_staged): | $($(1)_built)
$(AT)echo Staging $(1)...
$(AT)mkdir -p $($(1)_staging_dir)/$(host_prefix)
$(AT)cd $($(1)_build_dir); $($(1)_stage_env) $(call $(1)_stage_cmds, $(1))
$(AT)+cd $($(1)_build_dir); $($(1)_stage_env) $(call $(1)_stage_cmds, $(1))
$(AT)rm -rf $($(1)_extract_dir)
$(AT)touch $$@
$($(1)_postprocessed): | $($(1)_staged)

File diff suppressed because it is too large


@ -2,16 +2,18 @@ OSX_MIN_VERSION=10.10
OSX_SDK_VERSION=10.11
OSX_SDK=$(SDK_PATH)/MacOSX$(OSX_SDK_VERSION).sdk
LD64_VERSION=253.9
darwin_CC=clang -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) --sysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION)
darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) --sysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -stdlib=libc++
darwin_CC=clang -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -B $(host_prefix)/native/bin
darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -stdlib=libc++ -B $(host_prefix)/native/bin
darwin_CFLAGS=-pipe
darwin_CXXFLAGS=$(darwin_CFLAGS)
darwin_CXXFLAGS=$(darwin_CFLAGS) -std=c++11
darwin_release_CFLAGS=-O2
darwin_release_CFLAGS=-O2 -g
darwin_release_CXXFLAGS=$(darwin_release_CFLAGS)
darwin_debug_CFLAGS=-O1
darwin_debug_CXXFLAGS=$(darwin_debug_CFLAGS)
darwin_debug_CFLAGS=-Og -g
darwin_debug_CXXFLAGS=-O0 -g
darwin_native_toolchain=native_cctools


@ -1,31 +1,34 @@
linux_CFLAGS=-pipe
linux_CXXFLAGS=$(linux_CFLAGS)
linux_CXXFLAGS=$(linux_CFLAGS) -std=c++11
linux_release_CFLAGS=-O2
linux_release_CFLAGS=-O3 -g
ifeq (1,$(shell ldd --version | head -1 | awk '{print $$NF < 2.28}'))
linux_release_CFLAGS+= -include $(BASEDIR)/glibc_version_header/force_link_glibc_2.19.h
endif
linux_release_CXXFLAGS=$(linux_release_CFLAGS)
linux_debug_CFLAGS=-O1
linux_debug_CXXFLAGS=$(linux_debug_CFLAGS)
linux_debug_CFLAGS=-O1 -g
linux_debug_CXXFLAGS=-O0 -g
linux_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC
ifeq (86,$(findstring 86,$(build_arch)))
i686_linux_CC=gcc -m32
i686_linux_CXX=g++ -m32
i686_linux_CC=cc -m32
i686_linux_CXX=c++ -m32
i686_linux_AR=ar
i686_linux_RANLIB=ranlib
i686_linux_NM=nm
i686_linux_STRIP=strip
x86_64_linux_CC=gcc -m64
x86_64_linux_CXX=g++ -m64
x86_64_linux_CC=cc -m64
x86_64_linux_CXX=c++ -m64
x86_64_linux_AR=ar
x86_64_linux_RANLIB=ranlib
x86_64_linux_NM=nm
x86_64_linux_STRIP=strip
else
i686_linux_CC=$(default_host_CC) -m32
i686_linux_CXX=$(default_host_CXX) -m32
x86_64_linux_CC=$(default_host_CC) -m64
x86_64_linux_CXX=$(default_host_CXX) -m64
i686_linux_CC=cc -m32
i686_linux_CXX=c++ -m32
x86_64_linux_CC=cc -m64
x86_64_linux_CXX=c++ -m64
endif


@ -1,10 +1,11 @@
mingw32_CFLAGS=-pipe
mingw32_CXXFLAGS=$(mingw32_CFLAGS)
mingw32_CXXFLAGS=$(mingw32_CFLAGS) -std=c++11
mingw32_release_CFLAGS=-O2
mingw32_release_CFLAGS=-O2 -g
mingw32_release_CXXFLAGS=$(mingw32_release_CFLAGS)
mingw32_debug_CFLAGS=-O1
mingw32_debug_CXXFLAGS=$(mingw32_debug_CFLAGS)
mingw32_debug_CFLAGS=-O1 -g
mingw32_debug_CXXFLAGS=-O0 -g
mingw32_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC


@ -1,6 +1,6 @@
package=bdb
$(package)_version=4.8.30
$(package)_download_path=http://download.oracle.com/berkeley-db
$(package)_download_path=https://download.oracle.com/berkeley-db
$(package)_file_name=db-$($(package)_version).NC.tar.gz
$(package)_sha256_hash=12edc0df75bf9abd7f82f821795bcee50f42cb2e5f76a6a281b85732798364ef
$(package)_build_subdir=build_unix
@ -9,7 +9,7 @@ define $(package)_set_vars
$(package)_config_opts=--disable-shared --enable-cxx --disable-replication
$(package)_config_opts_mingw32=--enable-mingw
$(package)_config_opts_linux=--with-pic
$(package)_cxxflags=-std=c++11
$(package)_cppflags_mingw32=-DUNICODE -D_UNICODE
endef
define $(package)_preprocess_cmds
@ -29,3 +29,4 @@ endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install_lib install_include
endef

View file

@ -1,14 +1,19 @@
package=boost
$(package)_version=1_64_0
$(package)_download_path=https://dl.bintray.com/boostorg/release/1.64.0/source/
$(package)_version=1_69_0
$(package)_download_path=https://boostorg.jfrog.io/artifactory/main/release/1.69.0/source/
$(package)_file_name=$(package)_$($(package)_version).tar.bz2
$(package)_sha256_hash=7bcc5caace97baa948931d712ea5f37038dbb1c5d89b43ad4def4ed7cb683332
$(package)_sha256_hash=8f32d4617390d1c2d16f26a27ab60d97807b35440d45891fa340fc2648b04406
$(package)_dependencies=icu
define $(package)_set_vars
$(package)_config_opts_release=variant=release
$(package)_config_opts_debug=variant=debug
$(package)_config_opts=--layout=tagged --build-type=complete --user-config=user-config.jam
$(package)_config_opts+=threading=multi link=static -sNO_BZIP2=1 -sNO_ZLIB=1
$(package)_config_opts+=boost.locale.iconv=off boost.locale.posix=off boost.locale.icu=on boost.locale.std=off -sICU_PATH="$(host_prefix)"
# The stupid ICU_LINK handling reorders the dependencies alphabetically, thus making it impossible to get the link order correct.
# To work around this we're using the ldflags below but we need ICU_LINK to be non-blank so that we don't get an auto-generated conflict with ldflags.
$(package)_config_opts+=-sICU_LINK="-time"
$(package)_config_opts_linux=threadapi=pthread runtime-link=shared
$(package)_config_opts_darwin=--toolset=darwin-4.2.1 runtime-link=shared
$(package)_config_opts_mingw32=binary-format=pe target-os=windows threadapi=win32 runtime-link=static
@ -19,9 +24,15 @@ $(package)_toolset_$(host_os)=gcc
$(package)_archiver_$(host_os)=$($(package)_ar)
$(package)_toolset_darwin=darwin
$(package)_archiver_darwin=$($(package)_libtool)
$(package)_config_libraries=chrono,filesystem,system,thread,test
$(package)_cxxflags=-std=c++11 -fvisibility=hidden
$(package)_config_libraries=chrono,filesystem,system,locale,thread,test
$(package)_cxxflags=-std=c++11 -fvisibility=hidden -Wno-deprecated
$(package)_cxxflags_linux=-fPIC
# The ideal doesn't work because vars are evaluated before any dependency is processed:
# $(package)_ldflags=$$(shell PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig pkg-config icu-io icu-uc icu-i18n --libs)
# So we substitute poorly (as these may not actually match all scenarios):
$(package)_ldflags_mingw32=-L$(host_prefix)/lib -lsicuio -lsicuuc -lsicudt
$(package)_ldflags_linux=-L$(host_prefix)/lib -licuio -licuuc -licudata -licui18n
$(package)_ldflags_darwin=-L$(host_prefix)/lib -licuio -licuuc -licudata -licui18n
endef
define $(package)_preprocess_cmds
@ -29,13 +40,13 @@ define $(package)_preprocess_cmds
endef
define $(package)_config_cmds
./bootstrap.sh --without-icu --with-libraries=$(boost_config_libraries)
./bootstrap.sh --with-icu="$(host_prefix)" --with-libraries="$(boost_config_libraries)"
endef
define $(package)_build_cmds
./b2 -d2 -j2 -d1 --prefix=$($(package)_staging_prefix_dir) $($(package)_config_opts) stage
./b2 -d2 -j`getconf _NPROCESSORS_ONLN` -d1 --reconfigure --prefix=$($(package)_staging_prefix_dir) $($(package)_config_opts) stage
endef
define $(package)_stage_cmds
./b2 -d0 -j4 --prefix=$($(package)_staging_prefix_dir) $($(package)_config_opts) install
./b2 -d0 -j`getconf _NPROCESSORS_ONLN` --prefix=$($(package)_staging_prefix_dir) $($(package)_config_opts) install
endef

45
depends/packages/icu.mk Normal file

@ -0,0 +1,45 @@
package=icu
$(package)_version=63_2
$(package)_download_path=https://github.com/unicode-org/icu/releases/download/release-63-2/
$(package)_file_name=$(package)4c-$($(package)_version)-src.tgz
$(package)_sha256_hash=4671e985b5c11252bff3c2374ab84fd73c609f2603bb6eb23b8b154c69ea4215
$(package)_build_subdir=source
$(package)_standard_opts=--disable-extras --disable-strict --enable-static --disable-shared --disable-tests --disable-samples --disable-dyload --disable-layoutex
define $(package)_set_vars
$(package)_config_opts=$($(package)_standard_opts)
$(package)_config_opts_debug=--enable-debug --disable-release
$(package)_config_opts_release=--disable-debug --enable-release
$(package)_config_opts_mingw32=--with-cross-build="$($(package)_extract_dir)/build"
$(package)_config_opts_darwin=--with-cross-build="$($(package)_extract_dir)/build" LIBTOOL="$($(package)_libtool)"
$(package)_archiver_darwin=$($(package)_libtool)
$(package)_cflags_linux=-fPIC
$(package)_cppflags_linux=-fPIC
$(package)_cxxflags=-std=c++11
endef
define $(package)_preprocess_cmds
PKG_CONFIG_SYSROOT_DIR=/ \
PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \
PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \
mkdir -p build && cd build && \
../source/runConfigureICU Linux $($(package)_standard_opts) CXXFLAGS=-std=c++11 && \
$(MAKE) && cd ..
endef
define $(package)_config_cmds
PKG_CONFIG_SYSROOT_DIR=/ \
PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \
PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \
sed -i.old 's|^GEN_DEPS.c=.*|& $($(package)_cflags)|' config/mh-mingw* && \
sed -i.old 's|^GEN_DEPS.cc=.*|& $($(package)_cxxflags)|' config/mh-mingw* && \
$($(package)_autoconf)
endef
define $(package)_build_cmds
$(MAKE)
endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install
endef


@ -27,4 +27,5 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
rm lib/*.la
endef


@ -1,6 +1,6 @@
package=miniupnpc
$(package)_version=2.0.20180203
$(package)_download_path=http://miniupnp.free.fr/files
$(package)_download_path=https://miniupnp.tuxfamily.org/files/
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=90dda8c7563ca6cd4a83e23b3c66dbbea89603a1675bfdb852897c2c9cc220b7


@ -4,37 +4,12 @@ $(package)_download_path=https://github.com/theuni/cctools-port/archive
$(package)_file_name=$($(package)_version).tar.gz
$(package)_sha256_hash=a09c9ba4684670a0375e42d9d67e7f12c1f62581a27f28f7c825d6d7032ccc6a
$(package)_build_subdir=cctools
$(package)_clang_version=3.7.1
$(package)_clang_download_path=http://llvm.org/releases/$($(package)_clang_version)
$(package)_clang_download_file=clang+llvm-$($(package)_clang_version)-x86_64-linux-gnu-ubuntu-14.04.tar.xz
$(package)_clang_file_name=clang-llvm-$($(package)_clang_version)-x86_64-linux-gnu-ubuntu-14.04.tar.xz
$(package)_clang_sha256_hash=99b28a6b48e793705228a390471991386daa33a9717cd9ca007fcdde69608fd9
$(package)_extra_sources=$($(package)_clang_file_name)
define $(package)_fetch_cmds
$(call fetch_file,$(package),$($(package)_download_path),$($(package)_download_file),$($(package)_file_name),$($(package)_sha256_hash)) && \
$(call fetch_file,$(package),$($(package)_clang_download_path),$($(package)_clang_download_file),$($(package)_clang_file_name),$($(package)_clang_sha256_hash))
endef
define $(package)_extract_cmds
mkdir -p $($(package)_extract_dir) && \
echo "$($(package)_sha256_hash) $($(package)_source)" > $($(package)_extract_dir)/.$($(package)_file_name).hash && \
echo "$($(package)_clang_sha256_hash) $($(package)_source_dir)/$($(package)_clang_file_name)" >> $($(package)_extract_dir)/.$($(package)_file_name).hash && \
$(build_SHA256SUM) -c $($(package)_extract_dir)/.$($(package)_file_name).hash && \
mkdir -p toolchain/bin toolchain/lib/clang/3.5/include && \
tar --strip-components=1 -C toolchain -xf $($(package)_source_dir)/$($(package)_clang_file_name) && \
rm -f toolchain/lib/libc++abi.so* && \
echo "#!/bin/sh" > toolchain/bin/$(host)-dsymutil && \
echo "exit 0" >> toolchain/bin/$(host)-dsymutil && \
chmod +x toolchain/bin/$(host)-dsymutil && \
tar --strip-components=1 -xf $($(package)_source)
endef
define $(package)_set_vars
$(package)_config_opts=--target=$(host) --disable-lto-support
$(package)_ldflags+=-Wl,-rpath=\\$$$$$$$$\$$$$$$$$ORIGIN/../lib
$(package)_cc=$($(package)_extract_dir)/toolchain/bin/clang
$(package)_cxx=$($(package)_extract_dir)/toolchain/bin/clang++
$(package)_config_opts=--target=$(host) --disable-lto-support --prefix=/
$(package)_ldflags+=-Wl,-rpath=\\$$$$$$$$\$$$$$$$$ORIGIN/../lib
$(package)_cc=cc
$(package)_cxx=c++
endef
define $(package)_preprocess_cmds
@ -51,15 +26,6 @@ define $(package)_build_cmds
endef
define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install && \
cd $($(package)_extract_dir)/toolchain && \
mkdir -p $($(package)_staging_prefix_dir)/lib/clang/$($(package)_clang_version)/include && \
mkdir -p $($(package)_staging_prefix_dir)/bin $($(package)_staging_prefix_dir)/include && \
cp bin/clang $($(package)_staging_prefix_dir)/bin/ &&\
cp -P bin/clang++ $($(package)_staging_prefix_dir)/bin/ &&\
cp lib/libLTO.so $($(package)_staging_prefix_dir)/lib/ && \
cp -rf lib/clang/$($(package)_clang_version)/include/* $($(package)_staging_prefix_dir)/lib/clang/$($(package)_clang_version)/include/ && \
cp bin/llvm-dsymutil $($(package)_staging_prefix_dir)/bin/$(host)-dsymutil && \
if `test -d include/c++/`; then cp -rf include/c++/ $($(package)_staging_prefix_dir)/include/; fi && \
if `test -d lib/c++/`; then cp -rf lib/c++/ $($(package)_staging_prefix_dir)/lib/; fi
mkdir -p $($(package)_staging_prefix_dir) && \
$(MAKE) DESTDIR=$($(package)_staging_prefix_dir) install
endef

View file

@ -5,7 +5,7 @@ $(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=8f9faeaebad088e772f4ef5e38252d472be4d878c6b3a2718c10a4fcebe7a41c
define $(package)_set_vars
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)"
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc) $($(package)_cflags) $($(package)_cppflags)"
$(package)_config_opts=--prefix=$(host_prefix) --openssldir=$(host_prefix)/etc/openssl
$(package)_config_opts+=no-camellia
$(package)_config_opts+=no-capieng
@ -42,7 +42,6 @@ $(package)_config_opts+=no-weak-ssl-ciphers
$(package)_config_opts+=no-whirlpool
$(package)_config_opts+=no-zlib
$(package)_config_opts+=no-zlib-dynamic
$(package)_config_opts+=$($(package)_cflags) $($(package)_cppflags)
$(package)_config_opts_linux=-fPIC -Wa,--noexecstack
$(package)_config_opts_x86_64_linux=linux-x86_64
$(package)_config_opts_i686_linux=linux-generic32

View file

@ -1,4 +1,4 @@
packages:=boost openssl libevent zeromq
packages:=icu boost openssl libevent zeromq
qt_native_packages = native_protobuf
qt_packages = qrencode protobuf zlib

View file

@ -4,7 +4,6 @@ $(package)_download_path=$(native_$(package)_download_path)
$(package)_file_name=$(native_$(package)_file_name)
$(package)_sha256_hash=$(native_$(package)_sha256_hash)
$(package)_dependencies=native_$(package)
$(package)_cxxflags=-std=c++11
define $(package)_set_vars
$(package)_config_opts=--disable-shared --with-protoc=$(build_prefix)/bin/protoc

View file

@ -2,6 +2,9 @@ PACKAGE=qt
$(package)_version=5.9.6
$(package)_download_path=https://download.qt.io/official_releases/qt/5.9/$($(package)_version)/submodules
$(package)_suffix=opensource-src-$($(package)_version).tar.xz
#$(package)_version=5.12.3
#$(package)_download_path=http://download.qt.io/official_releases/qt/5.12/$($(package)_version)/submodules
#$(package)_suffix=opensource-src-$($(package)_version).tar.gz
$(package)_file_name=qtbase-$($(package)_suffix)
$(package)_sha256_hash=eed620cb268b199bd83b3fc6a471c51d51e1dc2dbb5374fc97a0cc75facbe36f
$(package)_dependencies=openssl zlib

View file

@ -6,9 +6,10 @@ $(package)_sha256_hash=bcbabe1e2c7d0eec4ed612e10b94b112dd5f06fcefa994a0c79a45d83
$(package)_patches=0001-fix-build-with-older-mingw64.patch 0002-disable-pthread_set_name_np.patch
define $(package)_set_vars
$(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror
$(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror --disable-drafts
$(package)_config_opts += --without-libsodium --without-libgssapi_krb5 --without-pgm --without-norm --without-vmci
$(package)_config_opts += --disable-libunwind --disable-radix-tree --without-gcov
$(package)_config_opts_linux=--with-pic
$(package)_cxxflags=-std=c++11
endef
define $(package)_preprocess_cmds
@ -31,5 +32,5 @@ endef
define $(package)_postprocess_cmds
sed -i.old "s/ -lstdc++//" lib/pkgconfig/libzmq.pc && \
rm -rf bin share
rm -rf bin share lib/*.la
endef

View file

@ -17,10 +17,16 @@ Given a transaction hash: returns a transaction in binary, hex-encoded binary, o
For full TX query capability, one must enable the transaction index via "txindex=1" command line / configuration option.
#### Blocks
`GET /rest/block/tip.<bin|hex|json>`
`GET /rest/block/<BLOCK-HASH>.<bin|hex|json>`
`GET /rest/block/<BLOCK-HEIGHT>.<bin|hex|json>`
`GET /rest/block/notxdetails/tip.<bin|hex|json>`
`GET /rest/block/notxdetails/<BLOCK-HASH>.<bin|hex|json>`
`GET /rest/block/notxdetails/<BLOCK-HEIGHT>.<bin|hex|json>`
Given a block hash: returns a block, in binary, hex-encoded binary or JSON formats.
You can give a block height instead of a hash. Height 0 is not available;
a negative height steps back that many blocks from the tip.
The HTTP request and response are both handled entirely in-memory, thus making maximum memory usage at least 2.66MB (1 MB max block, plus hex encoding) per request.
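The height-addressed form above is what this patch adds; the hash-addressed form is unchanged. As a quick way to exercise the endpoint, here is a hypothetical client sketch, not part of the patch, that fetches the chain tip as JSON with libcurl. It assumes lbrycrdd was started with -rest and that the REST server shares the RPC port (9245 is an assumption here; adjust to your configuration):

    // Hypothetical sketch: fetch /rest/block/tip.json from a local node.
    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        CURL* curl = curl_easy_init();
        if (!curl) return 1;
        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:9245/rest/block/tip.json");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        if (rc != CURLE_OK) {
            std::cerr << curl_easy_strerror(rc) << "\n";
            return 1;
        }
        std::cout << body << "\n";   // JSON block, including tx details
        return 0;
    }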

View file

@ -1,13 +1,13 @@
dist_man1_MANS=
if BUILD_BITCOIND
dist_man1_MANS+=bitcoind.1
dist_man1_MANS+=lbrycrdd.1
endif
if ENABLE_QT
dist_man1_MANS+=bitcoin-qt.1
dist_man1_MANS+=lbrycrd-qt.1
endif
if BUILD_BITCOIN_UTILS
dist_man1_MANS+=bitcoin-cli.1 bitcoin-tx.1
dist_man1_MANS+=lbrycrd-cli.1 lbrycrd-tx.1
endif

View file

@ -1,21 +1,21 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.47.6.
.TH BITCOIN-CLI "1" "December 2018" "bitcoin-cli v0.17.1.0" "User Commands"
.TH BITCOIN-CLI "1" "December 2018" "lbrycrd-cli v0.17.1.0" "User Commands"
.SH NAME
bitcoin-cli \- manual page for bitcoin-cli v0.17.1.0
lbrycrd-cli \- manual page for lbrycrd-cli v0.17.1.0
.SH SYNOPSIS
.B bitcoin-cli
[\fI\,options\/\fR] \fI\,<command> \/\fR[\fI\,params\/\fR] \fI\,Send command to Bitcoin Core\/\fR
.B lbrycrd-cli
[\fI\,options\/\fR] \fI\,<command> \/\fR[\fI\,params\/\fR] \fI\,Send command to LBRYcrd Core\/\fR
.br
.B bitcoin-cli
[\fI\,options\/\fR] \fI\,-named <command> \/\fR[\fI\,name=value\/\fR]... \fI\,Send command to Bitcoin Core (with named arguments)\/\fR
.B lbrycrd-cli
[\fI\,options\/\fR] \fI\,-named <command> \/\fR[\fI\,name=value\/\fR]... \fI\,Send command to LBRYcrd Core (with named arguments)\/\fR
.br
.B bitcoin-cli
.B lbrycrd-cli
[\fI\,options\/\fR] \fI\,help List commands\/\fR
.br
.B bitcoin-cli
.B lbrycrd-cli
[\fI\,options\/\fR] \fI\,help <command> Get help for a command\/\fR
.SH DESCRIPTION
Bitcoin Core RPC client version v0.17.1.0
LBRYcrd Core RPC client version v0.17.1.0
.SH OPTIONS
.HP
\-?
@ -25,7 +25,7 @@ This help message
\fB\-conf=\fR<file>
.IP
Specify configuration file. Relative paths will be prefixed by datadir
location. (default: bitcoin.conf)
location. (default: lbrycrd.conf)
.HP
\fB\-datadir=\fR<dir>
.IP
@ -76,7 +76,7 @@ Wait for RPC server to start
\fB\-rpcwallet=\fR<walletname>
.IP
Send RPC for non\-default wallet on RPC server (needs to exactly match
corresponding \fB\-wallet\fR option passed to bitcoind)
corresponding \fB\-wallet\fR option passed to lbrycrdd)
.HP
\fB\-stdin\fR
.IP
@ -101,11 +101,11 @@ Chain selection options:
.IP
Use the test chain
.SH COPYRIGHT
Copyright (C) 2009-2018 The Bitcoin Core developers
Copyright (C) 2009-2018 The LBRYcrd Core developers
Please contribute if you find Bitcoin Core useful. Visit
<https://bitcoincore.org> for further information about the software.
The source code is available from <https://github.com/bitcoin/bitcoin>.
Please contribute if you find LBRYcrd Core useful. Visit
<https://lbrycrdcore.org> for further information about the software.
The source code is available from <https://github.com/lbrycrd/lbrycrd>.
This is experimental software.
Distributed under the MIT software license, see the accompanying file COPYING

View file

@ -1,12 +1,12 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.47.6.
.TH BITCOIN-QT "1" "December 2018" "bitcoin-qt v0.17.1.0" "User Commands"
.TH BITCOIN-QT "1" "December 2018" "lbrycrd-qt v0.17.1.0" "User Commands"
.SH NAME
bitcoin-qt \- manual page for bitcoin-qt v0.17.1.0
lbrycrd-qt \- manual page for lbrycrd-qt v0.17.1.0
.SH SYNOPSIS
.B bitcoin-qt
.B lbrycrd-qt
[\fI\,command-line options\/\fR]
.SH DESCRIPTION
Bitcoin Core version v0.17.1.0 (64\-bit)
LBRYcrd Core version v0.17.1.0 (64\-bit)
.SH OPTIONS
.HP
\-?
@ -44,7 +44,7 @@ Specify blocks directory (default: <datadir>/blocks)
\fB\-conf=\fR<file>
.IP
Specify configuration file. Relative paths will be prefixed by datadir
location. (default: bitcoin.conf)
location. (default: lbrycrd.conf)
.HP
\fB\-daemon\fR
.IP
@ -98,7 +98,7 @@ Whether to save the mempool on shutdown and load on restart (default: 1)
\fB\-pid=\fR<file>
.IP
Specify pid file. Relative paths will be prefixed by a net\-specific
datadir location. (default: bitcoind.pid)
datadir location. (default: lbrycrdd.pid)
.HP
\fB\-prune=\fR<n>
.IP
@ -604,11 +604,11 @@ Set SSL root certificates for payment request (default: \fB\-system\-\fR)
.IP
Show splash screen on startup (default: 1)
.SH COPYRIGHT
Copyright (C) 2009-2018 The Bitcoin Core developers
Copyright (C) 2009-2018 The LBRYcrd Core developers
Please contribute if you find Bitcoin Core useful. Visit
<https://bitcoincore.org> for further information about the software.
The source code is available from <https://github.com/bitcoin/bitcoin>.
Please contribute if you find LBRYcrd Core useful. Visit
<https://lbrycrdcore.org> for further information about the software.
The source code is available from <https://github.com/lbrycrd/lbrycrd>.
This is experimental software.
Distributed under the MIT software license, see the accompanying file COPYING

View file

@ -1,15 +1,15 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.47.6.
.TH BITCOIN-TX "1" "December 2018" "bitcoin-tx v0.17.1.0" "User Commands"
.TH BITCOIN-TX "1" "December 2018" "lbrycrd-tx v0.17.1.0" "User Commands"
.SH NAME
bitcoin-tx \- manual page for bitcoin-tx v0.17.1.0
lbrycrd-tx \- manual page for lbrycrd-tx v0.17.1.0
.SH SYNOPSIS
.B bitcoin-tx
[\fI\,options\/\fR] \fI\,<hex-tx> \/\fR[\fI\,commands\/\fR] \fI\,Update hex-encoded bitcoin transaction\/\fR
.B lbrycrd-tx
[\fI\,options\/\fR] \fI\,<hex-tx> \/\fR[\fI\,commands\/\fR] \fI\,Update hex-encoded lbrycrd transaction\/\fR
.br
.B bitcoin-tx
[\fI\,options\/\fR] \fI\,-create \/\fR[\fI\,commands\/\fR] \fI\,Create hex-encoded bitcoin transaction\/\fR
.B lbrycrd-tx
[\fI\,options\/\fR] \fI\,-create \/\fR[\fI\,commands\/\fR] \fI\,Create hex-encoded lbrycrd transaction\/\fR
.SH DESCRIPTION
Bitcoin Core bitcoin\-tx utility version v0.17.1.0
LBRYcrd Core lbrycrd\-tx utility version v0.17.1.0
.SH OPTIONS
.HP
\-?
@ -105,11 +105,11 @@ set=NAME:JSON\-STRING
.IP
Set register NAME to given JSON\-STRING
.SH COPYRIGHT
Copyright (C) 2009-2018 The Bitcoin Core developers
Copyright (C) 2009-2018 The LBRYcrd Core developers
Please contribute if you find Bitcoin Core useful. Visit
<https://bitcoincore.org> for further information about the software.
The source code is available from <https://github.com/bitcoin/bitcoin>.
Please contribute if you find LBRYcrd Core useful. Visit
<https://lbrycrdcore.org> for further information about the software.
The source code is available from <https://github.com/lbrycrd/lbrycrd>.
This is experimental software.
Distributed under the MIT software license, see the accompanying file COPYING

View file

@ -1,12 +1,12 @@
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.47.6.
.TH BITCOIND "1" "December 2018" "bitcoind v0.17.1.0" "User Commands"
.TH BITCOIND "1" "December 2018" "lbrycrdd v0.17.1.0" "User Commands"
.SH NAME
bitcoind \- manual page for bitcoind v0.17.1.0
lbrycrdd \- manual page for lbrycrdd v0.17.1.0
.SH SYNOPSIS
.B bitcoind
[\fI\,options\/\fR] \fI\,Start Bitcoin Core Daemon\/\fR
.B lbrycrdd
[\fI\,options\/\fR] \fI\,Start LBRYcrd Core Daemon\/\fR
.SH DESCRIPTION
Bitcoin Core Daemon version v0.17.1.0
LBRYcrd Core Daemon version v0.17.1.0
.SH OPTIONS
.HP
\-?
@ -44,7 +44,7 @@ Specify blocks directory (default: <datadir>/blocks)
\fB\-conf=\fR<file>
.IP
Specify configuration file. Relative paths will be prefixed by datadir
location. (default: bitcoin.conf)
location. (default: lbrycrd.conf)
.HP
\fB\-daemon\fR
.IP
@ -98,7 +98,7 @@ Whether to save the mempool on shutdown and load on restart (default: 1)
\fB\-pid=\fR<file>
.IP
Specify pid file. Relative paths will be prefixed by a net\-specific
datadir location. (default: bitcoind.pid)
datadir location. (default: lbrycrdd.pid)
.HP
\fB\-prune=\fR<n>
.IP
@ -578,11 +578,11 @@ Username for JSON\-RPC connections
.IP
Accept command line and JSON\-RPC commands
.SH COPYRIGHT
Copyright (C) 2009-2018 The Bitcoin Core developers
Copyright (C) 2009-2018 The LBRYcrd Core developers
Please contribute if you find Bitcoin Core useful. Visit
<https://bitcoincore.org> for further information about the software.
The source code is available from <https://github.com/bitcoin/bitcoin>.
Please contribute if you find LBRYcrd Core useful. Visit
<https://lbrycrdcore.org> for further information about the software.
The source code is available from <https://github.com/lbrycrd/lbrycrd>.
This is experimental software.
Distributed under the MIT software license, see the accompanying file COPYING

10
packaging/.dockerignore Normal file
View file

@ -0,0 +1,10 @@
.git
.gitignore
.travis.yml
README.md
LICENSE
hooks
Dockerfile
Makefile
*.sh
*.patch

43
packaging/Dockerfile Normal file
View file

@ -0,0 +1,43 @@
FROM ubuntu:16.04
ENV LANG C.UTF-8
RUN set -xe; \
apt-get update; \
apt-get install --no-install-recommends -y build-essential libtool autotools-dev automake pkg-config git wget apt-utils \
librsvg2-bin libtiff-tools cmake imagemagick libcap-dev libz-dev libbz2-dev python-setuptools xz-utils ccache g++-multilib \
g++-mingw-w64-i686 mingw-w64-i686-dev bsdmainutils curl ca-certificates g++-mingw-w64-x86-64 mingw-w64-x86-64-dev; \
rm -rf /var/lib/apt/lists/*;
RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -; \
echo 'deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-8 main' >> /etc/apt/sources.list; \
apt-get update; \
apt-get install --no-install-recommends -y clang-8 lldb-8 lld-8 libc++-8-dev; \
rm -rf /var/lib/apt/lists/*;
RUN update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang-cpp-8 80; \
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-8 80; \
update-alternatives --install /usr/bin/c++ c++ /usr/bin/clang++ 80; \
update-alternatives --install /usr/bin/cc cc /usr/bin/clang 80; \
update-alternatives --set x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix; \
update-alternatives --set i686-w64-mingw32-g++ /usr/bin/i686-w64-mingw32-g++-posix; \
/usr/sbin/update-ccache-symlinks; \
cd /usr/include/c++ && ln -s /usr/lib/llvm-8/include/c++/v1; \
cd /usr/lib/llvm-8/lib && ln -s libc++abi.so.1 libc++abi.so;
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
description="build_lbrycrd" \
version="1.1" \
org.label-schema.name="build_lbrycrd" \
org.label-schema.description="Use this to generate a reproducible build of LBRYcrd" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/build_lbrycrd packaging"
ENV PATH "/usr/lib/ccache:$PATH"
WORKDIR /home
CMD ["/bin/bash"]

47
packaging/build_darwin_64bit.sh Executable file
View file

@ -0,0 +1,47 @@
#!/usr/bin/env bash
set -euo pipefail
# NOTE: this requires that you get the MacOS SDK separately.
# To acquire it, you will need to log into the Apple dev portal.
# From there, you download an Xcode package. Recommended: 7.3.1
# You can extract the SDK from that using contrib/macdeploy/extract
# you will need a folder like this: depends/SDKs/MacOSX10.11.sdk
# and ensure that the darwin.mk file version corresponds to the SDK.
if which dpkg-query >/dev/null; then
if dpkg-query -W librsvg2-bin libtiff-tools cmake imagemagick libcap-dev libz-dev libbz2-dev python-setuptools \
build-essential libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates; then
echo "All dependencies satisfied."
else
echo "Missing dependencies detected. Exiting..."
exit 1
fi
fi
if [ ! -e depends/SDKs/MacOSX10.11.sdk ]; then
echo "Missing depends/SDKs/MacOSX10.11.sdk"
exit 1
fi
if which ccache >/dev/null; then
echo "ccache config:"
ccache -ps
fi
pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-apple-darwin14 NO_QT=1 V=1
popd
./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-apple-darwin14
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-reduce-exports --without-gui --with-icu="${DEPS_DIR}" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN)
${DEPS_DIR}/native/bin/x86_64-apple-darwin14-strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx
if which ccache >/dev/null; then
echo "ccache stats:"
ccache -s
fi
echo "OSX 64bit build is complete"

37
packaging/build_linux_64bit.sh Executable file
View file

@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -euo pipefail
if which dpkg-query >/dev/null; then
if dpkg-query -W libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates; then
echo "All dependencies satisfied."
else
echo "Missing dependencies detected. Exiting..."
exit 1
fi
fi
if which ccache >/dev/null; then
echo "ccache config:"
ccache -ps
fi
export CXXFLAGS="${CXXFLAGS:--frecord-gcc-switches}"
echo "CXXFLAGS set to $CXXFLAGS"
cd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-pc-linux-gnu NO_QT=1 V=1
cd ..
./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-pc-linux-gnu
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-static --disable-shared --with-pic --without-gui
make -j$(getconf _NPROCESSORS_ONLN)
strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx
if which ccache >/dev/null; then
echo "ccache stats:"
ccache -s
fi
echo "Linux 64bit build is complete"

View file

@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
if which dpkg-query >/dev/null; then
if dpkg-query -W g++-mingw-w64-i686 mingw-w64-i686-dev \
build-essential libtool autotools-dev automake pkg-config \
bsdmainutils curl ca-certificates; then
echo "All dependencies satisfied."
else
echo "Missing dependencies detected. Exiting..."
exit 1
fi
# sudo update-alternatives --config i686-w64-mingw32-g++ # you have to select posix
fi
if which ccache >/dev/null; then
echo "ccache config:"
ccache -ps
fi
pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=i686-w64-mingw32 NO_QT=1 V=1
popd
./autogen.sh
DEPS_DIR=$(pwd)/depends/i686-w64-mingw32
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN)
i686-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe
if which ccache >/dev/null; then
echo "ccache stats:"
ccache -s
fi
echo "Windows 32bit build is complete"

View file

@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -euo pipefail
if which dpkg-query >/dev/null; then
if dpkg-query -W g++-mingw-w64-x86-64 mingw-w64-x86-64-dev \
build-essential libtool autotools-dev automake pkg-config \
bsdmainutils curl ca-certificates; then
echo "All dependencies satisfied."
else
echo "Missing dependencies detected. Exiting..."
exit 1
fi
#sudo update-alternatives --config x86_64-w64-mingw32-g++ # you have to select posix
fi
if which ccache >/dev/null; then
echo "ccache config:"
ccache -ps
fi
pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-w64-mingw32 NO_QT=1 V=1
popd
./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-w64-mingw32
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN)
x86_64-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe
if which ccache >/dev/null; then
echo "ccache stats:"
ccache -s
fi
echo "Windows 64bit build is complete"

View file

@ -19,7 +19,7 @@ else
LIBUNIVALUE = $(UNIVALUE_LIBS)
endif
BITCOIN_INCLUDES=-I$(builddir) $(BDB_CPPFLAGS) $(BOOST_CPPFLAGS) $(LEVELDB_CPPFLAGS) $(CRYPTO_CFLAGS) $(SSL_CFLAGS)
BITCOIN_INCLUDES=-I$(builddir) $(BDB_CPPFLAGS) $(ICU_CPPFLAGS) $(BOOST_CPPFLAGS) $(LEVELDB_CPPFLAGS) $(CRYPTO_CFLAGS) $(ICU_CFLAGS)
BITCOIN_INCLUDES += -I$(srcdir)/secp256k1/include
BITCOIN_INCLUDES += $(UNIVALUE_CFLAGS)
@ -80,11 +80,11 @@ TESTS =
BENCHMARKS =
if BUILD_BITCOIND
bin_PROGRAMS += bitcoind
bin_PROGRAMS += lbrycrdd
endif
if BUILD_BITCOIN_UTILS
bin_PROGRAMS += bitcoin-cli bitcoin-tx
bin_PROGRAMS += lbrycrd-cli lbrycrd-tx
endif
.PHONY: FORCE check-symbols check-security
@ -102,6 +102,8 @@ BITCOIN_CORE_H = \
chainparamsseeds.h \
checkpoints.h \
checkqueue.h \
claimscriptop.h \
claimtrie.h \
clientversion.h \
coins.h \
compat.h \
@ -127,12 +129,14 @@ BITCOIN_CORE_H = \
key.h \
key_io.h \
keystore.h \
lbry.h \
dbwrapper.h \
limitedmap.h \
logging.h \
memusage.h \
merkleblock.h \
miner.h \
nameclaim.h \
net.h \
net_processing.h \
netaddress.h \
@ -145,11 +149,13 @@ BITCOIN_CORE_H = \
policy/policy.h \
policy/rbf.h \
pow.h \
prefixtrie.h \
protocol.h \
random.h \
reverse_iterator.h \
reverselock.h \
rpc/blockchain.h \
rpc/claimrpchelp.h \
rpc/client.h \
rpc/mining.h \
rpc/protocol.h \
@ -178,10 +184,12 @@ BITCOIN_CORE_H = \
txdb.h \
txmempool.h \
ui_interface.h \
uint256.h \
undo.h \
util.h \
utilmemory.h \
utilmoneystr.h \
utilstrencodings.h \
utiltime.h \
validation.h \
validationinterface.h \
@ -211,7 +219,7 @@ obj/build.h: FORCE
"$(abs_top_srcdir)"
libbitcoin_util_a-clientversion.$(OBJEXT): obj/build.h
# server: shared between bitcoind and bitcoin-qt
# server: shared between lbrycrdd and lbrycrd-qt
libbitcoin_server_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(MINIUPNPC_CPPFLAGS) $(EVENT_CFLAGS) $(EVENT_PTHREADS_CFLAGS)
libbitcoin_server_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
libbitcoin_server_a_SOURCES = \
@ -221,6 +229,9 @@ libbitcoin_server_a_SOURCES = \
blockencodings.cpp \
chain.cpp \
checkpoints.cpp \
claimscriptop.cpp \
claimtrie.cpp \
claimtrieforks.cpp \
consensus/tx_verify.cpp \
httprpc.cpp \
httpserver.cpp \
@ -228,8 +239,10 @@ libbitcoin_server_a_SOURCES = \
index/txindex.cpp \
init.cpp \
dbwrapper.cpp \
lbry.cpp \
merkleblock.cpp \
miner.cpp \
nameclaim.cpp \
net.cpp \
net_processing.cpp \
noui.cpp \
@ -238,8 +251,10 @@ libbitcoin_server_a_SOURCES = \
policy/policy.cpp \
policy/rbf.cpp \
pow.cpp \
prefixtrie.cpp \
rest.cpp \
rpc/blockchain.cpp \
rpc/claimtrie.cpp \
rpc/mining.cpp \
rpc/misc.cpp \
rpc/net.cpp \
@ -253,6 +268,8 @@ libbitcoin_server_a_SOURCES = \
txdb.cpp \
txmempool.cpp \
ui_interface.cpp \
uint256.cpp \
utilstrencodings.cpp \
validation.cpp \
validationinterface.cpp \
versionbits.cpp \
@ -269,7 +286,7 @@ libbitcoin_zmq_a_SOURCES = \
endif
# wallet: shared between bitcoind and bitcoin-qt, but only linked
# wallet: shared between lbrycrdd and lbrycrd-qt, but only linked
# when wallet enabled
libbitcoin_wallet_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
libbitcoin_wallet_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
@ -369,7 +386,7 @@ libbitcoin_consensus_a_SOURCES = \
utilstrencodings.h \
version.h
# common: shared between bitcoind, and bitcoin-qt and non-server tools
# common: shared between lbrycrdd, and lbrycrd-qt and non-server tools
libbitcoin_common_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
libbitcoin_common_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
libbitcoin_common_a_SOURCES = \
@ -383,6 +400,7 @@ libbitcoin_common_a_SOURCES = \
key.cpp \
key_io.cpp \
keystore.cpp \
nameclaim.cpp \
netaddress.cpp \
netbase.cpp \
policy/feerate.cpp \
@ -399,7 +417,7 @@ libbitcoin_common_a_SOURCES = \
# This library *must* be included to make sure that the glibc
# backward-compatibility objects and their sanity checks are linked.
libbitcoin_util_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
libbitcoin_util_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
libbitcoin_util_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS) $(CRYPTO_CFLAGS)
libbitcoin_util_a_SOURCES = \
support/lockedpool.cpp \
chainparamsbase.cpp \
@ -416,6 +434,7 @@ libbitcoin_util_a_SOURCES = \
support/cleanse.cpp \
sync.cpp \
threadinterrupt.cpp \
uint256.cpp \
util.cpp \
utilmoneystr.cpp \
utilstrencodings.cpp \
@ -427,7 +446,7 @@ libbitcoin_util_a_SOURCES += compat/glibc_compat.cpp
AM_LDFLAGS += $(COMPAT_LDFLAGS)
endif
# cli: shared between bitcoin-cli and bitcoin-qt
# cli: shared between lbrycrd-cli and lbrycrd-qt
libbitcoin_cli_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
libbitcoin_cli_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
libbitcoin_cli_a_SOURCES = \
@ -438,16 +457,16 @@ nodist_libbitcoin_util_a_SOURCES = $(srcdir)/obj/build.h
#
# bitcoind binary #
bitcoind_SOURCES = bitcoind.cpp
bitcoind_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
bitcoind_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bitcoind_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
lbrycrdd_SOURCES = bitcoind.cpp
lbrycrdd_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
lbrycrdd_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
lbrycrdd_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
if TARGET_WINDOWS
bitcoind_SOURCES += bitcoind-res.rc
lbrycrdd_SOURCES += bitcoind-res.rc
endif
bitcoind_LDADD = \
lbrycrdd_LDADD = \
$(LIBBITCOIN_SERVER) \
$(LIBBITCOIN_WALLET) \
$(LIBBITCOIN_COMMON) \
@ -461,38 +480,38 @@ bitcoind_LDADD = \
$(LIBMEMENV) \
$(LIBSECP256K1)
bitcoind_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(ZMQ_LIBS)
lbrycrdd_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(ZMQ_LIBS)
# bitcoin-cli binary #
bitcoin_cli_SOURCES = bitcoin-cli.cpp
bitcoin_cli_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CFLAGS)
bitcoin_cli_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bitcoin_cli_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
# lbrycrd-cli binary #
lbrycrd_cli_SOURCES = bitcoin-cli.cpp
lbrycrd_cli_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CFLAGS)
lbrycrd_cli_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
lbrycrd_cli_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
if TARGET_WINDOWS
bitcoin_cli_SOURCES += bitcoin-cli-res.rc
lbrycrd_cli_SOURCES += bitcoin-cli-res.rc
endif
bitcoin_cli_LDADD = \
lbrycrd_cli_LDADD = \
$(LIBBITCOIN_CLI) \
$(LIBUNIVALUE) \
$(LIBBITCOIN_UTIL) \
$(LIBBITCOIN_CRYPTO)
bitcoin_cli_LDADD += $(BOOST_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(EVENT_LIBS)
lbrycrd_cli_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(EVENT_LIBS)
#
# bitcoin-tx binary #
bitcoin_tx_SOURCES = bitcoin-tx.cpp
bitcoin_tx_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
bitcoin_tx_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bitcoin_tx_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
lbrycrd_tx_SOURCES = bitcoin-tx.cpp
lbrycrd_tx_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
lbrycrd_tx_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
lbrycrd_tx_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
if TARGET_WINDOWS
bitcoin_tx_SOURCES += bitcoin-tx-res.rc
lbrycrd_tx_SOURCES += bitcoin-tx-res.rc
endif
bitcoin_tx_LDADD = \
lbrycrd_tx_LDADD = \
$(LIBUNIVALUE) \
$(LIBBITCOIN_COMMON) \
$(LIBBITCOIN_UTIL) \
@ -500,7 +519,7 @@ bitcoin_tx_LDADD = \
$(LIBBITCOIN_CRYPTO) \
$(LIBSECP256K1)
bitcoin_tx_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS)
lbrycrd_tx_LDADD += $(BOOST_LIBS) $(ICU_LIBS) $(CRYPTO_LIBS)
#
# bitcoinconsensus library #

View file

@ -32,7 +32,7 @@ bench_bench_bitcoin_SOURCES = \
nodist_bench_bench_bitcoin_SOURCES = $(GENERATED_BENCH_FILES)
bench_bench_bitcoin_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CLFAGS) $(EVENT_PTHREADS_CFLAGS) -I$(builddir)/bench/
bench_bench_bitcoin_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CLFAGS) $(EVENT_PTHREADS_CFLAGS) $(BOOST_CPPFLAGS) -I$(builddir)/bench/
bench_bench_bitcoin_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bench_bench_bitcoin_LDADD = \
$(LIBBITCOIN_WALLET) \
@ -55,7 +55,7 @@ if ENABLE_WALLET
bench_bench_bitcoin_SOURCES += bench/coin_selection.cpp
endif
bench_bench_bitcoin_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
bench_bench_bitcoin_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
bench_bench_bitcoin_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
CLEAN_BITCOIN_BENCH = bench/*.gcda bench/*.gcno $(GENERATED_BENCH_FILES)

View file

@ -2,7 +2,7 @@
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
bin_PROGRAMS += qt/bitcoin-qt
bin_PROGRAMS += qt/lbrycrd-qt
EXTRA_LIBRARIES += qt/libbitcoinqt.a
# bitcoin qt core #
@ -366,7 +366,7 @@ BITCOIN_RC = qt/res/bitcoin-qt-res.rc
BITCOIN_QT_INCLUDES = -DQT_NO_KEYWORDS
qt_libbitcoinqt_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(BITCOIN_QT_INCLUDES) \
$(QT_INCLUDES) $(QT_DBUS_INCLUDES) $(PROTOBUF_CFLAGS) $(QR_CFLAGS)
$(QT_INCLUDES) $(QT_DBUS_INCLUDES) $(PROTOBUF_CFLAGS) $(QR_CFLAGS) $(SSL_CFLAGS)
qt_libbitcoinqt_a_CXXFLAGS = $(AM_CXXFLAGS) $(QT_PIE_FLAGS)
qt_libbitcoinqt_a_OBJCXXFLAGS = $(AM_OBJCXXFLAGS) $(QT_PIE_FLAGS)
@ -382,37 +382,37 @@ QT_FORMS_H=$(join $(dir $(QT_FORMS_UI)),$(addprefix ui_, $(notdir $(QT_FORMS_UI:
# Most files will depend on the forms and moc files as includes. Generate them
# before anything else.
$(QT_MOC): $(QT_FORMS_H)
$(qt_libbitcoinqt_a_OBJECTS) $(qt_bitcoin_qt_OBJECTS) : | $(QT_MOC)
$(qt_libbitcoinqt_a_OBJECTS) $(qt_lbrycrd_qt_OBJECTS) : | $(QT_MOC)
#Generating these with a half-written protobuf header leads to wacky results.
#This makes sure it's done.
$(QT_MOC): $(PROTOBUF_H)
$(QT_MOC_CPP): $(PROTOBUF_H)
# bitcoin-qt binary #
qt_bitcoin_qt_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(BITCOIN_QT_INCLUDES) \
# lbrycrd-qt binary #
qt_lbrycrd_qt_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(BITCOIN_QT_INCLUDES) \
$(QT_INCLUDES) $(PROTOBUF_CFLAGS) $(QR_CFLAGS)
qt_bitcoin_qt_CXXFLAGS = $(AM_CXXFLAGS) $(QT_PIE_FLAGS)
qt_lbrycrd_qt_CXXFLAGS = $(AM_CXXFLAGS) $(QT_PIE_FLAGS)
qt_bitcoin_qt_SOURCES = qt/bitcoin.cpp
qt_lbrycrd_qt_SOURCES = qt/bitcoin.cpp
if TARGET_DARWIN
qt_bitcoin_qt_SOURCES += $(BITCOIN_MM)
qt_lbrycrd_qt_SOURCES += $(BITCOIN_MM)
endif
if TARGET_WINDOWS
qt_bitcoin_qt_SOURCES += $(BITCOIN_RC)
qt_lbrycrd_qt_SOURCES += $(BITCOIN_RC)
endif
qt_bitcoin_qt_LDADD = qt/libbitcoinqt.a $(LIBBITCOIN_SERVER)
qt_lbrycrd_qt_LDADD = qt/libbitcoinqt.a $(LIBBITCOIN_SERVER)
if ENABLE_WALLET
qt_bitcoin_qt_LDADD += $(LIBBITCOIN_UTIL) $(LIBBITCOIN_WALLET)
qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_UTIL) $(LIBBITCOIN_WALLET)
endif
if ENABLE_ZMQ
qt_bitcoin_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif
qt_bitcoin_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) $(LIBLEVELDB_SSE42) $(LIBMEMENV) \
$(BOOST_LIBS) $(QT_LIBS) $(QT_DBUS_LIBS) $(QR_LIBS) $(PROTOBUF_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) $(LIBLEVELDB_SSE42) $(LIBMEMENV) \
$(BOOST_LIBS) $(QT_LIBS) $(QT_DBUS_LIBS) $(QR_LIBS) $(PROTOBUF_LIBS) $(ICU_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
$(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
qt_bitcoin_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
qt_bitcoin_qt_LIBTOOLFLAGS = $(AM_LIBTOOLFLAGS) --tag CXX
qt_lbrycrd_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
qt_lbrycrd_qt_LIBTOOLFLAGS = $(AM_LIBTOOLFLAGS) --tag CXX
#locale/foo.ts -> locale/foo.qm
QT_QM=$(QT_TS:.ts=.qm)
@ -444,9 +444,9 @@ CLEAN_QT = $(nodist_qt_libbitcoinqt_a_SOURCES) $(QT_QM) $(QT_FORMS_H) qt/*.gcda
CLEANFILES += $(CLEAN_QT)
bitcoin_qt_clean: FORCE
rm -f $(CLEAN_QT) $(qt_libbitcoinqt_a_OBJECTS) $(qt_bitcoin_qt_OBJECTS) qt/bitcoin-qt$(EXEEXT) $(LIBBITCOINQT)
rm -f $(CLEAN_QT) $(qt_libbitcoinqt_a_OBJECTS) $(qt_lbrycrd_qt_OBJECTS) qt/lbrycrd-qt$(EXEEXT) $(LIBBITCOINQT)
bitcoin_qt : qt/bitcoin-qt$(EXEEXT)
bitcoin_qt : qt/lbrycrd-qt$(EXEEXT)
ui_%.h: %.ui
@test -f $(UIC)

View file

@ -2,8 +2,8 @@
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
bin_PROGRAMS += qt/test/test_bitcoin-qt
TESTS += qt/test/test_bitcoin-qt
bin_PROGRAMS += qt/test/test_lbrycrd-qt
TESTS += qt/test/test_lbrycrd-qt
TEST_QT_MOC_CPP = \
qt/test/moc_compattests.cpp \
@ -33,10 +33,10 @@ TEST_BITCOIN_CPP = \
TEST_BITCOIN_H = \
test/test_bitcoin.h
qt_test_test_bitcoin_qt_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(BITCOIN_QT_INCLUDES) \
qt_test_test_lbrycrd_qt_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(BITCOIN_QT_INCLUDES) \
$(QT_INCLUDES) $(QT_TEST_INCLUDES) $(PROTOBUF_CFLAGS)
qt_test_test_bitcoin_qt_SOURCES = \
qt_test_test_lbrycrd_qt_SOURCES = \
qt/test/compattests.cpp \
qt/test/rpcnestedtests.cpp \
qt/test/test_main.cpp \
@ -46,37 +46,37 @@ qt_test_test_bitcoin_qt_SOURCES = \
$(TEST_BITCOIN_CPP) \
$(TEST_BITCOIN_H)
if ENABLE_WALLET
qt_test_test_bitcoin_qt_SOURCES += \
qt_test_test_lbrycrd_qt_SOURCES += \
qt/test/addressbooktests.cpp \
qt/test/paymentservertests.cpp \
qt/test/wallettests.cpp \
wallet/test/wallet_test_fixture.cpp
endif
nodist_qt_test_test_bitcoin_qt_SOURCES = $(TEST_QT_MOC_CPP)
nodist_qt_test_test_lbrycrd_qt_SOURCES = $(TEST_QT_MOC_CPP)
qt_test_test_bitcoin_qt_LDADD = $(LIBBITCOINQT) $(LIBBITCOIN_SERVER)
qt_test_test_lbrycrd_qt_LDADD = $(LIBBITCOINQT) $(LIBBITCOIN_SERVER)
if ENABLE_WALLET
qt_test_test_bitcoin_qt_LDADD += $(LIBBITCOIN_UTIL) $(LIBBITCOIN_WALLET)
qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_UTIL) $(LIBBITCOIN_WALLET)
endif
if ENABLE_ZMQ
qt_test_test_bitcoin_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif
qt_test_test_bitcoin_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) \
qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) \
$(LIBLEVELDB_SSE42) $(LIBMEMENV) $(BOOST_LIBS) $(QT_DBUS_LIBS) $(QT_TEST_LIBS) $(QT_LIBS) \
$(QR_LIBS) $(PROTOBUF_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
$(QR_LIBS) $(PROTOBUF_LIBS) $(ICU_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
$(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
qt_test_test_bitcoin_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
qt_test_test_bitcoin_qt_CXXFLAGS = $(AM_CXXFLAGS) $(QT_PIE_FLAGS)
qt_test_test_lbrycrd_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
qt_test_test_lbrycrd_qt_CXXFLAGS = $(AM_CXXFLAGS) $(QT_PIE_FLAGS)
CLEAN_BITCOIN_QT_TEST = $(TEST_QT_MOC_CPP) qt/test/*.gcda qt/test/*.gcno
CLEANFILES += $(CLEAN_BITCOIN_QT_TEST)
test_bitcoin_qt : qt/test/test_bitcoin-qt$(EXEEXT)
test_lbrycrd_qt : qt/test/test_bitcoin-qt$(EXEEXT)
test_bitcoin_qt_check : qt/test/test_bitcoin-qt$(EXEEXT) FORCE
test_lbrycrd_qt_check : qt/test/test_bitcoin-qt$(EXEEXT) FORCE
$(MAKE) check-TESTS TESTS=$^
test_bitcoin_qt_clean: FORCE
rm -f $(CLEAN_BITCOIN_QT_TEST) $(qt_test_test_bitcoin_qt_OBJECTS)
test_lbrycrd_qt_clean: FORCE
rm -f $(CLEAN_BITCOIN_QT_TEST) $(qt_test_test_lbrycrd_qt_OBJECTS)

View file

@ -2,13 +2,14 @@
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
bin_PROGRAMS += test/test_bitcoin
noinst_PROGRAMS += test/test_bitcoin_fuzzy
bin_PROGRAMS += test/test_lbrycrd
noinst_PROGRAMS += test/test_lbrycrd_fuzzy
TEST_SRCDIR = test
TEST_BINARY=test/test_bitcoin$(EXEEXT)
TEST_BINARY=test/test_lbrycrd$(EXEEXT)
JSON_TEST_FILES = \
test/data/base58_encode_decode.json \
test/data/base58_keys_valid.json \
test/data/key_io_valid.json \
test/data/key_io_invalid.json \
test/data/script_tests.json \
@ -40,9 +41,11 @@ BITCOIN_TESTS =\
test/blockchain_tests.cpp \
test/blockencodings_tests.cpp \
test/bloom_tests.cpp \
test/Checkpoints_tests.cpp \
test/bswap_tests.cpp \
test/checkqueue_tests.cpp \
test/coins_tests.cpp \
test/compilerbug_tests.cpp \
test/compress_tests.cpp \
test/crypto_tests.cpp \
test/cuckoocache_tests.cpp \
@ -61,10 +64,19 @@ BITCOIN_TESTS =\
test/miner_tests.cpp \
test/multisig_tests.cpp \
test/net_tests.cpp \
test/claimtriecache_tests.cpp \
test/claimtriebranching_tests.cpp \
test/claimtrieexpirationfork_tests.cpp \
test/claimtriefixture.cpp \
test/claimtriehashfork_tests.cpp \
test/claimtrienormalization_tests.cpp \
test/claimtrierpc_tests.cpp \
test/nameclaim_tests.cpp \
test/netbase_tests.cpp \
test/pmt_tests.cpp \
test/policyestimator_tests.cpp \
test/pow_tests.cpp \
test/prefixtrie_tests.cpp \
test/prevector_tests.cpp \
test/raii_event_tests.cpp \
test/random_tests.cpp \
@ -95,6 +107,7 @@ BITCOIN_TESTS =\
if ENABLE_WALLET
BITCOIN_TESTS += \
wallet/test/accounting_tests.cpp \
wallet/test/claim_rpc_tests.cpp \
wallet/test/db_tests.cpp \
wallet/test/psbt_wallet_tests.cpp \
wallet/test/wallet_tests.cpp \
@ -106,32 +119,32 @@ BITCOIN_TEST_SUITE += \
wallet/test/wallet_test_fixture.h
endif
test_test_bitcoin_SOURCES = $(BITCOIN_TEST_SUITE) $(BITCOIN_TESTS) $(JSON_TEST_FILES) $(RAW_TEST_FILES)
test_test_bitcoin_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(TESTDEFS) $(EVENT_CFLAGS)
test_test_bitcoin_LDADD =
test_test_lbrycrd_SOURCES = $(BITCOIN_TEST_SUITE) $(BITCOIN_TESTS) $(JSON_TEST_FILES) $(RAW_TEST_FILES)
test_test_lbrycrd_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(TESTDEFS) $(EVENT_CFLAGS) $(ICU_CPPFLAGS)
test_test_lbrycrd_LDADD =
if ENABLE_WALLET
test_test_bitcoin_LDADD += $(LIBBITCOIN_WALLET)
test_test_lbrycrd_LDADD += $(LIBBITCOIN_WALLET)
endif
test_test_bitcoin_LDADD += $(LIBBITCOIN_SERVER) $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) \
test_test_lbrycrd_LDADD += $(LIBBITCOIN_SERVER) $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) \
$(LIBLEVELDB) $(LIBLEVELDB_SSE42) $(LIBMEMENV) $(BOOST_LIBS) $(BOOST_UNIT_TEST_FRAMEWORK_LIB) $(LIBSECP256K1) $(EVENT_LIBS) $(EVENT_PTHREADS_LIBS)
test_test_bitcoin_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
test_test_lbrycrd_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
test_test_bitcoin_LDADD += $(LIBBITCOIN_CONSENSUS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS)
test_test_bitcoin_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS) -static
test_test_lbrycrd_LDADD += $(LIBBITCOIN_CONSENSUS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS)
test_test_lbrycrd_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS) -static
if ENABLE_ZMQ
test_test_bitcoin_LDADD += $(ZMQ_LIBS)
test_test_lbrycrd_LDADD += $(ZMQ_LIBS)
endif
#
# test_bitcoin_fuzzy binary #
test_test_bitcoin_fuzzy_SOURCES = test/test_bitcoin_fuzzy.cpp
test_test_bitcoin_fuzzy_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
test_test_bitcoin_fuzzy_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
test_test_bitcoin_fuzzy_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
test_test_lbrycrd_fuzzy_SOURCES = test/test_bitcoin_fuzzy.cpp
test_test_lbrycrd_fuzzy_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
test_test_lbrycrd_fuzzy_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
test_test_lbrycrd_fuzzy_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
test_test_bitcoin_fuzzy_LDADD = \
test_test_lbrycrd_fuzzy_LDADD = \
$(LIBUNIVALUE) \
$(LIBBITCOIN_SERVER) \
$(LIBBITCOIN_COMMON) \
@ -143,10 +156,9 @@ test_test_bitcoin_fuzzy_LDADD = \
$(LIBBITCOIN_CRYPTO_SHANI) \
$(LIBSECP256K1)
test_test_bitcoin_fuzzy_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS)
#
test_test_lbrycrd_fuzzy_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS)
nodist_test_test_bitcoin_SOURCES = $(GENERATED_TEST_FILES)
nodist_test_test_lbrycrd_SOURCES = $(GENERATED_TEST_FILES)
$(BITCOIN_TESTS): $(GENERATED_TEST_FILES)
@ -154,13 +166,13 @@ CLEAN_BITCOIN_TEST = test/*.gcda test/*.gcno $(GENERATED_TEST_FILES)
CLEANFILES += $(CLEAN_BITCOIN_TEST)
bitcoin_test: $(TEST_BINARY)
lbrycrd_test: $(TEST_BINARY)
bitcoin_test_check: $(TEST_BINARY) FORCE
lbrycrd_test_check: $(TEST_BINARY) FORCE
$(MAKE) check-TESTS TESTS=$^
bitcoin_test_clean : FORCE
rm -f $(CLEAN_BITCOIN_TEST) $(test_test_bitcoin_OBJECTS) $(TEST_BINARY)
lbrycrd_test_clean : FORCE
rm -f $(CLEAN_BITCOIN_TEST) $(test_test_lbrycrd_OBJECTS) $(TEST_BINARY)
check-local: $(BITCOIN_TESTS:.cpp=.cpp.test)
@echo "Running test/util/bitcoin-util-test.py..."
@ -185,3 +197,12 @@ endif
echo "};};"; \
} > "$@.new" && mv -f "$@.new" "$@"
@echo "Generated $@"
%.raw.h: %.raw
@$(MKDIR_P) $(@D)
@echo "namespace alert_tests{" > $@
@echo "static unsigned const char $(*F)[] = {" >> $@
@$(HEXDUMP) -v -e '8/1 "0x%02x, "' -e '"\n"' $< | $(SED) -e 's/0x ,//g' >> $@
@echo "};};" >> $@
@echo "Generated $@"

View file

@ -98,7 +98,7 @@ bool DeserializeFileDB(const fs::path& path, Data& data)
FILE *file = fsbridge::fopen(path, "rb");
CAutoFile filein(file, SER_DISK, CLIENT_VERSION);
if (filein.IsNull())
return error("%s: Failed to open file %s", __func__, path.string());
return false;
return DeserializeDB(filein, data);
}

View file

@ -23,7 +23,7 @@ static const CAmount CENT = 1000000;
* critical; in unusual circumstances like a(nother) overflow bug that allowed
* for the creation of coins out of thin air modification could lead to a fork.
* */
static const CAmount MAX_MONEY = 21000000 * COIN;
static const CAmount MAX_MONEY = 21000000000 * COIN;
inline bool MoneyRange(const CAmount& nValue) { return (nValue >= 0 && nValue <= MAX_MONEY); }
#endif // BITCOIN_AMOUNT_H
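For context on the raised cap: with COIN equal to 10^8 base units (as in upstream amount.h), 21,000,000,000 * COIN is 2.1 * 10^18, which still sits well below the int64_t limit of roughly 9.22 * 10^18, so CAmount does not overflow. A minimal standalone check, illustrative only and not part of the patch:

    // Illustrative only: confirm the enlarged MAX_MONEY fits in the signed
    // 64-bit CAmount type. Constants mirror amount.h but are redeclared here.
    #include <cstdint>
    #include <limits>

    using CAmount = int64_t;
    static const CAmount COIN = 100000000;                 // 10^8 base units per coin (assumed, as upstream)
    static const CAmount MAX_MONEY = 21000000000 * COIN;   // 2.1e18 base units

    static_assert(MAX_MONEY > 0 &&
                  MAX_MONEY < std::numeric_limits<CAmount>::max(),
                  "MAX_MONEY must remain representable as a CAmount");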

View file

@ -41,7 +41,8 @@ static CTxIn MineBlock(const CScript& coinbase_scriptPubKey)
auto block = PrepareBlock(coinbase_scriptPubKey);
while (!CheckProofOfWork(block->GetHash(), block->nBits, Params().GetConsensus())) {
assert(++block->nNonce);
++block->nNonce;
assert(block->nNonce);
}
bool processed{ProcessNewBlock(Params(), block, true, nullptr)};
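The reason for splitting assert(++block->nNonce) into two statements: assert() expands to nothing when NDEBUG is defined, so a release build would silently drop the increment and the mining loop would spin on the same nonce forever. A standalone illustration of the pitfall, not part of the patch:

    // Illustration only: side effects must not live inside assert(),
    // because -DNDEBUG removes the whole expression.
    #include <cassert>
    #include <cstdint>

    int main() {
        uint32_t nonce = 0;

        // Wrong: with NDEBUG defined, the increment disappears entirely.
        // assert(++nonce);

        // Right: mutate first, then check for wrap-around separately.
        ++nonce;
        assert(nonce && "nonce wrapped around to zero");
        return 0;
    }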

View file

@ -16,14 +16,14 @@ BEGIN
BEGIN
BLOCK "040904E4" // U.S. English - multilingual (hex)
BEGIN
VALUE "CompanyName", "Bitcoin"
VALUE "FileDescription", "bitcoin-cli (JSON-RPC client for " PACKAGE_NAME ")"
VALUE "CompanyName", "LBRY"
VALUE "FileDescription", "lbrycrd-cli (JSON-RPC client for " PACKAGE_NAME ")"
VALUE "FileVersion", VER_FILEVERSION_STR
VALUE "InternalName", "bitcoin-cli"
VALUE "InternalName", "lbrycrd-cli"
VALUE "LegalCopyright", COPYRIGHT_STR
VALUE "LegalTrademarks1", "Distributed under the MIT software license, see the accompanying file COPYING or http://www.opensource.org/licenses/mit-license.php."
VALUE "OriginalFilename", "bitcoin-cli.exe"
VALUE "ProductName", "bitcoin-cli"
VALUE "OriginalFilename", "lbrycrd-cli.exe"
VALUE "ProductName", "lbrycrd-cli"
VALUE "ProductVersion", VER_PRODUCTVERSION_STR
END
END

View file

@ -48,7 +48,7 @@ static void SetupCliArgs()
gArgs.AddArg("-rpcport=<port>", strprintf("Connect to JSON-RPC on <port> (default: %u or testnet: %u)", defaultBaseParams->RPCPort(), testnetBaseParams->RPCPort()), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-rpcuser=<user>", "Username for JSON-RPC connections", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-rpcwait", "Wait for RPC server to start", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-rpcwallet=<walletname>", "Send RPC for non-default wallet on RPC server (needs to exactly match corresponding -wallet option passed to bitcoind)", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-rpcwallet=<walletname>", "Send RPC for non-default wallet on RPC server (needs to exactly match corresponding -wallet option passed to lbrycrdd)", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-stdin", "Read extra arguments from standard input, one per line until EOF/Ctrl-D (recommended for sensitive information such as passphrases). When combined with -stdinrpcpass, the first line from standard input is used for the RPC password.", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-stdinrpcpass", "Read RPC password from standard input as a single line. When combined with -stdin, the first line from standard input is used for the RPC password.", false, OptionsCategory::OPTIONS);
@ -107,10 +107,10 @@ static int AppInitRPC(int argc, char* argv[])
std::string strUsage = PACKAGE_NAME " RPC client version " + FormatFullVersion() + "\n";
if (!gArgs.IsArgSet("-version")) {
strUsage += "\n"
"Usage: bitcoin-cli [options] <command> [params] Send command to " PACKAGE_NAME "\n"
"or: bitcoin-cli [options] -named <command> [name=value]... Send command to " PACKAGE_NAME " (with named arguments)\n"
"or: bitcoin-cli [options] help List commands\n"
"or: bitcoin-cli [options] help <command> Get help for a command\n";
"Usage: lbrycrd-cli [options] <command> [params] Send command to " PACKAGE_NAME "\n"
"or: lbrycrd-cli [options] -named <command> [name=value]... Send command to " PACKAGE_NAME " (with named arguments)\n"
"or: lbrycrd-cli [options] help List commands\n"
"or: lbrycrd-cli [options] help <command> Get help for a command\n";
strUsage += "\n" + gArgs.GetHelpMessage();
}
@ -379,7 +379,7 @@ static UniValue CallRPC(BaseRequestHandler *rh, const std::string& strMethod, co
if (response.error != -1) {
responseErrorMessage = strprintf(" (error code %d - \"%s\")", response.error, http_errorstring(response.error));
}
throw CConnectionFailed(strprintf("Could not connect to the server %s:%d%s\n\nMake sure the bitcoind server is running and that you are connecting to the correct RPC port.", host, port, responseErrorMessage));
throw CConnectionFailed(strprintf("Could not connect to the server %s:%d%s\n\nMake sure the lbrycrdd server is running and that you are connecting to the correct RPC port.", host, port, responseErrorMessage));
} else if (response.status == HTTP_UNAUTHORIZED) {
if (failedToGetAuthCookie) {
throw std::runtime_error(strprintf(
@ -470,7 +470,7 @@ static int CommandLineRPC(int argc, char *argv[])
strPrint += "error message:\n"+errMsg.get_str();
if (errCode.isNum() && errCode.get_int() == RPC_WALLET_NOT_SPECIFIED) {
strPrint += "\nTry adding \"-rpcwallet=<filename>\" option to bitcoin-cli command line.";
strPrint += "\nTry adding \"-rpcwallet=<filename>\" option to lbrycrd-cli command line.";
}
}
} else {

View file

@ -16,14 +16,14 @@ BEGIN
BEGIN
BLOCK "040904E4" // U.S. English - multilingual (hex)
BEGIN
VALUE "CompanyName", "Bitcoin"
VALUE "FileDescription", "bitcoin-tx (CLI Bitcoin transaction editor utility)"
VALUE "CompanyName", "LBRY"
VALUE "FileDescription", "lbrycrd-tx (CLI LBRYcrd transaction editor utility)"
VALUE "FileVersion", VER_FILEVERSION_STR
VALUE "InternalName", "bitcoin-tx"
VALUE "InternalName", "lbrycrd-tx"
VALUE "LegalCopyright", COPYRIGHT_STR
VALUE "LegalTrademarks1", "Distributed under the MIT software license, see the accompanying file COPYING or http://www.opensource.org/licenses/mit-license.php."
VALUE "OriginalFilename", "bitcoin-tx.exe"
VALUE "ProductName", "bitcoin-tx"
VALUE "OriginalFilename", "lbrycrd-tx.exe"
VALUE "ProductName", "lbrycrd-tx"
VALUE "ProductVersion", VER_PRODUCTVERSION_STR
END
END

View file

@ -98,9 +98,9 @@ static int AppInitRawTx(int argc, char* argv[])
if (argc < 2 || HelpRequested(gArgs)) {
// First part of help message is specific to this utility
std::string strUsage = PACKAGE_NAME " bitcoin-tx utility version " + FormatFullVersion() + "\n\n" +
"Usage: bitcoin-tx [options] <hex-tx> [commands] Update hex-encoded bitcoin transaction\n" +
"or: bitcoin-tx [options] -create [commands] Create hex-encoded bitcoin transaction\n" +
std::string strUsage = PACKAGE_NAME " lbrycrd-tx utility version " + FormatFullVersion() + "\n\n" +
"Usage: lbrycrd-tx [options] <hex-tx> [commands] Update hex-encoded lbrycrd transaction\n" +
"or: lbrycrd-tx [options] -create [commands] Create hex-encoded lbrycrd transaction\n" +
"\n";
strUsage += gArgs.GetHelpMessage();

View file

@ -16,14 +16,14 @@ BEGIN
BEGIN
BLOCK "040904E4" // U.S. English - multilingual (hex)
BEGIN
VALUE "CompanyName", "Bitcoin"
VALUE "FileDescription", "bitcoind (Bitcoin node with a JSON-RPC server)"
VALUE "CompanyName", "LBRY"
VALUE "FileDescription", "lbrycrdd (LBRYcrd node with a JSON-RPC server)"
VALUE "FileVersion", VER_FILEVERSION_STR
VALUE "InternalName", "bitcoind"
VALUE "InternalName", "lbrycrdd"
VALUE "LegalCopyright", COPYRIGHT_STR
VALUE "LegalTrademarks1", "Distributed under the MIT software license, see the accompanying file COPYING or http://www.opensource.org/licenses/mit-license.php."
VALUE "OriginalFilename", "bitcoind.exe"
VALUE "ProductName", "bitcoind"
VALUE "OriginalFilename", "lbrycrdd.exe"
VALUE "ProductName", "lbrycrdd"
VALUE "ProductVersion", VER_PRODUCTVERSION_STR
END
END

View file

@ -61,7 +61,7 @@ static bool AppInit(int argc, char* argv[])
//
// Parameters
//
// If Qt is used, parameters/bitcoin.conf are parsed in qt/bitcoin.cpp's main()
// If Qt is used, parameters/lbrycrd.conf are parsed in qt/bitcoin.cpp's main()
SetupServerArgs();
std::string error;
if (!gArgs.ParseParameters(argc, argv, error)) {
@ -79,7 +79,7 @@ static bool AppInit(int argc, char* argv[])
}
else
{
strUsage += "\nUsage: bitcoind [options] Start " PACKAGE_NAME " Daemon\n";
strUsage += "\nUsage: lbrycrdd [options] Start " PACKAGE_NAME " Daemon\n";
strUsage += "\n" + gArgs.GetHelpMessage();
}
@ -109,12 +109,12 @@ static bool AppInit(int argc, char* argv[])
// Error out when loose non-argument tokens are encountered on command line
for (int i = 1; i < argc; i++) {
if (!IsSwitchChar(argv[i][0])) {
fprintf(stderr, "Error: Command line contains unexpected token '%s', see bitcoind -h for a list of options.\n", argv[i]);
fprintf(stderr, "Error: Command line contains unexpected token '%s', see lbrycrdd -h for a list of options.\n", argv[i]);
return false;
}
}
// -server defaults to true for bitcoind but not for the GUI so do this here
// -server defaults to true for lbrycrdd but not for the GUI so do this here
gArgs.SoftSetBoolArg("-server", true);
// Set this early so that parameter interactions go to console
InitLogging();
@ -141,7 +141,7 @@ static bool AppInit(int argc, char* argv[])
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
#endif
fprintf(stdout, "Bitcoin server starting\n");
fprintf(stdout, "LBRYcrd server starting\n");
// Daemonize
if (daemon(1, 0)) { // don't chdir (1), do close FDs (0)
@ -185,7 +185,7 @@ int main(int argc, char* argv[])
{
SetupEnvironment();
// Connect bitcoind signal handlers
// Connect lbrycrdd signal handlers
noui_connect();
return (AppInit(argc, argv) ? EXIT_SUCCESS : EXIT_FAILURE);

View file

@ -3,6 +3,7 @@
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <bloom.h>
#include <nameclaim.h>
#include <primitives/transaction.h>
#include <hash.h>

View file

@ -209,6 +209,7 @@ public:
//! block header
int32_t nVersion;
uint256 hashMerkleRoot;
uint256 hashClaimTrie;
uint32_t nTime;
uint32_t nBits;
uint32_t nNonce;
@ -237,6 +238,7 @@ public:
nVersion = 0;
hashMerkleRoot = uint256();
hashClaimTrie = uint256();
nTime = 0;
nBits = 0;
nNonce = 0;
@ -253,6 +255,7 @@ public:
nVersion = block.nVersion;
hashMerkleRoot = block.hashMerkleRoot;
hashClaimTrie = block.hashClaimTrie;
nTime = block.nTime;
nBits = block.nBits;
nNonce = block.nNonce;
@ -283,6 +286,7 @@ public:
if (pprev)
block.hashPrevBlock = pprev->GetBlockHash();
block.hashMerkleRoot = hashMerkleRoot;
block.hashClaimTrie = hashClaimTrie;
block.nTime = nTime;
block.nBits = nBits;
block.nNonce = nNonce;
@ -294,6 +298,11 @@ public:
return *phashBlock;
}
uint256 GetBlockPoWHash() const
{
return GetBlockHeader().GetPoWHash();
}
int64_t GetBlockTime() const
{
return (int64_t)nTime;
@ -322,9 +331,10 @@ public:
std::string ToString() const
{
return strprintf("CBlockIndex(pprev=%p, nHeight=%d, merkle=%s, hashBlock=%s)",
return strprintf("CBlockIndex(pprev=%p, nHeight=%d, merkle=%s, claimtrie=%s, hashBlock=%s)",
pprev, nHeight,
hashMerkleRoot.ToString(),
hashClaimTrie.ToString(),
GetBlockHash().ToString());
}
@ -402,6 +412,7 @@ public:
READWRITE(this->nVersion);
READWRITE(hashPrev);
READWRITE(hashMerkleRoot);
READWRITE(hashClaimTrie);
READWRITE(nTime);
READWRITE(nBits);
READWRITE(nNonce);
@ -413,19 +424,20 @@ public:
block.nVersion = nVersion;
block.hashPrevBlock = hashPrev;
block.hashMerkleRoot = hashMerkleRoot;
block.hashClaimTrie = hashClaimTrie;
block.nTime = nTime;
block.nBits = nBits;
block.nNonce = nNonce;
return block.GetHash();
}
std::string ToString() const
{
std::string str = "CDiskBlockIndex(";
str += CBlockIndex::ToString();
str += strprintf("\n hashBlock=%s, hashPrev=%s)",
str += strprintf("\n hashBlock=%s, hashClaimTrie=%s, hashPrev=%s)",
GetBlockHash().ToString(),
hashClaimTrie.ToString(),
hashPrev.ToString());
return str;
}

View file

@ -14,6 +14,38 @@
#include <chainparamsseeds.h>
//#define FIND_GENESIS
#define GENESIS_MERKLE_ROOT "b8211c82c3d15bcd78bba57005b86fed515149a53a425eb592c07af99fe559cc"
#define MAINNET_GENESIS_HASH "9c89283ba0f3227f6c03b70216b9f665f0118d5e0fa729cedf4fb34d6a34f463"
#define MAINNET_GENESIS_NONCE 1287
#define TESTNET_GENESIS_HASH "9c89283ba0f3227f6c03b70216b9f665f0118d5e0fa729cedf4fb34d6a34f463"
#define TESTNET_GENESIS_NONCE 1287
#define REGTEST_GENESIS_HASH "6e3fcf1299d4ec5d79c3a4c91d624a4acf9e2e173d95a1a0504f677669687556"
#define REGTEST_GENESIS_NONCE 1
bool CheckProofOfWork2(uint256 hash, unsigned int nBits, const Consensus::Params& params)
{
bool fNegative;
bool fOverflow;
arith_uint256 bnTarget;
bnTarget.SetCompact(nBits, &fNegative, &fOverflow);
// Check range
if (fNegative || bnTarget == 0 || fOverflow || bnTarget > UintToArith256(params.powLimit))
return error("CheckProofOfWork(): nBits below minimum work");
// Check proof of work matches claimed amount
if (UintToArith256(hash) > bnTarget)
return error("CheckProofOfWork(): hash doesn't match nBits");
return true;
}
static CBlock CreateGenesisBlock(const char* pszTimestamp, const CScript& genesisOutputScript, uint32_t nTime, uint32_t nNonce, uint32_t nBits, int32_t nVersion, const CAmount& genesisReward)
{
CMutableTransaction txNew;
@ -22,6 +54,9 @@ static CBlock CreateGenesisBlock(const char* pszTimestamp, const CScript& genesi
txNew.vout.resize(1);
txNew.vin[0].scriptSig = CScript() << 486604799 << CScriptNum(4) << std::vector<unsigned char>((const unsigned char*)pszTimestamp, (const unsigned char*)pszTimestamp + strlen(pszTimestamp));
txNew.vout[0].nValue = genesisReward;
//txNew.vout[0].scriptPubKey = CScript() << ParseHex("04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f") << OP_CHECKSIG;
//txNew.vout[0].scriptPubKey = CScript() << ParseHex("0425caecb9fbf6cf50979644e85c11e3ec9007fd477fab9683648c6539e59b59c3a4d9b9c0b552c37eee6476f3e0d8425ac0346fe69ad61628b8c340d42fbfa3fd") << OP_CHECKSIG;
//txNew.vout[0].scriptPubKey = CScript() << OP_DUP << OP_HASH160 << ParseHex("e5ff2d9e3a254622ae493573169c0fa94c82fe4f") << OP_EQUALVERIFY << OP_CHECKSIG;
txNew.vout[0].scriptPubKey = genesisOutputScript;
CBlock genesis;
@@ -32,6 +67,21 @@ static CBlock CreateGenesisBlock(const char* pszTimestamp, const CScript& genesi
genesis.vtx.push_back(MakeTransactionRef(std::move(txNew)));
genesis.hashPrevBlock.SetNull();
genesis.hashMerkleRoot = BlockMerkleRoot(genesis);
genesis.hashClaimTrie = uint256S("0x0000000000000000000000000000000000000000000000000000000000000001");
#ifdef FIND_GENESIS
while (true)
{
genesis.nNonce += 1;
if (CheckProofOfWork2(genesis.GetPoWHash(), nBits, consensus))
{
std::cout << "nonce: " << genesis.nNonce << std::endl;
std::cout << "hex: " << genesis.GetHash().GetHex() << std::endl;
std::cout << "pow hash: " << genesis.GetPoWHash().GetHex() << std::endl;
break;
}
}
#endif
return genesis;
}
@@ -48,8 +98,8 @@ static CBlock CreateGenesisBlock(const char* pszTimestamp, const CScript& genesi
*/
static CBlock CreateGenesisBlock(uint32_t nTime, uint32_t nNonce, uint32_t nBits, int32_t nVersion, const CAmount& genesisReward)
{
const char* pszTimestamp = "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks";
const CScript genesisOutputScript = CScript() << ParseHex("04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f") << OP_CHECKSIG;
const char* pszTimestamp = "insert timestamp string";//"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks";
const CScript genesisOutputScript = CScript() << OP_DUP << OP_HASH160 << ParseHex("345991dbf57bfb014b87006acdfafbfc5fe8292f") << OP_EQUALVERIFY << OP_CHECKSIG;
return CreateGenesisBlock(pszTimestamp, genesisOutputScript, nTime, nNonce, nBits, nVersion, genesisReward);
}
@@ -73,20 +123,34 @@ void CChainParams::UpdateVersionBitsParameters(Consensus::DeploymentPos d, int64
class CMainParams : public CChainParams {
public:
CMainParams() {
strNetworkID = "main";
consensus.nSubsidyHalvingInterval = 210000;
strNetworkID = CBaseChainParams::MAIN;
consensus.nSubsidyLevelInterval = 1<<5;
consensus.nMajorityEnforceBlockUpgrade = 750;
consensus.nMajorityRejectBlockOutdated = 950;
consensus.nMajorityWindow = 1000;
consensus.BIP16Exception = uint256S("0x00000000000002dc756eebf4f49723ed8d30cc28a5f108eb94b1ba88ac4f9c22");
consensus.BIP34Height = 227931;
consensus.BIP34Hash = uint256S("0x000000000000024b89b42a942fe0d9fea3bb44ab7bd1b19115dd6a759c0808b8");
consensus.BIP65Height = 388381; // 000000000000000004c2b624ed5d7756c508d90fd0da2c7c679febfa6c4735f0
consensus.BIP66Height = 363725; // 00000000000000000379eaa19dce8c9b722d46ae6a57c2f1a988119488b50931
consensus.powLimit = uint256S("00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 14 * 24 * 60 * 60; // two weeks
consensus.nPowTargetSpacing = 10 * 60;
consensus.BIP34Height = 1;
consensus.BIP34Hash = uint256S("0xdecb9e2cca03a419fd9cca0cb2b1d5ad11b088f22f8f38556d93ac4358b86c24");
// FIXME: adjust heights
consensus.BIP65Height = 200000;
consensus.BIP66Height = 200000;
consensus.powLimit = uint256S("0000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 150; //retarget every block
consensus.nPowTargetSpacing = 150;
consensus.nOriginalClaimExpirationTime = 262974;
consensus.nExtendedClaimExpirationTime = 2102400;
consensus.nExtendedClaimExpirationForkHeight = 400155;
consensus.nAllowMinDiffMinHeight = -1;
consensus.nAllowMinDiffMaxHeight = -1;
consensus.nNormalizedNameForkHeight = 539940; // targeting 21 March 2019
consensus.nMinTakeoverWorkaroundHeight = 496850;
consensus.nMaxTakeoverWorkaroundHeight = 658300; // targeting 30 Oct 2019
consensus.nWitnessForkHeight = 680770; // targeting 11 Dec 2019
consensus.nAllClaimsInMerkleForkHeight = 658310; // targeting 30 Oct 2019
consensus.fPowAllowMinDifficultyBlocks = false;
consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1916; // 95% of 2016
consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing
consensus.nRuleChangeActivationThreshold = 1916; // 95% of a half week
consensus.nMinerConfirmationWindow = 2016;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 1199145601; // January 1, 2008
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 1230767999; // December 31, 2008
@@ -96,84 +160,74 @@ public:
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1462060800; // May 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
// Deployment of SegWit (BIP141, BIP143, and BIP147)
// Deployment of SegWit (BIP141, BIP143, and BIP147) -- Unused (see nWitnessForkHeight).
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1;
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1479168000; // November 15th, 2016.
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1510704000; // November 15th, 2017.
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1547942400; // Jan 20, 2019
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1548288000; // Jan 24, 2019
// The best chain should have at least this much work.
consensus.nMinimumChainWork = uint256S("0x0000000000000000000000000000000000000000028822fef1c230963535a90d");
consensus.nMinimumChainWork = uint256S("000000000000000000000000000000000000000000000499ed6684d1bf6f6fd3"); //946000
// By default assume that the signatures in ancestors of this block are valid.
consensus.defaultAssumeValid = uint256S("0x0000000000000000002e63058c023a9a1de233554f28c7b21380b6c9003f36a8"); //534292
consensus.defaultAssumeValid = uint256S("0d3b537afe49820e1c6efc555463f955251b1293c6e5130137e1e25744431172"); //946000
/**
* The message start string is designed to be unlikely to occur in normal data.
* The characters are rarely used upper ASCII, not valid as UTF-8, and produce
* a large 32-bit integer with any alignment.
*/
pchMessageStart[0] = 0xf9;
pchMessageStart[1] = 0xbe;
pchMessageStart[2] = 0xb4;
pchMessageStart[3] = 0xd9;
nDefaultPort = 8333;
pchMessageStart[0] = 0xfa;
pchMessageStart[1] = 0xe4;
pchMessageStart[2] = 0xaa;
pchMessageStart[3] = 0xf1;
nDefaultPort = 9246;
nPruneAfterHeight = 100000;
genesis = CreateGenesisBlock(1231006505, 2083236893, 0x1d00ffff, 1, 50 * COIN);
genesis = CreateGenesisBlock(1446058291, MAINNET_GENESIS_NONCE, 0x1f00ffff, 1, 400000000 * COIN);
consensus.hashGenesisBlock = genesis.GetHash();
assert(consensus.hashGenesisBlock == uint256S("0x000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"));
assert(genesis.hashMerkleRoot == uint256S("0x4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"));
#ifdef FIND_GENESIS
std::cout << "hex: " << consensus.hashGenesisBlock.GetHex() << std::endl;
std::cout << "merkle root: " << genesis.hashMerkleRoot.GetHex() << std::endl;
#else
assert(consensus.hashGenesisBlock == uint256S(MAINNET_GENESIS_HASH));
assert(genesis.hashMerkleRoot == uint256S(GENESIS_MERKLE_ROOT));
#endif
vSeeds.clear();
vFixedSeeds.clear();
// Note that of those which support the service bits prefix, most only support a subset of
// possible options.
// This is fine at runtime as we'll fall back to using them as a oneshot if they don't support the
// service bits we want, but we should get them updated to support all service bits wanted by any
// release ASAP to avoid it where possible.
vSeeds.emplace_back("seed.bitcoin.sipa.be"); // Pieter Wuille, only supports x1, x5, x9, and xd
vSeeds.emplace_back("dnsseed.bluematt.me"); // Matt Corallo, only supports x9
vSeeds.emplace_back("dnsseed.bitcoin.dashjr.org"); // Luke Dashjr
vSeeds.emplace_back("seed.bitcoinstats.com"); // Christian Decker, supports x1 - xf
vSeeds.emplace_back("seed.bitcoin.jonasschnelli.ch"); // Jonas Schnelli, only supports x1, x5, x9, and xd
vSeeds.emplace_back("seed.btc.petertodd.org"); // Peter Todd, only supports x1, x5, x9, and xd
vSeeds.emplace_back("seed.bitcoin.sprovoost.nl"); // Sjors Provoost
vSeeds.emplace_back("dnsseed1.lbry.io"); // LBRY Inc
vSeeds.emplace_back("dnsseed2.lbry.io"); // LBRY Inc
vSeeds.emplace_back("dnsseed3.lbry.io"); // LBRY Inc
vSeeds.emplace_back("seed.lbry.grin.io"); // Grin
vSeeds.emplace_back("seed.allaboutlbc.com"); // Madiator2011
base58Prefixes[PUBKEY_ADDRESS] = std::vector<unsigned char>(1,0);
base58Prefixes[SCRIPT_ADDRESS] = std::vector<unsigned char>(1,5);
base58Prefixes[SECRET_KEY] = std::vector<unsigned char>(1,128);
base58Prefixes[PUBKEY_ADDRESS] = std::vector<unsigned char>(1, 0x55);
base58Prefixes[SCRIPT_ADDRESS] = std::vector<unsigned char>(1, 0x7a);
base58Prefixes[SECRET_KEY] = std::vector<unsigned char>(1, 0x1c);
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x88, 0xB2, 0x1E};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x88, 0xAD, 0xE4};
bech32_hrp = "bc";
bech32_hrp = "lbc";
vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_main, pnSeed6_main + ARRAYLEN(pnSeed6_main));
fMiningRequiresPeers = true;
fDefaultConsistencyChecks = false;
fRequireStandard = true;
fMineBlocksOnDemand = false;
checkpointData = {
{
{ 11111, uint256S("0x0000000069e244f73d78e8fd29ba2fd2ed618bd6fa2ee92559f542fdb26e7c1d")},
{ 33333, uint256S("0x000000002dd5588a74784eaa7ab0507a18ad16a236e7b1ce69f00d7ddfb5d0a6")},
{ 74000, uint256S("0x0000000000573993a3c9e41ce34471c079dcf5f52a0e824a81e7f953b8661a20")},
{105000, uint256S("0x00000000000291ce28027faea320c8d2b054b2e0fe44a773f3eefb151d6bdc97")},
{134444, uint256S("0x00000000000005b12ffd4cd315cd34ffd4a594f430ac814c91184a0d42d2b0fe")},
{168000, uint256S("0x000000000000099e61ea72015e79632f216fe6cb33d7899acb35b75c8303b763")},
{193000, uint256S("0x000000000000059f452a5f7340de6682a977387c17010ff6e6c3bd83ca8b1317")},
{210000, uint256S("0x000000000000048b95347e83192f69cf0366076336c639f9b7228e9ba171342e")},
{216116, uint256S("0x00000000000001b4f4b433e81ee46494af945cf96014816a4e2370f11b23df4e")},
{225430, uint256S("0x00000000000001c108384350f74090433e7fcf79a606b8e797f065b130575932")},
{250000, uint256S("0x000000000000003887df1f29024b06fc2200b55f8af8f35453d7be294df2d214")},
{279000, uint256S("0x0000000000000001ae8c72a0b0c301f67e3afca10e819efa9041e458e9bd7e40")},
{295000, uint256S("0x00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983")},
{ 4000, uint256S("0xa6bbb48f5343eb9b0287c22f3ea8b29f36cf10794a37f8a925a894d6f4519913") },
}
};
chainTxData = ChainTxData{
// Data from rpc: getchaintxstats 4096 0000000000000000002e63058c023a9a1de233554f28c7b21380b6c9003f36a8
/* nTime */ 1532884444,
/* nTxCount */ 331282217,
/* dTxRate */ 2.4
1467272478, 4146, 600.0
/* // Data from rpc: getchaintxstats 4096 0000000000000000002e63058c023a9a1de233554f28c7b21380b6c9003f36a8 */
/* /\* nTime *\/ 1532884444, */
/* /\* nTxCount *\/ 331282217, */
/* /\* dTxRate *\/ 2.4 */
};
/* disable fallback fee on mainnet */
@@ -187,16 +241,30 @@ public:
class CTestNetParams : public CChainParams {
public:
CTestNetParams() {
strNetworkID = "test";
consensus.nSubsidyHalvingInterval = 210000;
strNetworkID = CBaseChainParams::TESTNET;
consensus.nSubsidyLevelInterval = 1 << 5;
consensus.nMajorityEnforceBlockUpgrade = 51;
consensus.nMajorityRejectBlockOutdated = 75;
consensus.nMajorityWindow = 100;
consensus.BIP16Exception = uint256S("0x00000000dd30457c001f4095d208cc1296b0eed002427aa599874af7a432b105");
consensus.BIP34Height = 21111;
consensus.BIP34Hash = uint256S("0x0000000023b3a96d3484e5abb3755c413e7d41500f8e2a5c3f0dd01299cd8ef8");
consensus.BIP65Height = 581885; // 00000000007f6655f22f98e72ed80d8b06dc761d5da09df0fa1dc4be4f861eb6
consensus.BIP66Height = 330776; // 000000002104c8c45e99a8853285a3b592602a3ccde2b832481da85e9e4ba182
consensus.powLimit = uint256S("00000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 14 * 24 * 60 * 60; // two weeks
consensus.nPowTargetSpacing = 10 * 60;
// FIXME: adjust heights
consensus.BIP65Height = 1200000;
consensus.BIP66Height = 1200000;
consensus.powLimit = uint256S("0000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 150;
consensus.nPowTargetSpacing = 150;
consensus.nOriginalClaimExpirationTime = 262974;
consensus.nExtendedClaimExpirationTime = 2102400;
consensus.nExtendedClaimExpirationForkHeight = 278160;
consensus.nAllowMinDiffMinHeight = 277299;
consensus.nAllowMinDiffMaxHeight = 1100000;
consensus.nNormalizedNameForkHeight = 993380; // targeting, 21 Feb 2019
consensus.nMinTakeoverWorkaroundHeight = 99;
consensus.nMaxTakeoverWorkaroundHeight = 1198550; // targeting 30 Sep 2019
consensus.nWitnessForkHeight = 1198600;
consensus.nAllClaimsInMerkleForkHeight = 1198560; // targeting 30 Sep 2019
consensus.fPowAllowMinDifficultyBlocks = true;
consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1512; // 75% for testchains
@@ -210,36 +278,38 @@ public:
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1456790400; // March 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
// Deployment of SegWit (BIP141, BIP143, and BIP147)
// Deployment of SegWit (BIP141, BIP143, and BIP147) -- Unused (see nWitnessForkHeight).
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1;
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1462060800; // May 1st 2016
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1493596800; // May 1st 2017
// The best chain should have at least this much work.
consensus.nMinimumChainWork = uint256S("0x00000000000000000000000000000000000000000000007dbe94253893cbd463");
consensus.nMinimumChainWork = uint256S("0x000000000000000000000000000000000000000000000000000a0c3931735170");
// By default assume that the signatures in ancestors of this block are valid.
consensus.defaultAssumeValid = uint256S("0x0000000000000037a8cd3e06cd5edbfe9dd1dbcc5dacab279376ef7cfc2b4c75"); //1354312
consensus.defaultAssumeValid = uint256S("9812b0bcb7e889e58d999c897e9eaddb2dab98122ff1cfb238ebeef5351bd48c"); // 1
pchMessageStart[0] = 0x0b;
pchMessageStart[1] = 0x11;
pchMessageStart[2] = 0x09;
pchMessageStart[3] = 0x07;
nDefaultPort = 18333;
pchMessageStart[0] = 0xfa;
pchMessageStart[1] = 0xe4;
pchMessageStart[2] = 0xaa;
pchMessageStart[3] = 0xe1;
nDefaultPort = 19246;
nPruneAfterHeight = 1000;
genesis = CreateGenesisBlock(1296688602, 414098458, 0x1d00ffff, 1, 50 * COIN);
genesis = CreateGenesisBlock(1446058291, TESTNET_GENESIS_NONCE, 0x1f00ffff, 1, 400000000 * COIN);
consensus.hashGenesisBlock = genesis.GetHash();
assert(consensus.hashGenesisBlock == uint256S("0x000000000933ea01ad0ee984209779baaec3ced90fa3f408719526f8d77f4943"));
assert(genesis.hashMerkleRoot == uint256S("0x4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"));
#ifdef FIND_GENESIS
std::cout << "testnet genesis hash: " << genesis.GetHash().GetHex() << std::endl;
std::cout << "testnet merkle hash: " << genesis.hashMerkleRoot.GetHex() << std::endl;
#else
assert(consensus.hashGenesisBlock == uint256S(TESTNET_GENESIS_HASH));
assert(genesis.hashMerkleRoot == uint256S(GENESIS_MERKLE_ROOT));
#endif
vFixedSeeds.clear();
vSeeds.clear();
// nodes with support for servicebits filtering should be at the top
vSeeds.emplace_back("testnet-seed.bitcoin.jonasschnelli.ch");
vSeeds.emplace_back("seed.tbtc.petertodd.org");
vSeeds.emplace_back("seed.testnet.bitcoin.sprovoost.nl");
vSeeds.emplace_back("testnet-seed.bluematt.me"); // Just a static list of stable node(s), only supports x9
vSeeds.emplace_back("testdnsseed1.lbry.io");
vSeeds.emplace_back("testdnsseed2.lbry.io");
base58Prefixes[PUBKEY_ADDRESS] = std::vector<unsigned char>(1,111);
base58Prefixes[SCRIPT_ADDRESS] = std::vector<unsigned char>(1,196);
@@ -247,18 +317,19 @@ public:
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94};
bech32_hrp = "tb";
bech32_hrp = "tlbc";
vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_test, pnSeed6_test + ARRAYLEN(pnSeed6_test));
fMiningRequiresPeers = true;
fDefaultConsistencyChecks = false;
fRequireStandard = false;
fMineBlocksOnDemand = false;
fTestnetToBeDeprecatedFieldRPC = true;
checkpointData = {
{
{546, uint256S("000000002a936ca763904c3c35fce2f3556c559c0214345d31b1bcebf76acb70")},
{0, uint256S(TESTNET_GENESIS_HASH)},
}
};
@@ -280,18 +351,29 @@ public:
class CRegTestParams : public CChainParams {
public:
CRegTestParams() {
strNetworkID = "regtest";
consensus.nSubsidyHalvingInterval = 150;
strNetworkID = CBaseChainParams::REGTEST;
consensus.nSubsidyLevelInterval = 1 << 5;
consensus.BIP16Exception = uint256();
consensus.BIP34Height = 100000000; // BIP34 has not activated on regtest (far in the future so block v1 are not rejected in tests)
consensus.BIP34Height = 1000; // BIP34 is needed for validation_block_tests
consensus.BIP34Hash = uint256();
// FIXME: update heights and add activation tests
consensus.BIP65Height = 1351; // BIP65 activated on regtest (Used in rpc activation tests)
consensus.BIP66Height = 1251; // BIP66 activated on regtest (Used in rpc activation tests)
consensus.powLimit = uint256S("7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 14 * 24 * 60 * 60; // two weeks
consensus.nPowTargetSpacing = 10 * 60;
consensus.fPowAllowMinDifficultyBlocks = true;
consensus.fPowNoRetargeting = true;
consensus.nPowTargetTimespan = 1;//14 * 24 * 60 * 60; // two weeks
consensus.nPowTargetSpacing = 1;
consensus.nOriginalClaimExpirationTime = 500;
consensus.nExtendedClaimExpirationTime = 600;
consensus.nExtendedClaimExpirationForkHeight = 800;
consensus.nAllowMinDiffMinHeight = -1;
consensus.nAllowMinDiffMaxHeight = -1;
consensus.nNormalizedNameForkHeight = 250; // SDK depends upon this number
consensus.nMinTakeoverWorkaroundHeight = -1;
consensus.nMaxTakeoverWorkaroundHeight = -1;
consensus.nWitnessForkHeight = 150;
consensus.nAllClaimsInMerkleForkHeight = 350;
consensus.fPowAllowMinDifficultyBlocks = false;
consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 108; // 75% for testchains
consensus.nMinerConfirmationWindow = 144; // Faster than normal for regtest (144 instead of 2016)
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
@@ -311,27 +393,34 @@ public:
consensus.defaultAssumeValid = uint256S("0x00");
pchMessageStart[0] = 0xfa;
pchMessageStart[1] = 0xbf;
pchMessageStart[2] = 0xb5;
pchMessageStart[3] = 0xda;
nDefaultPort = 18444;
pchMessageStart[1] = 0xe4;
pchMessageStart[2] = 0xaa;
pchMessageStart[3] = 0xd1;
nDefaultPort = 29246;
nPruneAfterHeight = 1000;
genesis = CreateGenesisBlock(1296688602, 2, 0x207fffff, 1, 50 * COIN);
genesis = CreateGenesisBlock(1446058291, REGTEST_GENESIS_NONCE, 0x207fffff, 1, 400000000 * COIN);
consensus.hashGenesisBlock = genesis.GetHash();
assert(consensus.hashGenesisBlock == uint256S("0x0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206"));
assert(genesis.hashMerkleRoot == uint256S("0x4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"));
#ifdef FIND_GENESIS
std::cout << "regtest genensis hash: " << genesis.GetHash().GetHex() << std::endl;
std::cout << "regtest hashmerkleroot: " << genesis.hashMerkleRoot.GetHex() << std::endl;
#else
assert(consensus.hashGenesisBlock == uint256S(REGTEST_GENESIS_HASH));
assert(genesis.hashMerkleRoot == uint256S(GENESIS_MERKLE_ROOT));
#endif
vFixedSeeds.clear(); //!< Regtest mode doesn't have any fixed seeds.
vSeeds.clear(); //!< Regtest mode doesn't have any DNS seeds.
fMiningRequiresPeers = false;
fDefaultConsistencyChecks = true;
fRequireStandard = false;
fMineBlocksOnDemand = true;
fTestnetToBeDeprecatedFieldRPC = false;
checkpointData = {
{
{0, uint256S("0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206")},
{0, uint256S(REGTEST_GENESIS_HASH)},
}
};
@@ -347,7 +436,7 @@ public:
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94};
bech32_hrp = "bcrt";
bech32_hrp = "rlbc";
/* enable fallback fee on regtest */
m_fallback_fee_enabled = true;

View file

@@ -6,6 +6,7 @@
#ifndef BITCOIN_CHAINPARAMS_H
#define BITCOIN_CHAINPARAMS_H
#include <arith_uint256.h>
#include <chainparamsbase.h>
#include <consensus/params.h>
#include <primitives/block.h>
@@ -94,9 +95,11 @@ protected:
std::string strNetworkID;
CBlock genesis;
std::vector<SeedSpec6> vFixedSeeds;
bool fMiningRequiresPeers;
bool fDefaultConsistencyChecks;
bool fRequireStandard;
bool fMineBlocksOnDemand;
bool fTestnetToBeDeprecatedFieldRPC;
CCheckpointData checkpointData;
ChainTxData chainTxData;
bool m_fallback_fee_enabled;

View file

@@ -11,9 +11,9 @@
#include <assert.h>
const std::string CBaseChainParams::MAIN = "main";
const std::string CBaseChainParams::TESTNET = "test";
const std::string CBaseChainParams::REGTEST = "regtest";
const std::string CBaseChainParams::MAIN = "lbrycrd";
const std::string CBaseChainParams::TESTNET = "lbrycrdtest";
const std::string CBaseChainParams::REGTEST = "lbrycrdreg";
void SetupChainParamsBaseOptions()
{
@@ -33,11 +33,11 @@ const CBaseChainParams& BaseParams()
std::unique_ptr<CBaseChainParams> CreateBaseChainParams(const std::string& chain)
{
if (chain == CBaseChainParams::MAIN)
return MakeUnique<CBaseChainParams>("", 8332);
return MakeUnique<CBaseChainParams>("", 9245);
else if (chain == CBaseChainParams::TESTNET)
return MakeUnique<CBaseChainParams>("testnet3", 18332);
return MakeUnique<CBaseChainParams>("testnet3", 19245);
else if (chain == CBaseChainParams::REGTEST)
return MakeUnique<CBaseChainParams>("regtest", 18443);
return MakeUnique<CBaseChainParams>("regtest", 29245);
else
throw std::runtime_error(strprintf("%s: Unknown chain %s.", __func__, chain));
}

View file

@@ -44,6 +44,12 @@ std::unique_ptr<CBaseChainParams> CreateBaseChainParams(const std::string& chain
*/
void SetupChainParamsBaseOptions();
/**
* Append the help messages for the chainparams options to the
* parameter string.
*/
void AppendParamsHelpMessages(std::string& strUsage, bool debugHelp=true);
/**
* Return the currently selected parameters. This won't change after app
* startup, except for unit tests.
@@ -53,4 +59,16 @@ const CBaseChainParams& BaseParams();
/** Sets the params returned by Params() to those for the given network. */
void SelectBaseParams(const std::string& chain);
/**
* Looks for -regtest, -testnet and returns the appropriate BIP70 chain name.
* @return CBaseChainParams::MAX_NETWORK_TYPES if an invalid combination is given. CBaseChainParams::MAIN by default.
*/
std::string ChainNameFromCommandLine();
/**
* Return true if SelectBaseParamsFromCommandLine() has been called to select
* a network.
*/
bool AreBaseParamsConfigured();
#endif // BITCOIN_CHAINPARAMSBASE_H

File diff suppressed because it is too large

src/claimscriptop.cpp (new file, 243 lines)
View file

@@ -0,0 +1,243 @@
// Copyright (c) 2018 The LBRY developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <coins.h>
#include <claimscriptop.h>
#include <nameclaim.h>
CClaimScriptAddOp::CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int nHeight)
: point(point), nValue(nValue), nHeight(nHeight)
{
}
bool CClaimScriptAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{
return addClaim(trieCache, name, ClaimIdHash(point.hash, point.n));
}
bool CClaimScriptAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
return addClaim(trieCache, name, claimId);
}
bool CClaimScriptAddOp::addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
return trieCache.addClaim(name, point, claimId, nValue, nHeight);
}
bool CClaimScriptAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
return trieCache.addSupport(name, point, nValue, claimId, nHeight);
}
CClaimScriptUndoAddOp::CClaimScriptUndoAddOp(const COutPoint& point, int nHeight) : point(point), nHeight(nHeight)
{
}
bool CClaimScriptUndoAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{
auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return undoAddClaim(trieCache, name, claimId);
}
bool CClaimScriptUndoAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return undoAddClaim(trieCache, name, claimId);
}
bool CClaimScriptUndoAddOp::undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
bool res = trieCache.undoAddClaim(name, point, nHeight);
if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing claim fails\n", __func__);
return res;
}
bool CClaimScriptUndoAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
if (LogAcceptCategory(BCLog::CLAIMS)) {
LogPrintf("--- [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name,
claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf(
"%s: (txid: %s, nOut: %d) Removing support for %s, claimId: %s, from the claim trie due to block disconnect\n",
__func__, point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.undoAddSupport(name, point, nHeight);
if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__);
return res;
}
CClaimScriptSpendOp::CClaimScriptSpendOp(const COutPoint& point, int nHeight, int& nValidHeight)
: point(point), nHeight(nHeight), nValidHeight(nValidHeight)
{
}
bool CClaimScriptSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{
auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return spendClaim(trieCache, name, claimId);
}
bool CClaimScriptSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return spendClaim(trieCache, name, claimId);
}
bool CClaimScriptSpendOp::spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
bool res = trieCache.spendClaim(name, point, nHeight, nValidHeight);
if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing fails\n", __func__);
return res;
}
bool CClaimScriptSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
if (LogAcceptCategory(BCLog::CLAIMS)) {
LogPrintf("+++ [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name,
claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf("%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie\n", __func__,
point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.spendSupport(name, point, nHeight, nValidHeight);
if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__);
return res;
}
CClaimScriptUndoSpendOp::CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight)
: point(point), nValue(nValue), nHeight(nHeight), nValidHeight(nValidHeight)
{
}
bool CClaimScriptUndoSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{
return undoSpendClaim(trieCache, name, ClaimIdHash(point.hash, point.n));
}
bool CClaimScriptUndoSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
return undoSpendClaim(trieCache, name, claimId);
}
bool CClaimScriptUndoSpendOp::undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
return trieCache.undoSpendClaim(name, point, claimId, nValue, nHeight, nValidHeight);
}
bool CClaimScriptUndoSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
return trieCache.undoSpendSupport(name, point, claimId, nValue, nHeight, nValidHeight);
}
static std::string vchToString(const std::vector<unsigned char>& name)
{
return std::string(name.begin(), name.end());
}
bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CScript& scriptPubKey)
{
int op;
std::vector<std::vector<unsigned char> > vvchParams;
if (!DecodeClaimScript(scriptPubKey, op, vvchParams, trieCache.allowSupportMetadata()))
return false;
switch (op) {
case OP_CLAIM_NAME:
return claimOp.claimName(trieCache, vchToString(vvchParams[0]));
case OP_SUPPORT_CLAIM:
return claimOp.supportClaim(trieCache, vchToString(vvchParams[0]), uint160(vvchParams[1]));
case OP_UPDATE_CLAIM:
return claimOp.updateClaim(trieCache, vchToString(vvchParams[0]), uint160(vvchParams[1]));
}
throw std::runtime_error("Unimplemented OP handler.");
}
void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks)
{
class CSpendClaimHistory : public CClaimScriptSpendOp
{
public:
using CClaimScriptSpendOp::CClaimScriptSpendOp;
bool spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
{
if (CClaimScriptSpendOp::spendClaim(trieCache, name, claimId)) {
callback(name, claimId);
return true;
}
return false;
}
std::function<void(const std::string& name, const uint160& claimId)> callback;
};
spentClaimsType spentClaims;
for (std::size_t j = 0; j < tx.vin.size(); j++) {
const CTxIn& txin = tx.vin[j];
const Coin& coin = view.AccessCoin(txin.prevout);
CScript scriptPubKey;
int scriptHeight = nHeight;
if (coin.out.IsNull() && callbacks.findScriptKey) {
scriptPubKey = callbacks.findScriptKey(txin.prevout);
} else {
scriptHeight = coin.nHeight;
scriptPubKey = coin.out.scriptPubKey;
}
if (scriptPubKey.empty())
continue;
int nValidAtHeight;
CSpendClaimHistory spendClaim(COutPoint(txin.prevout.hash, txin.prevout.n), scriptHeight, nValidAtHeight);
spendClaim.callback = [&spentClaims](const std::string& name, const uint160& claimId) {
spentClaims.emplace_back(name, claimId);
};
if (ProcessClaim(spendClaim, trieCache, scriptPubKey) && callbacks.claimUndoHeights)
callbacks.claimUndoHeights(j, nValidAtHeight);
}
class CAddSpendClaim : public CClaimScriptAddOp
{
public:
using CClaimScriptAddOp::CClaimScriptAddOp;
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
{
if (callback(name, claimId))
return CClaimScriptAddOp::updateClaim(trieCache, name, claimId);
return false;
}
std::function<bool(const std::string& name, const uint160& claimId)> callback;
};
for (std::size_t j = 0; j < tx.vout.size(); j++) {
const CTxOut& txout = tx.vout[j];
if (txout.scriptPubKey.empty())
continue;
CAddSpendClaim addClaim(COutPoint(tx.GetHash(), j), txout.nValue, nHeight);
addClaim.callback = [&trieCache, &spentClaims](const std::string& name, const uint160& claimId) -> bool {
for (auto itSpent = spentClaims.begin(); itSpent != spentClaims.end(); ++itSpent) {
if (itSpent->second == claimId && trieCache.normalizeClaimName(name) == trieCache.normalizeClaimName(itSpent->first)) {
spentClaims.erase(itSpent);
return true;
}
}
return false;
};
ProcessClaim(addClaim, trieCache, txout.scriptPubKey);
}
}
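As a rough aside, the self-contained sketch below mirrors the dispatch shape of ProcessClaim above, where the decoded opcode routes to the matching virtual handler and subclasses such as CClaimScriptAddOp override those handlers. Toy types stand in for CClaimTrieCache and DecodeClaimScript, so this is an illustration of the pattern rather than lbrycrd code.
// Standalone mimic of the ProcessClaim dispatch above; ToyTrieCache and the
// pre-decoded ClaimOp replace the real CClaimTrieCache and DecodeClaimScript.
#include <iostream>
#include <string>

enum class ClaimOp { ClaimName, UpdateClaim, SupportClaim };

struct ToyTrieCache {};

struct ToyClaimScriptOp {                      // mirrors CClaimScriptOp's interface
    virtual ~ToyClaimScriptOp() = default;
    virtual bool claimName(ToyTrieCache&, const std::string& name) = 0;
    virtual bool updateClaim(ToyTrieCache&, const std::string& name, const std::string& claimId) = 0;
    virtual bool supportClaim(ToyTrieCache&, const std::string& name, const std::string& claimId) = 0;
};

// Route a decoded operation to the matching handler, as ProcessClaim does once
// DecodeClaimScript has pulled the name and claimId out of the script.
bool ProcessToyClaim(ToyClaimScriptOp& op, ToyTrieCache& cache, ClaimOp decoded,
                     const std::string& name, const std::string& claimId)
{
    switch (decoded) {
    case ClaimOp::ClaimName:    return op.claimName(cache, name);
    case ClaimOp::UpdateClaim:  return op.updateClaim(cache, name, claimId);
    case ClaimOp::SupportClaim: return op.supportClaim(cache, name, claimId);
    }
    return false;
}

struct LoggingOp : ToyClaimScriptOp {          // plays the role of CClaimScriptAddOp and friends
    bool claimName(ToyTrieCache&, const std::string& name) override {
        std::cout << "claim " << name << "\n"; return true;
    }
    bool updateClaim(ToyTrieCache&, const std::string& name, const std::string& id) override {
        std::cout << "update " << name << " to " << id << "\n"; return true;
    }
    bool supportClaim(ToyTrieCache&, const std::string& name, const std::string& id) override {
        std::cout << "support " << name << " for " << id << "\n"; return true;
    }
};

int main() {
    ToyTrieCache cache;
    LoggingOp op;
    ProcessToyClaim(op, cache, ClaimOp::UpdateClaim, "example-name", "deadbeef");
    return 0;
}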

src/claimscriptop.h (new file, 245 lines)
View file

@@ -0,0 +1,245 @@
// Copyright (c) 2018 The LBRY developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef CLAIMSCRIPTOP_H
#define CLAIMSCRIPTOP_H
#include "amount.h"
#include "claimtrie.h"
#include "hash.h"
#include "primitives/transaction.h"
#include "script/script.h"
#include "uint256.h"
#include "util.h"
#include <string>
#include <vector>
/**
* Claim script operation base class
*/
class CClaimScriptOp
{
public:
virtual ~CClaimScriptOp() {}
/**
* Pure virtual, OP_CLAIM_NAME handler
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
*/
virtual bool claimName(CClaimTrieCache& trieCache, const std::string& name) = 0;
/**
* Pure virtual, OP_UPDATE_CLAIM handler
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) = 0;
/**
* Pure virtual, OP_SUPPORT_CLAIM handler
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) = 0;
};
/**
* Class to add claim in trie
*/
class CClaimScriptAddOp : public CClaimScriptOp
{
public:
/**
* Constructor
* @param[in] point pair of transaction hash and its index
* @param[in] nValue value of the claim
* @param[in] nHeight entry height of the claim
*/
CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int nHeight);
/**
* Implementation of OP_CLAIM_NAME handler
* @see CClaimScriptOp::claimName
*/
bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
/**
* Implementation of OP_UPDATE_CLAIM handler
* @see CClaimScriptOp::updateClaim
*/
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
/**
* Implementation of OP_SUPPORT_CLAIM handler
* @see CClaimScriptOp::supportClaim
*/
bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
protected:
/**
* Reimplement to handle OP_CLAIM_NAME and OP_UPDATE_CLAIM at once
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId);
const COutPoint point;
const CAmount nValue;
const int nHeight;
};
/**
* Class to undo added claim in trie
*/
class CClaimScriptUndoAddOp : public CClaimScriptOp
{
public:
/**
* Constructor
* @param[in] point pair of transaction hash and its index
* @param[in] nHeight entry height of the claim
*/
CClaimScriptUndoAddOp(const COutPoint& point, int nHeight);
/**
* Implementation of OP_CLAIM_NAME handler
* @see CClaimScriptOp::claimName
*/
bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
/**
* Implementation of OP_UPDATE_CLAIM handler
* @see CClaimScriptOp::updateClaim
*/
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
/**
* Implementation of OP_SUPPORT_CLAIM handler
* @see CClaimScriptOp::supportClaim
*/
bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
protected:
/**
* Reimplement to handle OP_CLAIM_NAME and OP_UPDATE_CLAIM at once
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId);
const COutPoint point;
const int nHeight;
};
/**
* Class to spend claim from trie
*/
class CClaimScriptSpendOp : public CClaimScriptOp
{
public:
/**
* Constructor
* @param[in] point pair of transaction hash and its index
* @param[in] nHeight entry height of the claim
* @param[out] nValidHeight valid height of the claim
*/
CClaimScriptSpendOp(const COutPoint& point, int nHeight, int& nValidHeight);
/**
* Implementation of OP_CLAIM_NAME handler
* @see CClaimScriptOp::claimName
*/
bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
/**
* Implementation of OP_UPDATE_CLAIM handler
* @see CClaimScriptOp::updateClaim
*/
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
/**
* Implementation of OP_SUPPORT_CLAIM handler
* @see CClaimScriptOp::supportClaim
*/
bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
protected:
/**
* Reimplement to handle OP_CLAIM_NAME and OP_UPDATE_CLAIM at once
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId);
const COutPoint point;
const int nHeight;
int& nValidHeight;
};
/**
* Class to undo spent claim from trie
*/
class CClaimScriptUndoSpendOp : public CClaimScriptOp
{
public:
/**
* Constructor
* @param[in] point pair of transaction hash and its index
* @param[in] nValue value of the claim
* @param[in] nHeight entry height of the claim
* @param[in] nValidHeight valid height of the claim
*/
CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight);
/**
* Implementation of OP_CLAIM_NAME handler
* @see CClaimScriptOp::claimName
*/
bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
/**
* Implementation of OP_UPDATE_CLAIM handler
* @see CClaimScriptOp::updateClaim
*/
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
/**
* Implementation of OP_SUPPORT_CLAIM handler
* @see CClaimScriptOp::supportClaim
*/
bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
protected:
/**
* Reimplement to handle OP_CLAIM_NAME and OP_UPDATE_CLAIM at once
* @param[in] trieCache trie to operate on
* @param[in] name name of the claim
* @param[in] claimId id of the claim
*/
virtual bool undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId);
const COutPoint point;
const CAmount nValue;
const int nHeight;
const int nValidHeight;
};
/**
* Function to process operation on claim
* @param[in] claimOp operation to be performed
* @param[in] trieCache trie to operate on
* @param[in] scriptPubKey claim script to be decoded
*/
bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CScript& scriptPubKey);
typedef std::pair<std::string, uint160> spentClaimType;
typedef std::vector<spentClaimType> spentClaimsType;
struct CUpdateCacheCallbacks
{
std::function<CScript(const COutPoint& point)> findScriptKey;
std::function<void(int, int)> claimUndoHeights;
};
/**
* Process the claim script operations carried by a transaction against the trie,
* spending the claims consumed by its inputs and adding or updating the claims
* created by its outputs.
* @param[in] tx transaction inputs/outputs
* @param[in] trieCache trie to operate on
* @param[in] view coins cache
* @param[in] nHeight height of the block containing the transaction
* @param[in] callbacks optional callbacks
*/
void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks = {});
#endif // CLAIMSCRIPTOP_H
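A hedged usage sketch of the CUpdateCacheCallbacks hooks declared above. It assumes the lbrycrd headers are on the include path and that tx, trieCache, view and nHeight come from the caller's block-connect path; the function name and lambda bodies are illustrative placeholders, not code from this changeset.
#include <claimscriptop.h>
#include <coins.h>

// Illustrative only: wire the optional hooks before handing a transaction to
// UpdateCache. The lambda bodies are placeholders for the caller's own logic.
static void ApplyClaimOpsSketch(const CTransaction& tx, CClaimTrieCache& trieCache,
                                const CCoinsViewCache& view, int nHeight)
{
    CUpdateCacheCallbacks callbacks;
    callbacks.findScriptKey = [](const COutPoint& /* prevout */) {
        return CScript(); // fallback script lookup when the coin is absent from the view
    };
    callbacks.claimUndoHeights = [](int /* inputIndex */, int /* nValidAtHeight */) {
        // record the original activation height so a block disconnect can restore it
    };
    UpdateCache(tx, trieCache, view, nHeight, callbacks);
}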

src/claimtrie.cpp (new file, 1438 lines)

File diff suppressed because it is too large

src/claimtrie.h (new file, 756 lines)
View file

@@ -0,0 +1,756 @@
#ifndef BITCOIN_CLAIMTRIE_H
#define BITCOIN_CLAIMTRIE_H
#include <amount.h>
#include <chain.h>
#include <chainparams.h>
#include <dbwrapper.h>
#include <prefixtrie.h>
#include <primitives/transaction.h>
#include <serialize.h>
#include <uint256.h>
#include <util.h>
#include <map>
#include <string>
#include <vector>
#include <unordered_map>
#include <unordered_set>
// leveldb keys
#define TRIE_NODE 'n'
#define TRIE_NODE_CHILDREN 'b'
#define CLAIM_BY_ID 'i'
#define CLAIM_QUEUE_ROW 'r'
#define CLAIM_QUEUE_NAME_ROW 'm'
#define CLAIM_EXP_QUEUE_ROW 'e'
#define SUPPORT 's'
#define SUPPORT_QUEUE_ROW 'u'
#define SUPPORT_QUEUE_NAME_ROW 'p'
#define SUPPORT_EXP_QUEUE_ROW 'x'
uint256 getValueHash(const COutPoint& outPoint, int nHeightOfLastTakeover);
struct CClaimValue
{
COutPoint outPoint;
uint160 claimId;
CAmount nAmount = 0;
CAmount nEffectiveAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CClaimValue() = default;
CClaimValue(const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight, int nValidAtHeight)
: outPoint(outPoint), claimId(claimId), nAmount(nAmount), nEffectiveAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
CClaimValue(CClaimValue&&) = default;
CClaimValue(const CClaimValue&) = default;
CClaimValue& operator=(CClaimValue&&) = default;
CClaimValue& operator=(const CClaimValue&) = default;
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(claimId);
READWRITE(nAmount);
READWRITE(nHeight);
READWRITE(nValidAtHeight);
}
bool operator<(const CClaimValue& other) const
{
if (nEffectiveAmount < other.nEffectiveAmount)
return true;
if (nEffectiveAmount != other.nEffectiveAmount)
return false;
if (nHeight > other.nHeight)
return true;
if (nHeight != other.nHeight)
return false;
return outPoint != other.outPoint && !(outPoint < other.outPoint);
}
bool operator==(const CClaimValue& other) const
{
return outPoint == other.outPoint && claimId == other.claimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool operator!=(const CClaimValue& other) const
{
return !(*this == other);
}
};
struct CSupportValue
{
COutPoint outPoint;
uint160 supportedClaimId;
CAmount nAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CSupportValue() = default;
CSupportValue(const COutPoint& outPoint, const uint160& supportedClaimId, CAmount nAmount, int nHeight, int nValidAtHeight)
: outPoint(outPoint), supportedClaimId(supportedClaimId), nAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
CSupportValue(CSupportValue&&) = default;
CSupportValue(const CSupportValue&) = default;
CSupportValue& operator=(CSupportValue&&) = default;
CSupportValue& operator=(const CSupportValue&) = default;
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(supportedClaimId);
READWRITE(nAmount);
READWRITE(nHeight);
READWRITE(nValidAtHeight);
}
bool operator==(const CSupportValue& other) const
{
return outPoint == other.outPoint && supportedClaimId == other.supportedClaimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool operator!=(const CSupportValue& other) const
{
return !(*this == other);
}
};
typedef std::vector<CClaimValue> claimEntryType;
typedef std::vector<CSupportValue> supportEntryType;
struct CClaimTrieData
{
uint256 hash;
claimEntryType claims;
int nHeightOfLastTakeover = 0;
CClaimTrieData() = default;
CClaimTrieData(CClaimTrieData&&) = default;
CClaimTrieData(const CClaimTrieData&) = default;
CClaimTrieData& operator=(CClaimTrieData&&) = default;
CClaimTrieData& operator=(const CClaimTrieData& d) = default;
bool insertClaim(const CClaimValue& claim);
bool removeClaim(const COutPoint& outPoint, CClaimValue& claim);
bool getBestClaim(CClaimValue& claim) const;
bool haveClaim(const COutPoint& outPoint) const;
void reorderClaims(const supportEntryType& support);
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(hash);
if (ser_action.ForRead()) {
if (s.eof()) {
claims.clear();
nHeightOfLastTakeover = 0;
return;
}
}
else if (claims.empty())
return;
READWRITE(claims);
READWRITE(nHeightOfLastTakeover);
}
bool operator==(const CClaimTrieData& other) const
{
return hash == other.hash && nHeightOfLastTakeover == other.nHeightOfLastTakeover && claims == other.claims;
}
bool operator!=(const CClaimTrieData& other) const
{
return !(*this == other);
}
bool empty() const
{
return claims.empty();
}
};
struct COutPointHeightType
{
COutPoint outPoint;
int nHeight = 0;
COutPointHeightType() = default;
COutPointHeightType(const COutPoint& outPoint, int nHeight)
: outPoint(outPoint), nHeight(nHeight)
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(nHeight);
}
};
struct CNameOutPointHeightType
{
std::string name;
COutPoint outPoint;
int nHeight = 0;
CNameOutPointHeightType() = default;
CNameOutPointHeightType(std::string name, const COutPoint& outPoint, int nHeight)
: name(std::move(name)), outPoint(outPoint), nHeight(nHeight)
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(outPoint);
READWRITE(nHeight);
}
};
struct CNameOutPointType
{
std::string name;
COutPoint outPoint;
CNameOutPointType() = default;
CNameOutPointType(std::string name, const COutPoint& outPoint)
: name(std::move(name)), outPoint(outPoint)
{
}
bool operator==(const CNameOutPointType& other) const
{
return name == other.name && outPoint == other.outPoint;
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(outPoint);
}
};
struct CClaimIndexElement
{
CClaimIndexElement() = default;
CClaimIndexElement(std::string name, CClaimValue claim)
: name(std::move(name)), claim(std::move(claim))
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(claim);
}
std::string name;
CClaimValue claim;
};
struct CClaimNsupports
{
CClaimNsupports() = default;
CClaimNsupports(CClaimNsupports&&) = default;
CClaimNsupports(const CClaimNsupports&) = default;
CClaimNsupports& operator=(CClaimNsupports&&) = default;
CClaimNsupports& operator=(const CClaimNsupports&) = default;
CClaimNsupports(const CClaimValue& claim, CAmount effectiveAmount, const std::vector<CSupportValue>& supports = {})
: claim(claim), effectiveAmount(effectiveAmount), supports(supports)
{
}
bool IsNull() const
{
return claim.claimId.IsNull();
}
CClaimValue claim;
CAmount effectiveAmount = 0;
std::vector<CSupportValue> supports;
};
static const CClaimNsupports invalid;
struct CClaimSupportToName
{
CClaimSupportToName(const std::string& name, int nLastTakeoverHeight, std::vector<CClaimNsupports> claimsNsupports, std::vector<CSupportValue> unmatchedSupports)
: name(name), nLastTakeoverHeight(nLastTakeoverHeight), claimsNsupports(std::move(claimsNsupports)), unmatchedSupports(std::move(unmatchedSupports))
{
}
const CClaimNsupports& find(const uint160& claimId) const
{
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&claimId](const CClaimNsupports& value) {
return claimId == value.claim.claimId;
});
return it != claimsNsupports.end() ? *it : invalid;
}
const CClaimNsupports& find(const std::string& partialId) const
{
std::string lowered(partialId);
for (auto& c: lowered)
c = std::tolower(c);
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&lowered](const CClaimNsupports& value) {
return value.claim.claimId.GetHex().find(lowered) == 0;
});
return it != claimsNsupports.end() ? *it : invalid;
}
const std::string name;
const int nLastTakeoverHeight;
const std::vector<CClaimNsupports> claimsNsupports;
const std::vector<CSupportValue> unmatchedSupports;
};
class CClaimTrie : public CPrefixTrie<std::string, CClaimTrieData>
{
public:
CClaimTrie() = default;
virtual ~CClaimTrie() = default;
CClaimTrie(CClaimTrie&&) = delete;
CClaimTrie(const CClaimTrie&) = delete;
CClaimTrie(bool fMemory, bool fWipe, int proportionalDelayFactor = 32, std::size_t cacheMB=200);
CClaimTrie& operator=(CClaimTrie&&) = delete;
CClaimTrie& operator=(const CClaimTrie&) = delete;
bool SyncToDisk();
friend class CClaimTrieCacheBase;
friend struct ClaimTrieChainFixture;
friend class CClaimTrieCacheExpirationFork;
friend class CClaimTrieCacheNormalizationFork;
friend bool getClaimById(const uint160&, std::string&, CClaimValue*);
friend bool getClaimById(const std::string&, std::string&, CClaimValue*);
std::size_t getTotalNamesInTrie() const;
std::size_t getTotalClaimsInTrie() const;
CAmount getTotalValueOfClaimsInTrie(bool fControllingOnly) const;
protected:
int nNextHeight = 0;
int nProportionalDelayFactor = 0;
std::unique_ptr<CDBWrapper> db;
};
struct CClaimTrieProofNode
{
CClaimTrieProofNode(std::vector<std::pair<unsigned char, uint256>> children, bool hasValue, const uint256& valHash)
: children(std::move(children)), hasValue(hasValue), valHash(valHash)
{
}
CClaimTrieProofNode(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode(const CClaimTrieProofNode&) = default;
CClaimTrieProofNode& operator=(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode& operator=(const CClaimTrieProofNode&) = default;
std::vector<std::pair<unsigned char, uint256>> children;
bool hasValue;
uint256 valHash;
};
struct CClaimTrieProof
{
CClaimTrieProof() = default;
CClaimTrieProof(CClaimTrieProof&&) = default;
CClaimTrieProof(const CClaimTrieProof&) = default;
CClaimTrieProof& operator=(CClaimTrieProof&&) = default;
CClaimTrieProof& operator=(const CClaimTrieProof&) = default;
std::vector<std::pair<bool, uint256>> pairs;
std::vector<CClaimTrieProofNode> nodes;
int nHeightOfLastTakeover = 0;
bool hasValue = false;
COutPoint outPoint;
};
template <typename T>
class COptional
{
bool own;
T* value;
public:
COptional(T* value = nullptr) : own(false), value(value) {}
COptional(COptional&& o)
{
own = o.own;
value = o.value;
o.own = false;
o.value = nullptr;
}
COptional(T&& o) : own(true)
{
value = new T(std::move(o));
}
~COptional()
{
if (own)
delete value;
}
COptional& operator=(COptional&&) = delete;
bool unique() const
{
return own;
}
operator bool() const
{
return value;
}
operator T*() const
{
return value;
}
T* operator->() const
{
return value;
}
operator T&() const
{
return *value;
}
T& operator*() const
{
return *value;
}
};
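As an aside, a brief sketch of how the COptional wrapper above behaves, assuming this claimtrie.h is on the include path; the variable names are illustrative and the snippet is not part of the header.
#include <claimtrie.h>
#include <cassert>
#include <string>

// Illustrative only: COptional either borrows an existing object (non-owning)
// or takes ownership of a moved-in temporary, which is how the getQueueCacheRow
// accessors declared further below can hand back either cached or loaded rows.
static void COptionalSketch()
{
    std::string stored = "row already held elsewhere";
    COptional<std::string> borrowed(&stored);          // wraps a raw pointer
    assert(borrowed && !borrowed.unique());            // valid, but not owned

    COptional<std::string> owned(std::string("freshly built row"));
    assert(owned && owned.unique());                   // owns (and later deletes) its copy
    assert(*owned == "freshly built row");
}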
template <typename T>
using queueEntryType = std::pair<std::string, T>;
typedef std::vector<queueEntryType<CClaimValue>> claimQueueRowType;
typedef std::map<int, claimQueueRowType> claimQueueType;
typedef std::vector<queueEntryType<CSupportValue>> supportQueueRowType;
typedef std::map<int, supportQueueRowType> supportQueueType;
typedef std::vector<COutPointHeightType> queueNameRowType;
typedef std::map<std::string, queueNameRowType> queueNameType;
typedef std::vector<CNameOutPointHeightType> insertUndoType;
typedef std::vector<CNameOutPointType> expirationQueueRowType;
typedef std::map<int, expirationQueueRowType> expirationQueueType;
typedef std::set<CClaimValue> claimIndexClaimListType;
typedef std::vector<CClaimIndexElement> claimIndexElementListType;
class CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheBase(CClaimTrie* base);
virtual ~CClaimTrieCacheBase() = default;
uint256 getMerkleHash();
bool flush();
bool empty() const;
bool checkConsistency() const;
bool ReadFromDisk(const CBlockIndex* tip);
bool haveClaim(const std::string& name, const COutPoint& outPoint) const;
bool haveClaimInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool haveSupport(const std::string& name, const COutPoint& outPoint) const;
bool haveSupportInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool addClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight);
bool undoAddClaim(const std::string& name, const COutPoint& outPoint, int nHeight);
bool spendClaim(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight);
bool undoSpendClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight, int nValidAtHeight);
bool addSupport(const std::string& name, const COutPoint& outPoint, CAmount nAmount, const uint160& supportedClaimId, int nHeight);
bool undoAddSupport(const std::string& name, const COutPoint& outPoint, int nHeight);
bool spendSupport(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight);
bool undoSpendSupport(const std::string& name, const COutPoint& outPoint, const uint160& supportedClaimId, CAmount nAmount, int nHeight, int nValidAtHeight);
virtual bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
virtual bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo);
virtual bool getProofForName(const std::string& name, CClaimTrieProof& proof);
virtual bool getInfoForName(const std::string& name, CClaimValue& claim) const;
virtual int expirationTime() const;
virtual bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
virtual CClaimSupportToName getClaimsForName(const std::string& name) const;
CClaimTrie::const_iterator find(const std::string& name) const;
void iterate(std::function<void(const std::string&, const CClaimTrieData&)> callback) const;
void dumpToLog(CClaimTrie::const_iterator it, bool diffFromBase = true) const;
virtual std::string adjustNameForValidHeight(const std::string& name, int validHeight) const;
protected:
CClaimTrie* base;
CClaimTrie nodesToAddOrUpdate; // nodes pulled in from base (and possibly modified thereafter), written to base on flush
std::unordered_set<std::string> nodesAlreadyCached; // set of nodes already pulled into cache from base
std::unordered_set<std::string> namesToCheckForTakeover; // takeover numbers are updated on increment
virtual uint256 recursiveComputeMerkleHash(CClaimTrie::iterator& it);
virtual bool recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const;
virtual bool insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover);
virtual bool removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover);
virtual bool insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover);
virtual bool removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover);
supportEntryType getSupportsForName(const std::string& name) const;
int getDelayForName(const std::string& name) const;
virtual int getDelayForName(const std::string& name, const uint160& claimId) const;
CClaimTrie::iterator cacheData(const std::string& name, bool create = true);
bool getLastTakeoverForName(const std::string& name, uint160& claimId, int& takeoverHeight) const;
int getNumBlocksOfContinuousOwnership(const std::string& name) const;
void reactivateClaim(const expirationQueueRowType& row, int height, bool increment);
void reactivateSupport(const expirationQueueRowType& row, int height, bool increment);
expirationQueueType expirationQueueCache;
expirationQueueType supportExpirationQueueCache;
int nNextHeight; // Height of the block that is being worked on, which is
// one greater than the height of the chain's tip
private:
uint256 hashBlock;
std::unordered_map<std::string, std::pair<uint160, int>> takeoverCache;
claimQueueType claimQueueCache; // claims not active yet: to be written to disk on flush
queueNameType claimQueueNameCache;
supportQueueType supportQueueCache; // supports not active yet: to be written to disk on flush
queueNameType supportQueueNameCache;
claimIndexElementListType claimsToAddToByIdIndex; // written to index on flush
claimIndexClaimListType claimsToDeleteFromByIdIndex;
std::unordered_map<std::string, supportEntryType> supportCache; // to be added/updated to base (and disk) on flush
std::unordered_set<std::string> nodesToDelete; // to be removed from base (and disk) on flush
std::unordered_map<std::string, bool> takeoverWorkaround;
std::unordered_set<std::string> removalWorkaround;
bool shouldUseTakeoverWorkaround(const std::string& key) const;
void addTakeoverWorkaroundPotential(const std::string& key);
void confirmTakeoverWorkaroundNeeded(const std::string& key);
bool clear();
void markAsDirty(const std::string& name, bool fCheckTakeover);
bool removeSupport(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover);
bool removeClaim(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover);
template <typename T>
void insertRowsFromQueue(std::vector<T>& result, const std::string& name) const;
template <typename T>
std::vector<queueEntryType<T>>* getQueueCacheRow(int nHeight, bool createIfNotExists);
template <typename T>
COptional<const std::vector<queueEntryType<T>>> getQueueCacheRow(int nHeight) const;
template <typename T>
queueNameRowType* getQueueCacheNameRow(const std::string& name, bool createIfNotExists);
template <typename T>
COptional<const queueNameRowType> getQueueCacheNameRow(const std::string& name) const;
template <typename T>
expirationQueueRowType* getExpirationQueueCacheRow(int nHeight, bool createIfNotExists);
template <typename T>
bool haveInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
template <typename T>
T add(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight);
template <typename T>
bool remove(T& value, const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover = false);
template <typename T>
bool addToQueue(const std::string& name, const T& value);
template <typename T>
bool removeFromQueue(const std::string& name, const COutPoint& outPoint, T& value);
template <typename T>
bool addToCache(const std::string& name, const T& value, bool fCheckTakeover = false);
template <typename T>
bool removeFromCache(const std::string& name, const COutPoint& outPoint, T& value, bool fCheckTakeover = false);
template <typename T>
bool undoSpend(const std::string& name, const T& value, int nValidAtHeight);
template <typename T>
void undoIncrement(insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo, std::set<T>* deleted = nullptr);
template <typename T>
void undoDecrement(insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo, std::vector<CClaimIndexElement>* added = nullptr, std::set<T>* deleted = nullptr);
template <typename T>
void undoIncrement(const std::string& name, insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo);
template <typename T>
void reactivate(const expirationQueueRowType& row, int height, bool increment);
// for unit test
friend struct ClaimTrieChainFixture;
friend class CClaimTrieCacheTest;
};
class CClaimTrieCacheExpirationFork : public CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheExpirationFork(CClaimTrie* base);
void setExpirationTime(int time);
int expirationTime() const override;
virtual void initializeIncrement();
bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo) override;
private:
int nExpirationTime;
bool forkForExpirationChange(bool increment);
};
class CClaimTrieCacheNormalizationFork : public CClaimTrieCacheExpirationFork
{
public:
explicit CClaimTrieCacheNormalizationFork(CClaimTrie* base)
: CClaimTrieCacheExpirationFork(base), overrideInsertNormalization(false), overrideRemoveNormalization(false)
{
}
bool shouldNormalize() const;
// lower-case and normalize any input string name
// see: https://unicode.org/reports/tr15/#Norm_Forms
std::string normalizeClaimName(const std::string& name, bool force = false) const; // public only for validating name field on update op
bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo) override;
bool getProofForName(const std::string& name, CClaimTrieProof& proof) override;
bool getInfoForName(const std::string& name, CClaimValue& claim) const override;
CClaimSupportToName getClaimsForName(const std::string& name) const override;
std::string adjustNameForValidHeight(const std::string& name, int validHeight) const override;
protected:
bool insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover) override;
bool removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover) override;
bool insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover) override;
bool removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover) override;
int getDelayForName(const std::string& name, const uint160& claimId) const override;
private:
bool overrideInsertNormalization;
bool overrideRemoveNormalization;
bool normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo,
claimQueueRowType& removeUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
};
class CClaimTrieCacheHashFork : public CClaimTrieCacheNormalizationFork
{
public:
explicit CClaimTrieCacheHashFork(CClaimTrie* base);
bool getProofForName(const std::string& name, CClaimTrieProof& proof) override;
bool getProofForName(const std::string& name, CClaimTrieProof& proof, const std::function<bool(const CClaimValue&)>& comp);
void initializeIncrement() override;
bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool allowSupportMetadata() const;
protected:
uint256 recursiveComputeMerkleHash(CClaimTrie::iterator& it) override;
bool recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const override;
private:
void copyAllBaseToCache();
};
typedef CClaimTrieCacheHashFork CClaimTrieCache;
#endif // BITCOIN_CLAIMTRIE_H

481
src/claimtrieforks.cpp Normal file
View file

@ -0,0 +1,481 @@
#include <consensus/merkle.h>
#include <chainparams.h>
#include <claimtrie.h>
#include <hash.h>
#include <boost/locale.hpp>
#include <boost/locale/conversion.hpp>
#include <boost/locale/localization_backend.hpp>
#include <boost/scope_exit.hpp>
#include <boost/scoped_ptr.hpp>
CClaimTrieCacheExpirationFork::CClaimTrieCacheExpirationFork(CClaimTrie* base)
: CClaimTrieCacheBase(base)
{
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
}
void CClaimTrieCacheExpirationFork::setExpirationTime(int time)
{
nExpirationTime = time;
}
int CClaimTrieCacheExpirationFork::expirationTime() const
{
return nExpirationTime;
}
bool CClaimTrieCacheExpirationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
if (CClaimTrieCacheBase::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo)) {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
return true;
}
return false;
}
bool CClaimTrieCacheExpirationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo)
{
if (CClaimTrieCacheBase::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo)) {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
return true;
}
return false;
}
void CClaimTrieCacheExpirationFork::initializeIncrement()
{
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nExtendedClaimExpirationForkHeight)
return;
forkForExpirationChange(true);
}
bool CClaimTrieCacheExpirationFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheBase::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nExtendedClaimExpirationForkHeight)
forkForExpirationChange(false);
return ret;
}
bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment)
{
/*
If increment is true, we have forked to extend the expiration time; items in the expiration queue
will have their expiration extended by "new expiration time - original expiration time".
If increment is false, we are decrementing a block to reverse the fork, so items in the expiration queue
will have that expiration extension removed.
*/
// look through the db for expiration queue rows, unless we have already pulled them into the dirty expiration queue cache
boost::scoped_ptr<CDBIterator> pcursor(base->db->NewIterator());
for (pcursor->SeekToFirst(); pcursor->Valid(); pcursor->Next()) {
std::pair<uint8_t, int> key;
if (!pcursor->GetKey(key))
continue;
int height = key.second;
if (key.first == CLAIM_EXP_QUEUE_ROW) {
expirationQueueRowType row;
if (pcursor->GetValue(row)) {
reactivateClaim(row, height, increment);
} else {
return error("%s(): error reading expiration queue rows from disk", __func__);
}
} else if (key.first == SUPPORT_EXP_QUEUE_ROW) {
expirationQueueRowType row;
if (pcursor->GetValue(row)) {
reactivateSupport(row, height, increment);
} else {
return error("%s(): error reading support expiration queue rows from disk", __func__);
}
}
}
return true;
}
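To make the extension concrete, here is a minimal, self-contained sketch of the height arithmetic that the reactivation amounts to; the expiration constants and the activation height below are assumed placeholders, not values taken from this diff.

#include <cstdint>
#include <iostream>

int main()
{
    // assumed stand-ins for nOriginalClaimExpirationTime / nExtendedClaimExpirationTime
    const int64_t originalExpiration = 262974;
    const int64_t extendedExpiration = 2102400;
    const int64_t activationHeight = 400000; // hypothetical height at which a claim became active

    const int64_t delta = extendedExpiration - originalExpiration;
    const int64_t before = activationHeight + originalExpiration; // queue row height pre-fork
    const int64_t after = before + delta; // on increment the row moves forward by delta; decrement subtracts it again

    std::cout << "expires at " << before << " before the fork, " << after << " after" << std::endl;
    return 0;
}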
bool CClaimTrieCacheNormalizationFork::shouldNormalize() const
{
return nNextHeight > Params().GetConsensus().nNormalizedNameForkHeight;
}
std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::string& name, bool force) const
{
if (!force && !shouldNormalize())
return name;
static std::locale utf8;
static bool initialized = false;
if (!initialized) {
static boost::locale::localization_backend_manager manager =
boost::locale::localization_backend_manager::global();
manager.select("icu");
static boost::locale::generator curLocale(manager);
utf8 = curLocale("en_US.UTF8");
initialized = true;
}
std::string normalized;
try {
// Check if it is a valid utf-8 string. If not, it will throw a
// boost::locale::conv::conversion_error exception which we catch later
normalized = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
if (normalized.empty())
return name;
// these methods supposedly only use the "UTF8" portion of the locale object:
normalized = boost::locale::normalize(normalized, boost::locale::norm_nfd, utf8);
normalized = boost::locale::fold_case(normalized, utf8);
} catch (const boost::locale::conv::conversion_error& e) {
return name;
} catch (const std::bad_cast& e) {
LogPrintf("%s() is invalid or dependencies are missing: %s\n", __func__, e.what());
throw;
} catch (const std::exception& e) { // TODO: change to use ... with current_exception() in c++11
LogPrintf("%s() had an unexpected exception: %s\n", __func__, e.what());
return name;
}
return normalized;
}
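For reference, a self-contained sketch of the same pipeline (UTF-8 validation, NFD decomposition, case folding) using Boost.Locale, assuming an ICU-enabled build; the sample string is arbitrary.

#include <boost/locale.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::locale::generator gen;
    std::locale utf8 = gen("en_US.UTF8");

    std::string name = "TeSt\xC3\x89"; // "TeSt" followed by E-acute, encoded as UTF-8
    // throws boost::locale::conv::conversion_error on invalid UTF-8, as in the method above
    std::string n = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
    n = boost::locale::normalize(n, boost::locale::norm_nfd, utf8); // decompose
    n = boost::locale::fold_case(n, utf8);                          // case-fold
    std::cout << n << std::endl; // "teste" plus a combining acute accent
    return 0;
}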
bool CClaimTrieCacheNormalizationFork::insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::insertClaimIntoTrie(normalizeClaimName(name, overrideInsertNormalization), claim, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::removeClaimFromTrie(normalizeClaimName(name, overrideRemoveNormalization), outPoint, claim, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::insertSupportIntoMap(normalizeClaimName(name, overrideInsertNormalization), support, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::removeSupportFromMap(normalizeClaimName(name, overrideRemoveNormalization), outPoint, support, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo, claimQueueRowType& removeUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
if (nNextHeight != Params().GetConsensus().nNormalizedNameForkHeight)
return false;
// run the one-time upgrade of all names that need to change
// it modifies the (cache) trie as it goes, so we need to grab everything to be modified first
for (auto it = base->cbegin(); it != base->cend(); ++it) {
const std::string normalized = normalizeClaimName(it.key(), true);
if (normalized == it.key())
continue;
auto& name = it.key();
auto supports = getSupportsForName(name);
for (auto support : supports) {
// if it's already going to expire just skip it
if (support.nHeight + expirationTime() <= nNextHeight)
continue;
assert(removeSupportFromMap(name, support.outPoint, support, false));
expireSupportUndo.emplace_back(name, support);
assert(insertSupportIntoMap(normalized, support, false));
insertSupportUndo.emplace_back(name, support.outPoint, -1);
}
namesToCheckForTakeover.insert(normalized);
auto cached = cacheData(name, false);
if (!cached || cached->empty())
continue;
auto claimsCopy = cached->claims;
auto takeoverHeightCopy = cached->nHeightOfLastTakeover;
for (auto claim : claimsCopy) {
if (claim.nHeight + expirationTime() <= nNextHeight)
continue;
assert(removeClaimFromTrie(name, claim.outPoint, claim, false));
removeUndo.emplace_back(name, claim);
assert(insertClaimIntoTrie(normalized, claim, true));
insertUndo.emplace_back(name, claim.outPoint, -1);
}
takeoverHeightUndo.emplace_back(name, takeoverHeightCopy);
}
return true;
}
bool CClaimTrieCacheNormalizationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
overrideInsertNormalization = normalizeAllNamesInTrieIfNecessary(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
BOOST_SCOPE_EXIT(&overrideInsertNormalization) { overrideInsertNormalization = false; }
BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
}
bool CClaimTrieCacheNormalizationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo)
{
overrideRemoveNormalization = shouldNormalize();
BOOST_SCOPE_EXIT(&overrideRemoveNormalization) { overrideRemoveNormalization = false; }
BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo);
}
bool CClaimTrieCacheNormalizationFork::getProofForName(const std::string& name, CClaimTrieProof& proof)
{
return CClaimTrieCacheExpirationFork::getProofForName(normalizeClaimName(name), proof);
}
bool CClaimTrieCacheNormalizationFork::getInfoForName(const std::string& name, CClaimValue& claim) const
{
return CClaimTrieCacheExpirationFork::getInfoForName(normalizeClaimName(name), claim);
}
CClaimSupportToName CClaimTrieCacheNormalizationFork::getClaimsForName(const std::string& name) const
{
return CClaimTrieCacheExpirationFork::getClaimsForName(normalizeClaimName(name));
}
int CClaimTrieCacheNormalizationFork::getDelayForName(const std::string& name, const uint160& claimId) const
{
return CClaimTrieCacheExpirationFork::getDelayForName(normalizeClaimName(name), claimId);
}
std::string CClaimTrieCacheNormalizationFork::adjustNameForValidHeight(const std::string& name, int validHeight) const
{
return normalizeClaimName(name, validHeight > Params().GetConsensus().nNormalizedNameForkHeight);
}
CClaimTrieCacheHashFork::CClaimTrieCacheHashFork(CClaimTrie* base) : CClaimTrieCacheNormalizationFork(base)
{
}
static const uint256 leafHash = uint256S("0000000000000000000000000000000000000000000000000000000000000002");
static const uint256 emptyHash = uint256S("0000000000000000000000000000000000000000000000000000000000000003");
std::vector<uint256> getClaimHashes(const CClaimTrieData& data)
{
std::vector<uint256> hashes;
for (auto& claim : data.claims)
hashes.push_back(getValueHash(claim.outPoint, data.nHeightOfLastTakeover));
return hashes;
}
template <typename T>
using iCbType = std::function<uint256(T&)>;
template <typename TIterator>
uint256 recursiveBinaryTreeHash(TIterator& it, const iCbType<TIterator>& process)
{
std::vector<uint256> childHashes;
for (auto& child : it.children())
childHashes.emplace_back(process(child));
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
else if (!it.hasChildren())
return {};
auto left = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
auto right = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
return Hash(left.begin(), left.end(), right.begin(), right.end());
}
uint256 CClaimTrieCacheHashFork::recursiveComputeMerkleHash(CClaimTrie::iterator& it)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveComputeMerkleHash(it);
using iterator = CClaimTrie::iterator;
iCbType<iterator> process = [&process](iterator& it) -> uint256 {
if (it->hash.IsNull())
it->hash = recursiveBinaryTreeHash(it, process);
assert(!it->hash.IsNull());
return it->hash;
};
return process(it);
}
bool CClaimTrieCacheHashFork::recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveCheckConsistency(it, failed);
struct CRecursiveBreak {};
using iterator = CClaimTrie::const_iterator;
iCbType<iterator> process = [&failed, &process](iterator& it) -> uint256 {
if (it->hash.IsNull() || it->hash != recursiveBinaryTreeHash(it, process)) {
failed = it.key();
throw CRecursiveBreak();
}
return it->hash;
};
try {
process(it);
} catch (const CRecursiveBreak&) {
return false;
}
return true;
}
std::vector<uint256> ComputeMerklePath(const std::vector<uint256>& hashes, uint32_t idx)
{
uint32_t count = 0;
int matchlevel = -1;
bool matchh = false;
uint256 inner[32], h;
const uint32_t one = 1;
std::vector<uint256> res;
const auto iterateInner = [&](int& level) {
for (; !(count & (one << level)); level++) {
const auto& ihash = inner[level];
if (matchh) {
res.push_back(ihash);
} else if (matchlevel == level) {
res.push_back(h);
matchh = true;
}
h = Hash(ihash.begin(), ihash.end(), h.begin(), h.end());
}
};
while (count < hashes.size()) {
h = hashes[count];
matchh = count == idx;
count++;
int level = 0;
iterateInner(level);
// Store the resulting hash at inner position level.
inner[level] = h;
if (matchh)
matchlevel = level;
}
int level = 0;
while (!(count & (one << level)))
level++;
h = inner[level];
matchh = matchlevel == level;
while (count != (one << level)) {
// If we reach this point, h is an inner value that is not the top.
if (matchh)
res.push_back(h);
h = Hash(h.begin(), h.end(), h.begin(), h.end());
// Increment count to the value it would have if two entries at this level had existed.
count += (one << level);
level++;
iterateInner(level);
}
return res;
}
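A hypothetical check of the path produced above: fold the sibling hashes back up, using bit i of idx to pick which side the running hash goes on, and compare against ComputeMerkleRoot. The leaf values and the helper name are made up for illustration.

#include <cassert>
#include <vector>
#include <consensus/merkle.h>
#include <hash.h>
#include <uint256.h>

static void CheckMerklePath() // assumes ComputeMerklePath (defined above) is visible
{
    std::vector<uint256> hashes = {uint256S("01"), uint256S("02"), uint256S("03")}; // arbitrary leaves
    const uint32_t idx = 1;
    const auto path = ComputeMerklePath(hashes, idx);

    uint256 acc = hashes[idx];
    for (size_t i = 0; i < path.size(); ++i) {
        if ((idx >> i) & 1) // we are the right-hand child at this level
            acc = Hash(path[i].begin(), path[i].end(), acc.begin(), acc.end());
        else                // we are the left-hand child at this level
            acc = Hash(acc.begin(), acc.end(), path[i].begin(), path[i].end());
    }
    assert(acc == ComputeMerkleRoot(hashes)); // expected to reproduce the root
}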
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof)
{
return getProofForName(name, proof, nullptr);
}
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof, const std::function<bool(const CClaimValue&)>& comp)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::getProofForName(name, proof);
auto fillPairs = [&proof](const std::vector<uint256>& hashes, uint32_t idx) {
auto partials = ComputeMerklePath(hashes, idx);
for (int i = partials.size() - 1; i >= 0; --i)
proof.pairs.emplace_back((idx >> i) & 1, partials[i]);
};
// cache the parent nodes
cacheData(name, false);
getMerkleHash();
proof = CClaimTrieProof();
for (auto& it : static_cast<const CClaimTrie&>(nodesToAddOrUpdate).nodes(name)) {
std::vector<uint256> childHashes;
uint32_t nextCurrentIdx = 0;
for (auto& child : it.children()) {
if (name.find(child.key()) == 0)
nextCurrentIdx = uint32_t(childHashes.size());
childHashes.push_back(child->hash);
}
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
// I am on a node; I need a hash(children, claims)
// if I am the last node on the list, it will be hash(children, x)
// else it will be hash(x, claims)
if (it.key() == name) {
uint32_t nClaimIndex = 0;
auto& claims = it->claims;
auto itClaim = !comp ? claims.begin() : std::find_if(claims.begin(), claims.end(), comp);
if (itClaim != claims.end()) {
proof.hasValue = true;
proof.outPoint = itClaim->outPoint;
proof.nHeightOfLastTakeover = it->nHeightOfLastTakeover;
nClaimIndex = std::distance(claims.begin(), itClaim);
}
auto hash = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
proof.pairs.emplace_back(true, hash);
if (!claimHashes.empty())
fillPairs(claimHashes, nClaimIndex);
} else {
auto hash = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
proof.pairs.emplace_back(false, hash);
if (!childHashes.empty())
fillPairs(childHashes, nextCurrentIdx);
}
}
std::reverse(proof.pairs.begin(), proof.pairs.end());
return true;
}
void CClaimTrieCacheHashFork::copyAllBaseToCache()
{
for (auto it = base->cbegin(); it != base->cend(); ++it)
if (nodesAlreadyCached.insert(it.key()).second)
nodesToAddOrUpdate.insert(it.key(), it.data());
for (auto it = nodesToAddOrUpdate.begin(); it != nodesToAddOrUpdate.end(); ++it)
it->hash.SetNull();
}
void CClaimTrieCacheHashFork::initializeIncrement()
{
CClaimTrieCacheNormalizationFork::initializeIncrement();
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
return;
// if we are forking, we load the entire base trie into the cache trie
// we reset its hash computation so it can be recomputed completely
copyAllBaseToCache();
}
bool CClaimTrieCacheHashFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheNormalizationFork::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
copyAllBaseToCache();
return ret;
}
bool CClaimTrieCacheHashFork::allowSupportMetadata() const
{
return nNextHeight >= Params().GetConsensus().nAllClaimsInMerkleForkHeight;
}

View file

@ -12,7 +12,7 @@
* for both bitcoind and bitcoin-qt, to make it harder for attackers to
* target servers or GUI users specifically.
*/
const std::string CLIENT_NAME("Satoshi");
const std::string CLIENT_NAME("LBRY");
/**
* Client version number

View file

@ -39,8 +39,8 @@ public:
uint32_t nHeight : 31;
//! construct a Coin from a CTxOut and height/coinbase information.
Coin(CTxOut&& outIn, int nHeightIn, bool fCoinBaseIn) : out(std::move(outIn)), fCoinBase(fCoinBaseIn), nHeight(nHeightIn) {}
Coin(const CTxOut& outIn, int nHeightIn, bool fCoinBaseIn) : out(outIn), fCoinBase(fCoinBaseIn),nHeight(nHeightIn) {}
Coin(CTxOut&& outIn, int nHeightIn, bool fCoinBaseIn) : out(std::move(outIn)), fCoinBase(fCoinBaseIn), nHeight(nHeightIn) {}
Coin(const CTxOut& outIn, int nHeightIn, bool fCoinBaseIn) : out(outIn), fCoinBase(fCoinBaseIn), nHeight(nHeightIn) {}
void Clear() {
out.SetNull();

View file

@ -10,9 +10,9 @@
#include <stdint.h>
/** The maximum allowed size for a serialized block, in bytes (only for buffer size limits) */
static const unsigned int MAX_BLOCK_SERIALIZED_SIZE = 4000000;
static const unsigned int MAX_BLOCK_SERIALIZED_SIZE = 8000000;
/** The maximum allowed weight for a block, see BIP 141 (network rule) */
static const unsigned int MAX_BLOCK_WEIGHT = 4000000;
static const unsigned int MAX_BLOCK_WEIGHT = 8000000;
/** The maximum allowed number of signature check operations in a block (network rule) */
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;
/** Coinbase transaction outputs can only be spent after this number of new blocks (network rule) */

View file

@ -48,7 +48,11 @@ struct BIP9Deployment {
*/
struct Params {
uint256 hashGenesisBlock;
int nSubsidyHalvingInterval;
int nSubsidyLevelInterval;
/** Used to check majorities for block version upgrade */
int nMajorityEnforceBlockUpgrade;
int nMajorityRejectBlockOutdated;
int nMajorityWindow;
/* Block hash that is excepted from BIP16 enforcement */
uint256 BIP16Exception;
/** Block height and hash at which BIP34 becomes active */
@ -70,8 +74,30 @@ struct Params {
uint256 powLimit;
bool fPowAllowMinDifficultyBlocks;
bool fPowNoRetargeting;
int nAllowMinDiffMinHeight;
int nAllowMinDiffMaxHeight;
int nNormalizedNameForkHeight;
int nMinTakeoverWorkaroundHeight;
int nMaxTakeoverWorkaroundHeight;
int nWitnessForkHeight;
int64_t nPowTargetSpacing;
int64_t nPowTargetTimespan;
/** how long it took claims to expire before the hard fork */
int64_t nOriginalClaimExpirationTime;
/** how long it takes claims to expire after the hard fork */
int64_t nExtendedClaimExpirationTime;
/** height of the hard fork that changed the expiration time */
int64_t nExtendedClaimExpirationForkHeight;
int64_t GetExpirationTime(int64_t nHeight) const {
return nHeight < nExtendedClaimExpirationForkHeight ?
nOriginalClaimExpirationTime :
nExtendedClaimExpirationTime;
}
/** height of the hard fork that adds all claims into the merkle hash */
int64_t nAllClaimsInMerkleForkHeight;
int64_t DifficultyAdjustmentInterval() const { return nPowTargetTimespan / nPowTargetSpacing; }
uint256 nMinimumChainWork;
uint256 defaultAssumeValid;

View file

@ -9,6 +9,8 @@
#include <script/interpreter.h>
#include <consensus/validation.h>
#include <nameclaim.h>
// TODO remove the following dependencies
#include <chain.h>
#include <coins.h>
@ -129,8 +131,9 @@ unsigned int GetP2SHSigOpCount(const CTransaction& tx, const CCoinsViewCache& in
const Coin& coin = inputs.AccessCoin(tx.vin[i].prevout);
assert(!coin.IsSpent());
const CTxOut &prevout = coin.out;
if (prevout.scriptPubKey.IsPayToScriptHash())
nSigOps += prevout.scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
const CScript& scriptPubKey = StripClaimScriptPrefix(prevout.scriptPubKey);
if (scriptPubKey.IsPayToScriptHash())
nSigOps += scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
}
return nSigOps;
}
@ -151,7 +154,8 @@ int64_t GetTransactionSigOpCost(const CTransaction& tx, const CCoinsViewCache& i
const Coin& coin = inputs.AccessCoin(tx.vin[i].prevout);
assert(!coin.IsSpent());
const CTxOut &prevout = coin.out;
nSigOps += CountWitnessSigOps(tx.vin[i].scriptSig, prevout.scriptPubKey, &tx.vin[i].scriptWitness, flags);
const CScript& scriptPubKey = StripClaimScriptPrefix(prevout.scriptPubKey);
nSigOps += CountWitnessSigOps(tx.vin[i].scriptSig, scriptPubKey, &tx.vin[i].scriptWitness, flags);
}
return nSigOps;
}
@ -178,6 +182,12 @@ bool CheckTransaction(const CTransaction& tx, CValidationState &state, bool fChe
nValueOut += txout.nValue;
if (!MoneyRange(nValueOut))
return state.DoS(100, false, REJECT_INVALID, "bad-txns-txouttotal-toolarge");
// check claimtrie transactions
if (ClaimScriptSize(txout.scriptPubKey) > MAX_CLAIM_SCRIPT_SIZE)
return state.DoS(100, false, REJECT_INVALID, "bad-txns-claimscriptsize-toolarge");
if (ClaimNameSize(txout.scriptPubKey) > MAX_CLAIM_NAME_SIZE)
return state.DoS(100, false, REJECT_INVALID, "bad-txns-claimscriptname-toolarge");
}
// Check for duplicate inputs - note that this check is slow so we skip it in CheckBlock

View file

@ -18,7 +18,7 @@ static const unsigned char REJECT_INVALID = 0x10;
static const unsigned char REJECT_OBSOLETE = 0x11;
static const unsigned char REJECT_DUPLICATE = 0x12;
static const unsigned char REJECT_NONSTANDARD = 0x40;
// static const unsigned char REJECT_DUST = 0x41; // part of BIP 61
static const unsigned char REJECT_DUST = 0x41; // part of BIP 61
static const unsigned char REJECT_INSUFFICIENTFEE = 0x42;
static const unsigned char REJECT_CHECKPOINT = 0x43;

View file

@ -15,6 +15,7 @@
#include <util.h>
#include <utilmoneystr.h>
#include <utilstrencodings.h>
#include <nameclaim.h>
UniValue ValueFromAmount(const CAmount& amount)
{
@ -77,6 +78,10 @@ std::string SighashToStr(unsigned char sighash_type)
return it->second;
}
bool IsClaimOpCode(opcodetype code) {
return code == opcodetype::OP_CLAIM_NAME || code == opcodetype::OP_SUPPORT_CLAIM || code == opcodetype::OP_UPDATE_CLAIM;
}
/**
* Create the assembly string representation of a CScript object.
* @param[in] script CScript object to convert into the asm string representation.
@ -90,6 +95,7 @@ std::string ScriptToAsmStr(const CScript& script, const bool fAttemptSighashDeco
opcodetype opcode;
std::vector<unsigned char> vch;
CScript::const_iterator pc = script.begin();
bool isClaimScript = false;
while (pc < script.end()) {
if (!str.empty()) {
str += " ";
@ -98,8 +104,9 @@ std::string ScriptToAsmStr(const CScript& script, const bool fAttemptSighashDeco
str += "[error]";
return str;
}
isClaimScript |= IsClaimOpCode(opcode);
if (0 <= opcode && opcode <= OP_PUSHDATA4) {
if (vch.size() <= static_cast<std::vector<unsigned char>::size_type>(4)) {
if (vch.size() <= static_cast<std::vector<unsigned char>::size_type>(4) && !isClaimScript) {
str += strprintf("%d", CScriptNum(vch, false).getint());
} else {
// the IsUnspendable check makes sure not to try to decode OP_RETURN data that may match the format of a signature
@ -141,12 +148,20 @@ void ScriptToUniv(const CScript& script, UniValue& out, bool include_address)
out.pushKV("hex", HexStr(script.begin(), script.end()));
std::vector<std::vector<unsigned char>> solns;
txnouttype type;
Solver(script, type, solns);
out.pushKV("type", GetTxnOutputType(type));
txnouttype type; int claimOp;
auto stripped = StripClaimScriptPrefix(script, claimOp);
Solver(stripped, type, solns);
if (claimOp >= 0) {
out.pushKV("isclaim", UniValue(claimOp == OP_CLAIM_NAME || claimOp == OP_UPDATE_CLAIM));
out.pushKV("issupport", UniValue(claimOp == OP_SUPPORT_CLAIM));
out.pushKV("subtype", GetTxnOutputType(type));
out.pushKV("type", GetTxnOutputType(TX_NONSTANDARD)); // trying to keep backwards compatibility
}
else
out.pushKV("type", GetTxnOutputType(type)); // trying to keep backwards compatibility
CTxDestination address;
if (include_address && ExtractDestination(script, address)) {
if (include_address && ExtractDestination(stripped, address)) {
out.pushKV("address", EncodeDestination(address));
}
}
@ -162,19 +177,28 @@ void ScriptPubKeyToUniv(const CScript& scriptPubKey,
if (fIncludeHex)
out.pushKV("hex", HexStr(scriptPubKey.begin(), scriptPubKey.end()));
if (!ExtractDestinations(scriptPubKey, type, addresses, nRequired)) {
int claimOp;
auto stripped = StripClaimScriptPrefix(scriptPubKey, claimOp);
auto extracted = ExtractDestinations(stripped, type, addresses, nRequired);
if (extracted)
out.pushKV("reqSigs", nRequired);
if (claimOp >= 0) {
out.pushKV("isclaim", UniValue(claimOp == OP_CLAIM_NAME || claimOp == OP_UPDATE_CLAIM));
out.pushKV("issupport", UniValue(claimOp == OP_SUPPORT_CLAIM));
out.pushKV("subtype", GetTxnOutputType(type));
out.pushKV("type", GetTxnOutputType(TX_NONSTANDARD));
}
else
out.pushKV("type", GetTxnOutputType(type));
return;
}
out.pushKV("reqSigs", nRequired);
out.pushKV("type", GetTxnOutputType(type));
UniValue a(UniValue::VARR);
for (const CTxDestination& addr : addresses) {
a.push_back(EncodeDestination(addr));
if (extracted) {
UniValue a(UniValue::VARR);
for (const CTxDestination &addr : addresses) {
a.push_back(EncodeDestination(addr));
}
out.pushKV("addresses", a);
}
out.pushKV("addresses", a);
}
void TxToUniv(const CTransaction& tx, const uint256& hashBlock, UniValue& entry, bool include_hex, int serialize_flags)

View file

@ -80,14 +80,14 @@ static void SetMaxOpenFiles(leveldb::Options *options) {
// implementation that does not use extra file descriptors (the fds are
// closed after being mmaped).
//
// Increasing the value beyond the default is dangerous because LevelDB will
// fall back to a non-mmap implementation when the file count is too large.
// Increasing the value beyond the mmap limit is dangerous because LevelDB will
// fall back to a non-mmap implementation when the file count is too large (thus contending with select()).
// On 32-bit Unix host we should decrease the value because the handles use
// up real fds, and we want to avoid fd exhaustion issues.
//
// See PR #12495 for further discussion.
int default_open_files = options->max_open_files;
int default_open_files = 400;
#ifndef WIN32
if (sizeof(void*) < 8) {
options->max_open_files = 64;
@ -100,9 +100,10 @@ static void SetMaxOpenFiles(leveldb::Options *options) {
static leveldb::Options GetOptions(size_t nCacheSize)
{
leveldb::Options options;
options.block_cache = leveldb::NewLRUCache(nCacheSize / 2);
options.write_buffer_size = nCacheSize / 4; // up to two write buffers may be held in memory simultaneously
options.filter_policy = leveldb::NewBloomFilterPolicy(10);
auto write_cache = std::min(nCacheSize / 4, size_t(32 * 1024 * 1024)); // cap write_cache
options.block_cache = leveldb::NewLRUCache(nCacheSize - write_cache * 2);
options.write_buffer_size = write_cache; // up to two write buffers may be held in memory simultaneously
options.filter_policy = leveldb::NewBloomFilterPolicy(12);
options.compression = leveldb::kNoCompression;
options.info_log = new CBitcoinLevelDBLogger();
if (leveldb::kMajorVersion > 1 || (leveldb::kMajorVersion == 1 && leveldb::kMinorVersion >= 16)) {
@ -115,7 +116,7 @@ static leveldb::Options GetOptions(size_t nCacheSize)
}
CDBWrapper::CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory, bool fWipe, bool obfuscate)
: m_name(fs::basename(path))
: m_name(fs::basename(path)), ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION)
{
penv = nullptr;
readoptions.verify_checksums = true;
@ -180,8 +181,16 @@ CDBWrapper::~CDBWrapper()
options.env = nullptr;
}
bool CDBWrapper::Sync() {
CDBBatch batch(*this);
return WriteBatch(batch, true);
}
bool CDBWrapper::WriteBatch(CDBBatch& batch, bool fSync)
{
if (!pdb)
return false;
const bool log_memory = LogAcceptCategory(BCLog::LEVELDB);
double mem_before = 0;
if (log_memory) {

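As a rough illustration of the cache split introduced in the GetOptions() hunk above, this sketch computes the resulting write buffer and block cache for two arbitrary total cache sizes; the numbers are examples only.

#include <algorithm>
#include <cstddef>
#include <iostream>

int main()
{
    for (size_t nCacheSize : {size_t(64) << 20, size_t(512) << 20}) { // 64 MiB and 512 MiB, arbitrary
        const size_t write_cache = std::min(nCacheSize / 4, size_t(32 * 1024 * 1024)); // same cap as above
        const size_t block_cache = nCacheSize - write_cache * 2;
        std::cout << (nCacheSize >> 20) << " MiB total -> " << (write_cache >> 20)
                  << " MiB write buffer, " << (block_cache >> 20) << " MiB block cache" << std::endl;
    }
    return 0;
}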
View file

@ -16,9 +16,6 @@
#include <leveldb/db.h>
#include <leveldb/write_batch.h>
static const size_t DBWRAPPER_PREALLOC_KEY_SIZE = 64;
static const size_t DBWRAPPER_PREALLOC_VALUE_SIZE = 1024;
class dbwrapper_error : public std::runtime_error
{
public:
@ -33,87 +30,16 @@ namespace dbwrapper_private {
/** Handle database error by throwing dbwrapper_error exception.
*/
void HandleError(const leveldb::Status& status);
void HandleError(const leveldb::Status& status);
/** Work around circular dependency, as well as for testing in dbwrapper_tests.
* Database obfuscation should be considered an implementation detail of the
* specific database.
*/
const std::vector<unsigned char>& GetObfuscateKey(const CDBWrapper &w);
const std::vector<unsigned char>& GetObfuscateKey(const CDBWrapper &w);
};
/** Batch of changes queued to be written to a CDBWrapper */
class CDBBatch
{
friend class CDBWrapper;
private:
const CDBWrapper &parent;
leveldb::WriteBatch batch;
CDataStream ssKey;
CDataStream ssValue;
size_t size_estimate;
public:
/**
* @param[in] _parent CDBWrapper that this batch is to be submitted to
*/
explicit CDBBatch(const CDBWrapper &_parent) : parent(_parent), ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION), size_estimate(0) { };
void Clear()
{
batch.Clear();
size_estimate = 0;
}
template <typename K, typename V>
void Write(const K& key, const V& value)
{
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
ssValue.reserve(DBWRAPPER_PREALLOC_VALUE_SIZE);
ssValue << value;
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
leveldb::Slice slValue(ssValue.data(), ssValue.size());
batch.Put(slKey, slValue);
// LevelDB serializes writes as:
// - byte: header
// - varint: key length (1 byte up to 127B, 2 bytes up to 16383B, ...)
// - byte[]: key
// - varint: value length
// - byte[]: value
// The formula below assumes the key and value are both less than 16k.
size_estimate += 3 + (slKey.size() > 127) + slKey.size() + (slValue.size() > 127) + slValue.size();
ssKey.clear();
ssValue.clear();
}
template <typename K>
void Erase(const K& key)
{
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
batch.Delete(slKey);
// LevelDB serializes erases as:
// - byte: header
// - varint: key length
// - byte[]: key
// The formula below assumes the key is less than 16kB.
size_estimate += 2 + (slKey.size() > 127) + slKey.size();
ssKey.clear();
}
size_t SizeEstimate() const { return size_estimate; }
};
class CDBIterator
{
private:
@ -127,7 +53,7 @@ public:
* @param[in] _piter The original leveldb iterator.
*/
CDBIterator(const CDBWrapper &_parent, leveldb::Iterator *_piter) :
parent(_parent), piter(_piter) { };
parent(_parent), piter(_piter) { };
~CDBIterator();
bool Valid() const;
@ -136,7 +62,6 @@ public:
template<typename K> void Seek(const K& key) {
CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
piter->Seek(slKey);
@ -173,6 +98,8 @@ public:
};
class CDBBatch;
class CDBWrapper
{
friend const std::vector<unsigned char>& dbwrapper_private::GetObfuscateKey(const CDBWrapper &w);
@ -213,6 +140,8 @@ private:
std::vector<unsigned char> CreateObfuscateKey() const;
public:
mutable CDataStream ssKey, ssValue;
/**
* @param[in] path Location in the filesystem where leveldb data will be stored.
* @param[in] nCacheSize Configures various leveldb cache settings.
@ -222,21 +151,21 @@ public:
* with a zero'd byte array.
*/
CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory = false, bool fWipe = false, bool obfuscate = false);
~CDBWrapper();
virtual ~CDBWrapper();
CDBWrapper(const CDBWrapper&) = delete;
CDBWrapper& operator=(const CDBWrapper&) = delete;
/* CDBWrapper& operator=(const CDBWrapper&) = delete; */
template <typename K, typename V>
bool Read(const K& key, V& value) const
{
CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
assert(ssKey.empty());
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) {
if (status.IsNotFound())
return false;
@ -254,23 +183,17 @@ public:
}
template <typename K, typename V>
bool Write(const K& key, const V& value, bool fSync = false)
{
CDBBatch batch(*this);
batch.Write(key, value);
return WriteBatch(batch, fSync);
}
bool Write(const K& key, const V& value, bool fSync = false);
template <typename K>
bool Exists(const K& key) const
{
CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) {
if (status.IsNotFound())
return false;
@ -281,12 +204,7 @@ public:
}
template <typename K>
bool Erase(const K& key, bool fSync = false)
{
CDBBatch batch(*this);
batch.Erase(key);
return WriteBatch(batch, fSync);
}
bool Erase(const K& key, bool fSync = false);
bool WriteBatch(CDBBatch& batch, bool fSync = false);
@ -299,11 +217,7 @@ public:
return true;
}
bool Sync()
{
CDBBatch batch(*this);
return WriteBatch(batch, true);
}
bool Sync();
CDBIterator *NewIterator()
{
@ -319,8 +233,8 @@ public:
size_t EstimateSize(const K& key_begin, const K& key_end) const
{
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey2.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey1.reserve(ssKey.capacity());
ssKey2.reserve(ssKey.capacity());
ssKey1 << key_begin;
ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
@ -338,8 +252,8 @@ public:
void CompactRange(const K& key_begin, const K& key_end) const
{
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey2.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey1.reserve(ssKey.capacity());
ssKey2.reserve(ssKey.capacity());
ssKey1 << key_begin;
ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
@ -349,4 +263,88 @@ public:
};
/** Batch of changes queued to be written to a CDBWrapper */
class CDBBatch
{
friend class CDBWrapper;
const CDBWrapper &parent;
leveldb::WriteBatch batch;
size_t size_estimate;
CDataStream ssKey, ssValue;
public:
/**
* @param[in] _parent CDBWrapper that this batch is to be submitted to
*/
explicit CDBBatch(const CDBWrapper &_parent) : parent(_parent), size_estimate(0),
ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION) {
ssKey.reserve(parent.ssKey.capacity());
ssValue.reserve(parent.ssValue.capacity());
};
void Clear()
{
batch.Clear();
size_estimate = 0;
}
template <typename K, typename V>
void Write(const K& key, const V& value)
{
assert(ssKey.empty());
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
assert(ssValue.empty());
ssValue << value;
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
leveldb::Slice slValue(ssValue.data(), ssValue.size());
batch.Put(slKey, slValue);
// LevelDB serializes writes as:
// - byte: header
// - varint: key length (1 byte up to 127B, 2 bytes up to 16383B, ...)
// - byte[]: key
// - varint: value length
// - byte[]: value
// The formula below assumes the key and value are both less than 16k.
size_estimate += 3 + (slKey.size() > 127) + slKey.size() + (slValue.size() > 127) + slValue.size();
ssKey.clear();
ssValue.clear();
}
template <typename K>
void Erase(const K& key)
{
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
batch.Delete(slKey);
// LevelDB serializes erases as:
// - byte: header
// - varint: key length
// - byte[]: key
// The formula below assumes the key is less than 16kB.
size_estimate += 2 + (slKey.size() > 127) + slKey.size();
ssKey.clear();
}
size_t SizeEstimate() const { return size_estimate; }
};
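A small worked instance of the size-estimate formulas in Write() and Erase() above; the helper names and the byte sizes are made up for illustration.

#include <cstddef>
#include <iostream>

static size_t EstimatePut(size_t keyLen, size_t valueLen) // mirrors CDBBatch::Write
{
    return 3 + (keyLen > 127) + keyLen + (valueLen > 127) + valueLen;
}

static size_t EstimateErase(size_t keyLen) // mirrors CDBBatch::Erase
{
    return 2 + (keyLen > 127) + keyLen;
}

int main()
{
    std::cout << EstimatePut(10, 200) << std::endl; // 3 + 0 + 10 + 1 + 200 = 214 bytes
    std::cout << EstimateErase(10) << std::endl;    // 2 + 0 + 10 = 12 bytes
    return 0;
}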
template<typename K>
bool CDBWrapper::Erase(const K &key, bool fSync) {
CDBBatch batch(*this);
batch.Erase(key);
return WriteBatch(batch, fSync);
}
template<typename K, typename V>
bool CDBWrapper::Write(const K &key, const V &value, bool fSync) {
CDBBatch batch(*this);
batch.Write(key, value);
return WriteBatch(batch, fSync);
}
#endif // BITCOIN_DBWRAPPER_H

View file

@ -12,6 +12,43 @@ inline uint32_t ROTL32(uint32_t x, int8_t r)
return (x << r) | (x >> (32 - r));
}
// LBRY proof-of-work hash:
// SHA-256d( RIPEMD-160(first 32 bytes of SHA-512(SHA-256d(input))) || RIPEMD-160(last 32 bytes) )
uint256 PoWHash(const std::vector<unsigned char>& input)
{
CHash256 h256; // double SHA-256
CSHA512 h512;
CRIPEMD160 h160;
std::vector<unsigned char> out;
out.resize(h512.OUTPUT_SIZE);
std::vector<unsigned char> out_small;
out_small.resize(h160.OUTPUT_SIZE);
h256.Write(input.data(), input.size());
h256.Finalize(&out[0]); // out[0..31] = SHA-256d(input)
h256.Reset();
h512.Write(out.data(), h256.OUTPUT_SIZE);
h512.Finalize(&out[0]); // out[0..63] = SHA-512 of that
h160.Write(out.data(), h512.OUTPUT_SIZE / 2);
h160.Finalize(&out_small[0]); // RIPEMD-160 of the first half
h160.Reset();
h256.Write(out_small.data(), h160.OUTPUT_SIZE); // feed it into the final double SHA-256
h160.Write(out.data() + h512.OUTPUT_SIZE / 2, h512.OUTPUT_SIZE / 2);
h160.Finalize(&out_small[0]); // RIPEMD-160 of the second half
out.resize(h256.OUTPUT_SIZE);
h256.Write(out_small.data(), h160.OUTPUT_SIZE); // append it and finish
h256.Finalize(&out[0]);
uint256 result(out);
return result;
}
unsigned int MurmurHash3(unsigned int nHashSeed, const std::vector<unsigned char>& vDataToHash)
{
// The following is MurmurHash3 (x86_32), see http://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp

View file

@ -190,6 +190,8 @@ uint256 SerializeHash(const T& obj, int nType=SER_GETHASH, int nVersion=PROTOCOL
return ss.GetHash();
}
uint256 PoWHash(const std::vector<unsigned char>& input);
unsigned int MurmurHash3(unsigned int nHashSeed, const std::vector<unsigned char>& vDataToHash);
void BIP32Hash(const ChainCode &chainCode, unsigned int nChild, unsigned char header, const unsigned char data[32], unsigned char output[64]);

View file

@ -218,7 +218,7 @@ static bool InitRPCAuthentication()
LogPrintf("No rpcpassword set - using random cookie authentication.\n");
if (!GenerateAuthCookie(&strRPCUserColonPass)) {
uiInterface.ThreadSafeMessageBox(
_("Error: A fatal internal error occurred, see debug.log for details"), // Same message as AbortNode
_("Error: A fatal internal error occurred generating the cookie, see debug.log for details"), // Same message as AbortNode
"", CClientUIInterface::MSG_ERROR);
return false;
}

View file

@ -28,6 +28,8 @@ protected:
DB(const fs::path& path, size_t n_cache_size,
bool f_memory = false, bool f_wipe = false, bool f_obfuscate = false);
~DB() override {}
/// Read block locator of the chain that the txindex is in sync with.
bool ReadBestBlock(CBlockLocator& locator) const;

View file

@ -16,31 +16,6 @@ constexpr char DB_TXINDEX_BLOCK = 'T';
std::unique_ptr<TxIndex> g_txindex;
struct CDiskTxPos : public CDiskBlockPos
{
unsigned int nTxOffset; // after header
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
READWRITEAS(CDiskBlockPos, *this);
READWRITE(VARINT(nTxOffset));
}
CDiskTxPos(const CDiskBlockPos &blockIn, unsigned int nTxOffsetIn) : CDiskBlockPos(blockIn.nFile, blockIn.nPos), nTxOffset(nTxOffsetIn) {
}
CDiskTxPos() {
SetNull();
}
void SetNull() {
CDiskBlockPos::SetNull();
nTxOffset = 0;
}
};
/**
* Access to the txindex database (indexes/txindex/)
*
@ -54,6 +29,7 @@ class TxIndex::DB : public BaseIndex::DB
{
public:
explicit DB(size_t n_cache_size, bool f_memory = false, bool f_wipe = false);
~DB() override {}
/// Read the disk location of the transaction data with the given hash. Returns false if the
/// transaction hash is not indexed.

View file

@ -9,6 +9,31 @@
#include <index/base.h>
#include <txdb.h>
struct CDiskTxPos : public CDiskBlockPos
{
unsigned int nTxOffset; // after header
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
READWRITEAS(CDiskBlockPos, *this);
READWRITE(VARINT(nTxOffset));
}
CDiskTxPos(const CDiskBlockPos &blockIn, unsigned int nTxOffsetIn) : CDiskBlockPos(blockIn.nFile, blockIn.nPos), nTxOffset(nTxOffsetIn) {
}
CDiskTxPos() {
SetNull();
}
void SetNull() {
CDiskBlockPos::SetNull();
nTxOffset = 0;
}
};
/**
* TxIndex is used to look up transactions included in the blockchain by hash.
* The index is written to a LevelDB database and records the filesystem

View file

@ -14,6 +14,7 @@
#include <chain.h>
#include <chainparams.h>
#include <checkpoints.h>
#include <claimtrie.h>
#include <compat/sanity.h>
#include <consensus/validation.h>
#include <fs.h>
@ -21,6 +22,7 @@
#include <httprpc.h>
#include <index/txindex.h>
#include <key.h>
#include <lbry.h>
#include <validation.h>
#include <miner.h>
#include <netbase.h>
@ -41,6 +43,7 @@
#include <txmempool.h>
#include <torcontrol.h>
#include <ui_interface.h>
#include <uint256.h>
#include <util.h>
#include <utilmoneystr.h>
#include <validationinterface.h>
@ -195,7 +198,7 @@ void Shutdown()
/// for example if the data directory was found to be locked.
/// Be sure that anything that writes files or flushes caches only does this if the respective
/// module was initialized.
RenameThread("bitcoin-shutoff");
RenameThread("lbrycrd-shutoff");
mempool.AddTransactionsUpdated(1);
StopHTTPRPC();
@ -264,6 +267,8 @@ void Shutdown()
pcoinscatcher.reset();
pcoinsdbview.reset();
pblocktree.reset();
delete pclaimTrie;
pclaimTrie = nullptr;
}
g_wallet_init_interface.Stop();
@ -366,6 +371,7 @@ void SetupServerArgs()
gArgs.AddArg("-datadir=<dir>", "Specify data directory", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbbatchsize", strprintf("Maximum database write batch size in bytes (default: %u)", nDefaultDbBatchSize), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbcache=<n>", strprintf("Set database cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-claimtriecache=<n>", strprintf("Set claim trie cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-feefilter", strprintf("Tell other nodes to filter invs to us by our mempool min fee (default: %u)", DEFAULT_FEEFILTER), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", false, OptionsCategory::OPTIONS);
@ -393,6 +399,7 @@ void SetupServerArgs()
hidden_args.emplace_back("-sysperms");
#endif
gArgs.AddArg("-txindex", strprintf("Maintain a full transaction index, used by the getrawtransaction rpc call (default: %u)", DEFAULT_TXINDEX), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-memfile=<GiB>", "Use a memory mapped file for the claimtrie allocations (default: use RAM instead)", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-addnode=<ip>", "Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.", false, OptionsCategory::CONNECTION);
gArgs.AddArg("-banscore=<n>", strprintf("Threshold for disconnecting misbehaving peers (default: %u)", DEFAULT_BANSCORE_THRESHOLD), false, OptionsCategory::CONNECTION);
@ -479,7 +486,6 @@ void SetupServerArgs()
CURRENCY_UNIT, FormatMoney(DEFAULT_TRANSACTION_MAXFEE)), false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-printpriority", strprintf("Log transaction fee per kB when mining blocks (default: %u)", DEFAULT_PRINTPRIORITY), true, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-printtoconsole", "Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)", false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-shrinkdebugfile", "Shrink debug.log file on client startup (default: 1 when no -debug)", false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-uacomment=<cmt>", "Append comment to the user agent string", false, OptionsCategory::DEBUG_TEST);
SetupChainParamsBaseOptions();
@ -527,8 +533,8 @@ void SetupServerArgs()
std::string LicenseInfo()
{
const std::string URL_SOURCE_CODE = "<https://github.com/bitcoin/bitcoin>";
const std::string URL_WEBSITE = "<https://bitcoincore.org>";
const std::string URL_SOURCE_CODE = "<https://github.com/lbryio/lbrycrd>";
const std::string URL_WEBSITE = "<https://lbry.com>";
return CopyrightHolders(strprintf(_("Copyright (C) %i-%i"), 2009, COPYRIGHT_YEAR) + " ") + "\n" +
"\n" +
@ -633,7 +639,7 @@ static void CleanupBlockRevFiles()
static void ThreadImport(std::vector<fs::path> vImportFiles)
{
const CChainParams& chainparams = Params();
RenameThread("bitcoin-loadblk");
RenameThread("lbrycrd-loadblk");
ScheduleBatchPriority();
{
@ -1229,11 +1235,6 @@ bool AppInitMain()
CreatePidFile(GetPidFile(), getpid());
#endif
if (g_logger->m_print_to_file) {
if (gArgs.GetBoolArg("-shrinkdebugfile", g_logger->DefaultShrinkDebugFile())) {
// Do this first since it both loads a bunch of debug.log into memory,
// and because this needs to happen before any other debug.log printing
g_logger->ShrinkDebugFile();
}
if (!g_logger->OpenDebugLog()) {
return InitError(strprintf("Could not open debug log file %s",
g_logger->m_file_path.string()));
@ -1250,9 +1251,9 @@ bool AppInitMain()
// Warn about relative -datadir path.
if (gArgs.IsArgSet("-datadir") && !fs::path(gArgs.GetArg("-datadir", "")).is_absolute()) {
LogPrintf("Warning: relative datadir option '%s' specified, which will be interpreted relative to the " /* Continued */
"current working directory '%s'. This is fragile, because if bitcoin is started in the future "
"current working directory '%s'. This is fragile, because if lbrycrd is started in the future "
"from a different location, it will be unable to locate the current data files. There could "
"also be data loss if bitcoin is started while in a temporary directory.\n",
"also be data loss if lbrycrd is started while in a temporary directory.\n",
gArgs.GetArg("-datadir", ""), fs::current_path().string());
}
@ -1426,6 +1427,7 @@ bool AppInitMain()
nCoinDBCache = std::min(nCoinDBCache, nMaxCoinsDBCache << 20); // cap total coins db cache
nTotalCache -= nCoinDBCache;
nCoinCacheUsage = nTotalCache; // the rest goes to in-memory cache
std::cout << "nTotalCache: " << nTotalCache << ", nCoinCacheUsage: " << nCoinCacheUsage << ", nCoinDBCache: " << nCoinDBCache << std::endl;
int64_t nMempoolSizeMax = gArgs.GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000;
LogPrintf("Cache configuration:\n");
LogPrintf("* Using %.1fMiB for block index database\n", nBlockTreeDBCache * (1.0 / 1024 / 1024));
@ -1435,6 +1437,8 @@ bool AppInitMain()
LogPrintf("* Using %.1fMiB for chain state database\n", nCoinDBCache * (1.0 / 1024 / 1024));
LogPrintf("* Using %.1fMiB for in-memory UTXO set (plus up to %.1fMiB of unused mempool space)\n", nCoinCacheUsage * (1.0 / 1024 / 1024), nMempoolSizeMax * (1.0 / 1024 / 1024));
g_memfileSize = gArgs.GetArg("-memfile", 0u);
bool fLoaded = false;
while (!fLoaded && !ShutdownRequested()) {
bool fReset = fReindex;
@ -1455,6 +1459,11 @@ bool AppInitMain()
// fails if it's still open from the previous loop. Close it first:
pblocktree.reset();
pblocktree.reset(new CBlockTreeDB(nBlockTreeDBCache, false, fReset));
delete pclaimTrie;
int64_t trieCacheMB = gArgs.GetArg("-claimtriecache", nDefaultDbCache);
trieCacheMB = std::min(trieCacheMB, nMaxDbCache);
trieCacheMB = std::max(trieCacheMB, nMinDbCache);
pclaimTrie = new CClaimTrie(false, fReindex || fReindexChainState, 32, trieCacheMB);
if (fReset) {
pblocktree->WriteReindexing(true);
@ -1528,6 +1537,13 @@ bool AppInitMain()
assert(chainActive.Tip() != nullptr);
}
CClaimTrieCache trieCache(pclaimTrie);
if (!trieCache.ReadFromDisk(chainActive.Tip()))
{
strLoadError = _("Error loading the claim trie from disk");
break;
}
if (!fReset) {
// Note that RewindBlockIndex MUST run even if we're about to -reindex-chainstate.
// It both disconnects blocks based on chainActive, and drops block data in

42
src/lbry.cpp Normal file
View file

@ -0,0 +1,42 @@
#include <lbry.h>
#include <uint256.h>
#include <cstdio>
uint32_t g_memfileSize = 0;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{
if (params.fPowNoRetargeting)
return pindexLast->nBits;
const int64_t retargetTimespan = params.nPowTargetTimespan;
const int64_t nActualTimespan = pindexLast->GetBlockTime() - nFirstBlockTime;
int64_t nModulatedTimespan = nActualTimespan;
int64_t nMaxTimespan;
int64_t nMinTimespan;
nModulatedTimespan = retargetTimespan + (nModulatedTimespan - retargetTimespan) / 8;
nMinTimespan = retargetTimespan - (retargetTimespan / 8); //(150 - 18 = 132)
nMaxTimespan = retargetTimespan + (retargetTimespan / 2); //(150 + 75 = 225)
// Limit adjustment step
if (nModulatedTimespan < nMinTimespan)
nModulatedTimespan = nMinTimespan;
else if (nModulatedTimespan > nMaxTimespan)
nModulatedTimespan = nMaxTimespan;
// Retarget
const arith_uint256 bnPowLimit = UintToArith256(params.powLimit);
arith_uint256 bnNew;
bnNew.SetCompact(pindexLast->nBits);
bnNew *= nModulatedTimespan;
bnNew /= retargetTimespan;
if (bnNew > bnPowLimit)
bnNew = bnPowLimit;
return bnNew.GetCompact();
}
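A worked instance of the clamping above, assuming the 150-second target timespan referenced in the inline comments and an arbitrary actual timespan of 300 seconds.

#include <algorithm>
#include <cstdint>
#include <iostream>

int main()
{
    const int64_t retargetTimespan = 150; // assumed nPowTargetTimespan, per the "(150 ...)" comments above
    const int64_t nActualTimespan = 300;  // hypothetical: the previous block took twice the target time

    int64_t nModulatedTimespan = retargetTimespan + (nActualTimespan - retargetTimespan) / 8; // 150 + 18 = 168
    const int64_t nMinTimespan = retargetTimespan - retargetTimespan / 8;                     // 132
    const int64_t nMaxTimespan = retargetTimespan + retargetTimespan / 2;                     // 225
    nModulatedTimespan = std::min(std::max(nModulatedTimespan, nMinTimespan), nMaxTimespan);  // stays 168

    // the new target is then scaled by 168/150, i.e. it becomes 12% larger (easier)
    std::cout << "modulated timespan: " << nModulatedTimespan << std::endl;
    return 0;
}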

10
src/lbry.h Normal file
View file

@ -0,0 +1,10 @@
#ifndef LBRY_H
#define LBRY_H
#include <chain.h>
#include <chainparams.h>
extern uint32_t g_memfileSize;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nLastRetargetTime, const Consensus::Params& params);
#endif

Some files were not shown because too many files have changed in this diff.