Compare commits

15 commits

Author SHA1 Message Date
Brannon King
c5b444f8e5 testing gitlab ci
test gitlab ci round 2
test gitlab ci round 3
test gitlab ci round 4
reverted export
test gitlab osx runner
added Macos SDK
fix wine run
2019-06-19 14:20:19 -06:00
Brannon King
8b85d7f646 upped the default validation period 2019-06-19 13:40:51 -06:00
Brannon King
7dcab807d9 restored the current "depends" and friends
fix windows test run
unit test round 2
attempting to fix ccache use on darwin
made ccache optional, no longer pulls clang on darwin build
fixing darwin build from Dockerfile
fixed missing nproc on OSX
updated readme to include regtest example, build examples
fix QT unit tests
made -j get passed down, added build.sh
2019-06-17 09:51:22 -06:00
Brannon King
2819e09e52 fixed ancestors not all in claim trie on packageFees condition 2019-06-17 09:40:34 -06:00
Brannon King
8ad7903aba made cache match legacy_master, removed my bad assert in undo 2019-05-28 15:18:50 -06:00
Brannon King
cc789fc517 added claimtrie field back to getblocktemplate
I also included a test to ensure that we don't forget it next time
2019-05-24 10:32:21 -06:00
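For reference, a check like the one described might be scripted as below; this is a minimal sketch only, where the RPC port is a placeholder, the credentials reuse the sample lbrycrd.conf values from the README, and the explicit segwit rule request reflects what Bitcoin 0.17-based nodes generally expect rather than anything stated in the commit:

```python
import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:9245/"           # placeholder rpcport; adjust to your node
RPC_USER, RPC_PASS = "lbry", "xyz123456790"  # placeholder credentials (sample lbrycrd.conf values)

def rpc_call(method, params=None):
    """Send one JSON-RPC request to lbrycrdd and return its 'result' member."""
    body = json.dumps({"jsonrpc": "1.0", "id": "check", "method": method, "params": params or []})
    auth = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
    req = urllib.request.Request(
        RPC_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json", "Authorization": "Basic " + auth},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

# 0.17-era nodes usually want the segwit rule requested explicitly.
template = rpc_call("getblocktemplate", [{"rules": ["segwit"]}])
assert "claimtrie" in template, "getblocktemplate response is missing the claimtrie field"
print("claimtrie entry:", template["claimtrie"])
```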
Brannon King
e65e77f9bf Undo compatibility (#281)
* added test for claimname RPC
2019-05-23 11:46:37 -06:00
lbrynaut
70e7743acc Fix a bug that treats all claims as our own wallet txs. 2019-05-22 15:56:52 -06:00
Brannon King
55f5f2049e allow rest/block/height.json
changes from review, added integration test
2019-05-13 15:04:19 -06:00
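A sketch of how the new path could be exercised once a node is running; the host, port, and the assumption that the REST interface is enabled (typically via a -rest style startup option) are placeholders, and only the rest/block/<height>.json shape comes from the commit message:

```python
import json
import urllib.request

# Placeholder endpoint; the REST interface must be enabled on the node for this to work.
REST_BASE = "http://127.0.0.1:9245"

def block_by_height(height):
    """Fetch a block as JSON using the height-based path this commit allows."""
    with urllib.request.urlopen(f"{REST_BASE}/rest/block/{height}.json") as resp:
        return json.load(resp)

block = block_by_height(100)
print(block.get("hash"), "contains", len(block.get("tx", [])), "transactions")
```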
Brannon King
7f6daef99b code reuse between miner & validator
originally from BvbFan
2019-05-07 16:32:22 -06:00
Jeremy Kauffman
3006f4d99d fix missing spaces in headers 2019-05-07 14:10:51 -06:00
Brannon King
2a1a4ef9c1 pulled in a few minor keepers from the other rebase branch 2019-05-07 14:10:12 -06:00
Brannon King
5ca2e96f5b fixed small claim names coming out as numeric 2019-05-06 16:40:40 -06:00
Brannon King
db55cc6960 fixed slow-running unit tests 2019-05-06 16:29:17 -06:00
lbrynaut
4c9c79e9f5 Rebase lbry on to Bitcoin 0.17.
This contains significant rebase / merge / testing work by Naut
<lbrynaut@protonmail.com>, Anthony Fieroni <bvbfan@abv.bg> and Brannon
King <countprimes@gmail.com>.
2019-05-01 14:50:32 -05:00
132 changed files with 9229 additions and 18634 deletions

@@ -1,8 +1,8 @@
-<!-- This issue tracker is only for technical issues related to lbrycrd (the LBRY blockchain).
+<!-- This issue tracker is only for technical issues related to Bitcoin Core.
-General questions and/or support requests are best directed to the community chat at https://chat.lbry.org.
+General bitcoin questions and/or support requests are best directed to the Bitcoin StackExchange at https://bitcoin.stackexchange.com.
-For reporting security issues, please email security@lbry.com.
+For reporting security issues, please read instructions at https://bitcoincore.org/en/contact/.
 If the node is "stuck" during sync or giving "block checksum mismatch" errors, please ensure your hardware is stable by running memtest and observe CPU temperature with a load-test tool such as linpack before creating an issue! -->
@@ -13,7 +13,7 @@ If the node is "stuck" during sync or giving "block checksum mismatch" errors, p
 <!--- How reliably can you reproduce the issue, what are the steps to do so? -->
-<!-- What version of lbrycrd are you using, where did you get it (website, self-compiled, etc)? -->
+<!-- What version of Bitcoin Core are you using, where did you get it (website, self-compiled, etc)? -->
 <!-- What type of machine are you observing the error on (OS/CPU and disk type)? -->

.gitignore (vendored): 2 changed lines

@@ -119,5 +119,3 @@ contrib/devtools/split-debug.sh
 .idea
 cmake-build-*/
-compile_commands\.json

.gitlab-ci.yml (new file): 72 changed lines

@@ -0,0 +1,72 @@
stages:
- build
- test
.Build Template: &build_template
stage: build
image: ${BUILD_IMAGE}
variables:
CCACHE_DIR: ${CI_PROJECT_DIR}/ccache
cache:
key: ${CI_JOB_NAME}
paths:
- ${CI_PROJECT_DIR}/ccache
artifacts:
paths:
- src/lbrycrdd
- src/lbrycrd-cli
- src/lbrycrd-tx
- src/test/test_lbrycrd
Build Linux:
<<: *build_template
script: packaging/build_linux_64bit.sh
Build Windows:
<<: *build_template
artifacts:
paths:
- src/lbrycrdd.exe
- src/lbrycrd-cli.exe
- src/lbrycrd-tx.exe
- src/test/test_lbrycrd.exe
script: packaging/build_windows_64bit.sh
Build MacOS:
<<: *build_template
script:
- mkdir -p ./depends/SDKs && pushd depends/SDKs && curl -C - ${MAC_OS_SDK} | tar --skip-old-files -xJ && popd
- packaging/build_darwin_64bit.sh
Test Ubuntu:
variables:
GIT_STRATEGY: none
stage: test
dependencies: [Build Linux]
image: ubuntu:16.04
script: src/test/test_lbrycrd
Test Fedora:
variables:
GIT_STRATEGY: none
stage: test
dependencies: [Build Linux]
image: fedora:26
script: src/test/test_lbrycrd
Test Windows via Wine:
variables:
GIT_STRATEGY: none
WINEDEBUG: -all
stage: test
dependencies: [Build Windows]
image: jess/wine
script: wine src/test/test_lbrycrd.exe
Test MacOS:
variables:
GIT_STRATEGY: none
stage: test
tags: [darwin]
dependencies: [Build MacOS]
script: src/test/test_lbrycrd

@@ -7,6 +7,7 @@ cache:
 stages:
 - build
 - test
+- quality
 jobs:
 include:
@@ -14,7 +15,7 @@ jobs:
 - &build-template
 stage: build
 name: linux
-env: NAME=linux DOCKER_IMAGE=lbry/build_lbrycrd_gcc EXT=
+env: NAME=linux EXT=
 os: linux
 dist: xenial
 language: minimal
@@ -22,17 +23,28 @@ jobs:
 - docker
 install:
 - mkdir -p ${HOME}/ccache
-- docker pull $DOCKER_IMAGE
+- docker pull $DOCKER_BUILD_IMAGE
 script:
+- echo "build..."
 - docker run -v "$(pwd):/lbrycrd" -v "${HOME}/ccache:/ccache" -w /lbrycrd -e CCACHE_DIR=/ccache ${DOCKER_IMAGE} packaging/build_${NAME}_64bit.sh
 before_deploy:
-- mkdir -p dist
+- mkdir dist
-- sudo zip -Xj dist/lbrycrd-${NAME}.zip src/lbrycrdd${EXT} src/lbrycrd-cli${EXT} src/lbrycrd-tx${EXT}
+- sudo zip -j dist/lbrycrd-${NAME}.zip src/lbrycrdd${EXT} src/lbrycrd-cli${EXT} src/lbrycrd-tx${EXT}
-- sudo zip -Xj dist/lbrycrd-${NAME}-test.zip src/test/test_lbrycrd${EXT} src/test/test_lbrycrd_fuzzy${EXT}
+- sudo zip -j dist/lbrycrd-${NAME}-test.zip src/test/test_lbrycrd${EXT} src/test/test_lbrycrd_fuzzy${EXT}
 - sha256sum dist/lbrycrd-${NAME}.zip
 - sha256sum dist/lbrycrd-${NAME}-test.zip
 deploy:
+- provider: releases
+name: $TRAVIS_BRANCH
+target_commitish: $TRAVIS_COMMIT
+tag_name: $TRAVIS_TAG
+api_key:
+secure: "Ni5WZNR5CefWXpyDUQLMQbQ2LH4Iot+0SqIoM9c4maW06al1M8vu57vWuj2cESsW7JsaBehCE45Cwmo5kWyEjAiZY8sIMmvixkMP/8uPWuLgNmnIbm7U+d0j652DmZshDYtt8EomqV2RhAx/rmBnzGkruLOw9WTp9ZdBN3WbTt/IpZ2gMgVbGWYGOx+uRw7/yGw8m4gShQheto/dycbyyR3XV2WP9wuLmNYkcQ6JumSoQdDWXcvVfbCwylGq2sLDKwhvfTr4iwYyYsWdmhfdEQl0WcIv5C8xgdiY2vzhi2LmLqFbS/fvKNC26Tfo4bOHFG/eOnvqc+yyEB8B/xqW9Gs+A0TUh/3N30vHYZGcpiDU35DwAN5bZ1+s+mr/ZrNzBJ5BgT8io3g0Ko8gykbDvFQVpg7kxFsqA1YCikEpG86lVGk6clTa5guJvAHse+DfnbWO1nfDxYQXW0md861m0txk8RpTC/TVNyH/lL/vsS7LB67EHhRdZY+O1+5sUGMdtvvhMoxJYCwQGpLkh43KRsKynkMUR94w2O9hc8cknXdV3wrndVz00XNdcur6y4D7HTll1tBrF68CA2yKUSY5hsjtPmdlN+DW8ou/rJiKOpQZ/Xzp69AQEheOFfDPItxQRYxWj0dMOk8eszf0wFvi1N7J/hT/IHnuX5ITfa/T4NE="
+file: dist/lbrycrd-${NAME}.zip
+skip_cleanup: true
+overwrite: true
+draft: true
+on:
+tags: true
 - provider: s3
 access_key_id: AKIAICKFHNTR5RITASAQ
 secret_access_key:
@@ -48,11 +60,11 @@ jobs:
 - <<: *build-template
 name: windows
-env: NAME=windows DOCKER_IMAGE=lbry/build_lbrycrd EXT=.exe
+env: NAME=windows EXT=.exe
 - <<: *build-template
 name: osx
-env: NAME=darwin DOCKER_IMAGE=lbry/build_lbrycrd EXT=
+env: NAME=darwin EXT=
 before_install:
 - mkdir -p ./depends/SDKs && pushd depends/SDKs && curl -C - ${MAC_OS_SDK} | tar --skip-old-files -xJ && popd
@@ -63,12 +75,12 @@ jobs:
 dist: xenial
 language: minimal
 git:
-clone: false
+depth: 3
 install:
-- mkdir -p testrun && cd testrun
+- mkdir testrun && cd testrun
 - curl http://build.lbry.io/lbrycrd/${TRAVIS_BRANCH}/lbrycrd-${NAME}-test.zip -o temp.zip
 - unzip temp.zip
-script: TRIEHASH_FUZZER_BLOCKS=1000 ./test_lbrycrd
+script: ./test_lbrycrd
 - <<: *test-template
 # os: windows # doesn't support secrets at the moment
@@ -78,11 +90,20 @@ jobs:
 services:
 - docker
 script:
-- docker pull lbry/wine
+- docker pull $DOCKER_WINE_IMAGE
-- docker run -v "$(pwd):/test" -e "WINEDEBUG=-all" -e "TRIEHASH_FUZZER_BLOCKS=1000" -it lbry/wine wine "/test/test_lbrycrd.exe"
+- docker run -v "$(pwd):/test" -e "WINEDEBUG=-all" -it $DOCKER_WINE_IMAGE wine "/test/test_lbrycrd.exe"
 - <<: *test-template
 os: osx
 osx_image: xcode8.3
 env: NAME=darwin
+- stage: quality
+name: "check format"
+os: linux
+dist: xenial
+language: minimal
+install:
+- sudo apt-get install -y clang-format-3.9
+script: git diff -U0 origin/master -- '*.h' '*.cpp' | ./contrib/devtools/clang-format-diff.py -p1

CMakeLists.txt (new file): 56 changed lines

@@ -0,0 +1,56 @@
cmake_minimum_required(VERSION 3.7)
project(lbrycrd_clion) # Do not use for full compile. This is for CLion syntax checking only.
set (CMAKE_CXX_STANDARD 11)
if(EXISTS "build/boost")
set(BOOST_ROOT "build/boost" CACHE PATH "Boost library path")
set(Boost_NO_SYSTEM_PATHS on CACHE BOOL "Do not search system for Boost")
endif()
find_package(Boost REQUIRED COMPONENTS filesystem thread chrono locale)
file(GLOB sources
src/*.h src/*.cpp
src/wallet/*.h src/wallet/*.cpp
src/support/*.h src/support/*.cpp src/support/allocators/*.h
src/script/*.h src/script/*.cpp
src/index/*.h src/index/*.cpp
src/interfaces/*.h src/interfaces/*.cpp
src/primitives/*.h src/primitives/*.cpp
src/policy/*.h src/policy/*.cpp
src/crypto/*.h src/crypto/*.cpp
src/consensus/*.h src/consensus/*.cpp
src/compat/*.h src/compat/*.cpp
src/rpc/*.h src/rpc/*.cpp
)
list(FILTER sources EXCLUDE REGEX "src/bitcoin*.cpp$")
include_directories(${Boost_INCLUDE_DIRS}
build/bdb/include
build/libevent/include
build/openssl/include
src/support/allocators
src/support
src/rpc
src/policy
src/wallet src/script
src/leveldb/helpers/memenv
src/leveldb/include
src/config
src/crypto
src/compat
src/obj
src/univalue/include
src/secp256k1/include
src/
)
add_compile_definitions(HAVE_CONFIG_H)
add_executable(lbrycrd-cli src/bitcoin-cli.cpp ${sources})
add_executable(lbrycrd-tx src/bitcoin-tx.cpp ${sources})
add_executable(lbrycrdd src/bitcoind.cpp ${sources})
file(GLOB tests src/test/*.cpp src/wallet/test/*.cpp)
add_executable(test_lbrycrd ${tests} ${sources})
target_include_directories(test_lbrycrd PRIVATE src/test)

README.md: 106 changed lines

@@ -1,32 +1,11 @@
 # LBRYcrd - The LBRY blockchain
 [![Build Status](https://travis-ci.org/lbryio/lbrycrd.svg?branch=master)](https://travis-ci.org/lbryio/lbrycrd)
-[![MIT licensed](https://img.shields.io/dub/l/vibe-d.svg?style=flat)](https://github.com/lbryio/lbry-desktop/blob/master/LICENSE)
-LBRYcrd uses a blockchain similar to bitcoin's to implement an index and payment system for content on the LBRY network. It is a fork of [bitcoin core](https://github.com/bitcoin/bitcoin). In addition to the libraries used by bitcoin, LBRYcrd also uses [icu4c](https://github.com/unicode-org/icu/tree/master/icu4c).
+LBRYcrd uses a blockchain similar to bitcoin's to implement an index and payment system for content on the LBRY network. It is a fork of bitcoin core. In addition to the libraries used by bitcoin, LBRYcrd also uses icu4c.
 Please read the [lbry.tech overview](https://lbry.tech/overview) for a general understanding of the LBRY pieces. From there you could read the [LBRY spec](https://spec.lbry.com/) for specifics on the data in the blockchain.
-## Table of Contents
-1. [Installation](#installation)
-2. [Usage](#usage)
-1. [Examples](#examples)
-2. [Data directory](#data-directory)
-3. [Running from Source](#running-from-source)
-1. [Ubuntu with pulled static dependencies](#ubuntu-with-pulled-static-dependencies)
-2. [Ubuntu with local shared dependencies](#ubuntu-with-local-shared-dependencies)
-3. [MacOS (cross-compiled)](<#macos-(cross-compiled)>)
-4. [MacOS with local shared dependencies](#macos-with-local-shared-dependencies)
-5. [Windows (cross-compiled)](<#windows-(cross-compiled)>)
-6. [Use with CLion](#use-with-clion)
-4. [Contributing](#contributing)
-- [Testnet](#testnet)
-5. [Mailing List](#mailing-list)
-6. [License](#license)
-7. [Security](#security)
-8. [Contact](#contact)
 ## Installation
 Latest binaries are available from https://github.com/lbryio/lbrycrd/releases. There is no installation procedure; the CLI binaries will run as-is and will have any uncommon dependencies statically linked into the binary. The QT GUI is not supported. LBRYcrd is distributed as a collection of executable files; traditional installers are not provided.
@@ -37,17 +16,16 @@ The `lbrycrdd` executable will start a LBRYcrd node and connect you to the LBRYc
 to interact with lbrycrdd through the command line. Command-line help for both executables are available through
 the "--help" flag (e.g. `lbrycrdd --help`). Examples:
-#### Examples
+#### Examples:
 Run `./lbrycrdd -server -daemon` to start lbrycrdd in the background.
-Run `./lbrycrd-cli -getinfo` to check for some basic information about your LBRYcrd node.
+Run `./lbrycrd-cli getinfo` to check for some basic information about your LBRYcrd node.
 Run `./lbrycrd-cli help` to get a list of all commands that you can run. To get help on specific commands run `./lbrycrd-cli [command_name] help`
 Test locally:
-```sh
+```
 ./lbrycrdd -server -regtest -txindex # run this in its own window
 ./lbrycrd-cli -regtest generate 120 # mine 20 spendable coins
 ./lbrycrd-cli -regtest claimname my_name deadbeef 1 # hold a name claim with 1 coin
@@ -57,26 +35,24 @@ Test locally:
 ./lbrycrd-cli -regtest stop # kill lbrycrdd
 rm -fr ~/.lbrycrd/regtest/ # destroy regtest data
 ```
 For further understanding of a "regtest" setup, see the local stack setup instructions here: https://lbry.tech/resources/regtest-setup
 The CLI help is also browsable online at https://lbry.tech/api/blockchain
-#### Data directory
+#### Data directory:
 Lbrycrdd will use the below default data directories (changeable with -datadir):
-```sh
+```
 Windows: %APPDATA%\lbrycrd
 Mac: ~/Library/Application Support/lbrycrd
 Unix: ~/.lbrycrd
 ```
 The data directory contains various things such as your default wallet (wallet.dat), debug logs (debug.log), and blockchain data. You can optionally create a configuration file lbrycrd.conf in the default data directory which will be used by default when running lbrycrdd.
 For a list of configuration parameters, run `./lbrycrdd --help`. Below is a sample lbrycrd.conf to enable JSON RPC server on lbrycrdd.
-```sh
+```
 rpcuser=lbry
 rpcpassword=xyz123456790
 daemon=1
@@ -85,19 +61,15 @@ txindex=1
 ```
 ## Running from Source
-The easiest way to compile is to utilize the Docker image that contains the necessary compilers: lbry/build_lbrycrd. This will allow you to reproduce the build as made on our build servers. In this sample we map a local lbrycrd folder and a local ccache folder inside the image:
+The easiest way to compile is to utilize the Docker image that contains the necessary compilers: lbry/build_lbrycrd. This will allow you to reproduce the build as made on our build servers. I this sample we map a local lbrycrd folder and a local ccache folder inside the image:
-```sh
+```
 git clone https://github.com/lbryio/lbrycrd.git
 cd lbrycrd
 docker run -v "$(pwd):/lbrycrd" --rm -v "${HOME}/ccache:/ccache" -w /lbrycrd -e CCACHE_DIR=/ccache lbry/build_lbrycrd packaging/build_linux_64bit.sh
 ```
 Some examples of compiling directly:
-#### Ubuntu with pulled static dependencies
+#### Ubuntu with pulled static dependencies:
-```sh
+```
 sudo apt install build-essential git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates
 git clone https://github.com/lbryio/lbrycrd.git
 cd lbrycrd
@@ -105,14 +77,10 @@ cd lbrycrd
 ./src/test/test_lbrycrd
 ```
 Other Linux distros would be similar. The build shell script is fairly trivial; take a peek at its contents.
-#### Ubuntu with local shared dependencies
+#### Ubuntu with local shared dependencies:
 Note: using untested dependencies may lead to conflicting results.
-```sh
+```
 sudo add-apt-repository ppa:bitcoin/bitcoin
 sudo apt-get update
 sudo apt-get install libdb4.8-dev libdb4.8++-dev libicu-dev libssl-dev libevent-dev \
@@ -129,10 +97,8 @@ make -j$(nproc)
 ./src/lbrycrdd -server ...
 ```
-#### MacOS (cross-compiled)
+#### MacOS (cross-compiled):
-```sh
+```
 sudo apt-get install clang llvm git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates \
 libboost-system-dev libboost-filesystem-dev libboost-chrono-dev libboost-test-dev libboost-thread-dev
@@ -144,12 +110,9 @@ tar ... extract SDK to depends/SDKs/MacOSX10.11.sdk
 ./packaging/build_darwin_64bit.sh
 ```
 Look in packaging/build_darwin_64bit.sh for further understanding.
-#### MacOS with local shared dependencies
+#### MacOS with local shared dependencies:
-```sh
+```
 brew install boost berkeley-db@4 icu4c libevent
 # fix conflict with gawk pulled first:
 brew reinstall readline
@@ -164,12 +127,9 @@ CONFIG_SITE=$(pwd)/depends/x86_64-apple-darwin15.6.0/share/config.site ./configu
 make -j$(sysctl -n hw.ncpu)
 ```
-#### Windows (cross-compiled)
+#### Windows (cross-compiled):
 Compiling on MS Windows (outside of WSL) is not supported. The Windows build is cross-compiled from Linux like so:
-```sh
+```
 sudo apt-get install build-essential git libtool autotools-dev automake pkg-config bsdmainutils curl ca-certificates \
 g++-mingw-w64-x86-64 mingw-w64-x86-64-dev
@@ -183,18 +143,6 @@ cd lbrycrd
 If you encounter any errors, please check `doc/build-*.md` for further instructions. If you're still stuck, [create an issue](https://github.com/lbryio/lbrycrd/issues/new) with the output of that command, your system info, and any other information you think might be helpful. The scripts in the packaging folder are simple and will grant extra light on the build process as needed.
-#### Use with CLion
-CLion has not traditionally supported Autotools projects, although some progress on that is now in the works. We do include a cmake build file for compiling lbrycrd. See contrib/cmake. Alas, CLion doesn't support external projects in cmake, so that particular approach is also insufficient. CLion does support "compile_commands.json" projects. Fortunately, this can be easily generated for lbrycrd like so:
-```sh
-pip install --user compiledb
-./autogen.sh && ./configure --enable-static=no --enable-shared --with-pic --without-gui CXXFLAGS="-O0 -g" CFLAGS="-O0 -g" # or whatever normal lbrycrd config
-compiledb make -j10
-```
-Then open the newly generated compile_commands.json file as a project in CLion. Debugging is supported if you compiled with `-g`. To enable that you will need to create a target in CLion by going to File -> Settings -> Build -> Custom Build Targets. Add an empty target with your choice of name. From there you can go to "Edit Configurations", typically found in a drop-down at the top of the editor. Add a Custom Build Application, select your new target, select the compiled file (i.e. test_lbrycrd or lbrycrdd, etc), and then add any necessary command line parameters. Ensure that there is nothing in the "Before launch" section.
 ## Contributing
 Contributions to this project are welcome, encouraged, and compensated. For more details, see [https://lbry.tech/contribute](https://lbry.tech/contribute)
@@ -210,8 +158,8 @@ regularly to indicate new official, stable release versions.
 Testing and code review is the bottleneck for development; we get more pull
 requests than we can review and test on short notice. Please be patient and help out by testing
 other people's pull requests, and remember this is a security-critical project where any mistake might cost people
-lots of money. Developers are strongly encouraged to write [unit tests](/src/test/README.md) for new code and to
+lots of money. Developers are strongly encouraged to write [unit tests](/doc/unit-tests.md) for new code and to
-submit new unit tests for old code. Unit tests are compiled by default and can be run with `src/test/test_lbrycrd`
+submit new unit tests for old code. Unit tests are compiled by default and can be run with `src/test/test_lbrycrd`.
 The Travis CI system makes sure that every pull request is built, and that unit and sanity tests are automatically run. See https://travis-ci.org/lbryio/lbrycrd
@@ -219,21 +167,17 @@ The Travis CI system makes sure that every pull request is built, and that unit
 Testnet is maintained for testing purposes and can be accessed using the command `./lbrycrdd -testnet`. If you would like to obtain testnet credits, please contact brannon@lbry.com or grin@lbry.com .
-It is easy to solo mine on testnet. (It's easy on mainnet too, but much harder to win.) For instructions see [SGMiner](https://github.com/lbryio/sgminer-gm) and [Mining Contributions](https://github.com/lbryio/lbrycrd/tree/master/contrib/mining)
-## Mailing List
-We maintain a mailing list for notifications of upgrades, security issues, and soft/hard forks. To join, visit [https://lbry.com/forklist](https://lbry.com/forklist).
 ## License
 This project is MIT licensed. For the full license, see [LICENSE](LICENSE).
 ## Security
-We take security seriously. Please contact [security@lbry.com](mailto:security@lbry.com) regarding any security issues.
+We take security seriously. Please contact security@lbry.com regarding any security issues.
-Our PGP key is [here](https://lbry.com/faq/pgp-key) if you need it.
+Our PGP key is [here](https://keybase.io/lbry/key.asc) if you need it.
 ## Contact
 The primary contact for this project is [@BrannonKing](https://github.com/BrannonKing) (brannon@lbry.com)

@@ -7,7 +7,7 @@ export LC_ALL=C
 set -e
 srcdir="$(dirname $0)"
 cd "$srcdir"
-if [ -z ${LIBTOOLIZE} ] && GLIBTOOLIZE="$(which glibtoolize 2>/dev/null)"; then
+if [ -z ${LIBTOOLIZE} ] && GLIBTOOLIZE="`which glibtoolize 2>/dev/null`"; then
 LIBTOOLIZE="${GLIBTOOLIZE}"
 export LIBTOOLIZE
 fi

@@ -2,10 +2,10 @@ dnl require autoconf 2.60 (AS_ECHO/AS_ECHO_N)
 AC_PREREQ([2.60])
 define(_CLIENT_VERSION_MAJOR, 0)
 define(_CLIENT_VERSION_MINOR, 17)
-define(_CLIENT_VERSION_REVISION, 3)
+define(_CLIENT_VERSION_REVISION, 1)
-define(_CLIENT_VERSION_BUILD, 3)
+define(_CLIENT_VERSION_BUILD, 0)
 define(_CLIENT_VERSION_IS_RELEASE, true)
-define(_COPYRIGHT_YEAR, 2021)
+define(_COPYRIGHT_YEAR, 2019)
 define(_COPYRIGHT_HOLDERS,[The %s developers])
 define(_COPYRIGHT_HOLDERS_SUBSTITUTION,[[LBRYcrd Core]])
 AC_INIT([LBRYcrd Core],[_CLIENT_VERSION_MAJOR._CLIENT_VERSION_MINOR._CLIENT_VERSION_REVISION],[https://github.com/lbryio/lbrycrd/issues],[lbrycrd],[https://lbry.com/])
@@ -703,10 +703,6 @@ if test x$TARGET_OS != xwindows; then
 AX_CHECK_COMPILE_FLAG([-fPIC],[PIC_FLAGS="-fPIC"])
 fi
-# All versions of gcc that we commonly use for building are subject to bug
-# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90348. To work around that, set
-# -fstack-reuse=none for all gcc builds. (Only gcc understands this flag)
-AX_CHECK_COMPILE_FLAG([-fstack-reuse=none],[HARDENED_CXXFLAGS="$HARDENED_CXXFLAGS -fstack-reuse=none"])
 if test x$use_hardening != xno; then
 use_hardening=yes

@@ -1,185 +0,0 @@
cmake_minimum_required(VERSION 3.10)
project(lbrycrd)
set(CMAKE_CXX_STANDARD 11)
include(cmake/CPM.cmake)
include(ExternalProject)
set(OPTIONS "" CACHE STRING "lbrycrdd configure options")
set(CPPFLAGS "" CACHE STRING "lbrycrdd compiler options")
set(LDFLAGS "" CACHE STRING "lbrycrdd linker options")
set(DISABLE_TESTS OFF CACHE BOOL "compilation without tests")
set(DISABLE_WALLET OFF CACHE BOOL "compilation without wallet support")
set(DISABLE_BENCH OFF CACHE BOOL "compilation without bench support")
if(NOT ${CPM_USE_LOCAL_PACKAGES})
set(OPTIONS "${OPTIONS} --enable-static --disable-shared")
else()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake")
endif()
set(OPTIONS "--without-gui ${OPTIONS} --with-pic")
if (${DISABLE_TESTS})
set(OPTIONS "${OPTIONS} --disable-tests")
endif()
if (${DISABLE_WALLET})
set(OPTIONS "${OPTIONS} --disable-wallet")
endif()
if (${DISABLE_BENCH})
set(OPTIONS "${OPTIONS} --disable-bench")
endif()
string(TOLOWER ${CMAKE_SYSTEM_NAME}-${CMAKE_SYSTEM_PROCESSOR} ARCH)
CPMAddPackage(
NAME OpenSSL
GITHUB_REPOSITORY openssl/openssl
VERSION 1.0.2
GIT_TAG OpenSSL_1_0_2r
DOWNLOAD_ONLY TRUE
)
if(OpenSSL_ADDED)
ExternalProject_Add(OpenSSL
PREFIX openssl
SOURCE_DIR ${OpenSSL_SOURCE_DIR}
CONFIGURE_COMMAND ${OpenSSL_SOURCE_DIR}/Configure ${ARCH} no-shared no-dso no-engines -fPIC --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} OpenSSL)
ExternalProject_Get_Property(OpenSSL INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
set(OPENSSL_CPPFLAGS "CPPFLAGS=-I${INSTALL_DIR}/include")
set(OPENSSL_LDFLAGS "LDFLAGS=-L${INSTALL_DIR}/lib")
endif(OpenSSL_ADDED)
CPMAddPackage(
NAME Libevent
GITHUB_REPOSITORY libevent/libevent
VERSION 2.1.8
GIT_TAG release-2.1.8-stable
DOWNLOAD_ONLY TRUE
)
if(Libevent_ADDED)
ExternalProject_Add(Libevent
PREFIX libevent
DEPENDS ${DEPENDS}
SOURCE_DIR ${Libevent_SOURCE_DIR}
CONFIGURE_COMMAND ${Libevent_SOURCE_DIR}/autogen.sh
&& ${Libevent_SOURCE_DIR}/configure ${OPENSSL_CPPFLAGS} --enable-cxx --disable-shared --with-pic ${OPENSSL_LDFLAGS} --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} Libevent)
ExternalProject_Get_Property(Libevent INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
endif(Libevent_ADDED)
if(NOT ${DISABLE_WALLET})
CPMAddPackage(
NAME BerkeleyDB
VERSION 4.8.30
URL https://download.oracle.com/berkeley-db/db-4.8.30.NC.zip
URL_HASH SHA256=43ecd76886992ea416fdadc54b7f2b83ef249d9a6964bd07708ccae42d0226ce
DOWNLOAD_ONLY TRUE
)
if(NOT ${BerkeleyDB_VERSION} VERSION_LESS "5.0")
set(OPTIONS "${OPTIONS} --with-incompatible-bdb")
endif()
if(BerkeleyDB_ADDED)
ExternalProject_Add(BerkeleyDB
PREFIX bdb
SOURCE_DIR ${BerkeleyDB_SOURCE_DIR}
PATCH_COMMAND sed -i "s/__atomic_compare_exchange/__atomic_compare_exchange_db/" ${BerkeleyDB_SOURCE_DIR}/dbinc/atomic.h
CONFIGURE_COMMAND ${BerkeleyDB_SOURCE_DIR}/dist/configure --enable-cxx --disable-shared --with-pic --prefix=<INSTALL_DIR>
)
set(DEPENDS ${DEPENDS} BerkeleyDB)
ExternalProject_Get_Property(BerkeleyDB INSTALL_DIR)
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
endif(BerkeleyDB_ADDED)
endif()
set(BOOST_LIBS chrono,filesystem,system,locale,thread)
string(REPLACE "," ";" BOOST_COMPONENTS ${BOOST_LIBS})
if(NOT ${DISABLE_TESTS})
set(BOOST_LIBS ${BOOST_LIBS},test)
set(BOOST_COMPONENTS ${BOOST_COMPONENTS};unit_test_framework)
endif()
CPMAddPackage(
NAME Boost
GITHUB_REPOSITORY boostorg/boost
VERSION 1.64.0
COMPONENTS ${BOOST_COMPONENTS}
GIT_TAG boost-1.69.0
GIT_SUBMODULES libs/* tools/*
DOWNLOAD_ONLY TRUE
)
# if boost is found system wide we expect to be compiled against icu, so we can skip it
if(Boost_ADDED)
CPMAddPackage(
NAME ICU
GITHUB_REPOSITORY unicode-org/icu
VERSION 63.2
GIT_TAG release-63-2
DOWNLOAD_ONLY TRUE
)
if(ICU_ADDED)
ExternalProject_Add(ICU
PREFIX icu
SOURCE_DIR ${ICU_SOURCE_DIR}
CONFIGURE_COMMAND ${ICU_SOURCE_DIR}/icu4c/source/configure --disable-extras --disable-strict --enable-static
--disable-shared --disable-tests --disable-samples --disable-dyload --disable-layoutex CFLAGS=-fPIC CPPFLAGS=-fPIC --prefix=<INSTALL_DIR>
)
set(DEPENDS ${DEPENDS} ICU)
ExternalProject_Get_Property(ICU INSTALL_DIR)
set(ICU_PATH ${INSTALL_DIR})
set(OPTIONS "${OPTIONS} --with-icu=${ICU_PATH}")
set(LDFLAGS "${LDFLAGS} -L${ICU_PATH}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${ICU_PATH}/include")
endif(ICU_ADDED)
ExternalProject_Add(Boost
PREFIX boost
DEPENDS ${DEPENDS}
SOURCE_DIR ${Boost_SOURCE_DIR}
CONFIGURE_COMMAND ${Boost_SOURCE_DIR}/bootstrap.sh --with-icu=${ICU_PATH} --with-libraries=${BOOST_LIBS} && ${Boost_SOURCE_DIR}/b2 headers
BUILD_COMMAND ${Boost_SOURCE_DIR}/b2 install threading=multi -sNO_BZIP2=1 -sNO_ZLIB=1 link=static linkflags="-L${ICU_PATH}/lib -licuio -licuuc -licudata -licui18n" cxxflags=-fPIC boost.locale.iconv=off boost.locale.posix=off boost.locale.icu=on boost.locale.std=off -sICU_PATH=${ICU_PATH} --prefix=<INSTALL_DIR>
INSTALL_COMMAND ""
BUILD_IN_SOURCE 1
)
set(DEPENDS ${DEPENDS} Boost)
ExternalProject_Get_Property(Boost INSTALL_DIR)
set(OPTIONS "${OPTIONS} --with-boost=${INSTALL_DIR}")
set(LDFLAGS "${LDFLAGS} -L${INSTALL_DIR}/lib")
set(CPPFLAGS "${CPPFLAGS} -I${INSTALL_DIR}/include")
set_property(DIRECTORY PROPERTY ADDITIONAL_MAKE_CLEAN_FILES ${Boost_SOURCE_DIR}/bin.v2)
endif(Boost_ADDED)
set(CPPFLAGS "${CPPFLAGS} -Wno-parentheses -Wno-unused-local-typedefs -Wno-deprecated -Wno-implicit-fallthrough -Wno-unused-parameter")
separate_arguments(OPTIONS)
ExternalProject_Add(lbrycrdd
PREFIX lbrycrdd
DEPENDS ${DEPENDS}
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../..
CONFIGURE_COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/../../autogen.sh
&& ${CMAKE_CURRENT_SOURCE_DIR}/../../configure ${OPTIONS} CPPFLAGS=${CPPFLAGS} LDFLAGS=${LDFLAGS} --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
BUILD_ALWAYS 1
)

@@ -1,210 +0,0 @@
# TheLartians/CPM - A simple Git dependency manager
# =================================================
# See https://github.com/TheLartians/CPM for usage and update instructions.
#
# MIT License
# -----------
#[[
Copyright (c) 2019 Lars Melchior
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
]]
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
set(CURRENT_CPM_VERSION 0.11.1)
if(CPM_DIRECTORY)
if(NOT ${CPM_DIRECTORY} MATCHES ${CMAKE_CURRENT_LIST_DIR})
if (${CPM_VERSION} VERSION_LESS ${CURRENT_CPM_VERSION})
CPM_HANDLE_OLD_VERSION(${CURRENT_CPM_VERSION})
endif()
return()
endif()
endif()
set(CPM_VERSION ${CURRENT_CPM_VERSION} CACHE INTERNAL "")
set(CPM_DIRECTORY ${CMAKE_CURRENT_LIST_DIR} CACHE INTERNAL "")
set(CPM_PACKAGES "" CACHE INTERNAL "")
option(CPM_USE_LOCAL_PACKAGES "Use locally installed packages (find_package)" ON)
option(CPM_LOCAL_PACKAGES_ONLY "Use only locally installed packages" OFF)
include(FetchContent)
include(CMakeParseArguments)
# Initialize logging prefix
if(NOT CPM_INDENT)
set(CPM_INDENT "CPM:")
endif()
# The main workhorse of CPM
function(CPMAddPackage)
set(oneValueArgs
NAME
VERSION
GIT_TAG
DOWNLOAD_ONLY
GITHUB_REPOSITORY
GITLAB_REPOSITORY
)
set(multiValueArgs
OPTIONS
COMPONENTS
)
cmake_parse_arguments(CPM_ARGS "" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(${CPM_USE_LOCAL_PACKAGES} OR ${CPM_LOCAL_PACKAGES_ONLY})
find_package(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION} OPTIONAL_COMPONENTS ${CPM_ARGS_COMPONENTS} QUIET)
if(${CPM_ARGS_NAME}_FOUND)
message(STATUS "CPM: adding local package ${CPM_ARGS_NAME}@${${CPM_ARGS_NAME}_VERSION}")
set(${CPM_ARGS_NAME}_VERSION "${${CPM_ARGS_NAME}_VERSION}" PARENT_SCOPE)
return()
endif()
if(${CPM_LOCAL_PACKAGES_ONLY})
message(SEND_ERROR "CPM: ${CPM_ARGS_NAME} not found via find_package(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION})")
endif()
endif()
if (NOT CPM_ARGS_VERSION)
set(CPM_ARGS_VERSION 0)
endif()
if (NOT CPM_ARGS_GIT_TAG)
set(CPM_ARGS_GIT_TAG v${CPM_ARGS_VERSION})
endif()
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_TAG ${CPM_ARGS_GIT_TAG})
if(CPM_ARGS_DOWNLOAD_ONLY)
set(DOWNLOAD_ONLY ${CPM_ARGS_DOWNLOAD_ONLY})
else()
set(DOWNLOAD_ONLY NO)
endif()
if (CPM_ARGS_GITHUB_REPOSITORY)
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_REPOSITORY "https://github.com/${CPM_ARGS_GITHUB_REPOSITORY}.git")
endif()
if (CPM_ARGS_GITLAB_REPOSITORY)
list(APPEND CPM_ARGS_UNPARSED_ARGUMENTS GIT_REPOSITORY "https://gitlab.com/${CPM_ARGS_GITLAB_REPOSITORY}.git")
endif()
if (${CPM_ARGS_NAME} IN_LIST CPM_PACKAGES)
CPM_GET_PACKAGE_VERSION(${CPM_ARGS_NAME})
if(${CPM_PACKAGE_VERSION} VERSION_LESS ${CPM_ARGS_VERSION})
message(WARNING "${CPM_INDENT} requires a newer version of ${CPM_ARGS_NAME} (${CPM_ARGS_VERSION}) than currently included (${CPM_PACKAGE_VERSION}).")
endif()
if (CPM_ARGS_OPTIONS)
foreach(OPTION ${CPM_ARGS_OPTIONS})
CPM_PARSE_OPTION(${OPTION})
if(NOT "${${OPTION_KEY}}" STREQUAL ${OPTION_VALUE})
message(WARNING "${CPM_INDENT} ignoring package option for ${CPM_ARGS_NAME}: ${OPTION_KEY} = ${OPTION_VALUE} (${${OPTION_KEY}})")
endif()
endforeach()
endif()
CPM_FETCH_PACKAGE(${CPM_ARGS_NAME} ${DOWNLOAD_ONLY})
CPMGetProperties(${CPM_ARGS_NAME})
set(${CPM_ARGS_NAME}_VERSION ${CPM_ARGS_VERSION} PARENT_SCOPE)
set(${CPM_ARGS_NAME}_SOURCE_DIR "${${CPM_ARGS_NAME}_SOURCE_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_BINARY_DIR "${${CPM_ARGS_NAME}_BINARY_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_ADDED NO PARENT_SCOPE)
return()
endif()
CPMRegisterPackage(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION})
if (CPM_ARGS_OPTIONS)
foreach(OPTION ${CPM_ARGS_OPTIONS})
CPM_PARSE_OPTION(${OPTION})
set(${OPTION_KEY} ${OPTION_VALUE} CACHE INTERNAL "")
endforeach()
endif()
CPM_DECLARE_PACKAGE(${CPM_ARGS_NAME} ${CPM_ARGS_VERSION} ${CPM_ARGS_GIT_TAG} "${CPM_ARGS_UNPARSED_ARGUMENTS}")
CPM_FETCH_PACKAGE(${CPM_ARGS_NAME} ${DOWNLOAD_ONLY})
CPMGetProperties(${CPM_ARGS_NAME})
set(${CPM_ARGS_NAME}_VERSION ${CPM_ARGS_VERSION} PARENT_SCOPE)
set(${CPM_ARGS_NAME}_SOURCE_DIR "${${CPM_ARGS_NAME}_SOURCE_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_BINARY_DIR "${${CPM_ARGS_NAME}_BINARY_DIR}" PARENT_SCOPE)
set(${CPM_ARGS_NAME}_ADDED YES PARENT_SCOPE)
endfunction()
function (CPM_DECLARE_PACKAGE PACKAGE VERSION GIT_TAG)
message(STATUS "${CPM_INDENT} adding package ${PACKAGE}@${VERSION} (${GIT_TAG})")
FetchContent_Declare(
${PACKAGE}
${ARGN}
)
endfunction()
function (CPM_FETCH_PACKAGE PACKAGE DOWNLOAD_ONLY)
set(CPM_OLD_INDENT "${CPM_INDENT}")
set(CPM_INDENT "${CPM_INDENT} ${PACKAGE}:")
if(${DOWNLOAD_ONLY})
if(NOT "${PACKAGE}_POPULATED")
FetchContent_Populate(${PACKAGE})
endif()
else()
FetchContent_MakeAvailable(${PACKAGE})
endif()
set(CPM_INDENT "${CPM_OLD_INDENT}")
endfunction()
function (CPMGetProperties PACKAGE)
FetchContent_GetProperties(${PACKAGE})
string(TOLOWER ${PACKAGE} lpackage)
set(${PACKAGE}_SOURCE_DIR "${${lpackage}_SOURCE_DIR}" PARENT_SCOPE)
set(${PACKAGE}_BINARY_DIR "${${lpackage}_BINARY_DIR}" PARENT_SCOPE)
endfunction()
function(CPMRegisterPackage PACKAGE VERSION)
list(APPEND CPM_PACKAGES ${PACKAGE})
set(CPM_PACKAGES ${CPM_PACKAGES} CACHE INTERNAL "")
set("CPM_PACKAGE_${PACKAGE}_VERSION" ${VERSION} CACHE INTERNAL "")
endfunction()
function(CPM_GET_PACKAGE_VERSION PACKAGE)
set(CPM_PACKAGE_VERSION "${CPM_PACKAGE_${PACKAGE}_VERSION}" PARENT_SCOPE)
endfunction()
function(CPM_PARSE_OPTION OPTION)
string(REGEX MATCH "^[^ ]+" OPTION_KEY ${OPTION})
string(LENGTH ${OPTION_KEY} OPTION_KEY_LENGTH)
math(EXPR OPTION_KEY_LENGTH "${OPTION_KEY_LENGTH}+1")
string(SUBSTRING ${OPTION} "${OPTION_KEY_LENGTH}" "-1" OPTION_VALUE)
set(OPTION_KEY "${OPTION_KEY}" PARENT_SCOPE)
set(OPTION_VALUE "${OPTION_VALUE}" PARENT_SCOPE)
endfunction()
function (CPM_HANDLE_OLD_VERSION NEW_CPM_VERSION)
message(AUTHOR_WARNING "${CPM_INDENT} \
A dependency is using a more recent CPM (${NEW_CPM_VERSION}) than the current project (${CPM_VERSION}). \
It is recommended to upgrade CPM to the most recent version. \
See https://github.com/TheLartians/CPM for more information."
)
endfunction()

@@ -1,171 +0,0 @@
# Author: sum01 <sum01@protonmail.com>
# Git: https://github.com/sum01/FindBerkeleyDB
# Read the README.md for the full info.
# NOTE: If Berkeley DB ever gets a Pkg-config ".pc" file, add pkg_check_modules() here
# Checks if environment paths are empty, set them if they aren't
if(NOT "$ENV{BERKELEYDB_ROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{BERKELEYDB_ROOT}")
elseif(NOT "$ENV{Berkeleydb_ROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{Berkeleydb_ROOT}")
elseif(NOT "$ENV{BERKELEYDBROOT}" STREQUAL "")
set(_BERKELEYDB_HINTS "$ENV{BERKELEYDBROOT}")
else()
# Set just in case, as it's used regardless if it's empty or not
set(_BERKELEYDB_HINTS "")
endif()
# Allow user to pass a path instead of guessing
if(BerkeleyDB_ROOT_DIR)
set(_BERKELEYDB_PATHS "${BerkeleyDB_ROOT_DIR}")
elseif(CMAKE_SYSTEM_NAME MATCHES ".*[wW]indows.*")
# MATCHES is used to work on any devies with windows in the name
# Shameless copy-paste from FindOpenSSL.cmake v3.8
file(TO_CMAKE_PATH "$ENV{PROGRAMFILES}" _programfiles)
list(APPEND _BERKELEYDB_HINTS "${_programfiles}")
# There's actually production release and version numbers in the file path.
# For example, if they're on v6.2.32: C:/Program Files/Oracle/Berkeley DB 12cR1 6.2.32/
# But this still works to find it, so I'm guessing it can accept partial path matches.
foreach(_TARGET_BERKELEYDB_PATH "Oracle/Berkeley DB" "Berkeley DB")
list(APPEND _BERKELEYDB_PATHS
"${_programfiles}/${_TARGET_BERKELEYDB_PATH}"
"C:/Program Files (x86)/${_TARGET_BERKELEYDB_PATH}"
"C:/Program Files/${_TARGET_BERKELEYDB_PATH}"
"C:/${_TARGET_BERKELEYDB_PATH}"
)
endforeach()
else()
# Paths for anything other than Windows
# Cellar/berkeley-db is for macOS from homebrew installation
list(APPEND _BERKELEYDB_PATHS
"/usr"
"/usr/local"
"/usr/local/Cellar/berkeley-db"
"/opt"
"/opt/local"
)
endif()
# Find includes path
find_path(BerkeleyDB_INCLUDE_DIRS
NAMES "db.h"
HINTS ${_BERKELEYDB_HINTS}
PATH_SUFFIXES "include" "includes"
PATHS ${_BERKELEYDB_PATHS}
)
# Checks if the version file exists, save the version file to a var, and fail if there's no version file
if(BerkeleyDB_INCLUDE_DIRS)
# Read the version file db.h into a variable
file(READ "${BerkeleyDB_INCLUDE_DIRS}/db.h" _BERKELEYDB_DB_HEADER)
# Parse the DB version into variables to be used in the lib names
string(REGEX REPLACE ".*DB_VERSION_MAJOR ([0-9]+).*" "\\1" BerkeleyDB_VERSION_MAJOR "${_BERKELEYDB_DB_HEADER}")
string(REGEX REPLACE ".*DB_VERSION_MINOR ([0-9]+).*" "\\1" BerkeleyDB_VERSION_MINOR "${_BERKELEYDB_DB_HEADER}")
# Patch version example on non-crypto installs: x.x.xNC
string(REGEX REPLACE ".*DB_VERSION_PATCH ([0-9]+(NC)?).*" "\\1" BerkeleyDB_VERSION_PATCH "${_BERKELEYDB_DB_HEADER}")
else()
if(BerkeleyDB_FIND_REQUIRED)
# If the find_package(BerkeleyDB REQUIRED) was used, fail since we couldn't find the header
message(FATAL_ERROR "Failed to find Berkeley DB's header file \"db.h\"! Try setting \"BerkeleyDB_ROOT_DIR\" when initiating Cmake.")
elseif(NOT BerkeleyDB_FIND_QUIETLY)
message(WARNING "Failed to find Berkeley DB's header file \"db.h\"! Try setting \"BerkeleyDB_ROOT_DIR\" when initiating Cmake.")
endif()
# Set some garbage values to the versions since we didn't find a file to read
set(BerkeleyDB_VERSION_MAJOR "0")
set(BerkeleyDB_VERSION_MINOR "0")
set(BerkeleyDB_VERSION_PATCH "0")
endif()
# The actual returned/output version variable (the others can be used if needed)
set(BerkeleyDB_VERSION "${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}.${BerkeleyDB_VERSION_PATCH}")
# Finds the target library for berkeley db, since they all follow the same naming conventions
macro(_berkeleydb_get_lib _BERKELEYDB_OUTPUT_VARNAME _TARGET_BERKELEYDB_LIB)
# Different systems sometimes have a version in the lib name...
# and some have a dash or underscore before the versions.
# CMake recommends to put unversioned names before versioned names
find_library(${_BERKELEYDB_OUTPUT_VARNAME}
NAMES
"${_TARGET_BERKELEYDB_LIB}"
"lib${_TARGET_BERKELEYDB_LIB}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}.${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}${BerkeleyDB_VERSION_MINOR}"
"lib${_TARGET_BERKELEYDB_LIB}${BerkeleyDB_VERSION_MAJOR}"
"lib${_TARGET_BERKELEYDB_LIB}-${BerkeleyDB_VERSION_MAJOR}"
"lib${_TARGET_BERKELEYDB_LIB}_${BerkeleyDB_VERSION_MAJOR}"
HINTS ${_BERKELEYDB_HINTS}
PATH_SUFFIXES
"lib"
"lib64"
"libs"
"libs64"
PATHS ${_BERKELEYDB_PATHS}
)
# If the library was found, add it to our list of libraries
if(${_BERKELEYDB_OUTPUT_VARNAME})
# If found, append to our libraries variable
# The ${{}} is because the first expands to target the real variable, the second expands the variable's contents...
# and the real variable's contents is the path to the lib. Thus, it appends the path of the lib to BerkeleyDB_LIBRARIES.
list(APPEND BerkeleyDB_LIBRARIES "${${_BERKELEYDB_OUTPUT_VARNAME}}")
endif()
endmacro()
# Find and set the paths of the specific library to the variable
_berkeleydb_get_lib(BerkeleyDB_LIBRARY "db")
# NOTE: Windows doesn't have a db_cxx lib, but instead compiles the cxx code into the "db" lib
_berkeleydb_get_lib(BerkeleyDB_Cxx_LIBRARY "db_cxx")
# NOTE: I don't think Linux/Unix gets an SQL lib
_berkeleydb_get_lib(BerkeleyDB_Sql_LIBRARY "db_sql")
_berkeleydb_get_lib(BerkeleyDB_Stl_LIBRARY "db_stl")
# Needed for find_package_handle_standard_args()
include(FindPackageHandleStandardArgs)
# Fails if required vars aren't found, or if the version doesn't meet specifications.
find_package_handle_standard_args(BerkeleyDB
FOUND_VAR BerkeleyDB_FOUND
REQUIRED_VARS
BerkeleyDB_INCLUDE_DIRS
BerkeleyDB_LIBRARY
BerkeleyDB_LIBRARIES
VERSION_VAR BerkeleyDB_VERSION
)
# Only show the variables in the GUI if they click "advanced".
# Does nothing when using the CLI
mark_as_advanced(FORCE
BerkeleyDB_FOUND
BerkeleyDB_INCLUDE_DIRS
BerkeleyDB_LIBRARIES
BerkeleyDB_VERSION
BerkeleyDB_VERSION_MAJOR
BerkeleyDB_VERSION_MINOR
BerkeleyDB_VERSION_PATCH
BerkeleyDB_LIBRARY
BerkeleyDB_Cxx_LIBRARY
BerkeleyDB_Stl_LIBRARY
BerkeleyDB_Sql_LIBRARY
)
# Create an imported lib for easy linking by external projects
if(BerkeleyDB_FOUND AND BerkeleyDB_LIBRARIES AND NOT TARGET Oracle::BerkeleyDB)
add_library(Oracle::BerkeleyDB UNKNOWN IMPORTED)
set_target_properties(Oracle::BerkeleyDB PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${BerkeleyDB_INCLUDE_DIRS}"
IMPORTED_LOCATION "${BerkeleyDB_LIBRARY}"
INTERFACE_LINK_LIBRARIES "${BerkeleyDB_LIBRARIES}"
)
endif()
include(FindPackageMessage)
# A message that tells the user what includes/libs were found, and obeys the QUIET command.
find_package_message(BerkeleyDB
"Found BerkeleyDB libraries: ${BerkeleyDB_LIBRARIES}"
"[${BerkeleyDB_LIBRARIES}[${BerkeleyDB_INCLUDE_DIRS}]]"
)

@@ -1,97 +0,0 @@
# - Try to find libevent
#.rst
# FindLibevent
# ------------
#
# Find Libevent include directories and libraries. Invoke as::
#
# find_package(Libevent
# [version] [EXACT] # Minimum or exact version
# [REQUIRED] # Fail if Libevent is not found
# [COMPONENT <C>...]) # Libraries to look for
#
# Valid components are one or more of:: libevent core extra pthreads openssl.
# Note that 'libevent' contains both core and extra. You must specify one of
# them for the other components.
#
# This module will define the following variables::
#
# LIBEVENT_FOUND - True if headers and requested libraries were found
# LIBEVENT_INCLUDE_DIRS - Libevent include directories
# LIBEVENT_LIBRARIES - Libevent libraries to be linked
# LIBEVENT_<C>_FOUND - Component <C> was found (<C> is uppercase)
# LIBEVENT_<C>_LIBRARY - Library to be linked for Libevent component <C>.
find_package(PkgConfig QUIET)
pkg_check_modules(PC_LIBEVENT QUIET libevent)
# Look for the Libevent 2.0 or 1.4 headers
find_path(LIBEVENT_INCLUDE_DIR
NAMES
event2/event-config.h
event-config.h
HINTS
${PC_LIBEVENT_INCLUDE_DIRS}
)
if(LIBEVENT_INCLUDE_DIR)
set(_version_regex "^#define[ \t]+_EVENT_VERSION[ \t]+\"([^\"]+)\".*")
if(EXISTS "${LIBEVENT_INCLUDE_DIR}/event2/event-config.h")
# Libevent 2.0
file(STRINGS "${LIBEVENT_INCLUDE_DIR}/event2/event-config.h"
LIBEVENT_VERSION REGEX "${_version_regex}")
if("${LIBEVENT_VERSION}" STREQUAL "")
set(LIBEVENT_VERSION ${PC_LIBEVENT_VERSION})
endif()
else()
# Libevent 1.4
file(STRINGS "${LIBEVENT_INCLUDE_DIR}/event-config.h"
LIBEVENT_VERSION REGEX "${_version_regex}")
endif()
string(REGEX REPLACE "${_version_regex}" "\\1"
LIBEVENT_VERSION "${LIBEVENT_VERSION}")
unset(_version_regex)
endif()
set(_LIBEVENT_REQUIRED_VARS)
foreach(COMPONENT ${Libevent_FIND_COMPONENTS})
set(_LIBEVENT_LIBNAME libevent)
# Note: compare two variables to avoid a CMP0054 policy warning
if(COMPONENT STREQUAL _LIBEVENT_LIBNAME)
set(_LIBEVENT_LIBNAME event)
else()
set(_LIBEVENT_LIBNAME "event_${COMPONENT}")
endif()
string(TOUPPER "${COMPONENT}" COMPONENT_UPPER)
find_library(LIBEVENT_${COMPONENT_UPPER}_LIBRARY
NAMES ${_LIBEVENT_LIBNAME}
HINTS ${PC_LIBEVENT_LIBRARY_DIRS}
)
if(LIBEVENT_${COMPONENT_UPPER}_LIBRARY)
set(Libevent_${COMPONENT}_FOUND 1)
endif()
list(APPEND _LIBEVENT_REQUIRED_VARS LIBEVENT_${COMPONENT_UPPER}_LIBRARY)
endforeach()
unset(_LIBEVENT_LIBNAME)
include(FindPackageHandleStandardArgs)
# handle the QUIETLY and REQUIRED arguments and set LIBEVENT_FOUND to TRUE
# if all listed variables are TRUE and the requested version matches.
find_package_handle_standard_args(Libevent REQUIRED_VARS
${_LIBEVENT_REQUIRED_VARS}
LIBEVENT_INCLUDE_DIR
VERSION_VAR LIBEVENT_VERSION
HANDLE_COMPONENTS)
if(LIBEVENT_FOUND)
set(LIBEVENT_INCLUDE_DIRS ${LIBEVENT_INCLUDE_DIR})
set(LIBEVENT_LIBRARIES)
foreach(COMPONENT ${Libevent_FIND_COMPONENTS})
string(TOUPPER "${COMPONENT}" COMPONENT_UPPER)
list(APPEND LIBEVENT_LIBRARIES ${LIBEVENT_${COMPONENT_UPPER}_LIBRARY})
set(LIBEVENT_${COMPONENT_UPPER}_FOUND ${Libevent_${COMPONENT}_FOUND})
endforeach()
endif()
mark_as_advanced(LIBEVENT_INCLUDE_DIR ${_LIBEVENT_REQUIRED_VARS})
unset(_LIBEVENT_REQUIRED_VARS)

@@ -59,7 +59,6 @@ def get_obj_from_dirty_text(full_object: str):
 last_name = property_name
 elif len(left) > 1:
 match = re.match(r'^(\[)?"(?P<name>\w.*?)"(\])?.*', left)
-if match is not None:
 last_name = match.group('name')
 if match.group(1) is not None and match.group(3) is not None:
 left = '['
@@ -97,15 +96,7 @@ def get_obj_from_dirty_text(full_object: str):
 ret = obj
 if ret is not None:
 if i + 1 < len(lines) - 1:
-print('WARNING: unparsable data...', file=sys.stderr)
+print('Ignoring this data (below the parsed object): ' + "\n".join(lines[i+1:]), file=sys.stderr)
-lines = lines[i+1:]
-if not lines[0]:
-lines = lines[1:]
-nret = get_obj_from_dirty_text("\n".join(lines))
-if not nret:
-nret = get_obj_from_dirty_text("\n".join(lines[1:]))
-if nret:
-ret.update(nret)
 return ret
 except Exception as e:
 print('Exception: ' + str(e), file=sys.stderr)
@@ -122,7 +113,7 @@ def get_type(arg_type: str, full_line: str):
 arg_type = arg_type.lower()
 if 'array' in arg_type:
 return 'array', required, None
-if 'numeric' in arg_type or 'number' in arg_type:
+if 'numeric' in arg_type:
 return 'number', required, None
 if 'bool' in arg_type:
 return 'boolean', required, None
@@ -132,11 +123,6 @@ def get_type(arg_type: str, full_line: str):
 properties = get_obj_from_dirty_text(full_line) if full_line is not None else None
 return 'object', required, properties
-if arg_type.startswith('optional'):
-return 'optional', required, None
-if arg_type.startswith('json'):
-return 'json', required, None
 print('Unable to derive type from: ' + arg_type, file=sys.stderr)
 return None, False, None

@@ -13,22 +13,17 @@ def get_type(arg_type, full_line):
 if arg_type is None:
 return 'string'
-arg_type = arg_type.lower().split(',')[0].strip()
+arg_type = arg_type.lower()
 if 'numeric' in arg_type:
 return 'number'
 if 'bool' in arg_type:
 return 'boolean'
-if 'array' in arg_type:
+if 'string' in arg_type:
-return 'array'
+return 'string'
 if 'object' in arg_type:
 return 'object'
-supported_types = ['number', 'string', 'object', 'array', 'optional']
+raise Exception('Not implemented: ' + arg_type)
-if arg_type in supported_types:
-return arg_type
-print("get_type: WARNING", arg_type, "is not supported type", file=sys.stderr)
-return arg_type
 def parse_params(args):
@@ -39,7 +34,7 @@ def parse_params(args):
 continue
 arg_parsed = re_argline.fullmatch(line)
 if arg_parsed is None:
-continue
+raise Exception("Unparsable argument: " + line)
 arg_name, arg_type, arg_desc = arg_parsed.group('name', 'type', 'desc')
 if not arg_type:
 raise Exception('Not implemented: ' + arg_type)

File diff suppressed because it is too large

File diff suppressed because it is too large Load diff

View file

@ -36,13 +36,13 @@ if [ -z "${CODESIGN_ALLOCATE}" ]; then
fi fi
find ${TEMPDIR} -name "*.sign" | while read i; do find ${TEMPDIR} -name "*.sign" | while read i; do
SIZE=$(stat -c %s "${i}") SIZE=`stat -c %s "${i}"`
TARGET_FILE="$(echo "${i}" | sed 's/\.sign$//')" TARGET_FILE="`echo "${i}" | sed 's/\.sign$//'`"
echo "Allocating space for the signature of size ${SIZE} in ${TARGET_FILE}" echo "Allocating space for the signature of size ${SIZE} in ${TARGET_FILE}"
${CODESIGN_ALLOCATE} -i "${TARGET_FILE}" -a ${ARCH} ${SIZE} -o "${i}.tmp" ${CODESIGN_ALLOCATE} -i "${TARGET_FILE}" -a ${ARCH} ${SIZE} -o "${i}.tmp"
OFFSET=$(${PAGESTUFF} "${i}.tmp" -p | tail -2 | grep offset | sed 's/[^0-9]*//g') OFFSET=`${PAGESTUFF} "${i}.tmp" -p | tail -2 | grep offset | sed 's/[^0-9]*//g'`
if [ -z ${QUIET} ]; then if [ -z ${QUIET} ]; then
echo "Attaching signature at offset ${OFFSET}" echo "Attaching signature at offset ${OFFSET}"
fi fi

View file

@ -27,19 +27,19 @@ ${CODESIGN} -f --file-list ${TEMPLIST} "$@" "${BUNDLE}"
grep -v CodeResources < "${TEMPLIST}" | while read i; do grep -v CodeResources < "${TEMPLIST}" | while read i; do
TARGETFILE="${BUNDLE}/`echo "${i}" | sed "s|.*${BUNDLE}/||"`" TARGETFILE="${BUNDLE}/`echo "${i}" | sed "s|.*${BUNDLE}/||"`"
SIZE=$(pagestuff "$i" -p | tail -2 | grep size | sed 's/[^0-9]*//g') SIZE=`pagestuff "$i" -p | tail -2 | grep size | sed 's/[^0-9]*//g'`
OFFSET=$(pagestuff "$i" -p | tail -2 | grep offset | sed 's/[^0-9]*//g') OFFSET=`pagestuff "$i" -p | tail -2 | grep offset | sed 's/[^0-9]*//g'`
SIGNFILE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}.sign" SIGNFILE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}.sign"
DIRNAME="$(dirname "${SIGNFILE}")" DIRNAME="`dirname "${SIGNFILE}"`"
mkdir -p "${DIRNAME}" mkdir -p "${DIRNAME}"
echo "Adding detached signature for: ${TARGETFILE}. Size: ${SIZE}. Offset: ${OFFSET}" echo "Adding detached signature for: ${TARGETFILE}. Size: ${SIZE}. Offset: ${OFFSET}"
dd if="$i" of="${SIGNFILE}" bs=1 skip=${OFFSET} count=${SIZE} 2>/dev/null dd if="$i" of="${SIGNFILE}" bs=1 skip=${OFFSET} count=${SIZE} 2>/dev/null
done done
grep CodeResources < "${TEMPLIST}" | while read i; do grep CodeResources < "${TEMPLIST}" | while read i; do
TARGETFILE="${BUNDLE}/$(echo "${i}" | sed "s|.*${BUNDLE}/||")" TARGETFILE="${BUNDLE}/`echo "${i}" | sed "s|.*${BUNDLE}/||"`"
RESOURCE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}" RESOURCE="${TEMPDIR}/${OUTROOT}/${TARGETFILE}"
DIRNAME="$(dirname "${RESOURCE}")" DIRNAME="`dirname "${RESOURCE}"`"
mkdir -p "${DIRNAME}" mkdir -p "${DIRNAME}"
echo "Adding resource for: \"${TARGETFILE}\"" echo "Adding resource for: \"${TARGETFILE}\""
cp "${i}" "${RESOURCE}" cp "${i}" "${RESOURCE}"

View file

@ -1,57 +0,0 @@
## Stratum Server Instructions
In simple terms, stratum is a protocol for distributing crypto mining work to multiple miners. Mining pools typically run a stratum endpoint that the various miners communicate with.
Please refer to other web sources for more information about mining pools or the stratum protocol.
When mining LBC, you can solo mine directly to an instance of a full node (using the node's wallet). Or you can mine as part of a pool.
You can host your own pool or use one of the many hosted LBC pools. See https://miningpoolstats.stream/lbry
This document refers to Yiimp, a derivative of Yaamp, as found here: https://github.com/tpruvot/yiimp.git .
Please refer to the instructions there as well. Yiimp has supported LBRY mining for several years.
Yiimp consists of two pieces: the web GUI for pool management (written in PHP) and the Stratum server (written in C++). The two communicate via polling a MySQL database (or MariaDB).
The web GUI and configuration of the pooling rewards, fees, etc. are out of scope here.
To help you with running Yiimp, we have created two docker images: one for the DB and one for the Yiimp Stratum Server. (See the subfolders.)
Use of the Docker images is optional; you can refer to other Yiimp and MySQL documentation for running it without Docker.
If you are using your own database instance, you will need to import the Yiimp SQL files to establish the yaamp database.
See https://github.com/tpruvot/yiimp/tree/next/sql .
### Sample Usage Steps:
#### 1. Run the full lbrycrd node:
```
./lbrycrdd -testnet -rpcuser=ruser -rpcpassword=rpswd -deprecatedrpc=validateaddress -deprecatedrpc=accounts -daemon
```
The included deprecated RPCs are required for compatibility with Yiimp.
It will need to be caught up to the current block before it is ready.
Remove `-testnet` for the real deal.
#### 2. Run and initialize the database:
```
docker run -d -e MYSQL_ROOT_PASSWORD=patofpaq -e MYSQL_DATABASE=yaamp --network host --name db lbry/yiimp_db
docker exec -it db mysql -uroot -ppatofpaq
use yaamp;
delete from coins;
insert into coins(name, symbol, symbol2, algo, enable, auto_ready, rpcuser, rpcpasswd, rpchost, rpcport, rpccurl, rpcencoding, hasgetinfo, hassubmitblock, usememorypool, usesegwit, auxpow)
values('Local LBRY Instance', 'LBC', 'LBC', 'lbry', 1, 1, 'ruser', 'rpswd', '127.0.0.1', 19245, 1, 'utf-8', 0, 1, 0, 0, 0);
exit
```
Use port 19245 for testnet, port 9245 for main. Set usesegwit to 1 after the segwit fork is enabled on December 11, 2019.
#### 3. Run the stratum server:
```
docker run --network host -d lbry/yiimp_stratum
```
Alternatively, to get more output or see how it's called directly:
```
docker run --network host -it lbry/yiimp_stratum bash
cat config/lbry.conf
./stratum config/lbry
```
When testing with an ASIC you may need to modify the TCP server address in the lbry.conf file to an external IP address.
#### 4. Connect sgminer to it:
```
sgminer -k lbry -o stratum+tcp://127.0.0.1:3334/ -D -T -O mn824Su1wX7ip8WcNYzXwwWqvBvkeWGRo6:x
```
The username there is the account to receive payments from the pool. The password is unused. Tested with https://github.com/lbryio/sgminer-gm.
You can use whatever miner you prefer.
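For a quick connectivity check against the stratum port from step 3, a minimal client only needs to speak newline-delimited JSON-RPC. The sketch below is illustrative, not a miner: the host, port 3334 and the payout address come from the examples above, and the exact method set and reply format depend on the yiimp build.

```
import json
import socket

HOST, PORT = '127.0.0.1', 3334                   # stratum endpoint from step 3
ADDRESS = 'mn824Su1wX7ip8WcNYzXwwWqvBvkeWGRo6'   # payout address, as in step 4

def send(sock, req_id, method, params):
    # Stratum messages are single-line JSON objects terminated by '\n'.
    payload = json.dumps({'id': req_id, 'method': method, 'params': params}) + '\n'
    sock.sendall(payload.encode())

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    send(sock, 1, 'mining.subscribe', ['example-client/0.1'])
    send(sock, 2, 'mining.authorize', [ADDRESS, 'x'])
    for line in sock.makefile():                 # subscription details, difficulty, jobs
        print(line.strip())
```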

View file

@ -1,34 +0,0 @@
FROM mariadb:10.1-bionic
ARG REPOSITORY=https://github.com/tpruvot/yiimp.git
ENV BUILD_DEPS \
ca-certificates \
git
COPY init-db.sh /docker-entrypoint-initdb.d/
RUN apt-get update \
&& apt-get install -y --no-install-recommends ${BUILD_DEPS} \
&& git clone --progress ${REPOSITORY} ~/yiimp \
&& mkdir /tmp/sql \
&& mv ~/yiimp/sql/2016-04-03-yaamp.sql.gz /tmp/sql/0000-00-00-initial.sql.gz \
&& cp ~/yiimp/sql/*.sql /tmp/sql \
&& apt-get purge -y --auto-remove ${BUILD_DEPS} \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf ~/yiimp
EXPOSE 3306
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
decription="yiimp_db" \
version="1.0" \
org.label-schema.name="yiimp_db" \
org.label-schema.description="Use this to run a compatible MariaDB for yiimp's stratum server" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/yiimp_db yiimp_db"

View file

@ -1,10 +0,0 @@
#!/bin/bash
for f in /tmp/sql/*; do
case "$f" in
*.sql) echo "$0: running $f"; "${mysql[@]}" --force < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done

View file

@ -1,54 +0,0 @@
FROM alpine:3.7
ARG REPOSITORY=https://github.com/tpruvot/yiimp.git
ENV BUILD_DEPS \
build-base \
git
ENV RUN_DEPS \
curl-dev \
gmp-dev \
mariadb-dev \
libssh2-dev \
curl
RUN apk update \
&& apk add --no-cache ${BUILD_DEPS} \
&& apk add --no-cache ${RUN_DEPS} \
&& git clone --progress ${REPOSITORY} ~/yiimp \
&& sed -i 's/ulong/uint64_t/g' ~/yiimp/stratum/algos/rainforest.c \
&& find ~/yiimp -name '*akefile' -exec sed -i 's/-march=native//g' {} + \
&& make -C ~/yiimp/stratum/iniparser \
&& make -C ~/yiimp/stratum \
&& mkdir /var/stratum /var/stratum/config \
&& cp ~/yiimp/stratum/run.sh /var/stratum \
&& cp ~/yiimp/stratum/config/run.sh /var/stratum/config \
&& cp ~/yiimp/stratum/stratum /var/stratum \
&& cp ~/yiimp/stratum/config.sample/lbry.conf /var/stratum/config \
&& sed -i 's/yaamp.com/127.0.0.1/g' /var/stratum/config/lbry.conf \
&& sed -i 's/yaampdb/127.0.0.1/g' /var/stratum/config/lbry.conf \
&& rm -rf ~/yiimp \
&& apk del ${BUILD_DEPS} \
&& rm -rf /var/cache/apk/*
RUN apk add --no-cache bash
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
decription="yiimp_stratum" \
version="1.0" \
org.label-schema.name="yiimp_stratum" \
org.label-schema.description="Use this to run yiimp's stratum server in lbry mode" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/yiimp_stratum yiimp_stratum"
WORKDIR /var/stratum
CMD ["./stratum", "config/lbry"]
EXPOSE 3334

View file

@ -23,7 +23,7 @@ TIMESERVER=http://timestamp.comodoca.com
CERTFILE="win-codesign.cert" CERTFILE="win-codesign.cert"
mkdir -p "${OUTSUBDIR}" mkdir -p "${OUTSUBDIR}"
basename -a $(ls -1 "${SRCDIR}"/*-unsigned.exe) | while read UNSIGNED; do basename -a `ls -1 "${SRCDIR}"/*-unsigned.exe` | while read UNSIGNED; do
echo Signing "${UNSIGNED}" echo Signing "${UNSIGNED}"
"${OSSLSIGNCODE}" sign -certs "${CERTFILE}" -t "${TIMESERVER}" -in "${SRCDIR}/${UNSIGNED}" -out "${WORKDIR}/${UNSIGNED}" "$@" "${OSSLSIGNCODE}" sign -certs "${CERTFILE}" -t "${TIMESERVER}" -in "${SRCDIR}/${UNSIGNED}" -out "${WORKDIR}/${UNSIGNED}" "$@"
"${OSSLSIGNCODE}" extract-signature -pem -in "${WORKDIR}/${UNSIGNED}" -out "${OUTSUBDIR}/${UNSIGNED}.pem" && rm "${WORKDIR}/${UNSIGNED}" "${OSSLSIGNCODE}" extract-signature -pem -in "${WORKDIR}/${UNSIGNED}" -out "${OUTSUBDIR}/${UNSIGNED}.pem" && rm "${WORKDIR}/${UNSIGNED}"

View file

@ -131,7 +131,7 @@ $(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_
-e 's|@build_os@|$(build_os)|' \ -e 's|@build_os@|$(build_os)|' \
-e 's|@host_os@|$(host_os)|' \ -e 's|@host_os@|$(host_os)|' \
-e 's|@CFLAGS@|$(strip $(host_CFLAGS) $(host_$(release_type)_CFLAGS))|' \ -e 's|@CFLAGS@|$(strip $(host_CFLAGS) $(host_$(release_type)_CFLAGS))|' \
-e 's|@CXXFLAGS@|$(strip -pipe $(host_$(release_type)_CXXFLAGS))|' \ -e 's|@CXXFLAGS@|$(strip $(host_CXXFLAGS) $(host_$(release_type)_CXXFLAGS))|' \
-e 's|@CPPFLAGS@|$(strip $(host_CPPFLAGS) $(host_$(release_type)_CPPFLAGS))|' \ -e 's|@CPPFLAGS@|$(strip $(host_CPPFLAGS) $(host_$(release_type)_CPPFLAGS))|' \
-e 's|@LDFLAGS@|$(strip $(host_LDFLAGS) $(host_$(release_type)_LDFLAGS))|' \ -e 's|@LDFLAGS@|$(strip $(host_LDFLAGS) $(host_$(release_type)_LDFLAGS))|' \
-e 's|@allow_host_packages@|$(ALLOW_HOST_PACKAGES)|' \ -e 's|@allow_host_packages@|$(ALLOW_HOST_PACKAGES)|' \

File diff suppressed because it is too large

View file

@ -7,13 +7,12 @@ darwin_CC=clang -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroo
darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -stdlib=libc++ -B $(host_prefix)/native/bin darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -stdlib=libc++ -B $(host_prefix)/native/bin
darwin_CFLAGS=-pipe darwin_CFLAGS=-pipe
darwin_CXXFLAGS=$(darwin_CFLAGS) -std=c++11 darwin_CXXFLAGS=$(darwin_CFLAGS)
darwin_release_CFLAGS=-O2 -g darwin_release_CFLAGS=-O2
darwin_release_CXXFLAGS=$(darwin_release_CFLAGS) darwin_release_CXXFLAGS=$(darwin_release_CFLAGS)
darwin_debug_CFLAGS=-Og -g darwin_debug_CFLAGS=-Og
darwin_debug_CXXFLAGS=-O0 -g darwin_debug_CXXFLAGS=$(darwin_debug_CFLAGS)
darwin_native_toolchain=native_cctools darwin_native_toolchain=native_cctools

View file

@ -1,14 +1,11 @@
linux_CFLAGS=-pipe linux_CFLAGS=-pipe
linux_CXXFLAGS=$(linux_CFLAGS) -std=c++11 linux_CXXFLAGS=$(linux_CFLAGS)
linux_release_CFLAGS=-O3 -g linux_release_CFLAGS=-O2
ifeq (1,$(shell ldd --version | head -1 | awk '{print $$NF < 2.28}'))
linux_release_CFLAGS+= -include $(BASEDIR)/glibc_version_header/force_link_glibc_2.19.h
endif
linux_release_CXXFLAGS=$(linux_release_CFLAGS) linux_release_CXXFLAGS=$(linux_release_CFLAGS)
linux_debug_CFLAGS=-O1 -g linux_debug_CFLAGS=-Og
linux_debug_CXXFLAGS=-O0 -g linux_debug_CXXFLAGS=$(linux_debug_CFLAGS)
linux_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC linux_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC

View file

@ -1,11 +1,10 @@
mingw32_CFLAGS=-pipe mingw32_CFLAGS=-pipe
mingw32_CXXFLAGS=$(mingw32_CFLAGS) -std=c++11 mingw32_CXXFLAGS=$(mingw32_CFLAGS)
mingw32_release_CFLAGS=-O2 -g mingw32_release_CFLAGS=-O2
mingw32_release_CXXFLAGS=$(mingw32_release_CFLAGS) mingw32_release_CXXFLAGS=$(mingw32_release_CFLAGS)
mingw32_debug_CFLAGS=-O1 -g mingw32_debug_CFLAGS=-O1
mingw32_debug_CXXFLAGS=-O0 -g mingw32_debug_CXXFLAGS=$(mingw32_debug_CFLAGS)
mingw32_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC mingw32_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC

View file

@ -1,6 +1,6 @@
package=bdb package=bdb
$(package)_version=4.8.30 $(package)_version=4.8.30
$(package)_download_path=https://download.oracle.com/berkeley-db $(package)_download_path=http://download.oracle.com/berkeley-db
$(package)_file_name=db-$($(package)_version).NC.tar.gz $(package)_file_name=db-$($(package)_version).NC.tar.gz
$(package)_sha256_hash=12edc0df75bf9abd7f82f821795bcee50f42cb2e5f76a6a281b85732798364ef $(package)_sha256_hash=12edc0df75bf9abd7f82f821795bcee50f42cb2e5f76a6a281b85732798364ef
$(package)_build_subdir=build_unix $(package)_build_subdir=build_unix
@ -9,7 +9,7 @@ define $(package)_set_vars
$(package)_config_opts=--disable-shared --enable-cxx --disable-replication $(package)_config_opts=--disable-shared --enable-cxx --disable-replication
$(package)_config_opts_mingw32=--enable-mingw $(package)_config_opts_mingw32=--enable-mingw
$(package)_config_opts_linux=--with-pic $(package)_config_opts_linux=--with-pic
$(package)_cppflags_mingw32=-DUNICODE -D_UNICODE $(package)_cxxflags=-std=c++11
endef endef
define $(package)_preprocess_cmds define $(package)_preprocess_cmds
@ -29,4 +29,3 @@ endef
define $(package)_stage_cmds define $(package)_stage_cmds
$(MAKE) DESTDIR=$($(package)_staging_dir) install_lib install_include $(MAKE) DESTDIR=$($(package)_staging_dir) install_lib install_include
endef endef

View file

@ -1,6 +1,6 @@
package=boost package=boost
$(package)_version=1_69_0 $(package)_version=1_69_0
$(package)_download_path=https://boostorg.jfrog.io/artifactory/main/release/1.69.0/source/ $(package)_download_path=https://dl.bintray.com/boostorg/release/1.69.0/source/
$(package)_file_name=$(package)_$($(package)_version).tar.bz2 $(package)_file_name=$(package)_$($(package)_version).tar.bz2
$(package)_sha256_hash=8f32d4617390d1c2d16f26a27ab60d97807b35440d45891fa340fc2648b04406 $(package)_sha256_hash=8f32d4617390d1c2d16f26a27ab60d97807b35440d45891fa340fc2648b04406
$(package)_dependencies=icu $(package)_dependencies=icu

View file

@ -31,8 +31,6 @@ define $(package)_config_cmds
PKG_CONFIG_SYSROOT_DIR=/ \ PKG_CONFIG_SYSROOT_DIR=/ \
PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \ PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \
PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \ PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \
sed -i.old 's|^GEN_DEPS.c=.*|& $($(package)_cflags)|' config/mh-mingw* && \
sed -i.old 's|^GEN_DEPS.cc=.*|& $($(package)_cxxflags)|' config/mh-mingw* && \
$($(package)_autoconf) $($(package)_autoconf)
endef endef

View file

@ -27,5 +27,4 @@ define $(package)_stage_cmds
endef endef
define $(package)_postprocess_cmds define $(package)_postprocess_cmds
rm lib/*.la
endef endef

View file

@ -1,6 +1,6 @@
package=miniupnpc package=miniupnpc
$(package)_version=2.0.20180203 $(package)_version=2.0.20180203
$(package)_download_path=https://miniupnp.tuxfamily.org/files/ $(package)_download_path=http://miniupnp.free.fr/files
$(package)_file_name=$(package)-$($(package)_version).tar.gz $(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=90dda8c7563ca6cd4a83e23b3c66dbbea89603a1675bfdb852897c2c9cc220b7 $(package)_sha256_hash=90dda8c7563ca6cd4a83e23b3c66dbbea89603a1675bfdb852897c2c9cc220b7

View file

@ -5,7 +5,7 @@ $(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=8f9faeaebad088e772f4ef5e38252d472be4d878c6b3a2718c10a4fcebe7a41c $(package)_sha256_hash=8f9faeaebad088e772f4ef5e38252d472be4d878c6b3a2718c10a4fcebe7a41c
define $(package)_set_vars define $(package)_set_vars
$(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc) $($(package)_cflags) $($(package)_cppflags)" $(package)_config_env=AR="$($(package)_ar)" RANLIB="$($(package)_ranlib)" CC="$($(package)_cc)"
$(package)_config_opts=--prefix=$(host_prefix) --openssldir=$(host_prefix)/etc/openssl $(package)_config_opts=--prefix=$(host_prefix) --openssldir=$(host_prefix)/etc/openssl
$(package)_config_opts+=no-camellia $(package)_config_opts+=no-camellia
$(package)_config_opts+=no-capieng $(package)_config_opts+=no-capieng
@ -42,6 +42,7 @@ $(package)_config_opts+=no-weak-ssl-ciphers
$(package)_config_opts+=no-whirlpool $(package)_config_opts+=no-whirlpool
$(package)_config_opts+=no-zlib $(package)_config_opts+=no-zlib
$(package)_config_opts+=no-zlib-dynamic $(package)_config_opts+=no-zlib-dynamic
$(package)_config_opts+=$($(package)_cflags) $($(package)_cppflags)
$(package)_config_opts_linux=-fPIC -Wa,--noexecstack $(package)_config_opts_linux=-fPIC -Wa,--noexecstack
$(package)_config_opts_x86_64_linux=linux-x86_64 $(package)_config_opts_x86_64_linux=linux-x86_64
$(package)_config_opts_i686_linux=linux-generic32 $(package)_config_opts_i686_linux=linux-generic32

View file

@ -4,6 +4,7 @@ $(package)_download_path=$(native_$(package)_download_path)
$(package)_file_name=$(native_$(package)_file_name) $(package)_file_name=$(native_$(package)_file_name)
$(package)_sha256_hash=$(native_$(package)_sha256_hash) $(package)_sha256_hash=$(native_$(package)_sha256_hash)
$(package)_dependencies=native_$(package) $(package)_dependencies=native_$(package)
$(package)_cxxflags=-std=c++11
define $(package)_set_vars define $(package)_set_vars
$(package)_config_opts=--disable-shared --with-protoc=$(build_prefix)/bin/protoc $(package)_config_opts=--disable-shared --with-protoc=$(build_prefix)/bin/protoc

View file

@ -2,9 +2,6 @@ PACKAGE=qt
$(package)_version=5.9.6 $(package)_version=5.9.6
$(package)_download_path=https://download.qt.io/official_releases/qt/5.9/$($(package)_version)/submodules $(package)_download_path=https://download.qt.io/official_releases/qt/5.9/$($(package)_version)/submodules
$(package)_suffix=opensource-src-$($(package)_version).tar.xz $(package)_suffix=opensource-src-$($(package)_version).tar.xz
#$(package)_version=5.12.3
#$(package)_download_path=http://download.qt.io/official_releases/qt/5.12/$($(package)_version)/submodules
#$(package)_suffix=opensource-src-$($(package)_version).tar.gz
$(package)_file_name=qtbase-$($(package)_suffix) $(package)_file_name=qtbase-$($(package)_suffix)
$(package)_sha256_hash=eed620cb268b199bd83b3fc6a471c51d51e1dc2dbb5374fc97a0cc75facbe36f $(package)_sha256_hash=eed620cb268b199bd83b3fc6a471c51d51e1dc2dbb5374fc97a0cc75facbe36f
$(package)_dependencies=openssl zlib $(package)_dependencies=openssl zlib

View file

@ -6,10 +6,9 @@ $(package)_sha256_hash=bcbabe1e2c7d0eec4ed612e10b94b112dd5f06fcefa994a0c79a45d83
$(package)_patches=0001-fix-build-with-older-mingw64.patch 0002-disable-pthread_set_name_np.patch $(package)_patches=0001-fix-build-with-older-mingw64.patch 0002-disable-pthread_set_name_np.patch
define $(package)_set_vars define $(package)_set_vars
$(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror --disable-drafts $(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror
$(package)_config_opts += --without-libsodium --without-libgssapi_krb5 --without-pgm --without-norm --without-vmci
$(package)_config_opts += --disable-libunwind --disable-radix-tree --without-gcov
$(package)_config_opts_linux=--with-pic $(package)_config_opts_linux=--with-pic
$(package)_cxxflags=-std=c++11
endef endef
define $(package)_preprocess_cmds define $(package)_preprocess_cmds
@ -32,5 +31,5 @@ endef
define $(package)_postprocess_cmds define $(package)_postprocess_cmds
sed -i.old "s/ -lstdc++//" lib/pkgconfig/libzmq.pc && \ sed -i.old "s/ -lstdc++//" lib/pkgconfig/libzmq.pc && \
rm -rf bin share lib/*.la rm -rf bin share
endef endef

View file

@ -1,6 +1,21 @@
FROM ubuntu:16.04 FROM ubuntu:16.04
ARG VCS_REF
ARG BUILD_DATE
ENV LANG C.UTF-8 ENV LANG C.UTF-8
LABEL maintainer="Brannon King" \
decription="build_lbrycrd" \
version="1.0" \
org.label-schema.name="build_lbrycrd" \
org.label-schema.description="Use this to generate a reproducible build of LBRYcrd" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t build_lbrycrd packaging"
RUN set -xe; \ RUN set -xe; \
apt-get update; \ apt-get update; \
apt-get install --no-install-recommends -y build-essential libtool autotools-dev automake pkg-config git wget apt-utils \ apt-get install --no-install-recommends -y build-essential libtool autotools-dev automake pkg-config git wget apt-utils \
@ -21,22 +36,7 @@ RUN update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang-cpp-8
update-alternatives --set x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix; \ update-alternatives --set x86_64-w64-mingw32-g++ /usr/bin/x86_64-w64-mingw32-g++-posix; \
update-alternatives --set i686-w64-mingw32-g++ /usr/bin/i686-w64-mingw32-g++-posix; \ update-alternatives --set i686-w64-mingw32-g++ /usr/bin/i686-w64-mingw32-g++-posix; \
/usr/sbin/update-ccache-symlinks; \ /usr/sbin/update-ccache-symlinks; \
cd /usr/include/c++ && ln -s /usr/lib/llvm-8/include/c++/v1; \ cd /usr/include/c++ && ln -s /usr/lib/llvm-8/include/c++/v1;
cd /usr/lib/llvm-8/lib && ln -s libc++abi.so.1 libc++abi.so;
ARG VCS_REF
ARG BUILD_DATE
LABEL maintainer="blockchain@lbry.com" \
decription="build_lbrycrd" \
version="1.1" \
org.label-schema.name="build_lbrycrd" \
org.label-schema.description="Use this to generate a reproducible build of LBRYcrd" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/lbryio/lbrycrd" \
org.label-schema.schema-version="1.0.0-rc1" \
org.label-schema.vendor="LBRY" \
org.label-schema.docker.cmd="docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` --build-arg VCS_REF=`git rev-parse --short HEAD` -t lbry/build_lbrycrd packaging"
ENV PATH "/usr/lib/ccache:$PATH" ENV PATH "/usr/lib/ccache:$PATH"
WORKDIR /home WORKDIR /home

View file

@ -30,13 +30,13 @@ if which ccache >/dev/null; then
fi fi
pushd depends pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-apple-darwin14 NO_QT=1 V=1 make -j`getconf _NPROCESSORS_ONLN` HOST=x86_64-apple-darwin14 NO_QT=1 V=1
popd popd
./autogen.sh ./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-apple-darwin14 DEPS_DIR=`pwd`/depends/x86_64-apple-darwin14
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-reduce-exports --without-gui --with-icu="${DEPS_DIR}" --enable-static --disable-shared CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-reduce-exports --without-gui --with-icu="${DEPS_DIR}" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN) make -j`getconf _NPROCESSORS_ONLN`
${DEPS_DIR}/native/bin/x86_64-apple-darwin14-strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx ${DEPS_DIR}/native/bin/x86_64-apple-darwin14-strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx
if which ccache >/dev/null; then if which ccache >/dev/null; then

View file

@ -16,17 +16,14 @@ if which ccache >/dev/null; then
ccache -ps ccache -ps
fi fi
export CXXFLAGS="${CXXFLAGS:--frecord-gcc-switches}"
echo "CXXFLAGS set to $CXXFLAGS"
cd depends cd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-pc-linux-gnu NO_QT=1 V=1 make -j`getconf _NPROCESSORS_ONLN` HOST=x86_64-pc-linux-gnu NO_QT=1 V=1
cd .. cd ..
./autogen.sh ./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-pc-linux-gnu DEPS_DIR=`pwd`/depends/x86_64-pc-linux-gnu
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-static --disable-shared --with-pic --without-gui CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --enable-static --disable-shared --with-pic --without-gui
make -j$(getconf _NPROCESSORS_ONLN) make -j`getconf _NPROCESSORS_ONLN`
strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx strip src/lbrycrdd src/lbrycrd-cli src/lbrycrd-tx
if which ccache >/dev/null; then if which ccache >/dev/null; then

View file

@ -21,13 +21,13 @@ if which ccache >/dev/null; then
fi fi
pushd depends pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=i686-w64-mingw32 NO_QT=1 V=1 make -j`getconf _NPROCESSORS_ONLN` HOST=i686-w64-mingw32 NO_QT=1 V=1
popd popd
./autogen.sh ./autogen.sh
DEPS_DIR=$(pwd)/depends/i686-w64-mingw32 DEPS_DIR=`pwd`/depends/i686-w64-mingw32
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN) make -j`getconf _NPROCESSORS_ONLN`
i686-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe i686-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe
if which ccache >/dev/null; then if which ccache >/dev/null; then

View file

@ -20,13 +20,13 @@ if which ccache >/dev/null; then
fi fi
pushd depends pushd depends
make -j$(getconf _NPROCESSORS_ONLN) HOST=x86_64-w64-mingw32 NO_QT=1 V=1 make -j`getconf _NPROCESSORS_ONLN` HOST=x86_64-w64-mingw32 NO_QT=1 V=1
popd popd
./autogen.sh ./autogen.sh
DEPS_DIR=$(pwd)/depends/x86_64-w64-mingw32 DEPS_DIR=`pwd`/depends/x86_64-w64-mingw32
CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared CONFIG_SITE=${DEPS_DIR}/share/config.site ./configure --prefix=/ --without-gui --with-icu="$DEPS_DIR" --enable-static --disable-shared
make -j$(getconf _NPROCESSORS_ONLN) make -j`getconf _NPROCESSORS_ONLN`
x86_64-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe x86_64-w64-mingw32-strip src/lbrycrdd.exe src/lbrycrd-cli.exe src/lbrycrd-tx.exe
if which ccache >/dev/null; then if which ccache >/dev/null; then

View file

@ -149,13 +149,11 @@ BITCOIN_CORE_H = \
policy/policy.h \ policy/policy.h \
policy/rbf.h \ policy/rbf.h \
pow.h \ pow.h \
prefixtrie.h \
protocol.h \ protocol.h \
random.h \ random.h \
reverse_iterator.h \ reverse_iterator.h \
reverselock.h \ reverselock.h \
rpc/blockchain.h \ rpc/blockchain.h \
rpc/claimrpchelp.h \
rpc/client.h \ rpc/client.h \
rpc/mining.h \ rpc/mining.h \
rpc/protocol.h \ rpc/protocol.h \
@ -251,7 +249,6 @@ libbitcoin_server_a_SOURCES = \
policy/policy.cpp \ policy/policy.cpp \
policy/rbf.cpp \ policy/rbf.cpp \
pow.cpp \ pow.cpp \
prefixtrie.cpp \
rest.cpp \ rest.cpp \
rpc/blockchain.cpp \ rpc/blockchain.cpp \
rpc/claimtrie.cpp \ rpc/claimtrie.cpp \

View file

@ -45,7 +45,6 @@ BITCOIN_TESTS =\
test/bswap_tests.cpp \ test/bswap_tests.cpp \
test/checkqueue_tests.cpp \ test/checkqueue_tests.cpp \
test/coins_tests.cpp \ test/coins_tests.cpp \
test/compilerbug_tests.cpp \
test/compress_tests.cpp \ test/compress_tests.cpp \
test/crypto_tests.cpp \ test/crypto_tests.cpp \
test/cuckoocache_tests.cpp \ test/cuckoocache_tests.cpp \
@ -66,17 +65,11 @@ BITCOIN_TESTS =\
test/net_tests.cpp \ test/net_tests.cpp \
test/claimtriecache_tests.cpp \ test/claimtriecache_tests.cpp \
test/claimtriebranching_tests.cpp \ test/claimtriebranching_tests.cpp \
test/claimtrieexpirationfork_tests.cpp \
test/claimtriefixture.cpp \
test/claimtriehashfork_tests.cpp \
test/claimtrienormalization_tests.cpp \
test/claimtrierpc_tests.cpp \
test/nameclaim_tests.cpp \ test/nameclaim_tests.cpp \
test/netbase_tests.cpp \ test/netbase_tests.cpp \
test/pmt_tests.cpp \ test/pmt_tests.cpp \
test/policyestimator_tests.cpp \ test/policyestimator_tests.cpp \
test/pow_tests.cpp \ test/pow_tests.cpp \
test/prefixtrie_tests.cpp \
test/prevector_tests.cpp \ test/prevector_tests.cpp \
test/raii_event_tests.cpp \ test/raii_event_tests.cpp \
test/random_tests.cpp \ test/random_tests.cpp \
@ -157,6 +150,7 @@ test_test_lbrycrd_fuzzy_LDADD = \
$(LIBSECP256K1) $(LIBSECP256K1)
test_test_lbrycrd_fuzzy_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) test_test_lbrycrd_fuzzy_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS)
#
nodist_test_test_lbrycrd_SOURCES = $(GENERATED_TEST_FILES) nodist_test_test_lbrycrd_SOURCES = $(GENERATED_TEST_FILES)

View file

@ -98,7 +98,7 @@ bool DeserializeFileDB(const fs::path& path, Data& data)
FILE *file = fsbridge::fopen(path, "rb"); FILE *file = fsbridge::fopen(path, "rb");
CAutoFile filein(file, SER_DISK, CLIENT_VERSION); CAutoFile filein(file, SER_DISK, CLIENT_VERSION);
if (filein.IsNull()) if (filein.IsNull())
return false; return error("%s: Failed to open file %s", __func__, path.string());
return DeserializeDB(filein, data); return DeserializeDB(filein, data);
} }

View file

@ -3,7 +3,6 @@
// file COPYING or http://www.opensource.org/licenses/mit-license.php. // file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <bloom.h> #include <bloom.h>
#include <nameclaim.h>
#include <primitives/transaction.h> #include <primitives/transaction.h>
#include <hash.h> #include <hash.h>

View file

@ -123,7 +123,7 @@ void CChainParams::UpdateVersionBitsParameters(Consensus::DeploymentPos d, int64
class CMainParams : public CChainParams { class CMainParams : public CChainParams {
public: public:
CMainParams() { CMainParams() {
strNetworkID = CBaseChainParams::MAIN; strNetworkID = "lbrycrd";
consensus.nSubsidyLevelInterval = 1<<5; consensus.nSubsidyLevelInterval = 1<<5;
consensus.nMajorityEnforceBlockUpgrade = 750; consensus.nMajorityEnforceBlockUpgrade = 750;
consensus.nMajorityRejectBlockOutdated = 950; consensus.nMajorityRejectBlockOutdated = 950;
@ -132,8 +132,8 @@ public:
consensus.BIP34Height = 1; consensus.BIP34Height = 1;
consensus.BIP34Hash = uint256S("0xdecb9e2cca03a419fd9cca0cb2b1d5ad11b088f22f8f38556d93ac4358b86c24"); consensus.BIP34Hash = uint256S("0xdecb9e2cca03a419fd9cca0cb2b1d5ad11b088f22f8f38556d93ac4358b86c24");
// FIXME: adjust heights // FIXME: adjust heights
consensus.BIP65Height = 200000; consensus.BIP65Height = 600000;
consensus.BIP66Height = 200000; consensus.BIP66Height = 600000;
consensus.powLimit = uint256S("0000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"); consensus.powLimit = uint256S("0000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
consensus.nPowTargetTimespan = 150; //retarget every block consensus.nPowTargetTimespan = 150; //retarget every block
consensus.nPowTargetSpacing = 150; consensus.nPowTargetSpacing = 150;
@ -143,14 +143,10 @@ public:
consensus.nAllowMinDiffMinHeight = -1; consensus.nAllowMinDiffMinHeight = -1;
consensus.nAllowMinDiffMaxHeight = -1; consensus.nAllowMinDiffMaxHeight = -1;
consensus.nNormalizedNameForkHeight = 539940; // targeting 21 March 2019 consensus.nNormalizedNameForkHeight = 539940; // targeting 21 March 2019
consensus.nMinTakeoverWorkaroundHeight = 496850;
consensus.nMaxTakeoverWorkaroundHeight = 658300; // targeting 30 Oct 2019
consensus.nWitnessForkHeight = 680770; // targeting 11 Dec 2019
consensus.nAllClaimsInMerkleForkHeight = 658310; // targeting 30 Oct 2019
consensus.fPowAllowMinDifficultyBlocks = false; consensus.fPowAllowMinDifficultyBlocks = false;
consensus.fPowNoRetargeting = false; consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1916; // 95% of a half week consensus.nRuleChangeActivationThreshold = 1916; // 95% of 2016
consensus.nMinerConfirmationWindow = 2016; consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28; consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 1199145601; // January 1, 2008 consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 1199145601; // January 1, 2008
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 1230767999; // December 31, 2008 consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 1230767999; // December 31, 2008
@ -160,16 +156,17 @@ public:
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1462060800; // May 1st, 2016 consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1462060800; // May 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017 consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
// Deployment of SegWit (BIP141, BIP143, and BIP147) -- Unused (see nWitnessForkHeight). // Deployment of SegWit (BIP141, BIP143, and BIP147)
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1; consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1;
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1547942400; // Jan 20, 2019 consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1547942400; // Jan 20, 2019
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1548288000; // Jan 24, 2019 consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1548288000; // Jan 24, 2019
// The best chain should have at least this much work. // The best chain should have at least this much work.
consensus.nMinimumChainWork = uint256S("000000000000000000000000000000000000000000000499ed6684d1bf6f6fd3"); //946000 consensus.nMinimumChainWork = uint256S("0x000000000000000000000000000000000000000000000000607ca7e806c4c1e9"); //400000
// By default assume that the signatures in ancestors of this block are valid. // By default assume that the signatures in ancestors of this block are valid.
consensus.defaultAssumeValid = uint256S("0d3b537afe49820e1c6efc555463f955251b1293c6e5130137e1e25744431172"); //946000 //consensus.defaultAssumeValid = uint256S("0xf0e56e70782af63ccb49c76e852540688755869ba59ec68cac9c04a6b4d9f5ca"); //400000
consensus.defaultAssumeValid = uint256S("0xa6bbb48f5343eb9b0287c22f3ea8b29f36cf10794a37f8a925a894d6f4519913"); //4000
/** /**
* The message start string is designed to be unlikely to occur in normal data. * The message start string is designed to be unlikely to occur in normal data.
@ -195,11 +192,9 @@ public:
vSeeds.clear(); vSeeds.clear();
vFixedSeeds.clear(); vFixedSeeds.clear();
vSeeds.emplace_back("dnsseed1.lbry.io"); // LBRY Inc vSeeds.emplace_back("dnsseed1.lbry.io"); // lbry.io
vSeeds.emplace_back("dnsseed2.lbry.io"); // LBRY Inc vSeeds.emplace_back("dnsseed2.lbry.io"); // lbry.io
vSeeds.emplace_back("dnsseed3.lbry.io"); // LBRY Inc vSeeds.emplace_back("dnsseed3.lbry.io"); // lbry.io
vSeeds.emplace_back("seed.lbry.grin.io"); // Grin
vSeeds.emplace_back("seed.allaboutlbc.com"); // Madiator2011
base58Prefixes[PUBKEY_ADDRESS] = std::vector<unsigned char>(1, 0x55); base58Prefixes[PUBKEY_ADDRESS] = std::vector<unsigned char>(1, 0x55);
base58Prefixes[SCRIPT_ADDRESS] = std::vector<unsigned char>(1, 0x7a); base58Prefixes[SCRIPT_ADDRESS] = std::vector<unsigned char>(1, 0x7a);
@ -207,7 +202,9 @@ public:
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x88, 0xB2, 0x1E}; base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x88, 0xB2, 0x1E};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x88, 0xAD, 0xE4}; base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x88, 0xAD, 0xE4};
bech32_hrp = "lbc"; vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_main, pnSeed6_main + ARRAYLEN(pnSeed6_main));
bech32_hrp = "bc";
vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_main, pnSeed6_main + ARRAYLEN(pnSeed6_main)); vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_main, pnSeed6_main + ARRAYLEN(pnSeed6_main));
@ -241,7 +238,7 @@ public:
class CTestNetParams : public CChainParams { class CTestNetParams : public CChainParams {
public: public:
CTestNetParams() { CTestNetParams() {
strNetworkID = CBaseChainParams::TESTNET; strNetworkID = "lbrycrdtest";
consensus.nSubsidyLevelInterval = 1 << 5; consensus.nSubsidyLevelInterval = 1 << 5;
consensus.nMajorityEnforceBlockUpgrade = 51; consensus.nMajorityEnforceBlockUpgrade = 51;
consensus.nMajorityRejectBlockOutdated = 75; consensus.nMajorityRejectBlockOutdated = 75;
@ -261,10 +258,6 @@ public:
consensus.nAllowMinDiffMinHeight = 277299; consensus.nAllowMinDiffMinHeight = 277299;
consensus.nAllowMinDiffMaxHeight = 1100000; consensus.nAllowMinDiffMaxHeight = 1100000;
consensus.nNormalizedNameForkHeight = 993380; // targeting, 21 Feb 2019 consensus.nNormalizedNameForkHeight = 993380; // targeting, 21 Feb 2019
consensus.nMinTakeoverWorkaroundHeight = 99;
consensus.nMaxTakeoverWorkaroundHeight = 1198550; // targeting 30 Sep 2019
consensus.nWitnessForkHeight = 1198600;
consensus.nAllClaimsInMerkleForkHeight = 1198560; // targeting 30 Sep 2019
consensus.fPowAllowMinDifficultyBlocks = true; consensus.fPowAllowMinDifficultyBlocks = true;
consensus.fPowNoRetargeting = false; consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1512; // 75% for testchains consensus.nRuleChangeActivationThreshold = 1512; // 75% for testchains
@ -278,7 +271,7 @@ public:
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1456790400; // March 1st, 2016 consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1456790400; // March 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017 consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
// Deployment of SegWit (BIP141, BIP143, and BIP147) -- Unused (see nWitnessForkHeight). // Deployment of SegWit (BIP141, BIP143, and BIP147)
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1; consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].bit = 1;
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1462060800; // May 1st 2016 consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nStartTime = 1462060800; // May 1st 2016
consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1493596800; // May 1st 2017 consensus.vDeployments[Consensus::DEPLOYMENT_SEGWIT].nTimeout = 1493596800; // May 1st 2017
@ -317,7 +310,7 @@ public:
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF}; base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94}; base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94};
bech32_hrp = "tlbc"; bech32_hrp = "tb";
vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_test, pnSeed6_test + ARRAYLEN(pnSeed6_test)); vFixedSeeds = std::vector<SeedSpec6>(pnSeed6_test, pnSeed6_test + ARRAYLEN(pnSeed6_test));
@ -351,7 +344,7 @@ public:
class CRegTestParams : public CChainParams { class CRegTestParams : public CChainParams {
public: public:
CRegTestParams() { CRegTestParams() {
strNetworkID = CBaseChainParams::REGTEST; strNetworkID = "lbrycrdreg";
consensus.nSubsidyLevelInterval = 1 << 5; consensus.nSubsidyLevelInterval = 1 << 5;
consensus.BIP16Exception = uint256(); consensus.BIP16Exception = uint256();
consensus.BIP34Height = 1000; // BIP34 is needed for validation_block_tests consensus.BIP34Height = 1000; // BIP34 is needed for validation_block_tests
@ -368,10 +361,6 @@ public:
consensus.nAllowMinDiffMinHeight = -1; consensus.nAllowMinDiffMinHeight = -1;
consensus.nAllowMinDiffMaxHeight = -1; consensus.nAllowMinDiffMaxHeight = -1;
consensus.nNormalizedNameForkHeight = 250; // SDK depends upon this number consensus.nNormalizedNameForkHeight = 250; // SDK depends upon this number
consensus.nMinTakeoverWorkaroundHeight = -1;
consensus.nMaxTakeoverWorkaroundHeight = -1;
consensus.nWitnessForkHeight = 150;
consensus.nAllClaimsInMerkleForkHeight = 350;
consensus.fPowAllowMinDifficultyBlocks = false; consensus.fPowAllowMinDifficultyBlocks = false;
consensus.fPowNoRetargeting = false; consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 108; // 75% for testchains consensus.nRuleChangeActivationThreshold = 108; // 75% for testchains
@ -436,7 +425,7 @@ public:
base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF}; base58Prefixes[EXT_PUBLIC_KEY] = {0x04, 0x35, 0x87, 0xCF};
base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94}; base58Prefixes[EXT_SECRET_KEY] = {0x04, 0x35, 0x83, 0x94};
bech32_hrp = "rlbc"; bech32_hrp = "bcrt";
/* enable fallback fee on regtest */ /* enable fallback fee on regtest */
m_fallback_fee_enabled = true; m_fallback_fee_enabled = true;
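A side note on the `base58Prefixes[PUBKEY_ADDRESS]` value of 0x55 kept in this hunk: that version byte is what makes mainnet P2PKH addresses start with 'b'. A small Base58Check sketch to illustrate (the 20-byte hash is a dummy placeholder, not a real key hash):

```
import hashlib

B58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58check(version, payload):
    data = bytes([version]) + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num = int.from_bytes(data + checksum, 'big')
    out = ''
    while num:
        num, rem = divmod(num, 58)
        out = B58_ALPHABET[rem] + out
    # Each leading zero byte maps to a leading '1' (not hit for version 0x55).
    pad = len(data + checksum) - len((data + checksum).lstrip(b'\x00'))
    return '1' * pad + out

dummy_hash160 = bytes(20)                 # placeholder for RIPEMD160(SHA256(pubkey))
print(base58check(0x55, dummy_hash160))   # lbrycrd mainnet P2PKH, starts with 'b'
```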

View file

@ -13,7 +13,7 @@
const std::string CBaseChainParams::MAIN = "lbrycrd"; const std::string CBaseChainParams::MAIN = "lbrycrd";
const std::string CBaseChainParams::TESTNET = "lbrycrdtest"; const std::string CBaseChainParams::TESTNET = "lbrycrdtest";
const std::string CBaseChainParams::REGTEST = "lbrycrdreg"; const std::string CBaseChainParams::REGTEST = "regtest";
void SetupChainParamsBaseOptions() void SetupChainParamsBaseOptions()
{ {

View file

@ -2,9 +2,8 @@
// Distributed under the MIT software license, see the accompanying // Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php. // file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <coins.h> #include "claimscriptop.h"
#include <claimscriptop.h> #include "nameclaim.h"
#include <nameclaim.h>
CClaimScriptAddOp::CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int nHeight) CClaimScriptAddOp::CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int nHeight)
: point(point), nValue(nValue), nHeight(nHeight) : point(point), nValue(nValue), nHeight(nHeight)
@ -38,37 +37,32 @@ CClaimScriptUndoAddOp::CClaimScriptUndoAddOp(const COutPoint& point, int nHeight
bool CClaimScriptUndoAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name) bool CClaimScriptUndoAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{ {
auto claimId = ClaimIdHash(point.hash, point.n); auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrintf("--- [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return undoAddClaim(trieCache, name, claimId); return undoAddClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrintf("--- [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return undoAddClaim(trieCache, name, claimId); return undoAddClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoAddOp::undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); LogPrintf("%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
bool res = trieCache.undoAddClaim(name, point, nHeight); bool res = trieCache.undoAddClaim(name, point, nHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing claim fails\n", __func__); LogPrintf("%s: Removing fails\n", __func__);
return res; return res;
} }
bool CClaimScriptUndoAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
if (LogAcceptCategory(BCLog::CLAIMS)) { LogPrintf("--- [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf("--- [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, LogPrintf("%s: (txid: %s, nOut: %d) Removing support for %s, claimId: %s, from the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf(
"%s: (txid: %s, nOut: %d) Removing support for %s, claimId: %s, from the claim trie due to block disconnect\n",
__func__, point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.undoAddSupport(name, point, nHeight); bool res = trieCache.undoAddSupport(name, point, nHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__); LogPrintf("%s: Removing support fails\n", __func__);
return res; return res;
} }
@ -80,36 +74,32 @@ CClaimScriptSpendOp::CClaimScriptSpendOp(const COutPoint& point, int nHeight, in
bool CClaimScriptSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name) bool CClaimScriptSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{ {
auto claimId = ClaimIdHash(point.hash, point.n); auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrintf("+++ [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return spendClaim(trieCache, name, claimId); return spendClaim(trieCache, name, claimId);
} }
bool CClaimScriptSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrintf("+++ [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
return spendClaim(trieCache, name, claimId); return spendClaim(trieCache, name, claimId);
} }
bool CClaimScriptSpendOp::spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); LogPrintf("%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
bool res = trieCache.spendClaim(name, point, nHeight, nValidHeight); bool res = trieCache.spendClaim(name, point, nHeight, nValidHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing fails\n", __func__); LogPrintf("%s: Removing fails\n", __func__);
return res; return res;
} }
bool CClaimScriptSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
if (LogAcceptCategory(BCLog::CLAIMS)) { LogPrintf("+++ [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf("+++ [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, LogPrintf("%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
claimId.GetHex(), point.hash.ToString(), point.n);
LogPrintf("%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie\n", __func__,
point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.spendSupport(name, point, nHeight, nValidHeight); bool res = trieCache.spendSupport(name, point, nHeight, nValidHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__); LogPrintf("%s: Removing support fails\n", __func__);
return res; return res;
} }
@ -130,13 +120,13 @@ bool CClaimScriptUndoSpendOp::updateClaim(CClaimTrieCache& trieCache, const std:
bool CClaimScriptUndoSpendOp::undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoSpendOp::undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); LogPrintf("%s: (txid: %s, nOut: %d) Restoring %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
return trieCache.undoSpendClaim(name, point, claimId, nValue, nHeight, nValidHeight); return trieCache.undoSpendClaim(name, point, claimId, nValue, nHeight, nValidHeight);
} }
bool CClaimScriptUndoSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); LogPrintf("%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString());
return trieCache.undoSpendSupport(name, point, claimId, nValue, nHeight, nValidHeight); return trieCache.undoSpendSupport(name, point, claimId, nValue, nHeight, nValidHeight);
} }
@ -149,7 +139,7 @@ bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CSc
{ {
int op; int op;
std::vector<std::vector<unsigned char> > vvchParams; std::vector<std::vector<unsigned char> > vvchParams;
if (!DecodeClaimScript(scriptPubKey, op, vvchParams, trieCache.allowSupportMetadata())) if (!DecodeClaimScript(scriptPubKey, op, vvchParams))
return false; return false;
switch (op) { switch (op) {
@ -163,81 +153,59 @@ bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CSc
throw std::runtime_error("Unimplemented OP handler."); throw std::runtime_error("Unimplemented OP handler.");
} }
void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks) bool SpendClaim(CClaimTrieCache& trieCache, const CScript& scriptPubKey, const COutPoint& point, int nHeight, int& nValidHeight, spentClaimsType& spentClaims)
{ {
class CSpendClaimHistory : public CClaimScriptSpendOp class CSpendClaimHistory : public CClaimScriptSpendOp
{ {
public: public:
using CClaimScriptSpendOp::CClaimScriptSpendOp; CSpendClaimHistory(spentClaimsType& spentClaims, const COutPoint& point, int nHeight, int& nValidHeight)
: CClaimScriptSpendOp(point, nHeight, nValidHeight), spentClaims(spentClaims)
{
}
bool spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override bool spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
{ {
if (CClaimScriptSpendOp::spendClaim(trieCache, name, claimId)) { if (CClaimScriptSpendOp::spendClaim(trieCache, name, claimId)) {
callback(name, claimId); spentClaims.emplace_back(name, claimId);
return true; return true;
} }
return false; return false;
} }
std::function<void(const std::string& name, const uint160& claimId)> callback;
private:
spentClaimsType& spentClaims;
}; };
spentClaimsType spentClaims; CSpendClaimHistory spendClaim(spentClaims, point, nHeight, nValidHeight);
return ProcessClaim(spendClaim, trieCache, scriptPubKey);
for (std::size_t j = 0; j < tx.vin.size(); j++) {
const CTxIn& txin = tx.vin[j];
const Coin& coin = view.AccessCoin(txin.prevout);
CScript scriptPubKey;
int scriptHeight = nHeight;
if (coin.out.IsNull() && callbacks.findScriptKey) {
scriptPubKey = callbacks.findScriptKey(txin.prevout);
} else {
scriptHeight = coin.nHeight;
scriptPubKey = coin.out.scriptPubKey;
}
if (scriptPubKey.empty())
continue;
int nValidAtHeight;
CSpendClaimHistory spendClaim(COutPoint(txin.prevout.hash, txin.prevout.n), scriptHeight, nValidAtHeight);
spendClaim.callback = [&spentClaims](const std::string& name, const uint160& claimId) {
spentClaims.emplace_back(name, claimId);
};
if (ProcessClaim(spendClaim, trieCache, scriptPubKey) && callbacks.claimUndoHeights)
callbacks.claimUndoHeights(j, nValidAtHeight);
} }
bool AddSpendClaim(CClaimTrieCache& trieCache, const CScript& scriptPubKey, const COutPoint& point, CAmount nValue, int nHeight, spentClaimsType& spentClaims)
{
class CAddSpendClaim : public CClaimScriptAddOp class CAddSpendClaim : public CClaimScriptAddOp
{ {
public: public:
using CClaimScriptAddOp::CClaimScriptAddOp; CAddSpendClaim(spentClaimsType& spentClaims, const COutPoint& point, CAmount nValue, int nHeight)
: CClaimScriptAddOp(point, nValue, nHeight), spentClaims(spentClaims)
{
}
bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
{ {
if (callback(name, claimId)) spentClaimsType::iterator itSpent = spentClaims.begin();
return CClaimScriptAddOp::updateClaim(trieCache, name, claimId); for (; itSpent != spentClaims.end(); ++itSpent) {
return false;
}
std::function<bool(const std::string& name, const uint160& claimId)> callback;
};
for (std::size_t j = 0; j < tx.vout.size(); j++) {
const CTxOut& txout = tx.vout[j];
if (txout.scriptPubKey.empty())
continue;
CAddSpendClaim addClaim(COutPoint(tx.GetHash(), j), txout.nValue, nHeight);
addClaim.callback = [&trieCache, &spentClaims](const std::string& name, const uint160& claimId) -> bool {
for (auto itSpent = spentClaims.begin(); itSpent != spentClaims.end(); ++itSpent) {
if (itSpent->second == claimId && trieCache.normalizeClaimName(name) == trieCache.normalizeClaimName(itSpent->first)) { if (itSpent->second == claimId && trieCache.normalizeClaimName(name) == trieCache.normalizeClaimName(itSpent->first)) {
spentClaims.erase(itSpent); spentClaims.erase(itSpent);
return true; return CClaimScriptAddOp::updateClaim(trieCache, name, claimId);
} }
} }
return false; return false;
}
private:
spentClaimsType& spentClaims;
}; };
ProcessClaim(addClaim, trieCache, txout.scriptPubKey);
} CAddSpendClaim addClaim(spentClaims, point, nValue, nHeight);
return ProcessClaim(addClaim, trieCache, scriptPubKey);
} }
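Since this hunk reworks the spend/update bookkeeping quite heavily, here is the matching rule it implements as a plain-Python sketch (normalize() stands in for CClaimTrieCache::normalizeClaimName; this is an illustration of the shown logic, not lbrycrd code): claims spent by the inputs are remembered as (name, claimId) pairs, and an OP_UPDATE_CLAIM output is only honored if it reuses a just-spent claimId under the same normalized name.

```
def normalize(name):
    # Stand-in for CClaimTrieCache::normalizeClaimName (illustration only).
    return name.lower()

def match_updates(spent_inputs, update_outputs):
    # spent_inputs:   [(name, claim_id), ...] gathered while spending the tx inputs
    # update_outputs: [(name, claim_id), ...] taken from OP_UPDATE_CLAIM outputs
    spent_claims = list(spent_inputs)
    accepted = []
    for name, claim_id in update_outputs:
        for spent in spent_claims:
            spent_name, spent_id = spent
            if spent_id == claim_id and normalize(name) == normalize(spent_name):
                spent_claims.remove(spent)   # each spent claim can back one update
                accepted.append((name, claim_id))
                break
    return accepted

print(match_updates([('@Foo', 'c1')], [('@foo', 'c1'), ('@bar', 'c2')]))
# -> [('@foo', 'c1')]  only the update that matches a spent claim is accepted
```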

View file

@@ -59,17 +59,17 @@ public:
     */
    CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int nHeight);
    /**
     * Implementation of OP_CLAIM_NAME handler
     * @see CClaimScriptOp::claimName
     */
    bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
    /**
     * Implementation of OP_UPDATE_CLAIM handler
     * @see CClaimScriptOp::updateClaim
     */
    bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
    /**
     * Implementation of OP_SUPPORT_CLAIM handler
     * @see CClaimScriptOp::supportClaim
     */
    bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
@@ -100,17 +100,17 @@ public:
     */
    CClaimScriptUndoAddOp(const COutPoint& point, int nHeight);
    /**
     * Implementation of OP_CLAIM_NAME handler
     * @see CClaimScriptOp::claimName
     */
    bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
    /**
     * Implementation of OP_UPDATE_CLAIM handler
     * @see CClaimScriptOp::updateClaim
     */
    bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
    /**
     * Implementation of OP_SUPPORT_CLAIM handler
     * @see CClaimScriptOp::supportClaim
     */
    bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
@@ -141,17 +141,17 @@ public:
     */
    CClaimScriptSpendOp(const COutPoint& point, int nHeight, int& nValidHeight);
    /**
     * Implementation of OP_CLAIM_NAME handler
     * @see CClaimScriptOp::claimName
     */
    bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
    /**
     * Implementation of OP_UPDATE_CLAIM handler
     * @see CClaimScriptOp::updateClaim
     */
    bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
    /**
     * Implementation of OP_SUPPORT_CLAIM handler
     * @see CClaimScriptOp::supportClaim
     */
    bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
@@ -184,17 +184,17 @@ public:
     */
    CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight);
    /**
     * Implementation of OP_CLAIM_NAME handler
     * @see CClaimScriptOp::claimName
     */
    bool claimName(CClaimTrieCache& trieCache, const std::string& name) override;
    /**
     * Implementation of OP_UPDATE_CLAIM handler
     * @see CClaimScriptOp::updateClaim
     */
    bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
    /**
     * Implementation of OP_SUPPORT_CLAIM handler
     * @see CClaimScriptOp::supportClaim
     */
    bool supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override;
@ -225,21 +225,26 @@ typedef std::pair<std::string, uint160> spentClaimType;
typedef std::vector<spentClaimType> spentClaimsType; typedef std::vector<spentClaimType> spentClaimsType;
struct CUpdateCacheCallbacks
{
std::function<CScript(const COutPoint& point)> findScriptKey;
std::function<void(int, int)> claimUndoHeights;
};
/** /**
 * Function to spend claims from the trie, keeping the list of successfully spent claims
* @param[in] tx transaction inputs/outputs
* @param[in] trieCache trie to operate on * @param[in] trieCache trie to operate on
* @param[in] view coins cache * @param[in] scriptPubKey claim script to be decoded
* @param[in] point pair of transaction hash and its index * @param[in] point pair of transaction hash and its index
* @param[in] nHeight entry height of the claim * @param[in] nHeight entry height of the claim
* @param[out] fallback optional callbacks * @param[out] nValidHeight valid height of the claim
* @param[out] spentClaims inserts successfully spent claim
*/ */
void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks = {}); bool SpendClaim(CClaimTrieCache& trieCache, const CScript& scriptPubKey, const COutPoint& point, int nHeight, int& nValidHeight, spentClaimsType& spentClaims);
/**
 * Function to add / update (if present in the spent list) a claim in the trie
* @param[in] trieCache trie to operate on
* @param[in] scriptPubKey claim script to be decoded
* @param[in] point pair of transaction hash and its index
 * @param[in] nValue value of the claim
* @param[in] nHeight entry height of the claim
* @param[out] spentClaims erases successfully added claim
*/
bool AddSpendClaim(CClaimTrieCache& trieCache, const CScript& scriptPubKey, const COutPoint& point, CAmount nValue, int nHeight, spentClaimsType& spentClaims);
#endif // CLAIMSCRIPTOP_H
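To make the pair of declarations above easier to follow, here is a hedged sketch (not the actual block-connect code) of how SpendClaim and AddSpendClaim are driven for one transaction, mirroring the input and output loops in the claimscriptop.cpp hunk earlier. The header name claimscriptop.h and the surrounding CTransaction/CCoinsViewCache types are assumptions taken from context.

// Hedged sketch only: drives SpendClaim()/AddSpendClaim() for one transaction.
// The spentClaims list carries claims spent by the inputs into the output pass,
// where AddSpendClaim() erases a matching (claimId, normalized name) entry so an
// update is treated as a move rather than a brand-new claim.
#include <claimscriptop.h>          // assumed header name for the declarations above
#include <coins.h>
#include <primitives/transaction.h>

static void UpdateTrieForTx(CClaimTrieCache& trieCache, const CCoinsViewCache& view,
                            const CTransaction& tx, int nHeight)
{
    spentClaimsType spentClaims;

    // Pass 1: spend any claim or support carried by the inputs.
    for (std::size_t i = 0; i < tx.vin.size(); ++i) {
        const Coin& coin = view.AccessCoin(tx.vin[i].prevout);
        if (coin.out.IsNull() || coin.out.scriptPubKey.empty())
            continue;
        int nValidAtHeight = 0;
        SpendClaim(trieCache, coin.out.scriptPubKey, tx.vin[i].prevout,
                   coin.nHeight, nValidAtHeight, spentClaims);
    }

    // Pass 2: add the claims found in the outputs.
    for (std::size_t i = 0; i < tx.vout.size(); ++i) {
        const CTxOut& txout = tx.vout[i];
        if (txout.scriptPubKey.empty())
            continue;
        AddSpendClaim(trieCache, txout.scriptPubKey, COutPoint(tx.GetHash(), i),
                      txout.nValue, nHeight, spentClaims);
    }
}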

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,67 +1,41 @@
#include "claimtrie.h"
#include <consensus/merkle.h> #include <boost/algorithm/string.hpp>
#include <chainparams.h> #include <boost/foreach.hpp>
#include <claimtrie.h>
#include <hash.h>
#include <boost/locale.hpp>
#include <boost/locale/conversion.hpp> #include <boost/locale/conversion.hpp>
#include <boost/locale/localization_backend.hpp> #include <boost/locale/localization_backend.hpp>
#include <boost/locale.hpp>
#include <boost/scope_exit.hpp> #include <boost/scope_exit.hpp>
#include <boost/scoped_ptr.hpp>
CClaimTrieCacheExpirationFork::CClaimTrieCacheExpirationFork(CClaimTrie* base) void CClaimTrieCacheExpirationFork::removeAndAddToExpirationQueue(expirationQueueRowType &row, int height, bool increment) const
: CClaimTrieCacheBase(base)
{ {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight)); for (expirationQueueRowType::iterator e = row.begin(); e != row.end(); ++e)
}
void CClaimTrieCacheExpirationFork::setExpirationTime(int time)
{ {
nExpirationTime = time; // remove and insert with new expiration time
removeFromExpirationQueue(e->name, e->outPoint, height);
int extend_expiration = Params().GetConsensus().nExtendedClaimExpirationTime - Params().GetConsensus().nOriginalClaimExpirationTime;
int new_expiration_height = increment ? height + extend_expiration : height - extend_expiration;
nameOutPointType entry(e->name, e->outPoint);
addToExpirationQueue(new_expiration_height, entry);
} }
int CClaimTrieCacheExpirationFork::expirationTime() const }
void CClaimTrieCacheExpirationFork::removeAndAddSupportToExpirationQueue(expirationQueueRowType &row, int height, bool increment) const
{ {
return nExpirationTime; for (expirationQueueRowType::iterator e = row.begin(); e != row.end(); ++e)
}
bool CClaimTrieCacheExpirationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{ {
if (CClaimTrieCacheBase::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo)) { // remove and insert with new expiration time
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight)); removeSupportFromExpirationQueue(e->name, e->outPoint, height);
return true; int extend_expiration = Params().GetConsensus().nExtendedClaimExpirationTime - Params().GetConsensus().nOriginalClaimExpirationTime;
} int new_expiration_height = increment ? height + extend_expiration : height - extend_expiration;
return false; nameOutPointType entry(e->name, e->outPoint);
addSupportToExpirationQueue(new_expiration_height, entry);
} }
bool CClaimTrieCacheExpirationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo)
{
if (CClaimTrieCacheBase::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo)) {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
return true;
}
return false;
} }
void CClaimTrieCacheExpirationFork::initializeIncrement() bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment) const
{
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nExtendedClaimExpirationForkHeight)
return;
forkForExpirationChange(true);
}
bool CClaimTrieCacheExpirationFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheBase::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nExtendedClaimExpirationForkHeight)
forkForExpirationChange(false);
return ret;
}
bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment)
{ {
/* /*
If increment is True, we have forked to extend the expiration time, thus items in the expiration queue If increment is True, we have forked to extend the expiration time, thus items in the expiration queue
@ -71,40 +45,75 @@ bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment)
will have their expiration extension removed. will have their expiration extension removed.
*/ */
// look through dirty expiration queues
std::set<int> dirtyHeights;
for (expirationQueueType::const_iterator i = base->dirtyExpirationQueueRows.begin(); i != base->dirtyExpirationQueueRows.end(); ++i)
{
int height = i->first;
dirtyHeights.insert(height);
expirationQueueRowType row = i->second;
removeAndAddToExpirationQueue(row, height, increment);
}
std::set<int> dirtySupportHeights;
for (expirationQueueType::const_iterator i = base->dirtySupportExpirationQueueRows.begin(); i != base->dirtySupportExpirationQueueRows.end(); ++i)
{
int height = i->first;
dirtySupportHeights.insert(height);
expirationQueueRowType row = i->second;
removeAndAddSupportToExpirationQueue(row, height, increment);
}
//look through db for expiration queues, if we haven't already found it in dirty expiration queue //look through db for expiration queues, if we haven't already found it in dirty expiration queue
boost::scoped_ptr<CDBIterator> pcursor(base->db->NewIterator()); boost::scoped_ptr<CDBIterator> pcursor(const_cast<CDBWrapper*>(&base->db)->NewIterator());
for (pcursor->SeekToFirst(); pcursor->Valid(); pcursor->Next()) { pcursor->SeekToFirst();
std::pair<uint8_t, int> key; while (pcursor->Valid())
if (!pcursor->GetKey(key)) {
continue; std::pair<char, int> key;
if (pcursor->GetKey(key))
{
int height = key.second; int height = key.second;
if (key.first == CLAIM_EXP_QUEUE_ROW) { // if we've looked through this in dirtyExprirationQueueRows, don't use it
// because its stale
if ((key.first == EXP_QUEUE_ROW) & (dirtyHeights.count(height) == 0))
{
expirationQueueRowType row; expirationQueueRowType row;
if (pcursor->GetValue(row)) { if (pcursor->GetValue(row))
reactivateClaim(row, height, increment); {
} else { removeAndAddToExpirationQueue(row, height, increment);
}
else
{
return error("%s(): error reading expiration queue rows from disk", __func__); return error("%s(): error reading expiration queue rows from disk", __func__);
} }
} else if (key.first == SUPPORT_EXP_QUEUE_ROW) { }
else if ((key.first == SUPPORT_EXP_QUEUE_ROW) & (dirtySupportHeights.count(height) == 0))
{
expirationQueueRowType row; expirationQueueRowType row;
if (pcursor->GetValue(row)) { if (pcursor->GetValue(row))
reactivateSupport(row, height, increment); {
} else { removeAndAddSupportToExpirationQueue(row, height, increment);
}
else
{
return error("%s(): error reading support expiration queue rows from disk", __func__); return error("%s(): error reading support expiration queue rows from disk", __func__);
} }
} }
}
pcursor->Next();
} }
return true; return true;
} }
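The height arithmetic applied by removeAndAddToExpirationQueue() and removeAndAddSupportToExpirationQueue() is easy to check in isolation. The following self-contained sketch repeats it with placeholder numbers rather than the consensus nOriginalClaimExpirationTime/nExtendedClaimExpirationTime values.

// Standalone sketch of the expiration-fork height adjustment: every queued
// expiration moves forward by the difference between the extended and original
// expiration windows, and moves back by the same amount when the fork is undone.
// The constants below are placeholders, not the consensus values.
#include <cassert>

int AdjustedExpirationHeight(int queuedHeight, bool increment,
                             int originalExpiration, int extendedExpiration)
{
    const int extension = extendedExpiration - originalExpiration;
    return increment ? queuedHeight + extension : queuedHeight - extension;
}

int main()
{
    const int original = 100;   // placeholder for nOriginalClaimExpirationTime
    const int extended = 400;   // placeholder for nExtendedClaimExpirationTime

    // A claim queued to expire at height 150 is pushed out by 300 blocks...
    assert(AdjustedExpirationHeight(150, true, original, extended) == 450);
    // ...and pulled back again if the fork is rolled back.
    assert(AdjustedExpirationHeight(450, false, original, extended) == 150);
    return 0;
}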
bool CClaimTrieCacheNormalizationFork::shouldNormalize() const
{ bool CClaimTrieCacheNormalizationFork::shouldNormalize() const {
return nNextHeight > Params().GetConsensus().nNormalizedNameForkHeight; return nCurrentHeight > Params().GetConsensus().nNormalizedNameForkHeight;
} }
std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::string& name, bool force) const std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::string& name, bool force) const {
{
if (!force && !shouldNormalize()) if (!force && !shouldNormalize())
return name; return name;
@ -122,6 +131,7 @@ std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::stri
std::string normalized; std::string normalized;
try { try {
// Check if it is a valid utf-8 string. If not, it will throw a // Check if it is a valid utf-8 string. If not, it will throw a
// boost::locale::conv::conversion_error exception which we catch later // boost::locale::conv::conversion_error exception which we catch later
normalized = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop); normalized = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
@ -131,12 +141,15 @@ std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::stri
// these methods supposedly only use the "UTF8" portion of the locale object: // these methods supposedly only use the "UTF8" portion of the locale object:
normalized = boost::locale::normalize(normalized, boost::locale::norm_nfd, utf8); normalized = boost::locale::normalize(normalized, boost::locale::norm_nfd, utf8);
normalized = boost::locale::fold_case(normalized, utf8); normalized = boost::locale::fold_case(normalized, utf8);
} catch (const boost::locale::conv::conversion_error& e) { }
catch (const boost::locale::conv::conversion_error& e){
return name; return name;
} catch (const std::bad_cast& e) { }
catch (const std::bad_cast& e) {
LogPrintf("%s() is invalid or dependencies are missing: %s\n", __func__, e.what()); LogPrintf("%s() is invalid or dependencies are missing: %s\n", __func__, e.what());
throw; throw;
} catch (const std::exception& e) { // TODO: change to use ... with current_exception() in c++11 }
catch (const std::exception& e) { // TODO: change to use ... with current_exception() in c++11
LogPrintf("%s() had an unexpected exception: %s\n", __func__, e.what()); LogPrintf("%s() had an unexpected exception: %s\n", __func__, e.what());
return name; return name;
} }
@ -144,338 +157,144 @@ std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::stri
return normalized; return normalized;
} }
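For readers who want to try the normalization rules outside the node, this is a minimal standalone sketch of the same Boost.Locale pipeline (UTF-8 validation, NFD normalization, case folding). The explicit ICU backend selection and the en_US.UTF-8 locale name are assumptions for the sketch, not taken verbatim from this file.

// Minimal standalone sketch of the normalization pipeline used by
// normalizeClaimName(): validate UTF-8, apply NFD, then fold case.
// Assumes a Boost.Locale build whose backend supports normalization (ICU).
#include <boost/locale.hpp>
#include <iostream>
#include <string>

std::string NormalizeName(const std::string& name)
{
    try {
        // Throws boost::locale::conv::conversion_error on invalid UTF-8.
        std::string utf8 = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);

        boost::locale::localization_backend_manager manager =
            boost::locale::localization_backend_manager::global();
        manager.select("icu");                       // assumed backend choice
        boost::locale::generator curLocale(manager);
        const std::locale utf8Locale = curLocale("en_US.UTF-8");

        utf8 = boost::locale::normalize(utf8, boost::locale::norm_nfd, utf8Locale);
        return boost::locale::fold_case(utf8, utf8Locale);
    } catch (const boost::locale::conv::conversion_error&) {
        return name; // not valid UTF-8: leave the name untouched, as above
    }
}

int main()
{
    std::cout << NormalizeName("TEST") << '\n';          // prints "test"
    std::cout << NormalizeName("\xC3\x84pfel") << '\n';  // folded and decomposed
    return 0;
}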
bool CClaimTrieCacheNormalizationFork::insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover) bool CClaimTrieCacheNormalizationFork::insertClaimIntoTrie(const std::string& name, CClaimValue claim,
{ bool fCheckTakeover) const {
return CClaimTrieCacheExpirationFork::insertClaimIntoTrie(normalizeClaimName(name, overrideInsertNormalization), claim, fCheckTakeover); return CClaimTrieCacheExpirationFork::insertClaimIntoTrie(normalizeClaimName(name, overrideInsertNormalization), claim, fCheckTakeover);
} }
bool CClaimTrieCacheNormalizationFork::removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover) bool CClaimTrieCacheNormalizationFork::removeClaimFromTrie(const std::string& name, const COutPoint& outPoint,
{ CClaimValue& claim, bool fCheckTakeover) const {
return CClaimTrieCacheExpirationFork::removeClaimFromTrie(normalizeClaimName(name, overrideRemoveNormalization), outPoint, claim, fCheckTakeover); return CClaimTrieCacheExpirationFork::removeClaimFromTrie(normalizeClaimName(name, overrideRemoveNormalization), outPoint, claim, fCheckTakeover);
} }
bool CClaimTrieCacheNormalizationFork::insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover) bool CClaimTrieCacheNormalizationFork::insertSupportIntoMap(const std::string& name, CSupportValue support,
{ bool fCheckTakeover) const {
return CClaimTrieCacheExpirationFork::insertSupportIntoMap(normalizeClaimName(name, overrideInsertNormalization), support, fCheckTakeover); return CClaimTrieCacheExpirationFork::insertSupportIntoMap(normalizeClaimName(name, overrideInsertNormalization), support, fCheckTakeover);
} }
bool CClaimTrieCacheNormalizationFork::removeSupportFromMap(const std::string& name, const COutPoint& outPoint,
bool CClaimTrieCacheNormalizationFork::removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover) CSupportValue& support, bool fCheckTakeover) const {
{
return CClaimTrieCacheExpirationFork::removeSupportFromMap(normalizeClaimName(name, overrideRemoveNormalization), outPoint, support, fCheckTakeover); return CClaimTrieCacheExpirationFork::removeSupportFromMap(normalizeClaimName(name, overrideRemoveNormalization), outPoint, support, fCheckTakeover);
} }
bool CClaimTrieCacheNormalizationFork::normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo, claimQueueRowType& removeUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo) struct claimsForNormalization: public claimsForNameType {
{ std::string normalized;
if (nNextHeight != Params().GetConsensus().nNormalizedNameForkHeight) claimsForNormalization(const std::vector<CClaimValue>& claims, const std::vector<CSupportValue>& supports,
return false; int nLastTakeoverHeight, const std::string& name, const std::string& normalized)
: claimsForNameType(claims, supports, nLastTakeoverHeight, name), normalized(normalized) {}
};
bool CClaimTrieCacheNormalizationFork::normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo, claimQueueRowType& removeUndo,
insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int> >& takeoverHeightUndo) const {
struct CNameChangeDetector: public CNodeCallback {
std::vector<claimsForNormalization> hits;
const CClaimTrieCacheNormalizationFork* owner;
CNameChangeDetector(const CClaimTrieCacheNormalizationFork* owner): owner(owner) {}
void visit(const std::string& name, const CClaimTrieNode* node) {
if (node->claims.empty()) return;
const std::string normalized = owner->normalizeClaimName(name, true);
if (normalized == name) return;
supportMapEntryType supports;
owner->getSupportsForName(name, supports);
const claimsForNormalization cfn(node->claims, supports, node->nHeightOfLastTakeover, name, normalized);
hits.push_back(cfn);
}
};
if (nCurrentHeight == Params().GetConsensus().nNormalizedNameForkHeight) {
// run the one-time upgrade of all names that need to change // run the one-time upgrade of all names that need to change
// it modifies the (cache) trie as it goes, so we need to grab everything to be modified first // it modifies the (cache) trie as it goes, so we need to grab
// everything to be modified first
CNameChangeDetector detector(this);
iterateTrie(detector);
for (auto it = base->cbegin(); it != base->cend(); ++it) { for (std::vector<claimsForNormalization>::iterator it = detector.hits.begin(); it != detector.hits.end(); ++it) {
const std::string normalized = normalizeClaimName(it.key(), true); BOOST_FOREACH(CSupportValue support, it->supports) {
if (normalized == it.key())
continue;
auto& name = it.key();
auto supports = getSupportsForName(name);
for (auto support : supports) {
// if it's already going to expire just skip it // if it's already going to expire just skip it
if (support.nHeight + expirationTime() <= nNextHeight) if (support.nHeight + base->nExpirationTime <= nCurrentHeight)
continue; continue;
assert(removeSupportFromMap(name, support.outPoint, support, false)); bool success = removeSupportFromMap(it->name, support.outPoint, support, false);
expireSupportUndo.emplace_back(name, support); assert(success);
assert(insertSupportIntoMap(normalized, support, false)); expireSupportUndo.push_back(std::make_pair(it->name, support));
insertSupportUndo.emplace_back(name, support.outPoint, -1); success = insertSupportIntoMap(it->normalized, support, false);
assert(success);
insertSupportUndo.push_back(nameOutPointHeightType(it->name, support.outPoint, -1));
} }
namesToCheckForTakeover.insert(normalized); BOOST_FOREACH(CClaimValue claim, it->claims) {
if (claim.nHeight + base->nExpirationTime <= nCurrentHeight)
auto cached = cacheData(name, false);
if (!cached || cached->empty())
continue; continue;
auto claimsCopy = cached->claims; bool success = removeClaimFromTrie(it->name, claim.outPoint, claim, false);
auto takeoverHeightCopy = cached->nHeightOfLastTakeover; assert(success);
for (auto claim : claimsCopy) { removeUndo.push_back(std::make_pair(it->name, claim));
if (claim.nHeight + expirationTime() <= nNextHeight)
continue;
assert(removeClaimFromTrie(name, claim.outPoint, claim, false)); success = insertClaimIntoTrie(it->normalized, claim, true);
removeUndo.emplace_back(name, claim); assert(success);
assert(insertClaimIntoTrie(normalized, claim, true)); insertUndo.push_back(nameOutPointHeightType(it->name, claim.outPoint, -1));
insertUndo.emplace_back(name, claim.outPoint, -1);
} }
takeoverHeightUndo.emplace_back(name, takeoverHeightCopy); takeoverHeightUndo.push_back(std::make_pair(it->name, it->nLastTakeoverHeight));
} }
return true; return true;
} }
return false;
bool CClaimTrieCacheNormalizationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
overrideInsertNormalization = normalizeAllNamesInTrieIfNecessary(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
BOOST_SCOPE_EXIT(&overrideInsertNormalization) { overrideInsertNormalization = false; }
BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
} }
bool CClaimTrieCacheNormalizationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo) bool CClaimTrieCacheNormalizationFork::incrementBlock(insertUndoType& insertUndo,
{ claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int> >& takeoverHeightUndo) {
overrideInsertNormalization = normalizeAllNamesInTrieIfNecessary(insertUndo, expireUndo, insertSupportUndo,
expireSupportUndo, takeoverHeightUndo);
BOOST_SCOPE_EXIT(&overrideInsertNormalization) { overrideInsertNormalization = false; } BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::incrementBlock(insertUndo, expireUndo, insertSupportUndo,
expireSupportUndo, takeoverHeightUndo);
}
bool CClaimTrieCacheNormalizationFork::decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int> >& takeoverHeightUndo) {
overrideRemoveNormalization = shouldNormalize(); overrideRemoveNormalization = shouldNormalize();
BOOST_SCOPE_EXIT(&overrideRemoveNormalization) { overrideRemoveNormalization = false; } BOOST_SCOPE_EXIT(&overrideRemoveNormalization) { overrideRemoveNormalization = false; } BOOST_SCOPE_EXIT_END
BOOST_SCOPE_EXIT_END return CClaimTrieCacheExpirationFork::decrementBlock(insertUndo, expireUndo, insertSupportUndo,
return CClaimTrieCacheExpirationFork::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo); expireSupportUndo, takeoverHeightUndo);
} }
bool CClaimTrieCacheNormalizationFork::getProofForName(const std::string& name, CClaimTrieProof& proof) bool CClaimTrieCacheNormalizationFork::getProofForName(const std::string& name, CClaimTrieProof& proof) const {
{
return CClaimTrieCacheExpirationFork::getProofForName(normalizeClaimName(name), proof); return CClaimTrieCacheExpirationFork::getProofForName(normalizeClaimName(name), proof);
} }
bool CClaimTrieCacheNormalizationFork::getInfoForName(const std::string& name, CClaimValue& claim) const bool CClaimTrieCacheNormalizationFork::getInfoForName(const std::string& name, CClaimValue& claim) const {
{
return CClaimTrieCacheExpirationFork::getInfoForName(normalizeClaimName(name), claim); return CClaimTrieCacheExpirationFork::getInfoForName(normalizeClaimName(name), claim);
} }
CClaimSupportToName CClaimTrieCacheNormalizationFork::getClaimsForName(const std::string& name) const claimsForNameType CClaimTrieCacheNormalizationFork::getClaimsForName(const std::string& name) const {
{
return CClaimTrieCacheExpirationFork::getClaimsForName(normalizeClaimName(name)); return CClaimTrieCacheExpirationFork::getClaimsForName(normalizeClaimName(name));
} }
int CClaimTrieCacheNormalizationFork::getDelayForName(const std::string& name, const uint160& claimId) const int CClaimTrieCacheNormalizationFork::getDelayForName(const std::string& name, const uint160& claimId) const {
{
return CClaimTrieCacheExpirationFork::getDelayForName(normalizeClaimName(name), claimId); return CClaimTrieCacheExpirationFork::getDelayForName(normalizeClaimName(name), claimId);
} }
std::string CClaimTrieCacheNormalizationFork::adjustNameForValidHeight(const std::string& name, int validHeight) const void CClaimTrieCacheNormalizationFork::addClaimToQueues(const std::string& name, CClaimValue& claim) const {
{ return CClaimTrieCacheExpirationFork::addClaimToQueues(normalizeClaimName(name,
claim.nValidAtHeight > Params().GetConsensus().nNormalizedNameForkHeight), claim);
}
bool CClaimTrieCacheNormalizationFork::addSupportToQueues(const std::string& name, CSupportValue& support) const {
return CClaimTrieCacheExpirationFork::addSupportToQueues(normalizeClaimName(name,
support.nValidAtHeight > Params().GetConsensus().nNormalizedNameForkHeight), support);
}
std::string CClaimTrieCacheNormalizationFork::adjustNameForValidHeight(const std::string& name, int validHeight) const {
return normalizeClaimName(name, validHeight > Params().GetConsensus().nNormalizedNameForkHeight); return normalizeClaimName(name, validHeight > Params().GetConsensus().nNormalizedNameForkHeight);
} }
CClaimTrieCacheHashFork::CClaimTrieCacheHashFork(CClaimTrie* base) : CClaimTrieCacheNormalizationFork(base)
{
}
static const uint256 leafHash = uint256S("0000000000000000000000000000000000000000000000000000000000000002");
static const uint256 emptyHash = uint256S("0000000000000000000000000000000000000000000000000000000000000003");
std::vector<uint256> getClaimHashes(const CClaimTrieData& data)
{
std::vector<uint256> hashes;
for (auto& claim : data.claims)
hashes.push_back(getValueHash(claim.outPoint, data.nHeightOfLastTakeover));
return hashes;
}
template <typename T>
using iCbType = std::function<uint256(T&)>;
template <typename TIterator>
uint256 recursiveBinaryTreeHash(TIterator& it, const iCbType<TIterator>& process)
{
std::vector<uint256> childHashes;
for (auto& child : it.children())
childHashes.emplace_back(process(child));
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
else if (!it.hasChildren())
return {};
auto left = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
auto right = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
return Hash(left.begin(), left.end(), right.begin(), right.end());
}
uint256 CClaimTrieCacheHashFork::recursiveComputeMerkleHash(CClaimTrie::iterator& it)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveComputeMerkleHash(it);
using iterator = CClaimTrie::iterator;
iCbType<iterator> process = [&process](iterator& it) -> uint256 {
if (it->hash.IsNull())
it->hash = recursiveBinaryTreeHash(it, process);
assert(!it->hash.IsNull());
return it->hash;
};
return process(it);
}
bool CClaimTrieCacheHashFork::recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveCheckConsistency(it, failed);
struct CRecursiveBreak {};
using iterator = CClaimTrie::const_iterator;
iCbType<iterator> process = [&failed, &process](iterator& it) -> uint256 {
if (it->hash.IsNull() || it->hash != recursiveBinaryTreeHash(it, process)) {
failed = it.key();
throw CRecursiveBreak();
}
return it->hash;
};
try {
process(it);
} catch (const CRecursiveBreak&) {
return false;
}
return true;
}
std::vector<uint256> ComputeMerklePath(const std::vector<uint256>& hashes, uint32_t idx)
{
uint32_t count = 0;
int matchlevel = -1;
bool matchh = false;
uint256 inner[32], h;
const uint32_t one = 1;
std::vector<uint256> res;
const auto iterateInner = [&](int& level) {
for (; !(count & (one << level)); level++) {
const auto& ihash = inner[level];
if (matchh) {
res.push_back(ihash);
} else if (matchlevel == level) {
res.push_back(h);
matchh = true;
}
h = Hash(ihash.begin(), ihash.end(), h.begin(), h.end());
}
};
while (count < hashes.size()) {
h = hashes[count];
matchh = count == idx;
count++;
int level = 0;
iterateInner(level);
// Store the resulting hash at inner position level.
inner[level] = h;
if (matchh)
matchlevel = level;
}
int level = 0;
while (!(count & (one << level)))
level++;
h = inner[level];
matchh = matchlevel == level;
while (count != (one << level)) {
// If we reach this point, h is an inner value that is not the top.
if (matchh)
res.push_back(h);
h = Hash(h.begin(), h.end(), h.begin(), h.end());
// Increment count to the value it would have if two entries at this level had existed.
count += (one << level);
level++;
iterateInner(level);
}
return res;
}
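ComputeMerklePath() only produces the branch for the leaf at position idx; the matching fold that recomputes the root is sketched below. It assumes the branch is ordered from the leaf level upwards, which is how getProofForName() below consumes it via fillPairs, and it reuses the same double-SHA256 Hash() and duplicate-last-node convention as the code above.

// Hedged sketch: recompute a Merkle root from a leaf hash and the branch
// returned by ComputeMerklePath() for that leaf's index.
#include <hash.h>
#include <uint256.h>
#include <vector>

static uint256 FoldMerklePath(uint256 hash, const std::vector<uint256>& path, uint32_t idx)
{
    for (const uint256& sibling : path) {
        if (idx & 1)
            hash = Hash(sibling.begin(), sibling.end(), hash.begin(), hash.end());
        else
            hash = Hash(hash.begin(), hash.end(), sibling.begin(), sibling.end());
        idx >>= 1;
    }
    return hash; // should equal the Merkle root of the original hash list
}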
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof)
{
return getProofForName(name, proof, nullptr);
}
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof, const std::function<bool(const CClaimValue&)>& comp)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::getProofForName(name, proof);
auto fillPairs = [&proof](const std::vector<uint256>& hashes, uint32_t idx) {
auto partials = ComputeMerklePath(hashes, idx);
for (int i = partials.size() - 1; i >= 0; --i)
proof.pairs.emplace_back((idx >> i) & 1, partials[i]);
};
// cache the parent nodes
cacheData(name, false);
getMerkleHash();
proof = CClaimTrieProof();
for (auto& it : static_cast<const CClaimTrie&>(nodesToAddOrUpdate).nodes(name)) {
std::vector<uint256> childHashes;
uint32_t nextCurrentIdx = 0;
for (auto& child : it.children()) {
if (name.find(child.key()) == 0)
nextCurrentIdx = uint32_t(childHashes.size());
childHashes.push_back(child->hash);
}
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
// I am on a node; I need a hash(children, claims)
// if I am the last node on the list, it will be hash(children, x)
// else it will be hash(x, claims)
if (it.key() == name) {
uint32_t nClaimIndex = 0;
auto& claims = it->claims;
auto itClaim = !comp ? claims.begin() : std::find_if(claims.begin(), claims.end(), comp);
if (itClaim != claims.end()) {
proof.hasValue = true;
proof.outPoint = itClaim->outPoint;
proof.nHeightOfLastTakeover = it->nHeightOfLastTakeover;
nClaimIndex = std::distance(claims.begin(), itClaim);
}
auto hash = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
proof.pairs.emplace_back(true, hash);
if (!claimHashes.empty())
fillPairs(claimHashes, nClaimIndex);
} else {
auto hash = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
proof.pairs.emplace_back(false, hash);
if (!childHashes.empty())
fillPairs(childHashes, nextCurrentIdx);
}
}
std::reverse(proof.pairs.begin(), proof.pairs.end());
return true;
}
void CClaimTrieCacheHashFork::copyAllBaseToCache()
{
for (auto it = base->cbegin(); it != base->cend(); ++it)
if (nodesAlreadyCached.insert(it.key()).second)
nodesToAddOrUpdate.insert(it.key(), it.data());
for (auto it = nodesToAddOrUpdate.begin(); it != nodesToAddOrUpdate.end(); ++it)
it->hash.SetNull();
}
void CClaimTrieCacheHashFork::initializeIncrement()
{
CClaimTrieCacheNormalizationFork::initializeIncrement();
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
return;
// if we are forking, we load the entire base trie into the cache trie
// we reset its hash computation so it can be recomputed completely
copyAllBaseToCache();
}
bool CClaimTrieCacheHashFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheNormalizationFork::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
copyAllBaseToCache();
return ret;
}
bool CClaimTrieCacheHashFork::allowSupportMetadata() const
{
return nNextHeight >= Params().GetConsensus().nAllClaimsInMerkleForkHeight;
}


@ -77,12 +77,6 @@ struct Params {
int nAllowMinDiffMinHeight; int nAllowMinDiffMinHeight;
int nAllowMinDiffMaxHeight; int nAllowMinDiffMaxHeight;
int nNormalizedNameForkHeight; int nNormalizedNameForkHeight;
int nMinTakeoverWorkaroundHeight;
int nMaxTakeoverWorkaroundHeight;
int nWitnessForkHeight;
int64_t nPowTargetSpacing; int64_t nPowTargetSpacing;
int64_t nPowTargetTimespan; int64_t nPowTargetTimespan;
/** how long it took claims to expire before the hard fork */ /** how long it took claims to expire before the hard fork */
@ -96,8 +90,6 @@ struct Params {
nOriginalClaimExpirationTime : nOriginalClaimExpirationTime :
nExtendedClaimExpirationTime; nExtendedClaimExpirationTime;
} }
/** blocks before the hard fork that adds all claims into the merkle hash */
int64_t nAllClaimsInMerkleForkHeight;
int64_t DifficultyAdjustmentInterval() const { return nPowTargetTimespan / nPowTargetSpacing; } int64_t DifficultyAdjustmentInterval() const { return nPowTargetTimespan / nPowTargetSpacing; }
uint256 nMinimumChainWork; uint256 nMinimumChainWork;
uint256 defaultAssumeValid; uint256 defaultAssumeValid;


@ -154,8 +154,7 @@ int64_t GetTransactionSigOpCost(const CTransaction& tx, const CCoinsViewCache& i
const Coin& coin = inputs.AccessCoin(tx.vin[i].prevout); const Coin& coin = inputs.AccessCoin(tx.vin[i].prevout);
assert(!coin.IsSpent()); assert(!coin.IsSpent());
const CTxOut &prevout = coin.out; const CTxOut &prevout = coin.out;
const CScript& scriptPubKey = StripClaimScriptPrefix(prevout.scriptPubKey); nSigOps += CountWitnessSigOps(tx.vin[i].scriptSig, prevout.scriptPubKey, &tx.vin[i].scriptWitness, flags);
nSigOps += CountWitnessSigOps(tx.vin[i].scriptSig, scriptPubKey, &tx.vin[i].scriptWitness, flags);
} }
return nSigOps; return nSigOps;
} }


@ -15,7 +15,6 @@
#include <util.h> #include <util.h>
#include <utilmoneystr.h> #include <utilmoneystr.h>
#include <utilstrencodings.h> #include <utilstrencodings.h>
#include <nameclaim.h>
UniValue ValueFromAmount(const CAmount& amount) UniValue ValueFromAmount(const CAmount& amount)
{ {
@ -148,20 +147,12 @@ void ScriptToUniv(const CScript& script, UniValue& out, bool include_address)
out.pushKV("hex", HexStr(script.begin(), script.end())); out.pushKV("hex", HexStr(script.begin(), script.end()));
std::vector<std::vector<unsigned char>> solns; std::vector<std::vector<unsigned char>> solns;
txnouttype type; int claimOp; txnouttype type;
auto stripped = StripClaimScriptPrefix(script, claimOp); Solver(script, type, solns);
Solver(stripped, type, solns); out.pushKV("type", GetTxnOutputType(type));
if (claimOp >= 0) {
out.pushKV("isclaim", UniValue(claimOp == OP_CLAIM_NAME || claimOp == OP_UPDATE_CLAIM));
out.pushKV("issupport", UniValue(claimOp == OP_SUPPORT_CLAIM));
out.pushKV("subtype", GetTxnOutputType(type));
out.pushKV("type", GetTxnOutputType(TX_NONSTANDARD)); // trying to keep backwards compatibility
}
else
out.pushKV("type", GetTxnOutputType(type)); // trying to keep backwards compatibility
CTxDestination address; CTxDestination address;
if (include_address && ExtractDestination(stripped, address)) { if (include_address && ExtractDestination(script, address)) {
out.pushKV("address", EncodeDestination(address)); out.pushKV("address", EncodeDestination(address));
} }
} }
@ -177,29 +168,20 @@ void ScriptPubKeyToUniv(const CScript& scriptPubKey,
if (fIncludeHex) if (fIncludeHex)
out.pushKV("hex", HexStr(scriptPubKey.begin(), scriptPubKey.end())); out.pushKV("hex", HexStr(scriptPubKey.begin(), scriptPubKey.end()));
int claimOp; if (!ExtractDestinations(scriptPubKey, type, addresses, nRequired)) {
auto stripped = StripClaimScriptPrefix(scriptPubKey, claimOp); out.pushKV("type", GetTxnOutputType(type));
auto extracted = ExtractDestinations(stripped, type, addresses, nRequired); return;
if (extracted)
out.pushKV("reqSigs", nRequired);
if (claimOp >= 0) {
out.pushKV("isclaim", UniValue(claimOp == OP_CLAIM_NAME || claimOp == OP_UPDATE_CLAIM));
out.pushKV("issupport", UniValue(claimOp == OP_SUPPORT_CLAIM));
out.pushKV("subtype", GetTxnOutputType(type));
out.pushKV("type", GetTxnOutputType(TX_NONSTANDARD));
} }
else
out.pushKV("reqSigs", nRequired);
out.pushKV("type", GetTxnOutputType(type)); out.pushKV("type", GetTxnOutputType(type));
if (extracted) {
UniValue a(UniValue::VARR); UniValue a(UniValue::VARR);
for (const CTxDestination& addr : addresses) { for (const CTxDestination& addr : addresses) {
a.push_back(EncodeDestination(addr)); a.push_back(EncodeDestination(addr));
} }
out.pushKV("addresses", a); out.pushKV("addresses", a);
} }
}
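A hedged usage sketch for the claim-aware variant above: claim outputs keep "type":"nonstandard" for backwards compatibility and expose the inner script type as "subtype" together with the "isclaim" and "issupport" flags. The core_io.h include and the exact three-argument signature are assumptions based on upstream Bitcoin, not shown in this hunk.

// Hedged sketch: detect a claim output through the decoded UniValue fields.
#include <core_io.h>          // assumed location of the ScriptPubKeyToUniv declaration
#include <script/script.h>
#include <univalue.h>

bool IsClaimOutput(const CScript& scriptPubKey)
{
    UniValue out(UniValue::VOBJ);
    ScriptPubKeyToUniv(scriptPubKey, out, /* fIncludeHex */ false);
    return out.exists("isclaim") && out["isclaim"].get_bool();
}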
void TxToUniv(const CTransaction& tx, const uint256& hashBlock, UniValue& entry, bool include_hex, int serialize_flags) void TxToUniv(const CTransaction& tx, const uint256& hashBlock, UniValue& entry, bool include_hex, int serialize_flags)
{ {


@ -80,14 +80,14 @@ static void SetMaxOpenFiles(leveldb::Options *options) {
// implementation that does not use extra file descriptors (the fds are // implementation that does not use extra file descriptors (the fds are
// closed after being mmaped). // closed after being mmaped).
// //
// Increasing the value beyond the nmap count is dangerous because LevelDB will // Increasing the value beyond the default is dangerous because LevelDB will
// fall back to a non-mmap implementation when the file count is too large (thus contending select()). // fall back to a non-mmap implementation when the file count is too large.
// On 32-bit Unix host we should decrease the value because the handles use // On 32-bit Unix host we should decrease the value because the handles use
// up real fds, and we want to avoid fd exhaustion issues. // up real fds, and we want to avoid fd exhaustion issues.
// //
// See PR #12495 for further discussion. // See PR #12495 for further discussion.
int default_open_files = 400; int default_open_files = options->max_open_files;
#ifndef WIN32 #ifndef WIN32
if (sizeof(void*) < 8) { if (sizeof(void*) < 8) {
options->max_open_files = 64; options->max_open_files = 64;
@ -100,10 +100,9 @@ static void SetMaxOpenFiles(leveldb::Options *options) {
static leveldb::Options GetOptions(size_t nCacheSize) static leveldb::Options GetOptions(size_t nCacheSize)
{ {
leveldb::Options options; leveldb::Options options;
auto write_cache = std::min(nCacheSize / 4, size_t(32 * 1024 * 1024)); // cap write_cache options.block_cache = leveldb::NewLRUCache(nCacheSize / 2);
options.block_cache = leveldb::NewLRUCache(nCacheSize - write_cache * 2); options.write_buffer_size = nCacheSize / 4; // up to two write buffers may be held in memory simultaneously
options.write_buffer_size = write_cache; // up to two write buffers may be held in memory simultaneously options.filter_policy = leveldb::NewBloomFilterPolicy(10);
options.filter_policy = leveldb::NewBloomFilterPolicy(12);
options.compression = leveldb::kNoCompression; options.compression = leveldb::kNoCompression;
options.info_log = new CBitcoinLevelDBLogger(); options.info_log = new CBitcoinLevelDBLogger();
if (leveldb::kMajorVersion > 1 || (leveldb::kMajorVersion == 1 && leveldb::kMinorVersion >= 16)) { if (leveldb::kMajorVersion > 1 || (leveldb::kMajorVersion == 1 && leveldb::kMinorVersion >= 16)) {
@ -116,7 +115,7 @@ static leveldb::Options GetOptions(size_t nCacheSize)
} }
CDBWrapper::CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory, bool fWipe, bool obfuscate) CDBWrapper::CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory, bool fWipe, bool obfuscate)
: m_name(fs::basename(path)), ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION) : m_name(fs::basename(path))
{ {
penv = nullptr; penv = nullptr;
readoptions.verify_checksums = true; readoptions.verify_checksums = true;
@ -181,16 +180,8 @@ CDBWrapper::~CDBWrapper()
options.env = nullptr; options.env = nullptr;
} }
bool CDBWrapper::Sync() {
CDBBatch batch(*this);
return WriteBatch(batch, true);
}
bool CDBWrapper::WriteBatch(CDBBatch& batch, bool fSync) bool CDBWrapper::WriteBatch(CDBBatch& batch, bool fSync)
{ {
if (!pdb)
return false;
const bool log_memory = LogAcceptCategory(BCLog::LEVELDB); const bool log_memory = LogAcceptCategory(BCLog::LEVELDB);
double mem_before = 0; double mem_before = 0;
if (log_memory) { if (log_memory) {


@ -16,6 +16,9 @@
#include <leveldb/db.h> #include <leveldb/db.h>
#include <leveldb/write_batch.h> #include <leveldb/write_batch.h>
static const size_t DBWRAPPER_PREALLOC_KEY_SIZE = 64;
static const size_t DBWRAPPER_PREALLOC_VALUE_SIZE = 1024;
class dbwrapper_error : public std::runtime_error class dbwrapper_error : public std::runtime_error
{ {
public: public:
@ -40,6 +43,77 @@ namespace dbwrapper_private {
}; };
/** Batch of changes queued to be written to a CDBWrapper */
class CDBBatch
{
friend class CDBWrapper;
private:
const CDBWrapper &parent;
leveldb::WriteBatch batch;
CDataStream ssKey;
CDataStream ssValue;
size_t size_estimate;
public:
/**
* @param[in] _parent CDBWrapper that this batch is to be submitted to
*/
explicit CDBBatch(const CDBWrapper &_parent) : parent(_parent), ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION), size_estimate(0) { };
void Clear()
{
batch.Clear();
size_estimate = 0;
}
template <typename K, typename V>
void Write(const K& key, const V& value)
{
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
ssValue.reserve(DBWRAPPER_PREALLOC_VALUE_SIZE);
ssValue << value;
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
leveldb::Slice slValue(ssValue.data(), ssValue.size());
batch.Put(slKey, slValue);
// LevelDB serializes writes as:
// - byte: header
// - varint: key length (1 byte up to 127B, 2 bytes up to 16383B, ...)
// - byte[]: key
// - varint: value length
// - byte[]: value
// The formula below assumes the key and value are both less than 16k.
size_estimate += 3 + (slKey.size() > 127) + slKey.size() + (slValue.size() > 127) + slValue.size();
ssKey.clear();
ssValue.clear();
}
template <typename K>
void Erase(const K& key)
{
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
batch.Delete(slKey);
// LevelDB serializes erases as:
// - byte: header
// - varint: key length
// - byte[]: key
// The formula below assumes the key is less than 16kB.
size_estimate += 2 + (slKey.size() > 127) + slKey.size();
ssKey.clear();
}
size_t SizeEstimate() const { return size_estimate; }
};
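The size_estimate bookkeeping above can be sanity-checked in isolation; this standalone sketch repeats the same framing formula (a header byte plus varint-encoded lengths, assuming keys and values stay under 16 KiB) with concrete numbers.

// Standalone sketch of CDBBatch's size estimate for Put and Delete records.
// Example: a 36-byte key with a 200-byte value costs 3 + 0 + 36 + 1 + 200 = 240
// estimated bytes, because only the value length needs a 2-byte varint.
#include <cassert>
#include <cstddef>

size_t EstimatePutSize(size_t keySize, size_t valueSize)
{
    return 3 + (keySize > 127) + keySize + (valueSize > 127) + valueSize;
}

size_t EstimateEraseSize(size_t keySize)
{
    return 2 + (keySize > 127) + keySize;
}

int main()
{
    assert(EstimatePutSize(36, 200) == 240);
    assert(EstimateEraseSize(36) == 38);
    return 0;
}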
class CDBIterator class CDBIterator
{ {
private: private:
@ -62,6 +136,7 @@ public:
template<typename K> void Seek(const K& key) { template<typename K> void Seek(const K& key) {
CDataStream ssKey(SER_DISK, CLIENT_VERSION); CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key; ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size()); leveldb::Slice slKey(ssKey.data(), ssKey.size());
piter->Seek(slKey); piter->Seek(slKey);
@ -98,8 +173,6 @@ public:
}; };
class CDBBatch;
class CDBWrapper class CDBWrapper
{ {
friend const std::vector<unsigned char>& dbwrapper_private::GetObfuscateKey(const CDBWrapper &w); friend const std::vector<unsigned char>& dbwrapper_private::GetObfuscateKey(const CDBWrapper &w);
@ -140,8 +213,6 @@ private:
std::vector<unsigned char> CreateObfuscateKey() const; std::vector<unsigned char> CreateObfuscateKey() const;
public: public:
mutable CDataStream ssKey, ssValue;
/** /**
* @param[in] path Location in the filesystem where leveldb data will be stored. * @param[in] path Location in the filesystem where leveldb data will be stored.
* @param[in] nCacheSize Configures various leveldb cache settings. * @param[in] nCacheSize Configures various leveldb cache settings.
@ -151,7 +222,7 @@ public:
* with a zero'd byte array. * with a zero'd byte array.
*/ */
CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory = false, bool fWipe = false, bool obfuscate = false); CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory = false, bool fWipe = false, bool obfuscate = false);
virtual ~CDBWrapper(); ~CDBWrapper();
CDBWrapper(const CDBWrapper&) = delete; CDBWrapper(const CDBWrapper&) = delete;
/* CDBWrapper& operator=(const CDBWrapper&) = delete; */ /* CDBWrapper& operator=(const CDBWrapper&) = delete; */
@ -159,13 +230,13 @@ public:
template <typename K, typename V> template <typename K, typename V>
bool Read(const K& key, V& value) const bool Read(const K& key, V& value) const
{ {
assert(ssKey.empty()); CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key; ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size()); leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue; std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue); leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) { if (!status.ok()) {
if (status.IsNotFound()) if (status.IsNotFound())
return false; return false;
@ -183,17 +254,23 @@ public:
} }
template <typename K, typename V> template <typename K, typename V>
bool Write(const K& key, const V& value, bool fSync = false); bool Write(const K& key, const V& value, bool fSync = false)
{
CDBBatch batch(*this);
batch.Write(key, value);
return WriteBatch(batch, fSync);
}
template <typename K> template <typename K>
bool Exists(const K& key) const bool Exists(const K& key) const
{ {
CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey << key; ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size()); leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue; std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue); leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) { if (!status.ok()) {
if (status.IsNotFound()) if (status.IsNotFound())
return false; return false;
@ -204,7 +281,12 @@ public:
} }
template <typename K> template <typename K>
bool Erase(const K& key, bool fSync = false); bool Erase(const K& key, bool fSync = false)
{
CDBBatch batch(*this);
batch.Erase(key);
return WriteBatch(batch, fSync);
}
bool WriteBatch(CDBBatch& batch, bool fSync = false); bool WriteBatch(CDBBatch& batch, bool fSync = false);
@ -217,7 +299,11 @@ public:
return true; return true;
} }
bool Sync(); bool Sync()
{
CDBBatch batch(*this);
return WriteBatch(batch, true);
}
CDBIterator *NewIterator() CDBIterator *NewIterator()
{ {
@ -233,8 +319,8 @@ public:
size_t EstimateSize(const K& key_begin, const K& key_end) const size_t EstimateSize(const K& key_begin, const K& key_end) const
{ {
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION); CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(ssKey.capacity()); ssKey1.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey2.reserve(ssKey.capacity()); ssKey2.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey1 << key_begin; ssKey1 << key_begin;
ssKey2 << key_end; ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size()); leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
@ -252,8 +338,8 @@ public:
void CompactRange(const K& key_begin, const K& key_end) const void CompactRange(const K& key_begin, const K& key_end) const
{ {
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION); CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(ssKey.capacity()); ssKey1.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey2.reserve(ssKey.capacity()); ssKey2.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
ssKey1 << key_begin; ssKey1 << key_begin;
ssKey2 << key_end; ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size()); leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
@ -263,88 +349,4 @@ public:
}; };
/** Batch of changes queued to be written to a CDBWrapper */
class CDBBatch
{
friend class CDBWrapper;
const CDBWrapper &parent;
leveldb::WriteBatch batch;
size_t size_estimate;
CDataStream ssKey, ssValue;
public:
/**
* @param[in] _parent CDBWrapper that this batch is to be submitted to
*/
explicit CDBBatch(const CDBWrapper &_parent) : parent(_parent), size_estimate(0),
ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION) {
ssKey.reserve(parent.ssKey.capacity());
ssValue.reserve(parent.ssValue.capacity());
};
void Clear()
{
batch.Clear();
size_estimate = 0;
}
template <typename K, typename V>
void Write(const K& key, const V& value)
{
assert(ssKey.empty());
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
assert(ssValue.empty());
ssValue << value;
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
leveldb::Slice slValue(ssValue.data(), ssValue.size());
batch.Put(slKey, slValue);
// LevelDB serializes writes as:
// - byte: header
// - varint: key length (1 byte up to 127B, 2 bytes up to 16383B, ...)
// - byte[]: key
// - varint: value length
// - byte[]: value
// The formula below assumes the key and value are both less than 16k.
size_estimate += 3 + (slKey.size() > 127) + slKey.size() + (slValue.size() > 127) + slValue.size();
ssKey.clear();
ssValue.clear();
}
template <typename K>
void Erase(const K& key)
{
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
batch.Delete(slKey);
// LevelDB serializes erases as:
// - byte: header
// - varint: key length
// - byte[]: key
// The formula below assumes the key is less than 16kB.
size_estimate += 2 + (slKey.size() > 127) + slKey.size();
ssKey.clear();
}
size_t SizeEstimate() const { return size_estimate; }
};
template<typename K>
bool CDBWrapper::Erase(const K &key, bool fSync) {
CDBBatch batch(*this);
batch.Erase(key);
return WriteBatch(batch, fSync);
}
template<typename K, typename V>
bool CDBWrapper::Write(const K &key, const V &value, bool fSync) {
CDBBatch batch(*this);
batch.Write(key, value);
return WriteBatch(batch, fSync);
}
#endif // BITCOIN_DBWRAPPER_H
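A hedged usage sketch of the batching API in this header: several writes and erases are queued in one CDBBatch and flushed with a single WriteBatch() call, which is exactly what Write(), Erase() and Sync() reduce to. The ('n', name) key scheme and the integer values are illustrative only, not a real database layout.

// Hedged sketch: batch several updates and flush them atomically.
#include <dbwrapper.h>
#include <string>
#include <utility>

bool FlushHeights(CDBWrapper& db, const std::string& nameA, const std::string& nameB)
{
    CDBBatch batch(db);
    batch.Write(std::make_pair('n', nameA), 1000);   // ('n', name) -> height, illustrative keys
    batch.Write(std::make_pair('n', nameB), 2000);
    batch.Erase(std::make_pair('n', std::string("obsolete")));
    // fSync=true forces a synchronous flush, as CDBWrapper::Sync() does.
    return db.WriteBatch(batch, true);
}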


@ -28,8 +28,6 @@ protected:
DB(const fs::path& path, size_t n_cache_size, DB(const fs::path& path, size_t n_cache_size,
bool f_memory = false, bool f_wipe = false, bool f_obfuscate = false); bool f_memory = false, bool f_wipe = false, bool f_obfuscate = false);
~DB() override {}
/// Read block locator of the chain that the txindex is in sync with. /// Read block locator of the chain that the txindex is in sync with.
bool ReadBestBlock(CBlockLocator& locator) const; bool ReadBestBlock(CBlockLocator& locator) const;


@ -29,7 +29,6 @@ class TxIndex::DB : public BaseIndex::DB
{ {
public: public:
explicit DB(size_t n_cache_size, bool f_memory = false, bool f_wipe = false); explicit DB(size_t n_cache_size, bool f_memory = false, bool f_wipe = false);
~DB() override {}
/// Read the disk location of the transaction data with the given hash. Returns false if the /// Read the disk location of the transaction data with the given hash. Returns false if the
/// transaction hash is not indexed. /// transaction hash is not indexed.


@ -22,7 +22,6 @@
#include <httprpc.h> #include <httprpc.h>
#include <index/txindex.h> #include <index/txindex.h>
#include <key.h> #include <key.h>
#include <lbry.h>
#include <validation.h> #include <validation.h>
#include <miner.h> #include <miner.h>
#include <netbase.h> #include <netbase.h>
@ -371,7 +370,6 @@ void SetupServerArgs()
gArgs.AddArg("-datadir=<dir>", "Specify data directory", false, OptionsCategory::OPTIONS); gArgs.AddArg("-datadir=<dir>", "Specify data directory", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbbatchsize", strprintf("Maximum database write batch size in bytes (default: %u)", nDefaultDbBatchSize), true, OptionsCategory::OPTIONS); gArgs.AddArg("-dbbatchsize", strprintf("Maximum database write batch size in bytes (default: %u)", nDefaultDbBatchSize), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbcache=<n>", strprintf("Set database cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS); gArgs.AddArg("-dbcache=<n>", strprintf("Set database cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-claimtriecache=<n>", strprintf("Set claim trie cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), false, OptionsCategory::OPTIONS); gArgs.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-feefilter", strprintf("Tell other nodes to filter invs to us by our mempool min fee (default: %u)", DEFAULT_FEEFILTER), true, OptionsCategory::OPTIONS); gArgs.AddArg("-feefilter", strprintf("Tell other nodes to filter invs to us by our mempool min fee (default: %u)", DEFAULT_FEEFILTER), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", false, OptionsCategory::OPTIONS); gArgs.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", false, OptionsCategory::OPTIONS);
@ -399,7 +397,6 @@ void SetupServerArgs()
hidden_args.emplace_back("-sysperms"); hidden_args.emplace_back("-sysperms");
#endif #endif
gArgs.AddArg("-txindex", strprintf("Maintain a full transaction index, used by the getrawtransaction rpc call (default: %u)", DEFAULT_TXINDEX), false, OptionsCategory::OPTIONS); gArgs.AddArg("-txindex", strprintf("Maintain a full transaction index, used by the getrawtransaction rpc call (default: %u)", DEFAULT_TXINDEX), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-memfile=<GiB>", "Use a memory mapped file for the claimtrie allocations (default: use RAM instead)", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-addnode=<ip>", "Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.", false, OptionsCategory::CONNECTION); gArgs.AddArg("-addnode=<ip>", "Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.", false, OptionsCategory::CONNECTION);
gArgs.AddArg("-banscore=<n>", strprintf("Threshold for disconnecting misbehaving peers (default: %u)", DEFAULT_BANSCORE_THRESHOLD), false, OptionsCategory::CONNECTION); gArgs.AddArg("-banscore=<n>", strprintf("Threshold for disconnecting misbehaving peers (default: %u)", DEFAULT_BANSCORE_THRESHOLD), false, OptionsCategory::CONNECTION);
@ -486,6 +483,7 @@ void SetupServerArgs()
CURRENCY_UNIT, FormatMoney(DEFAULT_TRANSACTION_MAXFEE)), false, OptionsCategory::DEBUG_TEST); CURRENCY_UNIT, FormatMoney(DEFAULT_TRANSACTION_MAXFEE)), false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-printpriority", strprintf("Log transaction fee per kB when mining blocks (default: %u)", DEFAULT_PRINTPRIORITY), true, OptionsCategory::DEBUG_TEST); gArgs.AddArg("-printpriority", strprintf("Log transaction fee per kB when mining blocks (default: %u)", DEFAULT_PRINTPRIORITY), true, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-printtoconsole", "Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)", false, OptionsCategory::DEBUG_TEST); gArgs.AddArg("-printtoconsole", "Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)", false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-shrinkdebugfile", "Shrink debug.log file on client startup (default: 1 when no -debug)", false, OptionsCategory::DEBUG_TEST);
gArgs.AddArg("-uacomment=<cmt>", "Append comment to the user agent string", false, OptionsCategory::DEBUG_TEST); gArgs.AddArg("-uacomment=<cmt>", "Append comment to the user agent string", false, OptionsCategory::DEBUG_TEST);
SetupChainParamsBaseOptions(); SetupChainParamsBaseOptions();
@ -1235,6 +1233,11 @@ bool AppInitMain()
CreatePidFile(GetPidFile(), getpid()); CreatePidFile(GetPidFile(), getpid());
#endif #endif
if (g_logger->m_print_to_file) { if (g_logger->m_print_to_file) {
if (gArgs.GetBoolArg("-shrinkdebugfile", g_logger->DefaultShrinkDebugFile())) {
// Do this first since it both loads a bunch of debug.log into memory,
// and because this needs to happen before any other debug.log printing
g_logger->ShrinkDebugFile();
}
if (!g_logger->OpenDebugLog()) { if (!g_logger->OpenDebugLog()) {
return InitError(strprintf("Could not open debug log file %s", return InitError(strprintf("Could not open debug log file %s",
g_logger->m_file_path.string())); g_logger->m_file_path.string()));
@ -1437,8 +1440,6 @@ bool AppInitMain()
LogPrintf("* Using %.1fMiB for chain state database\n", nCoinDBCache * (1.0 / 1024 / 1024)); LogPrintf("* Using %.1fMiB for chain state database\n", nCoinDBCache * (1.0 / 1024 / 1024));
LogPrintf("* Using %.1fMiB for in-memory UTXO set (plus up to %.1fMiB of unused mempool space)\n", nCoinCacheUsage * (1.0 / 1024 / 1024), nMempoolSizeMax * (1.0 / 1024 / 1024)); LogPrintf("* Using %.1fMiB for in-memory UTXO set (plus up to %.1fMiB of unused mempool space)\n", nCoinCacheUsage * (1.0 / 1024 / 1024), nMempoolSizeMax * (1.0 / 1024 / 1024));
g_memfileSize = gArgs.GetArg("-memfile", 0u);
bool fLoaded = false; bool fLoaded = false;
while (!fLoaded && !ShutdownRequested()) { while (!fLoaded && !ShutdownRequested()) {
bool fReset = fReindex; bool fReset = fReindex;
@ -1460,10 +1461,7 @@ bool AppInitMain()
pblocktree.reset(); pblocktree.reset();
pblocktree.reset(new CBlockTreeDB(nBlockTreeDBCache, false, fReset)); pblocktree.reset(new CBlockTreeDB(nBlockTreeDBCache, false, fReset));
delete pclaimTrie; delete pclaimTrie;
int64_t trieCacheMB = gArgs.GetArg("-claimtriecache", nDefaultDbCache); pclaimTrie = new CClaimTrie(false, fReindex);
trieCacheMB = std::min(trieCacheMB, nMaxDbCache);
trieCacheMB = std::max(trieCacheMB, nMinDbCache);
pclaimTrie = new CClaimTrie(false, fReindex || fReindexChainState, 32, trieCacheMB);
if (fReset) { if (fReset) {
pblocktree->WriteReindexing(true); pblocktree->WriteReindexing(true);
@ -1505,6 +1503,12 @@ bool AppInitMain()
break; break;
} }
if (!pclaimTrie->ReadFromDisk(true))
{
strLoadError = _("Error loading the claim trie from disk");
break;
}
// At this point we're either in reindex or we've loaded a useful // At this point we're either in reindex or we've loaded a useful
// block tree into mapBlockIndex! // block tree into mapBlockIndex!
@ -1537,13 +1541,6 @@ bool AppInitMain()
assert(chainActive.Tip() != nullptr); assert(chainActive.Tip() != nullptr);
} }
CClaimTrieCache trieCache(pclaimTrie);
if (!trieCache.ReadFromDisk(chainActive.Tip()))
{
strLoadError = _("Error loading the claim trie from disk");
break;
}
if (!fReset) { if (!fReset) {
// Note that RewindBlockIndex MUST run even if we're about to -reindex-chainstate. // Note that RewindBlockIndex MUST run even if we're about to -reindex-chainstate.
// It both disconnects blocks based on chainActive, and drops block data in // It both disconnects blocks based on chainActive, and drops block data in
@ -1704,6 +1701,9 @@ bool AppInitMain()
} }
LogPrintf("nBestHeight = %d\n", chain_active_height); LogPrintf("nBestHeight = %d\n", chain_active_height);
const Consensus::Params& consensusParams = Params().GetConsensus();
pclaimTrie->setExpirationTime(consensusParams.GetExpirationTime(chain_active_height));
if (gArgs.GetBoolArg("-listenonion", DEFAULT_LISTEN_ONION)) if (gArgs.GetBoolArg("-listenonion", DEFAULT_LISTEN_ONION))
StartTorControl(); StartTorControl();

View file

@ -3,8 +3,6 @@
#include <cstdio> #include <cstdio>
uint32_t g_memfileSize = 0;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params) unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{ {
if (params.fPowNoRetargeting) if (params.fPowNoRetargeting)

View file

@ -4,7 +4,6 @@
#include <chain.h> #include <chain.h>
#include <chainparams.h> #include <chainparams.h>
extern uint32_t g_memfileSize;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nLastRetargetTime, const Consensus::Params& params); unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nLastRetargetTime, const Consensus::Params& params);
#endif #endif

View file

@ -221,9 +221,6 @@ class RandomAccessFile {
// Get a name for the file, only for error reporting // Get a name for the file, only for error reporting
virtual std::string GetName() const = 0; virtual std::string GetName() const = 0;
virtual char* AllocateScratch(std::size_t size) const { return new char[size]; };
virtual void DeallocateScratch(char* pointer) const { delete[] pointer; };
private: private:
// No copying allowed // No copying allowed
RandomAccessFile(const RandomAccessFile&); RandomAccessFile(const RandomAccessFile&);
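For orientation, a trimmed-down sketch of the scratch-allocation hook that one side of this hunk adds to RandomAccessFile (illustrative types only, not leveldb's real headers): a file class that serves reads straight out of its own buffer can hand back a null scratch pointer, and callers such as ReadBlock in table/format.cc then release it through DeallocateScratch instead of calling delete[] directly.

#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>

struct MiniRandomAccessFile {
    virtual ~MiniRandomAccessFile() = default;
    virtual bool Read(std::size_t offset, std::size_t n, const char** result, char* scratch) const = 0;
    // Default behaviour mirrors the left-hand hunk: heap scratch, released with delete[].
    virtual char* AllocateScratch(std::size_t size) const { return new char[size]; }
    virtual void DeallocateScratch(char* p) const { delete[] p; }
};

// A memory-backed file returns pointers into its own buffer, so it needs no scratch at all.
struct MemFile : MiniRandomAccessFile {
    std::string data{"hello block"};
    bool Read(std::size_t offset, std::size_t n, const char** result, char*) const override {
        if (offset + n > data.size()) return false;
        *result = data.data() + offset;
        return true;
    }
    char* AllocateScratch(std::size_t) const override { return nullptr; }
    void DeallocateScratch(char* p) const override { assert(p == nullptr); }
};

int main() {
    MemFile f;
    char* buf = f.AllocateScratch(5);       // nullptr here: the caller never manages this memory itself
    const char* out = nullptr;
    bool ok = f.Read(6, 5, &out, buf);
    assert(ok && std::strncmp(out, "block", 5) == 0);
    f.DeallocateScratch(buf);               // symmetric release instead of a raw delete[]
    return 0;
}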

View file

@ -73,15 +73,15 @@ Status ReadBlock(RandomAccessFile* file,
// Read the block contents as well as the type/crc footer. // Read the block contents as well as the type/crc footer.
// See table_builder.cc for the code that built this structure. // See table_builder.cc for the code that built this structure.
size_t n = static_cast<size_t>(handle.size()); size_t n = static_cast<size_t>(handle.size());
char* buf = file->AllocateScratch(n + kBlockTrailerSize); char* buf = new char[n + kBlockTrailerSize];
Slice contents; Slice contents;
Status s = file->Read(handle.offset(), n + kBlockTrailerSize, &contents, buf); Status s = file->Read(handle.offset(), n + kBlockTrailerSize, &contents, buf);
if (!s.ok()) { if (!s.ok()) {
file->DeallocateScratch(buf); delete[] buf;
return s; return s;
} }
if (contents.size() != n + kBlockTrailerSize) { if (contents.size() != n + kBlockTrailerSize) {
file->DeallocateScratch(buf); delete[] buf;
return Status::Corruption("truncated block read", file->GetName()); return Status::Corruption("truncated block read", file->GetName());
} }
@ -91,7 +91,7 @@ Status ReadBlock(RandomAccessFile* file,
const uint32_t crc = crc32c::Unmask(DecodeFixed32(data + n + 1)); const uint32_t crc = crc32c::Unmask(DecodeFixed32(data + n + 1));
const uint32_t actual = crc32c::Value(data, n + 1); const uint32_t actual = crc32c::Value(data, n + 1);
if (actual != crc) { if (actual != crc) {
file->DeallocateScratch(buf); delete[] buf;
s = Status::Corruption("block checksum mismatch", file->GetName()); s = Status::Corruption("block checksum mismatch", file->GetName());
return s; return s;
} }
@ -103,7 +103,7 @@ Status ReadBlock(RandomAccessFile* file,
// File implementation gave us pointer to some other data. // File implementation gave us pointer to some other data.
// Use it directly under the assumption that it will be live // Use it directly under the assumption that it will be live
// while the file is open. // while the file is open.
file->DeallocateScratch(buf); delete[] buf;
result->data = Slice(data, n); result->data = Slice(data, n);
result->heap_allocated = false; result->heap_allocated = false;
result->cachable = false; // Do not double-cache result->cachable = false; // Do not double-cache
@ -118,23 +118,23 @@ Status ReadBlock(RandomAccessFile* file,
case kSnappyCompression: { case kSnappyCompression: {
size_t ulength = 0; size_t ulength = 0;
if (!port::Snappy_GetUncompressedLength(data, n, &ulength)) { if (!port::Snappy_GetUncompressedLength(data, n, &ulength)) {
file->DeallocateScratch(buf); delete[] buf;
return Status::Corruption("corrupted compressed block contents", file->GetName()); return Status::Corruption("corrupted compressed block contents", file->GetName());
} }
char* ubuf = new char[ulength]; char* ubuf = new char[ulength];
if (!port::Snappy_Uncompress(data, n, ubuf)) { if (!port::Snappy_Uncompress(data, n, ubuf)) {
file->DeallocateScratch(buf); delete[] buf;
delete[] ubuf; delete[] ubuf;
return Status::Corruption("corrupted compressed block contents", file->GetName()); return Status::Corruption("corrupted compressed block contents", file->GetName());
} }
file->DeallocateScratch(buf); delete[] buf;
result->data = Slice(ubuf, ulength); result->data = Slice(ubuf, ulength);
result->heap_allocated = true; result->heap_allocated = true;
result->cachable = true; result->cachable = true;
break; break;
} }
default: default:
file->DeallocateScratch(buf); delete[] buf;
return Status::Corruption("bad block type", file->GetName()); return Status::Corruption("bad block type", file->GetName());
} }

View file

@ -162,7 +162,6 @@ class PosixRandomAccessFile: public RandomAccessFile {
} }
Status s; Status s;
assert(scratch);
ssize_t r = pread(fd, scratch, n, static_cast<off_t>(offset)); ssize_t r = pread(fd, scratch, n, static_cast<off_t>(offset));
*result = Slice(scratch, (r < 0) ? 0 : r); *result = Slice(scratch, (r < 0) ? 0 : r);
if (r < 0) { if (r < 0) {
@ -189,19 +188,19 @@ class PosixMmapReadableFile: public RandomAccessFile {
public: public:
// base[0,length-1] contains the mmapped contents of the file. // base[0,length-1] contains the mmapped contents of the file.
PosixMmapReadableFile(std::string fname, void* base, size_t length, PosixMmapReadableFile(const std::string& fname, void* base, size_t length,
Limiter* limiter) Limiter* limiter)
: filename_(std::move(fname)), mmapped_region_(base), length_(length), : filename_(fname), mmapped_region_(base), length_(length),
limiter_(limiter) { limiter_(limiter) {
} }
~PosixMmapReadableFile() override { virtual ~PosixMmapReadableFile() {
munmap(mmapped_region_, length_); munmap(mmapped_region_, length_);
limiter_->Release(); limiter_->Release();
} }
Status Read(uint64_t offset, size_t n, Slice* result, virtual Status Read(uint64_t offset, size_t n, Slice* result,
char* scratch) const override { char* scratch) const {
Status s; Status s;
if (offset + n > length_) { if (offset + n > length_) {
*result = Slice(); *result = Slice();
@ -212,10 +211,7 @@ class PosixMmapReadableFile: public RandomAccessFile {
return s; return s;
} }
std::string GetName() const override { return filename_; } virtual std::string GetName() const { return filename_; }
char* AllocateScratch(std::size_t size) const override { return nullptr; }
void DeallocateScratch(char* pointer) const override { assert(pointer == nullptr); }
}; };
class PosixWritableFile : public WritableFile { class PosixWritableFile : public WritableFile {
@ -589,8 +585,8 @@ static int MaxMmaps() {
if (mmap_limit >= 0) { if (mmap_limit >= 0) {
return mmap_limit; return mmap_limit;
} }
// Up to 400 mmaps for 64-bit binaries (800MB); none for smaller pointer sizes. // Up to 4096 mmaps for 64-bit binaries; none for smaller pointer sizes.
mmap_limit = sizeof(void*) >= 8 ? 400 : 0; mmap_limit = sizeof(void*) >= 8 ? 4096 : 0;
return mmap_limit; return mmap_limit;
} }

View file

@ -37,12 +37,6 @@ bool BCLog::Logger::OpenDebugLog()
assert(m_fileout == nullptr); assert(m_fileout == nullptr);
assert(!m_file_path.empty()); assert(!m_file_path.empty());
if (fs::exists(m_file_path)) {
fs::path old_file_path(m_file_path);
old_file_path += ".old";
fs::rename(m_file_path, old_file_path);
}
m_fileout = fsbridge::fopen(m_file_path, "a"); m_fileout = fsbridge::fopen(m_file_path, "a");
if (!m_fileout) { if (!m_fileout) {
return false; return false;
@ -89,6 +83,11 @@ bool BCLog::Logger::WillLogCategory(BCLog::LogFlags category) const
return (m_categories.load(std::memory_order_relaxed) & category) != 0; return (m_categories.load(std::memory_order_relaxed) & category) != 0;
} }
bool BCLog::Logger::DefaultShrinkDebugFile() const
{
return m_categories == BCLog::NONE;
}
struct CLogCategoryDesc struct CLogCategoryDesc
{ {
BCLog::LogFlags flag; BCLog::LogFlags flag;
@ -120,7 +119,6 @@ const CLogCategoryDesc LogCategories[] =
{BCLog::COINDB, "coindb"}, {BCLog::COINDB, "coindb"},
{BCLog::QT, "qt"}, {BCLog::QT, "qt"},
{BCLog::LEVELDB, "leveldb"}, {BCLog::LEVELDB, "leveldb"},
{BCLog::CLAIMS, "claims"},
{BCLog::ALL, "1"}, {BCLog::ALL, "1"},
{BCLog::ALL, "all"}, {BCLog::ALL, "all"},
}; };
@ -232,3 +230,44 @@ void BCLog::Logger::LogPrintStr(const std::string &str)
} }
} }
} }
void BCLog::Logger::ShrinkDebugFile()
{
// Amount of debug.log to save at end when shrinking (must fit in memory)
constexpr size_t RECENT_DEBUG_HISTORY_SIZE = 10 * 1000000;
assert(!m_file_path.empty());
// Scroll debug.log if it's getting too big
FILE* file = fsbridge::fopen(m_file_path, "r");
// Special files (e.g. device nodes) may not have a size.
size_t log_size = 0;
try {
log_size = fs::file_size(m_file_path);
} catch (boost::filesystem::filesystem_error &) {}
// If debug.log file is more than 10% bigger than RECENT_DEBUG_HISTORY_SIZE
// trim it down by saving only the last RECENT_DEBUG_HISTORY_SIZE bytes
if (file && log_size > 11 * (RECENT_DEBUG_HISTORY_SIZE / 10))
{
// Restart the file with some of the end
std::vector<char> vch(RECENT_DEBUG_HISTORY_SIZE, 0);
if (fseek(file, -((long)vch.size()), SEEK_END)) {
LogPrintf("Failed to shrink debug log file: fseek(...) failed\n");
fclose(file);
return;
}
int nBytes = fread(vch.data(), 1, vch.size(), file);
fclose(file);
file = fsbridge::fopen(m_file_path, "w");
if (file)
{
fwrite(vch.data(), 1, nBytes, file);
fclose(file);
}
}
else if (file != nullptr)
fclose(file);
}
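The trim condition above fires only once the log is 10% past the retained size: with RECENT_DEBUG_HISTORY_SIZE = 10 * 1000000, the cut-off works out to 11 * (10000000 / 10) = 11000000 bytes. A standalone check of that arithmetic:

#include <cassert>
#include <cstddef>

int main() {
    constexpr std::size_t RECENT_DEBUG_HISTORY_SIZE = 10 * 1000000;          // bytes kept at the tail
    constexpr std::size_t threshold = 11 * (RECENT_DEBUG_HISTORY_SIZE / 10); // 10% headroom before shrinking
    static_assert(threshold == 11 * 1000000, "shrink kicks in just past 11 MB");
    assert(!(10999999 > threshold) && (11000001 > threshold));               // shrink only once over ~11 MB
    return 0;
}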

View file

@ -53,7 +53,6 @@ namespace BCLog {
COINDB = (1 << 18), COINDB = (1 << 18),
QT = (1 << 19), QT = (1 << 19),
LEVELDB = (1 << 20), LEVELDB = (1 << 20),
CLAIMS = (1 << 30),
ALL = ~(uint32_t)0, ALL = ~(uint32_t)0,
}; };
@ -93,6 +92,7 @@ namespace BCLog {
bool Enabled() const { return m_print_to_console || m_print_to_file; } bool Enabled() const { return m_print_to_console || m_print_to_file; }
bool OpenDebugLog(); bool OpenDebugLog();
void ShrinkDebugFile();
uint32_t GetCategoryMask() const { return m_categories.load(); } uint32_t GetCategoryMask() const { return m_categories.load(); }
@ -102,6 +102,8 @@ namespace BCLog {
bool DisableCategory(const std::string& str); bool DisableCategory(const std::string& str);
bool WillLogCategory(LogFlags category) const; bool WillLogCategory(LogFlags category) const;
bool DefaultShrinkDebugFile() const;
}; };
} // namespace BCLog } // namespace BCLog

View file

@ -159,8 +159,8 @@ static inline size_t DynamicUsage(const std::unordered_set<X, Y>& s)
return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(sizeof(void*) * s.bucket_count()); return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(sizeof(void*) * s.bucket_count());
} }
template<typename X, typename Y, typename ... Z> template<typename X, typename Y, typename Z>
static inline size_t DynamicUsage(const std::unordered_map<X, Y, Z...>& m) static inline size_t DynamicUsage(const std::unordered_map<X, Y, Z>& m)
{ {
return MallocUsage(sizeof(unordered_node<std::pair<const X, Y> >)) * m.size() + MallocUsage(sizeof(void*) * m.bucket_count()); return MallocUsage(sizeof(unordered_node<std::pair<const X, Y> >)) * m.size() + MallocUsage(sizeof(void*) * m.bucket_count());
} }
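One side of this hunk widens DynamicUsage to a parameter pack so that std::unordered_map instantiations which name extra template arguments (for example a custom hasher) still match the overload. A standalone sketch of that template mechanic, with ApproxUsage standing in for the real MallocUsage-based accounting:

#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical custom hasher; any map that names extra template arguments behaves the same way.
struct SaltedHasher {
    std::size_t operator()(const std::string& s) const { return std::hash<std::string>{}(s) ^ 0x9e3779b9u; }
};

// With the parameter pack, one overload accepts unordered_map<K, V>,
// unordered_map<K, V, Hash> and unordered_map<K, V, Hash, Eq, Alloc> alike.
template <typename X, typename Y, typename... Z>
std::size_t ApproxUsage(const std::unordered_map<X, Y, Z...>& m)
{
    return m.size() * (sizeof(std::pair<const X, Y>) + 2 * sizeof(void*)) + m.bucket_count() * sizeof(void*);
}

int main()
{
    std::unordered_map<std::string, int> plain{{"a", 1}};
    std::unordered_map<std::string, int, SaltedHasher> salted{{"b", 2}};
    return ApproxUsage(plain) > 0 && ApproxUsage(salted) > 0 ? 0 : 1;
}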

View file

@ -53,35 +53,6 @@ int64_t UpdateTime(CBlockHeader* pblock, const Consensus::Params& consensusParam
return nNewTime - nOldTime; return nNewTime - nOldTime;
} }
void blockToCache(const CBlock* pblock, CClaimTrieCache& trieCache, int nHeight)
{
insertUndoType dummyInsertUndo;
claimQueueRowType dummyExpireUndo;
insertUndoType dummyInsertSupportUndo;
supportQueueRowType dummyExpireSupportUndo;
std::vector<std::pair<std::string, int> > dummyTakeoverHeightUndo;
CUpdateCacheCallbacks callbacks = {
.findScriptKey = [&pblock](const COutPoint& point) {
for (auto& tx : pblock->vtx)
if (tx->GetHash() == point.hash && point.n < tx->vout.size())
return tx->vout[point.n].scriptPubKey;
return CScript{};
},
.claimUndoHeights = {}
};
trieCache.initializeIncrement();
CCoinsViewCache view(pcoinsTip.get());
for (auto& tx : pblock->vtx)
if (!tx->IsCoinBase())
UpdateCache(*tx, trieCache, view, nHeight, callbacks);
trieCache.incrementBlock(dummyInsertUndo, dummyExpireUndo, dummyInsertSupportUndo, dummyExpireSupportUndo, dummyTakeoverHeightUndo);
}
BlockAssembler::Options::Options() { BlockAssembler::Options::Options() {
blockMinFeeRate = CFeeRate(DEFAULT_BLOCK_MIN_TX_FEE); blockMinFeeRate = CFeeRate(DEFAULT_BLOCK_MIN_TX_FEE);
nBlockMaxWeight = DEFAULT_BLOCK_MAX_WEIGHT; nBlockMaxWeight = DEFAULT_BLOCK_MAX_WEIGHT;
@ -147,6 +118,12 @@ std::unique_ptr<CBlockTemplate> BlockAssembler::CreateNewBlock(const CScript& sc
CBlockIndex* pindexPrev = chainActive.Tip(); CBlockIndex* pindexPrev = chainActive.Tip();
assert(pindexPrev != nullptr); assert(pindexPrev != nullptr);
nHeight = pindexPrev->nHeight + 1; nHeight = pindexPrev->nHeight + 1;
if (!pclaimTrie)
{
LogPrintf("CreateNewBlock(): pclaimTrie is invalid");
return NULL;
}
CClaimTrieCache trieCache(pclaimTrie);
pblock->nVersion = ComputeBlockVersion(pindexPrev, chainparams.GetConsensus()); pblock->nVersion = ComputeBlockVersion(pindexPrev, chainparams.GetConsensus());
// -regtest only: allow overriding block.nVersion with // -regtest only: allow overriding block.nVersion with
// -blockversion=N to test forking scenarios // -blockversion=N to test forking scenarios
@ -173,7 +150,7 @@ std::unique_ptr<CBlockTemplate> BlockAssembler::CreateNewBlock(const CScript& sc
int nPackagesSelected = 0; int nPackagesSelected = 0;
int nDescendantsUpdated = 0; int nDescendantsUpdated = 0;
addPackageTxs(nPackagesSelected, nDescendantsUpdated); addPackageTxs(nPackagesSelected, nDescendantsUpdated, trieCache);
int64_t nTime1 = GetTimeMicros(); int64_t nTime1 = GetTimeMicros();
@ -201,14 +178,20 @@ std::unique_ptr<CBlockTemplate> BlockAssembler::CreateNewBlock(const CScript& sc
pblock->nNonce = 0; pblock->nNonce = 0;
pblocktemplate->vTxSigOpsCost[0] = WITNESS_SCALE_FACTOR * GetLegacySigOpCount(*pblock->vtx[0]); pblocktemplate->vTxSigOpsCost[0] = WITNESS_SCALE_FACTOR * GetLegacySigOpCount(*pblock->vtx[0]);
CClaimTrieCache trieCache(pclaimTrie); insertUndoType dummyInsertUndo;
blockToCache(pblock, trieCache, nHeight); claimQueueRowType dummyExpireUndo;
insertUndoType dummyInsertSupportUndo;
supportQueueRowType dummyExpireSupportUndo;
std::vector<std::pair<std::string, int> > dummyTakeoverHeightUndo;
trieCache.incrementBlock(dummyInsertUndo, dummyExpireUndo, dummyInsertSupportUndo, dummyExpireSupportUndo, dummyTakeoverHeightUndo);
pblock->hashClaimTrie = trieCache.getMerkleHash(); pblock->hashClaimTrie = trieCache.getMerkleHash();
CValidationState state; CValidationState state;
if (!TestBlockValidity(state, chainparams, *pblock, pindexPrev, false, false)) { if (!TestBlockValidity(state, chainparams, *pblock, pindexPrev, false, false)) {
if (!trieCache.empty()) // if (trieCache.checkConsistency()) // TODO: bring back after prefixtrie merge
trieCache.dumpToLog(trieCache.find({})); // trieCache.dumpToLog(trieCache.begin());
throw std::runtime_error(strprintf("%s: TestBlockValidity failed: %s", __func__, FormatStateMessage(state))); throw std::runtime_error(strprintf("%s: TestBlockValidity failed: %s", __func__, FormatStateMessage(state)));
} }
int64_t nTime2 = GetTimeMicros(); int64_t nTime2 = GetTimeMicros();
@ -328,6 +311,42 @@ void BlockAssembler::SortForBlock(const CTxMemPool::setEntries& package, std::ve
std::sort(sortedEntries.begin(), sortedEntries.end(), CompareTxIterByAncestorCount()); std::sort(sortedEntries.begin(), sortedEntries.end(), CompareTxIterByAncestorCount());
} }
void iterToTrieCache(CTxMemPool::txiter iter, CClaimTrieCache& trieCache, const CTxMemPool::setEntries& entries, int nHeight)
{
spentClaimsType spentClaims;
auto& tx = iter->GetTx();
CCoinsViewCache view(pcoinsTip.get());
for (const CTxIn& txin: tx.vin) {
const Coin& coin = view.AccessCoin(txin.prevout);
CScript scriptPubKey;
int scriptHeight = nHeight;
if (coin.out.IsNull()) {
for (auto entry : entries) {
auto& e = entry->GetTx();
if (e.GetHash() != txin.prevout.hash || txin.prevout.n >= e.vout.size())
continue;
scriptPubKey = e.vout[txin.prevout.n].scriptPubKey;
break;
}
} else {
scriptPubKey = coin.out.scriptPubKey;
scriptHeight = coin.nHeight;
}
if (!scriptPubKey.empty()) {
int throwaway;
SpendClaim(trieCache, scriptPubKey, COutPoint(txin.prevout.hash, txin.prevout.n), scriptHeight, throwaway, spentClaims);
}
}
for (unsigned int i = 0; i < tx.vout.size(); ++i) {
const CTxOut& txout = tx.vout[i];
if (!txout.scriptPubKey.empty())
AddSpendClaim(trieCache, txout.scriptPubKey, COutPoint(tx.GetHash(), i), txout.nValue, nHeight, spentClaims);
}
}
// This transaction selection algorithm orders the mempool based // This transaction selection algorithm orders the mempool based
// on feerate of a transaction including all unconfirmed ancestors. // on feerate of a transaction including all unconfirmed ancestors.
// Since we don't remove transactions from the mempool as we select them // Since we don't remove transactions from the mempool as we select them
@ -338,7 +357,7 @@ void BlockAssembler::SortForBlock(const CTxMemPool::setEntries& package, std::ve
// Each time through the loop, we compare the best transaction in // Each time through the loop, we compare the best transaction in
// mapModifiedTxs with the next transaction in the mempool to decide what // mapModifiedTxs with the next transaction in the mempool to decide what
// transaction package to work on next. // transaction package to work on next.
void BlockAssembler::addPackageTxs(int &nPackagesSelected, int &nDescendantsUpdated) void BlockAssembler::addPackageTxs(int &nPackagesSelected, int &nDescendantsUpdated, CClaimTrieCache& trieCache)
{ {
// mapModifiedTx will store sorted packages after they are modified // mapModifiedTx will store sorted packages after they are modified
// because some of their txs are already in the block // because some of their txs are already in the block
@ -455,6 +474,7 @@ void BlockAssembler::addPackageTxs(int &nPackagesSelected, int &nDescendantsUpda
SortForBlock(ancestors, sortedEntries); SortForBlock(ancestors, sortedEntries);
for (size_t i=0; i<sortedEntries.size(); ++i) { for (size_t i=0; i<sortedEntries.size(); ++i) {
iterToTrieCache(sortedEntries[i], trieCache, inBlock, nHeight);
AddToBlock(sortedEntries[i]); AddToBlock(sortedEntries[i]);
// Erase from the modified set, if present // Erase from the modified set, if present
mapModifiedTx.erase(sortedEntries[i]); mapModifiedTx.erase(sortedEntries[i]);

View file

@ -170,7 +170,7 @@ private:
/** Add transactions based on feerate including unconfirmed ancestors /** Add transactions based on feerate including unconfirmed ancestors
* Increments nPackagesSelected / nDescendantsUpdated with corresponding * Increments nPackagesSelected / nDescendantsUpdated with corresponding
* statistics from the package selection (for logging statistics). */ * statistics from the package selection (for logging statistics). */
void addPackageTxs(int &nPackagesSelected, int &nDescendantsUpdated) EXCLUSIVE_LOCKS_REQUIRED(mempool.cs); void addPackageTxs(int &nPackagesSelected, int &nDescendantsUpdated, CClaimTrieCache& trieCache) EXCLUSIVE_LOCKS_REQUIRED(mempool.cs);
// helper functions for addPackageTxs() // helper functions for addPackageTxs()
/** Remove confirmed (inBlock) entries from given set */ /** Remove confirmed (inBlock) entries from given set */

View file

@ -1,6 +1,9 @@
#include <boost/foreach.hpp>
#include "nameclaim.h" #include "nameclaim.h"
#include "hash.h" #include "hash.h"
#include "util.h" #include "util.h"
#include "claimtrie.h"
std::vector<unsigned char> uint32_t_to_vch(uint32_t n) std::vector<unsigned char> uint32_t_to_vch(uint32_t n)
{ {
@ -18,35 +21,25 @@ uint32_t vch_to_uint32_t(std::vector<unsigned char>& vchN)
uint32_t n; uint32_t n;
static const size_t uint32Size = sizeof(uint32_t); static const size_t uint32Size = sizeof(uint32_t);
if (vchN.size() != uint32Size) { if (vchN.size() != uint32Size) {
LogPrintf("%s() : a vector<unsigned char> with size other than 4 has been given\n", __func__); LogPrintf("%s() : a vector<unsigned char> with size other than 4 has been given", __func__);
return 0; return 0;
} }
n = vchN[0] << 24 | vchN[1] << 16 | vchN[2] << 8 | vchN[3]; n = vchN[0] << 24 | vchN[1] << 16 | vchN[2] << 8 | vchN[3];
return n; return n;
} }
CScript ClaimNameScript(std::string name, std::string value, bool fakeSuffix) CScript ClaimNameScript(std::string name, std::string value)
{ {
std::vector<unsigned char> vchName(name.begin(), name.end()); std::vector<unsigned char> vchName(name.begin(), name.end());
std::vector<unsigned char> vchValue(value.begin(), value.end()); std::vector<unsigned char> vchValue(value.begin(), value.end());
auto ret = CScript() << OP_CLAIM_NAME << vchName << vchValue << OP_2DROP << OP_DROP; return CScript() << OP_CLAIM_NAME << vchName << vchValue << OP_2DROP << OP_DROP << OP_TRUE;
if (fakeSuffix) ret.push_back(OP_TRUE);
return ret;
} }
CScript SupportClaimScript(std::string name, uint160 claimId, std::string value, bool fakeSuffix) CScript SupportClaimScript(std::string name, uint160 claimId)
{ {
std::vector<unsigned char> vchName(name.begin(), name.end()); std::vector<unsigned char> vchName(name.begin(), name.end());
std::vector<unsigned char> vchClaimId(claimId.begin(), claimId.end()); std::vector<unsigned char> vchClaimId(claimId.begin(), claimId.end());
CScript ret; return CScript() << OP_SUPPORT_CLAIM << vchName << vchClaimId << OP_2DROP << OP_DROP << OP_TRUE;
if (value.empty())
ret = CScript() << OP_SUPPORT_CLAIM << vchName << vchClaimId << OP_2DROP << OP_DROP;
else {
std::vector<unsigned char> vchValue(value.begin(), value.end());
ret = CScript() << OP_SUPPORT_CLAIM << vchName << vchClaimId << vchValue << OP_2DROP << OP_2DROP;
}
if (fakeSuffix) ret.push_back(OP_TRUE);
return ret;
} }
CScript UpdateClaimScript(std::string name, uint160 claimId, std::string value) CScript UpdateClaimScript(std::string name, uint160 claimId, std::string value)
@ -57,15 +50,14 @@ CScript UpdateClaimScript(std::string name, uint160 claimId, std::string value)
return CScript() << OP_UPDATE_CLAIM << vchName << vchClaimId << vchValue << OP_2DROP << OP_2DROP << OP_TRUE; return CScript() << OP_UPDATE_CLAIM << vchName << vchClaimId << vchValue << OP_2DROP << OP_2DROP << OP_TRUE;
} }
bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, bool allowSupportMetadata) bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams)
{ {
CScript::const_iterator pc = scriptIn.begin(); CScript::const_iterator pc = scriptIn.begin();
return DecodeClaimScript(scriptIn, op, vvchParams, pc, allowSupportMetadata); return DecodeClaimScript(scriptIn, op, vvchParams, pc);
} }
bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, CScript::const_iterator& pc, bool allowSupportMetadata) bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, CScript::const_iterator& pc)
{ {
op = -1;
opcodetype opcode; opcodetype opcode;
if (!scriptIn.GetOp(pc, opcode)) if (!scriptIn.GetOp(pc, opcode))
{ {
@ -86,7 +78,6 @@ bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector
// OP_CLAIM_NAME vchName vchValue OP_2DROP OP_DROP pubkeyscript // OP_CLAIM_NAME vchName vchValue OP_2DROP OP_DROP pubkeyscript
// OP_UPDATE_CLAIM vchName vchClaimId vchValue OP_2DROP OP_2DROP pubkeyscript // OP_UPDATE_CLAIM vchName vchClaimId vchValue OP_2DROP OP_2DROP pubkeyscript
// OP_SUPPORT_CLAIM vchName vchClaimId OP_2DROP OP_DROP pubkeyscript // OP_SUPPORT_CLAIM vchName vchClaimId OP_2DROP OP_DROP pubkeyscript
// OP_SUPPORT_CLAIM vchName vchClaimId vchValue OP_2DROP OP_2DROP pubkeyscript
// All others are invalid. // All others are invalid.
if (!scriptIn.GetOp(pc, opcode, vchParam1) || opcode < 0 || opcode > OP_PUSHDATA4) if (!scriptIn.GetOp(pc, opcode, vchParam1) || opcode < 0 || opcode > OP_PUSHDATA4)
@ -104,42 +95,35 @@ bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector
return false; return false;
} }
} }
if (op == OP_UPDATE_CLAIM)
if (!scriptIn.GetOp(pc, opcode, vchParam3)) {
if (!scriptIn.GetOp(pc, opcode, vchParam3) || opcode < 0 || opcode > OP_PUSHDATA4)
{ {
return false; return false;
} }
auto last_drop = OP_DROP; }
if (opcode >= 0 && opcode <= OP_PUSHDATA4 && op != OP_CLAIM_NAME) if (!scriptIn.GetOp(pc, opcode) || opcode != OP_2DROP)
{ {
return false;
}
if (!scriptIn.GetOp(pc, opcode)) if (!scriptIn.GetOp(pc, opcode))
{ {
return false; return false;
} }
last_drop = OP_2DROP; if ((op == OP_CLAIM_NAME || op == OP_SUPPORT_CLAIM) && opcode != OP_DROP)
}
else if (op == OP_UPDATE_CLAIM)
{ {
return false; return false;
} }
if (opcode != OP_2DROP) else if ((op == OP_UPDATE_CLAIM) && opcode != OP_2DROP)
{
return false;
}
if (!scriptIn.GetOp(pc, opcode) || opcode != last_drop)
{
return false;
}
if (op == OP_SUPPORT_CLAIM && last_drop == OP_2DROP && !allowSupportMetadata)
{ {
return false; return false;
} }
vvchParams.push_back(std::move(vchParam1)); vvchParams.push_back(vchParam1);
vvchParams.push_back(std::move(vchParam2)); vvchParams.push_back(vchParam2);
if (last_drop == OP_2DROP) if (op == OP_UPDATE_CLAIM)
{ {
vvchParams.push_back(std::move(vchParam3)); vvchParams.push_back(vchParam3);
} }
return true; return true;
} }
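As a quick orientation for the script layouts listed in the comments above, a minimal round-trip sketch using the encode/decode helpers declared in nameclaim.h (assumes the repo's own nameclaim and script headers are on the include path; this is an illustration, not a unit test from the tree):

#include <nameclaim.h>
#include <script/script.h>
#include <cassert>
#include <string>
#include <vector>

static void claim_script_roundtrip()
{
    // OP_CLAIM_NAME <name> <value> OP_2DROP OP_DROP <pubkeyscript suffix>
    CScript script = ClaimNameScript("@example", "claim value");
    int op = 0;
    std::vector<std::vector<unsigned char>> params;
    bool ok = DecodeClaimScript(script, op, params);
    assert(ok && op == OP_CLAIM_NAME);
    assert(std::string(params[0].begin(), params[0].end()) == "@example");
    assert(std::string(params[1].begin(), params[1].end()) == "claim value");
}
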
@ -200,7 +184,7 @@ CAmount CalcMinClaimTrieFee(const CTransaction& tx, const CAmount &minFeePerName
} }
CAmount min_fee = 0; CAmount min_fee = 0;
for (const CTxOut& txout: tx.vout) BOOST_FOREACH(const CTxOut& txout, tx.vout)
{ {
int op; int op;
std::vector<std::vector<unsigned char> > vvchParams; std::vector<std::vector<unsigned char> > vvchParams;

View file

@ -11,7 +11,7 @@
// This is the minimum claim fee per character in the name of an OP_CLAIM_NAME command that must // This is the minimum claim fee per character in the name of an OP_CLAIM_NAME command that must
// be attached to transactions for it to be accepted into the memory pool. // be attached to transactions for it to be accepted into the memory pool.
// Rationale: current implementation of the claim trie uses more memory for longer name claims // Rationale: current implementation of the claim trie uses more memory for longer name claims
// due to the fact that each character is assigned a trie node regardless of whether it contains // due to the fact that each chracater is assigned a trie node regardless of whether it contains
// any claims or not. In the future, we can switch to a radix tree implementation where // any claims or not. In the future, we can switch to a radix tree implementation where
// empty nodes do not take up any memory and the minimum fee can be priced on a per claim // empty nodes do not take up any memory and the minimum fee can be priced on a per claim
// basis. // basis.
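The per-character pricing described above boils down to: minimum fee = sum over the transaction's claim outputs of (name length x per-character rate). A standalone sketch of that calculation (the function name and the rate below are illustrative, not the repo's identifiers or consensus values):

#include <cstdint>
#include <numeric>
#include <string>
#include <vector>

using Amount = int64_t;

// Hypothetical stand-in for CalcMinClaimTrieFee: every character of every claimed
// name in a transaction is charged the per-character rate.
static Amount MinClaimFee(const std::vector<std::string>& claimedNames, Amount feePerChar)
{
    return std::accumulate(claimedNames.begin(), claimedNames.end(), Amount{0},
        [feePerChar](Amount acc, const std::string& name) {
            return acc + feePerChar * static_cast<Amount>(name.size());
        });
}

// e.g. MinClaimFee({"@alice", "video-1"}, /*feePerChar=*/100) == (6 + 7) * 100
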
@ -25,11 +25,11 @@
// Scripts exceeding this size are rejected in CheckTransaction in main.cpp // Scripts exceeding this size are rejected in CheckTransaction in main.cpp
#define MAX_CLAIM_NAME_SIZE 255 #define MAX_CLAIM_NAME_SIZE 255
CScript ClaimNameScript(std::string name, std::string value, bool fakeSuffix=true); CScript ClaimNameScript(std::string name, std::string value);
CScript SupportClaimScript(std::string name, uint160 claimId, std::string value="", bool fakeSuffix=true); CScript SupportClaimScript(std::string name, uint160 claimId);
CScript UpdateClaimScript(std::string name, uint160 claimId, std::string value); CScript UpdateClaimScript(std::string name, uint160 claimId, std::string value);
bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, bool allowSupportMetadata=true); bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams);
bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, CScript::const_iterator& pc, bool allowSupportMetadata=true); bool DecodeClaimScript(const CScript& scriptIn, int& op, std::vector<std::vector<unsigned char> >& vvchParams, CScript::const_iterator& pc);
CScript StripClaimScriptPrefix(const CScript& scriptIn); CScript StripClaimScriptPrefix(const CScript& scriptIn);
CScript StripClaimScriptPrefix(const CScript& scriptIn, int& op); CScript StripClaimScriptPrefix(const CScript& scriptIn, int& op);
uint160 ClaimIdHash(const uint256& txhash, uint32_t nOut); uint160 ClaimIdHash(const uint256& txhash, uint32_t nOut);

View file

@ -10,9 +10,11 @@
#include <consensus/validation.h> #include <consensus/validation.h>
#include <validation.h> #include <validation.h>
#include <coins.h> #include <coins.h>
#include <tinyformat.h>
#include <util.h>
#include <utilstrencodings.h> #include <utilstrencodings.h>
#include <nameclaim.h> #include "nameclaim.h"
CAmount GetDustThreshold(const CTxOut& txout, const CFeeRate& dustRelayFeeIn) CAmount GetDustThreshold(const CTxOut& txout, const CFeeRate& dustRelayFeeIn)
@ -115,7 +117,8 @@ bool IsStandardTx(const CTransaction& tx, std::string& reason)
unsigned int nDataOut = 0; unsigned int nDataOut = 0;
txnouttype whichType; txnouttype whichType;
for (const CTxOut& txout : tx.vout) { for (const CTxOut& txout : tx.vout) {
if (!::IsStandard(StripClaimScriptPrefix(txout.scriptPubKey), whichType)) { const CScript& scriptPubKey = StripClaimScriptPrefix(txout.scriptPubKey);
if (!::IsStandard(scriptPubKey, whichType)) {
reason = "scriptpubkey"; reason = "scriptpubkey";
return false; return false;
} }
@ -168,7 +171,8 @@ bool AreInputsStandard(const CTransaction& tx, const CCoinsViewCache& mapInputs)
std::vector<std::vector<unsigned char> > vSolutions; std::vector<std::vector<unsigned char> > vSolutions;
txnouttype whichType; txnouttype whichType;
// get the scriptPubKey corresponding to this input: // get the scriptPubKey corresponding to this input:
if (!Solver(StripClaimScriptPrefix(prev.scriptPubKey), whichType, vSolutions)) const CScript& prevScript = StripClaimScriptPrefix(prev.scriptPubKey);
if (!Solver(prevScript, whichType, vSolutions))
return false; return false;
if (whichType == TX_SCRIPTHASH) if (whichType == TX_SCRIPTHASH)
@ -204,7 +208,7 @@ bool IsWitnessStandard(const CTransaction& tx, const CCoinsViewCache& mapInputs)
const CTxOut &prev = mapInputs.AccessCoin(tx.vin[i].prevout).out; const CTxOut &prev = mapInputs.AccessCoin(tx.vin[i].prevout).out;
// get the scriptPubKey corresponding to this input: // get the scriptPubKey corresponding to this input:
CScript prevScript = StripClaimScriptPrefix(prev.scriptPubKey); CScript prevScript = prev.scriptPubKey;
if (prevScript.IsPayToScriptHash()) { if (prevScript.IsPayToScriptHash()) {
std::vector <std::vector<unsigned char> > stack; std::vector <std::vector<unsigned char> > stack;

View file

@ -45,7 +45,7 @@ static const unsigned int MAX_STANDARD_P2WSH_SCRIPT_SIZE = 3600;
* standard and should be done with care and ideally rarely. It makes sense to * standard and should be done with care and ideally rarely. It makes sense to
* only increase the dust limit after prior releases were already not creating * only increase the dust limit after prior releases were already not creating
* outputs below the new threshold */ * outputs below the new threshold */
static const unsigned int DUST_RELAY_TX_FEE = 1000; static const unsigned int DUST_RELAY_TX_FEE = 3000;
/** /**
* Standard script verification flags that standard transactions will comply * Standard script verification flags that standard transactions will comply
* with. However scripts violating these flags may still be present in valid * with. However scripts violating these flags may still be present in valid

View file

@ -1,577 +0,0 @@
#include <claimtrie.h>
#include <fs.h>
#include <lbry.h>
#include <limits>
#include <memory>
#include <prefixtrie.h>
#include <boost/interprocess/allocators/private_node_allocator.hpp>
#include <boost/interprocess/indexes/null_index.hpp>
#include <boost/interprocess/managed_mapped_file.hpp>
namespace bip = boost::interprocess;
typedef bip::basic_managed_mapped_file <
char,
bip::rbtree_best_fit<bip::null_mutex_family, bip::offset_ptr<void>>,
bip::null_index
> managed_mapped_file;
template <typename T>
using node_allocator = bip::private_node_allocator<T, managed_mapped_file::segment_manager>;
static managed_mapped_file::segment_manager* segmentManager()
{
struct CSharedMemoryFile
{
CSharedMemoryFile() : file(GetDataDir() / "shared.mem")
{
fs::remove(file);
auto size = (uint64_t)g_memfileSize * 1024ULL * 1024ULL * 1024ULL;
// using string() to remove w_char filename encoding on Windows
menaged_file.reset(new managed_mapped_file(bip::create_only, file.string().c_str(), size));
}
~CSharedMemoryFile()
{
menaged_file.reset();
fs::remove(file);
}
managed_mapped_file::segment_manager* segmentManager()
{
return menaged_file->get_segment_manager();
}
const fs::path file;
std::unique_ptr<managed_mapped_file> menaged_file;
};
static CSharedMemoryFile shem;
return shem.segmentManager();
}
template <typename T>
static node_allocator<T>& nodeAllocator()
{
static node_allocator<T> allocator(segmentManager());
return allocator;
}
template <typename T, class... Args>
static std::shared_ptr<T> nodeAllocate(Args&&... args)
{
return std::allocate_shared<T>(nodeAllocator<T>(), std::forward<Args>(args)...);
}
template <typename T, class... Args>
static std::shared_ptr<T> allocateShared(Args&&... args)
{
static auto allocate = g_memfileSize ? nodeAllocate<T, Args...> : std::make_shared<T, Args...>;
try {
return allocate(std::forward<Args>(args)...);
}
catch (const bip::bad_alloc&) {
allocate = std::make_shared<T, Args...>; // in case we fill up the memfile
LogPrint(BCLog::BENCH, "WARNING: The memfile is full; reverting to the RAM allocator for %s.\n", typeid(T).name());
return allocate(std::forward<Args>(args)...);
}
}
template <typename TKey, typename TData>
template <bool IsConst>
CPrefixTrie<TKey, TData>::Iterator<IsConst>::Iterator(const TKey& name, const std::shared_ptr<Node>& node) noexcept : name(name), node(node)
{
}
template <typename TKey, typename TData>
template <bool IsConst>
template <bool C>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>& CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator=(const CPrefixTrie<TKey, TData>::Iterator<C>& o) noexcept
{
name = o.name;
node = o.node;
stack.clear();
stack.reserve(o.stack.size());
for (auto& i : o.stack)
stack.push_back(Bookmark{i.name, i.it, i.end});
return *this;
}
template <typename TKey, typename TData>
template <bool IsConst>
bool CPrefixTrie<TKey, TData>::Iterator<IsConst>::hasNext() const
{
auto shared = node.lock();
if (!shared) return false;
if (!shared->children.empty()) return true;
for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
auto mark = *it; // copy
if (++mark.it != mark.end)
return true;
}
return false;
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>& CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator++()
{
auto shared = node.lock();
assert(shared);
// going in pre-order (NLR). See https://en.wikipedia.org/wiki/Tree_traversal
// if there are any children we have to go there first
if (!shared->children.empty()) {
auto& children = shared->children;
auto it = children.begin();
stack.emplace_back(Bookmark{name, it, children.end()});
auto& postfix = it->first;
name.insert(name.end(), postfix.begin(), postfix.end());
node = it->second;
return *this;
}
// move to next sibling:
while (!stack.empty()) {
auto& back = stack.back();
if (++back.it != back.end) {
name = back.name;
auto& postfix = back.it->first;
name.insert(name.end(), postfix.begin(), postfix.end());
node = back.it->second;
return *this;
}
stack.pop_back();
}
// must be at the end:
node.reset();
name = TKey();
return *this;
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst> CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator++(int x)
{
auto ret = *this;
++(*this);
return ret;
}
template <typename TKey, typename TData>
template <bool IsConst>
CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator bool() const
{
return !node.expired();
}
template <typename TKey, typename TData>
template <bool IsConst>
bool CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator==(const Iterator& o) const
{
return node.lock() == o.node.lock();
}
template <typename TKey, typename TData>
template <bool IsConst>
bool CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator!=(const Iterator& o) const
{
return !(*this == o);
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>::reference CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator*()
{
return reference{name, data()};
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>::const_reference CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator*() const
{
return const_reference{name, data()};
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>::pointer CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator->()
{
return &(data());
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>::const_pointer CPrefixTrie<TKey, TData>::Iterator<IsConst>::operator->() const
{
return &(data());
}
template <typename TKey, typename TData>
template <bool IsConst>
const TKey& CPrefixTrie<TKey, TData>::Iterator<IsConst>::key() const
{
return name;
}
template <typename TKey, typename TData>
template <bool IsConst>
typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>::data_reference CPrefixTrie<TKey, TData>::Iterator<IsConst>::data()
{
auto shared = node.lock();
assert(shared);
return *(shared->data);
}
template <typename TKey, typename TData>
template <bool IsConst>
const TData& CPrefixTrie<TKey, TData>::Iterator<IsConst>::data() const
{
auto shared = node.lock();
assert(shared);
return *(shared->data);
}
template <typename TKey, typename TData>
template <bool IsConst>
std::size_t CPrefixTrie<TKey, TData>::Iterator<IsConst>::depth() const
{
return stack.size();
}
template <typename TKey, typename TData>
template <bool IsConst>
bool CPrefixTrie<TKey, TData>::Iterator<IsConst>::hasChildren() const
{
auto shared = node.lock();
return shared && !shared->children.empty();
}
template <typename TKey, typename TData>
template <bool IsConst>
std::vector<typename CPrefixTrie<TKey, TData>::template Iterator<IsConst>> CPrefixTrie<TKey, TData>::Iterator<IsConst>::children() const
{
auto shared = node.lock();
if (!shared) return {};
std::vector<Iterator<IsConst>> ret;
ret.reserve(shared->children.size());
for (auto& child : shared->children) {
auto key = name;
key.insert(key.end(), child.first.begin(), child.first.end());
ret.emplace_back(key, child.second);
}
return ret;
}
template <typename TKey, typename TData>
template <typename TIterator, typename TNode>
TIterator CPrefixTrie<TKey, TData>::find(const TKey& key, TNode node, TIterator end)
{
TIterator it(key, TNode());
using CBType = callback<TNode>;
CBType cb = [&it](const TKey&, TNode node) {
it.node = node;
};
return find(key, node, cb) ? it : end;
}
template <typename TKey, typename TData>
template <typename TNode>
bool CPrefixTrie<TKey, TData>::find(const TKey& key, TNode node, const callback<TNode>& cb)
{
auto& children = node->children;
if (children.empty()) return false;
auto it = children.lower_bound(key);
if (it != children.end() && it->first == key) {
cb(key, it->second);
return true;
}
if (it != children.begin()) --it;
const auto count = match(key, it->first);
if (count != it->first.size()) return false;
if (count == key.size()) return false;
cb(it->first, it->second);
return find(TKey(key.begin() + count, key.end()), it->second, cb);
}
template <typename TKey, typename TData>
template <typename TIterator, typename TNode>
std::vector<TIterator> CPrefixTrie<TKey, TData>::nodes(const TKey& key, TNode root)
{
std::vector<TIterator> ret;
ret.reserve(1 + key.size());
ret.emplace_back(TKey{}, root);
if (key.empty()) return ret;
TKey name;
using CBType = callback<TNode>;
CBType cb = [&name, &ret](const TKey& key, TNode node) {
name.insert(name.end(), key.begin(), key.end());
ret.emplace_back(name, node);
};
find(key, root, cb);
return ret;
}
template <typename TKey, typename TData>
std::shared_ptr<typename CPrefixTrie<TKey, TData>::Node>& CPrefixTrie<TKey, TData>::insert(const TKey& key, std::shared_ptr<typename CPrefixTrie<TKey, TData>::Node>& node)
{
std::size_t count = 0;
auto& children = node->children;
auto it = children.lower_bound(key);
if (it != children.end()) {
if (it->first == key)
return it->second;
count = match(key, it->first);
}
if (count == 0 && it != children.begin()) {
--it;
count = match(key, it->first);
}
if (count == 0) {
++size;
it = children.emplace(key, allocateShared<Node>()).first;
return it->second;
}
if (count < it->first.size()) {
TKey prefix(key.begin(), key.begin() + count);
TKey postfix(it->first.begin() + count, it->first.end());
auto nodes = std::move(it->second);
children.erase(it);
++size;
it = children.emplace(std::move(prefix), allocateShared<Node>()).first;
it->second->children.emplace(std::move(postfix), std::move(nodes));
if (key.size() == count)
return it->second;
it->second->data = allocateShared<TData>();
}
return insert(TKey(key.begin() + count, key.end()), it->second);
}
template <typename TKey, typename TData>
void CPrefixTrie<TKey, TData>::erase(const TKey& key, std::shared_ptr<Node>& node)
{
std::vector<typename TChildren::value_type> nodes;
nodes.emplace_back(TKey(), node);
using CBType = callback<std::shared_ptr<Node>>;
CBType cb = [&nodes](const TKey& k, std::shared_ptr<Node> n) {
nodes.emplace_back(k, n);
};
if (!find(key, node, cb))
return;
nodes.back().second->data = allocateShared<TData>();
for (; nodes.size() > 1; nodes.pop_back()) {
// if we have only one child and no data ourselves, bring them up to our level
auto& cNode = nodes.back().second;
auto onlyOneChild = cNode->children.size() == 1;
auto noData = cNode->data->empty();
if (onlyOneChild && noData) {
auto child = cNode->children.begin();
auto& prefix = nodes.back().first;
auto newKey = prefix;
auto& postfix = child->first;
newKey.insert(newKey.end(), postfix.begin(), postfix.end());
auto& pNode = nodes[nodes.size() - 2].second;
pNode->children.emplace(std::move(newKey), std::move(child->second));
pNode->children.erase(prefix);
--size;
continue;
}
auto noChildren = cNode->children.empty();
if (noChildren && noData) {
auto& pNode = nodes[nodes.size() - 2].second;
pNode->children.erase(nodes.back().first);
--size;
continue;
}
break;
}
}
template <typename TKey, typename TData>
CPrefixTrie<TKey, TData>::CPrefixTrie() : size(0), root(allocateShared<Node>())
{
root->data = allocateShared<TData>();
}
template <typename TKey, typename TData>
template <typename TDataUni>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::insert(const TKey& key, TDataUni&& data)
{
auto& node = key.empty() ? root : insert(key, root);
node->data = allocateShared<TData>(std::forward<TDataUni>(data));
return key.empty() ? begin() : iterator{key, node};
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::copy(CPrefixTrie<TKey, TData>::const_iterator it)
{
auto& key = it.key();
auto& node = key.empty() ? root : insert(key, root);
node->data = it.node.lock()->data;
return key.empty() ? begin() : iterator{key, node};
}
template <typename TKey, typename TData>
template <typename TDataUni>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::insert(CPrefixTrie<TKey, TData>::iterator& it, const TKey& key, TDataUni&& data)
{
auto shared = it.node.lock();
assert(shared);
auto copy = it;
if (!key.empty()) {
auto name = it.key();
name.insert(name.end(), key.begin(), key.end());
auto& node = insert(key, shared);
copy = iterator{std::move(name), node};
}
copy.node.lock()->data = allocateShared<TData>(std::forward<TDataUni>(data));
return copy;
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::find(const TKey& key)
{
if (empty()) return end();
if (key.empty()) return {key, root};
return find(key, root, end());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::find(const TKey& key) const
{
if (empty()) return end();
if (key.empty()) return {key, root};
return find(key, root, end());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::find(CPrefixTrie<TKey, TData>::iterator& it, const TKey& key)
{
if (key.empty()) return it;
auto shared = it.node.lock();
assert(shared);
return find(key, shared, end());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::find(CPrefixTrie<TKey, TData>::const_iterator& it, const TKey& key) const
{
if (key.empty()) return it;
auto shared = it.node.lock();
assert(shared);
return find(key, shared, end());
}
template <typename TKey, typename TData>
bool CPrefixTrie<TKey, TData>::contains(const TKey& key) const
{
return find(key) != end();
}
template <typename TKey, typename TData>
TData& CPrefixTrie<TKey, TData>::at(const TKey& key)
{
return find(key).data();
}
template <typename TKey, typename TData>
std::vector<typename CPrefixTrie<TKey, TData>::iterator> CPrefixTrie<TKey, TData>::nodes(const TKey& key)
{
if (empty()) return {};
return nodes<iterator>(key, root);
}
template <typename TKey, typename TData>
std::vector<typename CPrefixTrie<TKey, TData>::const_iterator> CPrefixTrie<TKey, TData>::nodes(const TKey& key) const
{
if (empty()) return {};
return nodes<const_iterator>(key, root);
}
template <typename TKey, typename TData>
bool CPrefixTrie<TKey, TData>::erase(const TKey& key)
{
auto size_was = height();
if (key.empty()) {
root->data = allocateShared<TData>();
} else {
erase(key, root);
}
return size_was != height();
}
template <typename TKey, typename TData>
void CPrefixTrie<TKey, TData>::clear()
{
size = 0;
root->data = allocateShared<TData>();
root->children.clear();
}
template <typename TKey, typename TData>
bool CPrefixTrie<TKey, TData>::empty() const
{
return height() == 0;
}
template <typename TKey, typename TData>
std::size_t CPrefixTrie<TKey, TData>::height() const
{
return size + (root->data->empty() ? 0 : 1);
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::begin()
{
return find(TKey());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::iterator CPrefixTrie<TKey, TData>::end()
{
return {};
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::cbegin()
{
return find(TKey());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::cend()
{
return {};
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::begin() const
{
return find(TKey());
}
template <typename TKey, typename TData>
typename CPrefixTrie<TKey, TData>::const_iterator CPrefixTrie<TKey, TData>::end() const
{
return {};
}
using Key = std::string;
using Data = CClaimTrieData;
using Trie = CPrefixTrie<Key, Data>;
using iterator = Trie::iterator;
using const_iterator = Trie::const_iterator;
template class CPrefixTrie<Key, Data>;
template class Trie::Iterator<true>;
template class Trie::Iterator<false>;
template const_iterator& const_iterator::operator=<>(const iterator&) noexcept;
template iterator Trie::insert<>(const Key&, Data&);
template iterator Trie::insert<>(const Key&, Data&&);
template iterator Trie::insert<>(const Key&, const Data&);
template iterator Trie::insert<>(iterator&, const Key&, Data&);
template iterator Trie::insert<>(iterator&, const Key&, Data&&);
template iterator Trie::insert<>(iterator&, const Key&, const Data&);

View file

@ -1,225 +0,0 @@
#ifndef BITCOIN_PREFIXTRIE_H
#define BITCOIN_PREFIXTRIE_H
#include <algorithm>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <type_traits>
#include <vector>
#include <boost/container/flat_map.hpp>
namespace bc = boost::container;
template <typename TKey, typename TData>
class CPrefixTrie
{
class Node
{
template <bool>
friend class Iterator;
friend class CPrefixTrie<TKey, TData>;
bc::flat_map<TKey, std::shared_ptr<Node>> children;
public:
Node() = default;
Node(const Node&) = delete;
Node(Node&& o) noexcept = default;
Node& operator=(Node&&) noexcept = default;
Node& operator=(const Node&) = delete;
std::shared_ptr<TData> data;
};
using TChildren = decltype(Node::children);
template <bool IsConst>
class Iterator
{
template <bool>
friend class Iterator;
friend class CPrefixTrie<TKey, TData>;
using TKeyRef = std::reference_wrapper<const TKey>;
using TDataRef = std::reference_wrapper<typename std::conditional<IsConst, const TData, TData>::type>;
using TPair = std::pair<TKeyRef, TDataRef>;
using ConstTPair = std::pair<TKeyRef, const TData>;
TKey name;
std::weak_ptr<Node> node;
struct Bookmark {
TKey name;
typename TChildren::iterator it;
typename TChildren::iterator end;
};
std::vector<Bookmark> stack;
public:
// Iterator traits
using value_type = TPair;
using const_pointer = const TData* const;
using const_reference = ConstTPair;
using data_reference = typename std::conditional<IsConst, const TData&, TData&>::type;
using pointer = typename std::conditional<IsConst, const TData* const, TData* const>::type;
using reference = typename std::conditional<IsConst, ConstTPair, TPair>::type;
using difference_type = std::ptrdiff_t;
using iterator_category = std::forward_iterator_tag;
Iterator() = default;
Iterator(const Iterator&) = default;
Iterator(Iterator&& o) noexcept = default;
Iterator(const TKey& name, const std::shared_ptr<Node>& node) noexcept;
template <bool C>
inline Iterator(const Iterator<C>& o) noexcept
{
*this = o;
}
Iterator& operator=(const Iterator&) = default;
Iterator& operator=(Iterator&& o) = default;
template <bool C>
Iterator& operator=(const Iterator<C>& o) noexcept;
bool hasNext() const;
Iterator& operator++();
Iterator operator++(int);
operator bool() const;
bool operator==(const Iterator& o) const;
bool operator!=(const Iterator& o) const;
reference operator*();
const_reference operator*() const;
pointer operator->();
const_pointer operator->() const;
const TKey& key() const;
data_reference data();
const TData& data() const;
std::size_t depth() const;
bool hasChildren() const;
std::vector<Iterator> children() const;
};
size_t size;
std::shared_ptr<Node> root;
template <typename TNode>
using callback = std::function<void(const TKey&, TNode)>;
template <typename TIterator, typename TNode>
static TIterator find(const TKey& key, TNode node, TIterator end);
template <typename TNode>
static bool find(const TKey& key, TNode node, const callback<TNode>& cb);
template <typename TIterator, typename TNode>
static std::vector<TIterator> nodes(const TKey& key, TNode root);
std::shared_ptr<Node>& insert(const TKey& key, std::shared_ptr<Node>& node);
void erase(const TKey& key, std::shared_ptr<Node>& node);
public:
using iterator = Iterator<false>;
using const_iterator = Iterator<true>;
CPrefixTrie();
template <typename TDataUni>
iterator insert(const TKey& key, TDataUni&& data);
template <typename TDataUni>
iterator insert(iterator& it, const TKey& key, TDataUni&& data);
iterator copy(const_iterator it);
iterator find(const TKey& key);
const_iterator find(const TKey& key) const;
iterator find(iterator& it, const TKey& key);
const_iterator find(const_iterator& it, const TKey& key) const;
bool contains(const TKey& key) const;
TData& at(const TKey& key);
std::vector<iterator> nodes(const TKey& key);
std::vector<const_iterator> nodes(const TKey& key) const;
bool erase(const TKey& key);
void clear();
bool empty() const;
size_t height() const;
iterator begin();
iterator end();
const_iterator cbegin();
const_iterator cend();
const_iterator begin() const;
const_iterator end() const;
};
template <typename T, typename O>
inline bool operator==(const std::reference_wrapper<T>& ref, const O& obj)
{
return ref.get() == obj;
}
template <typename T, typename O>
inline bool operator!=(const std::reference_wrapper<T>& ref, const O& obj)
{
return !(ref == obj);
}
template <typename T>
inline bool operator==(const std::reference_wrapper<std::shared_ptr<T>>& ref, const T& obj)
{
auto ptr = ref.get();
return ptr && *ptr == obj;
}
template <typename T>
inline bool operator!=(const std::reference_wrapper<std::shared_ptr<T>>& ref, const T& obj)
{
return !(ref == obj);
}
template <typename T>
inline bool operator==(const std::reference_wrapper<std::unique_ptr<T>>& ref, const T& obj)
{
auto ptr = ref.get();
return ptr && *ptr == obj;
}
template <typename T>
inline bool operator!=(const std::reference_wrapper<std::unique_ptr<T>>& ref, const T& obj)
{
return !(ref == obj);
}
template <typename TKey>
static std::size_t match(const TKey& a, const TKey& b)
{
std::size_t count = 0;
auto ait = a.cbegin(), aend = a.cend();
auto bit = b.cbegin(), bend = b.cend();
while (ait != aend && bit != bend) {
if (*ait != *bit) break;
++count;
++ait;
++bit;
}
return count;
}
#endif // BITCOIN_PREFIXTRIE_H
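A quick standalone illustration of the match() helper above, which is what CPrefixTrie uses to decide where two keys diverge when splitting a node. This is a sketch with std::string keys, not code from the tree:

#include <cassert>
#include <cstddef>
#include <string>

// Standalone copy of the match() logic above, specialized to std::string:
// count how many leading characters two keys share before they diverge.
static std::size_t common_prefix(const std::string& a, const std::string& b)
{
    std::size_t count = 0;
    auto ait = a.cbegin(), aend = a.cend();
    auto bit = b.cbegin(), bend = b.cend();
    while (ait != aend && bit != bend && *ait == *bit) {
        ++count;
        ++ait;
        ++bit;
    }
    return count;
}

int main()
{
    assert(common_prefix("test", "testing") == 4); // one key is a prefix of the other
    assert(common_prefix("tes", "test2") == 3);    // shared prefix "tes", then they diverge
    assert(common_prefix("abc", "xyz") == 0);      // nothing shared
    return 0;
}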


@ -17,7 +17,7 @@ static const struct {
} network_styles[] = {
    {"lbrycrd", QAPP_APP_NAME_DEFAULT, 0, 0, ""},
    {"lbrycrdtest", QAPP_APP_NAME_TESTNET, 70, 30, QT_TRANSLATE_NOOP("SplashScreen", "[testnet]")},
-   {"lbrycrdreg", QAPP_APP_NAME_REGTEST, 160, 30, "[regtest]"}
+   {"regtest", QAPP_APP_NAME_REGTEST, 160, 30, "[regtest]"}
};
static const unsigned network_styles_count = sizeof(network_styles)/sizeof(*network_styles);


@ -228,11 +228,11 @@ void PaymentServer::ipcParseCommandLine(interfaces::Node& node, int argc, char*
PaymentRequestPlus request;
if (readPaymentRequestFromFile(arg, request))
{
-   if (request.getDetails().network() == CBaseChainParams::MAIN)
+   if (request.getDetails().network() == "lbrycrd")
    {
        node.selectParams(CBaseChainParams::MAIN);
    }
-   else if (request.getDetails().network() == CBaseChainParams::TESTNET)
+   else if (request.getDetails().network() == "lbrycrdtest")
    {
        node.selectParams(CBaseChainParams::TESTNET);
    }


@ -1319,8 +1319,6 @@ static UniValue getchaintips(const JSONRPCRequest& request)
" \"height\": xxxx,\n"
" \"hash\": \"xxxx\",\n"
" \"branchlen\": 1 (numeric) length of branch connecting the tip to the main chain\n"
- " \"branchhash\": \"xxxx\", (string) hash of the historical block where we branched\n"
- " \"branchhashNext\": \"xxxx\", (string) block hash of the first block down this chain\n"
" \"status\": \"xxxx\" (string) status of the chain (active, valid-fork, valid-headers, headers-only, invalid)\n"
" }\n"
"]\n"
@ -1374,19 +1372,8 @@ static UniValue getchaintips(const JSONRPCRequest& request)
obj.pushKV("height", block->nHeight);
obj.pushKV("hash", block->phashBlock->GetHex());
- // not use ForkAt method because we need the previous one as well
- const CBlockIndex *forkAt = block, *forkPrev = block;
- while (forkAt && !chainActive.Contains(forkAt)) {
-     forkPrev = forkAt;
-     forkAt = forkAt->pprev;
- }
- const int branchLen = block->nHeight - forkAt->nHeight;
+ const int branchLen = block->nHeight - chainActive.FindFork(block)->nHeight;
obj.pushKV("branchlen", branchLen);
- if (forkAt != forkPrev) {
-     obj.pushKV("branchhash", forkAt->phashBlock->GetHex());
-     obj.pushKV("branchhashNext", forkPrev->phashBlock->GetHex());
- }
std::string status;
if (chainActive.Contains(block)) {
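For intuition about the simplified branchlen computation above: the sketch below uses a hypothetical minimal block-index type (not the real CBlockIndex) to show that branchlen is just the height difference between a side-chain tip and the fork point on the active chain.

#include <cassert>

// Hypothetical minimal stand-in for CBlockIndex, for illustration only.
struct BlockIndex {
    int nHeight;
    const BlockIndex* pprev;
    bool onActiveChain; // stand-in for chainActive.Contains(...)
};

// Walk back from a tip until we hit the active chain; the height difference
// is the "branchlen" reported by getchaintips.
static int BranchLen(const BlockIndex* tip)
{
    const BlockIndex* fork = tip;
    while (fork && !fork->onActiveChain)
        fork = fork->pprev;
    return fork ? tip->nHeight - fork->nHeight : tip->nHeight + 1;
}

int main()
{
    BlockIndex a{100, nullptr, true}; // block on the active chain
    BlockIndex b{101, &a, false};     // first block of a side chain
    BlockIndex c{102, &b, false};     // side-chain tip
    assert(BranchLen(&c) == 2);       // two blocks past the fork point
    assert(BranchLen(&a) == 0);       // an active-chain tip has branchlen 0
    return 0;
}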


@ -1,345 +0,0 @@
#ifndef CLAIMRPCHELP_H
#define CLAIMRPCHELP_H
// always keep defines T_ + value in upper case
#define T_NORMALIZEDNAME "normalizedName"
#define T_BLOCKHASH "blockhash"
#define T_CLAIMS "claims"
#define T_CLAIMID "claimId"
#define T_TXID "txId"
#define T_N "n"
#define T_AMOUNT "amount"
#define T_HEIGHT "height"
#define T_VALUE "value"
#define T_NAME "name"
#define T_VALIDATHEIGHT "validAtHeight"
#define T_NAMES "names"
#define T_EFFECTIVEAMOUNT "effectiveAmount"
#define T_LASTTAKEOVERHEIGHT "lastTakeoverHeight"
#define T_SUPPORTS "supports"
#define T_SUPPORTSWITHOUTCLAIM "supportsWithoutClaim"
#define T_TOTALNAMES "totalNames"
#define T_TOTALCLAIMS "totalClaims"
#define T_TOTALVALUE "totalValue"
#define T_CONTROLLINGONLY "controllingOnly"
#define T_CLAIMTYPE "claimType"
#define T_DEPTH "depth"
#define T_INCLAIMTRIE "inClaimTrie"
#define T_ISCONTROLLING "isControlling"
#define T_INSUPPORTMAP "inSupportMap"
#define T_INQUEUE "inQueue"
#define T_BLOCKSTOVALID "blocksToValid"
#define T_NODES "nodes"
#define T_CHILDREN "children"
#define T_CHARACTER "character"
#define T_NODEHASH "nodeHash"
#define T_VALUEHASH "valueHash"
#define T_PAIRS "pairs"
#define T_ODD "odd"
#define T_HASH "hash"
#define T_BID "bid"
#define T_SEQUENCE "sequence"
#define T_CLAIMSADDEDORUPDATED "claimsAddedOrUpdated"
#define T_SUPPORTSADDEDORUPDATED "supportsAddedOrUpdated"
#define T_CLAIMSREMOVED "claimsRemoved"
#define T_SUPPORTSREMOVED "supportsRemoved"
#define T_ADDRESS "address"
#define T_PENDINGAMOUNT "pendingAmount"
enum {
GETCLAIMSINTRIE = 0,
GETNAMESINTRIE,
GETVALUEFORNAME,
GETCLAIMSFORNAME,
GETCLAIMBYID,
GETTOTALCLAIMEDNAMES,
GETTOTALCLAIMS,
GETTOTALVALUEOFCLAIMS,
GETCLAIMSFORTX,
GETNAMEPROOF,
CHECKNORMALIZATION,
GETCLAIMBYBID,
GETCLAIMBYSEQ,
GETCLAIMPROOFBYBID,
GETCLAIMPROOFBYSEQ,
GETCHANGESINBLOCK,
};
#define S3_(pre, name, def) pre "\"" name "\"" def "\n"
#define S3(pre, name, def) S3_(pre, name, def)
#define S1(str) str "\n"
#define NAME_TEXT " (string) the name to look up"
#define BLOCKHASH_TEXT " (string, optional) get claims in the trie\n" \
" at the block specified\n" \
" by this block hash.\n" \
" If none is given,\n" \
" the latest active\n" \
" block will be used."
#define CLAIM_OUTPUT \
S3(" ", T_NORMALIZEDNAME, " (string) the name of the claim (after normalization)") \
S3(" ", T_NAME, " (string) the original name of this claim (before normalization)") \
S3(" ", T_VALUE, " (string) the value of this claim") \
S3(" ", T_ADDRESS, " (string) the destination address of this claim") \
S3(" ", T_CLAIMID, " (string) the claimId of the claim") \
S3(" ", T_TXID, " (string) the txid of the claim") \
S3(" ", T_N, " (numeric) the index of the claim in the transaction's list of outputs") \
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located") \
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the support became/becomes valid") \
S3(" ", T_AMOUNT, " (numeric) the amount of the claim") \
S3(" ", T_EFFECTIVEAMOUNT, " (numeric) the amount plus amount from all supports associated with the claim") \
S3(" ", T_PENDINGAMOUNT, " (numeric) expected amount when claim and its supports are all valid") \
S3(" ", T_SUPPORTS, ": [ (array of object) supports for this claim") \
S3(" ", T_VALUE, " (string) the metadata of the support if any") \
S3(" ", T_ADDRESS, " (string) the destination address of the support") \
S3(" ", T_TXID, " (string) the txid of the support") \
S3(" ", T_N, " (numeric) the index of the support in the transaction's list of outputs") \
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located") \
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the support became/becomes valid") \
S3(" ", T_AMOUNT, " (numeric) the amount of the support") \
S1(" ]") \
S3(" ", T_LASTTAKEOVERHEIGHT, " (numeric) the last height at which ownership of the name changed") \
S3(" ", T_BID, " (numeric) lower value means a higher bid rate, ordered by effective amount") \
S3(" ", T_SEQUENCE, " (numeric) lower value means an older one in sequence, ordered by height of insertion")
#define PROOF_OUTPUT \
S3(" ", T_NODES, ": [ (array of object, pre-fork) full nodes\n" \
" (i.e. those which lead to the requested name)") \
S3(" ", T_CHILDREN, ": [ (array of object) the children of the node") \
S3(" ", T_CHARACTER, " (string) the character which leads from the parent to this child node") \
S3(" ", T_NODEHASH, " (string, if exists) the hash of the node if this is a leaf node") \
S1(" ]") \
S3(" ", T_VALUEHASH, " (string, if exists) the hash of this node's value, if" \
" it has one. If this is the requested name this\n" \
" will not exist whether the node has a value or not") \
S1(" ]") \
S3(" ", T_PAIRS, ": [ (array of pairs, post-fork) hash can be validated by" \
" hashing claim from the bottom up") \
S3(" ", T_ODD, " (boolean) this value goes on the right of hash") \
S3(" ", T_HASH, " (string) the hash to be mixed in") \
S1(" ]") \
S3(" ", T_TXID, " (string, if exists) the txid of the claim which controls" \
" this name, if there is one.") \
S3(" ", T_N, " (numeric) the index of the claim in the transaction's list of outputs") \
S3(" ", T_LASTTAKEOVERHEIGHT, " (numeric) the last height at which ownership of the name changed")
static const char* const rpc_help[] = {
// GETCLAIMSINTRIE
S1("getclaimsintrie ( \"" T_BLOCKHASH R"(" )
Return all claims in the name trie. Deprecated
Arguments:)")
S3("1. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
S3(" ", T_NORMALIZEDNAME, " (string) the name of the claim(s) (after normalization)")
S3(" ", T_CLAIMS, ": [ (array of object) the claims for this name")
S3(" ", T_NAME, " (string) the original name of this claim (before normalization)")
S3(" ", T_VALUE, " (string) the value of this claim")
S3(" ", T_ADDRESS, " (string) the destination address of this claim")
S3(" ", T_CLAIMID, " (string) the claimId of the claim")
S3(" ", T_TXID, " (string) the txid of the claim")
S3(" ", T_N, " (numeric) the index of the claim in the transaction's list of outputs")
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located")
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the claim became/becomes valid")
S3(" ", T_AMOUNT, " (numeric) the amount of the claim")
S1(" ]")
"]",
// GETNAMESINTRIE
S1("getnamesintrie ( \"" T_BLOCKHASH R"(" )
Return all claim names in the trie.
Arguments:)")
S3("1. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
S3(" ", T_NAMES, " all names in the trie that have claims")
"]",
// GETVALUEFORNAME
S1("getvalueforname \"" T_NAME "\" ( \"" T_BLOCKHASH "\" \"" T_CLAIMID R"(" )
Return the winning or specified by claimId value associated with a name
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S3("3. ", T_CLAIMID, " (string, optional) can be partial one")
S1("Result: [")
CLAIM_OUTPUT
"]",
// GETCLAIMSFORNAME
S1("getclaimsforname \"" T_NAME "\" ( \"" T_BLOCKHASH R"(" )
Return all claims and supports for a name
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
S3(" ", T_NORMALIZEDNAME, " (string) the name of the claim(s) (after normalization)")
S3(" ", T_CLAIMS, ": [ (array of object) the claims for this name")
S3(" ", T_NAME, " (string) the original name of this claim (before normalization)")
S3(" ", T_VALUE, " (string) the value of this claim")
S3(" ", T_ADDRESS, " (string) the destination address of this claim")
S3(" ", T_CLAIMID, " (string) the claimId of the claim")
S3(" ", T_TXID, " (string) the txid of the claim")
S3(" ", T_N, " (numeric) the index of the claim in the transaction's list of outputs")
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located")
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the claim became/becomes valid")
S3(" ", T_AMOUNT, " (numeric) the amount of the claim")
S3(" ", T_EFFECTIVEAMOUNT, " (numeric) the amount plus amount from all supports associated with the claim")
S3(" ", T_PENDINGAMOUNT, " (numeric) expected amount when claim and its support got valid")
S3(" ", T_SUPPORTS, ": [ (array of object) supports for this claim")
S3(" ", T_VALUE, " (string) the metadata of the support if any")
S3(" ", T_ADDRESS, " (string) the destination address of the support")
S3(" ", T_TXID, " (string) the txid of the support")
S3(" ", T_N, " (numeric) the index of the support in the transaction's list of outputs")
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located")
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the support became/becomes valid")
S3(" ", T_AMOUNT, " (numeric) the amount of the support")
S1(" ]")
S3(" ", T_BID, " (numeric) lower value means a higher bid rate, ordered by effective amount")
S3(" ", T_SEQUENCE, " (numeric) lower value means an older one in sequence, ordered by height of insertion")
S1(" ]")
S3(" ", T_LASTTAKEOVERHEIGHT, " (numeric) the last height at which ownership of the name changed")
S3(" ", T_SUPPORTSWITHOUTCLAIM, ": [")
S3(" ", T_TXID, " (string) the txid of the support")
S3(" ", T_N, " (numeric) the index of the support in the transaction's list of outputs")
S3(" ", T_HEIGHT, " (numeric) the height of the block in which this transaction is located")
S3(" ", T_VALIDATHEIGHT, " (numeric) the height at which the support became/becomes valid")
S3(" ", T_AMOUNT, " (numeric) the amount of the support")
S1(" ]")
"]",
// GETCLAIMBYID
S1("getclaimbyid \"" T_CLAIMID R"("
Get a claim by claim id
Arguments:)")
S3("1. ", T_CLAIMID, " (string) the claimId of this claim or patial id (at least 3 chars)")
S1("Result: [")
CLAIM_OUTPUT
"]",
// GETTOTALCLAIMEDNAMES
S1(R"(gettotalclaimednames
Return the total number of names that have been
Arguments:)")
S1("Result:")
S3(" ", T_TOTALNAMES, " (numeric) the total number of names in the trie")
,
// GETTOTALCLAIMS
S1(R"(gettotalclaims
Return the total number of active claims in the trie
Arguments:)")
S1("Result:")
S3(" ", T_TOTALCLAIMS, " (numeric) the total number of active claims")
,
// GETTOTALVALUEOFCLAIMS
S1("gettotalvalueofclaims ( " T_CONTROLLINGONLY R"( )
Return the total value of the claims in the trie
Arguments:)")
S3("1. ", T_CONTROLLINGONLY, " (boolean) only include the value of controlling claims")
S1("Result:")
S3(" ", T_TOTALVALUE, " (numeric) the total value of the claims in the trie")
,
// GETCLAIMSFORTX
S1("getclaimsfortx \"" T_TXID R"("
Return any claims or supports found in a transaction
Arguments:)")
S3("1. ", T_TXID, " (string) the txid of the transaction to check for unspent claims")
S1("Result: [")
S3(" ", T_N, " (numeric) the index of the claim in the transaction's list of outputs")
S3(" ", T_CLAIMTYPE, " (string) claim or support")
S3(" ", T_NAME, " (string) the name claimed or supported")
S3(" ", T_CLAIMID, " (string) if a claim, its ID")
S3(" ", T_VALUE, " (string) if a claim, its value")
S3(" ", T_DEPTH, " (numeric) the depth of the transaction in the main chain")
S3(" ", T_INCLAIMTRIE, " (boolean) if a name claim, whether the claim is active, i.e. has made it into the trie")
S3(" ", T_ISCONTROLLING, " (boolean) if a name claim, whether the claim is the current controlling claim for the name")
S3(" ", T_INSUPPORTMAP, " (boolean) if a support, whether the support is active, i.e. has made it into the support map")
S3(" ", T_INQUEUE, " (boolean) whether the claim is in a queue waiting to be inserted into the trie or support map")
S3(" ", T_BLOCKSTOVALID, " (numeric) if in a queue, the number of blocks until it's inserted into the trie or support map")
"]",
// GETNAMEPROOF
S1("getnameproof \"" T_NAME "\" ( \"" T_BLOCKHASH "\" \"" T_CLAIMID R"(" )
Return the cryptographic proof that a name maps to a value or doesn't.
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S3("3. ", T_CLAIMID, R"( (string, optional, post-fork) for validating a specific claim
can be partial one)")
S1("Result: [")
PROOF_OUTPUT
"]",
// CHECKNORMALIZATION
S1("checknormalization \"" T_NAME R"("
Given an unnormalized name of a claim, return normalized version of it
Arguments:)")
S3("1. ", T_NAME, " (string) the name to normalize")
S1("Result:")
S3(" ", T_NORMALIZEDNAME, " (string) normalized name")
,
// GETCLAIMBYBID
S1("getclaimbybid \"" T_NAME "\" ( " T_BID " \"" T_BLOCKHASH R"(" )
Get a claim by bid
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_BID, " (numeric, optional) bid number")
S3("3. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
CLAIM_OUTPUT
"]",
// GETCLAIMBYSEQ
S1("getclaimbyseq \"" T_NAME "\" ( " T_SEQUENCE " \"" T_BLOCKHASH R"(" )
Get a claim by sequence
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_SEQUENCE, " (numeric, optional) sequence number")
S3("3. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
CLAIM_OUTPUT
"]",
// GETCLAIMPROOFBYBID
S1("getclaimproofbyid \"" T_NAME "\" ( " T_BID " \"" T_BLOCKHASH R"(" )
Return the cryptographic proof that a name maps to a value or doesn't by a bid.
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_BID, " (numeric, optional) bid number")
S3("3. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
PROOF_OUTPUT
"]",
// GETCLAIMPROOFBYSEQ
S1("getclaimproofbyseq \"" T_NAME "\" ( " T_SEQUENCE " \"" T_BLOCKHASH R"(" )
Return the cryptographic proof that a name maps to a value or doesn't by a sequence.
Arguments:)")
S3("1. ", T_NAME, NAME_TEXT)
S3("2. ", T_SEQUENCE, " (numeric, optional) sequence number")
S3("3. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
PROOF_OUTPUT
"]",
// GETCHANGESINBLOCK
S1("getchangesinblock ( \"" T_BLOCKHASH R"(" )
Return the list of claims added, updated, and removed as pulled from the queued work for that block."
Use this method to determine which claims or supports went live on a given block."
Arguments:)")
S3("1. ", T_BLOCKHASH, BLOCKHASH_TEXT)
S1("Result: [")
S3(" ", T_CLAIMSADDEDORUPDATED, " (array of string) claimIDs added or updated in the trie")
S3(" ", T_CLAIMSREMOVED, " (array of string) claimIDs that were removed from the trie")
S3(" ", T_SUPPORTSADDEDORUPDATED, " (array of string) IDs of supports added or updated")
S3(" ", T_SUPPORTSREMOVED, " (array of string) IDs that were removed from the trie")
"]",
};
#endif // CLAIMRPCHELP_H
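The S1/S3 macros above assemble the RPC help text purely by compile-time concatenation of adjacent string literals. A standalone sketch of the same pattern, reusing only the macro definitions and one field name:

#include <cstdio>

// Copies of the string-building macros above; adjacent string literals are
// concatenated by the compiler, so each S3(...) becomes one quoted help line.
#define S3_(pre, name, def) pre "\"" name "\"" def "\n"
#define S3(pre, name, def) S3_(pre, name, def)
#define S1(str) str "\n"
#define T_NAME "name"

int main()
{
    // Expands to: "Result:\n    \"name\" (string) the name to look up\n"
    static const char help[] =
        S1("Result:")
        S3("    ", T_NAME, " (string) the name to look up");
    std::fputs(help, stdout);
    return 0;
}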

File diff suppressed because it is too large


@ -80,7 +80,6 @@ static const CRPCConvertParam vRPCConvertParams[] =
{ "scantxoutset", 1, "scanobjects" },
{ "addmultisigaddress", 0, "nrequired" },
{ "addmultisigaddress", 1, "keys" },
- { "addtimelockedaddress", 0, "timelock" },
{ "createmultisig", 0, "nrequired" },
{ "createmultisig", 1, "keys" },
{ "listunspent", 0, "minconf" },
@ -172,15 +171,6 @@ static const CRPCConvertParam vRPCConvertParams[] =
{ "rescanblockchain", 0, "start_height"},
{ "rescanblockchain", 1, "stop_height"},
{ "createwallet", 1, "disable_private_keys"},
- { "listnameclaims", 0, "includesupports"},
- { "listnameclaims", 1, "activeonly"},
- { "listnameclaims", 2, "minconf"},
- { "getclaimbybid", 1, "bid"},
- { "getclaimbyseq", 1, "sequence"},
- { "getclaimproofbybid", 1, "bid"},
- { "getclaimproofbyseq", 1, "sequence"},
- { "supportclaim", 4, "isTip"},
- { "gettotalvalueofclaims", 0, "controlling_only"},
};
class CRPCConvertTable


@ -25,11 +25,6 @@
#include <utilstrencodings.h>
#include <validationinterface.h>
#include <warnings.h>
- #ifdef ENABLE_WALLET
- #include <wallet/rpcwallet.h>
- #include <wallet/wallet.h>
- #endif
#include <memory>
#include <stdint.h>
@ -377,8 +372,6 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
UniValue lpval = NullUniValue;
std::set<std::string> setClientRules;
int64_t nMaxVersionPreVB = -1;
- UniValue aMutable(UniValue::VARR);
- bool wantsCoinbaseTxn = false;
if (!request.params[0].isNull())
{
    const UniValue& oparam = request.params[0].get_obj();
@ -393,17 +386,6 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
    throw JSONRPCError(RPC_INVALID_PARAMETER, "Invalid mode");
lpval = find_value(oparam, "longpollid");
- const UniValue& capval = find_value(oparam, "capabilities");
- if (capval.isArray()) {
-     for (std::size_t i = 0; i < capval.size(); ++i)
-         if (capval[i].get_str() == "coinbase/append") // should be coinbase/* ? we dont' care what they do to the coinbase
-             aMutable.push_back(capval[i]);
- #ifdef ENABLE_WALLET
-         else if (capval[i].get_str() == "coinbasetxn")
-             wantsCoinbaseTxn = true;
- #endif
- }
if (strMode == "proposal")
{
    const UniValue& dataval = find_value(oparam, "data");
@ -451,7 +433,7 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
if (strMode != "template")
    throw JSONRPCError(RPC_INVALID_PARAMETER, "Invalid mode");
- if (Params().NetworkIDString() == CBaseChainParams::MAIN)
+ if (Params().NetworkIDString() != "lbrycrdreg") // who should own this constant?
{
    if (!g_connman)
        throw JSONRPCError(RPC_CLIENT_P2P_DISABLED, "Error: Peer-to-peer functionality missing or disabled");
@ -537,20 +519,8 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
fLastTemplateSupportsSegwit = fSupportsSegwit;
// Create new block
- CScript newBlockScript = CScript() << OP_TRUE;
- #ifdef ENABLE_WALLET
- if (wantsCoinbaseTxn) {
-     std::shared_ptr<CWallet> const wallet = GetWalletForJSONRPCRequest(request);
-     if (!wallet)
-         throw JSONRPCError(RPC_INVALID_PARAMS, "No wallet to comply with coinbasetxn request.");
-     std::shared_ptr<CReserveScript> coinbase_script;
-     wallet->GetScriptForMining(coinbase_script); // tops up and locks inside
-     if (!coinbase_script)
-         throw JSONRPCError(RPC_INVALID_PARAMS, "Unable to acquire address for coinbasetxn request.");
-     newBlockScript = coinbase_script->reserveScript;
- }
- #endif
- pblocktemplate = BlockAssembler(Params()).CreateNewBlock(newBlockScript, fSupportsSegwit);
+ CScript scriptDummy = CScript() << OP_TRUE;
+ pblocktemplate = BlockAssembler(Params()).CreateNewBlock(scriptDummy, fSupportsSegwit);
if (!pblocktemplate)
    throw JSONRPCError(RPC_OUT_OF_MEMORY, "Out of memory");
@ -567,11 +537,8 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
// NOTE: If at some point we support pre-segwit miners post-segwit-activation, this needs to take segwit support into consideration
const bool fPreSegWit = (ThresholdState::ACTIVE != VersionBitsState(pindexPrev, consensusParams, Consensus::DEPLOYMENT_SEGWIT, versionbitscache));
- if (!fPreSegWit && !fSupportsSegwit)
-     throw JSONRPCError(RPC_INVALID_PARAMETER, "Segwit support is now required. Please include \"segwit\" in the client's rules.");
UniValue aCaps(UniValue::VARR); aCaps.push_back("proposal");
- UniValue result(UniValue::VOBJ);
UniValue transactions(UniValue::VARR);
std::map<uint256, int64_t> setTxIndex;
@ -581,8 +548,7 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
uint256 txHash = tx.GetHash();
setTxIndex[txHash] = i++;
- auto isCoinbase = tx.IsCoinBase();
- if (isCoinbase && !wantsCoinbaseTxn)
+ if (tx.IsCoinBase())
    continue;
UniValue entry(UniValue::VOBJ);
@ -609,9 +575,6 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
entry.pushKV("sigops", nTxSigOps);
entry.pushKV("weight", GetTransactionWeight(tx));
- if (isCoinbase)
-     result.pushKV("coinbasetxn", entry);
- else
transactions.push_back(entry);
}
@ -620,10 +583,12 @@ static UniValue getblocktemplate(const JSONRPCRequest& request)
arith_uint256 hashTarget = arith_uint256().SetCompact(pblock->nBits);
+ UniValue aMutable(UniValue::VARR);
aMutable.push_back("time");
aMutable.push_back("transactions");
aMutable.push_back("prevblock");
+ UniValue result(UniValue::VOBJ);
result.pushKV("capabilities", aCaps);
UniValue aRules(UniValue::VARR);
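The removed block in the hunk above scanned the client's capabilities array for "coinbasetxn". A standalone sketch of that scan, assuming only the univalue library bundled with the tree (the name oparam mirrors the code above, but the program itself is illustrative):

#include <univalue.h>
#include <cstdio>

// Sketch: build a request object the way a getblocktemplate caller would,
// then walk its "capabilities" array and remember whether coinbasetxn was asked for.
int main()
{
    UniValue oparam(UniValue::VOBJ);
    UniValue caps(UniValue::VARR);
    caps.push_back("coinbase/append");
    caps.push_back("coinbasetxn");
    oparam.pushKV("capabilities", caps);

    bool wantsCoinbaseTxn = false;
    const UniValue& capval = find_value(oparam, "capabilities");
    if (capval.isArray()) {
        for (size_t i = 0; i < capval.size(); ++i)
            if (capval[i].get_str() == "coinbasetxn")
                wantsCoinbaseTxn = true;
    }
    std::printf("coinbasetxn requested: %s\n", wantsCoinbaseTxn ? "yes" : "no");
    return 0;
}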


@ -555,7 +555,6 @@ static UniValue decoderawtransaction(const JSONRPCRequest& request)
" \"asm\" : \"asm\", (string) the asm\n"
" \"hex\" : \"hex\", (string) the hex\n"
" \"reqSigs\" : n, (numeric) The required sigs\n"
- " \"subtype\" : \"pubkeyhash\", (numeric) For claims and supports, this represents the type of suffix\n"
" \"type\" : \"pubkeyhash\", (string) The type, eg 'pubkeyhash'\n"
" \"addresses\" : [ (json array of string)\n"
" \"12tvKAXCxZjSmdNbao16dKXC8tRWfcF5oc\" (string) lbry address\n"
@ -1758,7 +1757,7 @@ UniValue converttopsbt(const JSONRPCRequest& request)
"createpsbt and walletcreatefundedpsbt should be used for new applications.\n"
"\nArguments:\n"
"1. \"hexstring\" (string, required) The hex string of a raw transaction\n"
- "2. permitsigdata (boolean, optional, default=false) If true, any signatures in the input will be discarded and conversion\n"
+ "2. permitsigdata (boolean, optional, default=false) If true, any signatures in the input will be discarded and conversion.\n"
" will continue. If false, RPC will fail if any signatures are present.\n"
"3. iswitness (boolean, optional) Whether the transaction hex is a serialized witness transaction.\n"
" If iswitness is not present, heuristic tests will be used in decoding. If true, only witness deserializaion\n"


@ -521,7 +521,7 @@ std::string HelpExampleCli(const std::string& methodname, const std::string& arg
std::string HelpExampleRpc(const std::string& methodname, const std::string& args)
{
    return "> curl --user myusername --data-binary '{\"jsonrpc\": \"1.0\", \"id\":\"curltest\", "
-       "\"method\": \"" + methodname + "\", \"params\": [" + args + "] }' -H 'content-type: text/plain;' http://127.0.0.1:9245/\n";
+       "\"method\": \"" + methodname + "\", \"params\": [" + args + "] }' -H 'content-type: text/plain;' http://127.0.0.1:8332/\n";
}
void RPCSetTimerInterfaceIfUnset(RPCTimerInterface *iface)


@ -64,21 +64,9 @@ void CScheduler::serviceQueue()
// Explicitly use a template here to avoid hitting that overload.
while (!shouldStop() && !taskQueue.empty()) {
    boost::chrono::system_clock::time_point timeToWaitFor = taskQueue.begin()->first;
-   try {
-       if (newTaskScheduled.wait_until<>(lock, timeToWaitFor) == boost::cv_status::timeout) {
+   if (newTaskScheduled.wait_until<>(lock, timeToWaitFor) == boost::cv_status::timeout)
        break; // Exit loop after timeout, it means we reached the time of the event
    }
-   } catch (boost::thread_interrupted) {
-       // We need to make sure we don't ignore this, or the thread won't end
-       throw;
-   } catch (...) {
-       // Some boost versions have a bug that can cause a time prior to system boot (or wake from sleep) to throw an exception instead of return timeout
-       // See https://github.com/boostorg/thread/issues/308
-       // Check if the time has passed and, if so, break gracefully
-       if (timeToWaitFor <= boost::chrono::system_clock::now()) break;
-       throw;
-   }
- }
#endif
// If there are multiple threads, the queue can empty while we're waiting (another
// thread may service the task we were waiting on).
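The removed try/catch above works around a boost issue where wait_until can throw for time points before system boot (boostorg/thread issue 308). Below is a sketch of the same defensive pattern using std::condition_variable so it is self-contained; the standard library does not exhibit the boost bug, so this only illustrates the shape of the workaround.

#include <chrono>
#include <condition_variable>
#include <mutex>

// Treat an unexpected exception from a timed wait as a timeout once the
// deadline has passed; anything else is rethrown as a real error.
template <typename Clock, typename Duration>
bool WaitUntilOrTimeout(std::condition_variable& cv,
                        std::unique_lock<std::mutex>& lock,
                        const std::chrono::time_point<Clock, Duration>& deadline)
{
    try {
        return cv.wait_until(lock, deadline) == std::cv_status::timeout;
    } catch (...) {
        if (deadline <= Clock::now())
            return true; // the event time was reached; report it as a timeout
        throw;
    }
}

int main()
{
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lock(m);
    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(10);
    // Nothing ever signals cv, so this loops past spurious wakeups and times out.
    while (!WaitUntilOrTimeout(cv, lock, deadline)) { /* spurious wakeup; wait again */ }
    return 0;
}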


@ -4,7 +4,6 @@
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <script/interpreter.h>
- #include <nameclaim.h>
#include <crypto/ripemd160.h>
#include <crypto/sha1.h>
@ -1491,11 +1490,6 @@ bool VerifyScript(const CScript& scriptSig, const CScript& scriptPubKey, const C
    return set_error(serror, SCRIPT_ERR_SIG_PUSHONLY);
}
- int claimOp;
- const CScript& strippedScriptPubKey = StripClaimScriptPrefix(scriptPubKey, claimOp);
- if (claimOp >= 0) // lbryum used to violate this rule with an off-by-1 at len == 255 (and its not very important)
-     flags &= ~SCRIPT_VERIFY_MINIMALDATA;
std::vector<std::vector<unsigned char> > stack, stackCopy;
if (!EvalScript(stack, scriptSig, flags, checker, SigVersion::BASE, serror))
    // serror is set
@ -1511,11 +1505,10 @@ bool VerifyScript(const CScript& scriptSig, const CScript& scriptPubKey, const C
    return set_error(serror, SCRIPT_ERR_EVAL_FALSE);
// Bare witness programs
int witnessversion;
std::vector<unsigned char> witnessprogram;
if (flags & SCRIPT_VERIFY_WITNESS) {
-     if (strippedScriptPubKey.IsWitnessProgram(witnessversion, witnessprogram)) {
+     if (scriptPubKey.IsWitnessProgram(witnessversion, witnessprogram)) {
        hadWitness = true;
        if (scriptSig.size() != 0) {
            // The scriptSig must be _exactly_ CScript(), otherwise we reintroduce malleability.
@ -1531,7 +1524,7 @@ bool VerifyScript(const CScript& scriptSig, const CScript& scriptPubKey, const C
}
// Additional validation for spend-to-script-hash transactions:
- if ((flags & SCRIPT_VERIFY_P2SH) && strippedScriptPubKey.IsPayToScriptHash())
+ if ((flags & SCRIPT_VERIFY_P2SH) && scriptPubKey.IsPayToScriptHash())
{
    // scriptSig must be literals-only or validation fails
    if (!scriptSig.IsPushOnly())


@ -40,6 +40,8 @@ enum class IsMineResult
WATCH_ONLY = 1, //! Included in watch-only balance
SPENDABLE = 2,  //! Included in all balances
INVALID = 3,    //! Not spendable by anyone (uncompressed pubkey in segwit, P2SH inside P2SH or witness, witness inside witness)
+ CLAIM = 4,
+ SUPPORT = 5,
};
bool PermitsUncompressed(IsMineSigVersion sigversion)
@ -58,11 +60,17 @@ bool HaveKeys(const std::vector<valtype>& pubkeys, const CKeyStore& keystore)
IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey, IsMineSigVersion sigversion)
{
+ int op = 0;
IsMineResult ret = IsMineResult::NO;
+ CScript strippedScriptPubKey = StripClaimScriptPrefix(scriptPubKey, op);
+ IsMineResult claim_ret = ((op == OP_CLAIM_NAME || op == OP_UPDATE_CLAIM) ? IsMineResult::CLAIM :
+     ((op == OP_SUPPORT_CLAIM) ? IsMineResult::SUPPORT :
+     IsMineResult::NO));
std::vector<valtype> vSolutions;
txnouttype whichType;
- Solver(scriptPubKey, whichType, vSolutions);
+ Solver(strippedScriptPubKey, whichType, vSolutions);
CKeyID keyID;
switch (whichType)
@ -77,7 +85,7 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
        return IsMineResult::INVALID;
    }
    if (keystore.HaveKey(keyID)) {
-         ret = std::max(ret, IsMineResult::SPENDABLE);
+         ret = std::max(claim_ret, IsMineResult::SPENDABLE);
    }
    break;
case TX_WITNESS_V0_KEYHASH:
@ -92,6 +100,10 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
        // This also applies to the P2WSH case.
        break;
    }
+     // Claims are not explicitly supported on Witness v0
+     // Transactions, and instead of supporting the wrapped inner
+     // tx, we are ignoring this type at this time (consistent with
+     // previous releases).
    ret = std::max(ret, IsMineInner(keystore, GetScriptForDestination(CKeyID(uint160(vSolutions[0]))), IsMineSigVersion::WITNESS_V0));
    break;
}
@ -104,7 +116,7 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
        }
    }
    if (keystore.HaveKey(keyID)) {
-         ret = std::max(ret, IsMineResult::SPENDABLE);
+         ret = std::max(claim_ret, IsMineResult::SPENDABLE);
    }
    break;
case TX_SCRIPTHASH:
@ -116,7 +128,7 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
    CScriptID scriptID = CScriptID(uint160(vSolutions[0]));
    CScript subscript;
    if (keystore.GetCScript(scriptID, subscript)) {
-         ret = std::max(ret, IsMineInner(keystore, subscript, IsMineSigVersion::P2SH));
+         ret = std::max(claim_ret, IsMineInner(keystore, subscript, IsMineSigVersion::P2SH));
    }
    break;
}
@ -134,6 +146,10 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
    CScriptID scriptID = CScriptID(hash);
    CScript subscript;
    if (keystore.GetCScript(scriptID, subscript)) {
+         // Claims are not explicitly supported on Witness v0
+         // Transactions, and instead of supporting the wrapped inner
+         // tx, we are ignoring this type at this time (consistent with
+         // previous releases).
        ret = std::max(ret, IsMineInner(keystore, subscript, IsMineSigVersion::WITNESS_V0));
    }
    break;
@ -160,14 +176,14 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
        }
    }
    if (HaveKeys(keys, keystore)) {
-         ret = std::max(ret, IsMineResult::SPENDABLE);
+         ret = std::max(claim_ret, IsMineResult::SPENDABLE);
    }
    break;
}
}
if (ret == IsMineResult::NO && keystore.HaveWatchOnly(scriptPubKey)) {
-     ret = std::max(ret, IsMineResult::WATCH_ONLY);
+     ret = std::max(claim_ret, IsMineResult::WATCH_ONLY);
}
return ret;
}
@ -176,22 +192,18 @@ IsMineResult IsMineInner(const CKeyStore& keystore, const CScript& scriptPubKey,
isminetype IsMine(const CKeyStore& keystore, const CScript& scriptPubKey)
{
- isminetype flags = ISMINE_NO;
- int op;
- CScript strippedScriptPubKey = StripClaimScriptPrefix(scriptPubKey, op);
- if (op == OP_CLAIM_NAME || op == OP_UPDATE_CLAIM)
-     flags = ISMINE_CLAIM;
- else if (op == OP_SUPPORT_CLAIM)
-     flags = ISMINE_SUPPORT;
- switch (IsMineInner(keystore, strippedScriptPubKey, IsMineSigVersion::TOP)) {
+ switch (IsMineInner(keystore, scriptPubKey, IsMineSigVersion::TOP)) {
case IsMineResult::INVALID:
case IsMineResult::NO:
    return ISMINE_NO;
case IsMineResult::WATCH_ONLY:
-     return ISMINE_WATCH_ONLY; // addresses we're watching are never considered our claim or support -- but should they be?
+     return ISMINE_WATCH_ONLY;
case IsMineResult::SPENDABLE:
-     return isminetype(ISMINE_SPENDABLE | flags);
+     return ISMINE_SPENDABLE;
+ case IsMineResult::CLAIM:
+     return ISMINE_CLAIM;
+ case IsMineResult::SUPPORT:
+     return ISMINE_SUPPORT;
}
assert(false);
}


@ -21,9 +21,7 @@ enum isminetype
ISMINE_SPENDABLE = 2,
ISMINE_CLAIM = 4,
ISMINE_SUPPORT = 8,
- ISMINE_ALL = ISMINE_WATCH_ONLY | ISMINE_SPENDABLE,
- ISMINE_STAKE = ISMINE_CLAIM | ISMINE_SUPPORT,
- ISMINE_SPENDABLE_OR_STAKE = ISMINE_SPENDABLE | ISMINE_STAKE
+ ISMINE_ALL = ISMINE_WATCH_ONLY | ISMINE_SPENDABLE
};
/** used for bitflags of isminetype */
typedef uint8_t isminefilter;
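The removed enumerators above treat isminetype as a bitmask, so a claim output can be reported as both spendable and a claim. A standalone sketch of that bit-flag usage; the ISMINE_NO and ISMINE_WATCH_ONLY values are assumed from upstream and are not shown in this hunk:

#include <cassert>
#include <cstdint>

// Enumerator values copied from the hunk above (NO/WATCH_ONLY assumed).
enum isminetype : uint8_t {
    ISMINE_NO         = 0,
    ISMINE_WATCH_ONLY = 1,
    ISMINE_SPENDABLE  = 2,
    ISMINE_CLAIM      = 4,
    ISMINE_SUPPORT    = 8,
};
typedef uint8_t isminefilter;

int main()
{
    // How the removed IsMine() combined the flags: a spendable claim output.
    isminefilter mine = ISMINE_SPENDABLE | ISMINE_CLAIM;
    assert(mine & ISMINE_SPENDABLE);                        // counted in normal balances
    assert(mine & ISMINE_CLAIM);                            // but also recognized as a claim
    assert(!(mine & ISMINE_SUPPORT));
    assert((mine & (ISMINE_CLAIM | ISMINE_SUPPORT)) != 0);  // the removed ISMINE_STAKE test
    return 0;
}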


@ -133,9 +133,9 @@ const char* GetOpName(opcodetype opcode)
case OP_CHECKSEQUENCEVERIFY : return "OP_CHECKSEQUENCEVERIFY";
case OP_NOP4 : return "OP_NOP4";
case OP_NOP5 : return "OP_NOP5";
- case OP_NOP6 : return "OP_CLAIM_NAME";
- case OP_NOP7 : return "OP_SUPPORT_CLAIM";
- case OP_NOP8 : return "OP_UPDATE_CLAIM";
+ case OP_NOP6 : return "OP_NOP6";
+ case OP_NOP7 : return "OP_NOP7";
+ case OP_NOP8 : return "OP_NOP8";
case OP_NOP9 : return "OP_NOP9";
case OP_NOP10 : return "OP_NOP10";


@ -6,12 +6,13 @@
#include <script/sign.h>
#include <key.h>
- #include <nameclaim.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <script/standard.h>
#include <uint256.h>
+ #include "nameclaim.h"
typedef std::vector<unsigned char> valtype;
MutableTransactionSignatureCreator::MutableTransactionSignatureCreator(const CMutableTransaction* txToIn, unsigned int nInIn, const CAmount& amountIn, int nHashTypeIn) : txTo(txToIn), nIn(nInIn), nHashType(nHashTypeIn), amount(amountIn), checker(txTo, nIn, amountIn) {}
@ -87,21 +88,6 @@ static bool CreateSig(const BaseSignatureCreator& creator, SignatureData& sigdat
    return false;
}
- static CScript StripTimelockPrefix(const CScript& script) {
-     auto it = script.begin();
-     opcodetype op;
-     if (!script.GetOp(it, op))
-         return script;
-     if (!script.GetOp(it, op) || (op != OP_CHECKLOCKTIMEVERIFY && op != OP_CHECKSEQUENCEVERIFY))
-         return script;
-     if (!script.GetOp(it, op) || op != OP_DROP)
-         return script;
-     return CScript(it, script.end());
- }
/**
* Sign scriptPubKey using signature made with creator.
* Signatures are returned in scriptSigRet (or returns false if scriptPubKey can't be signed),
@ -116,10 +102,10 @@ static bool SignStep(const SigningProvider& provider, const BaseSignatureCreator
ret.clear();
std::vector<unsigned char> sig;
+ const CScript& strippedScriptPubKey = StripClaimScriptPrefix(scriptPubKey);
std::vector<valtype> vSolutions;
- auto stripped = StripClaimScriptPrefix(scriptPubKey);
- stripped = StripTimelockPrefix(stripped);
- if (!Solver(stripped, whichTypeRet, vSolutions))
+ if (!Solver(strippedScriptPubKey, whichTypeRet, vSolutions))
    return false;
switch (whichTypeRet)
@ -367,7 +353,7 @@ SignatureData DataFromTransaction(const CMutableTransaction& tx, unsigned int nI
// Get scripts
txnouttype script_type;
std::vector<std::vector<unsigned char>> solutions;
- Solver(StripClaimScriptPrefix(txout.scriptPubKey), script_type, solutions);
+ Solver(txout.scriptPubKey, script_type, solutions);
SigVersion sigversion = SigVersion::BASE;
CScript next_script = txout.scriptPubKey;


@ -6,9 +6,9 @@
#include <script/standard.h>
#include <crypto/sha256.h>
- #include <nameclaim.h>
#include <pubkey.h>
#include <script/script.h>
+ #include <util.h>
#include <utilstrencodings.h>
@ -166,7 +166,7 @@ bool ExtractDestination(const CScript& scriptPubKey, CTxDestination& addressRet)
{
std::vector<valtype> vSolutions;
txnouttype whichType;
- if (!Solver(StripClaimScriptPrefix(scriptPubKey), whichType, vSolutions))
+ if (!Solver(scriptPubKey, whichType, vSolutions))
    return false;
if (whichType == TX_PUBKEY)
@ -214,8 +214,9 @@ bool ExtractDestinations(const CScript& scriptPubKey, txnouttype& typeRet, std::
addressRet.clear();
typeRet = TX_NONSTANDARD;
std::vector<valtype> vSolutions;
- auto solved = Solver(scriptPubKey, typeRet, vSolutions);
- if (!solved || typeRet == TX_NULL_DATA){
+ if (!Solver(scriptPubKey, typeRet, vSolutions))
+     return false;
+ if (typeRet == TX_NULL_DATA){
    // This is data, not addresses
    return false;
}


@ -559,8 +559,8 @@ template<typename Stream, typename K, typename T> void Unserialize(Stream& is, s
/**
 * map
 */
- template<typename Stream, typename K, typename T, typename ... Z> void Serialize(Stream& os, const std::map<K, T, Z...>& m);
- template<typename Stream, typename K, typename T, typename ... Z> void Unserialize(Stream& is, std::map<K, T, Z...>& m);
+ template<typename Stream, typename K, typename T, typename Pred, typename A> void Serialize(Stream& os, const std::map<K, T, Pred, A>& m);
+ template<typename Stream, typename K, typename T, typename Pred, typename A> void Unserialize(Stream& is, std::map<K, T, Pred, A>& m);
/**
 * set
@ -781,20 +781,20 @@ void Unserialize(Stream& is, std::pair<K, T>& item)
/**
 * map
 */
- template<typename Stream, typename K, typename T, typename ... Z>
- void Serialize(Stream& os, const std::map<K, T, Z...>& m)
+ template<typename Stream, typename K, typename T, typename Pred, typename A>
+ void Serialize(Stream& os, const std::map<K, T, Pred, A>& m)
{
    WriteCompactSize(os, m.size());
    for (const auto& entry : m)
        Serialize(os, entry);
}
- template<typename Stream, typename K, typename T, typename ... Z>
- void Unserialize(Stream& is, std::map<K, T, Z...>& m)
+ template<typename Stream, typename K, typename T, typename Pred, typename A>
+ void Unserialize(Stream& is, std::map<K, T, Pred, A>& m)
{
    m.clear();
    unsigned int nSize = ReadCompactSize(is);
-     typename std::map<K, T, Z...>::iterator mi = m.begin();
+     typename std::map<K, T, Pred, A>::iterator mi = m.begin();
    for (unsigned int i = 0; i < nSize; i++)
    {
        std::pair<K, T> item;
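Both template signatures above produce the same wire layout for a std::map: a compact-size element count followed by each key/value pair in iteration order. A simplified standalone sketch of that framing; a single byte stands in for the real CompactSize encoding, so it only covers maps with fewer than 253 entries:

#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Simplified sketch of the framing used above: element count first, then each
// key/value pair in map iteration order.
static std::vector<uint8_t> SerializeMapSketch(const std::map<std::string, uint32_t>& m)
{
    std::vector<uint8_t> out;
    out.push_back(static_cast<uint8_t>(m.size()));                // count (1 byte here)
    for (const auto& entry : m) {
        out.push_back(static_cast<uint8_t>(entry.first.size()));  // key length
        out.insert(out.end(), entry.first.begin(), entry.first.end());
        for (int shift = 0; shift < 32; shift += 8)               // little-endian value
            out.push_back(static_cast<uint8_t>(entry.second >> shift));
    }
    return out;
}

int main()
{
    std::map<std::string, uint32_t> m{{"a", 1}, {"bc", 2}};
    auto bytes = SerializeMapSketch(m);
    // 1 count byte + ("a": 1+1+4) + ("bc": 1+2+4) = 14 bytes
    assert(bytes.size() == 14);
    assert(bytes[0] == 2); // two entries
    return 0;
}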


@ -227,10 +227,6 @@ public:
    return (std::string(begin(), end()));
}
- std::size_t capacity() const
- {
-     return vch.capacity();
- }
//
// Vector subset

File diff suppressed because it is too large


@ -3,48 +3,74 @@
#include <uint256.h>
#include <validation.h>
- #include <test/claimtriefixture.h>
#include <test/test_bitcoin.h>
#include <boost/test/unit_test.hpp>
- #include <boost/scope_exit.hpp>
using namespace std;
- class CClaimTrieCacheTest : public CClaimTrieCacheBase
- {
+ class CClaimTrieCacheTest : public CClaimTrieCache {
public:
-     explicit CClaimTrieCacheTest(CClaimTrie* base): CClaimTrieCacheBase(base)
+     CClaimTrieCacheTest(CClaimTrie* base):
+         CClaimTrieCache(base, false){}
+     bool recursiveComputeMerkleHash(CClaimTrieNode* tnCurrent,
+         std::string sPos) const
    {
+         return CClaimTrieCache::recursiveComputeMerkleHash(tnCurrent, sPos);
    }
-     using CClaimTrieCacheBase::insertSupportIntoMap;
-     using CClaimTrieCacheBase::removeSupportFromMap;
-     using CClaimTrieCacheBase::insertClaimIntoTrie;
-     using CClaimTrieCacheBase::removeClaimFromTrie;
-     void insert(const std::string& key, CClaimTrieData&& data)
+     bool recursivePruneName(CClaimTrieNode* tnCurrent, unsigned int nPos, std::string sName, bool* pfNullified) const
    {
-         nodesToAddOrUpdate.insert(key, std::move(data));
+         return CClaimTrieCache::recursivePruneName(tnCurrent,nPos,sName, pfNullified);
    }
-     bool erase(const std::string& key)
+     bool insertSupportIntoMap(const std::string& name, CSupportValue support, bool fCheckTakeover) const
    {
-         return nodesToAddOrUpdate.erase(key);
+         return CClaimTrieCache::insertSupportIntoMap(name, support, fCheckTakeover);
    }
    int cacheSize()
    {
-         return nodesToAddOrUpdate.height();
+         return cache.size();
    }
-     CClaimTrie::iterator getCache(const std::string& key)
+     nodeCacheType::iterator getCache(std::string key)
    {
-         return nodesToAddOrUpdate.find(key);
+         return cache.find(key);
    }
+     bool insertClaimIntoTrie(const std::string& name, CClaimValue claim,
+         bool fCheckTakeover = false) const
+     {
+         return CClaimTrieCache::insertClaimIntoTrie(name, claim, fCheckTakeover);
+     }
+     bool removeClaimFromTrie(const std::string& name, const COutPoint& outPoint,
+         CClaimValue& claim, bool fCheckTakeover = false) const
+     {
+         return CClaimTrieCache::removeClaimFromTrie(name, outPoint, claim, fCheckTakeover);
+     }
};
+ CMutableTransaction BuildTransaction(const uint256& prevhash)
+ {
+     CMutableTransaction tx;
+     tx.nVersion = 1;
+     tx.nLockTime = 0;
+     tx.vin.resize(1);
+     tx.vout.resize(1);
+     tx.vin[0].prevout.hash = prevhash;
+     tx.vin[0].prevout.n = 0;
+     tx.vin[0].scriptSig = CScript();
+     tx.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
+     tx.vout[0].scriptPubKey = CScript();
+     tx.vout[0].nValue = 0;
+     return tx;
+ }
BOOST_FIXTURE_TEST_SUITE(claimtriecache_tests, RegTestingSetup)
BOOST_AUTO_TEST_CASE(merkle_hash_single_test)
{
// check empty trie
@ -52,9 +78,10 @@ BOOST_AUTO_TEST_CASE(merkle_hash_single_test)
CClaimTrieCacheTest cc(pclaimTrie);
BOOST_CHECK_EQUAL(one, cc.getMerkleHash());
- // we cannot have leaf root node
- auto it = cc.getCache("");
- BOOST_CHECK(!it);
+ // check trie with only root node
+ CClaimTrieNode base_node;
+ cc.recursiveComputeMerkleHash(&base_node, "");
+ BOOST_CHECK_EQUAL(one, cc.getMerkleHash());
}
BOOST_AUTO_TEST_CASE(merkle_hash_multiple_test)
@ -90,97 +117,97 @@ BOOST_AUTO_TEST_CASE(merkle_hash_multiple_test)
BOOST_CHECK(pclaimTrie->empty()); BOOST_CHECK(pclaimTrie->empty());
CClaimTrieCacheTest ntState(pclaimTrie); CClaimTrieCacheTest ntState(pclaimTrie);
ntState.insertClaimIntoTrie(std::string("test"), CClaimValue(tx1OutPoint, hash160, 50, 100, 200), true); ntState.insertClaimIntoTrie(std::string("test"), CClaimValue(tx1OutPoint, hash160, 50, 100, 200));
ntState.insertClaimIntoTrie(std::string("test2"), CClaimValue(tx2OutPoint, hash160, 50, 100, 200), true); ntState.insertClaimIntoTrie(std::string("test2"), CClaimValue(tx2OutPoint, hash160, 50, 100, 200));
BOOST_CHECK(pclaimTrie->empty()); BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(!ntState.empty()); BOOST_CHECK(!ntState.empty());
BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash1); BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash1);
ntState.insertClaimIntoTrie(std::string("test"), CClaimValue(tx3OutPoint, hash160, 50, 101, 201), true); ntState.insertClaimIntoTrie(std::string("test"), CClaimValue(tx3OutPoint, hash160, 50, 101, 201));
BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash1); BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash1);
ntState.insertClaimIntoTrie(std::string("tes"), CClaimValue(tx4OutPoint, hash160, 50, 100, 200), true); ntState.insertClaimIntoTrie(std::string("tes"), CClaimValue(tx4OutPoint, hash160, 50, 100, 200));
BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash2);
ntState.insertClaimIntoTrie(std::string("testtesttesttest"), CClaimValue(tx5OutPoint, hash160, 50, 100, 200), true); ntState.insertClaimIntoTrie(std::string("testtesttesttest"), CClaimValue(tx5OutPoint, hash160, 50, 100, 200));
ntState.removeClaimFromTrie(std::string("testtesttesttest"), tx5OutPoint, unused, true); ntState.removeClaimFromTrie(std::string("testtesttesttest"), tx5OutPoint, unused);
BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash2);
ntState.flush(); ntState.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash2);
BOOST_CHECK(ntState.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState1(pclaimTrie); CClaimTrieCacheTest ntState1(pclaimTrie);
ntState1.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused, true); ntState1.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused);
ntState1.removeClaimFromTrie(std::string("test2"), tx2OutPoint, unused, true); ntState1.removeClaimFromTrie(std::string("test2"), tx2OutPoint, unused);
ntState1.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused, true); ntState1.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused);
ntState1.removeClaimFromTrie(std::string("tes"), tx4OutPoint, unused, true); ntState1.removeClaimFromTrie(std::string("tes"), tx4OutPoint, unused);
BOOST_CHECK_EQUAL(ntState1.getMerkleHash(), hash0); BOOST_CHECK_EQUAL(ntState1.getMerkleHash(), hash0);
CClaimTrieCacheTest ntState2(pclaimTrie); CClaimTrieCacheTest ntState2(pclaimTrie);
ntState2.insertClaimIntoTrie(std::string("abab"), CClaimValue(tx6OutPoint, hash160, 50, 100, 200), true); ntState2.insertClaimIntoTrie(std::string("abab"), CClaimValue(tx6OutPoint, hash160, 50, 100, 200));
ntState2.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused, true); ntState2.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused);
BOOST_CHECK_EQUAL(ntState2.getMerkleHash(), hash3); BOOST_CHECK_EQUAL(ntState2.getMerkleHash(), hash3);
ntState2.flush(); ntState2.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState2.getMerkleHash(), hash3); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash3);
BOOST_CHECK(ntState2.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState3(pclaimTrie); CClaimTrieCacheTest ntState3(pclaimTrie);
ntState3.insertClaimIntoTrie(std::string("test"), CClaimValue(tx1OutPoint, hash160, 50, 100, 200), true); ntState3.insertClaimIntoTrie(std::string("test"), CClaimValue(tx1OutPoint, hash160, 50, 100, 200));
BOOST_CHECK_EQUAL(ntState3.getMerkleHash(), hash4); BOOST_CHECK_EQUAL(ntState3.getMerkleHash(), hash4);
ntState3.flush(); ntState3.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState3.getMerkleHash(), hash4); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash4);
BOOST_CHECK(ntState3.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState4(pclaimTrie); CClaimTrieCacheTest ntState4(pclaimTrie);
ntState4.removeClaimFromTrie(std::string("abab"), tx6OutPoint, unused, true); ntState4.removeClaimFromTrie(std::string("abab"), tx6OutPoint, unused);
BOOST_CHECK_EQUAL(ntState4.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(ntState4.getMerkleHash(), hash2);
ntState4.flush(); ntState4.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState4.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash2);
BOOST_CHECK(ntState4.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState5(pclaimTrie); CClaimTrieCacheTest ntState5(pclaimTrie);
ntState5.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused, true); ntState5.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused);
BOOST_CHECK_EQUAL(ntState5.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(ntState5.getMerkleHash(), hash2);
ntState5.flush(); ntState5.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState5.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash2);
BOOST_CHECK(ntState5.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState6(pclaimTrie); CClaimTrieCacheTest ntState6(pclaimTrie);
ntState6.insertClaimIntoTrie(std::string("test"), CClaimValue(tx3OutPoint, hash160, 50, 101, 201), true); ntState6.insertClaimIntoTrie(std::string("test"), CClaimValue(tx3OutPoint, hash160, 50, 101, 201));
BOOST_CHECK_EQUAL(ntState6.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(ntState6.getMerkleHash(), hash2);
ntState6.flush(); ntState6.flush();
BOOST_CHECK(!pclaimTrie->empty()); BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState6.getMerkleHash(), hash2); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash2);
BOOST_CHECK(ntState6.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
CClaimTrieCacheTest ntState7(pclaimTrie); CClaimTrieCacheTest ntState7(pclaimTrie);
ntState7.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused, true); ntState7.removeClaimFromTrie(std::string("test"), tx3OutPoint, unused);
ntState7.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused, true); ntState7.removeClaimFromTrie(std::string("test"), tx1OutPoint, unused);
ntState7.removeClaimFromTrie(std::string("tes"), tx4OutPoint, unused, true); ntState7.removeClaimFromTrie(std::string("tes"), tx4OutPoint, unused);
ntState7.removeClaimFromTrie(std::string("test2"), tx2OutPoint, unused, true); ntState7.removeClaimFromTrie(std::string("test2"), tx2OutPoint, unused);
BOOST_CHECK_EQUAL(ntState7.getMerkleHash(), hash0); BOOST_CHECK_EQUAL(ntState7.getMerkleHash(), hash0);
ntState7.flush(); ntState7.flush();
BOOST_CHECK(pclaimTrie->empty()); BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK_EQUAL(ntState7.getMerkleHash(), hash0); BOOST_CHECK_EQUAL(pclaimTrie->getMerkleHash(), hash0);
BOOST_CHECK(ntState7.checkConsistency()); BOOST_CHECK(pclaimTrie->checkConsistency());
} }
BOOST_AUTO_TEST_CASE(basic_insertion_info_test) BOOST_AUTO_TEST_CASE(basic_insertion_info_test)
{ {
// test basic claim insertions and that get methods retrieve information properly // test basic claim insertions and that get methods retrieve information properly
BOOST_CHECK(pclaimTrie->empty()); BOOST_CHECK_EQUAL(pclaimTrie->empty(), true);
CClaimTrieCacheTest ctc(pclaimTrie); CClaimTrieCacheTest ctc(pclaimTrie);
// create and insert claim // create and insert claim
@@ -192,18 +219,17 @@ BOOST_AUTO_TEST_CASE(basic_insertion_info_test)
int height = 0; int height = 0;
int validHeight = 0; int validHeight = 0;
CClaimValue claimVal(claimOutPoint, claimId, amount, height, validHeight); CClaimValue claimVal(claimOutPoint, claimId, amount, height, validHeight);
ctc.insertClaimIntoTrie("test", claimVal, true); ctc.insertClaimIntoTrie("test", claimVal);
// try getClaimsForName, effectiveAmount, getInfoForName // try getClaimsForName, getEffectiveAmountForClaim, getInfoForName
auto res = ctc.getClaimsForName("test"); claimsForNameType res = ctc.getClaimsForName("test");
BOOST_CHECK_EQUAL(res.claimsNsupports.size(), 1); BOOST_CHECK_EQUAL(res.claims.size(), 1);
BOOST_CHECK_EQUAL(res.claimsNsupports[0].claim, claimVal); BOOST_CHECK_EQUAL(res.claims[0], claimVal);
BOOST_CHECK_EQUAL(res.claimsNsupports[0].supports.size(), 0);
BOOST_CHECK_EQUAL(10, res.claimsNsupports[0].effectiveAmount); BOOST_CHECK_EQUAL(10, ctc.getEffectiveAmountForClaim("test", claimId));
CClaimValue claim; CClaimValue claim;
BOOST_CHECK(ctc.getInfoForName("test", claim)); BOOST_CHECK_EQUAL(ctc.getInfoForName("test", claim), true);
BOOST_CHECK_EQUAL(claim, claimVal); BOOST_CHECK_EQUAL(claim, claimVal);
// insert a support // insert a support
@@ -215,12 +241,8 @@ BOOST_AUTO_TEST_CASE(basic_insertion_info_test)
CSupportValue support(supportOutPoint, claimId, supportAmount, height, validHeight); CSupportValue support(supportOutPoint, claimId, supportAmount, height, validHeight);
ctc.insertSupportIntoMap("test", support, false); ctc.insertSupportIntoMap("test", support, false);
auto res1 = ctc.getClaimsForName("test");
BOOST_CHECK_EQUAL(res1.claimsNsupports.size(), 1);
BOOST_CHECK_EQUAL(res1.claimsNsupports[0].supports.size(), 1);
// try getEffectiveAmount // try getEffectiveAmount
BOOST_CHECK_EQUAL(20, res1.claimsNsupports[0].effectiveAmount); BOOST_CHECK_EQUAL(20, ctc.getEffectiveAmountForClaim("test", claimId));
} }
BOOST_AUTO_TEST_CASE(recursive_prune_test) BOOST_AUTO_TEST_CASE(recursive_prune_test)
@@ -235,42 +257,35 @@ BOOST_AUTO_TEST_CASE(recursive_prune_test)
int validAtHeight = 0; int validAtHeight = 0;
CClaimValue test_claim(outpoint, claimId, amount, height, validAtHeight); CClaimValue test_claim(outpoint, claimId, amount, height, validAtHeight);
CClaimTrieData data; CClaimTrieNode base_node;
// base node has a claim, so it should not be pruned // base node has a claim, so it should not be pruned
data.insertClaim(test_claim); base_node.insertClaim(test_claim);
cc.insert("", std::move(data));
// node 1 has a claim so it should not be pruned // node 1 has a claim so it should not be pruned
data.insertClaim(test_claim); CClaimTrieNode node_1;
const char c = 't';
base_node.children[c] = &node_1;
node_1.insertClaim(test_claim);
// set this just to make sure we get the right CClaimTrieNode back // set this just to make sure we get the right CClaimTrieNode back
data.nHeightOfLastTakeover = 10; node_1.nHeightOfLastTakeover = 10;
cc.insert("t", std::move(data));
//node 2 does not have a claim so it should be pruned //node 2 does not have a claim so it should be pruned
// thus we should find pruned node 1 in cache // thus we should find pruned node 1 in cache
cc.insert("te", CClaimTrieData{}); CClaimTrieNode node_2;
const char c_2 = 'e';
node_1.children[c_2] = &node_2;
BOOST_CHECK(cc.erase("te")); cc.recursivePruneName(&base_node, 0, std::string("te"), NULL);
BOOST_CHECK_EQUAL(2, cc.cacheSize());
auto it = cc.getCache("t");
BOOST_CHECK_EQUAL(10, it->nHeightOfLastTakeover);
BOOST_CHECK_EQUAL(1, it->claims.size());
BOOST_CHECK_EQUAL(2, cc.cacheSize());
cc.insert("te", CClaimTrieData{});
// erasing "t" will make it weak
BOOST_CHECK(cc.erase("t"));
// so now we erase "e" as well as "t"
BOOST_CHECK(cc.erase("te"));
// we have claim in root
BOOST_CHECK_EQUAL(1, cc.cacheSize()); BOOST_CHECK_EQUAL(1, cc.cacheSize());
BOOST_CHECK(cc.erase("")); nodeCacheType::iterator it = cc.getCache(std::string("t"));
BOOST_CHECK_EQUAL(0, cc.cacheSize()); BOOST_CHECK_EQUAL(10, it->second->nHeightOfLastTakeover);
BOOST_CHECK_EQUAL(1U, it->second->claims.size());
BOOST_CHECK_EQUAL(0U, it->second->children.size());
} }
BOOST_AUTO_TEST_CASE(iteratetrie_test) BOOST_AUTO_TEST_CASE(iteratetrie_test)
{ {
BOOST_CHECK(pclaimTrie->empty()); BOOST_CHECK_EQUAL(pclaimTrie->empty(), true);
CClaimTrieCacheTest ctc(pclaimTrie); CClaimTrieCacheTest ctc(pclaimTrie);
uint256 hash0(uint256S("0000000000000000000000000000000000000000000000000000000000000001")); uint256 hash0(uint256S("0000000000000000000000000000000000000000000000000000000000000001"));
@@ -278,173 +293,48 @@ BOOST_AUTO_TEST_CASE(iteratetrie_test)
const uint256 txhash = tx1.GetHash(); const uint256 txhash = tx1.GetHash();
CClaimValue claimVal(COutPoint(txhash, 0), ClaimIdHash(txhash, 0), CAmount(10), 0, 0); CClaimValue claimVal(COutPoint(txhash, 0), ClaimIdHash(txhash, 0), CAmount(10), 0, 0);
ctc.insertClaimIntoTrie("test", claimVal, true); ctc.insertClaimIntoTrie("test", claimVal);
BOOST_CHECK(ctc.flush());
auto hit = pclaimTrie->find(""); int count = 0;
BOOST_CHECK(hit);
BOOST_CHECK_EQUAL(hit.children().size(), 1U);
BOOST_CHECK(hit = pclaimTrie->find("test"));
BOOST_CHECK_EQUAL(hit.children().size(), 0U);
BOOST_CHECK_EQUAL(hit.data().claims.size(), 1);
}
BOOST_AUTO_TEST_CASE(trie_stays_consistent_test) struct TestCallBack : public CNodeCallback
{ {
std::vector<std::string> names { TestCallBack(int& count) : count(count)
"goodness", "goodnight", "goodnatured", "goods", "go", "goody", "goo"
};
CClaimTrie trie(true, false, 1);
CClaimTrieCacheTest cache(&trie);
CClaimValue value;
for (auto& name: names)
BOOST_CHECK(cache.insertClaimIntoTrie(name, value, false));
cache.flush();
BOOST_CHECK(cache.checkConsistency());
for (auto& name: names) {
CClaimValue temp;
BOOST_CHECK(cache.removeClaimFromTrie(name, COutPoint(), temp, false));
cache.flush();
BOOST_CHECK(cache.checkConsistency());
}
BOOST_CHECK(trie.empty());
}
BOOST_AUTO_TEST_CASE(takeover_workaround_triggers)
{ {
auto& consensus = const_cast<Consensus::Params&>(Params().GetConsensus());
auto currentMax = consensus.nMaxTakeoverWorkaroundHeight;
consensus.nMaxTakeoverWorkaroundHeight = 10000;
BOOST_SCOPE_EXIT(&consensus, currentMax) { consensus.nMaxTakeoverWorkaroundHeight = currentMax; }
BOOST_SCOPE_EXIT_END
CClaimTrie trie(true, false, 1);
CClaimTrieCacheTest cache(&trie);
insertUndoType icu, isu; claimQueueRowType ecu; supportQueueRowType esu;
std::vector<std::pair<std::string, int>> thu;
BOOST_CHECK(cache.incrementBlock(icu, ecu, isu, esu, thu));
CClaimValue value;
value.nHeight = 1;
BOOST_CHECK(cache.insertClaimIntoTrie("a", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("b", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("c", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("aa", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("bb", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("cc", value, true));
BOOST_CHECK(cache.insertSupportIntoMap("aa", CSupportValue(), false));
BOOST_CHECK(cache.incrementBlock(icu, ecu, isu, esu, thu));
BOOST_CHECK(cache.flush());
BOOST_CHECK(cache.incrementBlock(icu, ecu, isu, esu, thu));
BOOST_CHECK_EQUAL(0, cache.cacheSize());
CSupportValue temp;
BOOST_CHECK(cache.insertSupportIntoMap("bb", temp, false));
BOOST_CHECK(!cache.getCache("aa"));
BOOST_CHECK(cache.removeSupportFromMap("aa", COutPoint(), temp, false));
BOOST_CHECK(cache.removeClaimFromTrie("aa", COutPoint(), value, false));
BOOST_CHECK(cache.removeClaimFromTrie("bb", COutPoint(), value, false));
BOOST_CHECK(cache.removeClaimFromTrie("cc", COutPoint(), value, false));
BOOST_CHECK(cache.insertClaimIntoTrie("aa", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("bb", value, true));
BOOST_CHECK(cache.insertClaimIntoTrie("cc", value, true));
BOOST_CHECK(cache.incrementBlock(icu, ecu, isu, esu, thu));
BOOST_CHECK_EQUAL(3, cache.getCache("aa")->nHeightOfLastTakeover);
BOOST_CHECK_EQUAL(3, cache.getCache("bb")->nHeightOfLastTakeover);
BOOST_CHECK_EQUAL(1, cache.getCache("cc")->nHeightOfLastTakeover);
} }
BOOST_AUTO_TEST_CASE(verify_basic_serialization) void visit(const std::string& name, const CClaimTrieNode* node)
{ {
CClaimValue cv; count++;
cv.outPoint = COutPoint(uint256S("123"), 2); if (name == "test")
cv.nHeight = 3; BOOST_CHECK_EQUAL(node->claims.size(), 1);
cv.claimId.SetHex("4567");
cv.nEffectiveAmount = 4;
cv.nAmount = 5;
cv.nValidAtHeight = 6;
CDataStream ssData(SER_NETWORK, PROTOCOL_VERSION);
ssData << cv;
CClaimValue cv2;
ssData >> cv2;
BOOST_CHECK_EQUAL(cv, cv2);
} }
BOOST_AUTO_TEST_CASE(claimtrienode_serialize_unserialize) int& count;
} testCallback(count);
BOOST_CHECK_EQUAL(ctc.iterateTrie(testCallback), true);
BOOST_CHECK_EQUAL(count, 5);
count = 3;
struct TestCallBack2 : public CNodeCallback
{
TestCallBack2(int& count) : count(count)
{ {
CDataStream ss(SER_DISK, 0);
uint160 hash160;
CClaimTrieData n1;
CClaimTrieData n2;
CClaimValue throwaway;
ss << n1;
ss >> n2;
BOOST_CHECK_EQUAL(n1, n2);
CClaimValue v1(COutPoint(uint256S("0000000000000000000000000000000000000000000000000000000000000001"), 0), hash160, 50, 0, 100);
CClaimValue v2(COutPoint(uint256S("0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"), 1), hash160, 100, 1, 101);
n1.insertClaim(v1);
BOOST_CHECK(n1 != n2);
ss << n1;
ss >> n2;
BOOST_CHECK_EQUAL(n1, n2);
n1.insertClaim(v2);
BOOST_CHECK(n1 != n2);
ss << n1;
ss >> n2;
BOOST_CHECK_EQUAL(n1, n2);
n1.removeClaim(v1.outPoint, throwaway);
BOOST_CHECK(n1 != n2);
ss << n1;
ss >> n2;
BOOST_CHECK_EQUAL(n1, n2);
n1.removeClaim(v2.outPoint, throwaway);
BOOST_CHECK(n1 != n2);
ss << n1;
ss >> n2;
BOOST_CHECK_EQUAL(n1, n2);
} }
BOOST_AUTO_TEST_CASE(claimtrienode_remove_invalid_claim) void visit(const std::string& name, const CClaimTrieNode* node)
{ {
uint160 hash160; if (--count <= 0)
throw CRecursionInterruptionException(false);
}
CClaimTrieData n1; int& count;
CClaimTrieData n2; } testCallback2(count);
CClaimValue throwaway;
CClaimValue v1(COutPoint(uint256S("0000000000000000000000000000000000000000000000000000000000000001"), 0), hash160, 50, 0, 100); BOOST_CHECK_EQUAL(ctc.iterateTrie(testCallback2), false);
CClaimValue v2(COutPoint(uint256S("0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"), 1), hash160, 100, 1, 101); BOOST_CHECK_EQUAL(count, 0);
n1.insertClaim(v1);
n2.insertClaim(v2);
bool invalidClaim = n2.removeClaim(v1.outPoint, throwaway);
BOOST_CHECK_EQUAL(invalidClaim, false);
invalidClaim = n1.removeClaim(v2.outPoint, throwaway);
BOOST_CHECK_EQUAL(invalidClaim, false);
} }
BOOST_AUTO_TEST_SUITE_END() BOOST_AUTO_TEST_SUITE_END()


@@ -1,803 +0,0 @@
// Copyright (c) 2015-2019 The LBRY Foundation
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://opensource.org/licenses/mit-license.php
#include <test/claimtriefixture.h>
using namespace std;
BOOST_FIXTURE_TEST_SUITE(claimtrieexpirationfork_tests, RegTestingSetup)
/*
expiration
check that claims expire and lose the name
check that an expired claim is not updateable (may be changed in a future soft fork)
check that supports expire and can cause the supported bid to lose the name
*/
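// [editor's note, not part of the original file] a worked example of the schedule
// these cases rely on: with setExpirationForkHeight(1000000, 5, 1000000) the
// expiration time is 5 blocks, so a claim mined at height H is expected to drop
// out of the trie once the chain reaches height H + 5; the later tests that use
// an 80-block expiration follow the same H + expirationTime() boundary (e.g.
// "expires at 81" for a claim mined at height 1).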
BOOST_AUTO_TEST_CASE(claimtrie_expire_test)
{
ClaimTrieChainFixture fixture;
fixture.setExpirationForkHeight(1000000, 5, 1000000);
// check that claims expire and lose the name
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(),"test", "one", 2);
fixture.IncrementBlocks(fixture.expirationTime());
BOOST_CHECK(fixture.is_best_claim("test", tx1));
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(),"test", "one", 1);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", tx2));
fixture.DecrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", tx1));
fixture.DecrementBlocks(fixture.expirationTime());
// check that expired claims are not updateable (may be changed in a future soft fork)
CMutableTransaction tx3 = fixture.MakeClaim(fixture.GetCoinbase(),"test", "one", 2);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", tx3));
fixture.IncrementBlocks(fixture.expirationTime());
CMutableTransaction u1 = fixture.MakeUpdate(tx3, "test", "two", ClaimIdHash(tx3.GetHash(), 0), 2);
BOOST_CHECK(!fixture.is_best_claim("test",u1));
fixture.DecrementBlocks(fixture.expirationTime());
BOOST_CHECK(fixture.is_best_claim("test", tx3));
fixture.DecrementBlocks(1);
// check that supports expire and can cause the supported bid to lose the name
CMutableTransaction tx4 = fixture.MakeClaim(fixture.GetCoinbase(), "test", "one", 1);
CMutableTransaction tx5 = fixture.MakeClaim(fixture.GetCoinbase(), "test", "one", 2);
CMutableTransaction s1 = fixture.MakeSupport(fixture.GetCoinbase(), tx4, "test", 2);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test",tx4));
CMutableTransaction u2 = fixture.MakeUpdate(tx4, "test", "two", ClaimIdHash(tx4.GetHash(),0), 1);
CMutableTransaction u3 = fixture.MakeUpdate(tx5, "test", "two", ClaimIdHash(tx5.GetHash(),0), 2);
fixture.IncrementBlocks(fixture.expirationTime());
BOOST_CHECK(fixture.is_best_claim("test", u3));
fixture.DecrementBlocks(fixture.expirationTime());
BOOST_CHECK(fixture.is_best_claim("test", tx4));
fixture.DecrementBlocks(1);
// check updated claims will extend expiration
CMutableTransaction tx6 = fixture.MakeClaim(fixture.GetCoinbase(), "test", "one", 2);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", tx6));
CMutableTransaction u4 = fixture.MakeUpdate(tx6, "test", "two", ClaimIdHash(tx6.GetHash(), 0), 2);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", u4));
fixture.IncrementBlocks(fixture.expirationTime()-1);
BOOST_CHECK(fixture.is_best_claim("test", u4));
fixture.IncrementBlocks(1);
BOOST_CHECK(!fixture.is_best_claim("test", u4));
fixture.DecrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test", u4));
fixture.DecrementBlocks(fixture.expirationTime());
BOOST_CHECK(fixture.is_best_claim("test", tx6));
}
/*
claim expiration for hard fork
check claims do not expire post ExpirationForkHeight
check supports work post ExpirationForkHeight
*/
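// [editor's sketch, not part of the original file] the fork behaviour exercised
// below is assumed to hinge on the consensus helper the fixture overrides, roughly:
//
//   int64_t GetExpirationTime(int nHeight) const
//   {
//       return nHeight < nExtendedClaimExpirationForkHeight
//                  ? nOriginalClaimExpirationTime
//                  : nExtendedClaimExpirationTime;
//   }
//
// so with setExpirationForkHeight(7, 3, 6) names are on the 3-block schedule
// before the fork height and on the 6-block schedule from the fork height on.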
BOOST_AUTO_TEST_CASE(hardfork_claim_test)
{
ClaimTrieChainFixture fixture;
fixture.setExpirationForkHeight(7, 3, 6);
// First create a claim and make sure it expires pre-fork
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(),"test","one",3);
fixture.IncrementBlocks(4);
BOOST_CHECK(!fixture.is_best_claim("test",tx1));
fixture.DecrementBlocks(3);
BOOST_CHECK(fixture.is_best_claim("test",tx1));
fixture.IncrementBlocks(3);
BOOST_CHECK(!fixture.is_best_claim("test",tx1));
// Create a claim 1 block before the fork height that will expire after the fork height
fixture.IncrementBlocks(1);
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(),"test2","one",3);
fixture.IncrementBlocks(1);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 3);
// Disable future expirations and fast-forward past the fork height
fixture.IncrementBlocks(1);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 6);
// make sure decrementing to before the fork height will appropriately set back the
// expiration time to the original expiration time
fixture.DecrementBlocks(1);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 3);
fixture.IncrementBlocks(1);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 6);
// make sure that claim created 1 block before the fork expires as expected
// at the extended expiration times
BOOST_CHECK(fixture.is_best_claim("test2", tx2));
fixture.IncrementBlocks(5);
BOOST_CHECK(!fixture.is_best_claim("test2", tx2));
fixture.DecrementBlocks(5);
BOOST_CHECK(fixture.is_best_claim("test2", tx2));
// This first claim is still expired since it's pre-fork, even
// after fork activation
BOOST_CHECK(!fixture.is_best_claim("test", tx1));
// This new claim created at the fork height cannot expire at original expiration
CMutableTransaction tx3 = fixture.MakeClaim(fixture.GetCoinbase(),"test","one",1);
fixture.IncrementBlocks(1);
fixture.IncrementBlocks(3);
BOOST_CHECK(fixture.is_best_claim("test",tx3));
BOOST_CHECK(!fixture.is_best_claim("test",tx1));
fixture.DecrementBlocks(3);
// but it expires at the extended expiration, and not a single block below
fixture.IncrementBlocks(6);
BOOST_CHECK(!fixture.is_best_claim("test",tx3));
fixture.DecrementBlocks(6);
fixture.IncrementBlocks(5);
BOOST_CHECK(fixture.is_best_claim("test",tx3));
fixture.DecrementBlocks(5);
// Ensure that we cannot update the original pre-fork expired claim
CMutableTransaction u1 = fixture.MakeUpdate(tx1,"test","two",ClaimIdHash(tx1.GetHash(),0), 3);
fixture.IncrementBlocks(1);
BOOST_CHECK(!fixture.is_best_claim("test",u1));
// Ensure that supports for the expired claim don't support it
CMutableTransaction s1 = fixture.MakeSupport(fixture.GetCoinbase(),u1,"test",10);
BOOST_CHECK(!fixture.is_best_claim("test",u1));
// Ensure that we can update the new post-fork claim
CMutableTransaction u2 = fixture.MakeUpdate(tx3,"test","two",ClaimIdHash(tx3.GetHash(),0), 1);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test",u2));
// Ensure that supports work for the new post-fork claim
CMutableTransaction s2 = fixture.MakeSupport(fixture.GetCoinbase(),u2,"test",3);
BOOST_CHECK(fixture.is_best_claim("test",u2));
}
/*
support expiration for hard fork
*/
BOOST_AUTO_TEST_CASE(hardfork_support_test)
{
ClaimTrieChainFixture fixture;
fixture.setExpirationForkHeight(2, 2, 4);
// Create claim and support it before the fork height
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), "test", "one", 1);
CMutableTransaction s1 = fixture.MakeSupport(fixture.GetCoinbase(), tx1, "test", 2);
// this claim will win without the support
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(),"test","one",2);
fixture.IncrementBlocks(2);
// check that the claim expires as expected at the extended time, as does the support
fixture.IncrementBlocks(2);
BOOST_CHECK(fixture.is_best_claim("test",tx1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",3));
fixture.DecrementBlocks(2);
fixture.IncrementBlocks(3);
BOOST_CHECK(!fixture.is_best_claim("test",tx1));
fixture.DecrementBlocks(3);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("test",tx1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",3));
fixture.DecrementBlocks(1);
// update the claims at fork
fixture.DecrementBlocks(1);
CMutableTransaction u1 = fixture.MakeUpdate(tx1, "test", "two", ClaimIdHash(tx1.GetHash(),0), 1);
CMutableTransaction u2 = fixture.MakeUpdate(tx2, "test", "two", ClaimIdHash(tx2.GetHash(),0), 2);
fixture.IncrementBlocks(1);
BOOST_CHECK_EQUAL(Params().GetConsensus().nExtendedClaimExpirationForkHeight, chainActive.Height());
BOOST_CHECK(fixture.is_best_claim("test", u1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",3));
BOOST_CHECK(!fixture.is_claim_in_queue("test", tx1));
BOOST_CHECK(!fixture.is_claim_in_queue("test", tx2));
// check that the support expires as expected
fixture.IncrementBlocks(3);
BOOST_CHECK(fixture.is_best_claim("test", u2));
fixture.DecrementBlocks(3);
fixture.IncrementBlocks(2);
BOOST_CHECK(fixture.is_best_claim("test",u1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",3));
}
/*
activation_fall_through and supports_fall_through
Tests for where claims/supports in queues would be undone properly in a decrement.
See https://github.com/lbryio/lbrycrd/issues/243 for more details
*/
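// [editor's note, inferred from the two tests below, not part of the original
// file] "fall through" here means that when a block is disconnected, a claim or
// support that had been activated early (because the previous winner was spent)
// must fall back into its pending queue with the original activation height, so
// that re-advancing the chain activates it again cleanly instead of crashing or
// activating it twice.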
BOOST_AUTO_TEST_CASE(activations_fall_through)
{
ClaimTrieChainFixture fixture;
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), "A", "1", 1);
fixture.IncrementBlocks(3);
BOOST_CHECK_EQUAL(fixture.proportionalDelayFactor(), 1);
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(), "A", "2", 3);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx1));
fixture.IncrementBlocks(3);
BOOST_CHECK(fixture.is_best_claim("A", tx2));
fixture.DecrementBlocks(3);
fixture.Spend(tx1); // this will trigger early activation on tx2 claim
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx2));
fixture.DecrementBlocks(1); //reorg the early activation
BOOST_CHECK(fixture.is_best_claim("A", tx1));
fixture.Spend(tx1);
fixture.IncrementBlocks(1); // this should not cause tx2 to activate again and crash
BOOST_CHECK(fixture.is_best_claim("A", tx2));
}
BOOST_AUTO_TEST_CASE(supports_fall_through)
{
ClaimTrieChainFixture fixture;
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), "A", "1", 3);
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(), "A", "2", 1);
CMutableTransaction tx3 = fixture.MakeClaim(fixture.GetCoinbase(), "A", "3", 2);
fixture.IncrementBlocks(3);
BOOST_CHECK_EQUAL(fixture.proportionalDelayFactor(), 1);
CMutableTransaction sx2 = fixture.MakeSupport(fixture.GetCoinbase(), tx2, "A", 3);
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx1));
fixture.IncrementBlocks(3);
BOOST_CHECK(fixture.is_best_claim("A", tx2));
fixture.DecrementBlocks(3);
fixture.Spend(tx1); // this will trigger early activation
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx2));
fixture.DecrementBlocks(1); // reorg the early activation
BOOST_CHECK(fixture.is_best_claim("A", tx1));
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx1)); //tx2 support should not be active
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx1)); //tx2 support should not be active
fixture.IncrementBlocks(1);
BOOST_CHECK(fixture.is_best_claim("A", tx2)); //tx2 support should be active now
}
/*
claim/support expiration for hard fork, but with checks for disk procedures
*/
BOOST_AUTO_TEST_CASE(hardfork_disk_test)
{
ClaimTrieChainFixture fixture;
fixture.setExpirationForkHeight(7, 3, 6);
// Check that incrementing to the fork height, then resetting from disk, will give the proper expiration time
BOOST_CHECK_EQUAL(fixture.expirationTime(), 3);
fixture.IncrementBlocks(7, true);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 6);
fixture.ReadFromDisk(chainActive.Tip());
BOOST_CHECK_EQUAL(fixture.expirationTime(), 6);
// Create a claim and support 1 block before the fork height that will expire after the fork height.
// Reset to disk, increment past the fork height and make sure we get
// proper behavior
fixture.DecrementBlocks(2);
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), "test", "one", 1);
CMutableTransaction s1 = fixture.MakeSupport(fixture.GetCoinbase(), tx1, "test", 1);
fixture.IncrementBlocks(1);
fixture.ReadFromDisk(chainActive.Tip());
BOOST_CHECK_EQUAL(fixture.expirationTime(), 3);
fixture.IncrementBlocks(1);
BOOST_CHECK_EQUAL(fixture.expirationTime(), 6);
BOOST_CHECK(fixture.is_best_claim("test", tx1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",2));
fixture.IncrementBlocks(2);
BOOST_CHECK(fixture.is_best_claim("test", tx1));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test",2));
fixture.DecrementBlocks(2);
fixture.IncrementBlocks(5);
BOOST_CHECK(!fixture.is_best_claim("test", tx1));
// Create a claim and support before the fork height, reset to disk, update the claim
// increment past the fork height and make sure we get proper behavior
fixture.DecrementBlocks();
fixture.setExpirationForkHeight(3, 5, 6);
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(),"test2","one",1);
CMutableTransaction s2 = fixture.MakeSupport(fixture.GetCoinbase(),tx2,"test2",1);
fixture.IncrementBlocks(1);
fixture.ReadFromDisk(chainActive.Tip());
CMutableTransaction u2 = fixture.MakeUpdate(tx2, "test2", "two", ClaimIdHash(tx2.GetHash(), 0), 1);
// increment to fork
fixture.IncrementBlocks(2);
BOOST_CHECK(fixture.is_best_claim("test2", u2));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test2",2));
// increment to original expiration, should not be expired
fixture.IncrementBlocks(2);
BOOST_CHECK(fixture.is_best_claim("test2", u2));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test2", 2));
fixture.DecrementBlocks(2);
// increment to extended expiration, should be expired and not one block before
fixture.IncrementBlocks(5);
BOOST_CHECK(!fixture.is_best_claim("test2", u2));
fixture.DecrementBlocks(5);
fixture.IncrementBlocks(4);
BOOST_CHECK(fixture.is_best_claim("test2", u2));
BOOST_CHECK(fixture.best_claim_effective_amount_equals("test2", 1)); // the support expires one block before
}
BOOST_AUTO_TEST_CASE(claim_expiration_test)
{
ClaimTrieChainFixture fixture;
std::string sName("atest");
std::string sValue("testa");
int nThrowaway;
// set expiration time to 80 blocks after the block is created
fixture.setExpirationForkHeight(1000000, 80, 1000000);
// create a claim. verify the expiration event has been scheduled.
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), sName, sValue, 10);
COutPoint tx1OutPoint(tx1.GetHash(), 0);
fixture.IncrementBlocks(1, true);
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// advance until the expiration event occurs. verify the expiration event occurs on time.
fixture.IncrementBlocks(79); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
fixture.IncrementBlocks(1); // 81
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// roll forward a bit and then roll back to before the expiration event. verify the claim is reinserted. verify the expiration event is scheduled again.
fixture.IncrementBlocks(20); // 101
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
fixture.DecrementBlocks(21); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// advance until the expiration event occurs. verify the expiration event occurs on time.
fixture.IncrementBlocks(1); // 81
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// roll back to before the expiration event. verify the claim is reinserted. verify the expiration event is scheduled again.
fixture.DecrementBlocks(2); // 79
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// roll back some more.
fixture.DecrementBlocks(39); // 40
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// spend the claim. verify the expiration event is removed.
CMutableTransaction tx2 = fixture.Spend(tx1);
fixture.IncrementBlocks(1); // 41
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// roll back the spend. verify the expiration event is returned.
fixture.DecrementBlocks(1); // 40
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// advance until the expiration event occurs. verify the event occurs on time.
fixture.IncrementBlocks(40); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
fixture.IncrementBlocks(1); // 81
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// spend the expired claim
fixture.CommitTx(tx2);
fixture.IncrementBlocks(1); // 82
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// undo the spend. verify everything remains empty.
fixture.DecrementBlocks(1); // 81
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// roll back to before the expiration event. verify the claim is reinserted. verify the expiration event is scheduled again.
fixture.DecrementBlocks(1); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// verify the expiration event happens at the right time again
fixture.IncrementBlocks(1); // 81
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// roll back to before the expiration event. verify it gets reinserted and expiration gets scheduled.
fixture.DecrementBlocks(1); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// roll all the way back. verify the claim is removed and the expiration event is removed.
fixture.DecrementBlocks(); // 0
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
// Make sure that when a claim expires, a lesser claim for the same name takes over
CClaimValue val;
// create one claim for the name
fixture.CommitTx(tx1);
fixture.IncrementBlocks(1, true); // 1
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// advance a little while and insert the second claim
fixture.IncrementBlocks(4); // 5
CMutableTransaction tx3 = fixture.MakeClaim(fixture.GetCoinbase(), sName, sValue, 5);
COutPoint tx3OutPoint(tx3.GetHash(), 0);
fixture.IncrementBlocks(1); // 6
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(!fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
// advance until tx3 is valid, ensure tx1 is winning
fixture.IncrementBlocks(4); // 10
BOOST_CHECK(!fixture.queueEmpty());
BOOST_CHECK(fixture.haveClaimInQueue(sName, tx3OutPoint, nThrowaway));
fixture.IncrementBlocks(1); // 11
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint, tx1OutPoint);
uint256 tx1MerkleHash = fixture.getMerkleHash();
// roll back to before tx3 is valid
fixture.DecrementBlocks(1); // 10
// advance again until tx is valid
fixture.IncrementBlocks(1); // 11
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint, tx1OutPoint);
BOOST_CHECK_EQUAL(tx1MerkleHash, fixture.getMerkleHash());
// advance until the expiration event occurs. verify the expiration event occurs on time.
fixture.IncrementBlocks(69, true); // 80
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
fixture.IncrementBlocks(1); // 81
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint, tx3OutPoint);
BOOST_CHECK(tx1MerkleHash != fixture.getMerkleHash());
// spend tx1
fixture.CommitTx(tx2);
fixture.IncrementBlocks(1); // 82
// roll back to when tx1 and tx3 are in the trie and tx1 is winning
fixture.DecrementBlocks(); // 11
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.expirationQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint, tx1OutPoint);
BOOST_CHECK_EQUAL(tx1MerkleHash, fixture.getMerkleHash());
// roll all the way back
fixture.DecrementBlocks();
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.expirationQueueEmpty());
}
BOOST_AUTO_TEST_CASE(expiring_supports_test)
{
ClaimTrieChainFixture fixture;
std::string sName("atest");
std::string sValue1("testa");
std::string sValue2("testb");
CClaimValue val;
std::vector<uint256> blocks_to_invalidate;
fixture.setExpirationForkHeight(1000000, 80, 1000000);
// to be active bid must have: a higher block number and current block >= (current height - block number) / 32
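// --- editor's sketch, not part of the original test ------------------------
// The rule referenced above is assumed to be the proportional activation delay
// used throughout these tests: a bid that does not win immediately only becomes
// active after (claim height - height of last takeover) / delay factor blocks,
// capped at 4032 (the factor is 32 on mainnet and 1 in these regtest fixtures).
auto sketchActivationDelay = [](int claimHeight, int lastTakeoverHeight, int delayFactor) {
    int delay = (claimHeight - lastTakeoverHeight) / delayFactor;
    return delay < 4032 ? delay : 4032;
};
// e.g. a bid at height 22 on a name last taken over at height 2 waits
// (22 - 2) / 1 = 20 blocks, matching the arithmetic in the comments below.
BOOST_CHECK_EQUAL(sketchActivationDelay(22, 2, 1), 20);
// ----------------------------------------------------------------------------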
// Verify that supports expire
// Create a 1 LBC claim (tx1)
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), sName, sValue1, 1);
fixture.IncrementBlocks(1); // 1, expires at 81
BOOST_CHECK(pcoinsTip->HaveCoin(COutPoint(tx1.GetHash(), 0)));
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
// Create a 5 LBC support (tx3)
CMutableTransaction tx3 = fixture.MakeSupport(fixture.GetCoinbase(), tx1, sName, 5);
fixture.IncrementBlocks(1); // 2, expires at 82
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
// Advance some, then insert 5 LBC claim (tx2)
fixture.IncrementBlocks(19); // 21
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(), sName, sValue2, 5);
fixture.IncrementBlocks(1); // 22, activating in (22 - 2) / 1 = 20 blocks (but not then active because support still holds tx1 up)
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(!fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
uint256 rootMerkleHash = fixture.getMerkleHash();
// Advance until tx2 is valid
fixture.IncrementBlocks(20); // 42
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(!fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK_EQUAL(rootMerkleHash, fixture.getMerkleHash());
fixture.IncrementBlocks(1); // 43
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx1.GetHash());
rootMerkleHash = fixture.getMerkleHash();
// Update tx1 so that it expires after tx3 expires
uint160 claimId = ClaimIdHash(tx1.GetHash(), 0);
CMutableTransaction tx4 = fixture.MakeUpdate(tx1, sName, sValue1, claimId, tx1.vout[0].nValue);
fixture.IncrementBlocks(1); // 44
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx4.GetHash());
BOOST_CHECK(rootMerkleHash != fixture.getMerkleHash());
// Advance until the support expires
fixture.IncrementBlocks(37); // 81
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
rootMerkleHash = fixture.getMerkleHash();
fixture.IncrementBlocks(1); // 82
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx2.GetHash());
BOOST_CHECK(rootMerkleHash != fixture.getMerkleHash());
rootMerkleHash = fixture.getMerkleHash();
// undo the block, make sure control goes back
fixture.DecrementBlocks(1); // 81
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx4.GetHash());
BOOST_CHECK(rootMerkleHash != fixture.getMerkleHash());
// redo the block, make sure it expires again
fixture.IncrementBlocks(1); // 82
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx2.GetHash());
rootMerkleHash = fixture.getMerkleHash();
// roll back some, spend the support, and make sure nothing unexpected
// happens at the time the support should have expired
fixture.DecrementBlocks(19); // 63
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx4.GetHash());
BOOST_CHECK(rootMerkleHash != fixture.getMerkleHash());
fixture.Spend(tx3);
fixture.IncrementBlocks(1); // 64
blocks_to_invalidate.push_back(chainActive.Tip()->GetBlockHash());
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx2.GetHash());
fixture.IncrementBlocks(20); // 84
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx2.GetHash());
//undo the spend, and make sure it still expires on time
fixture.DecrementBlocks(21); // 63
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx4.GetHash());
fixture.IncrementBlocks(18); // 81
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(!fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx4.GetHash());
fixture.IncrementBlocks(1); // 82
BOOST_CHECK(!pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
BOOST_CHECK(fixture.getInfoForName(sName, val));
BOOST_CHECK_EQUAL(val.outPoint.hash, tx2.GetHash());
// roll all the way back
fixture.DecrementBlocks(82);
BOOST_CHECK(pclaimTrie->empty());
BOOST_CHECK(fixture.queueEmpty());
BOOST_CHECK(fixture.supportEmpty());
BOOST_CHECK(fixture.supportQueueEmpty());
}
BOOST_AUTO_TEST_CASE(get_claim_by_id_test_3)
{
ClaimTrieChainFixture fixture;
fixture.setExpirationForkHeight(1000000, 5, 1000000);
const std::string name = "test";
CMutableTransaction tx1 = fixture.MakeClaim(fixture.GetCoinbase(), name, "one", 1);
uint160 claimId = ClaimIdHash(tx1.GetHash(), 0);
fixture.IncrementBlocks(1);
CClaimValue claimValue;
std::string claimName;
BOOST_CHECK(getClaimById(claimId, claimName, &claimValue));
BOOST_CHECK_EQUAL(claimName, name);
BOOST_CHECK_EQUAL(claimValue.claimId, claimId);
// make second claim with activation delay 1
CMutableTransaction tx2 = fixture.MakeClaim(fixture.GetCoinbase(), name, "one", 2);
uint160 claimId2 = ClaimIdHash(tx2.GetHash(), 0);
fixture.IncrementBlocks(1);
// second claim is not activated yet, but can still be accessed by claim id
BOOST_CHECK(fixture.is_best_claim(name, tx1));
BOOST_CHECK(getClaimById(claimId2, claimName, &claimValue));
BOOST_CHECK_EQUAL(claimName, name);
BOOST_CHECK_EQUAL(claimValue.claimId, claimId2);
fixture.IncrementBlocks(1);
// second claim has activated
BOOST_CHECK(fixture.is_best_claim(name, tx2));
BOOST_CHECK(getClaimById(claimId2, claimName, &claimValue));
BOOST_CHECK_EQUAL(claimName, name);
BOOST_CHECK_EQUAL(claimValue.claimId, claimId2);
fixture.DecrementBlocks(1);
// second claim has been deactivated via decrement
// should still be accessible via claim id
BOOST_CHECK(fixture.is_best_claim(name, tx1));
BOOST_CHECK(getClaimById(claimId2, claimName, &claimValue));
BOOST_CHECK_EQUAL(claimName, name);
BOOST_CHECK_EQUAL(claimValue.claimId, claimId2);
fixture.IncrementBlocks(1);
// second claim should have been reactivated via increment
BOOST_CHECK(fixture.is_best_claim(name, tx2));
BOOST_CHECK(getClaimById(claimId2, claimName, &claimValue));
BOOST_CHECK_EQUAL(claimName, name);
BOOST_CHECK_EQUAL(claimValue.claimId, claimId2);
}
BOOST_AUTO_TEST_SUITE_END()


@@ -1,419 +0,0 @@
// Copyright (c) 2015-2019 The LBRY Foundation
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://opensource.org/licenses/mit-license.php
#include <functional>
#include <test/claimtriefixture.h>
using namespace std;
BOOST_FIXTURE_TEST_SUITE(claimtriefixture_tests, RegTestingSetup)
BOOST_AUTO_TEST_CASE(claimtriefixture_noop)
{
BOOST_REQUIRE(true);
}
BOOST_AUTO_TEST_SUITE_END()
CMutableTransaction BuildTransaction(const CTransaction& prev, uint32_t prevout, unsigned int numOutputs, int locktime)
{
CMutableTransaction tx;
tx.nVersion = CTransaction::CURRENT_VERSION;
tx.vin.resize(1);
tx.vout.resize(numOutputs);
tx.vin[0].prevout.hash = prev.GetHash();
tx.vin[0].prevout.n = prevout;
tx.vin[0].scriptSig = CScript();
if (locktime != 0) {
// Use nLockTime so the transaction only becomes valid locktime blocks in the future
tx.nLockTime = chainActive.Height() + locktime;
tx.vin[0].nSequence = 0xffffffff - 1;
} else {
tx.nLockTime = 1 << 31; // Disable BIP68
tx.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
}
CAmount valuePerOutput = prev.vout[prevout].nValue;
unsigned int numOutputsCopy = numOutputs;
while ((numOutputsCopy = numOutputsCopy >> 1) > 0)
valuePerOutput = valuePerOutput >> 1;
for (unsigned int i = 0; i < numOutputs; ++i) {
tx.vout[i].scriptPubKey = CScript();
tx.vout[i].nValue = valuePerOutput;
}
return tx;
}
CMutableTransaction BuildTransaction(const uint256& prevhash)
{
CMutableTransaction tx;
tx.nVersion = 1;
tx.nLockTime = 0;
tx.vin.resize(1);
tx.vout.resize(1);
tx.vin[0].prevout.hash = prevhash;
tx.vin[0].prevout.n = 0;
tx.vin[0].scriptSig = CScript();
tx.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
tx.vout[0].scriptPubKey = CScript();
tx.vout[0].nValue = 0;
return tx;
}
BlockAssembler AssemblerForTest()
{
BlockAssembler::Options options;
options.nBlockMaxWeight = DEFAULT_BLOCK_MAX_WEIGHT;
options.blockMinFeeRate = CFeeRate(0);
return BlockAssembler(Params(), options);
}
ClaimTrieChainFixture::ClaimTrieChainFixture() : CClaimTrieCache(pclaimTrie),
unique_block_counter(0), normalization_original(-1), expirationForkHeight(-1), forkhash_original(-1)
{
fRequireStandard = false;
BOOST_CHECK_EQUAL(nNextHeight, chainActive.Height() + 1);
setNormalizationForkHeight(1000000);
gArgs.ForceSetArg("-limitancestorcount", "1000000");
gArgs.ForceSetArg("-limitancestorsize", "1000000");
gArgs.ForceSetArg("-limitdescendantcount", "1000000");
gArgs.ForceSetArg("-limitdescendantsize", "1000000");
num_txs_for_next_block = 0;
coinbase_txs_used = 0;
unique_block_counter = 0;
added_unchecked = false;
// generate coinbases to spend
CreateCoinbases(40, coinbase_txs);
}
ClaimTrieChainFixture::~ClaimTrieChainFixture()
{
added_unchecked = false;
DecrementBlocks(chainActive.Height());
auto& consensus = const_cast<Consensus::Params&>(Params().GetConsensus());
if (normalization_original >= 0)
consensus.nNormalizedNameForkHeight = normalization_original;
if (expirationForkHeight >= 0) {
consensus.nExtendedClaimExpirationForkHeight = expirationForkHeight;
consensus.nExtendedClaimExpirationTime = extendedExpiration;
consensus.nOriginalClaimExpirationTime = originalExpiration;
}
if (forkhash_original >= 0)
consensus.nAllClaimsInMerkleForkHeight = forkhash_original;
}
void ClaimTrieChainFixture::setExpirationForkHeight(int targetMinusCurrent, int64_t preForkExpirationTime, int64_t postForkExpirationTime)
{
int target = chainActive.Height() + targetMinusCurrent;
auto& consensus = const_cast<Consensus::Params&>(Params().GetConsensus());
if (expirationForkHeight < 0) {
expirationForkHeight = consensus.nExtendedClaimExpirationForkHeight;
originalExpiration = consensus.nOriginalClaimExpirationTime;
extendedExpiration = consensus.nExtendedClaimExpirationTime;
}
consensus.nExtendedClaimExpirationForkHeight = target;
consensus.nExtendedClaimExpirationTime = postForkExpirationTime;
consensus.nOriginalClaimExpirationTime = preForkExpirationTime;
setExpirationTime(targetMinusCurrent >= 0 ? preForkExpirationTime : postForkExpirationTime);
}
void ClaimTrieChainFixture::setNormalizationForkHeight(int targetMinusCurrent)
{
int target = chainActive.Height() + targetMinusCurrent;
auto& consensus = const_cast<Consensus::Params&>(Params().GetConsensus());
if (normalization_original < 0)
normalization_original = consensus.nNormalizedNameForkHeight;
consensus.nNormalizedNameForkHeight = target;
}
void ClaimTrieChainFixture::setHashForkHeight(int targetMinusCurrent)
{
int target = chainActive.Height() + targetMinusCurrent;
auto& consensus = const_cast<Consensus::Params&>(Params().GetConsensus());
if (forkhash_original < 0)
forkhash_original = consensus.nAllClaimsInMerkleForkHeight;
consensus.nAllClaimsInMerkleForkHeight = target;
}
bool ClaimTrieChainFixture::CreateBlock(const std::unique_ptr<CBlockTemplate>& pblocktemplate)
{
CBlock* pblock = &pblocktemplate->block;
{
LOCK(cs_main);
pblock->nVersion = 5;
pblock->hashPrevBlock = chainActive.Tip()->GetBlockHash();
pblock->nTime = chainActive.Tip()->GetBlockTime() + Params().GetConsensus().nPowTargetSpacing;
CMutableTransaction txCoinbase(*pblock->vtx[0]);
txCoinbase.vin[0].scriptSig = CScript() << int(chainActive.Height() + 1) << int(++unique_block_counter);
txCoinbase.vout[0].nValue = GetBlockSubsidy(chainActive.Height() + 1, Params().GetConsensus());
pblock->vtx[0] = MakeTransactionRef(std::move(txCoinbase));
pblock->hashMerkleRoot = BlockMerkleRoot(*pblock);
for (uint32_t i = 0;; ++i) {
pblock->nNonce = i;
if (CheckProofOfWork(pblock->GetPoWHash(), pblock->nBits, Params().GetConsensus()))
break;
}
}
auto success = ProcessNewBlock(Params(), std::make_shared<const CBlock>(*pblock), true, nullptr);
return success && pblock->GetHash() == chainActive.Tip()->GetBlockHash();
}
bool ClaimTrieChainFixture::CreateCoinbases(unsigned int num_coinbases, std::vector<CTransaction>& coinbases)
{
std::unique_ptr<CBlockTemplate> pblocktemplate;
coinbases.clear();
BOOST_CHECK(pblocktemplate = AssemblerForTest().CreateNewBlock(CScript() << OP_TRUE));
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), 1);
for (unsigned int i = 0; i < 100 + num_coinbases; ++i) {
BOOST_CHECK(CreateBlock(pblocktemplate));
if (coinbases.size() < num_coinbases)
coinbases.push_back(std::move(*pblocktemplate->block.vtx[0]));
}
return true;
}
void ClaimTrieChainFixture::CommitTx(const CMutableTransaction &tx, bool has_locktime)
{
num_txs_for_next_block++;
if (has_locktime) {
added_unchecked = true;
TestMemPoolEntryHelper entry;
LOCK(mempool.cs);
mempool.addUnchecked(tx.GetHash(), entry.Fee(0).Time(GetTime()).SpendsCoinbase(true).FromTx(tx));
} else {
CValidationState state;
CAmount txFeeRate = CAmount(0);
LOCK(cs_main);
BOOST_CHECK_EQUAL(AcceptToMemoryPool(mempool, state, MakeTransactionRef(tx), nullptr, nullptr, false, txFeeRate, false), true);
}
}
// spend a bid into some non claimtrie related unspent
CMutableTransaction ClaimTrieChainFixture::Spend(const CTransaction &prev)
{
CMutableTransaction tx = BuildTransaction(prev, 0);
tx.vout[0].scriptPubKey = CScript() << OP_TRUE;
tx.vout[0].nValue = prev.vout[0].nValue;
CommitTx(tx);
return tx;
}
// make claim at the current block
CMutableTransaction ClaimTrieChainFixture::MakeClaim(const CTransaction& prev, const std::string& name, const std::string& value, CAmount quantity, int locktime)
{
uint32_t prevout = prev.vout.size() - 1;
while (prevout > 0 && prev.vout[prevout].nValue < quantity)
--prevout;
CMutableTransaction tx = BuildTransaction(prev, prevout, prev.vout[prevout].nValue > quantity ? 2 : 1, locktime);
tx.vout[0].scriptPubKey = ClaimNameScript(name, value);
tx.vout[0].nValue = quantity;
if (tx.vout.size() > 1) {
tx.vout[1].scriptPubKey = CScript() << OP_TRUE;
tx.vout[1].nValue = prev.vout[prevout].nValue - quantity;
}
CommitTx(tx, locktime != 0);
return tx;
}
CMutableTransaction ClaimTrieChainFixture::MakeClaim(const CTransaction& prev, const std::string& name, const std::string& value)
{
return MakeClaim(prev, name, value, prev.vout[0].nValue, 0);
}
// make support at the current block
CMutableTransaction ClaimTrieChainFixture::MakeSupport(const CTransaction &prev, const CTransaction &claimtx, const std::string& name, CAmount quantity)
{
uint32_t prevout = prev.vout.size() - 1;
while (prevout > 0 && prev.vout[prevout].nValue < quantity)
--prevout;
CMutableTransaction tx = BuildTransaction(prev, prevout, prev.vout[prevout].nValue > quantity ? 2 : 1);
tx.vout[0].scriptPubKey = SupportClaimScript(name, ClaimIdHash(claimtx.GetHash(), 0));
tx.vout[0].nValue = quantity;
if (tx.vout.size() > 1) {
tx.vout[1].scriptPubKey = CScript() << OP_TRUE;
tx.vout[1].nValue = prev.vout[prevout].nValue - quantity;
}
CommitTx(tx);
return tx;
}
// make update at the current block
CMutableTransaction ClaimTrieChainFixture::MakeUpdate(const CTransaction &prev, const std::string& name, const std::string& value, const uint160& claimId, CAmount quantity)
{
CMutableTransaction tx = BuildTransaction(prev, 0);
tx.vout[0].scriptPubKey = UpdateClaimScript(name, claimId, value);
tx.vout[0].nValue = quantity;
CommitTx(tx);
return tx;
}
CTransaction ClaimTrieChainFixture::GetCoinbase()
{
return coinbase_txs.at(coinbase_txs_used++);
}
// create num_blocks blocks
void ClaimTrieChainFixture::IncrementBlocks(int num_blocks, bool mark)
{
if (mark)
marks.push_back(chainActive.Height());
clear(); // clears the internal cache
for (int i = 0; i < num_blocks; ++i) {
CScript coinbase_scriptpubkey;
coinbase_scriptpubkey << CScriptNum(chainActive.Height());
std::unique_ptr<CBlockTemplate> pblocktemplate = AssemblerForTest().CreateNewBlock(coinbase_scriptpubkey);
BOOST_CHECK(pblocktemplate != nullptr);
if (!added_unchecked)
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), num_txs_for_next_block + 1);
BOOST_CHECK_EQUAL(CreateBlock(pblocktemplate), true);
num_txs_for_next_block = 0;
nNextHeight = chainActive.Height() + 1;
}
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight - 1));
}
// disconnect num_blocks blocks from tip
void ClaimTrieChainFixture::DecrementBlocks(int num_blocks)
{
clear(); // clears the internal cache
CValidationState state;
{
LOCK(cs_main);
CBlockIndex* pblockindex = chainActive[chainActive.Height() - num_blocks + 1];
BOOST_CHECK_EQUAL(InvalidateBlock(state, Params(), pblockindex), true);
}
BOOST_CHECK_EQUAL(state.IsValid(), true);
BOOST_CHECK_EQUAL(ActivateBestChain(state, Params()), true);
mempool.clear();
num_txs_for_next_block = 0;
nNextHeight = chainActive.Height() + 1;
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight - 1));
}
// decrement back to last mark
void ClaimTrieChainFixture::DecrementBlocks()
{
int mark = marks.back();
marks.pop_back();
DecrementBlocks(chainActive.Height() - mark);
}
template <typename K>
bool ClaimTrieChainFixture::keyTypeEmpty(uint8_t keyType)
{
boost::scoped_ptr<CDBIterator> pcursor(base->db->NewIterator());
pcursor->SeekToFirst();
while (pcursor->Valid()) {
std::pair<uint8_t, K> key;
if (pcursor->GetKey(key))
if (key.first == keyType)
return false;
pcursor->Next();
}
return true;
}
bool ClaimTrieChainFixture::queueEmpty()
{
for (const auto& claimQueue: claimQueueCache)
if (!claimQueue.second.empty())
return false;
return keyTypeEmpty<int>(CLAIM_QUEUE_ROW);
}
bool ClaimTrieChainFixture::expirationQueueEmpty()
{
for (const auto& expirationQueue: expirationQueueCache)
if (!expirationQueue.second.empty())
return false;
return keyTypeEmpty<int>(CLAIM_EXP_QUEUE_ROW);
}
bool ClaimTrieChainFixture::supportEmpty()
{
for (const auto& entry: supportCache)
if (!entry.second.empty())
return false;
return supportCache.empty() && keyTypeEmpty<std::string>(SUPPORT);
}
bool ClaimTrieChainFixture::supportQueueEmpty()
{
for (const auto& support: supportQueueCache)
if (!support.second.empty())
return false;
return keyTypeEmpty<int>(SUPPORT_QUEUE_ROW);
}
int ClaimTrieChainFixture::proportionalDelayFactor()
{
return base->nProportionalDelayFactor;
}
boost::test_tools::predicate_result negativeResult(const std::function<void(boost::wrap_stringstream&)>& callback)
{
boost::test_tools::predicate_result res(false);
callback(res.message());
return res;
}
boost::test_tools::predicate_result negativeResult(const std::string& message)
{
return negativeResult([&message](boost::wrap_stringstream& stream) {
stream << message;
});
}
// is a claim in queue
boost::test_tools::predicate_result ClaimTrieChainFixture::is_claim_in_queue(const std::string& name, const CTransaction &tx)
{
COutPoint outPoint(tx.GetHash(), 0);
int validAtHeight;
if (haveClaimInQueue(name, outPoint, validAtHeight))
return true;
return negativeResult("Is not a claim in queue");
}
// check if tx is best claim based on outpoint
boost::test_tools::predicate_result ClaimTrieChainFixture::is_best_claim(const std::string& name, const CTransaction &tx)
{
CClaimValue val;
COutPoint outPoint(tx.GetHash(), 0);
bool have_claim = haveClaim(name, outPoint);
bool have_info = getInfoForName(name, val);
if (have_claim && have_info && val.outPoint == outPoint)
return true;
return negativeResult("Is not best claim");
}
// check effective quantity of best claim
boost::test_tools::predicate_result ClaimTrieChainFixture::best_claim_effective_amount_equals(const std::string& name, CAmount amount)
{
CClaimValue val;
bool have_info = getInfoForName(name, val);
if (!have_info)
return negativeResult("No claim found");
CAmount effective_amount = getClaimsForName(name).find(val.claimId).effectiveAmount;
if (effective_amount != amount)
return negativeResult([amount, effective_amount](boost::wrap_stringstream& stream) {
stream << amount << " != " << effective_amount;
});
return true;
}
std::size_t ClaimTrieChainFixture::getTotalNamesInTrie() const
{
return base->getTotalNamesInTrie();
}


@@ -1,124 +0,0 @@
// Copyright (c) 2015-2019 The LBRY Foundation
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://opensource.org/licenses/mit-license.php
#ifndef _CLAIMTRIEFIXTURE_H_
#define _CLAIMTRIEFIXTURE_H_
#include <chainparams.h>
#include <claimtrie.h>
#include <coins.h>
#include <consensus/merkle.h>
#include <consensus/validation.h>
#include <miner.h>
#include <nameclaim.h>
#include <policy/policy.h>
#include <pow.h>
#include <primitives/transaction.h>
#include <random.h>
#include <rpc/claimrpchelp.h>
#include <rpc/server.h>
#include <streams.h>
#include <test/test_bitcoin.h>
#include <txmempool.h>
#include <validation.h>
#include <boost/test/unit_test.hpp>
#include <iostream>
extern ::CChainState g_chainstate;
extern ::ArgsManager gArgs;
extern std::vector<std::string> random_strings(std::size_t count);
extern bool getClaimById(const uint160&, std::string&, CClaimValue*);
CMutableTransaction BuildTransaction(const uint256& prevhash);
CMutableTransaction BuildTransaction(const CTransaction& prev, uint32_t prevout=0, unsigned int numOutputs=1, int locktime=0);
BlockAssembler AssemblerForTest();
// Test Fixtures
struct ClaimTrieChainFixture: public CClaimTrieCache
{
std::vector<CTransaction> coinbase_txs;
std::vector<int> marks;
int coinbase_txs_used;
int unique_block_counter;
int normalization_original;
unsigned int num_txs_for_next_block;
bool added_unchecked;
int64_t expirationForkHeight;
int64_t originalExpiration;
int64_t extendedExpiration;
int64_t forkhash_original;
using CClaimTrieCache::getSupportsForName;
ClaimTrieChainFixture();
~ClaimTrieChainFixture();
void setExpirationForkHeight(int targetMinusCurrent, int64_t preForkExpirationTime, int64_t postForkExpirationTime);
void setNormalizationForkHeight(int targetMinusCurrent);
void setHashForkHeight(int targetMinusCurrent);
bool CreateBlock(const std::unique_ptr<CBlockTemplate>& pblocktemplate);
bool CreateCoinbases(unsigned int num_coinbases, std::vector<CTransaction>& coinbases);
void CommitTx(const CMutableTransaction &tx, bool has_locktime=false);
// spend a bid into some non claimtrie related unspent
CMutableTransaction Spend(const CTransaction &prev);
// make claim at the current block
CMutableTransaction MakeClaim(const CTransaction& prev, const std::string& name, const std::string& value, CAmount quantity, int locktime=0);
CMutableTransaction MakeClaim(const CTransaction& prev, const std::string& name, const std::string& value);
// make support at the current block
CMutableTransaction MakeSupport(const CTransaction &prev, const CTransaction &claimtx, const std::string& name, CAmount quantity);
// make update at the current block
CMutableTransaction MakeUpdate(const CTransaction &prev, const std::string& name, const std::string& value, const uint160& claimId, CAmount quantity);
CTransaction GetCoinbase();
// create num_blocks blocks
void IncrementBlocks(int num_blocks, bool mark = false);
// disconnect num_blocks blocks from tip
void DecrementBlocks(int num_blocks);
// decrement back to last mark
void DecrementBlocks();
bool queueEmpty();
bool expirationQueueEmpty();
bool supportEmpty();
bool supportQueueEmpty();
int proportionalDelayFactor();
// is a claim in queue
boost::test_tools::predicate_result is_claim_in_queue(const std::string& name, const CTransaction &tx);
// check if tx is best claim based on outpoint
boost::test_tools::predicate_result is_best_claim(const std::string& name, const CTransaction &tx);
// check effective quantity of best claim
boost::test_tools::predicate_result best_claim_effective_amount_equals(const std::string& name, CAmount amount);
std::size_t getTotalNamesInTrie() const;
private:
template <typename K>
bool keyTypeEmpty(uint8_t keyType);
};
#endif // _CLAIMTRIEFIXTURE_H_

Some files were not shown because too many files have changed in this diff.