Use fully static linkage #364

Closed
bvbfan wants to merge 78 commits from static_link into master
312 changed files with 263947 additions and 37882 deletions

View file

@@ -225,6 +225,10 @@ It is easy to solo mine on testnet. (It's easy on mainnet too, but much harder t
We maintain a mailing list for notifications of upgrades, security issues, and soft/hard forks. To join, visit [https://lbry.com/forklist](https://lbry.com/forklist).
+ ## Mailing List
+ We maintain a mailing list for notifications of upgrades, security issues, and soft/hard forks. To join, visit [https://lbry.com/forklist](https://lbry.com/forklist).
## License
This project is MIT licensed. For the full license, see [LICENSE](LICENSE).

View file

@@ -12,7 +12,7 @@ function HELP {
echo "-q: compile the QT GUI (not working at present)"
echo "-d: force a rebuild of dependencies"
echo "-u: run the unit tests when done"
- echo "-g: include debug symbols"
+ echo "-g: compile in debug mode"
echo "-h: show help"
exit 1
}
@@ -56,16 +56,18 @@ done
echo "Compiling with ${PARALLEL_JOBS} jobs in parallel."
- BUILD_FLAGS=(CXXFLAGS="-O3 -march=native")
+ BUILD_FLAGS=(CXXFLAGS="-O3 -march=native -g")
+ DEBUG_DEPENDS=""
if test "$COMPILE_WITH_DEBUG" = true; then
- BUILD_FLAGS=(--with-debug CXXFLAGS="-Og -g")
+ BUILD_FLAGS=(--with-debug CXXFLAGS="-O0 -g")
+ DEBUG_DEPENDS="DEBUG=1"
fi
cd depends
if test "$REBUILD_DEPENDENCIES" = true; then
make clean
fi
- make -j${PARALLEL_JOBS} ${DO_NOT_COMPILE_THE_GUI} V=1
+ make -j${PARALLEL_JOBS} ${DO_NOT_COMPILE_THE_GUI} ${DEBUG_DEPENDS} V=1
cd ..
LC_ALL=C autoreconf --install
@@ -79,4 +81,4 @@ fi
if test $? -eq 0 && "$RUN_UNIT_TESTS" = true; then
./src/test/test_lbrycrd
fi

View file

@@ -2,8 +2,8 @@ dnl require autoconf 2.60 (AS_ECHO/AS_ECHO_N)
AC_PREREQ([2.60])
define(_CLIENT_VERSION_MAJOR, 0)
define(_CLIENT_VERSION_MINOR, 17)
- define(_CLIENT_VERSION_REVISION, 3)
+ define(_CLIENT_VERSION_REVISION, 4)
- define(_CLIENT_VERSION_BUILD, 2)
+ define(_CLIENT_VERSION_BUILD, 1)
define(_CLIENT_VERSION_IS_RELEASE, true)
define(_COPYRIGHT_YEAR, 2019)
define(_COPYRIGHT_HOLDERS,[The %s developers])
@@ -14,6 +14,8 @@ AC_CONFIG_HEADERS([src/config/bitcoin-config.h])
AC_CONFIG_AUX_DIR([build-aux])
AC_CONFIG_MACRO_DIR([build-aux/m4])
+ AC_DISABLE_STATIC(yes)
BITCOIN_DAEMON_NAME=lbrycrdd
BITCOIN_GUI_NAME=lbrycrd-qt
BITCOIN_CLI_NAME=lbrycrd-cli
@@ -60,8 +62,8 @@ case $host in
lt_cv_deplibs_check_method="pass_all"
;;
esac
- dnl Require C++11 compiler (no GNU extensions)
+ dnl Require C++14 compiler (no GNU extensions)
- AX_CXX_COMPILE_STDCXX([11], [noext], [mandatory], [nodefault])
+ AX_CXX_COMPILE_STDCXX([14], [noext], [mandatory], [nodefault])
dnl Check if -latomic is required for <std::atomic>
CHECK_ATOMIC
@@ -554,6 +556,7 @@ case $host in
*linux*)
TARGET_OS=linux
LEVELDB_TARGET_FLAGS="-DOS_LINUX"
+ AS_IF([test "x${enable_static}" = "xyes"], [LIBTOOL_APP_LDFLAGS="$LIBTOOL_APP_LDFLAGS -all-static"])
;;
*kfreebsd*)
LEVELDB_TARGET_FLAGS="-DOS_KFREEBSD"
@@ -589,6 +592,7 @@ if test x$use_pkgconfig = xyes; then
AC_MSG_ERROR(pkg-config not found.)
fi
])
+ AS_IF([test "x${enable_static}" = "xyes"], [PKG_CONFIG="$PKG_CONFIG --static"])
fi
if test x$use_extended_functional_tests != xno; then
@@ -646,7 +650,7 @@ if test x$ac_cv_sys_large_files != x &&
CPPFLAGS="$CPPFLAGS -D_LARGE_FILES=$ac_cv_sys_large_files"
fi
- AS_IF([test x$enable_static != x && test x$LDFLAGS != xdarwin], [
+ AS_IF([test x$enable_static = xyes && test x$LDFLAGS != xdarwin], [
# darwin should be using -stdlib=libc++ (and may need a -static instead)
AX_CHECK_LINK_FLAG([[-static-libstdc++]], [LDFLAGS="$LDFLAGS -static-libstdc++"])
])
@@ -1084,8 +1088,8 @@ AS_IF([test "x${prefix}" != "xNONE" && test "x$ICU_PREFIX" == "xauto"], [
])
AS_IF([test "x$ICU_PREFIX" != "xauto"], [
- ICU_CPPFLAGS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig pkg-config icu-io icu-uc icu-i18n --cflags)"
+ ICU_CPPFLAGS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig ${PKG_CONFIG} icu-io icu-uc icu-i18n --cflags)"
- ICU_LIBS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig pkg-config icu-io icu-uc icu-i18n --libs)"
+ ICU_LIBS="$(PKG_CONFIG_SYSROOT_DIR=/ PKG_CONFIG_LIBDIR=$ICU_PREFIX/lib/pkgconfig PKG_CONFIG_PATH=$ICU_PREFIX/share/pkgconfig ${PKG_CONFIG} icu-io icu-uc icu-i18n --libs)"
])
if test x$use_pkgconfig = xyes; then
@@ -1505,6 +1509,7 @@ esac
echo
echo "Options used to compile and link:"
+ echo " static link = $enable_static"
echo " with wallet = $enable_wallet"
echo " with gui / qt = $bitcoin_enable_qt"
if test x$bitcoin_enable_qt != xno; then

View file

@@ -131,7 +131,7 @@ $(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_
-e 's|@build_os@|$(build_os)|' \
-e 's|@host_os@|$(host_os)|' \
-e 's|@CFLAGS@|$(strip $(host_CFLAGS) $(host_$(release_type)_CFLAGS))|' \
- -e 's|@CXXFLAGS@|$(strip $(host_CXXFLAGS) $(host_$(release_type)_CXXFLAGS))|' \
+ -e 's|@CXXFLAGS@|$(strip -pipe $(host_$(release_type)_CXXFLAGS))|' \
-e 's|@CPPFLAGS@|$(strip $(host_CPPFLAGS) $(host_$(release_type)_CPPFLAGS))|' \
-e 's|@LDFLAGS@|$(strip $(host_LDFLAGS) $(host_$(release_type)_LDFLAGS))|' \
-e 's|@allow_host_packages@|$(ALLOW_HOST_PACKAGES)|' \

View file

@@ -7,12 +7,12 @@ darwin_CC=clang -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroo
darwin_CXX=clang++ -target $(host) -mmacosx-version-min=$(OSX_MIN_VERSION) -isysroot $(OSX_SDK) -mlinker-version=$(LD64_VERSION) -stdlib=libc++ -B $(host_prefix)/native/bin
darwin_CFLAGS=-pipe
- darwin_CXXFLAGS=$(darwin_CFLAGS)
+ darwin_CXXFLAGS=$(darwin_CFLAGS) -std=c++11
- darwin_release_CFLAGS=-O2
+ darwin_release_CFLAGS=-O2 -g
darwin_release_CXXFLAGS=$(darwin_release_CFLAGS)
- darwin_debug_CFLAGS=-Og
+ darwin_debug_CFLAGS=-Og -g
- darwin_debug_CXXFLAGS=$(darwin_debug_CFLAGS)
+ darwin_debug_CXXFLAGS=-O0 -g
darwin_native_toolchain=native_cctools

View file

@@ -1,11 +1,11 @@
linux_CFLAGS=-pipe
- linux_CXXFLAGS=$(linux_CFLAGS)
+ linux_CXXFLAGS=$(linux_CFLAGS) -std=c++11
- linux_release_CFLAGS=-O2
+ linux_release_CFLAGS=-O2 -g
linux_release_CXXFLAGS=$(linux_release_CFLAGS)
- linux_debug_CFLAGS=-Og
+ linux_debug_CFLAGS=-O1 -g
- linux_debug_CXXFLAGS=$(linux_debug_CFLAGS)
+ linux_debug_CXXFLAGS=-O0 -g
linux_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC

View file

@@ -1,10 +1,10 @@
mingw32_CFLAGS=-pipe
- mingw32_CXXFLAGS=$(mingw32_CFLAGS)
+ mingw32_CXXFLAGS=$(mingw32_CFLAGS) -std=c++11
- mingw32_release_CFLAGS=-O2
+ mingw32_release_CFLAGS=-O2 -g
mingw32_release_CXXFLAGS=$(mingw32_release_CFLAGS)
- mingw32_debug_CFLAGS=-O1
+ mingw32_debug_CFLAGS=-O1 -g
- mingw32_debug_CXXFLAGS=$(mingw32_debug_CFLAGS)
+ mingw32_debug_CXXFLAGS=-O0 -g
mingw32_debug_CPPFLAGS=-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC

View file

@ -1,6 +1,6 @@
package=bdb package=bdb
$(package)_version=4.8.30 $(package)_version=4.8.30
$(package)_download_path=http://download.oracle.com/berkeley-db $(package)_download_path=https://download.oracle.com/berkeley-db
$(package)_file_name=db-$($(package)_version).NC.tar.gz $(package)_file_name=db-$($(package)_version).NC.tar.gz
$(package)_sha256_hash=12edc0df75bf9abd7f82f821795bcee50f42cb2e5f76a6a281b85732798364ef $(package)_sha256_hash=12edc0df75bf9abd7f82f821795bcee50f42cb2e5f76a6a281b85732798364ef
$(package)_build_subdir=build_unix $(package)_build_subdir=build_unix
@ -9,7 +9,7 @@ define $(package)_set_vars
$(package)_config_opts=--disable-shared --enable-cxx --disable-replication $(package)_config_opts=--disable-shared --enable-cxx --disable-replication
$(package)_config_opts_mingw32=--enable-mingw $(package)_config_opts_mingw32=--enable-mingw
$(package)_config_opts_linux=--with-pic $(package)_config_opts_linux=--with-pic
$(package)_cxxflags=-std=c++11 $(package)_cppflags_mingw32=-DUNICODE -D_UNICODE
endef endef
define $(package)_preprocess_cmds define $(package)_preprocess_cmds

View file

@@ -15,14 +15,12 @@ define $(package)_set_vars
$(package)_archiver_darwin=$($(package)_libtool)
$(package)_cflags_linux=-fPIC
$(package)_cppflags_linux=-fPIC
- $(package)_cxxflags=-std=c++11
endef
define $(package)_preprocess_cmds
PKG_CONFIG_SYSROOT_DIR=/ \
PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \
PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \
- sed -i.old 's/^GEN_DEPS.cc.*/& $(CXXFLAGS)/' source/config/mh-mingw* && \
mkdir -p build && cd build && \
../source/runConfigureICU Linux $($(package)_standard_opts) CXXFLAGS=-std=c++11 && \
$(MAKE) && cd ..
@@ -32,6 +30,8 @@ define $(package)_config_cmds
PKG_CONFIG_SYSROOT_DIR=/ \
PKG_CONFIG_LIBDIR=$(host_prefix)/lib/pkgconfig \
PKG_CONFIG_PATH=$(host_prefix)/share/pkgconfig \
+ sed -i.old 's/^GEN_DEPS.c=.*/& $($(package)_cflags)/' config/mh-mingw* && \
+ sed -i.old 's/^GEN_DEPS.cc=.*/& $($(package)_cxxflags)/' config/mh-mingw* && \
$($(package)_autoconf)
endef

View file

@@ -27,4 +27,5 @@ define $(package)_stage_cmds
endef
define $(package)_postprocess_cmds
+ rm lib/*.la
endef

View file

@@ -4,7 +4,6 @@ $(package)_download_path=$(native_$(package)_download_path)
$(package)_file_name=$(native_$(package)_file_name)
$(package)_sha256_hash=$(native_$(package)_sha256_hash)
$(package)_dependencies=native_$(package)
- $(package)_cxxflags=-std=c++11
define $(package)_set_vars
$(package)_config_opts=--disable-shared --with-protoc=$(build_prefix)/bin/protoc

View file

@@ -6,9 +6,10 @@ $(package)_sha256_hash=bcbabe1e2c7d0eec4ed612e10b94b112dd5f06fcefa994a0c79a45d83
$(package)_patches=0001-fix-build-with-older-mingw64.patch 0002-disable-pthread_set_name_np.patch
define $(package)_set_vars
- $(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror
+ $(package)_config_opts=--without-docs --disable-shared --without-libsodium --disable-curve --disable-curve-keygen --disable-perf --disable-Werror --disable-drafts
+ $(package)_config_opts += --without-libsodium --without-libgssapi_krb5 --without-pgm --without-norm --without-vmci
+ $(package)_config_opts += --disable-libunwind --disable-radix-tree --without-gcov
$(package)_config_opts_linux=--with-pic
- $(package)_cxxflags=-std=c++11
endef
define $(package)_preprocess_cmds
@@ -31,5 +32,5 @@ endef
define $(package)_postprocess_cmds
sed -i.old "s/ -lstdc++//" lib/pkgconfig/libzmq.pc && \
- rm -rf bin share
+ rm -rf bin share lib/*.la
endef

View file

@@ -0,0 +1,36 @@
FROM ubuntu:18.04 as prep
LABEL MAINTAINER="leopere [at] nixc [dot] us"
## TODO: Implement version pinning. `apt-get install curl=<version>`
RUN apt-get update && \
apt-get -y install unzip curl build-essential && \
apt-get autoclean -y && \
rm -rf /var/lib/apt/lists/*
WORKDIR /
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
COPY ./stuff/start.sh start
COPY ./stuff/healthcheck.sh healthcheck
COPY ./stuff/advance_blocks.sh advance
COPY ./stuff/fix-permissions.c fix-permissions.c
ARG release_url
# require that release_url is set
RUN test -n "$release_url"
RUN curl --progress-bar -L -o ./lbrycrd-linux.zip "$release_url" && \
unzip ./lbrycrd-linux.zip && \
gcc fix-permissions.c -o fix-permissions && \
chmod +x ./lbrycrdd ./lbrycrd-cli ./lbrycrd-tx ./start ./healthcheck ./fix-permissions ./advance
FROM ubuntu:18.04 as app
COPY --from=prep /lbrycrdd /lbrycrd-cli /lbrycrd-tx /start /healthcheck /fix-permissions /advance /usr/bin/
RUN addgroup --gid 1000 lbrycrd && \
adduser lbrycrd --uid 1000 --gid 1000 --gecos GECOS --shell /bin/bash --disabled-password --home /data && \
mkdir /etc/lbry && \
chown lbrycrd /etc/lbry && \
chmod a+s /usr/bin/fix-permissions
RUN apt-get update && apt-get -y install wget && apt-get autoclean -y && rm -rf /var/lib/apt/lists/*
VOLUME ["/data"]
WORKDIR /data
HEALTHCHECK CMD /usr/bin/healthcheck
EXPOSE 9246 9245 11337 29245
USER lbrycrd
CMD ["start"]

View file

@@ -0,0 +1,45 @@
# lbrycrd Docker image
## Configuration
The lbrycrd container comes with a default configuration you can use for
production. Extra configuration is optional.
The container includes a `start` script that offers a flexible configuration
style. It allows you to mount your own `lbrycrd.conf` file, or use environment
variables, or a mix of both.
### Environment variables
The environment variables override the values in the mounted config file. If no
mounted config file exists, these variables are used to create a fresh config
(see the example generated config after the list below).
* `PORT` - The main lbrycrd port
* `RPC_USER` - The RPC user
* `RPC_PASSWORD` - The RPC user's password
* `RPC_ALLOW_IP` - The subnet that is allowed RPC access
* `RPC_PORT` - The port to bind the RPC service to
* `RPC_BIND` - The IP address to bind the RPC service to
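As a rough sketch of how these variables are applied: if no config file is mounted
and you only set `RPC_USER=lbryrpc` and `RPC_PASSWORD=hunter2` (hypothetical values),
the `start` script generates an `/etc/lbry/lbrycrd.conf` along these lines, with the
remaining values taken from the script's built-in defaults:
```
server=1
printtoconsole=1
port=9246
rpcuser=lbryrpc
rpcpassword=hunter2
rpcallowip=127.0.0.1
rpcport=9245
rpcbind=0.0.0.0
txindex=1
maxtxfee=0.5
dustrelayfee=0.00000001
```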
### Example run commands
Running the default configuration:
```
docker run --rm -it -e RUN_MODE=default -e SNAPSHOT_URL="https://lbry.com/snapshot/blockchain" lbry/lbrycrd:latest-release
```
Running with RPC password changed:
```
docker run --rm -it -e RUN_MODE=default -e RPC_PASSWORD=hunter2 lbry/lbrycrd:latest-release
```
Running with a config file but with the RPC password still overridden:
```
docker run --rm -it -v /path/to/lbrycrd.conf:/etc/lbry/lbrycrd.conf -e RUN_MODE=default -e RPC_PASSWORD=hunter2 lbry/lbrycrd:latest-release
```

View file

@@ -0,0 +1,26 @@
version: '3.4'

networks:
  lbry-network:
    external: true

services:
  #############
  ## Lbrycrd ##
  #############
  lbrycrd:
    build: .
    restart: always
    networks:
      lbry-network:
        ipv4_address: 10.6.1.2
    environment:
      RUN_MODE: default
    env_file:
      - env
    expose:
      - 9245
      - 9246
    ## host volumes for persistent data such as wallet private keys.
    volumes:
      - "../persist/data:/data"

View file

@@ -0,0 +1,28 @@
version: '3.4'

networks:
  lbry-network:
    external: true

services:
  #############
  ## Lbrycrd ##
  #############
  lbrycrd:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "11336:29246"
      - "11337:29245"
    ## host volumes for persistent data such as wallet private keys.
    volumes:
      - "../persist/data:/data"
    networks:
      lbry-network:
        ipv4_address: 10.6.1.2
    environment:
      - RUN_MODE=regtest
      - PORT=29245
      - AUTO_ADVANCE=1

View file

@@ -0,0 +1,17 @@
version: '3.4'

services:
  #############
  ## Lbrycrd ##
  #############
  lbrycrd:
    image: lbry/lbrycrd:latest-release
    restart: always
    ports:
      - "11336:9246"
      - "11337:11337"
    ## host volumes for persistent data such as wallet private keys.
    volumes:
      - "../persist/data:/data"
    environment:
      - RUN_MODE=testnet

View file

@@ -0,0 +1,33 @@
#!/bin/bash
set -euo pipefail
hash docker 2>/dev/null || { echo >&2 'Make sure Docker is installed'; exit 1; }
set +eo pipefail
docker version | grep -q Server
ret=$?
set -eo pipefail
if [ $ret -ne 0 ]; then
echo "Cannot connect to Docker server. Is it running? Do you have the right user permissions?"
exit 1
fi
echo "This will build the docker image using the latest lbrycrd release and push that image to Docker Hub"
echo ""
echo "What Docker tag should I use? It's the part that goes here: lbry/lbrycrd:TAG"
read -p "Docker tag: " docker_tag
if [ -z "$docker_tag" ]; then
echo "Docker tag cannot be empty"
exit 1
fi
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
release_url=$(curl -s https://api.github.com/repos/lbryio/lbrycrd/releases | grep -F 'lbrycrd-linux' | grep download | head -n 1 | cut -d'"' -f4)
docker build --build-arg "release_url=$release_url" --tag "lbry/lbrycrd:${docker_tag}" -f Dockerfile "$DIR"
docker push "lbry/lbrycrd:${docker_tag}"

View file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
while sleep 2; do
lbrycrd-cli -conf=/etc/lbry/lbrycrd.conf generate 1
done

View file

@@ -0,0 +1,12 @@
COMPOSE_PROJECT_NAME=lbrycrd
#############
## Lbrycrd ##
#############
## TODO: The credentials are a formality but we should randomize these with magic.
RPC_USER=${RPC_USER=lbryrpc}
RPC_PASSWORD=${RPC_PASSWORD:-changeme}
## This should be the internal container IP from which you'll be calling the RPC for Lbrycrdd from.
## TODO: make this more dynamic before we move to scalability
RPC_ALLOW_IP=${RPC_ALLOW_IP:-10.6.1.3}

View file

@@ -0,0 +1,9 @@
#include <unistd.h>
int main() {
// This program needs to run with setuid == root
// This needs to be in a compiled language because you cannot setuid bash scripts
setuid(0);
execle("/bin/bash", "bash", "-c",
"/bin/chown -R lbrycrd:lbrycrd /data && /bin/chmod -R 755 /data/",
(char*) NULL, (char*) NULL);
}

View file

@@ -0,0 +1 @@
/usr/bin/lbrycrd-cli -conf=/etc/lbry/lbrycrd.conf getnetworkinfo | grep -q '"networkactive": true,'

View file

@@ -0,0 +1,139 @@
#!/usr/bin/env bash
CONFIG_PATH=/etc/lbry/lbrycrd.conf
function override_config_option() {
# Remove existing config line from a config file
# and replace with environment fed value.
# Does nothing if the variable does not exist.
# var Name of ENV variable
# option Name of config option
# config Path of config file
local var=$1 option=$2 config=$3
if [[ -v $var ]]; then
# Remove the existing config option:
sed -i "/^$option\W*=/d" $config
# Add the value from the environment:
echo "$option=${!var}" >> $config
fi
}
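# Usage sketch (hypothetical values, not part of the original script): with
# RPC_PORT=19245 exported in the environment,
#   override_config_option RPC_PORT rpcport /tmp/lbrycrd_merged.conf
# deletes any existing "rpcport=..." line from that file and appends
# "rpcport=19245"; if RPC_PORT is unset, the file is left untouched.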
function set_config() {
if [ -d "$CONFIG_PATH" ]; then
echo "$CONFIG_PATH is a directory when it should be a file."
exit 1
elif [ -f "$CONFIG_PATH" ]; then
echo "Merging the mounted config file with environment variables."
local MERGED_CONFIG=/tmp/lbrycrd_merged.conf
cp $CONFIG_PATH $MERGED_CONFIG
echo "" >> $MERGED_CONFIG
override_config_option PORT port $MERGED_CONFIG
override_config_option RPC_USER rpcuser $MERGED_CONFIG
override_config_option RPC_PASSWORD rpcpassword $MERGED_CONFIG
override_config_option RPC_ALLOW_IP rpcallowip $MERGED_CONFIG
override_config_option RPC_PORT rpcport $MERGED_CONFIG
override_config_option RPC_BIND rpcbind $MERGED_CONFIG
override_config_option TX_INDEX txindex $MERGED_CONFIG
override_config_option MAX_TX_FEE maxtxfee $MERGED_CONFIG
override_config_option DUST_RELAY_FEE dustrelayfee $MERGED_CONFIG
# Make the new merged config file the new CONFIG_PATH
# This ensures that the original file the user mounted remains unmodified
CONFIG_PATH=$MERGED_CONFIG
else
echo "Creating a fresh config file from environment variables."
cat << EOF > $CONFIG_PATH
server=1
printtoconsole=1
port=${PORT:-9246}
rpcuser=${RPC_USER:-lbry}
rpcpassword=${RPC_PASSWORD:-lbry}
rpcallowip=${RPC_ALLOW_IP:-127.0.0.1}
rpcport=${RPC_PORT:-9245}
rpcbind=${RPC_BIND:-"0.0.0.0"}
txindex=${TX_INDEX:-"1"}
maxtxfee=${MAX_TX_FEE:-"0.5"}
dustrelayfee=${DUST_RELAY_FEE:-"0.00000001"}
EOF
fi
echo "Config: "
cat $CONFIG_PATH
}
function download_snapshot() {
local url="${SNAPSHOT_URL:-}" #off by default. latest snapshot at https://lbry.com/snapshot/blockchain
if [[ -n "$url" ]] && [[ ! -d ./.lbrycrd/blocks ]]; then
echo "Downloading blockchain snapshot from $url"
wget --no-verbose -O snapshot.tar.bz2 "$url"
echo "Extracting snapshot..."
mkdir -p ./.lbrycrd
tar xvjf snapshot.tar.bz2 --directory ./.lbrycrd
rm snapshot.tar.bz2
fi
}
## Ensure perms are correct prior to running main binary
/usr/bin/fix-permissions
## You can optionally specify a run mode if you want to use lbry defined presets for compatibility.
case $RUN_MODE in
default )
set_config
download_snapshot
exec lbrycrdd -conf=$CONFIG_PATH
;;
## If it's a first run you need to do a full index including all transactions
## tx index creates an index of every single transaction in the block history if
## not specified it will only create an index for transactions that are related to the wallet or have unspent outputs.
## This is generally specific to chainquery.
reindex )
## Apply this RUN_MODE in the case you need to update a dataset. NOTE: you do not need to use `RUN_MODE reindex` for more than one complete run.
set_config
exec lbrycrdd -conf=$CONFIG_PATH -reindex
;;
regtest )
## Set config params
## TODO: Make this more automagic in the future.
mkdir -p `dirname $CONFIG_PATH`
echo "rpcuser=lbry" > $CONFIG_PATH
echo "rpcpassword=lbry" >> $CONFIG_PATH
echo "rpcport=29245" >> $CONFIG_PATH
echo "rpcbind=0.0.0.0" >> $CONFIG_PATH
echo "rpcallowip=0.0.0.0/0" >> $CONFIG_PATH
echo "regtest=1" >> $CONFIG_PATH
echo "txindex=1" >> $CONFIG_PATH
echo "server=1" >> $CONFIG_PATH
echo "printtoconsole=1" >> $CONFIG_PATH
[ "${AUTO_ADVANCE:-0}" == "1" ] && nohup advance &>/dev/null &
exec lbrycrdd -conf=$CONFIG_PATH $1
;;
testnet )
## Set config params
## TODO: Make this more automagic in the future.
mkdir -p `dirname $CONFIG_PATH`
echo "rpcuser=lbry" > $CONFIG_PATH
echo "rpcpassword=lbry" >> $CONFIG_PATH
echo "rpcport=29245" >> $CONFIG_PATH
echo "rpcbind=0.0.0.0" >> $CONFIG_PATH
echo "rpcallowip=0.0.0.0/0" >> $CONFIG_PATH
echo "testnet=1" >> $CONFIG_PATH
echo "txindex=1" >> $CONFIG_PATH
echo "server=1" >> $CONFIG_PATH
echo "printtoconsole=1" >> $CONFIG_PATH
#nohup advance &>/dev/null &
exec lbrycrdd -conf=$CONFIG_PATH $1
;;
* )
echo "Error, you must define a RUN_MODE environment variable."
echo "Available options are testnet, regtest, chainquery, default, and reindex"
;;
esac

View file

@@ -19,16 +19,18 @@ else
LIBUNIVALUE = $(UNIVALUE_LIBS)
endif
- BITCOIN_INCLUDES=-I$(builddir) $(BDB_CPPFLAGS) $(ICU_CPPFLAGS) $(BOOST_CPPFLAGS) $(LEVELDB_CPPFLAGS) $(CRYPTO_CFLAGS) $(ICU_CFLAGS)
+ BITCOIN_INCLUDES=-I$(builddir) $(BDB_CPPFLAGS) $(ICU_CPPFLAGS) $(BOOST_CPPFLAGS) $(CRYPTO_CFLAGS) $(ICU_CFLAGS)
BITCOIN_INCLUDES += -I$(srcdir)/secp256k1/include
BITCOIN_INCLUDES += $(UNIVALUE_CFLAGS)
+ BITCOIN_INCLUDES += -I$(srcdir)/claimtrie
LIBBITCOIN_SERVER=libbitcoin_server.a
LIBBITCOIN_COMMON=libbitcoin_common.a
LIBBITCOIN_CONSENSUS=libbitcoin_consensus.a
LIBBITCOIN_CLI=libbitcoin_cli.a
LIBBITCOIN_UTIL=libbitcoin_util.a
+ LIBCLAIMTRIE=claimtrie/libclaimtrie.a
LIBBITCOIN_CRYPTO_BASE=crypto/libbitcoin_crypto_base.a
LIBBITCOINQT=qt/libbitcoinqt.a
LIBSECP256K1=secp256k1/libsecp256k1.la
@@ -63,6 +65,7 @@ $(LIBSECP256K1): $(wildcard secp256k1/src/*.h) $(wildcard secp256k1/src/*.c) $(w
# Make is not made aware of per-object dependencies to avoid limiting building parallelization
# But to build the less dependent modules first, we manually select their order here:
EXTRA_LIBRARIES += \
+ $(LIBCLAIMTRIE) \
$(LIBBITCOIN_CRYPTO) \
$(LIBBITCOIN_UTIL) \
$(LIBBITCOIN_COMMON) \
@@ -103,7 +106,7 @@ BITCOIN_CORE_H = \
checkpoints.h \
checkqueue.h \
claimscriptop.h \
- claimtrie.h \
+ claimtrie_serial.h \
clientversion.h \
coins.h \
compat.h \
@@ -119,8 +122,6 @@ BITCOIN_CORE_H = \
fs.h \
httprpc.h \
httpserver.h \
- index/base.h \
- index/txindex.h \
indirectmap.h \
init.h \
interfaces/handler.h \
@@ -130,7 +131,6 @@ BITCOIN_CORE_H = \
key_io.h \
keystore.h \
lbry.h \
- dbwrapper.h \
limitedmap.h \
logging.h \
memusage.h \
@@ -149,7 +149,6 @@ BITCOIN_CORE_H = \
policy/policy.h \
policy/rbf.h \
pow.h \
- prefixtrie.h \
protocol.h \
random.h \
reverse_iterator.h \
@@ -230,15 +229,10 @@ libbitcoin_server_a_SOURCES = \
chain.cpp \
checkpoints.cpp \
claimscriptop.cpp \
- claimtrie.cpp \
- claimtrieforks.cpp \
consensus/tx_verify.cpp \
httprpc.cpp \
httpserver.cpp \
- index/base.cpp \
- index/txindex.cpp \
init.cpp \
- dbwrapper.cpp \
lbry.cpp \
merkleblock.cpp \
miner.cpp \
@@ -251,7 +245,6 @@ libbitcoin_server_a_SOURCES = \
policy/policy.cpp \
policy/rbf.cpp \
pow.cpp \
- prefixtrie.cpp \
rest.cpp \
rpc/blockchain.cpp \
rpc/claimtrie.cpp \
@@ -413,6 +406,21 @@ libbitcoin_common_a_SOURCES = \
warnings.cpp \
$(BITCOIN_CORE_H)
+ # claimtrie: shared between all executables.
+ claimtrie_libclaimtrie_a_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
+ claimtrie_libclaimtrie_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
+ claimtrie_libclaimtrie_a_CFLAGS = $(PIE_FLAGS)
+ claimtrie_libclaimtrie_a_SOURCES = \
+ claimtrie/blob.cpp \
+ claimtrie/data.cpp \
+ claimtrie/forks.cpp \
+ claimtrie/hashes.cpp \
+ claimtrie/log.cpp \
+ claimtrie/sqlite/sqlite3.c \
+ claimtrie/trie.cpp \
+ claimtrie/txoutpoint.cpp \
+ claimtrie/uints.cpp
# util: shared between all executables.
# This library *must* be included to make sure that the glibc
# backward-compatibility objects and their sanity checks are linked.
@@ -475,12 +483,10 @@ lbrycrdd_LDADD = \
$(LIBBITCOIN_ZMQ) \
$(LIBBITCOIN_CONSENSUS) \
$(LIBBITCOIN_CRYPTO) \
- $(LIBLEVELDB) \
- $(LIBLEVELDB_SSE42) \
$(LIBMEMENV) \
$(LIBSECP256K1)
- lbrycrdd_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(ZMQ_LIBS)
+ lbrycrdd_LDADD += $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(ZMQ_LIBS)
# lbrycrd-cli binary #
lbrycrd_cli_SOURCES = bitcoin-cli.cpp
@@ -498,7 +504,7 @@ lbrycrd_cli_LDADD = \
$(LIBBITCOIN_UTIL) \
$(LIBBITCOIN_CRYPTO)
- lbrycrd_cli_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(EVENT_LIBS)
+ lbrycrd_cli_LDADD += $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(CRYPTO_LIBS) $(ICU_LIBS) $(EVENT_LIBS)
#
# bitcoin-tx binary #
@@ -519,7 +525,7 @@ lbrycrd_tx_LDADD = \
$(LIBBITCOIN_CRYPTO) \
$(LIBSECP256K1)
- lbrycrd_tx_LDADD += $(BOOST_LIBS) $(ICU_LIBS) $(CRYPTO_LIBS)
+ lbrycrd_tx_LDADD += $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(ICU_LIBS) $(CRYPTO_LIBS)
#
# bitcoinconsensus library #
@@ -533,7 +539,7 @@ endif
libbitcoinconsensus_la_LDFLAGS = $(AM_LDFLAGS) -no-undefined $(RELDFLAGS)
libbitcoinconsensus_la_LIBADD = $(LIBSECP256K1)
- libbitcoinconsensus_la_CPPFLAGS = $(AM_CPPFLAGS) -I$(builddir)/obj -I$(srcdir)/secp256k1/include -DBUILD_BITCOIN_INTERNAL
+ libbitcoinconsensus_la_CPPFLAGS = $(AM_CPPFLAGS) -I$(srcdir)/claimtrie -I$(builddir)/obj -I$(srcdir)/secp256k1/include -DBUILD_BITCOIN_INTERNAL
libbitcoinconsensus_la_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
endif
@@ -574,7 +580,6 @@ $(top_srcdir)/$(subdir)/config/bitcoin-config.h.in: $(am__configure_deps)
clean-local:
-$(MAKE) -C secp256k1 clean
-$(MAKE) -C univalue clean
- -rm -f leveldb/*/*.gcda leveldb/*/*.gcno leveldb/helpers/memenv/*.gcda leveldb/helpers/memenv/*.gcno
-rm -f config.h
-rm -rf test/__pycache__
@@ -599,10 +604,6 @@ endif
@test -f $(PROTOC)
$(AM_V_GEN) $(PROTOC) --cpp_out=$(@D) --proto_path=$(<D) $<
- if EMBEDDED_LEVELDB
- include Makefile.leveldb.include
- endif
if ENABLE_TESTS
include Makefile.test.include
endif

View file

@@ -41,8 +41,6 @@ bench_bench_bitcoin_LDADD = \
$(LIBBITCOIN_UTIL) \
$(LIBBITCOIN_CONSENSUS) \
$(LIBBITCOIN_CRYPTO) \
- $(LIBLEVELDB) \
- $(LIBLEVELDB_SSE42) \
$(LIBMEMENV) \
$(LIBSECP256K1) \
$(LIBUNIVALUE)
@@ -55,7 +53,7 @@ if ENABLE_WALLET
bench_bench_bitcoin_SOURCES += bench/coin_selection.cpp
endif
- bench_bench_bitcoin_LDADD += $(BOOST_LIBS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
+ bench_bench_bitcoin_LDADD += $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
bench_bench_bitcoin_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
CLEAN_BITCOIN_BENCH = bench/*.gcda bench/*.gcno $(GENERATED_BENCH_FILES)

View file

@@ -1,149 +0,0 @@
# Copyright (c) 2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
LIBLEVELDB_INT = leveldb/libleveldb.a
LIBMEMENV_INT = leveldb/libmemenv.a
LIBLEVELDB_SSE42_INT = leveldb/libleveldb_sse42.a
EXTRA_LIBRARIES += $(LIBLEVELDB_INT)
EXTRA_LIBRARIES += $(LIBMEMENV_INT)
EXTRA_LIBRARIES += $(LIBLEVELDB_SSE42_INT)
LIBLEVELDB += $(LIBLEVELDB_INT)
LIBMEMENV += $(LIBMEMENV_INT)
LIBLEVELDB_SSE42 = $(LIBLEVELDB_SSE42_INT)
LEVELDB_CPPFLAGS += -I$(srcdir)/leveldb/include
LEVELDB_CPPFLAGS += -I$(srcdir)/leveldb/helpers/memenv
LEVELDB_CPPFLAGS_INT =
LEVELDB_CPPFLAGS_INT += -I$(srcdir)/leveldb
LEVELDB_CPPFLAGS_INT += $(LEVELDB_TARGET_FLAGS)
LEVELDB_CPPFLAGS_INT += -DLEVELDB_ATOMIC_PRESENT
LEVELDB_CPPFLAGS_INT += -D__STDC_LIMIT_MACROS
if TARGET_WINDOWS
LEVELDB_CPPFLAGS_INT += -DLEVELDB_PLATFORM_WINDOWS -DWINVER=0x0500 -D__USE_MINGW_ANSI_STDIO=1
else
LEVELDB_CPPFLAGS_INT += -DLEVELDB_PLATFORM_POSIX
endif
leveldb_libleveldb_a_CPPFLAGS = $(AM_CPPFLAGS) $(LEVELDB_CPPFLAGS_INT) $(LEVELDB_CPPFLAGS)
leveldb_libleveldb_a_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
leveldb_libleveldb_a_SOURCES=
leveldb_libleveldb_a_SOURCES += leveldb/port/atomic_pointer.h
leveldb_libleveldb_a_SOURCES += leveldb/port/port_example.h
leveldb_libleveldb_a_SOURCES += leveldb/port/port_posix.h
leveldb_libleveldb_a_SOURCES += leveldb/port/win/stdint.h
leveldb_libleveldb_a_SOURCES += leveldb/port/port.h
leveldb_libleveldb_a_SOURCES += leveldb/port/port_win.h
leveldb_libleveldb_a_SOURCES += leveldb/port/thread_annotations.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/db.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/options.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/comparator.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/filter_policy.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/slice.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/table_builder.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/env.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/c.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/iterator.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/cache.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/dumpfile.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/table.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/write_batch.h
leveldb_libleveldb_a_SOURCES += leveldb/include/leveldb/status.h
leveldb_libleveldb_a_SOURCES += leveldb/db/log_format.h
leveldb_libleveldb_a_SOURCES += leveldb/db/memtable.h
leveldb_libleveldb_a_SOURCES += leveldb/db/version_set.h
leveldb_libleveldb_a_SOURCES += leveldb/db/write_batch_internal.h
leveldb_libleveldb_a_SOURCES += leveldb/db/filename.h
leveldb_libleveldb_a_SOURCES += leveldb/db/version_edit.h
leveldb_libleveldb_a_SOURCES += leveldb/db/dbformat.h
leveldb_libleveldb_a_SOURCES += leveldb/db/builder.h
leveldb_libleveldb_a_SOURCES += leveldb/db/log_writer.h
leveldb_libleveldb_a_SOURCES += leveldb/db/db_iter.h
leveldb_libleveldb_a_SOURCES += leveldb/db/skiplist.h
leveldb_libleveldb_a_SOURCES += leveldb/db/db_impl.h
leveldb_libleveldb_a_SOURCES += leveldb/db/table_cache.h
leveldb_libleveldb_a_SOURCES += leveldb/db/snapshot.h
leveldb_libleveldb_a_SOURCES += leveldb/db/log_reader.h
leveldb_libleveldb_a_SOURCES += leveldb/table/filter_block.h
leveldb_libleveldb_a_SOURCES += leveldb/table/block_builder.h
leveldb_libleveldb_a_SOURCES += leveldb/table/block.h
leveldb_libleveldb_a_SOURCES += leveldb/table/two_level_iterator.h
leveldb_libleveldb_a_SOURCES += leveldb/table/merger.h
leveldb_libleveldb_a_SOURCES += leveldb/table/format.h
leveldb_libleveldb_a_SOURCES += leveldb/table/iterator_wrapper.h
leveldb_libleveldb_a_SOURCES += leveldb/util/crc32c.h
leveldb_libleveldb_a_SOURCES += leveldb/util/env_posix_test_helper.h
leveldb_libleveldb_a_SOURCES += leveldb/util/arena.h
leveldb_libleveldb_a_SOURCES += leveldb/util/random.h
leveldb_libleveldb_a_SOURCES += leveldb/util/posix_logger.h
leveldb_libleveldb_a_SOURCES += leveldb/util/hash.h
leveldb_libleveldb_a_SOURCES += leveldb/util/histogram.h
leveldb_libleveldb_a_SOURCES += leveldb/util/coding.h
leveldb_libleveldb_a_SOURCES += leveldb/util/testutil.h
leveldb_libleveldb_a_SOURCES += leveldb/util/mutexlock.h
leveldb_libleveldb_a_SOURCES += leveldb/util/logging.h
leveldb_libleveldb_a_SOURCES += leveldb/util/testharness.h
leveldb_libleveldb_a_SOURCES += leveldb/db/builder.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/c.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/dbformat.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/db_impl.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/db_iter.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/dumpfile.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/filename.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/log_reader.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/log_writer.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/memtable.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/repair.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/table_cache.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/version_edit.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/version_set.cc
leveldb_libleveldb_a_SOURCES += leveldb/db/write_batch.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/block_builder.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/block.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/filter_block.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/format.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/iterator.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/merger.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/table_builder.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/table.cc
leveldb_libleveldb_a_SOURCES += leveldb/table/two_level_iterator.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/arena.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/bloom.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/cache.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/coding.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/comparator.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/crc32c.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/env.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/env_posix.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/filter_policy.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/hash.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/histogram.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/logging.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/options.cc
leveldb_libleveldb_a_SOURCES += leveldb/util/status.cc
if TARGET_WINDOWS
leveldb_libleveldb_a_SOURCES += leveldb/util/env_win.cc
leveldb_libleveldb_a_SOURCES += leveldb/port/port_win.cc
else
leveldb_libleveldb_a_SOURCES += leveldb/port/port_posix.cc
endif
leveldb_libmemenv_a_CPPFLAGS = $(leveldb_libleveldb_a_CPPFLAGS)
leveldb_libmemenv_a_CXXFLAGS = $(leveldb_libleveldb_a_CXXFLAGS)
leveldb_libmemenv_a_SOURCES = leveldb/helpers/memenv/memenv.cc
leveldb_libmemenv_a_SOURCES += leveldb/helpers/memenv/memenv.h
leveldb_libleveldb_sse42_a_CPPFLAGS = $(leveldb_libleveldb_a_CPPFLAGS)
leveldb_libleveldb_sse42_a_CXXFLAGS = $(leveldb_libleveldb_a_CXXFLAGS)
if ENABLE_HWCRC32
leveldb_libleveldb_sse42_a_CPPFLAGS += -DLEVELDB_PLATFORM_POSIX_SSE
leveldb_libleveldb_sse42_a_CXXFLAGS += $(SSE42_CXXFLAGS)
endif
leveldb_libleveldb_sse42_a_SOURCES = leveldb/port/port_posix_sse.cc

View file

@@ -408,8 +408,8 @@ endif
if ENABLE_ZMQ
qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif
- qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) $(LIBLEVELDB_SSE42) $(LIBMEMENV) \
+ qt_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBMEMENV) \
- $(BOOST_LIBS) $(QT_LIBS) $(QT_DBUS_LIBS) $(QR_LIBS) $(PROTOBUF_LIBS) $(ICU_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
+ $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(QT_LIBS) $(QT_DBUS_LIBS) $(QR_LIBS) $(PROTOBUF_LIBS) $(ICU_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
$(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
qt_lbrycrd_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
qt_lbrycrd_qt_LIBTOOLFLAGS = $(AM_LIBTOOLFLAGS) --tag CXX
@@ -443,10 +443,10 @@ CLEAN_QT = $(nodist_qt_libbitcoinqt_a_SOURCES) $(QT_QM) $(QT_FORMS_H) qt/*.gcda
CLEANFILES += $(CLEAN_QT)
- bitcoin_qt_clean: FORCE
+ lbrycrd_qt_clean: FORCE
rm -f $(CLEAN_QT) $(qt_libbitcoinqt_a_OBJECTS) $(qt_lbrycrd_qt_OBJECTS) qt/lbrycrd-qt$(EXEEXT) $(LIBBITCOINQT)
- bitcoin_qt : qt/lbrycrd-qt$(EXEEXT)
+ lbrycrd_qt : qt/lbrycrd-qt$(EXEEXT)
ui_%.h: %.ui
@test -f $(UIC)

View file

@@ -4,6 +4,7 @@
bin_PROGRAMS += qt/test/test_lbrycrd-qt
TESTS += qt/test/test_lbrycrd-qt
+ TEST_QT_BINARY=qt/test/test_lbrycrd-qt$(EXEEXT)
TEST_QT_MOC_CPP = \
qt/test/moc_compattests.cpp \
@@ -62,8 +63,8 @@ endif
if ENABLE_ZMQ
qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif
- qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) $(LIBLEVELDB) \
+ qt_test_test_lbrycrd_qt_LDADD += $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) \
- $(LIBLEVELDB_SSE42) $(LIBMEMENV) $(BOOST_LIBS) $(QT_DBUS_LIBS) $(QT_TEST_LIBS) $(QT_LIBS) \
+ $(LIBMEMENV) $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(QT_DBUS_LIBS) $(QT_TEST_LIBS) $(QT_LIBS) \
$(QR_LIBS) $(PROTOBUF_LIBS) $(ICU_LIBS) $(BDB_LIBS) $(SSL_LIBS) $(CRYPTO_LIBS) $(MINIUPNPC_LIBS) $(LIBSECP256K1) \
$(EVENT_PTHREADS_LIBS) $(EVENT_LIBS)
qt_test_test_lbrycrd_qt_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(QT_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
@@ -73,10 +74,10 @@ CLEAN_BITCOIN_QT_TEST = $(TEST_QT_MOC_CPP) qt/test/*.gcda qt/test/*.gcno
CLEANFILES += $(CLEAN_BITCOIN_QT_TEST)
- test_lbrycrd_qt : qt/test/test_bitcoin-qt$(EXEEXT)
+ test_lbrycrd_qt : $(TEST_QT_BINARY)
- test_lbrycrd_qt_check : qt/test/test_bitcoin-qt$(EXEEXT) FORCE
+ test_lbrycrd_qt_check : $(TEST_QT_BINARY) FORCE
$(MAKE) check-TESTS TESTS=$^
test_lbrycrd_qt_clean: FORCE
- rm -f $(CLEAN_BITCOIN_QT_TEST) $(qt_test_test_lbrycrd_qt_OBJECTS)
+ rm -f $(CLEAN_BITCOIN_QT_TEST) $(qt_test_test_lbrycrd_qt_OBJECTS) $(TEST_QT_BINARY)

View file

@@ -56,7 +56,6 @@ BITCOIN_TESTS =\
test/key_io_tests.cpp \
test/key_tests.cpp \
test/limitedmap_tests.cpp \
- test/dbwrapper_tests.cpp \
test/main_tests.cpp \
test/mempool_tests.cpp \
test/merkle_tests.cpp \
@@ -76,7 +75,6 @@ BITCOIN_TESTS =\
test/pmt_tests.cpp \
test/policyestimator_tests.cpp \
test/pow_tests.cpp \
- test/prefixtrie_tests.cpp \
test/prevector_tests.cpp \
test/raii_event_tests.cpp \
test/random_tests.cpp \
@@ -96,7 +94,6 @@ BITCOIN_TESTS =\
test/timedata_tests.cpp \
test/torcontrol_tests.cpp \
test/transaction_tests.cpp \
- test/txindex_tests.cpp \
test/txvalidation_tests.cpp \
test/txvalidationcache_tests.cpp \
test/uint256_tests.cpp \
@@ -127,7 +124,7 @@ test_test_lbrycrd_LDADD += $(LIBBITCOIN_WALLET)
endif
test_test_lbrycrd_LDADD += $(LIBBITCOIN_SERVER) $(LIBBITCOIN_CLI) $(LIBBITCOIN_COMMON) $(LIBBITCOIN_UTIL) $(LIBBITCOIN_CONSENSUS) $(LIBBITCOIN_CRYPTO) $(LIBUNIVALUE) \
- $(LIBLEVELDB) $(LIBLEVELDB_SSE42) $(LIBMEMENV) $(BOOST_LIBS) $(BOOST_UNIT_TEST_FRAMEWORK_LIB) $(LIBSECP256K1) $(EVENT_LIBS) $(EVENT_PTHREADS_LIBS)
+ $(LIBMEMENV) $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB) $(LIBSECP256K1) $(EVENT_LIBS) $(EVENT_PTHREADS_LIBS)
test_test_lbrycrd_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
test_test_lbrycrd_LDADD += $(LIBBITCOIN_CONSENSUS) $(BDB_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS) $(MINIUPNPC_LIBS)
@@ -156,7 +153,7 @@ test_test_lbrycrd_fuzzy_LDADD = \
$(LIBBITCOIN_CRYPTO_SHANI) \
$(LIBSECP256K1)
- test_test_lbrycrd_fuzzy_LDADD += $(BOOST_LIBS) $(CRYPTO_LIBS) $(ICU_LIBS)
+ test_test_lbrycrd_fuzzy_LDADD += $(BOOST_LIBS) $(LIBCLAIMTRIE) $(BOOST_LOCALE_LIB) $(CRYPTO_LIBS) $(ICU_LIBS)
nodist_test_test_lbrycrd_SOURCES = $(GENERATED_TEST_FILES)

View file

@@ -11,23 +11,23 @@
int CAddrInfo::GetTriedBucket(const uint256& nKey) const
{
- uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetKey()).GetHash().GetCheapHash();
+ auto hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetKey()).GetHash();
- uint64_t hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << (hash1 % ADDRMAN_TRIED_BUCKETS_PER_GROUP)).GetHash().GetCheapHash();
+ auto hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << (GetCheapHash(hash1) % ADDRMAN_TRIED_BUCKETS_PER_GROUP)).GetHash();
- return hash2 % ADDRMAN_TRIED_BUCKET_COUNT;
+ return GetCheapHash(hash2) % ADDRMAN_TRIED_BUCKET_COUNT;
}
int CAddrInfo::GetNewBucket(const uint256& nKey, const CNetAddr& src) const
{
std::vector<unsigned char> vchSourceGroupKey = src.GetGroup();
- uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << vchSourceGroupKey).GetHash().GetCheapHash();
+ auto hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << GetGroup() << vchSourceGroupKey).GetHash();
- uint64_t hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << vchSourceGroupKey << (hash1 % ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP)).GetHash().GetCheapHash();
+ auto hash2 = (CHashWriter(SER_GETHASH, 0) << nKey << vchSourceGroupKey << (GetCheapHash(hash1) % ADDRMAN_NEW_BUCKETS_PER_SOURCE_GROUP)).GetHash();
- return hash2 % ADDRMAN_NEW_BUCKET_COUNT;
+ return GetCheapHash(hash2) % ADDRMAN_NEW_BUCKET_COUNT;
}
int CAddrInfo::GetBucketPosition(const uint256 &nKey, bool fNew, int nBucket) const
{
- uint64_t hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << (fNew ? 'N' : 'K') << nBucket << GetKey()).GetHash().GetCheapHash();
+ auto hash1 = (CHashWriter(SER_GETHASH, 0) << nKey << (fNew ? 'N' : 'K') << nBucket << GetKey()).GetHash();
- return hash1 % ADDRMAN_BUCKET_SIZE;
+ return GetCheapHash(hash1) % ADDRMAN_BUCKET_SIZE;
}
bool CAddrInfo::IsTerrible(int64_t nNow) const

View file

@@ -56,8 +56,8 @@ public:
{
static_assert(BITS/32 > 0 && BITS%32 == 0, "Template parameter BITS must be a positive multiple of 32.");
- pn[0] = (unsigned int)b;
+ pn[0] = uint32_t(b);
- pn[1] = (unsigned int)(b >> 32);
+ pn[1] = uint32_t(b >> 32U);
for (int i = 2; i < WIDTH; i++)
pn[i] = 0;
}

View file

@@ -13,6 +13,7 @@
#include <scheduler.h>
#include <txdb.h>
#include <txmempool.h>
+ #include <util.h>
#include <utiltime.h>
#include <validation.h>
#include <validationinterface.h>
@@ -40,7 +41,7 @@ static CTxIn MineBlock(const CScript& coinbase_scriptPubKey)
{
auto block = PrepareBlock(coinbase_scriptPubKey);
- while (!CheckProofOfWork(block->GetHash(), block->nBits, Params().GetConsensus())) {
+ while (!CheckProofOfWork(block->GetPoWHash(), block->nBits, Params().GetConsensus())) {
++block->nNonce;
assert(block->nNonce);
}
@@ -72,18 +73,29 @@ static void AssembleBlock(benchmark::State& state)
boost::thread_group thread_group;
CScheduler scheduler;
{
+ delete ::pclaimTrie;
+ const CChainParams& chainparams = Params();
+ auto &consensus = chainparams.GetConsensus();
::pblocktree.reset(new CBlockTreeDB(1 << 20, true));
::pcoinsdbview.reset(new CCoinsViewDB(1 << 23, true));
::pcoinsTip.reset(new CCoinsViewCache(pcoinsdbview.get()));
+ ::pclaimTrie = new CClaimTrie(1 << 25, true, 0, GetDataDir().string(),
+ consensus.nNormalizedNameForkHeight,
+ consensus.nMinRemovalWorkaroundHeight,
+ consensus.nMaxRemovalWorkaroundHeight,
+ consensus.nOriginalClaimExpirationTime,
+ consensus.nExtendedClaimExpirationTime,
+ consensus.nExtendedClaimExpirationForkHeight,
+ consensus.nAllClaimsInMerkleForkHeight, 1);
- const CChainParams& chainparams = Params();
thread_group.create_thread(boost::bind(&CScheduler::serviceQueue, &scheduler));
GetMainSignals().RegisterBackgroundSignalScheduler(scheduler);
LoadGenesisBlock(chainparams);
CValidationState state;
ActivateBestChain(state, chainparams);
assert(::chainActive.Tip() != nullptr);
- const bool witness_enabled{IsWitnessEnabled(::chainActive.Tip(), chainparams.GetConsensus())};
+ const_cast<int&>(consensus.nWitnessForkHeight) = ::chainActive.Height();
+ const bool witness_enabled{IsWitnessEnabled(::chainActive.Tip(), consensus)};
assert(witness_enabled);
}

View file

@@ -9,9 +9,50 @@
#include <streams.h>
#include <consensus/validation.h>
+ /* Don't use raw bitcoin blocks
namespace block_bench {
#include <bench/data/block413567.raw.h>
} // namespace block_bench
+ */
+ CDataStream getTestBlockStream()
+ {
+ static CBlock block;
+ static CDataStream stream(SER_NETWORK, PROTOCOL_VERSION);
+ if (block.IsNull()) {
+ block.nVersion = 5;
+ block.hashPrevBlock = uint256S("e8fc54d1e6581fceaaf64cc8afeedb9c4ba1eb2349eaf638f1fc41e331869dee");
+ block.hashMerkleRoot = uint256S("a926580973e1204aa179a4c536023c69212ce00b56dc85d3516488f3c51dd022");
+ block.hashClaimTrie = uint256S("7bae80d60b09031265f8a9f6282e4b9653764fadfac2c8b4c1f416c972b58814");
+ block.nTime = 1545050539;
+ block.nBits = 0x207fffff;
+ block.nNonce = 128913;
+ block.vtx.resize(60);
+ uint256 prevHash;
+ for (int i = 0; i < 60; ++i) {
+ CMutableTransaction tx;
+ tx.nVersion = 5;
+ tx.nLockTime = 0;
+ tx.vin.resize(1);
+ tx.vout.resize(1);
+ if (!prevHash.IsNull()) {
+ tx.vin[0].prevout.hash = prevHash;
+ tx.vin[0].prevout.n = 0;
+ }
+ tx.vin[0].scriptSig = CScript() << OP_0 << OP_0;
+ tx.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
+ tx.vout[0].scriptPubKey = CScript();
+ tx.vout[0].nValue = 0;
+ prevHash = tx.GetHash();
+ block.vtx[i] = MakeTransactionRef(std::move(tx));
+ }
+ stream << block;
+ char a = '\0';
+ stream.write(&a, 1); // Prevent compaction
+ }
+ return stream;
+ }
// These are the two major time-sinks which happen after we have fully received
// a block off the wire, but before we can relay the block on to peers using
@@ -19,33 +60,25 @@ namespace block_bench {
static void DeserializeBlockTest(benchmark::State& state)
{
- CDataStream stream((const char*)block_bench::block413567,
+ auto stream = getTestBlockStream();
- (const char*)&block_bench::block413567[sizeof(block_bench::block413567)],
+ const auto size = stream.size() - 1;
- SER_NETWORK, PROTOCOL_VERSION);
- char a = '\0';
- stream.write(&a, 1); // Prevent compaction
while (state.KeepRunning()) {
CBlock block;
stream >> block;
- assert(stream.Rewind(sizeof(block_bench::block413567)));
+ assert(stream.Rewind(size));
}
}
static void DeserializeAndCheckBlockTest(benchmark::State& state)
{
- CDataStream stream((const char*)block_bench::block413567,
+ auto stream = getTestBlockStream();
- (const char*)&block_bench::block413567[sizeof(block_bench::block413567)],
+ const auto size = stream.size() - 1;
- SER_NETWORK, PROTOCOL_VERSION);
+ const auto chainParams = CreateChainParams(CBaseChainParams::REGTEST);
- char a = '\0';
- stream.write(&a, 1); // Prevent compaction
- const auto chainParams = CreateChainParams(CBaseChainParams::MAIN);
while (state.KeepRunning()) {
CBlock block; // Note that CBlock caches its checked state, so we need to recreate it here
stream >> block;
- assert(stream.Rewind(sizeof(block_bench::block413567)));
+ assert(stream.Rewind(size));
CValidationState validationState;
assert(CheckBlock(block, validationState, chainParams->GetConsensus()));

View file

@@ -4,11 +4,13 @@
 #include <bench/bench.h>
-#include <uint256.h>
-#include <random.h>
+#include <claimtrie/hashes.h>
 #include <consensus/merkle.h>
+#include <crypto/sha256.h>
+#include <random.h>
+#include <uint256.h>

-static void MerkleRoot(benchmark::State& state)
+static void MerkleRootUni(benchmark::State& state)
 {
     FastRandomContext rng(true);
     std::vector<uint256> leaves;
@@ -18,9 +20,18 @@ static void MerkleRoot(benchmark::State& state)
     }
     while (state.KeepRunning()) {
         bool mutation = false;
-        uint256 hash = ComputeMerkleRoot(std::vector<uint256>(leaves), &mutation);
+        uint256 hash = ComputeMerkleRoot(leaves, &mutation);
         leaves[mutation] = hash;
     }
 }

+static void MerkleRoot(benchmark::State& state)
+{
+    sha256n_way = [](std::vector<uint256>& hashes) {
+        SHA256D64(hashes[0].begin(), hashes[0].begin(), hashes.size() / 2);
+    };
+    MerkleRootUni(state);
+}

 BENCHMARK(MerkleRoot, 800);
+BENCHMARK(MerkleRootUni, 800);
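The new MerkleRoot variant installs a SHA256D64-based lambda into the sha256n_way hook before reusing the generic loop. As a self-contained illustration of the pairwise folding being benchmarked, here is a hedged sketch with a placeholder combiner (not the real double-SHA256, and not code from this PR):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

using Digest = std::array<uint8_t, 32>;

// Placeholder combiner -- NOT double-SHA256; it only stands in for Hash256(a || b).
static Digest Hash256(const Digest& a, const Digest& b)
{
    Digest d{};
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] = static_cast<uint8_t>(a[i] ^ b[i] ^ i);
    return d;
}

// Pairwise fold: each pass hashes adjacent pairs and halves the working set,
// which is the n/2 step that SHA256D64(..., hashes.size() / 2) accelerates above.
static Digest merkleRoot(std::vector<Digest> level)
{
    if (level.empty())
        return Digest{};
    while (level.size() > 1) {
        if (level.size() & 1)                 // odd count: duplicate the last entry (Bitcoin convention)
            level.push_back(level.back());
        for (std::size_t i = 0; i < level.size() / 2; ++i)
            level[i] = Hash256(level[2 * i], level[2 * i + 1]);
        level.resize(level.size() / 2);
    }
    return level[0];
}
```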


@@ -376,73 +376,6 @@ int64_t GetBlockProofEquivalentTime(const CBlockIndex& to, const CBlockIndex& fr
 const CBlockIndex* LastCommonAncestor(const CBlockIndex* pa, const CBlockIndex* pb);
/** Used to marshal pointers into hashes for db storage. */
class CDiskBlockIndex : public CBlockIndex
{
public:
uint256 hashPrev;
CDiskBlockIndex() {
hashPrev = uint256();
}
explicit CDiskBlockIndex(const CBlockIndex* pindex) : CBlockIndex(*pindex) {
hashPrev = (pprev ? pprev->GetBlockHash() : uint256());
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
int _nVersion = s.GetVersion();
if (!(s.GetType() & SER_GETHASH))
READWRITE(VARINT(_nVersion, VarIntMode::NONNEGATIVE_SIGNED));
READWRITE(VARINT(nHeight, VarIntMode::NONNEGATIVE_SIGNED));
READWRITE(VARINT(nStatus));
READWRITE(VARINT(nTx));
if (nStatus & (BLOCK_HAVE_DATA | BLOCK_HAVE_UNDO))
READWRITE(VARINT(nFile, VarIntMode::NONNEGATIVE_SIGNED));
if (nStatus & BLOCK_HAVE_DATA)
READWRITE(VARINT(nDataPos));
if (nStatus & BLOCK_HAVE_UNDO)
READWRITE(VARINT(nUndoPos));
// block header
READWRITE(this->nVersion);
READWRITE(hashPrev);
READWRITE(hashMerkleRoot);
READWRITE(hashClaimTrie);
READWRITE(nTime);
READWRITE(nBits);
READWRITE(nNonce);
}
uint256 GetBlockHash() const
{
CBlockHeader block;
block.nVersion = nVersion;
block.hashPrevBlock = hashPrev;
block.hashMerkleRoot = hashMerkleRoot;
block.hashClaimTrie = hashClaimTrie;
block.nTime = nTime;
block.nBits = nBits;
block.nNonce = nNonce;
return block.GetHash();
}
std::string ToString() const
{
std::string str = "CDiskBlockIndex(";
str += CBlockIndex::ToString();
str += strprintf("\n hashBlock=%s, hashClaimTrie=%s, hashPrev=%s)",
GetBlockHash().ToString(),
hashClaimTrie.ToString(),
hashPrev.ToString());
return str;
}
};
/** An in-memory indexed chain of blocks. */
class CChain {
private:


@@ -143,8 +143,8 @@ public:
         consensus.nAllowMinDiffMinHeight = -1;
         consensus.nAllowMinDiffMaxHeight = -1;
         consensus.nNormalizedNameForkHeight = 539940; // targeting 21 March 2019
-        consensus.nMinTakeoverWorkaroundHeight = 496850;
-        consensus.nMaxTakeoverWorkaroundHeight = 658300; // targeting 30 Oct 2019
+        consensus.nMinRemovalWorkaroundHeight = 297706;
+        consensus.nMaxRemovalWorkaroundHeight = 100000000;
         consensus.nWitnessForkHeight = 680770; // targeting 11 Dec 2019
         consensus.nAllClaimsInMerkleForkHeight = 658310; // targeting 30 Oct 2019
         consensus.fPowAllowMinDifficultyBlocks = false;
@@ -261,8 +261,8 @@ public:
         consensus.nAllowMinDiffMinHeight = 277299;
         consensus.nAllowMinDiffMaxHeight = 1100000;
         consensus.nNormalizedNameForkHeight = 993380; // targeting, 21 Feb 2019
-        consensus.nMinTakeoverWorkaroundHeight = 99;
-        consensus.nMaxTakeoverWorkaroundHeight = 1198550; // targeting 30 Sep 2019
+        consensus.nMinRemovalWorkaroundHeight = 99;
+        consensus.nMaxRemovalWorkaroundHeight = 100000000;
         consensus.nWitnessForkHeight = 1198600;
         consensus.nAllClaimsInMerkleForkHeight = 1198560; // targeting 30 Sep 2019
         consensus.fPowAllowMinDifficultyBlocks = true;
@@ -368,8 +368,8 @@ public:
         consensus.nAllowMinDiffMinHeight = -1;
         consensus.nAllowMinDiffMaxHeight = -1;
         consensus.nNormalizedNameForkHeight = 250; // SDK depends upon this number
-        consensus.nMinTakeoverWorkaroundHeight = -1;
-        consensus.nMaxTakeoverWorkaroundHeight = -1;
+        consensus.nMinRemovalWorkaroundHeight = -1;
+        consensus.nMaxRemovalWorkaroundHeight = -1;
         consensus.nWitnessForkHeight = 150;
         consensus.nAllClaimsInMerkleForkHeight = 350;
         consensus.fPowAllowMinDifficultyBlocks = false;


@@ -13,22 +13,30 @@ CClaimScriptAddOp::CClaimScriptAddOp(const COutPoint& point, CAmount nValue, int
 bool CClaimScriptAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
 {
-    return addClaim(trieCache, name, ClaimIdHash(point.hash, point.n));
+    auto claimId = ClaimIdHash(point.hash, point.n);
+    LogPrint(BCLog::CLAIMS, "+++ Claim added: %s, c: %.6s, t: %.6s:%d, h: %.6d, a: %d\n",
+             name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValue);
+    return addClaim(trieCache, name, claimId, -1, -1);
 }

 bool CClaimScriptAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
 {
-    return addClaim(trieCache, name, claimId);
+    LogPrint(BCLog::CLAIMS, "+++ Claim updated: %s, c: %.6s, t: %.6s:%d, h: %.6d, a: %d\n",
+             name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValue);
+    return addClaim(trieCache, name, claimId, -1, -1);
 }

-bool CClaimScriptAddOp::addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
+bool CClaimScriptAddOp::addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId,
+                                 int takeoverHeight, int originalHeight)
 {
-    return trieCache.addClaim(name, point, claimId, nValue, nHeight);
+    return trieCache.addClaim(name, point, claimId, nValue, nHeight, takeoverHeight, originalHeight);
 }

 bool CClaimScriptAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
 {
-    return trieCache.addSupport(name, point, nValue, claimId, nHeight);
+    LogPrint(BCLog::CLAIMS, "+++ Support added: %s, c: %.6s, t: %.6s:%d, h: %.6d, a: %d\n",
+             name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValue);
+    return trieCache.addSupport(name, point, claimId, nValue, nHeight, -1);
 }
CClaimScriptUndoAddOp::CClaimScriptUndoAddOp(const COutPoint& point, int nHeight) : point(point), nHeight(nHeight) CClaimScriptUndoAddOp::CClaimScriptUndoAddOp(const COutPoint& point, int nHeight) : point(point), nHeight(nHeight)
@ -38,37 +46,39 @@ CClaimScriptUndoAddOp::CClaimScriptUndoAddOp(const COutPoint& point, int nHeight
bool CClaimScriptUndoAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name) bool CClaimScriptUndoAddOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{ {
auto claimId = ClaimIdHash(point.hash, point.n); auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrint(BCLog::CLAIMS, "--- Undoing claim add: %s, c: %.6s, t: %.6s:%d, h: %.6d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight);
return undoAddClaim(trieCache, name, claimId); return undoAddClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "--- [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); LogPrint(BCLog::CLAIMS, "--- Undoing claim update: %s, c: %.6s, t: %.6s:%d, h: %.6d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight);
return undoAddClaim(trieCache, name, claimId); return undoAddClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoAddOp::undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::undoAddClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); std::string nodeName;
bool res = trieCache.undoAddClaim(name, point, nHeight); int validHeight, originalHeight;
bool res = trieCache.removeClaim(claimId, point, nodeName, validHeight, originalHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing claim fails\n", __func__); LogPrint(BCLog::CLAIMS, "Remove claim failed for %s (%s) with claimID %.6s, t:%.6s:%d\n", name, nodeName,
claimId.GetHex(), point.hash.GetHex(), point.n);
return res; return res;
} }
bool CClaimScriptUndoAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoAddOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
if (LogAcceptCategory(BCLog::CLAIMS)) { LogPrint(BCLog::CLAIMS, "--- Undoing support add: %s, c: %.6s, t: %.6s:%d, h: %.6d\n",
LogPrintf("--- [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight);
claimId.GetHex(), point.hash.ToString(), point.n); std::string nodeName;
LogPrintf( int validHeight;
"%s: (txid: %s, nOut: %d) Removing support for %s, claimId: %s, from the claim trie due to block disconnect\n", bool res = trieCache.removeSupport(point, nodeName, validHeight);
__func__, point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.undoAddSupport(name, point, nHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__); LogPrint(BCLog::CLAIMS, "Remove support failed for %s with claimID %.6s, t:%.6s:%d\n", name,
claimId.GetHex(), point.hash.GetHex(), point.n);
return res; return res;
} }
@ -80,64 +90,71 @@ CClaimScriptSpendOp::CClaimScriptSpendOp(const COutPoint& point, int nHeight, in
bool CClaimScriptSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name) bool CClaimScriptSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{ {
auto claimId = ClaimIdHash(point.hash, point.n); auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_CLAIM_NAME \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); auto ret = spendClaim(trieCache, name, claimId);
return spendClaim(trieCache, name, claimId); LogPrint(BCLog::CLAIMS, "--- Spent original claim: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
return ret;
} }
bool CClaimScriptSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "+++ [%lu]: OP_UPDATE_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, claimId.GetHex(), point.hash.ToString(), point.n); auto ret = spendClaim(trieCache, name, claimId);
return spendClaim(trieCache, name, claimId); LogPrint(BCLog::CLAIMS, "--- Spent updated claim: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
return ret;
} }
bool CClaimScriptSpendOp::spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Removing %s, claimId: %s, from the claim trie\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); std::string nodeName;
bool res = trieCache.spendClaim(name, point, nHeight, nValidHeight); bool res = trieCache.removeClaim(claimId, point, nodeName, nValidHeight, nOriginalHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing fails\n", __func__); LogPrint(BCLog::CLAIMS, "Remove claim failed for %s with claimid %s\n", name, claimId.GetHex().substr(0, 6));
return res; return res;
} }
bool CClaimScriptSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
if (LogAcceptCategory(BCLog::CLAIMS)) { std::string nodeName;
LogPrintf("+++ [%lu]: OP_SUPPORT_CLAIM \"%s\" with claimId %s and tx prevout %s at index %d\n", nHeight, name, bool res = trieCache.removeSupport(point, nodeName, nValidHeight);
claimId.GetHex(), point.hash.ToString(), point.n); LogPrint(BCLog::CLAIMS, "--- Spent support: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
LogPrintf("%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie\n", __func__, name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
point.hash.ToString(), point.n, name, claimId.ToString());
}
bool res = trieCache.spendSupport(name, point, nHeight, nValidHeight);
if (!res) if (!res)
LogPrint(BCLog::CLAIMS, "%s: Removing support fails\n", __func__); LogPrint(BCLog::CLAIMS, "Remove support failed for %s with claimid %s\n", name, claimId.GetHex().substr(0, 6));
return res; return res;
} }
CClaimScriptUndoSpendOp::CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight) CClaimScriptUndoSpendOp::CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight, int nOriginalHeight)
: point(point), nValue(nValue), nHeight(nHeight), nValidHeight(nValidHeight) : point(point), nValue(nValue), nHeight(nHeight), nValidHeight(nValidHeight), nOriginalHeight(nOriginalHeight)
{ {
} }
bool CClaimScriptUndoSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name) bool CClaimScriptUndoSpendOp::claimName(CClaimTrieCache& trieCache, const std::string& name)
{ {
return undoSpendClaim(trieCache, name, ClaimIdHash(point.hash, point.n)); auto claimId = ClaimIdHash(point.hash, point.n);
LogPrint(BCLog::CLAIMS, "+++ Undoing original claim spend: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
return undoSpendClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoSpendOp::updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "+++ Undoing updated claim spend: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
return undoSpendClaim(trieCache, name, claimId); return undoSpendClaim(trieCache, name, claimId);
} }
bool CClaimScriptUndoSpendOp::undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoSpendOp::undoSpendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); return trieCache.addClaim(name, point, claimId, nValue, nHeight,
return trieCache.undoSpendClaim(name, point, claimId, nValue, nHeight, nValidHeight); nValidHeight, nOriginalHeight);
} }
bool CClaimScriptUndoSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) bool CClaimScriptUndoSpendOp::supportClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId)
{ {
LogPrint(BCLog::CLAIMS, "%s: (txid: %s, nOut: %d) Restoring support for %s, claimId: %s, to the claim trie due to block disconnect\n", __func__, point.hash.ToString(), point.n, name, claimId.ToString()); LogPrint(BCLog::CLAIMS, "+++ Undoing support spend: %s, c: %.6s, t: %.6s:%d, h: %.6d, vh: %d\n",
return trieCache.undoSpendSupport(name, point, claimId, nValue, nHeight, nValidHeight); name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValidHeight);
return trieCache.addSupport(name, point, claimId, nValue, nHeight, nValidHeight);
} }
static std::string vchToString(const std::vector<unsigned char>& name) static std::string vchToString(const std::vector<unsigned char>& name)
@@ -159,8 +176,9 @@ bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CSc
         return claimOp.supportClaim(trieCache, vchToString(vvchParams[0]), uint160(vvchParams[1]));
     case OP_UPDATE_CLAIM:
         return claimOp.updateClaim(trieCache, vchToString(vvchParams[0]), uint160(vvchParams[1]));
+    default:
+        throw std::runtime_error("Unimplemented OP handler.");
     }
-    throw std::runtime_error("Unimplemented OP handler.");
 }
void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks) void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoinsViewCache& view, int nHeight, const CUpdateCacheCallbacks& callbacks)
@@ -173,17 +191,18 @@ void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoin
         bool spendClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
         {
             if (CClaimScriptSpendOp::spendClaim(trieCache, name, claimId)) {
-                callback(name, claimId);
+                assert(nOriginalHeight >= 0);
+                callback(name, claimId, nOriginalHeight);
                 return true;
             }
             return false;
         }
-        std::function<void(const std::string& name, const uint160& claimId)> callback;
+        std::function<void(const std::string& name, const uint160& claimId, int originalHeight)> callback;
     };

     spentClaimsType spentClaims;
-    for (std::size_t j = 0; j < tx.vin.size(); j++) {
+    for (uint32_t j = 0; j < tx.vin.size(); j++) {
         const CTxIn& txin = tx.vin[j];
         const Coin& coin = view.AccessCoin(txin.prevout);
@@ -199,13 +218,16 @@ void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoin
         if (scriptPubKey.empty())
             continue;

-        int nValidAtHeight;
+        int nValidAtHeight, nOriginalHeight = 0;
         CSpendClaimHistory spendClaim(COutPoint(txin.prevout.hash, txin.prevout.n), scriptHeight, nValidAtHeight);
-        spendClaim.callback = [&spentClaims](const std::string& name, const uint160& claimId) {
-            spentClaims.emplace_back(name, claimId);
+        spendClaim.callback = [&spentClaims, &nOriginalHeight](const std::string& name, const uint160& claimId, int originalHeight) {
+            spentClaims.push_back({name, claimId, originalHeight});
+            nOriginalHeight = originalHeight;
         };
-        if (ProcessClaim(spendClaim, trieCache, scriptPubKey) && callbacks.claimUndoHeights)
-            callbacks.claimUndoHeights(j, nValidAtHeight);
+        if (ProcessClaim(spendClaim, trieCache, scriptPubKey) && callbacks.claimUndoHeights) {
+            // assert(nValidAtHeight > 0 && nOriginalHeight > 0); // fails on tests
+            callbacks.claimUndoHeights(j, uint32_t(nValidAtHeight), uint32_t(nOriginalHeight));
+        }
     }
class CAddSpendClaim : public CClaimScriptAddOp class CAddSpendClaim : public CClaimScriptAddOp
@@ -215,11 +237,15 @@ void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoin
         bool updateClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId) override
         {
-            if (callback(name, claimId))
-                return CClaimScriptAddOp::updateClaim(trieCache, name, claimId);
+            auto originalHeight = callback(name, claimId);
+            if (originalHeight >= 0) {
+                LogPrint(BCLog::CLAIMS, "+++ Claim updated: %s, c: %.6s, t: %.6s:%d, h: %.6d, a: %d\n",
+                         name, claimId.GetHex(), point.hash.GetHex(), point.n, nHeight, nValue);
+                return CClaimScriptAddOp::addClaim(trieCache, name, claimId, -1, originalHeight);
+            }
             return false;
         }
-        std::function<bool(const std::string& name, const uint160& claimId)> callback;
+        std::function<int(const std::string& name, const uint160& claimId)> callback;
     };

     for (std::size_t j = 0; j < tx.vout.size(); j++) {
@@ -229,14 +255,15 @@ void UpdateCache(const CTransaction& tx, CClaimTrieCache& trieCache, const CCoin
             continue;

         CAddSpendClaim addClaim(COutPoint(tx.GetHash(), j), txout.nValue, nHeight);
-        addClaim.callback = [&trieCache, &spentClaims](const std::string& name, const uint160& claimId) -> bool {
+        addClaim.callback = [&trieCache, &spentClaims](const std::string& name, const uint160& claimId) -> int {
             for (auto itSpent = spentClaims.begin(); itSpent != spentClaims.end(); ++itSpent) {
-                if (itSpent->second == claimId && trieCache.normalizeClaimName(name) == trieCache.normalizeClaimName(itSpent->first)) {
+                if (itSpent->id == claimId && trieCache.normalizeClaimName(name) == trieCache.normalizeClaimName(itSpent->name)) {
                     spentClaims.erase(itSpent);
-                    return true;
+                    assert(itSpent->originalHeight >= 0);
+                    return itSpent->originalHeight;
                 }
             }
-            return false;
+            return -1;
         };
         ProcessClaim(addClaim, trieCache, txout.scriptPubKey);
     }
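To make the callback wiring above easier to follow: the spend-side callback records each removed claim together with its original height, and the add-side callback hands that height back when a matching output turns the spend into an update. Below is a hedged, self-contained sketch of that hand-off using hypothetical minimal types (plain strings instead of uint160, no trie), not the real lbrycrd classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for the spent-claim record carried between the two callbacks.
struct SpentClaim { std::string name; std::string id; int originalHeight; };

int main()
{
    std::vector<SpentClaim> spentClaims;

    // Spend side: remember what was removed, including its original height.
    auto onSpend = [&spentClaims](const std::string& name, const std::string& id, int originalHeight) {
        spentClaims.push_back({name, id, originalHeight});
    };

    // Add side: a matching output turns the spend into an update and inherits that height.
    auto onAdd = [&spentClaims](const std::string& name, const std::string& id) -> int {
        for (auto it = spentClaims.begin(); it != spentClaims.end(); ++it) {
            if (it->id == id && it->name == name) {
                int height = it->originalHeight;  // copy before erase(); the iterator dies with it
                spentClaims.erase(it);
                return height;
            }
        }
        return -1;                                // no match: a brand new claim, not an update
    };

    onSpend("@channel", "abc123", 120000);
    assert(onAdd("@channel", "abc123") == 120000); // the update inherits the original height
    assert(onAdd("@channel", "abc123") == -1);     // already consumed
    return 0;
}
```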


@@ -6,7 +6,7 @@
 #define CLAIMSCRIPTOP_H

 #include "amount.h"
-#include "claimtrie.h"
+#include "claimtrie/forks.h"
 #include "hash.h"
 #include "primitives/transaction.h"
 #include "script/script.h"
@@ -22,7 +22,7 @@
 class CClaimScriptOp
 {
 public:
-    virtual ~CClaimScriptOp() {}
+    virtual ~CClaimScriptOp() = default;
     /**
      * Pure virtual, OP_CLAIM_NAME handler
      * @param[in] trieCache trie to operate on
@ -81,7 +81,8 @@ protected:
* @param[in] name name of the claim * @param[in] name name of the claim
* @param[in] claimId id of the claim * @param[in] claimId id of the claim
*/ */
virtual bool addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId); virtual bool addClaim(CClaimTrieCache& trieCache, const std::string& name, const uint160& claimId,
int takeoverHeight, int originalHeight);
const COutPoint point; const COutPoint point;
const CAmount nValue; const CAmount nValue;
const int nHeight; const int nHeight;
@ -167,6 +168,7 @@ protected:
const COutPoint point; const COutPoint point;
const int nHeight; const int nHeight;
int& nValidHeight; int& nValidHeight;
int nOriginalHeight;
}; };
/** /**
@ -182,7 +184,7 @@ public:
* @param[in] nHeight entry height of the claim * @param[in] nHeight entry height of the claim
* @param[in] nValidHeight valid height of the claim * @param[in] nValidHeight valid height of the claim
*/ */
CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight); CClaimScriptUndoSpendOp(const COutPoint& point, CAmount nValue, int nHeight, int nValidHeight, int nOriginalHeight);
/** /**
* Implementation of OP_CLAIM_NAME handler * Implementation of OP_CLAIM_NAME handler
* @see CClaimScriptOp::claimName * @see CClaimScriptOp::claimName
@ -211,6 +213,7 @@ protected:
const CAmount nValue; const CAmount nValue;
const int nHeight; const int nHeight;
const int nValidHeight; const int nValidHeight;
const int nOriginalHeight;
}; };
/** /**
@ -221,14 +224,13 @@ protected:
*/ */
 bool ProcessClaim(CClaimScriptOp& claimOp, CClaimTrieCache& trieCache, const CScript& scriptPubKey);

-typedef std::pair<std::string, uint160> spentClaimType;
+struct spentClaimType { std::string name; uint160 id; int originalHeight; };
 typedef std::vector<spentClaimType> spentClaimsType;

 struct CUpdateCacheCallbacks
 {
     std::function<CScript(const COutPoint& point)> findScriptKey;
-    std::function<void(int, int)> claimUndoHeights;
+    std::function<void(uint32_t, uint32_t, uint32_t)> claimUndoHeights;
 };
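For context on this header as a whole: CClaimScriptOp is a small visitor interface, and ProcessClaim dispatches to one virtual handler per claim opcode, which is why the add/spend/undo variants only differ in the handlers they override. Here is a hedged sketch of that dispatch shape with simplified stand-in types (strings instead of uint160, no trie; not the real lbrycrd signatures):

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

enum class ClaimOp { ClaimName, SupportClaim, UpdateClaim };

// Simplified mirror of the pure-virtual interface, with claim ids shown as strings.
struct ClaimScriptOp {
    virtual ~ClaimScriptOp() = default;
    virtual bool claimName(const std::string& name) = 0;
    virtual bool updateClaim(const std::string& name, const std::string& claimId) = 0;
    virtual bool supportClaim(const std::string& name, const std::string& claimId) = 0;
};

// One handler per opcode, with the same "unimplemented" guard the PR moves into a default case.
bool processClaim(ClaimScriptOp& op, ClaimOp code, const std::string& name, const std::string& claimId)
{
    switch (code) {
    case ClaimOp::ClaimName:    return op.claimName(name);
    case ClaimOp::UpdateClaim:  return op.updateClaim(name, claimId);
    case ClaimOp::SupportClaim: return op.supportClaim(name, claimId);
    default:                    throw std::runtime_error("Unimplemented OP handler.");
    }
}

// A concrete visitor, analogous to CClaimScriptAddOp; spend/undo variants would only differ here.
struct AddOp : ClaimScriptOp {
    bool claimName(const std::string& n) override { std::cout << "add claim " << n << '\n'; return true; }
    bool updateClaim(const std::string& n, const std::string&) override { std::cout << "update " << n << '\n'; return true; }
    bool supportClaim(const std::string& n, const std::string&) override { std::cout << "support " << n << '\n'; return true; }
};

int main()
{
    AddOp add;
    return processClaim(add, ClaimOp::ClaimName, "@channel", "") ? 0 : 1;
}
```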
/** /**

File diff suppressed because it is too large.


@ -1,756 +0,0 @@
#ifndef BITCOIN_CLAIMTRIE_H
#define BITCOIN_CLAIMTRIE_H
#include <amount.h>
#include <chain.h>
#include <chainparams.h>
#include <dbwrapper.h>
#include <prefixtrie.h>
#include <primitives/transaction.h>
#include <serialize.h>
#include <uint256.h>
#include <util.h>
#include <map>
#include <string>
#include <vector>
#include <unordered_map>
#include <unordered_set>
// leveldb keys
#define TRIE_NODE 'n'
#define TRIE_NODE_CHILDREN 'b'
#define CLAIM_BY_ID 'i'
#define CLAIM_QUEUE_ROW 'r'
#define CLAIM_QUEUE_NAME_ROW 'm'
#define CLAIM_EXP_QUEUE_ROW 'e'
#define SUPPORT 's'
#define SUPPORT_QUEUE_ROW 'u'
#define SUPPORT_QUEUE_NAME_ROW 'p'
#define SUPPORT_EXP_QUEUE_ROW 'x'
uint256 getValueHash(const COutPoint& outPoint, int nHeightOfLastTakeover);
struct CClaimValue
{
COutPoint outPoint;
uint160 claimId;
CAmount nAmount = 0;
CAmount nEffectiveAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CClaimValue() = default;
CClaimValue(const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight, int nValidAtHeight)
: outPoint(outPoint), claimId(claimId), nAmount(nAmount), nEffectiveAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
CClaimValue(CClaimValue&&) = default;
CClaimValue(const CClaimValue&) = default;
CClaimValue& operator=(CClaimValue&&) = default;
CClaimValue& operator=(const CClaimValue&) = default;
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(claimId);
READWRITE(nAmount);
READWRITE(nHeight);
READWRITE(nValidAtHeight);
}
bool operator<(const CClaimValue& other) const
{
if (nEffectiveAmount < other.nEffectiveAmount)
return true;
if (nEffectiveAmount != other.nEffectiveAmount)
return false;
if (nHeight > other.nHeight)
return true;
if (nHeight != other.nHeight)
return false;
return outPoint != other.outPoint && !(outPoint < other.outPoint);
}
bool operator==(const CClaimValue& other) const
{
return outPoint == other.outPoint && claimId == other.claimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool operator!=(const CClaimValue& other) const
{
return !(*this == other);
}
};
struct CSupportValue
{
COutPoint outPoint;
uint160 supportedClaimId;
CAmount nAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CSupportValue() = default;
CSupportValue(const COutPoint& outPoint, const uint160& supportedClaimId, CAmount nAmount, int nHeight, int nValidAtHeight)
: outPoint(outPoint), supportedClaimId(supportedClaimId), nAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
CSupportValue(CSupportValue&&) = default;
CSupportValue(const CSupportValue&) = default;
CSupportValue& operator=(CSupportValue&&) = default;
CSupportValue& operator=(const CSupportValue&) = default;
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(supportedClaimId);
READWRITE(nAmount);
READWRITE(nHeight);
READWRITE(nValidAtHeight);
}
bool operator==(const CSupportValue& other) const
{
return outPoint == other.outPoint && supportedClaimId == other.supportedClaimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool operator!=(const CSupportValue& other) const
{
return !(*this == other);
}
};
typedef std::vector<CClaimValue> claimEntryType;
typedef std::vector<CSupportValue> supportEntryType;
struct CClaimTrieData
{
uint256 hash;
claimEntryType claims;
int nHeightOfLastTakeover = 0;
CClaimTrieData() = default;
CClaimTrieData(CClaimTrieData&&) = default;
CClaimTrieData(const CClaimTrieData&) = default;
CClaimTrieData& operator=(CClaimTrieData&&) = default;
CClaimTrieData& operator=(const CClaimTrieData& d) = default;
bool insertClaim(const CClaimValue& claim);
bool removeClaim(const COutPoint& outPoint, CClaimValue& claim);
bool getBestClaim(CClaimValue& claim) const;
bool haveClaim(const COutPoint& outPoint) const;
void reorderClaims(const supportEntryType& support);
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(hash);
if (ser_action.ForRead()) {
if (s.eof()) {
claims.clear();
nHeightOfLastTakeover = 0;
return;
}
}
else if (claims.empty())
return;
READWRITE(claims);
READWRITE(nHeightOfLastTakeover);
}
bool operator==(const CClaimTrieData& other) const
{
return hash == other.hash && nHeightOfLastTakeover == other.nHeightOfLastTakeover && claims == other.claims;
}
bool operator!=(const CClaimTrieData& other) const
{
return !(*this == other);
}
bool empty() const
{
return claims.empty();
}
};
struct COutPointHeightType
{
COutPoint outPoint;
int nHeight = 0;
COutPointHeightType() = default;
COutPointHeightType(const COutPoint& outPoint, int nHeight)
: outPoint(outPoint), nHeight(nHeight)
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(outPoint);
READWRITE(nHeight);
}
};
struct CNameOutPointHeightType
{
std::string name;
COutPoint outPoint;
int nHeight = 0;
CNameOutPointHeightType() = default;
CNameOutPointHeightType(std::string name, const COutPoint& outPoint, int nHeight)
: name(std::move(name)), outPoint(outPoint), nHeight(nHeight)
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(outPoint);
READWRITE(nHeight);
}
};
struct CNameOutPointType
{
std::string name;
COutPoint outPoint;
CNameOutPointType() = default;
CNameOutPointType(std::string name, const COutPoint& outPoint)
: name(std::move(name)), outPoint(outPoint)
{
}
bool operator==(const CNameOutPointType& other) const
{
return name == other.name && outPoint == other.outPoint;
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(outPoint);
}
};
struct CClaimIndexElement
{
CClaimIndexElement() = default;
CClaimIndexElement(std::string name, CClaimValue claim)
: name(std::move(name)), claim(std::move(claim))
{
}
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action)
{
READWRITE(name);
READWRITE(claim);
}
std::string name;
CClaimValue claim;
};
struct CClaimNsupports
{
CClaimNsupports() = default;
CClaimNsupports(CClaimNsupports&&) = default;
CClaimNsupports(const CClaimNsupports&) = default;
CClaimNsupports& operator=(CClaimNsupports&&) = default;
CClaimNsupports& operator=(const CClaimNsupports&) = default;
CClaimNsupports(const CClaimValue& claim, CAmount effectiveAmount, const std::vector<CSupportValue>& supports = {})
: claim(claim), effectiveAmount(effectiveAmount), supports(supports)
{
}
bool IsNull() const
{
return claim.claimId.IsNull();
}
CClaimValue claim;
CAmount effectiveAmount = 0;
std::vector<CSupportValue> supports;
};
static const CClaimNsupports invalid;
struct CClaimSupportToName
{
CClaimSupportToName(const std::string& name, int nLastTakeoverHeight, std::vector<CClaimNsupports> claimsNsupports, std::vector<CSupportValue> unmatchedSupports)
: name(name), nLastTakeoverHeight(nLastTakeoverHeight), claimsNsupports(std::move(claimsNsupports)), unmatchedSupports(std::move(unmatchedSupports))
{
}
const CClaimNsupports& find(const uint160& claimId) const
{
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&claimId](const CClaimNsupports& value) {
return claimId == value.claim.claimId;
});
return it != claimsNsupports.end() ? *it : invalid;
}
const CClaimNsupports& find(const std::string& partialId) const
{
std::string lowered(partialId);
for (auto& c: lowered)
c = std::tolower(c);
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&lowered](const CClaimNsupports& value) {
return value.claim.claimId.GetHex().find(lowered) == 0;
});
return it != claimsNsupports.end() ? *it : invalid;
}
const std::string name;
const int nLastTakeoverHeight;
const std::vector<CClaimNsupports> claimsNsupports;
const std::vector<CSupportValue> unmatchedSupports;
};
class CClaimTrie : public CPrefixTrie<std::string, CClaimTrieData>
{
public:
CClaimTrie() = default;
virtual ~CClaimTrie() = default;
CClaimTrie(CClaimTrie&&) = delete;
CClaimTrie(const CClaimTrie&) = delete;
CClaimTrie(bool fMemory, bool fWipe, int proportionalDelayFactor = 32, std::size_t cacheMB=200);
CClaimTrie& operator=(CClaimTrie&&) = delete;
CClaimTrie& operator=(const CClaimTrie&) = delete;
bool SyncToDisk();
friend class CClaimTrieCacheBase;
friend struct ClaimTrieChainFixture;
friend class CClaimTrieCacheExpirationFork;
friend class CClaimTrieCacheNormalizationFork;
friend bool getClaimById(const uint160&, std::string&, CClaimValue*);
friend bool getClaimById(const std::string&, std::string&, CClaimValue*);
std::size_t getTotalNamesInTrie() const;
std::size_t getTotalClaimsInTrie() const;
CAmount getTotalValueOfClaimsInTrie(bool fControllingOnly) const;
protected:
int nNextHeight = 0;
int nProportionalDelayFactor = 0;
std::unique_ptr<CDBWrapper> db;
};
struct CClaimTrieProofNode
{
CClaimTrieProofNode(std::vector<std::pair<unsigned char, uint256>> children, bool hasValue, const uint256& valHash)
: children(std::move(children)), hasValue(hasValue), valHash(valHash)
{
}
CClaimTrieProofNode(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode(const CClaimTrieProofNode&) = default;
CClaimTrieProofNode& operator=(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode& operator=(const CClaimTrieProofNode&) = default;
std::vector<std::pair<unsigned char, uint256>> children;
bool hasValue;
uint256 valHash;
};
struct CClaimTrieProof
{
CClaimTrieProof() = default;
CClaimTrieProof(CClaimTrieProof&&) = default;
CClaimTrieProof(const CClaimTrieProof&) = default;
CClaimTrieProof& operator=(CClaimTrieProof&&) = default;
CClaimTrieProof& operator=(const CClaimTrieProof&) = default;
std::vector<std::pair<bool, uint256>> pairs;
std::vector<CClaimTrieProofNode> nodes;
int nHeightOfLastTakeover = 0;
bool hasValue = false;
COutPoint outPoint;
};
template <typename T>
class COptional
{
bool own;
T* value;
public:
COptional(T* value = nullptr) : own(false), value(value) {}
COptional(COptional&& o)
{
own = o.own;
value = o.value;
o.own = false;
o.value = nullptr;
}
COptional(T&& o) : own(true)
{
value = new T(std::move(o));
}
~COptional()
{
if (own)
delete value;
}
COptional& operator=(COptional&&) = delete;
bool unique() const
{
return own;
}
operator bool() const
{
return value;
}
operator T*() const
{
return value;
}
T* operator->() const
{
return value;
}
operator T&() const
{
return *value;
}
T& operator*() const
{
return *value;
}
};
template <typename T>
using queueEntryType = std::pair<std::string, T>;
typedef std::vector<queueEntryType<CClaimValue>> claimQueueRowType;
typedef std::map<int, claimQueueRowType> claimQueueType;
typedef std::vector<queueEntryType<CSupportValue>> supportQueueRowType;
typedef std::map<int, supportQueueRowType> supportQueueType;
typedef std::vector<COutPointHeightType> queueNameRowType;
typedef std::map<std::string, queueNameRowType> queueNameType;
typedef std::vector<CNameOutPointHeightType> insertUndoType;
typedef std::vector<CNameOutPointType> expirationQueueRowType;
typedef std::map<int, expirationQueueRowType> expirationQueueType;
typedef std::set<CClaimValue> claimIndexClaimListType;
typedef std::vector<CClaimIndexElement> claimIndexElementListType;
class CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheBase(CClaimTrie* base);
virtual ~CClaimTrieCacheBase() = default;
uint256 getMerkleHash();
bool flush();
bool empty() const;
bool checkConsistency() const;
bool ReadFromDisk(const CBlockIndex* tip);
bool haveClaim(const std::string& name, const COutPoint& outPoint) const;
bool haveClaimInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool haveSupport(const std::string& name, const COutPoint& outPoint) const;
bool haveSupportInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool addClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight);
bool undoAddClaim(const std::string& name, const COutPoint& outPoint, int nHeight);
bool spendClaim(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight);
bool undoSpendClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight, int nValidAtHeight);
bool addSupport(const std::string& name, const COutPoint& outPoint, CAmount nAmount, const uint160& supportedClaimId, int nHeight);
bool undoAddSupport(const std::string& name, const COutPoint& outPoint, int nHeight);
bool spendSupport(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight);
bool undoSpendSupport(const std::string& name, const COutPoint& outPoint, const uint160& supportedClaimId, CAmount nAmount, int nHeight, int nValidAtHeight);
virtual bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
virtual bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo);
virtual bool getProofForName(const std::string& name, CClaimTrieProof& proof);
virtual bool getInfoForName(const std::string& name, CClaimValue& claim) const;
virtual int expirationTime() const;
virtual bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
virtual CClaimSupportToName getClaimsForName(const std::string& name) const;
CClaimTrie::const_iterator find(const std::string& name) const;
void iterate(std::function<void(const std::string&, const CClaimTrieData&)> callback) const;
void dumpToLog(CClaimTrie::const_iterator it, bool diffFromBase = true) const;
virtual std::string adjustNameForValidHeight(const std::string& name, int validHeight) const;
protected:
CClaimTrie* base;
CClaimTrie nodesToAddOrUpdate; // nodes pulled in from base (and possibly modified thereafter), written to base on flush
std::unordered_set<std::string> nodesAlreadyCached; // set of nodes already pulled into cache from base
std::unordered_set<std::string> namesToCheckForTakeover; // takeover numbers are updated on increment
virtual uint256 recursiveComputeMerkleHash(CClaimTrie::iterator& it);
virtual bool recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const;
virtual bool insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover);
virtual bool removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover);
virtual bool insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover);
virtual bool removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover);
supportEntryType getSupportsForName(const std::string& name) const;
int getDelayForName(const std::string& name) const;
virtual int getDelayForName(const std::string& name, const uint160& claimId) const;
CClaimTrie::iterator cacheData(const std::string& name, bool create = true);
bool getLastTakeoverForName(const std::string& name, uint160& claimId, int& takeoverHeight) const;
int getNumBlocksOfContinuousOwnership(const std::string& name) const;
void reactivateClaim(const expirationQueueRowType& row, int height, bool increment);
void reactivateSupport(const expirationQueueRowType& row, int height, bool increment);
expirationQueueType expirationQueueCache;
expirationQueueType supportExpirationQueueCache;
int nNextHeight; // Height of the block that is being worked on, which is
// one greater than the height of the chain's tip
private:
uint256 hashBlock;
std::unordered_map<std::string, std::pair<uint160, int>> takeoverCache;
claimQueueType claimQueueCache; // claims not active yet: to be written to disk on flush
queueNameType claimQueueNameCache;
supportQueueType supportQueueCache; // supports not active yet: to be written to disk on flush
queueNameType supportQueueNameCache;
claimIndexElementListType claimsToAddToByIdIndex; // written to index on flush
claimIndexClaimListType claimsToDeleteFromByIdIndex;
std::unordered_map<std::string, supportEntryType> supportCache; // to be added/updated to base (and disk) on flush
std::unordered_set<std::string> nodesToDelete; // to be removed from base (and disk) on flush
std::unordered_map<std::string, bool> takeoverWorkaround;
std::unordered_set<std::string> removalWorkaround;
bool shouldUseTakeoverWorkaround(const std::string& key) const;
void addTakeoverWorkaroundPotential(const std::string& key);
void confirmTakeoverWorkaroundNeeded(const std::string& key);
bool clear();
void markAsDirty(const std::string& name, bool fCheckTakeover);
bool removeSupport(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover);
bool removeClaim(const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover);
template <typename T>
void insertRowsFromQueue(std::vector<T>& result, const std::string& name) const;
template <typename T>
std::vector<queueEntryType<T>>* getQueueCacheRow(int nHeight, bool createIfNotExists);
template <typename T>
COptional<const std::vector<queueEntryType<T>>> getQueueCacheRow(int nHeight) const;
template <typename T>
queueNameRowType* getQueueCacheNameRow(const std::string& name, bool createIfNotExists);
template <typename T>
COptional<const queueNameRowType> getQueueCacheNameRow(const std::string& name) const;
template <typename T>
expirationQueueRowType* getExpirationQueueCacheRow(int nHeight, bool createIfNotExists);
template <typename T>
bool haveInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
template <typename T>
T add(const std::string& name, const COutPoint& outPoint, const uint160& claimId, CAmount nAmount, int nHeight);
template <typename T>
bool remove(T& value, const std::string& name, const COutPoint& outPoint, int nHeight, int& nValidAtHeight, bool fCheckTakeover = false);
template <typename T>
bool addToQueue(const std::string& name, const T& value);
template <typename T>
bool removeFromQueue(const std::string& name, const COutPoint& outPoint, T& value);
template <typename T>
bool addToCache(const std::string& name, const T& value, bool fCheckTakeover = false);
template <typename T>
bool removeFromCache(const std::string& name, const COutPoint& outPoint, T& value, bool fCheckTakeover = false);
template <typename T>
bool undoSpend(const std::string& name, const T& value, int nValidAtHeight);
template <typename T>
void undoIncrement(insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo, std::set<T>* deleted = nullptr);
template <typename T>
void undoDecrement(insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo, std::vector<CClaimIndexElement>* added = nullptr, std::set<T>* deleted = nullptr);
template <typename T>
void undoIncrement(const std::string& name, insertUndoType& insertUndo, std::vector<queueEntryType<T>>& expireUndo);
template <typename T>
void reactivate(const expirationQueueRowType& row, int height, bool increment);
// for unit test
friend struct ClaimTrieChainFixture;
friend class CClaimTrieCacheTest;
};
class CClaimTrieCacheExpirationFork : public CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheExpirationFork(CClaimTrie* base);
void setExpirationTime(int time);
int expirationTime() const override;
virtual void initializeIncrement();
bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo) override;
private:
int nExpirationTime;
bool forkForExpirationChange(bool increment);
};
class CClaimTrieCacheNormalizationFork : public CClaimTrieCacheExpirationFork
{
public:
explicit CClaimTrieCacheNormalizationFork(CClaimTrie* base)
: CClaimTrieCacheExpirationFork(base), overrideInsertNormalization(false), overrideRemoveNormalization(false)
{
}
bool shouldNormalize() const;
// lower-case and normalize any input string name
// see: https://unicode.org/reports/tr15/#Norm_Forms
std::string normalizeClaimName(const std::string& name, bool force = false) const; // public only for validating name field on update op
bool incrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool decrementBlock(insertUndoType& insertUndo,
claimQueueRowType& expireUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo) override;
bool getProofForName(const std::string& name, CClaimTrieProof& proof) override;
bool getInfoForName(const std::string& name, CClaimValue& claim) const override;
CClaimSupportToName getClaimsForName(const std::string& name) const override;
std::string adjustNameForValidHeight(const std::string& name, int validHeight) const override;
protected:
bool insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover) override;
bool removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover) override;
bool insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover) override;
bool removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover) override;
int getDelayForName(const std::string& name, const uint160& claimId) const override;
private:
bool overrideInsertNormalization;
bool overrideRemoveNormalization;
bool normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo,
claimQueueRowType& removeUndo,
insertUndoType& insertSupportUndo,
supportQueueRowType& expireSupportUndo,
std::vector<std::pair<std::string, int>>& takeoverHeightUndo);
};
class CClaimTrieCacheHashFork : public CClaimTrieCacheNormalizationFork
{
public:
explicit CClaimTrieCacheHashFork(CClaimTrie* base);
bool getProofForName(const std::string& name, CClaimTrieProof& proof) override;
bool getProofForName(const std::string& name, CClaimTrieProof& proof, const std::function<bool(const CClaimValue&)>& comp);
void initializeIncrement() override;
bool finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo) override;
bool allowSupportMetadata() const;
protected:
uint256 recursiveComputeMerkleHash(CClaimTrie::iterator& it) override;
bool recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const override;
private:
void copyAllBaseToCache();
};
typedef CClaimTrieCacheHashFork CClaimTrieCache;
#endif // BITCOIN_CLAIMTRIE_H


@ -0,0 +1,158 @@
cmake_minimum_required(VERSION 3.10)
project(libclaimtrie)
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
include(../../contrib/cmake/cmake/CPM.cmake)
include(ExternalProject)
set(CLAIMTRIE_SRC
blob.cpp
data.cpp
forks.cpp
hashes.cpp
log.cpp
trie.cpp
txoutpoint.cpp
uints.cpp
./sqlite/sqlite3.c
)
if(BIND)
find_program(SWIG NAMES swig)
string(TOLOWER ${BIND} BIND)
if(${BIND} STREQUAL "python")
find_package(PythonInterp 3.6 REQUIRED)
find_package(PythonLibs 3.6 REQUIRED)
set(BIND_INCLUDE_DIRS ${PYTHON_INCLUDE_DIRS})
set(SWIG_OPTIONS -python;-py3)
set(CMAKE_SHARED_LIBRARY_PREFIX _lib)
elseif(${BIND} STREQUAL "go")
find_program(GOLANG NAMES go)
if(NOT GOLANG)
message(FATAL_ERROR "Golang was not found in system path")
endif()
execute_process(COMMAND ${GOLANG} "version" OUTPUT_VARIABLE GOLANG_VERSION)
STRING(REGEX MATCH "[0-9]+.[0-9]+.[0-9]" GOLANG_VERSION "${GOLANG_VERSION}")
if(GOLANG_VERSION VERSION_LESS 1.5)
message(FATAL_ERROR "Update Golang to at least 1.5; found " ${GOLANG_VERSION})
endif()
set(SWIG_OPTIONS -go;-cgo;-intgosize;32)
set(POST_BUILD_COMMAND go)
set(POST_BUILD_ARGS install;-x;${CMAKE_PROJECT_NAME}.go)
# CGO_LDFLAGS is buggy, so we inject the linker flags into the generated Go file instead
set(POST_BUILD_PATCH -i "\"s/import \\\"C\\\"/\\/\\/ #cgo LDFLAGS: -L\\\$$\\{SRCDIR\\} -lclaimtrie\\nimport \\\"C\\\"/\"" ${CMAKE_PROJECT_NAME}.go)
else()
message(FATAL_ERROR "Implement a handler for ${BIND}")
endif()
add_custom_command(OUTPUT ${CMAKE_PROJECT_NAME}_wrap.cxx
COMMAND ${SWIG}
ARGS -c++ ${SWIG_OPTIONS} -outcurrentdir ${CMAKE_CURRENT_SOURCE_DIR}/${CMAKE_PROJECT_NAME}.i
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
set(CLAIMTRIE_SRC ${CLAIMTRIE_SRC}
${CMAKE_PROJECT_NAME}_wrap.cxx
)
endif()
add_library(claimtrie SHARED ${CLAIMTRIE_SRC})
if (POST_BUILD_PATCH)
add_custom_command(TARGET claimtrie POST_BUILD
COMMAND sed
ARGS ${POST_BUILD_PATCH}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
endif()
if(POST_BUILD_COMMAND)
add_custom_command(TARGET claimtrie POST_BUILD
COMMAND ${POST_BUILD_COMMAND}
ARGS ${POST_BUILD_ARGS}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
endif()
target_include_directories(claimtrie PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})
if(BIND_INCLUDE_DIRS)
target_include_directories(claimtrie PRIVATE ${BIND_INCLUDE_DIRS})
endif()
CPMAddPackage(
NAME OpenSSL
GITHUB_REPOSITORY openssl/openssl
VERSION 1.0.2
GIT_TAG OpenSSL_1_0_2r
DOWNLOAD_ONLY TRUE
)
if(OpenSSL_ADDED)
string(TOLOWER ${CMAKE_SYSTEM_NAME}-${CMAKE_SYSTEM_PROCESSOR} ARCH)
ExternalProject_Add(OpenSSL
PREFIX openssl
SOURCE_DIR ${OpenSSL_SOURCE_DIR}
CONFIGURE_COMMAND ${OpenSSL_SOURCE_DIR}/Configure ${ARCH} no-shared no-dso no-engines -fPIC --prefix=<INSTALL_DIR>
BUILD_IN_SOURCE 1
)
add_dependencies(claimtrie OpenSSL)
ExternalProject_Get_Property(OpenSSL INSTALL_DIR)
target_link_directories(claimtrie PRIVATE ${INSTALL_DIR}/lib)
target_include_directories(claimtrie PRIVATE ${INSTALL_DIR}/include)
endif(OpenSSL_ADDED)
target_link_libraries(claimtrie PRIVATE ssl)
set(BOOST_LIBS filesystem,locale,system,chrono,thread,test)
set(BOOST_COMPONENTS filesystem;locale;system;chrono;thread;unit_test_framework)
CPMAddPackage(
NAME Boost
GITHUB_REPOSITORY boostorg/boost
VERSION 1.64.0
COMPONENTS ${BOOST_COMPONENTS}
GIT_TAG boost-1.69.0
GIT_SUBMODULES libs/* tools/*
DOWNLOAD_ONLY TRUE
)
# if Boost is found system-wide we expect it to be compiled against ICU, so we can skip building ICU ourselves
if(Boost_ADDED)
CPMAddPackage(
NAME ICU
GITHUB_REPOSITORY unicode-org/icu
VERSION 63.2
GIT_TAG release-63-2
DOWNLOAD_ONLY TRUE
)
if(ICU_ADDED)
ExternalProject_Add(ICU
PREFIX icu
SOURCE_DIR ${ICU_SOURCE_DIR}
CONFIGURE_COMMAND ${ICU_SOURCE_DIR}/icu4c/source/configure --disable-extras --disable-strict --enable-static
--disable-shared --disable-tests --disable-samples --disable-dyload --disable-layoutex CFLAGS=-fPIC CPPFLAGS=-fPIC --prefix=<INSTALL_DIR>
)
ExternalProject_Get_Property(ICU INSTALL_DIR)
set(ICU_PATH ${INSTALL_DIR})
target_link_directories(claimtrie PRIVATE ${ICU_PATH}/lib)
target_include_directories(claimtrie PRIVATE ${ICU_PATH}/include)
endif(ICU_ADDED)
ExternalProject_Add(Boost
PREFIX boost
DEPENDS ICU
SOURCE_DIR ${Boost_SOURCE_DIR}
CONFIGURE_COMMAND ${Boost_SOURCE_DIR}/bootstrap.sh --with-icu=${ICU_PATH} --with-libraries=${BOOST_LIBS} && ${Boost_SOURCE_DIR}/b2 headers
BUILD_COMMAND ${Boost_SOURCE_DIR}/b2 install threading=multi -sNO_BZIP2=1 -sNO_ZLIB=1 link=static linkflags="-L${ICU_PATH}/lib -licuio -licuuc -licudata -licui18n" cxxflags=-fPIC boost.locale.iconv=off boost.locale.posix=off boost.locale.icu=on boost.locale.std=off -sICU_PATH=${ICU_PATH} --prefix=<INSTALL_DIR>
INSTALL_COMMAND ""
BUILD_IN_SOURCE 1
)
add_dependencies(claimtrie Boost)
ExternalProject_Get_Property(Boost INSTALL_DIR)
target_link_directories(claimtrie PRIVATE ${INSTALL_DIR}/lib)
target_include_directories(claimtrie PRIVATE ${INSTALL_DIR}/include)
set_property(DIRECTORY PROPERTY ADDITIONAL_MAKE_CLEAN_FILES ${Boost_SOURCE_DIR}/bin.v2)
endif(Boost_ADDED)
target_link_libraries(claimtrie PRIVATE boost_filesystem boost_locale)

src/claimtrie/blob.cpp (new file)

@ -0,0 +1,172 @@
#include <blob.h>
#include <algorithm>
#include <cassert>
#include <cstring>
#include <iomanip>
#include <sstream>
/** Template base class for fixed-sized opaque blobs. */
template<uint32_t BITS>
CBaseBlob<BITS>::CBaseBlob()
{
SetNull();
}
template<uint32_t BITS>
CBaseBlob<BITS>::CBaseBlob(const std::vector<uint8_t>& vec)
{
assert(vec.size() == size());
std::copy(vec.begin(), vec.end(), begin());
}
template<uint32_t BITS>
CBaseBlob<BITS>::CBaseBlob(const CBaseBlob& o)
{
*this = o;
}
template<uint32_t BITS>
CBaseBlob<BITS>& CBaseBlob<BITS>::operator=(const CBaseBlob& o)
{
if (this != &o)
std::copy(o.begin(), o.end(), begin());
return *this;
}
template<uint32_t BITS>
int CBaseBlob<BITS>::Compare(const CBaseBlob& b) const
{
return std::memcmp(begin(), b.begin(), size());
}
template<uint32_t BITS>
bool CBaseBlob<BITS>::operator<(const CBaseBlob& b) const
{
return Compare(b) < 0;
}
template<uint32_t BITS>
bool CBaseBlob<BITS>::operator==(const CBaseBlob& b) const
{
return Compare(b) == 0;
}
template<uint32_t BITS>
bool CBaseBlob<BITS>::operator!=(const CBaseBlob& b) const
{
return !(*this == b);
}
template<uint32_t BITS>
bool CBaseBlob<BITS>::IsNull() const
{
return std::all_of(begin(), end(), [](uint8_t e) { return e == 0; });
}
template<uint32_t BITS>
void CBaseBlob<BITS>::SetNull()
{
data.fill(0);
}
template<uint32_t BITS>
std::string CBaseBlob<BITS>::GetHex(bool reverse) const
{
std::stringstream ss;
ss << std::hex;
if (reverse) {
for (auto it = data.rbegin(); it != data.rend(); ++it)
ss << std::setw(2) << std::setfill('0') << uint32_t(*it);
}
else {
for (auto it = data.begin(); it != data.end(); ++it)
ss << std::setw(2) << std::setfill('0') << uint32_t(*it);
}
return ss.str();
}
template<uint32_t BITS>
void CBaseBlob<BITS>::SetHex(const char* psz)
{
SetNull();
// skip leading spaces
while (isspace(*psz))
psz++;
// skip 0x
if (psz[0] == '0' && tolower(psz[1]) == 'x')
psz += 2;
auto b = psz;
// advance to end
while (isxdigit(*psz))
psz++;
--psz;
char buf[3] = {};
auto it = begin();
while (psz >= b && it != end()) {
buf[1] = *psz--;
buf[0] = psz >= b ? *psz-- : '0';
*it++ = std::strtoul(buf, nullptr, 16);
}
}
template<uint32_t BITS>
uint64_t CBaseBlob<BITS>::GetUint64(int pos) const
{
assert((pos + 1) * 8 <= size());
const uint8_t* ptr = begin() + pos * 8;
uint64_t res = 0;
for (int i = 0; i < 8; ++i)
res |= (uint64_t(ptr[i]) << (8 * i));
return res;
}
template<uint32_t BITS>
void CBaseBlob<BITS>::SetHex(const std::string& str)
{
SetHex(str.c_str());
}
template<uint32_t BITS>
std::string CBaseBlob<BITS>::ToString() const
{
return GetHex();
}
template<uint32_t BITS>
uint8_t* CBaseBlob<BITS>::begin() noexcept
{
return data.data();
}
template<uint32_t BITS>
const uint8_t* CBaseBlob<BITS>::begin() const noexcept
{
return data.data();
}
template<uint32_t BITS>
uint8_t* CBaseBlob<BITS>::end() noexcept
{
return data.data() + data.size();
}
template<uint32_t BITS>
const uint8_t* CBaseBlob<BITS>::end() const noexcept
{
return data.data() + data.size();
}
template<uint32_t BITS>
std::size_t CBaseBlob<BITS>::size() const
{
return data.size();
}
template class CBaseBlob<160>;
template class CBaseBlob<256>;

50
src/claimtrie/blob.h Normal file

@@ -0,0 +1,50 @@
#ifndef CLAIMTRIE_BLOB_H
#define CLAIMTRIE_BLOB_H
#include <array>
#include <string>
#include <vector>
/** Template base class for fixed-sized opaque blobs. */
template<uint32_t BITS>
class CBaseBlob
{
std::array<uint8_t, BITS / 8> data;
public:
CBaseBlob();
explicit CBaseBlob(const std::vector<uint8_t>& vec);
CBaseBlob(CBaseBlob&&) = default;
CBaseBlob& operator=(CBaseBlob&&) = default;
CBaseBlob(const CBaseBlob& o);
CBaseBlob& operator=(const CBaseBlob& o);
int Compare(const CBaseBlob& b) const;
bool operator<(const CBaseBlob& b) const;
bool operator==(const CBaseBlob& b) const;
bool operator!=(const CBaseBlob& b) const;
uint8_t* begin() noexcept;
const uint8_t* begin() const noexcept;
uint8_t* end() noexcept;
const uint8_t* end() const noexcept;
std::size_t size() const;
bool IsNull() const;
void SetNull();
std::string GetHex(bool reverse=true) const;
void SetHex(const char* psz);
void SetHex(const std::string& str);
std::string ToString() const;
uint64_t GetUint64(int pos) const;
};
#endif // CLAIMTRIE_BLOB_H
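
The blob API above is easiest to see in a short round-trip: SetHex parses a big-endian hex string into the internal byte array, GetHex (which reverses by default) prints it back out, and Compare is a plain byte-wise memcmp. A minimal sketch, not part of this PR; it assumes blob.h/blob.cpp are on the include/link path and the hash value is made up purely for illustration:

// blob_demo.cpp -- illustrative use of CBaseBlob (sketch, not part of the diff)
#include <blob.h>
#include <cassert>
#include <iostream>
#include <string>

int main()
{
    CBaseBlob<256> h;
    assert(h.IsNull());                // freshly constructed blobs are zero-filled

    // hypothetical value, only to exercise SetHex/GetHex
    const std::string hex = "0000000000000000000000000000000000000000000000000000000000000002";
    h.SetHex(hex);
    assert(h.GetHex() == hex);         // GetHex(true) reverses the bytes back into input order
    assert(!h.IsNull());

    CBaseBlob<256> copy(h);
    assert(copy == h && !(copy < h));  // Compare() is memcmp over the fixed-size array

    std::cout << copy.ToString() << std::endl;
    return 0;
}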

123
src/claimtrie/data.cpp Normal file

@@ -0,0 +1,123 @@
#include <data.h>
#include <algorithm>
#include <sstream>
CClaimValue::CClaimValue(COutPoint outPoint, uint160 claimId, int64_t nAmount, int nHeight, int nValidAtHeight)
: outPoint(std::move(outPoint)), claimId(std::move(claimId)), nAmount(nAmount), nEffectiveAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
bool CClaimValue::operator<(const CClaimValue& other) const
{
if (nEffectiveAmount < other.nEffectiveAmount)
return true;
if (nEffectiveAmount != other.nEffectiveAmount)
return false;
if (nHeight > other.nHeight)
return true;
if (nHeight != other.nHeight)
return false;
return outPoint != other.outPoint && !(outPoint < other.outPoint);
}
bool CClaimValue::operator==(const CClaimValue& other) const
{
return outPoint == other.outPoint && claimId == other.claimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool CClaimValue::operator!=(const CClaimValue& other) const
{
return !(*this == other);
}
std::string CClaimValue::ToString() const
{
std::stringstream ss;
ss << "CClaimValue(" << outPoint.ToString()
<< ", " << claimId.ToString()
<< ", " << nAmount
<< ", " << nEffectiveAmount
<< ", " << nHeight
<< ", " << nValidAtHeight << ')';
return ss.str();
}
CSupportValue::CSupportValue(COutPoint outPoint, uint160 supportedClaimId, int64_t nAmount, int nHeight, int nValidAtHeight)
: outPoint(std::move(outPoint)), supportedClaimId(std::move(supportedClaimId)), nAmount(nAmount), nHeight(nHeight), nValidAtHeight(nValidAtHeight)
{
}
bool CSupportValue::operator==(const CSupportValue& other) const
{
return outPoint == other.outPoint && supportedClaimId == other.supportedClaimId && nAmount == other.nAmount && nHeight == other.nHeight && nValidAtHeight == other.nValidAtHeight;
}
bool CSupportValue::operator!=(const CSupportValue& other) const
{
return !(*this == other);
}
std::string CSupportValue::ToString() const
{
std::stringstream ss;
ss << "CSupportValue(" << outPoint.ToString()
<< ", " << supportedClaimId.ToString()
<< ", " << nAmount
<< ", " << nHeight
<< ", " << nValidAtHeight << ')';
return ss.str();
}
CNameOutPointHeightType::CNameOutPointHeightType(std::string name, COutPoint outPoint, int nValidHeight)
: name(std::move(name)), outPoint(std::move(outPoint)), nValidHeight(nValidHeight)
{
}
CClaimNsupports::CClaimNsupports(CClaimValue claim, int64_t effectiveAmount, int originalHeight, std::vector<CSupportValue> supports)
: claim(std::move(claim)), effectiveAmount(effectiveAmount), originalHeight(originalHeight), supports(std::move(supports))
{
}
bool CClaimNsupports::IsNull() const
{
return claim.claimId.IsNull();
}
CClaimSupportToName::CClaimSupportToName(std::string name, int nLastTakeoverHeight, std::vector<CClaimNsupports> claimsNsupports, std::vector<CSupportValue> unmatchedSupports)
: name(std::move(name)), nLastTakeoverHeight(nLastTakeoverHeight), claimsNsupports(std::move(claimsNsupports)), unmatchedSupports(std::move(unmatchedSupports))
{
}
static const CClaimNsupports invalid;
const CClaimNsupports& CClaimSupportToName::find(const uint160& claimId) const
{
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&claimId](const CClaimNsupports& value) {
return claimId == value.claim.claimId;
});
return it != claimsNsupports.end() ? *it : invalid;
}
const CClaimNsupports& CClaimSupportToName::find(const std::string& partialId) const
{
std::string lowered(partialId);
for (auto& c: lowered)
c = std::tolower(c);
auto it = std::find_if(claimsNsupports.begin(), claimsNsupports.end(), [&lowered](const CClaimNsupports& value) {
return value.claim.claimId.GetHex().find(lowered) == 0;
});
return it != claimsNsupports.end() ? *it : invalid;
}
bool CClaimNsupports::operator<(const CClaimNsupports& other) const
{
return claim < other.claim;
}
CClaimTrieProofNode::CClaimTrieProofNode(std::vector<std::pair<unsigned char, uint256>> children, bool hasValue, uint256 valHash)
: children(std::move(children)), hasValue(hasValue), valHash(std::move(valHash))
{
}

135
src/claimtrie/data.h Normal file

@@ -0,0 +1,135 @@
#ifndef CLAIMTRIE_DATA_H
#define CLAIMTRIE_DATA_H
#include <txoutpoint.h>
#include <uints.h>
#include <cstring>
#include <string>
#include <vector>
struct CClaimValue
{
COutPoint outPoint;
uint160 claimId;
int64_t nAmount = 0;
int64_t nEffectiveAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CClaimValue() = default;
CClaimValue(COutPoint outPoint, uint160 claimId, int64_t nAmount, int nHeight, int nValidAtHeight);
CClaimValue(CClaimValue&&) = default;
CClaimValue(const CClaimValue&) = default;
CClaimValue& operator=(CClaimValue&&) = default;
CClaimValue& operator=(const CClaimValue&) = default;
bool operator<(const CClaimValue& other) const;
bool operator==(const CClaimValue& other) const;
bool operator!=(const CClaimValue& other) const;
std::string ToString() const;
};
struct CSupportValue
{
COutPoint outPoint;
uint160 supportedClaimId;
int64_t nAmount = 0;
int nHeight = 0;
int nValidAtHeight = 0;
CSupportValue() = default;
CSupportValue(COutPoint outPoint, uint160 supportedClaimId, int64_t nAmount, int nHeight, int nValidAtHeight);
CSupportValue(CSupportValue&&) = default;
CSupportValue(const CSupportValue&) = default;
CSupportValue& operator=(CSupportValue&&) = default;
CSupportValue& operator=(const CSupportValue&) = default;
bool operator==(const CSupportValue& other) const;
bool operator!=(const CSupportValue& other) const;
std::string ToString() const;
};
typedef std::vector<CClaimValue> claimEntryType;
typedef std::vector<CSupportValue> supportEntryType;
struct CNameOutPointHeightType
{
std::string name;
COutPoint outPoint;
int nValidHeight = 0;
CNameOutPointHeightType() = default;
CNameOutPointHeightType(std::string name, COutPoint outPoint, int nValidHeight);
};
struct CClaimNsupports
{
CClaimNsupports() = default;
CClaimNsupports(CClaimNsupports&&) = default;
CClaimNsupports(const CClaimNsupports&) = default;
bool operator<(const CClaimNsupports& other) const;
CClaimNsupports& operator=(CClaimNsupports&&) = default;
CClaimNsupports& operator=(const CClaimNsupports&) = default;
CClaimNsupports(CClaimValue claim, int64_t effectiveAmount, int originalHeight,
std::vector<CSupportValue> supports = {});
bool IsNull() const;
CClaimValue claim;
int64_t effectiveAmount = 0;
int originalHeight = -1;
std::vector<CSupportValue> supports;
};
struct CClaimSupportToName
{
CClaimSupportToName(std::string name, int nLastTakeoverHeight, std::vector<CClaimNsupports> claimsNsupports, std::vector<CSupportValue> unmatchedSupports);
const CClaimNsupports& find(const uint160& claimId) const;
const CClaimNsupports& find(const std::string& partialId) const;
const std::string name;
const int nLastTakeoverHeight;
const std::vector<CClaimNsupports> claimsNsupports;
const std::vector<CSupportValue> unmatchedSupports;
};
struct CClaimTrieProofNode
{
CClaimTrieProofNode(std::vector<std::pair<unsigned char, uint256>> children, bool hasValue, uint256 valHash);
CClaimTrieProofNode() = default;
CClaimTrieProofNode(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode(const CClaimTrieProofNode&) = default;
CClaimTrieProofNode& operator=(CClaimTrieProofNode&&) = default;
CClaimTrieProofNode& operator=(const CClaimTrieProofNode&) = default;
std::vector<std::pair<unsigned char, uint256>> children;
bool hasValue;
uint256 valHash;
};
struct CClaimTrieProof
{
CClaimTrieProof() = default;
CClaimTrieProof(CClaimTrieProof&&) = default;
CClaimTrieProof(const CClaimTrieProof&) = default;
CClaimTrieProof& operator=(CClaimTrieProof&&) = default;
CClaimTrieProof& operator=(const CClaimTrieProof&) = default;
std::vector<std::pair<bool, uint256>> pairs;
std::vector<CClaimTrieProofNode> nodes;
int nHeightOfLastTakeover = 0;
bool hasValue = false;
COutPoint outPoint;
};
#endif // CLAIMTRIE_DATA_H
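
For readers skimming the new value types: the constructors above take the same fields the RPC layer reports, and ToString gives a compact dump. A minimal sketch, not part of this PR; it assumes uints.h provides uint160S/uint256S and txoutpoint.h a (hash, n) constructor, as the wrapper tests later in this diff do:

// data_demo.cpp -- illustrative construction of CClaimValue / CSupportValue (sketch)
#include <data.h>
#include <iostream>

int main()
{
    // hypothetical txid and claimId, for illustration only
    COutPoint txo(uint256S("1234567890987654321012345678909876543210123456789098765432101234"), 1);
    uint160 claimId = uint160S("1234567890987654321012345678909876543210");

    CClaimValue claim(txo, claimId, 20 /*nAmount*/, 1 /*nHeight*/, 10 /*nValidAtHeight*/);
    CSupportValue support(txo, claimId, 5 /*nAmount*/, 2 /*nHeight*/, 12 /*nValidAtHeight*/);

    std::cout << claim.ToString() << "\n" << support.ToString() << std::endl;
    return 0;
}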

417
src/claimtrie/forks.cpp Normal file

@@ -0,0 +1,417 @@
#include <forks.h>
#include <hashes.h>
#include <log.h>
#include <trie.h>
#include <boost/locale.hpp>
#include <boost/locale/conversion.hpp>
#include <boost/locale/localization_backend.hpp>
#define logPrint CLogPrint::global()
CClaimTrieCacheExpirationFork::CClaimTrieCacheExpirationFork(CClaimTrie* base) : CClaimTrieCacheBase(base)
{
expirationHeight = nNextHeight;
}
int CClaimTrieCacheExpirationFork::expirationTime() const
{
if (expirationHeight < base->nExtendedClaimExpirationForkHeight)
return CClaimTrieCacheBase::expirationTime();
return base->nExtendedClaimExpirationTime;
}
bool CClaimTrieCacheExpirationFork::incrementBlock()
{
if (CClaimTrieCacheBase::incrementBlock()) {
expirationHeight = nNextHeight;
return true;
}
return false;
}
bool CClaimTrieCacheExpirationFork::decrementBlock()
{
if (CClaimTrieCacheBase::decrementBlock()) {
expirationHeight = nNextHeight;
return true;
}
return false;
}
void CClaimTrieCacheExpirationFork::initializeIncrement()
{
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != base->nExtendedClaimExpirationForkHeight)
return;
forkForExpirationChange(true);
}
bool CClaimTrieCacheExpirationFork::finalizeDecrement()
{
auto ret = CClaimTrieCacheBase::finalizeDecrement();
if (ret && nNextHeight == base->nExtendedClaimExpirationForkHeight)
forkForExpirationChange(false);
return ret;
}
bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment)
{
ensureTransacting();
/*
If increment is True, we have forked to extend the expiration time, thus items in the expiration queue
will have their expiration extended by "new expiration time - original expiration time"
If increment is False, we are decrementing a block to reverse the fork. Thus items in the expiration queue
will have their expiration extension removed.
*/
auto height = nNextHeight;
int extension = base->nExtendedClaimExpirationTime - base->nOriginalClaimExpirationTime;
if (!increment) {
height += extension;
extension *= -1;
}
db << "UPDATE claim SET expirationHeight = expirationHeight + ? WHERE expirationHeight >= ?" << extension << height;
db << "UPDATE support SET expirationHeight = expirationHeight + ? WHERE expirationHeight >= ?" << extension << height;
return true;
}
CClaimTrieCacheNormalizationFork::CClaimTrieCacheNormalizationFork(CClaimTrie* base) : CClaimTrieCacheExpirationFork(base)
{
db.define("NORMALIZED", [this](const std::string& str) { return normalizeClaimName(str, true); });
}
bool CClaimTrieCacheNormalizationFork::shouldNormalize() const
{
return nNextHeight > base->nNormalizedNameForkHeight;
}
std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::string& name, bool force) const
{
if (!force && !shouldNormalize())
return name;
static std::locale utf8;
static bool initialized = false;
if (!initialized) {
static boost::locale::localization_backend_manager manager =
boost::locale::localization_backend_manager::global();
manager.select("icu");
static boost::locale::generator curLocale(manager);
utf8 = curLocale("en_US.UTF8");
initialized = true;
}
std::string normalized;
try {
// Check if it is a valid utf-8 string. If not, it will throw a
// boost::locale::conv::conversion_error exception which we catch later
normalized = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
if (normalized.empty())
return name;
// these methods supposedly only use the "UTF8" portion of the locale object:
normalized = boost::locale::normalize(normalized, boost::locale::norm_nfd, utf8);
normalized = boost::locale::fold_case(normalized, utf8);
} catch (const boost::locale::conv::conversion_error& e) {
return name;
} catch (const std::bad_cast& e) {
logPrint << "CClaimTrieCacheNormalizationFork::" << __func__ << "() is invalid or dependencies are missing: " << e.what() << Clog::endl;
throw;
} catch (const std::exception& e) { // TODO: change to use ... with current_exception() in c++11
logPrint << "CClaimTrieCacheNormalizationFork::" << __func__ << "() had an unexpected exception: " << e.what() << Clog::endl;
return name;
}
return normalized;
}
bool CClaimTrieCacheNormalizationFork::normalizeAllNamesInTrieIfNecessary()
{
ensureTransacting();
// make the new nodes
db << "INSERT INTO node(name) SELECT NORMALIZED(name) AS nn FROM claim WHERE nn != nodeName "
"AND activationHeight <= ?1 AND expirationHeight > ?1 ON CONFLICT(name) DO UPDATE SET hash = NULL" << nNextHeight;
// there's a subtlety here: names in supports don't make new nodes
db << "UPDATE node SET hash = NULL WHERE name IN "
"(SELECT NORMALIZED(name) AS nn FROM support WHERE nn != nodeName "
"AND activationHeight <= ?1 AND expirationHeight > ?1)" << nNextHeight;
// update the claims and supports
db << "UPDATE claim SET nodeName = NORMALIZED(name) WHERE activationHeight <= ?1 AND expirationHeight > ?1" << nNextHeight;
db << "UPDATE support SET nodeName = NORMALIZED(name) WHERE activationHeight <= ?1 AND expirationHeight > ?1" << nNextHeight;
// remove the old nodes
db << "UPDATE node SET hash = NULL WHERE name NOT IN "
"(SELECT nodeName FROM claim WHERE activationHeight <= ?1 AND expirationHeight > ?1 "
"UNION SELECT nodeName FROM support WHERE activationHeight <= ?1 AND expirationHeight > ?1)" << nNextHeight;
// work around a bug in the old implementation:
db << "UPDATE claim SET activationHeight = ?1 " // force a takeover on these
"WHERE updateHeight < ?1 AND activationHeight > ?1 AND nodeName != name" << nNextHeight;
return true;
}
bool CClaimTrieCacheNormalizationFork::unnormalizeAllNamesInTrieIfNecessary()
{
ensureTransacting();
db << "INSERT INTO node(name) SELECT name FROM claim WHERE name != nodeName "
"AND activationHeight < ?1 AND expirationHeight > ?1 ON CONFLICT(name) DO UPDATE SET hash = NULL" << nNextHeight;
db << "UPDATE node SET hash = NULL WHERE name IN "
"(SELECT nodeName FROM support WHERE name != nodeName "
"UNION SELECT nodeName FROM claim WHERE name != nodeName)";
db << "UPDATE claim SET nodeName = name";
db << "UPDATE support SET nodeName = name";
// we need to let the tree structure method do the actual node delete
db << "UPDATE node SET hash = NULL WHERE name NOT IN "
"(SELECT DISTINCT name FROM claim)";
return true;
}
bool CClaimTrieCacheNormalizationFork::incrementBlock()
{
if (nNextHeight == base->nNormalizedNameForkHeight)
normalizeAllNamesInTrieIfNecessary();
return CClaimTrieCacheExpirationFork::incrementBlock();
}
bool CClaimTrieCacheNormalizationFork::decrementBlock()
{
auto ret = CClaimTrieCacheExpirationFork::decrementBlock();
if (ret && nNextHeight == base->nNormalizedNameForkHeight)
unnormalizeAllNamesInTrieIfNecessary();
return ret;
}
bool CClaimTrieCacheNormalizationFork::getProofForName(const std::string& name, const uint160& claim, CClaimTrieProof& proof)
{
return CClaimTrieCacheExpirationFork::getProofForName(normalizeClaimName(name), claim, proof);
}
bool CClaimTrieCacheNormalizationFork::getInfoForName(const std::string& name, CClaimValue& claim, int offsetHeight) const
{
return CClaimTrieCacheExpirationFork::getInfoForName(normalizeClaimName(name), claim, offsetHeight);
}
CClaimSupportToName CClaimTrieCacheNormalizationFork::getClaimsForName(const std::string& name) const
{
return CClaimTrieCacheExpirationFork::getClaimsForName(normalizeClaimName(name));
}
int CClaimTrieCacheNormalizationFork::getDelayForName(const std::string& name, const uint160& claimId) const
{
return CClaimTrieCacheExpirationFork::getDelayForName(normalizeClaimName(name), claimId);
}
std::string CClaimTrieCacheNormalizationFork::adjustNameForValidHeight(const std::string& name, int validHeight) const
{
return normalizeClaimName(name, validHeight > base->nNormalizedNameForkHeight);
}
CClaimTrieCacheHashFork::CClaimTrieCacheHashFork(CClaimTrie* base) : CClaimTrieCacheNormalizationFork(base)
{
}
static const auto leafHash = uint256S("0000000000000000000000000000000000000000000000000000000000000002");
static const auto emptyHash = uint256S("0000000000000000000000000000000000000000000000000000000000000003");
uint256 ComputeMerkleRoot(std::vector<uint256> hashes)
{
while (hashes.size() > 1) {
if (hashes.size() & 1)
hashes.push_back(hashes.back());
sha256n_way(hashes);
hashes.resize(hashes.size() / 2);
}
return hashes.empty() ? uint256{} : hashes[0];
}
uint256 CClaimTrieCacheHashFork::computeNodeHash(const std::string& name, int takeoverHeight)
{
if (nNextHeight < base->nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::computeNodeHash(name, takeoverHeight);
std::vector<uint256> childHashes;
childHashQuery << name >> [&childHashes](std::string, uint256 hash) {
childHashes.push_back(std::move(hash));
};
childHashQuery++;
std::vector<uint256> claimHashes;
//if (takeoverHeight > 0) {
COutPoint p;
for (auto &&row: claimHashQuery << nNextHeight << name) {
row >> p.hash >> p.n;
claimHashes.push_back(getValueHash(p, takeoverHeight));
}
claimHashQuery++;
//}
auto left = childHashes.empty() ? leafHash : ComputeMerkleRoot(std::move(childHashes));
auto right = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(std::move(claimHashes));
return Hash(left.begin(), left.end(), right.begin(), right.end());
}
std::vector<uint256> ComputeMerklePath(const std::vector<uint256>& hashes, uint32_t idx)
{
uint32_t count = 0;
int matchlevel = -1;
bool matchh = false;
uint256 inner[32], h;
const uint32_t one = 1;
std::vector<uint256> res;
const auto iterateInner = [&](int& level) {
for (; !(count & (one << level)); level++) {
const auto& ihash = inner[level];
if (matchh) {
res.push_back(ihash);
} else if (matchlevel == level) {
res.push_back(h);
matchh = true;
}
h = Hash(ihash.begin(), ihash.end(), h.begin(), h.end());
}
};
while (count < hashes.size()) {
h = hashes[count];
matchh = count == idx;
count++;
int level = 0;
iterateInner(level);
// Store the resulting hash at inner position level.
inner[level] = h;
if (matchh)
matchlevel = level;
}
int level = 0;
while (!(count & (one << level)))
level++;
h = inner[level];
matchh = matchlevel == level;
while (count != (one << level)) {
// If we reach this point, h is an inner value that is not the top.
if (matchh)
res.push_back(h);
h = Hash(h.begin(), h.end(), h.begin(), h.end());
// Increment count to the value it would have if two entries at this level had existed.
count += (one << level);
level++;
iterateInner(level);
}
return res;
}
extern const std::string proofClaimQuery_s;
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, const uint160& claim, CClaimTrieProof& proof)
{
if (nNextHeight < base->nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::getProofForName(name, claim, proof);
auto fillPairs = [&proof](const std::vector<uint256>& hashes, uint32_t idx) {
auto partials = ComputeMerklePath(hashes, idx);
for (int i = partials.size() - 1; i >= 0; --i)
proof.pairs.emplace_back((idx >> i) & 1, partials[i]);
};
// cache the parent nodes
getMerkleHash();
proof = CClaimTrieProof();
for (auto&& row: db << proofClaimQuery_s << name) {
std::string key;
int takeoverHeight;
row >> key >> takeoverHeight;
uint32_t nextCurrentIdx = 0;
std::vector<uint256> childHashes;
for (auto&& child : childHashQuery << key) {
std::string childKey;
uint256 childHash;
child >> childKey >> childHash;
if (name.find(childKey) == 0)
nextCurrentIdx = uint32_t(childHashes.size());
childHashes.push_back(childHash);
}
childHashQuery++;
std::vector<uint256> claimHashes;
uint32_t claimIdx = 0;
for (auto&& child: claimHashQuery << nNextHeight << key) {
COutPoint childOutPoint;
uint160 childClaimID;
child >> childOutPoint.hash >> childOutPoint.n >> childClaimID;
if (childClaimID == claim && key == name) {
claimIdx = uint32_t(claimHashes.size());
proof.outPoint = childOutPoint;
proof.hasValue = true;
}
claimHashes.push_back(getValueHash(childOutPoint, takeoverHeight));
}
claimHashQuery++;
// I am on a node; I need a hash(children, claims)
// if I am the last node on the list, it will be hash(children, x)
// else it will be hash(x, claims)
if (key == name) {
proof.nHeightOfLastTakeover = takeoverHeight;
auto hash = childHashes.empty() ? leafHash : ComputeMerkleRoot(std::move(childHashes));
proof.pairs.emplace_back(true, hash);
if (!claimHashes.empty())
fillPairs(claimHashes, claimIdx);
} else {
auto hash = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(std::move(claimHashes));
proof.pairs.emplace_back(false, hash);
if (!childHashes.empty())
fillPairs(childHashes, nextCurrentIdx);
}
}
std::reverse(proof.pairs.begin(), proof.pairs.end());
return true;
}
void CClaimTrieCacheHashFork::initializeIncrement()
{
CClaimTrieCacheNormalizationFork::initializeIncrement();
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight == base->nAllClaimsInMerkleForkHeight - 1) {
ensureTransacting();
db << "UPDATE node SET hash = NULL";
}
}
bool CClaimTrieCacheHashFork::finalizeDecrement()
{
auto ret = CClaimTrieCacheNormalizationFork::finalizeDecrement();
if (ret && nNextHeight == base->nAllClaimsInMerkleForkHeight - 1) {
ensureTransacting();
db << "UPDATE node SET hash = NULL";
}
return ret;
}
bool CClaimTrieCacheHashFork::allowSupportMetadata() const
{
return nNextHeight >= base->nAllClaimsInMerkleForkHeight;
}
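
The normalization fork above does three things to a claim name once the fork height is reached: validate it as UTF-8, decompose it to NFD, and case-fold it, falling back to the original string when the input is not valid UTF-8. The stand-alone sketch below (illustrative only, not part of this PR) runs the same boost::locale steps outside the cache class; it assumes an ICU-backed Boost.Locale build, matching the ICU/Boost wiring in the CMake fragment earlier in this diff:

// normalize_demo.cpp -- the same steps as normalizeClaimName, in isolation (sketch)
#include <boost/locale.hpp>
#include <boost/locale/conversion.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::locale::generator gen;
    std::locale utf8 = gen("en_US.UTF-8");

    std::string name = "TestName";  // try any mixed-case, accented UTF-8 string here
    try {
        // reject invalid UTF-8 exactly as the fork code does
        std::string s = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
        s = boost::locale::normalize(s, boost::locale::norm_nfd, utf8);  // NFD decomposition
        s = boost::locale::fold_case(s, utf8);                           // case folding
        std::cout << s << std::endl;                                     // prints "testname"
    } catch (const boost::locale::conv::conversion_error&) {
        std::cout << name << std::endl;  // invalid UTF-8 passes through unchanged
    }
    return 0;
}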

70
src/claimtrie/forks.h Normal file

@@ -0,0 +1,70 @@
#ifndef CLAIMTRIE_FORKS_H
#define CLAIMTRIE_FORKS_H
#include <trie.h>
class CClaimTrieCacheExpirationFork : public CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheExpirationFork(CClaimTrie* base);
int expirationTime() const override;
virtual void initializeIncrement();
bool incrementBlock() override;
bool decrementBlock() override;
bool finalizeDecrement() override;
protected:
int expirationHeight;
private:
bool forkForExpirationChange(bool increment);
};
class CClaimTrieCacheNormalizationFork : public CClaimTrieCacheExpirationFork
{
public:
explicit CClaimTrieCacheNormalizationFork(CClaimTrie* base);
bool shouldNormalize() const;
// lower-case and normalize any input string name
// see: https://unicode.org/reports/tr15/#Norm_Forms
std::string normalizeClaimName(const std::string& name, bool force = false) const; // public only for validating name field on update op
bool incrementBlock() override;
bool decrementBlock() override;
bool getProofForName(const std::string& name, const uint160& claim, CClaimTrieProof& proof) override;
bool getInfoForName(const std::string& name, CClaimValue& claim, int heightOffset = 0) const override;
CClaimSupportToName getClaimsForName(const std::string& name) const override;
std::string adjustNameForValidHeight(const std::string& name, int validHeight) const override;
protected:
int getDelayForName(const std::string& name, const uint160& claimId) const override;
private:
bool normalizeAllNamesInTrieIfNecessary();
bool unnormalizeAllNamesInTrieIfNecessary();
};
class CClaimTrieCacheHashFork : public CClaimTrieCacheNormalizationFork
{
public:
explicit CClaimTrieCacheHashFork(CClaimTrie* base);
bool getProofForName(const std::string& name, const uint160& claim, CClaimTrieProof& proof) override;
void initializeIncrement() override;
bool finalizeDecrement() override;
bool allowSupportMetadata() const;
protected:
uint256 computeNodeHash(const std::string& name, int takeoverHeight) override;
};
typedef CClaimTrieCacheHashFork CClaimTrieCache;
#endif // CLAIMTRIE_FORKS_H

21
src/claimtrie/hashes.cpp Normal file

@@ -0,0 +1,21 @@
#include <hashes.h>
// Bitcoin-style double SHA-256 hash
uint256 CalcHash(SHA256_CTX* sha)
{
uint256 result;
SHA256_Final(result.begin(), sha);
SHA256_Init(sha);
SHA256_Update(sha, result.begin(), result.size());
SHA256_Final(result.begin(), sha);
return result;
}
// universal N way hash function
std::function<void(std::vector<uint256>&)> sha256n_way =
[](std::vector<uint256>& hashes) {
for (std::size_t i = 0, j = 0; i < hashes.size(); i += 2)
hashes[j++] = Hash(hashes[i].begin(), hashes[i].end(),
hashes[i+1].begin(), hashes[i+1].end());
};

33
src/claimtrie/hashes.h Normal file

@@ -0,0 +1,33 @@
#ifndef CLAIMTRIE_HASHES_H
#define CLAIMTRIE_HASHES_H
#include <openssl/sha.h>
#include <functional>
#include <vector>
#include <uints.h>
uint256 CalcHash(SHA256_CTX* sha);
template<typename TIterator, typename... Args>
uint256 CalcHash(SHA256_CTX* sha, TIterator begin, TIterator end, Args... args)
{
static uint8_t blank;
SHA256_Update(sha, begin == end ? &blank : (uint8_t*)&begin[0], std::distance(begin, end) * sizeof(begin[0]));
return CalcHash(sha, args...);
}
template<typename TIterator, typename... Args>
uint256 Hash(TIterator begin, TIterator end, Args... args)
{
static_assert((sizeof...(args) & 1) != 1, "Parameters should be even number");
SHA256_CTX sha;
SHA256_Init(&sha);
return CalcHash(&sha, begin, end, args...);
}
extern std::function<void(std::vector<uint256>&)> sha256n_way;
#endif // CLAIMTRIE_HASHES_H
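
The variadic Hash template above streams each (begin, end) range into a single SHA-256 context and then lets CalcHash finish with the Bitcoin-style double hash, so Hash(a.begin(), a.end(), b.begin(), b.end()) equals SHA256(SHA256(a || b)). A quick equivalence check, illustrative only and not part of this PR:

// hash_demo.cpp -- Hash() versus a hand-rolled double SHA-256 (sketch)
#include <hashes.h>
#include <openssl/sha.h>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

int main()
{
    std::vector<uint8_t> a{1, 2, 3};
    std::vector<uint8_t> b{4, 5};

    // variadic helper from hashes.h: double-hashes the concatenation a || b
    uint256 viaTemplate = Hash(a.begin(), a.end(), b.begin(), b.end());

    // the same computation spelled out with plain OpenSSL calls
    std::vector<uint8_t> concat(a);
    concat.insert(concat.end(), b.begin(), b.end());
    uint8_t once[SHA256_DIGEST_LENGTH], twice[SHA256_DIGEST_LENGTH];
    SHA256(concat.data(), concat.size(), once);
    SHA256(once, sizeof(once), twice);

    assert(std::memcmp(viaTemplate.begin(), twice, viaTemplate.size()) == 0);
    return 0;
}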


@@ -0,0 +1,75 @@
%module(directors="1") libclaimtrie
%{
#include "blob.h"
#include "uints.h"
#include "txoutpoint.h"
#include "data.h"
#include "trie.h"
#include "forks.h"
%}
%feature("flatnested", 1);
%feature("director") CIterateCallback;
%include stl.i
%include stdint.i
%include typemaps.i
%apply int& OUTPUT { int& nValidAtHeight };
%ignore CBaseBlob(CBaseBlob &&);
%ignore CClaimNsupports(CClaimNsupports &&);
%ignore CClaimTrieProof(CClaimTrieProof &&);
%ignore CClaimTrieProofNode(CClaimTrieProofNode &&);
%ignore CClaimValue(CClaimValue &&);
%ignore COutPoint(COutPoint &&);
%ignore CSupportValue(CSupportValue &&);
%ignore uint160(uint160 &&);
%ignore uint256(uint256 &&);
%template(vecUint8) std::vector<uint8_t>;
%include "blob.h"
%template(blob160) CBaseBlob<160>;
%template(blob256) CBaseBlob<256>;
%include "uints.h"
%include "txoutpoint.h"
%include "data.h"
%rename(CClaimTrieCache) CClaimTrieCacheHashFork;
%include "trie.h"
%include "forks.h"
%template(claimEntryType) std::vector<CClaimValue>;
%template(supportEntryType) std::vector<CSupportValue>;
%template(claimsNsupports) std::vector<CClaimNsupports>;
%template(proofPair) std::pair<bool, uint256>;
%template(proofNodePair) std::pair<unsigned char, uint256>;
%template(proofNodes) std::vector<CClaimTrieProofNode>;
%template(proofPairs) std::vector<std::pair<bool, uint256>>;
%template(proofNodeChildren) std::vector<std::pair<unsigned char, uint256>>;
%inline %{
struct CIterateCallback {
CIterateCallback() = default;
virtual ~CIterateCallback() = default;
virtual void apply(const std::string&) = 0;
};
void getNamesInTrie(const CClaimTrieCache& cache, CIterateCallback* cb)
{
cache.getNamesInTrie([cb](const std::string& name) {
cb->apply(name);
});
}
%}
%typemap(in,numinputs=0) CClaimValue&, CClaimTrieProof& %{
$1 = &$input;
%}


@@ -0,0 +1,120 @@
package main
import(
"testing"
."libclaimtrie"
)
func assertEqual(t *testing.T, a interface{}, b interface{}, msg string) {
if a != b {
t.Fatalf("%s != %s, %s", a, b, msg)
}
}
func assertNotEqual(t *testing.T, a interface{}, b interface{}, msg string) {
if a == b {
t.Fatalf("%s == %s, %s", a, b, msg)
}
}
func assertFalse(t *testing.T, a interface{}, msg string) {
if a != false {
t.Fatalf("%s != false, %s", a, msg)
}
}
func assertTrue(t *testing.T, a interface{}, msg string) {
if a != true {
t.Fatalf("%s != true, %s", a, msg)
}
}
func assertClaimEqual(t *testing.T, claim CClaimValue, txo COutPoint, cid Uint160, amount int64, effe int64, height int, validHeight int, msg string) {
assertEqual(t, claim.GetOutPoint(), txo, msg)
assertEqual(t, claim.GetClaimId(), cid, msg)
assertEqual(t, claim.GetNAmount(), amount, msg)
assertEqual(t, claim.GetNEffectiveAmount(), effe, msg)
assertEqual(t, claim.GetNHeight(), height, msg)
assertEqual(t, claim.GetNValidAtHeight(), validHeight, msg)
}
func assertSupportEqual(t *testing.T, support CSupportValue, txo COutPoint, cid Uint160, amount int64, height int, validHeight int, msg string) {
assertEqual(t, support.GetOutPoint(), txo, msg)
assertEqual(t, support.GetSupportedClaimId(), cid, msg)
assertEqual(t, support.GetNAmount(), amount, msg)
assertEqual(t, support.GetNHeight(), height, msg)
assertEqual(t, support.GetNValidAtHeight(), validHeight, msg)
}
var uint256s = "1234567890987654321012345678909876543210123456789098765432101234"
var uint160 = Uint160S("1234567890987654321012345678909876543210")
var uint256 = Uint256S(uint256s)
var txp = NewCOutPoint(uint256, uint(1))
func Test_uint256(t *testing.T) {
assertFalse(t, uint256.IsNull(), "incorrect uint256S or CBaseBlob::IsNull")
assertEqual(t, uint256.GetHex(), uint256s, "incorrect CBaseBlob::GetHex")
assertEqual(t, uint256.GetHex(), uint256.ToString(), "incorrect CBaseBlob::ToString")
assertEqual(t, uint256.Size(), 32, "incorrect CBaseBlob::size")
uint256c := NewUint256()
assertNotEqual(t, uint256c, uint256, "incorrect CBaseBlob::operator!=")
assertTrue(t, uint256c.IsNull(), "incorrect CBaseBlob::IsNull")
uint256c = NewUint256(uint256)
assertEqual(t, uint256c, uint256, "incorrect CBaseBlob::operator==")
uint256c.SetNull()
assertTrue(t, uint256c.IsNull(), "incorrect CBaseBlob::SetNull")
}
func Test_txoupoint(t *testing.T) {
assertEqual(t, txp.GetHash(), uint256, "incorrect COutPoint::COutPoint")
assertEqual(t, txp.GetN(), 1, "incorrect COutPoint::COutPoint")
assertFalse(t, txp.IsNull(), "incorrect COutPoint::IsNull")
pcopy := NewCOutPoint()
assertTrue(t, pcopy.IsNull(), "incorrect COutPoint::IsNull")
assertEqual(t, pcopy.GetHash(), NewUint256(), "incorrect COutPoint::COutPoint")
assertNotEqual(t, pcopy, txp, "incorrect COutPoint::operator!=")
}
func Test_claim(t *testing.T) {
assertEqual(t, uint160.Size(), 20, "incorrect CBaseBlob::size")
claim := NewCClaimValue(txp, uint160, 20, 1, 10)
assertClaimEqual(t, claim, txp, uint160, 20, 20, 1, 10, "incorrect CClaimValue::CClaimValue")
}
func Test_support(t *testing.T) {
claim := NewCClaimValue(txp, uint160, 20, 1, 10)
support := NewCSupportValue(txp, uint160, 20, 1, 10)
assertSupportEqual(t, support, claim.GetOutPoint(), claim.GetClaimId(), claim.GetNAmount(), claim.GetNHeight(), claim.GetNValidAtHeight(), "incorrect CSupportValue::CSupportValue")
}
func Test_claimtrie(t *testing.T) {
claim := NewCClaimValue(txp, uint160, 20, 0, 0)
wipe := true; height := 1; data_dir := "."
trie := NewCClaimTrie(10*1024*1024, wipe, height, data_dir)
cache := NewCClaimTrieCache(trie)
assertTrue(t, trie.Empty(), "incorrect CClaimtrieCache::empty")
assertTrue(t, cache.AddClaim("test", txp, uint160, 20, 0, 0), "incorrect CClaimtrieCache::addClaim")
assertTrue(t, cache.HaveClaim("test", txp), "incorrect CClaimtrieCache::haveClaim")
assertEqual(t, cache.GetTotalNamesInTrie(), 1, "incorrect CClaimtrieCache::getTotalNamesInTrie")
assertEqual(t, cache.GetTotalClaimsInTrie(), 1, "incorrect CClaimtrieCache::getTotalClaimsInTrie")
var nValidAtHeight []int
// add second claim
txp.SetN(2)
uint1601 := Uint160S("1234567890987654321012345678909876543211")
assertTrue(t, cache.AddClaim("test", txp, uint1601, 20, 1, 1), "incorrect CClaimtrieCache::addClaim")
result := cache.HaveClaimInQueue("test", txp, nValidAtHeight)
assertTrue(t, result, "incorrect CClaimTrieCache::haveClaimInQueue")
assertEqual(t, nValidAtHeight[0], 1, "incorrect CClaimTrieCache::haveClaimInQueue, nValidAtHeight")
claim1 := NewCClaimValue()
assertTrue(t, cache.GetInfoForName("test", claim1), "incorrect CClaimTrieCache::getInfoForName")
assertEqual(t, claim, claim1, "incorrect CClaimtrieCache::getInfoForName")
proof := NewCClaimTrieProof()
assertTrue(t, cache.GetProofForName("test", uint160, proof), "incorrect CacheProofCallback")
assertTrue(t, proof.GetHasValue(), "incorrect CClaimTrieCache::getProofForName")
claimsToName := cache.GetClaimsForName("test")
claims := claimsToName.GetClaimsNsupports()
assertEqual(t, claims.Size(), 2, "incorrect CClaimTrieCache::getClaimsForName")
assertFalse(t, claims.Get(0).IsNull(), "incorrect CClaimNsupports::IsNull")
assertFalse(t, claims.Get(1).IsNull(), "incorrect CClaimNsupports::IsNull")
}


@@ -0,0 +1,108 @@
from libclaimtrie import *
import unittest
class CacheIterateCallback(CIterateCallback):
def __init__(self, names):
CIterateCallback.__init__(self)
self.names = names
def apply(self, name):
assert(name in self.names), "Incorrect trie names"
class TestClaimTrieTypes(unittest.TestCase):
def setUp(self):
self.uint256s = "1234567890987654321012345678909876543210123456789098765432101234"
self.uint160 = uint160S("1234567890987654321012345678909876543210")
self.uint = uint256S(self.uint256s)
self.txp = COutPoint(self.uint, 1)
def assertClaimEqual(self, claim, txo, cid, amount, effe, height, validHeight, msg):
self.assertEqual(claim.outPoint, txo, msg)
self.assertEqual(claim.claimId, cid, msg)
self.assertEqual(claim.nAmount, amount, msg)
self.assertEqual(claim.nEffectiveAmount, effe, msg)
self.assertEqual(claim.nHeight, height, msg)
self.assertEqual(claim.nValidAtHeight, validHeight, msg)
def assertSupportEqual(self, support, txo, cid, amount, height, validHeight, msg):
self.assertEqual(support.outPoint, txo, msg)
self.assertEqual(support.supportedClaimId, cid, msg)
self.assertEqual(support.nAmount, amount, msg)
self.assertEqual(support.nHeight, height, msg)
self.assertEqual(support.nValidAtHeight, validHeight, msg)
def test_uint256(self):
uint = self.uint
self.assertFalse(uint.IsNull(), "incorrect uint256S or CBaseBlob::IsNull")
self.assertEqual(uint.GetHex(), self.uint256s, "incorrect CBaseBlob::GetHex")
self.assertEqual(uint.GetHex(), uint.ToString(), "incorrect CBaseBlob::ToString")
self.assertEqual(uint.size(), 32, "incorrect CBaseBlob::size")
copy = uint256()
self.assertNotEqual(copy, uint, "incorrect CBaseBlob::operator!=")
self.assertTrue(copy.IsNull(), "incorrect CBaseBlob::IsNull")
copy = uint256(uint)
self.assertEqual(copy, uint, "incorrect CBaseBlob::operator==")
copy.SetNull()
self.assertTrue(copy.IsNull(), "incorrect CBaseBlob::SetNull")
def test_txoupoint(self):
txp = self.txp
uint = self.uint
self.assertEqual(txp.hash, uint, "incorrect COutPoint::COutPoint")
self.assertEqual(txp.n, 1, "incorrect COutPoint::COutPoint")
self.assertFalse(txp.IsNull(), "incorrect COutPoint::IsNull")
pcopy = COutPoint()
self.assertTrue(pcopy.IsNull(), "incorrect COutPoint::IsNull")
self.assertEqual(pcopy.hash, uint256(), "incorrect COutPoint::COutPoint")
self.assertNotEqual(pcopy, txp, "incorrect COutPoint::operator!=")
self.assertIn(uint.ToString()[:10], txp.ToString(), "incorrect COutPoint::ToString")
def test_claim(self):
txp = self.txp
uint160 = self.uint160
self.assertEqual(uint160.size(), 20, "incorrect CBaseBlob::size")
claim = CClaimValue(txp, uint160, 20, 1, 10)
self.assertClaimEqual(claim, txp, uint160, 20, 20, 1, 10, "incorrect CClaimValue::CClaimValue")
def test_support(self):
txp = self.txp
uint160 = self.uint160
claim = CClaimValue(txp, uint160, 20, 1, 10)
support = CSupportValue(txp, uint160, 20, 1, 10)
self.assertSupportEqual(support, claim.outPoint, claim.claimId, claim.nAmount, claim.nHeight, claim.nValidAtHeight, "incorrect CSupportValue::CSupportValue")
def test_claimtrie(self):
txp = self.txp
uint160 = self.uint160
claim = CClaimValue(txp, uint160, 20, 0, 0)
wipe = True; height = 1; data_dir = "."
trie = CClaimTrie(10*1024*1024, wipe, height, data_dir)
cache = CClaimTrieCache(trie)
self.assertTrue(trie.empty(), "incorrect CClaimtrieCache::empty")
self.assertTrue(cache.addClaim("test", txp, uint160, 20, 0, 0), "incorrect CClaimtrieCache::addClaim")
self.assertTrue(cache.haveClaim("test", txp), "incorrect CClaimtrieCache::haveClaim")
self.assertEqual(cache.getTotalNamesInTrie(), 1, "incorrect CClaimtrieCache::getTotalNamesInTrie")
self.assertEqual(cache.getTotalClaimsInTrie(), 1, "incorrect CClaimtrieCache::getTotalClaimsInTrie")
getNamesInTrie(cache, CacheIterateCallback(["test"]))
nValidAtHeight = -1
# add second claim
txp.n = 2
uint1601 = uint160S("1234567890987654321012345678909876543211")
self.assertTrue(cache.addClaim("test", txp, uint1601, 20, 1, 1), "incorrect CClaimtrieCache::addClaim")
result, nValidAtHeight = cache.haveClaimInQueue("test", txp)
self.assertTrue(result, "incorrect CClaimTrieCache::haveClaimInQueue")
self.assertEqual(nValidAtHeight, 1, "incorrect CClaimTrieCache::haveClaimInQueue, nValidAtHeight")
claim1 = CClaimValue()
self.assertTrue(cache.getInfoForName("test", claim1), "incorrect CClaimTrieCache::getInfoForName")
self.assertEqual(claim, claim1, "incorrect CClaimtrieCache::getInfoForName")
proof = CClaimTrieProof()
self.assertTrue(cache.getProofForName("test", uint160, proof), "incorrect CacheProofCallback")
self.assertTrue(proof.hasValue, "incorrect CClaimTrieCache::getProofForName")
claimsToName = cache.getClaimsForName("test")
claims = claimsToName.claimsNsupports
self.assertEqual(claims.size(), 2, "incorrect CClaimTrieCache::getClaimsForName")
self.assertFalse(claims[0].IsNull(), "incorrect CClaimNsupports::IsNull")
self.assertFalse(claims[1].IsNull(), "incorrect CClaimNsupports::IsNull")
unittest.main()

30
src/claimtrie/log.cpp Normal file

@@ -0,0 +1,30 @@
#include <log.h>
void CLogPrint::setLogger(ClogBase* log)
{
ss.str({});
logger = log;
}
CLogPrint& CLogPrint::global()
{
static CLogPrint logger;
return logger;
}
CLogPrint& CLogPrint::operator<<(const Clog& cl)
{
if (logger) {
switch(cl) {
case Clog::endl:
ss << '\n';
// fallthrough
case Clog::flush:
logger->LogPrintStr(ss.str());
ss.str({});
break;
}
}
return *this;
}

42
src/claimtrie/log.h Normal file

@@ -0,0 +1,42 @@
#ifndef CLAIMTRIE_LOG_H
#define CLAIMTRIE_LOG_H
#include <string>
#include <sstream>
struct ClogBase
{
ClogBase() = default;
virtual ~ClogBase() = default;
virtual void LogPrintStr(const std::string&) = 0;
};
enum struct Clog
{
endl = 0,
flush = 1,
};
struct CLogPrint
{
template <typename T>
CLogPrint& operator<<(const T& a)
{
if (logger)
ss << a;
return *this;
}
CLogPrint& operator<<(const Clog& cl);
void setLogger(ClogBase* log);
static CLogPrint& global();
private:
CLogPrint() = default;
std::stringstream ss;
ClogBase* logger = nullptr;
};
#endif // CLAIMTRIE_LOG_H
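
CLogPrint above only buffers text; nothing is emitted until a ClogBase sink is installed and a Clog::endl or Clog::flush token arrives. A minimal sketch of wiring it to stdout (illustrative, not part of this PR; inside lbrycrdd the sink would forward to the node's own logging instead):

// log_demo.cpp -- installing a CLogPrint sink (sketch)
#include <log.h>
#include <iostream>

struct CoutLogger : public ClogBase
{
    void LogPrintStr(const std::string& s) override { std::cout << s; }
};

int main()
{
    CoutLogger sink;
    CLogPrint::global().setLogger(&sink);   // with no sink installed, operator<< discards its input
    CLogPrint::global() << "claimtrie height: " << 42 << Clog::endl;  // endl appends '\n' and flushes
    CLogPrint::global().setLogger(nullptr); // detach before the sink goes out of scope
    return 0;
}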

93
src/claimtrie/sqlite.h Normal file

@@ -0,0 +1,93 @@
#ifndef SQLITE_H
#define SQLITE_H
#include <sqlite/sqlite3.h>
#include <uints.h>
#include <chrono>
#include <thread>
namespace sqlite
{
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const uint160& val) {
return sqlite3_bind_blob(stmt, inx, val.begin(), int(val.size()), SQLITE_STATIC);
}
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const uint256& val) {
return sqlite3_bind_blob(stmt, inx, val.begin(), int(val.size()), SQLITE_STATIC);
}
inline void store_result_in_db(sqlite3_context* db, const uint160& val) {
sqlite3_result_blob(db, val.begin(), int(val.size()), SQLITE_TRANSIENT);
}
inline void store_result_in_db(sqlite3_context* db, const uint256& val) {
sqlite3_result_blob(db, val.begin(), int(val.size()), SQLITE_TRANSIENT);
}
}
#include <sqlite/hdr/sqlite_modern_cpp.h>
namespace sqlite
{
template<>
struct has_sqlite_type<uint256, SQLITE_BLOB, void> : std::true_type {};
template<>
struct has_sqlite_type<uint160, SQLITE_BLOB, void> : std::true_type {};
inline uint160 get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<uint160>) {
uint160 ret;
auto ptr = sqlite3_column_blob(stmt, inx);
if (!ptr) return ret;
int bytes = sqlite3_column_bytes(stmt, inx);
assert(bytes > 0 && bytes <= int(ret.size()));
std::memcpy(ret.begin(), ptr, bytes);
return ret;
}
inline uint256 get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<uint256>) {
uint256 ret;
// I think we need this, but I lost my specific test case:
// auto type = sqlite3_column_type(stmt, inx);
// if (type == SQLITE_NULL)
// return ret;
//
// if (type == SQLITE_INTEGER) {
// auto integer = sqlite3_column_int64(stmt, inx);
// return uint256(integer);
// }
// assert(type == SQLITE_BLOB);
auto ptr = sqlite3_column_blob(stmt, inx);
if (!ptr) return ret;
int bytes = sqlite3_column_bytes(stmt, inx);
assert(bytes <= ret.size());
std::memcpy(ret.begin(), ptr, bytes);
return ret;
}
inline int commit(database& db, std::size_t attempts = 200)
{
int code = SQLITE_OK;
for (auto i = 0u; i < attempts; ++i) {
try {
db << "COMMIT";
} catch (const sqlite_exception& e) {
code = e.get_code();
if (code == SQLITE_LOCKED || code == SQLITE_BUSY) {
using namespace std::chrono_literals;
std::this_thread::sleep_for(100ms);
continue;
}
return code;
}
return SQLITE_OK;
}
return code;
}
}
#endif // SQLITE_H
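
The commit() helper above retries SQLITE_BUSY/SQLITE_LOCKED up to `attempts` times with a 100 ms sleep between tries, so a writer racing another connection does not immediately fail the whole block. A minimal sketch of the intended call pattern (illustrative, not part of this PR; the include path follows this header's own layout):

// sqlite_demo.cpp -- BEGIN ... commit-with-retry pattern (sketch)
#include <sqlite.h>
#include <iostream>

int main()
{
    sqlite::database db(":memory:");  // sqlite_modern_cpp connection wrapper
    db << "CREATE TABLE node (name TEXT PRIMARY KEY, hash BLOB)";

    db << "BEGIN";
    db << "INSERT INTO node(name) VALUES (?)" << std::string("test");
    int code = sqlite::commit(db);    // retries while the database reports BUSY or LOCKED
    if (code != SQLITE_OK)
        std::cerr << "commit failed with code " << code << std::endl;
    return 0;
}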


@@ -0,0 +1,682 @@
#pragma once
#include <algorithm>
#include <cctype>
#include <string>
#include <functional>
#include <tuple>
#include <memory>
#define MODERN_SQLITE_VERSION 3002008
#include "../sqlite3.h"
#include "sqlite_modern_cpp/type_wrapper.h"
#include "sqlite_modern_cpp/errors.h"
#include "sqlite_modern_cpp/utility/function_traits.h"
#include "sqlite_modern_cpp/utility/uncaught_exceptions.h"
#include "sqlite_modern_cpp/utility/utf16_utf8.h"
namespace sqlite {
class database;
class database_binder;
template<std::size_t> class binder;
typedef std::shared_ptr<sqlite3> connection_type;
template<class T, bool Name = false>
struct index_binding_helper {
index_binding_helper(const index_binding_helper &) = delete;
#if __cplusplus < 201703
index_binding_helper(index_binding_helper &&) = default;
#endif
typename std::conditional<Name, const char *, int>::type index;
T value;
};
template<class T>
auto named_parameter(const char *name, T &&arg) {
return index_binding_helper<decltype(arg), true>{name, std::forward<decltype(arg)>(arg)};
}
template<class T>
auto indexed_parameter(int index, T &&arg) {
return index_binding_helper<decltype(arg)>{index, std::forward<decltype(arg)>(arg)};
}
class row_iterator;
class database_binder {
public:
// database_binder is not copyable
database_binder() = delete;
database_binder(const database_binder& other) = delete;
database_binder& operator=(const database_binder&) = delete;
database_binder(database_binder&& other) :
_db(std::move(other._db)),
_stmt(std::move(other._stmt)),
_inx(other._inx), execution_started(other.execution_started) { }
void execute();
std::string sql() {
#if SQLITE_VERSION_NUMBER >= 3014000
auto sqlite_deleter = [](void *ptr) {sqlite3_free(ptr);};
std::unique_ptr<char, decltype(sqlite_deleter)> str(sqlite3_expanded_sql(_stmt.get()), sqlite_deleter);
return str ? str.get() : original_sql();
#else
return original_sql();
#endif
}
std::string original_sql() {
return sqlite3_sql(_stmt.get());
}
void used(bool state) {
if(!state) {
// We may have to reset first if we haven't done so already:
_next_index();
--_inx;
}
execution_started = state;
}
bool used() const { return execution_started; }
row_iterator begin();
row_iterator end();
private:
std::shared_ptr<sqlite3> _db;
std::unique_ptr<sqlite3_stmt, decltype(&sqlite3_finalize)> _stmt;
utility::UncaughtExceptionDetector _has_uncaught_exception;
int _inx;
bool execution_started = false;
int _next_index() {
if(execution_started && !_inx) {
sqlite3_reset(_stmt.get());
sqlite3_clear_bindings(_stmt.get());
}
return ++_inx;
}
sqlite3_stmt* _prepare(u16str_ref sql) {
return _prepare(utility::utf16_to_utf8(sql));
}
sqlite3_stmt* _prepare(str_ref sql) {
int hresult;
sqlite3_stmt* tmp = nullptr;
const char *remaining;
hresult = sqlite3_prepare_v2(_db.get(), sql.data(), sql.length(), &tmp, &remaining);
if(hresult != SQLITE_OK) errors::throw_sqlite_error(hresult, sql, sqlite3_errmsg(_db.get()));
if(!std::all_of(remaining, sql.data() + sql.size(), [](char ch) {return std::isspace(ch);}))
throw errors::more_statements("Multiple semicolon separated statements are unsupported", sql);
return tmp;
}
template<typename T> friend database_binder& operator<<(database_binder& db, T&&);
template<typename T> friend database_binder& operator<<(database_binder& db, index_binding_helper<T>);
template<typename T> friend database_binder& operator<<(database_binder& db, index_binding_helper<T, true>);
friend void operator++(database_binder& db, int);
public:
database_binder(std::shared_ptr<sqlite3> db, u16str_ref sql):
_db(db),
_stmt(_prepare(sql), sqlite3_finalize),
_inx(0) {
}
database_binder(std::shared_ptr<sqlite3> db, str_ref sql):
_db(db),
_stmt(_prepare(sql), sqlite3_finalize),
_inx(0) {
}
~database_binder() noexcept(false) {
/* Will be executed if no >>op is found, but not if an exception
is in mid flight */
if(!used() && !_has_uncaught_exception && _stmt) {
execute();
}
}
friend class row_iterator;
};
class row_iterator {
public:
class value_type {
public:
value_type(database_binder *_binder): _binder(_binder) {};
template<class T>
typename std::enable_if<is_sqlite_value<T>::value, value_type &>::type operator >>(T &result) {
result = get_col_from_db(_binder->_stmt.get(), next_index++, result_type<T>());
return *this;
}
template<class ...Types>
value_type &operator >>(std::tuple<Types...>& values) {
values = handle_tuple<std::tuple<typename std::decay<Types>::type...>>(std::index_sequence_for<Types...>());
next_index += sizeof...(Types);
return *this;
}
template<class ...Types>
value_type &operator >>(std::tuple<Types...>&& values) {
return *this >> values;
}
template<class ...Types>
operator std::tuple<Types...>() {
std::tuple<Types...> value;
*this >> value;
return value;
}
explicit operator bool() {
return sqlite3_column_count(_binder->_stmt.get()) >= next_index;
}
int& index() { return next_index; }
private:
template<class Tuple, std::size_t ...Index>
Tuple handle_tuple(std::index_sequence<Index...>) {
return Tuple(
get_col_from_db(
_binder->_stmt.get(),
next_index + Index,
result_type<typename std::tuple_element<Index, Tuple>::type>())...);
}
database_binder *_binder;
int next_index = 0;
};
using difference_type = std::ptrdiff_t;
using pointer = value_type*;
using reference = value_type&;
using iterator_category = std::input_iterator_tag;
row_iterator() = default;
explicit row_iterator(database_binder &binder): _binder(&binder) {
_binder->_next_index();
_binder->_inx = 0;
_binder->used(true);
++*this;
}
reference operator*() const { return value;}
pointer operator->() const { return std::addressof(**this); }
row_iterator &operator++() {
switch(int result = sqlite3_step(_binder->_stmt.get())) {
case SQLITE_ROW:
value = {_binder};
break;
case SQLITE_DONE:
_binder = nullptr;
break;
default:
exceptions::throw_sqlite_error(result, _binder->sql(), sqlite3_errmsg(_binder->_db.get()));
}
return *this;
}
friend inline bool operator ==(const row_iterator &a, const row_iterator &b) {
return a._binder == b._binder;
}
friend inline bool operator !=(const row_iterator &a, const row_iterator &b) {
return !(a==b);
}
private:
database_binder *_binder = nullptr;
mutable value_type value{_binder}; // mutable, because `changing` the value is just reading it
};
inline row_iterator database_binder::begin() {
return row_iterator(*this);
}
inline row_iterator database_binder::end() {
return row_iterator();
}
namespace detail {
template<class Callback>
void _extract_single_value(database_binder &binder, Callback call_back) {
auto iter = binder.begin();
if(iter == binder.end())
throw errors::no_rows("no rows to extract: exactly 1 row expected", binder.sql(), SQLITE_DONE);
call_back(*iter);
if(++iter != binder.end())
throw errors::more_rows("not all rows extracted", binder.sql(), SQLITE_ROW);
}
}
inline void database_binder::execute() {
for(auto &&row : *this)
(void)row;
}
namespace detail {
template<class T> using void_t = void;
template<class T, class = void>
struct sqlite_direct_result : std::false_type {};
template<class T>
struct sqlite_direct_result<
T,
void_t<decltype(std::declval<row_iterator::value_type&>() >> std::declval<T&&>())>
> : std::true_type {};
}
template <typename Result>
inline typename std::enable_if<detail::sqlite_direct_result<Result>::value>::type operator>>(database_binder &binder, Result&& value) {
detail::_extract_single_value(binder, [&value] (row_iterator::value_type &row) {
row >> std::forward<Result>(value);
});
}
template <typename Function>
inline typename std::enable_if<!detail::sqlite_direct_result<Function>::value>::type operator>>(database_binder &db_binder, Function&& func) {
using traits = utility::function_traits<Function>;
for(auto &&row : db_binder) {
binder<traits::arity>::run(row, func);
}
}
template <typename Result>
inline decltype(auto) operator>>(database_binder &&binder, Result&& value) {
return binder >> std::forward<Result>(value);
}
namespace sql_function_binder {
template<
typename ContextType,
std::size_t Count,
typename Functions
>
inline void step(
sqlite3_context* db,
int count,
sqlite3_value** vals
);
template<
std::size_t Count,
typename Functions,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) && sizeof...(Values) < Count), void>::type step(
sqlite3_context* db,
int count,
sqlite3_value** vals,
Values&&... values
);
template<
std::size_t Count,
typename Functions,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) == Count), void>::type step(
sqlite3_context* db,
int,
sqlite3_value**,
Values&&... values
);
template<
typename ContextType,
typename Functions
>
inline void final(sqlite3_context* db);
template<
std::size_t Count,
typename Function,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) < Count), void>::type scalar(
sqlite3_context* db,
int count,
sqlite3_value** vals,
Values&&... values
);
template<
std::size_t Count,
typename Function,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) == Count), void>::type scalar(
sqlite3_context* db,
int,
sqlite3_value**,
Values&&... values
);
}
enum class OpenFlags {
READONLY = SQLITE_OPEN_READONLY,
READWRITE = SQLITE_OPEN_READWRITE,
CREATE = SQLITE_OPEN_CREATE,
NOMUTEX = SQLITE_OPEN_NOMUTEX,
FULLMUTEX = SQLITE_OPEN_FULLMUTEX,
SHAREDCACHE = SQLITE_OPEN_SHAREDCACHE,
PRIVATECACH = SQLITE_OPEN_PRIVATECACHE,
URI = SQLITE_OPEN_URI
};
inline OpenFlags operator|(const OpenFlags& a, const OpenFlags& b) {
return static_cast<OpenFlags>(static_cast<int>(a) | static_cast<int>(b));
}
enum class Encoding {
ANY = SQLITE_ANY,
UTF8 = SQLITE_UTF8,
UTF16 = SQLITE_UTF16
};
struct sqlite_config {
OpenFlags flags = OpenFlags::READWRITE | OpenFlags::CREATE;
const char *zVfs = nullptr;
Encoding encoding = Encoding::ANY;
};
class database {
protected:
std::shared_ptr<sqlite3> _db;
public:
database(const std::string &db_name, const sqlite_config &config = {}): _db(nullptr) {
sqlite3* tmp = nullptr;
auto ret = sqlite3_open_v2(db_name.data(), &tmp, static_cast<int>(config.flags), config.zVfs);
_db = std::shared_ptr<sqlite3>(tmp, [=](sqlite3* ptr) { sqlite3_close_v2(ptr); }); // this will close the connection eventually when no longer needed.
if(ret != SQLITE_OK) errors::throw_sqlite_error(_db ? sqlite3_extended_errcode(_db.get()) : ret, {}, sqlite3_errmsg(_db.get()));
sqlite3_extended_result_codes(_db.get(), true);
if(config.encoding == Encoding::UTF16)
*this << R"(PRAGMA encoding = "UTF-16";)";
}
database(const std::u16string &db_name, const sqlite_config &config = {}): database(utility::utf16_to_utf8(db_name), config) {
if (config.encoding == Encoding::ANY)
*this << R"(PRAGMA encoding = "UTF-16";)";
}
database(std::shared_ptr<sqlite3> db):
_db(db) {}
database_binder operator<<(str_ref sql) {
return database_binder(_db, sql);
}
database_binder operator<<(u16str_ref sql) {
return database_binder(_db, sql);
}
connection_type connection() const { return _db; }
sqlite3_int64 last_insert_rowid() const {
return sqlite3_last_insert_rowid(_db.get());
}
int rows_modified() const {
return sqlite3_changes(_db.get());
}
template <typename Function>
void define(const std::string &name, Function&& func, bool deterministic = true) {
typedef utility::function_traits<Function> traits;
auto funcPtr = new auto(std::forward<Function>(func));
if(int result = sqlite3_create_function_v2(
_db.get(), name.data(), traits::arity, SQLITE_UTF8 | (deterministic ? SQLITE_DETERMINISTIC : 0), funcPtr,
sql_function_binder::scalar<traits::arity, typename std::remove_reference<Function>::type>,
nullptr, nullptr, [](void* ptr){
delete static_cast<decltype(funcPtr)>(ptr);
}))
errors::throw_sqlite_error(result, {}, sqlite3_errmsg(_db.get()));
}
template <typename StepFunction, typename FinalFunction>
void define(const std::string &name, StepFunction&& step, FinalFunction&& final, bool deterministic = true) {
typedef utility::function_traits<StepFunction> traits;
using ContextType = typename std::remove_reference<typename traits::template argument<0>>::type;
auto funcPtr = new auto(std::make_pair(std::forward<StepFunction>(step), std::forward<FinalFunction>(final)));
if(int result = sqlite3_create_function_v2(
_db.get(), name.c_str(), traits::arity - 1, SQLITE_UTF8 | (deterministic ? SQLITE_DETERMINISTIC : 0), funcPtr, nullptr,
sql_function_binder::step<ContextType, traits::arity, typename std::remove_reference<decltype(*funcPtr)>::type>,
sql_function_binder::final<ContextType, typename std::remove_reference<decltype(*funcPtr)>::type>,
[](void* ptr){
delete static_cast<decltype(funcPtr)>(ptr);
}))
errors::throw_sqlite_error(result, {}, sqlite3_errmsg(_db.get()));
}
};
template<std::size_t Count>
class binder {
private:
template <
typename Function,
std::size_t Index
>
using nth_argument_type = typename utility::function_traits<
Function
>::template argument<Index>;
public:
// `Boundary` needs to be defaulted to `Count` so that the `run` function
// template is not implicitly instantiated on class template instantiation.
// Look up section 14.7.1 _Implicit instantiation_ of the ISO C++14 Standard
// and the [discussion](https://github.com/aminroosta/sqlite_modern_cpp/issues/8)
// on Github.
template<
typename Function,
typename... Values,
std::size_t Boundary = Count
>
static typename std::enable_if<(sizeof...(Values) < Boundary), void>::type run(
row_iterator::value_type& row,
Function&& function,
Values&&... values
) {
typename std::decay<nth_argument_type<Function, sizeof...(Values)>>::type value;
row >> value;
run<Function>(row, function, std::forward<Values>(values)..., std::move(value));
}
template<
typename Function,
typename... Values,
std::size_t Boundary = Count
>
static typename std::enable_if<(sizeof...(Values) == Boundary), void>::type run(
row_iterator::value_type&,
Function&& function,
Values&&... values
) {
function(std::move(values)...);
}
};
// Some people are lazy, so we have an operator for proper prepared-statement handling.
void inline operator++(database_binder& db, int) { db.execute(); }
template<typename T> database_binder &operator<<(database_binder& db, index_binding_helper<T> val) {
db._next_index(); --db._inx;
int result = bind_col_in_db(db._stmt.get(), val.index, std::forward<T>(val.value));
if(result != SQLITE_OK)
exceptions::throw_sqlite_error(result, db.sql(), sqlite3_errmsg(db._db.get()));
return db;
}
template<typename T> database_binder &operator<<(database_binder& db, index_binding_helper<T, true> val) {
db._next_index(); --db._inx;
int index = sqlite3_bind_parameter_index(db._stmt.get(), val.index);
if(!index)
throw errors::unknown_binding("The given binding name is not valid for this statement", db.sql());
int result = bind_col_in_db(db._stmt.get(), index, std::forward<T>(val.value));
if(result != SQLITE_OK)
exceptions::throw_sqlite_error(result, db.sql(), sqlite3_errmsg(db._db.get()));
return db;
}
template<typename T> database_binder &operator<<(database_binder& db, T&& val) {
int result = bind_col_in_db(db._stmt.get(), db._next_index(), std::forward<T>(val));
if(result != SQLITE_OK)
exceptions::throw_sqlite_error(result, db.sql(), sqlite3_errmsg(db._db.get()));
return db;
}
// Convert the rvalue binder to a reference and call the first op<<; this is needed for the call that creates the binder (be careful of recursion here!)
template<typename T> database_binder operator << (database_binder&& db, const T& val) { db << val; return std::move(db); }
template<typename T, bool Name> database_binder operator << (database_binder&& db, index_binding_helper<T, Name> val) { db << index_binding_helper<T, Name>{val.index, std::forward<T>(val.value)}; return std::move(db); }
namespace sql_function_binder {
template<class T>
struct AggregateCtxt {
T obj;
bool constructed = true;
};
template<
typename ContextType,
std::size_t Count,
typename Functions
>
inline void step(
sqlite3_context* db,
int count,
sqlite3_value** vals
) {
auto ctxt = static_cast<AggregateCtxt<ContextType>*>(sqlite3_aggregate_context(db, sizeof(AggregateCtxt<ContextType>)));
if(!ctxt) return;
try {
if(!ctxt->constructed) new(ctxt) AggregateCtxt<ContextType>();
step<Count, Functions>(db, count, vals, ctxt->obj);
return;
} catch(const sqlite_exception &e) {
sqlite3_result_error_code(db, e.get_code());
sqlite3_result_error(db, e.what(), -1);
} catch(const std::exception &e) {
sqlite3_result_error(db, e.what(), -1);
} catch(...) {
sqlite3_result_error(db, "Unknown error", -1);
}
if(ctxt && ctxt->constructed)
ctxt->~AggregateCtxt();
}
template<
std::size_t Count,
typename Functions,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) && sizeof...(Values) < Count), void>::type step(
sqlite3_context* db,
int count,
sqlite3_value** vals,
Values&&... values
) {
using arg_type = typename std::remove_cv<
typename std::remove_reference<
typename utility::function_traits<
typename Functions::first_type
>::template argument<sizeof...(Values)>
>::type
>::type;
step<Count, Functions>(
db,
count,
vals,
std::forward<Values>(values)...,
get_val_from_db(vals[sizeof...(Values) - 1], result_type<arg_type>()));
}
template<
std::size_t Count,
typename Functions,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) == Count), void>::type step(
sqlite3_context* db,
int,
sqlite3_value**,
Values&&... values
) {
static_cast<Functions*>(sqlite3_user_data(db))->first(std::forward<Values>(values)...);
}
template<
typename ContextType,
typename Functions
>
inline void final(sqlite3_context* db) {
auto ctxt = static_cast<AggregateCtxt<ContextType>*>(sqlite3_aggregate_context(db, sizeof(AggregateCtxt<ContextType>)));
try {
if(!ctxt) return;
if(!ctxt->constructed) new(ctxt) AggregateCtxt<ContextType>();
store_result_in_db(db,
static_cast<Functions*>(sqlite3_user_data(db))->second(ctxt->obj));
} catch(const sqlite_exception &e) {
sqlite3_result_error_code(db, e.get_code());
sqlite3_result_error(db, e.what(), -1);
} catch(const std::exception &e) {
sqlite3_result_error(db, e.what(), -1);
} catch(...) {
sqlite3_result_error(db, "Unknown error", -1);
}
if(ctxt && ctxt->constructed)
ctxt->~AggregateCtxt();
}
template<
std::size_t Count,
typename Function,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) < Count), void>::type scalar(
sqlite3_context* db,
int count,
sqlite3_value** vals,
Values&&... values
) {
using arg_type = typename std::remove_cv<
typename std::remove_reference<
typename utility::function_traits<Function>::template argument<sizeof...(Values)>
>::type
>::type;
scalar<Count, Function>(
db,
count,
vals,
std::forward<Values>(values)...,
get_val_from_db(vals[sizeof...(Values)], result_type<arg_type>()));
}
template<
std::size_t Count,
typename Function,
typename... Values
>
inline typename std::enable_if<(sizeof...(Values) == Count), void>::type scalar(
sqlite3_context* db,
int,
sqlite3_value**,
Values&&... values
) {
try {
store_result_in_db(db,
(*static_cast<Function*>(sqlite3_user_data(db)))(std::forward<Values>(values)...));
} catch(const sqlite_exception &e) {
sqlite3_result_error_code(db, e.get_code());
sqlite3_result_error(db, e.what(), -1);
} catch(const std::exception &e) {
sqlite3_result_error(db, e.what(), -1);
} catch(...) {
sqlite3_result_error(db, "Unknown error", -1);
}
}
}
}
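
A brief usage sketch may help tie the pieces above together: operator<< binds parameters in order, operator++ executes a prepared statement and leaves it reusable, and the two define() overloads register scalar and aggregate SQL functions. Illustrative only; it assumes the sqlite::database class and the extraction operator>> declared earlier in this header, and the include path shown is a placeholder for this repository's actual location.

#include "sqlite_modern_cpp.h"   // placeholder path; use this repository's actual header location

void usage_sketch() {
    sqlite::database db(":memory:");
    db << "create table t (name text, amount integer);";

    auto ps = db << "insert into t (name, amount) values (?, ?);";
    ps << "alpha" << 1;   // operator<< binds each parameter in order
    ps++;                 // operator++ executes the statement; it can then be rebound and reused
    ps << "beta" << 2;
    ps++;

    db.define("add2", [](int a, int b) { return a + b; });   // scalar SQL function
    db.define("my_total",                                    // aggregate: step function + final function
              [](int& ctx, int v) { ctx += v; },
              [](int& ctx) { return ctx; });

    int result = 0;
    db << "select my_total(add2(amount, 1)) from t;" >> result;   // (1+1) + (2+1) = 5
}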

View file

@ -0,0 +1,70 @@
#pragma once
#include <string>
#include <stdexcept>
#include "../../sqlite3.h"
namespace sqlite {
class sqlite_exception: public std::runtime_error {
public:
sqlite_exception(const char* msg, str_ref sql, int code = -1): runtime_error(msg + std::string(": ") + sql), code(code), sql(sql) {}
sqlite_exception(int code, str_ref sql, const char *msg = nullptr): runtime_error((msg ? msg : sqlite3_errstr(code)) + std::string(": ") + sql), code(code), sql(sql) {}
int get_code() const {return code & 0xFF;}
int get_extended_code() const {return code;}
std::string get_sql() const {return sql;}
const char *errstr() const {return code == -1 ? "Unknown error" : sqlite3_errstr(code);}
private:
int code;
std::string sql;
};
namespace errors {
//One more or less trivial derived error class for each SQLITE error.
//Note the following are not errors so have no classes:
//SQLITE_OK, SQLITE_NOTICE, SQLITE_WARNING, SQLITE_ROW, SQLITE_DONE
//
//Note these names are exact matches to the names of the SQLITE error codes; a catch sketch follows at the end of this header.
#define SQLITE_MODERN_CPP_ERROR_CODE(NAME,name,derived) \
class name: public sqlite_exception { using sqlite_exception::sqlite_exception; };\
derived
#define SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(BASE,SUB,base,sub) \
class base ## _ ## sub: public base { using base::base; };
#include "lists/error_codes.h"
#undef SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED
#undef SQLITE_MODERN_CPP_ERROR_CODE
//Some additional errors are here for the C++ interface
class more_rows: public sqlite_exception { using sqlite_exception::sqlite_exception; };
class no_rows: public sqlite_exception { using sqlite_exception::sqlite_exception; };
class more_statements: public sqlite_exception { using sqlite_exception::sqlite_exception; }; // Prepared statements can only contain one statement
class invalid_utf16: public sqlite_exception { using sqlite_exception::sqlite_exception; };
class unknown_binding: public sqlite_exception { using sqlite_exception::sqlite_exception; };
static void throw_sqlite_error(const int& error_code, str_ref sql = "", const char *errmsg = nullptr) {
switch(error_code & 0xFF) {
#define SQLITE_MODERN_CPP_ERROR_CODE(NAME,name,derived) \
case SQLITE_ ## NAME: switch(error_code) { \
derived \
case SQLITE_ ## NAME: \
default: throw name(error_code, sql); \
}
#if SQLITE_VERSION_NUMBER < 3010000
#define SQLITE_IOERR_VNODE (SQLITE_IOERR | (27<<8))
#define SQLITE_IOERR_AUTH (SQLITE_IOERR | (28<<8))
#define SQLITE_AUTH_USER (SQLITE_AUTH | (1<<8))
#endif
#define SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(BASE,SUB,base,sub) \
case SQLITE_ ## BASE ## _ ## SUB: throw base ## _ ## sub(error_code, sql, errmsg);
#include "lists/error_codes.h"
#undef SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED
#undef SQLITE_MODERN_CPP_ERROR_CODE
default: throw sqlite_exception(error_code, sql, errmsg);
}
}
}
namespace exceptions = errors;
}
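
A hedged sketch of how the generated classes are meant to be caught: each error maps to a class named after its SQLITE_* code, and the extended-code classes derive from their base-code class, so catching the base also catches its extended variants. Assumes the database class from the main header and <iostream>.

void catch_sketch(sqlite::database& db) {
    db << "create table if not exists k (id integer primary key);";
    db << "insert into k (id) values (1);";
    try {
        db << "insert into k (id) values (1);";   // violates the primary key
    } catch (const sqlite::errors::constraint& e) {
        // constraint_primarykey derives from constraint, so this handler fires
        // whether or not extended result codes are enabled
        std::cerr << e.get_code() << ": " << e.what() << " in \"" << e.get_sql() << "\"\n";
    } catch (const sqlite::sqlite_exception& e) {
        std::cerr << "other sqlite error: " << e.what() << '\n';
    }
}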

View file

@ -0,0 +1,88 @@
SQLITE_MODERN_CPP_ERROR_CODE(ERROR,error,)
SQLITE_MODERN_CPP_ERROR_CODE(INTERNAL,internal,)
SQLITE_MODERN_CPP_ERROR_CODE(PERM,perm,)
SQLITE_MODERN_CPP_ERROR_CODE(ABORT,abort,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(ABORT,ROLLBACK,abort,rollback)
)
SQLITE_MODERN_CPP_ERROR_CODE(BUSY,busy,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(BUSY,RECOVERY,busy,recovery)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(BUSY,SNAPSHOT,busy,snapshot)
)
SQLITE_MODERN_CPP_ERROR_CODE(LOCKED,locked,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(LOCKED,SHAREDCACHE,locked,sharedcache)
)
SQLITE_MODERN_CPP_ERROR_CODE(NOMEM,nomem,)
SQLITE_MODERN_CPP_ERROR_CODE(READONLY,readonly,)
SQLITE_MODERN_CPP_ERROR_CODE(INTERRUPT,interrupt,)
SQLITE_MODERN_CPP_ERROR_CODE(IOERR,ioerr,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,READ,ioerr,read)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SHORT_READ,ioerr,short_read)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,WRITE,ioerr,write)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,FSYNC,ioerr,fsync)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,DIR_FSYNC,ioerr,dir_fsync)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,TRUNCATE,ioerr,truncate)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,FSTAT,ioerr,fstat)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,UNLOCK,ioerr,unlock)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,RDLOCK,ioerr,rdlock)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,DELETE,ioerr,delete)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,BLOCKED,ioerr,blocked)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,NOMEM,ioerr,nomem)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,ACCESS,ioerr,access)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,CHECKRESERVEDLOCK,ioerr,checkreservedlock)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,LOCK,ioerr,lock)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,CLOSE,ioerr,close)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,DIR_CLOSE,ioerr,dir_close)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SHMOPEN,ioerr,shmopen)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SHMSIZE,ioerr,shmsize)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SHMLOCK,ioerr,shmlock)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SHMMAP,ioerr,shmmap)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,SEEK,ioerr,seek)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,DELETE_NOENT,ioerr,delete_noent)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,MMAP,ioerr,mmap)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,GETTEMPPATH,ioerr,gettemppath)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,CONVPATH,ioerr,convpath)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,VNODE,ioerr,vnode)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(IOERR,AUTH,ioerr,auth)
)
SQLITE_MODERN_CPP_ERROR_CODE(CORRUPT,corrupt,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CORRUPT,VTAB,corrupt,vtab)
)
SQLITE_MODERN_CPP_ERROR_CODE(NOTFOUND,notfound,)
SQLITE_MODERN_CPP_ERROR_CODE(FULL,full,)
SQLITE_MODERN_CPP_ERROR_CODE(CANTOPEN,cantopen,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CANTOPEN,NOTEMPDIR,cantopen,notempdir)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CANTOPEN,ISDIR,cantopen,isdir)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CANTOPEN,FULLPATH,cantopen,fullpath)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CANTOPEN,CONVPATH,cantopen,convpath)
)
SQLITE_MODERN_CPP_ERROR_CODE(PROTOCOL,protocol,)
SQLITE_MODERN_CPP_ERROR_CODE(EMPTY,empty,)
SQLITE_MODERN_CPP_ERROR_CODE(SCHEMA,schema,)
SQLITE_MODERN_CPP_ERROR_CODE(TOOBIG,toobig,)
SQLITE_MODERN_CPP_ERROR_CODE(CONSTRAINT,constraint,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,CHECK,constraint,check)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,COMMITHOOK,constraint,commithook)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,FOREIGNKEY,constraint,foreignkey)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,FUNCTION,constraint,function)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,NOTNULL,constraint,notnull)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,PRIMARYKEY,constraint,primarykey)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,TRIGGER,constraint,trigger)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,UNIQUE,constraint,unique)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,VTAB,constraint,vtab)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(CONSTRAINT,ROWID,constraint,rowid)
)
SQLITE_MODERN_CPP_ERROR_CODE(MISMATCH,mismatch,)
SQLITE_MODERN_CPP_ERROR_CODE(MISUSE,misuse,)
SQLITE_MODERN_CPP_ERROR_CODE(NOLFS,nolfs,)
SQLITE_MODERN_CPP_ERROR_CODE(AUTH,auth,
)
SQLITE_MODERN_CPP_ERROR_CODE(FORMAT,format,)
SQLITE_MODERN_CPP_ERROR_CODE(RANGE,range,)
SQLITE_MODERN_CPP_ERROR_CODE(NOTADB,notadb,)
SQLITE_MODERN_CPP_ERROR_CODE(NOTICE,notice,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(NOTICE,RECOVER_WAL,notice,recover_wal)
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(NOTICE,RECOVER_ROLLBACK,notice,recover_rollback)
)
SQLITE_MODERN_CPP_ERROR_CODE(WARNING,warning,
SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(WARNING,AUTOINDEX,warning,autoindex)
)

View file

@ -0,0 +1,101 @@
#include "errors.h"
#include <sqlite/sqlite3.h>
#include <utility>
#include <tuple>
#include <type_traits>
namespace sqlite {
namespace detail {
template<class>
using void_t = void;
template<class T, class = void>
struct is_callable : std::false_type {};
template<class Functor, class ...Arguments>
struct is_callable<Functor(Arguments...), void_t<decltype(std::declval<Functor>()(std::declval<Arguments>()...))>> : std::true_type {};
template<class Functor, class ...Functors>
class FunctorOverload: public Functor, public FunctorOverload<Functors...> {
public:
template<class Functor1, class ...Remaining>
FunctorOverload(Functor1 &&functor, Remaining &&... remaining):
Functor(std::forward<Functor1>(functor)),
FunctorOverload<Functors...>(std::forward<Remaining>(remaining)...) {}
using Functor::operator();
using FunctorOverload<Functors...>::operator();
};
template<class Functor>
class FunctorOverload<Functor>: public Functor {
public:
template<class Functor1>
FunctorOverload(Functor1 &&functor):
Functor(std::forward<Functor1>(functor)) {}
using Functor::operator();
};
template<class Functor>
class WrapIntoFunctor: public Functor {
public:
template<class Functor1>
WrapIntoFunctor(Functor1 &&functor):
Functor(std::forward<Functor1>(functor)) {}
using Functor::operator();
};
template<class ReturnType, class ...Arguments>
class WrapIntoFunctor<ReturnType(*)(Arguments...)> {
ReturnType(*ptr)(Arguments...);
public:
WrapIntoFunctor(ReturnType(*ptr)(Arguments...)): ptr(ptr) {}
ReturnType operator()(Arguments... arguments) { return (*ptr)(std::forward<Arguments>(arguments)...); }
};
inline void store_error_log_data_pointer(std::shared_ptr<void> ptr) {
static std::shared_ptr<void> stored;
stored = std::move(ptr);
}
template<class T>
std::shared_ptr<typename std::decay<T>::type> make_shared_inferred(T &&t) {
return std::make_shared<typename std::decay<T>::type>(std::forward<T>(t));
}
}
template<class Handler>
typename std::enable_if<!detail::is_callable<Handler(const sqlite_exception&)>::value>::type
error_log(Handler &&handler);
template<class Handler>
typename std::enable_if<detail::is_callable<Handler(const sqlite_exception&)>::value>::type
error_log(Handler &&handler);
template<class ...Handler>
typename std::enable_if<sizeof...(Handler)>=2>::type
error_log(Handler &&...handler) {
return error_log(detail::FunctorOverload<detail::WrapIntoFunctor<typename std::decay<Handler>::type>...>(std::forward<Handler>(handler)...));
}
template<class Handler>
typename std::enable_if<!detail::is_callable<Handler(const sqlite_exception&)>::value>::type
error_log(Handler &&handler) {
return error_log(std::forward<Handler>(handler), [](const sqlite_exception&) {});
}
template<class Handler>
typename std::enable_if<detail::is_callable<Handler(const sqlite_exception&)>::value>::type
error_log(Handler &&handler) {
auto ptr = detail::make_shared_inferred([handler = std::forward<Handler>(handler)](int error_code, const char *errstr) mutable {
switch(error_code & 0xFF) {
#define SQLITE_MODERN_CPP_ERROR_CODE(NAME,name,derived) \
case SQLITE_ ## NAME: switch(error_code) { \
derived \
default: handler(errors::name(errstr, "", error_code)); \
};break;
#define SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED(BASE,SUB,base,sub) \
case SQLITE_ ## BASE ## _ ## SUB: \
handler(errors::base ## _ ## sub(errstr, "", error_code)); \
break;
#include "lists/error_codes.h"
#undef SQLITE_MODERN_CPP_ERROR_CODE_EXTENDED
#undef SQLITE_MODERN_CPP_ERROR_CODE
default: handler(sqlite_exception(errstr, "", error_code)); \
}
});
sqlite3_config(SQLITE_CONFIG_LOG, static_cast<void(*)(void*,int,const char*)>([](void *functor, int error_code, const char *errstr) {
(*static_cast<decltype(ptr.get())>(functor))(error_code, errstr);
}), ptr.get());
detail::store_error_log_data_pointer(std::move(ptr));
}
}
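
A possible usage sketch for error_log: handlers are matched by exception type, so a handler for a specific errors::* class takes precedence over a generic sqlite_exception handler. Because this installs SQLITE_CONFIG_LOG, it should run before any database is opened. Illustrative only; assumes <iostream>.

void install_log_hook() {
    sqlite::error_log(
        [](const sqlite::errors::misuse& e) {
            std::cerr << "sqlite misuse: " << e.what() << '\n';                         // specific error code
        },
        [](const sqlite::sqlite_exception& e) {
            std::cerr << "sqlite log (" << e.get_code() << "): " << e.what() << '\n';   // everything else
        });
    // open databases only after the hook is installed
}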

View file

@ -0,0 +1,44 @@
#pragma once
#ifndef SQLITE_HAS_CODEC
#define SQLITE_HAS_CODEC
#endif
#include "../sqlite_modern_cpp.h"
namespace sqlite {
struct sqlcipher_config : public sqlite_config {
std::string key;
};
class sqlcipher_database : public database {
public:
sqlcipher_database(std::string db, const sqlcipher_config &config): database(db, config) {
set_key(config.key);
}
sqlcipher_database(std::u16string db, const sqlcipher_config &config): database(db, config) {
set_key(config.key);
}
void set_key(const std::string &key) {
if(auto ret = sqlite3_key(_db.get(), key.data(), key.size()))
errors::throw_sqlite_error(ret);
}
void set_key(const std::string &key, const std::string &db_name) {
if(auto ret = sqlite3_key_v2(_db.get(), db_name.c_str(), key.data(), key.size()))
errors::throw_sqlite_error(ret);
}
void rekey(const std::string &new_key) {
if(auto ret = sqlite3_rekey(_db.get(), new_key.data(), new_key.size()))
errors::throw_sqlite_error(ret);
}
void rekey(const std::string &new_key, const std::string &db_name) {
if(auto ret = sqlite3_rekey_v2(_db.get(), db_name.c_str(), new_key.data(), new_key.size()))
errors::throw_sqlite_error(ret);
}
};
}
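
Illustrative use of the wrapper above; it is only meaningful when the library is built with a codec (SQLITE_HAS_CODEC, e.g. SQLCipher), and the file name and passphrases below are placeholders.

void sqlcipher_sketch() {
    sqlite::sqlcipher_config config;
    config.key = "correct horse battery staple";                // passed to sqlite3_key()
    sqlite::sqlcipher_database db("encrypted.sqlite", config);  // placeholder file name

    db << "create table if not exists secret (v text);";
    db.rekey("a new passphrase");                               // re-encrypts the database in place
}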

View file

@ -0,0 +1,412 @@
#pragma once
#include <type_traits>
#include <string>
#include <memory>
#include <vector>
#ifdef __has_include
#if __cplusplus >= 201703 && __has_include(<string_view>)
#define MODERN_SQLITE_STRINGVIEW_SUPPORT
#endif
#endif
#ifdef __has_include
#if __cplusplus > 201402 && __has_include(<optional>)
#define MODERN_SQLITE_STD_OPTIONAL_SUPPORT
#endif
#endif
#ifdef __has_include
#if __cplusplus > 201402 && __has_include(<variant>)
#define MODERN_SQLITE_STD_VARIANT_SUPPORT
#endif
#endif
#ifdef MODERN_SQLITE_STD_OPTIONAL_SUPPORT
#include <optional>
#endif
#ifdef MODERN_SQLITE_STD_VARIANT_SUPPORT
#include <variant>
#endif
#ifdef MODERN_SQLITE_STRINGVIEW_SUPPORT
#include <string_view>
namespace sqlite
{
typedef const std::string_view str_ref;
typedef const std::u16string_view u16str_ref;
}
#else
namespace sqlite
{
typedef const std::string& str_ref;
typedef const std::u16string& u16str_ref;
}
#endif
#include "../../sqlite3.h"
#include "errors.h"
namespace sqlite {
template<class T, int Type, class = void>
struct has_sqlite_type : std::false_type {};
template<class T>
using is_sqlite_value = std::integral_constant<bool, false
|| has_sqlite_type<T, SQLITE_NULL>::value
|| has_sqlite_type<T, SQLITE_INTEGER>::value
|| has_sqlite_type<T, SQLITE_FLOAT>::value
|| has_sqlite_type<T, SQLITE_TEXT>::value
|| has_sqlite_type<T, SQLITE_BLOB>::value
>;
template<class T, int Type>
struct has_sqlite_type<T&, Type> : has_sqlite_type<T, Type> {};
template<class T, int Type>
struct has_sqlite_type<const T, Type> : has_sqlite_type<T, Type> {};
template<class T, int Type>
struct has_sqlite_type<volatile T, Type> : has_sqlite_type<T, Type> {};
template<class T>
struct result_type {
using type = T;
constexpr result_type() = default;
template<class U, class = typename std::enable_if<std::is_assignable<U, T>::value>>
constexpr result_type(result_type<U>) { }
};
// int
template<>
struct has_sqlite_type<int, SQLITE_INTEGER> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const int& val) {
return sqlite3_bind_int(stmt, inx, val);
}
inline void store_result_in_db(sqlite3_context* db, const int& val) {
sqlite3_result_int(db, val);
}
inline int get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<int>) {
return sqlite3_column_type(stmt, inx) == SQLITE_NULL ? 0 :
sqlite3_column_int(stmt, inx);
}
inline int get_val_from_db(sqlite3_value *value, result_type<int>) {
return sqlite3_value_type(value) == SQLITE_NULL ? 0 :
sqlite3_value_int(value);
}
// sqlite_int64
template<>
struct has_sqlite_type<sqlite_int64, SQLITE_INTEGER, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const sqlite_int64& val) {
return sqlite3_bind_int64(stmt, inx, val);
}
inline void store_result_in_db(sqlite3_context* db, const sqlite_int64& val) {
sqlite3_result_int64(db, val);
}
inline sqlite_int64 get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<sqlite_int64 >) {
return sqlite3_column_type(stmt, inx) == SQLITE_NULL ? 0 :
sqlite3_column_int64(stmt, inx);
}
inline sqlite3_int64 get_val_from_db(sqlite3_value *value, result_type<sqlite3_int64>) {
return sqlite3_value_type(value) == SQLITE_NULL ? 0 :
sqlite3_value_int64(value);
}
// float
template<>
struct has_sqlite_type<float, SQLITE_FLOAT, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const float& val) {
return sqlite3_bind_double(stmt, inx, double(val));
}
inline void store_result_in_db(sqlite3_context* db, const float& val) {
sqlite3_result_double(db, val);
}
inline float get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<float>) {
return sqlite3_column_type(stmt, inx) == SQLITE_NULL ? 0 :
sqlite3_column_double(stmt, inx);
}
inline float get_val_from_db(sqlite3_value *value, result_type<float>) {
return sqlite3_value_type(value) == SQLITE_NULL ? 0 :
sqlite3_value_double(value);
}
// double
template<>
struct has_sqlite_type<double, SQLITE_FLOAT, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const double& val) {
return sqlite3_bind_double(stmt, inx, val);
}
inline void store_result_in_db(sqlite3_context* db, const double& val) {
sqlite3_result_double(db, val);
}
inline double get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<double>) {
return sqlite3_column_type(stmt, inx) == SQLITE_NULL ? 0 :
sqlite3_column_double(stmt, inx);
}
inline double get_val_from_db(sqlite3_value *value, result_type<double>) {
return sqlite3_value_type(value) == SQLITE_NULL ? 0 :
sqlite3_value_double(value);
}
/* for nullptr support */
template<>
struct has_sqlite_type<std::nullptr_t, SQLITE_NULL, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, std::nullptr_t) {
return sqlite3_bind_null(stmt, inx);
}
inline void store_result_in_db(sqlite3_context* db, std::nullptr_t) {
sqlite3_result_null(db);
}
#ifdef MODERN_SQLITE_STD_VARIANT_SUPPORT
template<>
struct has_sqlite_type<std::monostate, SQLITE_NULL, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, std::monostate) {
return sqlite3_bind_null(stmt, inx);
}
inline void store_result_in_db(sqlite3_context* db, std::monostate) {
sqlite3_result_null(db);
}
inline std::monostate get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::monostate>) {
return std::monostate();
}
inline std::monostate get_val_from_db(sqlite3_value *value, result_type<std::monostate>) {
return std::monostate();
}
#endif
// str_ref
template<>
struct has_sqlite_type<std::string, SQLITE_BLOB, void> : std::true_type {}; //
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, str_ref val) {
return sqlite3_bind_blob(stmt, inx, val.data(), val.length(), SQLITE_TRANSIENT);
}
// Convert char* to string_view to trigger op<<(..., const str_ref )
template<std::size_t N> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const char(&STR)[N]) {
return sqlite3_bind_blob(stmt, inx, &STR[0], N-1, SQLITE_TRANSIENT);
}
inline std::string get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::string>) {
auto type = sqlite3_column_type(stmt, inx);
switch (type) {
case SQLITE_INTEGER:
return std::to_string(sqlite3_column_int64(stmt, inx));
case SQLITE_FLOAT:
return std::to_string(sqlite3_column_double(stmt, inx));
case SQLITE_BLOB: // It's important to support both text and blob data as txdb has some text and trie has some blob
return std::string(reinterpret_cast<char const *>(sqlite3_column_blob(stmt, inx)), sqlite3_column_bytes(stmt, inx));
case SQLITE3_TEXT:
return std::string(reinterpret_cast<char const *>(sqlite3_column_text(stmt, inx)), sqlite3_column_bytes(stmt, inx));
}
return std::string(); // NULL
}
inline std::string get_val_from_db(sqlite3_value *value, result_type<std::string >) {
auto type = sqlite3_value_type(value);
switch (type) {
case SQLITE_INTEGER:
return std::to_string(sqlite3_value_int64(value));
case SQLITE_FLOAT:
return std::to_string(sqlite3_value_double(value));
case SQLITE_BLOB:
return std::string(reinterpret_cast<char const *>(sqlite3_value_blob(value)), sqlite3_value_bytes(value));
case SQLITE3_TEXT:
return std::string(reinterpret_cast<char const *>(sqlite3_value_text(value)), sqlite3_value_bytes(value));
}
return std::string(); // NULL
}
inline void store_result_in_db(sqlite3_context* db, str_ref val) {
sqlite3_result_blob(db, val.data(), val.length(), SQLITE_TRANSIENT);
}
// u16str_ref
template<>
struct has_sqlite_type<std::u16string, SQLITE3_TEXT, void> : std::true_type {};
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, u16str_ref val) {
return sqlite3_bind_text16(stmt, inx, val.data(), sizeof(char16_t) * val.length(), SQLITE_TRANSIENT);
}
// Convert char* to string_view to trigger op<<(..., const str_ref )
template<std::size_t N> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const char16_t(&STR)[N]) {
return sqlite3_bind_text16(stmt, inx, &STR[0], sizeof(char16_t) * (N-1), SQLITE_TRANSIENT);
}
inline std::u16string get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::u16string>) {
return sqlite3_column_type(stmt, inx) == SQLITE_NULL ? std::u16string() :
std::u16string(reinterpret_cast<char16_t const *>(sqlite3_column_text16(stmt, inx)), sqlite3_column_bytes16(stmt, inx));
}
inline std::u16string get_val_from_db(sqlite3_value *value, result_type<std::u16string>) {
return sqlite3_value_type(value) == SQLITE_NULL ? std::u16string() :
std::u16string(reinterpret_cast<char16_t const *>(sqlite3_value_text16(value)), sqlite3_value_bytes16(value));
}
inline void store_result_in_db(sqlite3_context* db, u16str_ref val) {
sqlite3_result_text16(db, val.data(), sizeof(char16_t) * val.length(), SQLITE_TRANSIENT);
}
// Other integer types
template<class Integral>
struct has_sqlite_type<Integral, SQLITE_INTEGER, typename std::enable_if<std::is_integral<Integral>::value>::type> : std::true_type {};
template<class Integral, class = typename std::enable_if<std::is_integral<Integral>::value>::type>
inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const Integral& val) {
return bind_col_in_db(stmt, inx, static_cast<sqlite3_int64>(val));
}
template<class Integral, class = typename std::enable_if<std::is_integral<Integral>::value>::type>
inline void store_result_in_db(sqlite3_context* db, const Integral& val) {
store_result_in_db(db, static_cast<sqlite3_int64>(val));
}
template<class Integral, class = typename std::enable_if<std::is_integral<Integral>::value>::type>
inline Integral get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<Integral>) {
return get_col_from_db(stmt, inx, result_type<sqlite3_int64>());
}
template<class Integral, class = typename std::enable_if<std::is_integral<Integral>::value>::type>
inline Integral get_val_from_db(sqlite3_value *value, result_type<Integral>) {
return get_val_from_db(value, result_type<sqlite3_int64>());
}
// vector<T, A>
template<typename T, typename A>
struct has_sqlite_type<std::vector<T, A>, SQLITE_BLOB, void> : std::true_type {};
template<typename T, typename A> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const std::vector<T, A>& vec) {
void const* buf = reinterpret_cast<void const *>(vec.data());
int bytes = vec.size() * sizeof(T);
return sqlite3_bind_blob(stmt, inx, buf, bytes, SQLITE_TRANSIENT);
}
template<typename T, typename A> inline void store_result_in_db(sqlite3_context* db, const std::vector<T, A>& vec) {
void const* buf = reinterpret_cast<void const *>(vec.data());
int bytes = vec.size() * sizeof(T);
sqlite3_result_blob(db, buf, bytes, SQLITE_TRANSIENT);
}
template<typename T, typename A> inline std::vector<T, A> get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::vector<T, A>>) {
if(sqlite3_column_type(stmt, inx) == SQLITE_NULL) {
return {};
}
int bytes = sqlite3_column_bytes(stmt, inx);
T const* buf = reinterpret_cast<T const *>(sqlite3_column_blob(stmt, inx));
return std::vector<T, A>(buf, buf + bytes/sizeof(T));
}
template<typename T, typename A> inline std::vector<T, A> get_val_from_db(sqlite3_value *value, result_type<std::vector<T, A>>) {
if(sqlite3_value_type(value) == SQLITE_NULL) {
return {};
}
int bytes = sqlite3_value_bytes(value);
T const* buf = reinterpret_cast<T const *>(sqlite3_value_blob(value));
return std::vector<T, A>(buf, buf + bytes/sizeof(T));
}
/* for unique_ptr<T> support */
template<typename T, int Type>
struct has_sqlite_type<std::unique_ptr<T>, Type, void> : has_sqlite_type<T, Type> {};
template<typename T>
struct has_sqlite_type<std::unique_ptr<T>, SQLITE_NULL, void> : std::true_type {};
template<typename T> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const std::unique_ptr<T>& val) {
return val ? bind_col_in_db(stmt, inx, *val) : bind_col_in_db(stmt, inx, nullptr);
}
template<typename T> inline std::unique_ptr<T> get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::unique_ptr<T>>) {
if(sqlite3_column_type(stmt, inx) == SQLITE_NULL) {
return nullptr;
}
return std::make_unique<T>(get_col_from_db(stmt, inx, result_type<T>()));
}
template<typename T> inline std::unique_ptr<T> get_val_from_db(sqlite3_value *value, result_type<std::unique_ptr<T>>) {
if(sqlite3_value_type(value) == SQLITE_NULL) {
return nullptr;
}
return std::make_unique<T>(get_val_from_db(value, result_type<T>()));
}
// std::optional support for NULL values
#ifdef MODERN_SQLITE_STD_OPTIONAL_SUPPORT
template<class T>
using optional = std::optional<T>;
template<typename T, int Type>
struct has_sqlite_type<optional<T>, Type, void> : has_sqlite_type<T, Type> {};
template<typename T>
struct has_sqlite_type<optional<T>, SQLITE_NULL, void> : std::true_type {};
template <typename OptionalT> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const optional<OptionalT>& val) {
return val ? bind_col_in_db(stmt, inx, *val) : bind_col_in_db(stmt, inx, nullptr);
}
template <typename OptionalT> inline void store_result_in_db(sqlite3_context* db, const optional<OptionalT>& val) {
if(val)
store_result_in_db(db, *val);
else
sqlite3_result_null(db);
}
template <typename OptionalT> inline optional<OptionalT> get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<optional<OptionalT>>) {
if(sqlite3_column_type(stmt, inx) == SQLITE_NULL) {
return std::nullopt;
}
return std::make_optional(get_col_from_db(stmt, inx, result_type<OptionalT>()));
}
template <typename OptionalT> inline optional<OptionalT> get_val_from_db(sqlite3_value *value, result_type<optional<OptionalT>>) {
if(sqlite3_value_type(value) == SQLITE_NULL) {
return std::nullopt;
}
return std::make_optional(get_val_from_db(value, result_type<OptionalT>()));
}
#endif
#ifdef MODERN_SQLITE_STD_VARIANT_SUPPORT
namespace detail {
template<class T, class U>
struct tag_trait : U { using tag = T; };
}
template<int Type, class ...Options>
struct has_sqlite_type<std::variant<Options...>, Type, void> : std::disjunction<detail::tag_trait<Options, has_sqlite_type<Options, Type>>...> {};
namespace detail {
template<int Type, typename ...Options, typename Callback, typename first_compatible = has_sqlite_type<std::variant<Options...>, Type>>
inline std::variant<Options...> variant_select_type(Callback &&callback) {
if constexpr(first_compatible::value)
return callback(result_type<typename first_compatible::tag>());
else
throw errors::mismatch("The value is unsupported by this variant.", "", SQLITE_MISMATCH);
}
template<typename ...Options, typename Callback> inline decltype(auto) variant_select(int type, Callback &&callback) {
switch(type) {
case SQLITE_NULL:
return variant_select_type<SQLITE_NULL, Options...>(std::forward<Callback>(callback));
case SQLITE_INTEGER:
return variant_select_type<SQLITE_INTEGER, Options...>(std::forward<Callback>(callback));
case SQLITE_FLOAT:
return variant_select_type<SQLITE_FLOAT, Options...>(std::forward<Callback>(callback));
case SQLITE_TEXT:
return variant_select_type<SQLITE_TEXT, Options...>(std::forward<Callback>(callback));
case SQLITE_BLOB:
return variant_select_type<SQLITE_BLOB, Options...>(std::forward<Callback>(callback));
}
#ifdef _MSC_VER
__assume(false);
#else
__builtin_unreachable();
#endif
}
}
template <typename ...Args> inline int bind_col_in_db(sqlite3_stmt* stmt, int inx, const std::variant<Args...>& val) {
return std::visit([&](auto &&opt) {return bind_col_in_db(stmt, inx, std::forward<decltype(opt)>(opt));}, val);
}
template <typename ...Args> inline void store_result_in_db(sqlite3_context* db, const std::variant<Args...>& val) {
std::visit([&](auto &&opt) {store_result_in_db(db, std::forward<decltype(opt)>(opt));}, val);
}
template <typename ...Args> inline std::variant<Args...> get_col_from_db(sqlite3_stmt* stmt, int inx, result_type<std::variant<Args...>>) {
return detail::variant_select<Args...>(sqlite3_column_type(stmt, inx), [&](auto v) {
return std::variant<Args...>(std::in_place_type<typename decltype(v)::type>, get_col_from_db(stmt, inx, v));
});
}
template <typename ...Args> inline std::variant<Args...> get_val_from_db(sqlite3_value *value, result_type<std::variant<Args...>>) {
return detail::variant_select<Args...>(sqlite3_value_type(value), [&](auto v) {
return std::variant<Args...>(std::in_place_type<typename decltype(v)::type>, get_val_from_db(value, v));
});
}
#endif
}
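
One way to exercise the NULL round-trip support above is through std::unique_ptr (or std::optional where available): binding nullptr stores SQL NULL, and extracting a NULL column yields an empty pointer. A sketch, assuming the database class and operator>> from the main header plus <memory>, <string> and <iostream>.

void null_sketch() {
    sqlite::database db(":memory:");
    db << "create table t (id integer, note text);";
    db << "insert into t (id, note) values (?, ?);" << 1 << nullptr;   // binds SQL NULL

    std::unique_ptr<std::string> note;
    db << "select note from t where id = ?;" << 1 >> note;             // NULL column -> empty pointer
    if (!note)
        std::cout << "note is NULL\n";
}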

View file

@ -0,0 +1,56 @@
#pragma once
#include <tuple>
#include <type_traits>
namespace sqlite {
namespace utility {
template<typename> struct function_traits;
template <typename Function>
struct function_traits : public function_traits<
decltype(&std::remove_reference<Function>::type::operator())
> { };
template <
typename ClassType,
typename ReturnType,
typename... Arguments
>
struct function_traits<
ReturnType(ClassType::*)(Arguments...) const
> : function_traits<ReturnType(*)(Arguments...)> { };
/* support the non-const operator ()
* this will work with user defined functors */
template <
typename ClassType,
typename ReturnType,
typename... Arguments
>
struct function_traits<
ReturnType(ClassType::*)(Arguments...)
> : function_traits<ReturnType(*)(Arguments...)> { };
template <
typename ReturnType,
typename... Arguments
>
struct function_traits<
ReturnType(*)(Arguments...)
> {
typedef ReturnType result_type;
using argument_tuple = std::tuple<Arguments...>;
template <std::size_t Index>
using argument = typename std::tuple_element<
Index,
argument_tuple
>::type;
static const std::size_t arity = sizeof...(Arguments);
};
}
}
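
These traits drive the arity and argument-type deduction used by database::define() and the row binder. A small illustrative check, assuming <string> is also included:

auto example_fn = [](int id, const std::string& name) { return name.size() + static_cast<std::size_t>(id); };
using example_traits = sqlite::utility::function_traits<decltype(example_fn)>;

static_assert(example_traits::arity == 2, "two call arguments");
static_assert(std::is_same<example_traits::argument<0>, int>::value, "first argument is int");
static_assert(std::is_same<example_traits::result_type, std::size_t>::value, "returns size_t");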

View file

@ -0,0 +1,41 @@
#pragma once
#include <cassert>
#include <exception>
#include <iostream>
// Consider that std::uncaught_exceptions is available if explicitly indicated
// by the standard library, if the compiler advertises full C++17 support or, as a
// special case, for MSVS 2015+ (which did not define __cplusplus correctly by
// default as of version 2017.7 and could not do so at all before that).
#ifndef MODERN_SQLITE_UNCAUGHT_EXCEPTIONS_SUPPORT
#ifdef __cpp_lib_uncaught_exceptions
#define MODERN_SQLITE_UNCAUGHT_EXCEPTIONS_SUPPORT
#elif __cplusplus >= 201703L
#define MODERN_SQLITE_UNCAUGHT_EXCEPTIONS_SUPPORT
#elif defined(_MSC_VER) && _MSC_VER >= 1900
#define MODERN_SQLITE_UNCAUGHT_EXCEPTIONS_SUPPORT
#endif
#endif
namespace sqlite {
namespace utility {
#ifdef MODERN_SQLITE_UNCAUGHT_EXCEPTIONS_SUPPORT
class UncaughtExceptionDetector {
public:
operator bool() {
return count != std::uncaught_exceptions();
}
private:
int count = std::uncaught_exceptions();
};
#else
class UncaughtExceptionDetector {
public:
operator bool() {
return std::uncaught_exception();
}
};
#endif
}
}
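
The detector is what lets a destructor distinguish normal scope exit from exit caused by stack unwinding. A hypothetical guard built on it (not part of the library) could look like this, assuming the database class from the main header:

class transaction_guard {
    sqlite::database& db;
    sqlite::utility::UncaughtExceptionDetector detector;
public:
    explicit transaction_guard(sqlite::database& db) : db(db) { db << "begin;"; }
    ~transaction_guard() {
        try {
            if (detector)            // true only while an exception is unwinding this scope
                db << "rollback;";
            else
                db << "commit;";
        } catch (...) {}             // never let a destructor throw
    }
};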

View file

@ -0,0 +1,42 @@
#pragma once
#include <locale>
#include <string>
#include <algorithm>
#include "../errors.h"
namespace sqlite {
namespace utility {
inline std::string utf16_to_utf8(u16str_ref input) {
struct : std::codecvt<char16_t, char, std::mbstate_t> {
} codecvt;
std::mbstate_t state{};
std::string result((std::max)(input.size() * 3 / 2, std::size_t(4)), '\0');
const char16_t *remaining_input = input.data();
std::size_t produced_output = 0;
while(true) {
char *used_output;
switch(codecvt.out(state, remaining_input, &input[input.size()],
remaining_input, &result[produced_output],
&result[result.size() - 1] + 1, used_output)) {
case std::codecvt_base::ok:
result.resize(used_output - result.data());
return result;
case std::codecvt_base::noconv:
// This should be unreachable
case std::codecvt_base::error:
throw errors::invalid_utf16("Invalid UTF-16 input", "");
case std::codecvt_base::partial:
if(used_output == result.data() + produced_output)
throw errors::invalid_utf16("Unexpected end of input", "");
produced_output = used_output - result.data();
result.resize(
result.size()
+ (std::max)((&input[input.size()] - remaining_input) * 3 / 2,
std::ptrdiff_t(4)));
}
}
}
} // namespace utility
} // namespace sqlite
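
Illustrative call, assuming a UTF-16 string obtained from, e.g., sqlite3_column_text16:

std::u16string wide = u"claim name";
std::string narrow = sqlite::utility::utf16_to_utf8(wide);   // yields the UTF-8 encoding of "claim name"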

19114
src/claimtrie/sqlite/shell.c Normal file

File diff suppressed because it is too large

225063
src/claimtrie/sqlite/sqlite3.c Normal file

File diff suppressed because it is too large

11859
src/claimtrie/sqlite/sqlite3.h Normal file

File diff suppressed because it is too large

View file

@ -0,0 +1,638 @@
/*
** 2006 June 7
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
*************************************************************************
** This header file defines the SQLite interface for use by
** shared libraries that want to be imported as extensions into
** an SQLite instance. Shared libraries that intend to be loaded
** as extensions by SQLite should #include this file instead of
** sqlite3.h.
*/
#ifndef SQLITE3EXT_H
#define SQLITE3EXT_H
#include "sqlite3.h"
/*
** The following structure holds pointers to all of the SQLite API
** routines.
**
** WARNING: In order to maintain backwards compatibility, add new
** interfaces to the end of this structure only. If you insert new
** interfaces in the middle of this structure, then older different
** versions of SQLite will not be able to load each other's shared
** libraries!
*/
struct sqlite3_api_routines {
void * (*aggregate_context)(sqlite3_context*,int nBytes);
int (*aggregate_count)(sqlite3_context*);
int (*bind_blob)(sqlite3_stmt*,int,const void*,int n,void(*)(void*));
int (*bind_double)(sqlite3_stmt*,int,double);
int (*bind_int)(sqlite3_stmt*,int,int);
int (*bind_int64)(sqlite3_stmt*,int,sqlite_int64);
int (*bind_null)(sqlite3_stmt*,int);
int (*bind_parameter_count)(sqlite3_stmt*);
int (*bind_parameter_index)(sqlite3_stmt*,const char*zName);
const char * (*bind_parameter_name)(sqlite3_stmt*,int);
int (*bind_text)(sqlite3_stmt*,int,const char*,int n,void(*)(void*));
int (*bind_text16)(sqlite3_stmt*,int,const void*,int,void(*)(void*));
int (*bind_value)(sqlite3_stmt*,int,const sqlite3_value*);
int (*busy_handler)(sqlite3*,int(*)(void*,int),void*);
int (*busy_timeout)(sqlite3*,int ms);
int (*changes)(sqlite3*);
int (*close)(sqlite3*);
int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*,
int eTextRep,const char*));
int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*,
int eTextRep,const void*));
const void * (*column_blob)(sqlite3_stmt*,int iCol);
int (*column_bytes)(sqlite3_stmt*,int iCol);
int (*column_bytes16)(sqlite3_stmt*,int iCol);
int (*column_count)(sqlite3_stmt*pStmt);
const char * (*column_database_name)(sqlite3_stmt*,int);
const void * (*column_database_name16)(sqlite3_stmt*,int);
const char * (*column_decltype)(sqlite3_stmt*,int i);
const void * (*column_decltype16)(sqlite3_stmt*,int);
double (*column_double)(sqlite3_stmt*,int iCol);
int (*column_int)(sqlite3_stmt*,int iCol);
sqlite_int64 (*column_int64)(sqlite3_stmt*,int iCol);
const char * (*column_name)(sqlite3_stmt*,int);
const void * (*column_name16)(sqlite3_stmt*,int);
const char * (*column_origin_name)(sqlite3_stmt*,int);
const void * (*column_origin_name16)(sqlite3_stmt*,int);
const char * (*column_table_name)(sqlite3_stmt*,int);
const void * (*column_table_name16)(sqlite3_stmt*,int);
const unsigned char * (*column_text)(sqlite3_stmt*,int iCol);
const void * (*column_text16)(sqlite3_stmt*,int iCol);
int (*column_type)(sqlite3_stmt*,int iCol);
sqlite3_value* (*column_value)(sqlite3_stmt*,int iCol);
void * (*commit_hook)(sqlite3*,int(*)(void*),void*);
int (*complete)(const char*sql);
int (*complete16)(const void*sql);
int (*create_collation)(sqlite3*,const char*,int,void*,
int(*)(void*,int,const void*,int,const void*));
int (*create_collation16)(sqlite3*,const void*,int,void*,
int(*)(void*,int,const void*,int,const void*));
int (*create_function)(sqlite3*,const char*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*));
int (*create_function16)(sqlite3*,const void*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*));
int (*create_module)(sqlite3*,const char*,const sqlite3_module*,void*);
int (*data_count)(sqlite3_stmt*pStmt);
sqlite3 * (*db_handle)(sqlite3_stmt*);
int (*declare_vtab)(sqlite3*,const char*);
int (*enable_shared_cache)(int);
int (*errcode)(sqlite3*db);
const char * (*errmsg)(sqlite3*);
const void * (*errmsg16)(sqlite3*);
int (*exec)(sqlite3*,const char*,sqlite3_callback,void*,char**);
int (*expired)(sqlite3_stmt*);
int (*finalize)(sqlite3_stmt*pStmt);
void (*free)(void*);
void (*free_table)(char**result);
int (*get_autocommit)(sqlite3*);
void * (*get_auxdata)(sqlite3_context*,int);
int (*get_table)(sqlite3*,const char*,char***,int*,int*,char**);
int (*global_recover)(void);
void (*interruptx)(sqlite3*);
sqlite_int64 (*last_insert_rowid)(sqlite3*);
const char * (*libversion)(void);
int (*libversion_number)(void);
void *(*malloc)(int);
char * (*mprintf)(const char*,...);
int (*open)(const char*,sqlite3**);
int (*open16)(const void*,sqlite3**);
int (*prepare)(sqlite3*,const char*,int,sqlite3_stmt**,const char**);
int (*prepare16)(sqlite3*,const void*,int,sqlite3_stmt**,const void**);
void * (*profile)(sqlite3*,void(*)(void*,const char*,sqlite_uint64),void*);
void (*progress_handler)(sqlite3*,int,int(*)(void*),void*);
void *(*realloc)(void*,int);
int (*reset)(sqlite3_stmt*pStmt);
void (*result_blob)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_double)(sqlite3_context*,double);
void (*result_error)(sqlite3_context*,const char*,int);
void (*result_error16)(sqlite3_context*,const void*,int);
void (*result_int)(sqlite3_context*,int);
void (*result_int64)(sqlite3_context*,sqlite_int64);
void (*result_null)(sqlite3_context*);
void (*result_text)(sqlite3_context*,const char*,int,void(*)(void*));
void (*result_text16)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_text16be)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_text16le)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_value)(sqlite3_context*,sqlite3_value*);
void * (*rollback_hook)(sqlite3*,void(*)(void*),void*);
int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*,
const char*,const char*),void*);
void (*set_auxdata)(sqlite3_context*,int,void*,void (*)(void*));
char * (*xsnprintf)(int,char*,const char*,...);
int (*step)(sqlite3_stmt*);
int (*table_column_metadata)(sqlite3*,const char*,const char*,const char*,
char const**,char const**,int*,int*,int*);
void (*thread_cleanup)(void);
int (*total_changes)(sqlite3*);
void * (*trace)(sqlite3*,void(*xTrace)(void*,const char*),void*);
int (*transfer_bindings)(sqlite3_stmt*,sqlite3_stmt*);
void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*,
sqlite_int64),void*);
void * (*user_data)(sqlite3_context*);
const void * (*value_blob)(sqlite3_value*);
int (*value_bytes)(sqlite3_value*);
int (*value_bytes16)(sqlite3_value*);
double (*value_double)(sqlite3_value*);
int (*value_int)(sqlite3_value*);
sqlite_int64 (*value_int64)(sqlite3_value*);
int (*value_numeric_type)(sqlite3_value*);
const unsigned char * (*value_text)(sqlite3_value*);
const void * (*value_text16)(sqlite3_value*);
const void * (*value_text16be)(sqlite3_value*);
const void * (*value_text16le)(sqlite3_value*);
int (*value_type)(sqlite3_value*);
char *(*vmprintf)(const char*,va_list);
/* Added ??? */
int (*overload_function)(sqlite3*, const char *zFuncName, int nArg);
/* Added by 3.3.13 */
int (*prepare_v2)(sqlite3*,const char*,int,sqlite3_stmt**,const char**);
int (*prepare16_v2)(sqlite3*,const void*,int,sqlite3_stmt**,const void**);
int (*clear_bindings)(sqlite3_stmt*);
/* Added by 3.4.1 */
int (*create_module_v2)(sqlite3*,const char*,const sqlite3_module*,void*,
void (*xDestroy)(void *));
/* Added by 3.5.0 */
int (*bind_zeroblob)(sqlite3_stmt*,int,int);
int (*blob_bytes)(sqlite3_blob*);
int (*blob_close)(sqlite3_blob*);
int (*blob_open)(sqlite3*,const char*,const char*,const char*,sqlite3_int64,
int,sqlite3_blob**);
int (*blob_read)(sqlite3_blob*,void*,int,int);
int (*blob_write)(sqlite3_blob*,const void*,int,int);
int (*create_collation_v2)(sqlite3*,const char*,int,void*,
int(*)(void*,int,const void*,int,const void*),
void(*)(void*));
int (*file_control)(sqlite3*,const char*,int,void*);
sqlite3_int64 (*memory_highwater)(int);
sqlite3_int64 (*memory_used)(void);
sqlite3_mutex *(*mutex_alloc)(int);
void (*mutex_enter)(sqlite3_mutex*);
void (*mutex_free)(sqlite3_mutex*);
void (*mutex_leave)(sqlite3_mutex*);
int (*mutex_try)(sqlite3_mutex*);
int (*open_v2)(const char*,sqlite3**,int,const char*);
int (*release_memory)(int);
void (*result_error_nomem)(sqlite3_context*);
void (*result_error_toobig)(sqlite3_context*);
int (*sleep)(int);
void (*soft_heap_limit)(int);
sqlite3_vfs *(*vfs_find)(const char*);
int (*vfs_register)(sqlite3_vfs*,int);
int (*vfs_unregister)(sqlite3_vfs*);
int (*xthreadsafe)(void);
void (*result_zeroblob)(sqlite3_context*,int);
void (*result_error_code)(sqlite3_context*,int);
int (*test_control)(int, ...);
void (*randomness)(int,void*);
sqlite3 *(*context_db_handle)(sqlite3_context*);
int (*extended_result_codes)(sqlite3*,int);
int (*limit)(sqlite3*,int,int);
sqlite3_stmt *(*next_stmt)(sqlite3*,sqlite3_stmt*);
const char *(*sql)(sqlite3_stmt*);
int (*status)(int,int*,int*,int);
int (*backup_finish)(sqlite3_backup*);
sqlite3_backup *(*backup_init)(sqlite3*,const char*,sqlite3*,const char*);
int (*backup_pagecount)(sqlite3_backup*);
int (*backup_remaining)(sqlite3_backup*);
int (*backup_step)(sqlite3_backup*,int);
const char *(*compileoption_get)(int);
int (*compileoption_used)(const char*);
int (*create_function_v2)(sqlite3*,const char*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*),
void(*xDestroy)(void*));
int (*db_config)(sqlite3*,int,...);
sqlite3_mutex *(*db_mutex)(sqlite3*);
int (*db_status)(sqlite3*,int,int*,int*,int);
int (*extended_errcode)(sqlite3*);
void (*log)(int,const char*,...);
sqlite3_int64 (*soft_heap_limit64)(sqlite3_int64);
const char *(*sourceid)(void);
int (*stmt_status)(sqlite3_stmt*,int,int);
int (*strnicmp)(const char*,const char*,int);
int (*unlock_notify)(sqlite3*,void(*)(void**,int),void*);
int (*wal_autocheckpoint)(sqlite3*,int);
int (*wal_checkpoint)(sqlite3*,const char*);
void *(*wal_hook)(sqlite3*,int(*)(void*,sqlite3*,const char*,int),void*);
int (*blob_reopen)(sqlite3_blob*,sqlite3_int64);
int (*vtab_config)(sqlite3*,int op,...);
int (*vtab_on_conflict)(sqlite3*);
/* Version 3.7.16 and later */
int (*close_v2)(sqlite3*);
const char *(*db_filename)(sqlite3*,const char*);
int (*db_readonly)(sqlite3*,const char*);
int (*db_release_memory)(sqlite3*);
const char *(*errstr)(int);
int (*stmt_busy)(sqlite3_stmt*);
int (*stmt_readonly)(sqlite3_stmt*);
int (*stricmp)(const char*,const char*);
int (*uri_boolean)(const char*,const char*,int);
sqlite3_int64 (*uri_int64)(const char*,const char*,sqlite3_int64);
const char *(*uri_parameter)(const char*,const char*);
char *(*xvsnprintf)(int,char*,const char*,va_list);
int (*wal_checkpoint_v2)(sqlite3*,const char*,int,int*,int*);
/* Version 3.8.7 and later */
int (*auto_extension)(void(*)(void));
int (*bind_blob64)(sqlite3_stmt*,int,const void*,sqlite3_uint64,
void(*)(void*));
int (*bind_text64)(sqlite3_stmt*,int,const char*,sqlite3_uint64,
void(*)(void*),unsigned char);
int (*cancel_auto_extension)(void(*)(void));
int (*load_extension)(sqlite3*,const char*,const char*,char**);
void *(*malloc64)(sqlite3_uint64);
sqlite3_uint64 (*msize)(void*);
void *(*realloc64)(void*,sqlite3_uint64);
void (*reset_auto_extension)(void);
void (*result_blob64)(sqlite3_context*,const void*,sqlite3_uint64,
void(*)(void*));
void (*result_text64)(sqlite3_context*,const char*,sqlite3_uint64,
void(*)(void*), unsigned char);
int (*strglob)(const char*,const char*);
/* Version 3.8.11 and later */
sqlite3_value *(*value_dup)(const sqlite3_value*);
void (*value_free)(sqlite3_value*);
int (*result_zeroblob64)(sqlite3_context*,sqlite3_uint64);
int (*bind_zeroblob64)(sqlite3_stmt*, int, sqlite3_uint64);
/* Version 3.9.0 and later */
unsigned int (*value_subtype)(sqlite3_value*);
void (*result_subtype)(sqlite3_context*,unsigned int);
/* Version 3.10.0 and later */
int (*status64)(int,sqlite3_int64*,sqlite3_int64*,int);
int (*strlike)(const char*,const char*,unsigned int);
int (*db_cacheflush)(sqlite3*);
/* Version 3.12.0 and later */
int (*system_errno)(sqlite3*);
/* Version 3.14.0 and later */
int (*trace_v2)(sqlite3*,unsigned,int(*)(unsigned,void*,void*,void*),void*);
char *(*expanded_sql)(sqlite3_stmt*);
/* Version 3.18.0 and later */
void (*set_last_insert_rowid)(sqlite3*,sqlite3_int64);
/* Version 3.20.0 and later */
int (*prepare_v3)(sqlite3*,const char*,int,unsigned int,
sqlite3_stmt**,const char**);
int (*prepare16_v3)(sqlite3*,const void*,int,unsigned int,
sqlite3_stmt**,const void**);
int (*bind_pointer)(sqlite3_stmt*,int,void*,const char*,void(*)(void*));
void (*result_pointer)(sqlite3_context*,void*,const char*,void(*)(void*));
void *(*value_pointer)(sqlite3_value*,const char*);
int (*vtab_nochange)(sqlite3_context*);
int (*value_nochange)(sqlite3_value*);
const char *(*vtab_collation)(sqlite3_index_info*,int);
/* Version 3.24.0 and later */
int (*keyword_count)(void);
int (*keyword_name)(int,const char**,int*);
int (*keyword_check)(const char*,int);
sqlite3_str *(*str_new)(sqlite3*);
char *(*str_finish)(sqlite3_str*);
void (*str_appendf)(sqlite3_str*, const char *zFormat, ...);
void (*str_vappendf)(sqlite3_str*, const char *zFormat, va_list);
void (*str_append)(sqlite3_str*, const char *zIn, int N);
void (*str_appendall)(sqlite3_str*, const char *zIn);
void (*str_appendchar)(sqlite3_str*, int N, char C);
void (*str_reset)(sqlite3_str*);
int (*str_errcode)(sqlite3_str*);
int (*str_length)(sqlite3_str*);
char *(*str_value)(sqlite3_str*);
/* Version 3.25.0 and later */
int (*create_window_function)(sqlite3*,const char*,int,int,void*,
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*),
void (*xValue)(sqlite3_context*),
void (*xInv)(sqlite3_context*,int,sqlite3_value**),
void(*xDestroy)(void*));
/* Version 3.26.0 and later */
const char *(*normalized_sql)(sqlite3_stmt*);
/* Version 3.28.0 and later */
int (*stmt_isexplain)(sqlite3_stmt*);
int (*value_frombind)(sqlite3_value*);
/* Version 3.30.0 and later */
int (*drop_modules)(sqlite3*,const char**);
};
/*
** This is the function signature used for all extension entry points. It
** is also defined in the file "loadext.c".
*/
typedef int (*sqlite3_loadext_entry)(
sqlite3 *db, /* Handle to the database. */
char **pzErrMsg, /* Used to set error string on failure. */
const sqlite3_api_routines *pThunk /* Extension API function pointers. */
);
/*
** The following macros redefine the API routines so that they are
** redirected through the global sqlite3_api structure.
**
** This header file is also used by the loadext.c source file
** (part of the main SQLite library - not an extension) so that
** it can get access to the sqlite3_api_routines structure
** definition. But the main library does not want to redefine
** the API. So the redefinition macros are only valid if the
** SQLITE_CORE macro is undefined.
*/
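/*
** Illustrative sketch (not part of this header): a minimal loadable extension
** entry point.  SQLITE_EXTENSION_INIT1 and SQLITE_EXTENSION_INIT2, defined
** later in this file, declare the sqlite3_api pointer and set it from the
** host's routine table so that the redirection macros below have something
** to dereference.
**
**   SQLITE_EXTENSION_INIT1
**
**   int sqlite3_extension_init(
**     sqlite3 *db,
**     char **pzErrMsg,
**     const sqlite3_api_routines *pApi
**   ){
**     SQLITE_EXTENSION_INIT2(pApi);
**     (void)db;
**     (void)pzErrMsg;
**     return SQLITE_OK;
**   }
*/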
#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION)
#define sqlite3_aggregate_context sqlite3_api->aggregate_context
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_aggregate_count sqlite3_api->aggregate_count
#endif
#define sqlite3_bind_blob sqlite3_api->bind_blob
#define sqlite3_bind_double sqlite3_api->bind_double
#define sqlite3_bind_int sqlite3_api->bind_int
#define sqlite3_bind_int64 sqlite3_api->bind_int64
#define sqlite3_bind_null sqlite3_api->bind_null
#define sqlite3_bind_parameter_count sqlite3_api->bind_parameter_count
#define sqlite3_bind_parameter_index sqlite3_api->bind_parameter_index
#define sqlite3_bind_parameter_name sqlite3_api->bind_parameter_name
#define sqlite3_bind_text sqlite3_api->bind_text
#define sqlite3_bind_text16 sqlite3_api->bind_text16
#define sqlite3_bind_value sqlite3_api->bind_value
#define sqlite3_busy_handler sqlite3_api->busy_handler
#define sqlite3_busy_timeout sqlite3_api->busy_timeout
#define sqlite3_changes sqlite3_api->changes
#define sqlite3_close sqlite3_api->close
#define sqlite3_collation_needed sqlite3_api->collation_needed
#define sqlite3_collation_needed16 sqlite3_api->collation_needed16
#define sqlite3_column_blob sqlite3_api->column_blob
#define sqlite3_column_bytes sqlite3_api->column_bytes
#define sqlite3_column_bytes16 sqlite3_api->column_bytes16
#define sqlite3_column_count sqlite3_api->column_count
#define sqlite3_column_database_name sqlite3_api->column_database_name
#define sqlite3_column_database_name16 sqlite3_api->column_database_name16
#define sqlite3_column_decltype sqlite3_api->column_decltype
#define sqlite3_column_decltype16 sqlite3_api->column_decltype16
#define sqlite3_column_double sqlite3_api->column_double
#define sqlite3_column_int sqlite3_api->column_int
#define sqlite3_column_int64 sqlite3_api->column_int64
#define sqlite3_column_name sqlite3_api->column_name
#define sqlite3_column_name16 sqlite3_api->column_name16
#define sqlite3_column_origin_name sqlite3_api->column_origin_name
#define sqlite3_column_origin_name16 sqlite3_api->column_origin_name16
#define sqlite3_column_table_name sqlite3_api->column_table_name
#define sqlite3_column_table_name16 sqlite3_api->column_table_name16
#define sqlite3_column_text sqlite3_api->column_text
#define sqlite3_column_text16 sqlite3_api->column_text16
#define sqlite3_column_type sqlite3_api->column_type
#define sqlite3_column_value sqlite3_api->column_value
#define sqlite3_commit_hook sqlite3_api->commit_hook
#define sqlite3_complete sqlite3_api->complete
#define sqlite3_complete16 sqlite3_api->complete16
#define sqlite3_create_collation sqlite3_api->create_collation
#define sqlite3_create_collation16 sqlite3_api->create_collation16
#define sqlite3_create_function sqlite3_api->create_function
#define sqlite3_create_function16 sqlite3_api->create_function16
#define sqlite3_create_module sqlite3_api->create_module
#define sqlite3_create_module_v2 sqlite3_api->create_module_v2
#define sqlite3_data_count sqlite3_api->data_count
#define sqlite3_db_handle sqlite3_api->db_handle
#define sqlite3_declare_vtab sqlite3_api->declare_vtab
#define sqlite3_enable_shared_cache sqlite3_api->enable_shared_cache
#define sqlite3_errcode sqlite3_api->errcode
#define sqlite3_errmsg sqlite3_api->errmsg
#define sqlite3_errmsg16 sqlite3_api->errmsg16
#define sqlite3_exec sqlite3_api->exec
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_expired sqlite3_api->expired
#endif
#define sqlite3_finalize sqlite3_api->finalize
#define sqlite3_free sqlite3_api->free
#define sqlite3_free_table sqlite3_api->free_table
#define sqlite3_get_autocommit sqlite3_api->get_autocommit
#define sqlite3_get_auxdata sqlite3_api->get_auxdata
#define sqlite3_get_table sqlite3_api->get_table
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_global_recover sqlite3_api->global_recover
#endif
#define sqlite3_interrupt sqlite3_api->interruptx
#define sqlite3_last_insert_rowid sqlite3_api->last_insert_rowid
#define sqlite3_libversion sqlite3_api->libversion
#define sqlite3_libversion_number sqlite3_api->libversion_number
#define sqlite3_malloc sqlite3_api->malloc
#define sqlite3_mprintf sqlite3_api->mprintf
#define sqlite3_open sqlite3_api->open
#define sqlite3_open16 sqlite3_api->open16
#define sqlite3_prepare sqlite3_api->prepare
#define sqlite3_prepare16 sqlite3_api->prepare16
#define sqlite3_prepare_v2 sqlite3_api->prepare_v2
#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2
#define sqlite3_profile sqlite3_api->profile
#define sqlite3_progress_handler sqlite3_api->progress_handler
#define sqlite3_realloc sqlite3_api->realloc
#define sqlite3_reset sqlite3_api->reset
#define sqlite3_result_blob sqlite3_api->result_blob
#define sqlite3_result_double sqlite3_api->result_double
#define sqlite3_result_error sqlite3_api->result_error
#define sqlite3_result_error16 sqlite3_api->result_error16
#define sqlite3_result_int sqlite3_api->result_int
#define sqlite3_result_int64 sqlite3_api->result_int64
#define sqlite3_result_null sqlite3_api->result_null
#define sqlite3_result_text sqlite3_api->result_text
#define sqlite3_result_text16 sqlite3_api->result_text16
#define sqlite3_result_text16be sqlite3_api->result_text16be
#define sqlite3_result_text16le sqlite3_api->result_text16le
#define sqlite3_result_value sqlite3_api->result_value
#define sqlite3_rollback_hook sqlite3_api->rollback_hook
#define sqlite3_set_authorizer sqlite3_api->set_authorizer
#define sqlite3_set_auxdata sqlite3_api->set_auxdata
#define sqlite3_snprintf sqlite3_api->xsnprintf
#define sqlite3_step sqlite3_api->step
#define sqlite3_table_column_metadata sqlite3_api->table_column_metadata
#define sqlite3_thread_cleanup sqlite3_api->thread_cleanup
#define sqlite3_total_changes sqlite3_api->total_changes
#define sqlite3_trace sqlite3_api->trace
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_transfer_bindings sqlite3_api->transfer_bindings
#endif
#define sqlite3_update_hook sqlite3_api->update_hook
#define sqlite3_user_data sqlite3_api->user_data
#define sqlite3_value_blob sqlite3_api->value_blob
#define sqlite3_value_bytes sqlite3_api->value_bytes
#define sqlite3_value_bytes16 sqlite3_api->value_bytes16
#define sqlite3_value_double sqlite3_api->value_double
#define sqlite3_value_int sqlite3_api->value_int
#define sqlite3_value_int64 sqlite3_api->value_int64
#define sqlite3_value_numeric_type sqlite3_api->value_numeric_type
#define sqlite3_value_text sqlite3_api->value_text
#define sqlite3_value_text16 sqlite3_api->value_text16
#define sqlite3_value_text16be sqlite3_api->value_text16be
#define sqlite3_value_text16le sqlite3_api->value_text16le
#define sqlite3_value_type sqlite3_api->value_type
#define sqlite3_vmprintf sqlite3_api->vmprintf
#define sqlite3_vsnprintf sqlite3_api->xvsnprintf
#define sqlite3_overload_function sqlite3_api->overload_function
#define sqlite3_prepare_v2 sqlite3_api->prepare_v2
#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2
#define sqlite3_clear_bindings sqlite3_api->clear_bindings
#define sqlite3_bind_zeroblob sqlite3_api->bind_zeroblob
#define sqlite3_blob_bytes sqlite3_api->blob_bytes
#define sqlite3_blob_close sqlite3_api->blob_close
#define sqlite3_blob_open sqlite3_api->blob_open
#define sqlite3_blob_read sqlite3_api->blob_read
#define sqlite3_blob_write sqlite3_api->blob_write
#define sqlite3_create_collation_v2 sqlite3_api->create_collation_v2
#define sqlite3_file_control sqlite3_api->file_control
#define sqlite3_memory_highwater sqlite3_api->memory_highwater
#define sqlite3_memory_used sqlite3_api->memory_used
#define sqlite3_mutex_alloc sqlite3_api->mutex_alloc
#define sqlite3_mutex_enter sqlite3_api->mutex_enter
#define sqlite3_mutex_free sqlite3_api->mutex_free
#define sqlite3_mutex_leave sqlite3_api->mutex_leave
#define sqlite3_mutex_try sqlite3_api->mutex_try
#define sqlite3_open_v2 sqlite3_api->open_v2
#define sqlite3_release_memory sqlite3_api->release_memory
#define sqlite3_result_error_nomem sqlite3_api->result_error_nomem
#define sqlite3_result_error_toobig sqlite3_api->result_error_toobig
#define sqlite3_sleep sqlite3_api->sleep
#define sqlite3_soft_heap_limit sqlite3_api->soft_heap_limit
#define sqlite3_vfs_find sqlite3_api->vfs_find
#define sqlite3_vfs_register sqlite3_api->vfs_register
#define sqlite3_vfs_unregister sqlite3_api->vfs_unregister
#define sqlite3_threadsafe sqlite3_api->xthreadsafe
#define sqlite3_result_zeroblob sqlite3_api->result_zeroblob
#define sqlite3_result_error_code sqlite3_api->result_error_code
#define sqlite3_test_control sqlite3_api->test_control
#define sqlite3_randomness sqlite3_api->randomness
#define sqlite3_context_db_handle sqlite3_api->context_db_handle
#define sqlite3_extended_result_codes sqlite3_api->extended_result_codes
#define sqlite3_limit sqlite3_api->limit
#define sqlite3_next_stmt sqlite3_api->next_stmt
#define sqlite3_sql sqlite3_api->sql
#define sqlite3_status sqlite3_api->status
#define sqlite3_backup_finish sqlite3_api->backup_finish
#define sqlite3_backup_init sqlite3_api->backup_init
#define sqlite3_backup_pagecount sqlite3_api->backup_pagecount
#define sqlite3_backup_remaining sqlite3_api->backup_remaining
#define sqlite3_backup_step sqlite3_api->backup_step
#define sqlite3_compileoption_get sqlite3_api->compileoption_get
#define sqlite3_compileoption_used sqlite3_api->compileoption_used
#define sqlite3_create_function_v2 sqlite3_api->create_function_v2
#define sqlite3_db_config sqlite3_api->db_config
#define sqlite3_db_mutex sqlite3_api->db_mutex
#define sqlite3_db_status sqlite3_api->db_status
#define sqlite3_extended_errcode sqlite3_api->extended_errcode
#define sqlite3_log sqlite3_api->log
#define sqlite3_soft_heap_limit64 sqlite3_api->soft_heap_limit64
#define sqlite3_sourceid sqlite3_api->sourceid
#define sqlite3_stmt_status sqlite3_api->stmt_status
#define sqlite3_strnicmp sqlite3_api->strnicmp
#define sqlite3_unlock_notify sqlite3_api->unlock_notify
#define sqlite3_wal_autocheckpoint sqlite3_api->wal_autocheckpoint
#define sqlite3_wal_checkpoint sqlite3_api->wal_checkpoint
#define sqlite3_wal_hook sqlite3_api->wal_hook
#define sqlite3_blob_reopen sqlite3_api->blob_reopen
#define sqlite3_vtab_config sqlite3_api->vtab_config
#define sqlite3_vtab_on_conflict sqlite3_api->vtab_on_conflict
/* Version 3.7.16 and later */
#define sqlite3_close_v2 sqlite3_api->close_v2
#define sqlite3_db_filename sqlite3_api->db_filename
#define sqlite3_db_readonly sqlite3_api->db_readonly
#define sqlite3_db_release_memory sqlite3_api->db_release_memory
#define sqlite3_errstr sqlite3_api->errstr
#define sqlite3_stmt_busy sqlite3_api->stmt_busy
#define sqlite3_stmt_readonly sqlite3_api->stmt_readonly
#define sqlite3_stricmp sqlite3_api->stricmp
#define sqlite3_uri_boolean sqlite3_api->uri_boolean
#define sqlite3_uri_int64 sqlite3_api->uri_int64
#define sqlite3_uri_parameter sqlite3_api->uri_parameter
#define sqlite3_uri_vsnprintf sqlite3_api->xvsnprintf
#define sqlite3_wal_checkpoint_v2 sqlite3_api->wal_checkpoint_v2
/* Version 3.8.7 and later */
#define sqlite3_auto_extension sqlite3_api->auto_extension
#define sqlite3_bind_blob64 sqlite3_api->bind_blob64
#define sqlite3_bind_text64 sqlite3_api->bind_text64
#define sqlite3_cancel_auto_extension sqlite3_api->cancel_auto_extension
#define sqlite3_load_extension sqlite3_api->load_extension
#define sqlite3_malloc64 sqlite3_api->malloc64
#define sqlite3_msize sqlite3_api->msize
#define sqlite3_realloc64 sqlite3_api->realloc64
#define sqlite3_reset_auto_extension sqlite3_api->reset_auto_extension
#define sqlite3_result_blob64 sqlite3_api->result_blob64
#define sqlite3_result_text64 sqlite3_api->result_text64
#define sqlite3_strglob sqlite3_api->strglob
/* Version 3.8.11 and later */
#define sqlite3_value_dup sqlite3_api->value_dup
#define sqlite3_value_free sqlite3_api->value_free
#define sqlite3_result_zeroblob64 sqlite3_api->result_zeroblob64
#define sqlite3_bind_zeroblob64 sqlite3_api->bind_zeroblob64
/* Version 3.9.0 and later */
#define sqlite3_value_subtype sqlite3_api->value_subtype
#define sqlite3_result_subtype sqlite3_api->result_subtype
/* Version 3.10.0 and later */
#define sqlite3_status64 sqlite3_api->status64
#define sqlite3_strlike sqlite3_api->strlike
#define sqlite3_db_cacheflush sqlite3_api->db_cacheflush
/* Version 3.12.0 and later */
#define sqlite3_system_errno sqlite3_api->system_errno
/* Version 3.14.0 and later */
#define sqlite3_trace_v2 sqlite3_api->trace_v2
#define sqlite3_expanded_sql sqlite3_api->expanded_sql
/* Version 3.18.0 and later */
#define sqlite3_set_last_insert_rowid sqlite3_api->set_last_insert_rowid
/* Version 3.20.0 and later */
#define sqlite3_prepare_v3 sqlite3_api->prepare_v3
#define sqlite3_prepare16_v3 sqlite3_api->prepare16_v3
#define sqlite3_bind_pointer sqlite3_api->bind_pointer
#define sqlite3_result_pointer sqlite3_api->result_pointer
#define sqlite3_value_pointer sqlite3_api->value_pointer
/* Version 3.22.0 and later */
#define sqlite3_vtab_nochange sqlite3_api->vtab_nochange
#define sqlite3_value_nochange sqlite3_api->value_nochange
#define sqlite3_vtab_collation sqlite3_api->vtab_collation
/* Version 3.24.0 and later */
#define sqlite3_keyword_count sqlite3_api->keyword_count
#define sqlite3_keyword_name sqlite3_api->keyword_name
#define sqlite3_keyword_check sqlite3_api->keyword_check
#define sqlite3_str_new sqlite3_api->str_new
#define sqlite3_str_finish sqlite3_api->str_finish
#define sqlite3_str_appendf sqlite3_api->str_appendf
#define sqlite3_str_vappendf sqlite3_api->str_vappendf
#define sqlite3_str_append sqlite3_api->str_append
#define sqlite3_str_appendall sqlite3_api->str_appendall
#define sqlite3_str_appendchar sqlite3_api->str_appendchar
#define sqlite3_str_reset sqlite3_api->str_reset
#define sqlite3_str_errcode sqlite3_api->str_errcode
#define sqlite3_str_length sqlite3_api->str_length
#define sqlite3_str_value sqlite3_api->str_value
/* Version 3.25.0 and later */
#define sqlite3_create_window_function sqlite3_api->create_window_function
/* Version 3.26.0 and later */
#define sqlite3_normalized_sql sqlite3_api->normalized_sql
/* Version 3.28.0 and later */
#define sqlite3_stmt_isexplain sqlite3_api->isexplain
#define sqlite3_value_frombind sqlite3_api->frombind
/* Version 3.30.0 and later */
#define sqlite3_drop_modules sqlite3_api->drop_modules
#endif /* !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION) */
#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION)
/* This case when the file really is being compiled as a loadable
** extension */
# define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api=0;
# define SQLITE_EXTENSION_INIT2(v) sqlite3_api=v;
# define SQLITE_EXTENSION_INIT3 \
extern const sqlite3_api_routines *sqlite3_api;
#else
/* This case when the file is being statically linked into the
** application */
# define SQLITE_EXTENSION_INIT1 /*no-op*/
# define SQLITE_EXTENSION_INIT2(v) (void)v; /* unused parameter */
# define SQLITE_EXTENSION_INIT3 /*no-op*/
#endif
#endif /* SQLITE3EXT_H */

View file

@ -0,0 +1,455 @@
#ifndef TAKEOVERWORKAROUNDS_H
#define TAKEOVERWORKAROUNDS_H
#include <string>
#include <boost/container/flat_map.hpp>
const boost::container::flat_map<std::pair<int, std::string>, int> takeoverWorkarounds = {
{{496856, "HunterxHunterAMV"}, 496835},
{{542978, "namethattune1"}, 542429},
{{543508, "namethattune-5"}, 543306},
{{546780, "forecasts"}, 546624},
{{548730, "forecasts"}, 546780},
{{551540, "forecasts"}, 548730},
{{552380, "chicthinkingofyou"}, 550804},
{{560363, "takephotowithlbryteam"}, 559962},
{{563710, "test-img"}, 563700},
{{566750, "itila"}, 543261},
{{567082, "malabarismo-com-bolas-de-futebol-vs-chap"}, 563592},
{{596860, "180mphpullsthrougheurope"}, 596757},
{{617743, "vaccines"}, 572756},
{{619609, "copface-slamshandcuffedteengirlintoconcrete"}, 539940},
{{620392, "banker-exposes-satanic-elite"}, 597788},
{{624997, "direttiva-sulle-armi-ue-in-svizzera-di"}, 567908},
{{624997, "best-of-apex"}, 585580},
{{629970, "cannot-ignore-my-veins"}, 629914},
{{633058, "bio-waste-we-programmed-your-brain"}, 617185},
{{633601, "macrolauncher-overview-first-look"}, 633058},
{{640186, "its-up-to-you-and-i-2019"}, 639116},
{{640241, "tor-eas-3-20"}, 592645},
{{640522, "seadoxdark"}, 619531},
{{640617, "lbry-przewodnik-1-instalacja"}, 451186},
{{640623, "avxchange-2019-the-next-netflix-spotify"}, 606790},
{{640684, "algebra-introduction"}, 624152},
{{640684, "a-high-school-math-teacher-does-a"}, 600885},
{{640684, "another-random-life-update"}, 600884},
{{640684, "who-is-the-taylor-series-for"}, 600882},
{{640684, "tedx-talk-released"}, 612303},
{{640730, "e-mental"}, 615375},
{{641143, "amiga-1200-bespoke-virgin-cinema"}, 623542},
{{641161, "dreamscape-432-omega"}, 618894},
{{641162, "2019-topstone-carbon-force-etap-axs-bike"}, 639107},
{{641186, "arin-sings-big-floppy-penis-live-jazz-2"}, 638904},
{{641421, "edward-snowden-on-bitcoin-and-privacy"}, 522729},
{{641421, "what-is-libra-facebook-s-new"}, 598236},
{{641421, "what-are-stablecoins-counter-party-risk"}, 583508},
{{641421, "anthony-pomp-pompliano-discusses-crypto"}, 564416},
{{641421, "tim-draper-crypto-invest-summit-2019"}, 550329},
{{641421, "mass-adoption-and-what-will-it-take-to"}, 549781},
{{641421, "dragonwolftech-youtube-channel-trailer"}, 567128},
{{641421, "naomi-brockwell-s-weekly-crypto-recap"}, 540006},
{{641421, "blockchain-based-youtube-twitter"}, 580809},
{{641421, "andreas-antonopoulos-on-privacy-privacy"}, 533522},
{{641817, "mexico-submits-and-big-tech-worsens"}, 582977},
{{641817, "why-we-need-travel-bans"}, 581354},
{{641880, "censored-by-patreon-bitchute-shares"}, 482460},
{{641880, "crypto-wonderland"}, 485218},
{{642168, "1-diabolo-julio-cezar-16-cbmcp-freestyle"}, 374999},
{{642314, "tough-students"}, 615780},
{{642697, "gamercauldronep2"}, 642153},
{{643406, "the-most-fun-i-ve-had-in-a-long-time"}, 616506},
{{643893, "spitshine69-and-uk-freedom-audits"}, 616876},
{{644480, "my-mum-getting-attacked-a-duck"}, 567624},
{{644486, "the-cryptocurrency-experiment"}, 569189},
{{644486, "tag-you-re-it"}, 558316},
{{644486, "orange-county-mineral-society-rock-and"}, 397138},
{{644486, "sampling-with-the-gold-rush-nugget"}, 527960},
{{644562, "september-15-21-a-new-way-of-doing"}, 634792},
{{644562, "july-week-3-collective-frequency-general"}, 607942},
{{644562, "september-8-14-growing-up-general"}, 630977},
{{644562, "august-4-10-collective-frequency-general"}, 612307},
{{644562, "august-11-17-collective-frequency"}, 617279},
{{644562, "september-1-7-gentle-wake-up-call"}, 627104},
{{644607, "no-more-lol"}, 643497},
{{644607, "minion-masters-who-knew"}, 641313},
{{645236, "danganronpa-3-the-end-of-hope-s-peak"}, 644153},
{{645348, "captchabot-a-discord-bot-to-protect-your"}, 592810},
{{645701, "the-xero-hour-saint-greta-of-thunberg"}, 644081},
{{645701, "batman-v-superman-theological-notions"}, 590189},
{{645918, "emacs-is-great-ep-0-init-el-from-org"}, 575666},
{{645918, "emacs-is-great-ep-1-packages"}, 575666},
{{645918, "emacs-is-great-ep-40-pt-2-hebrew"}, 575668},
{{645923, "nasal-snuff-review-osp-batch-2"}, 575658},
{{645923, "why-bit-coin"}, 575658},
{{645929, "begin-quest"}, 598822},
{{645929, "filthy-foe"}, 588386},
{{645929, "unsanitary-snow"}, 588386},
{{645929, "famispam-1-music-box"}, 588386},
{{645929, "running-away"}, 598822},
{{645931, "my-beloved-chris-madsen"}, 589114},
{{645931, "space-is-consciousness-chris-madsen"}, 589116},
{{645947, "gasifier-rocket-stove-secondary-burn"}, 590595},
{{645949, "mouse-razer-abyssus-v2-e-mousepad"}, 591139},
{{645949, "pr-temporada-2018-league-of-legends"}, 591138},
{{645949, "windows-10-build-9901-pt-br"}, 591137},
{{645949, "abrindo-pacotes-do-festival-lunar-2018"}, 591139},
{{645949, "unboxing-camisetas-personalizadas-play-e"}, 591138},
{{645949, "abrindo-envelopes-do-festival-lunar-2017"}, 591138},
{{645951, "grub-my-grub-played-guruku-tersayang"}, 618033},
{{645951, "ismeeltimepiece"}, 618038},
{{645951, "thoughts-on-doom"}, 596485},
{{645951, "thoughts-on-god-of-war-about-as-deep-as"}, 596485},
{{645956, "linux-lite-3-6-see-what-s-new"}, 645195},
{{646191, "kahlil-gibran-the-prophet-part-1"}, 597637},
{{646551, "crypto-market-crash-should-you-sell-your"}, 442613},
{{646551, "live-crypto-trading-and-market-analysis"}, 442615},
{{646551, "5-reasons-trading-is-always-better-than"}, 500850},
{{646551, "digitex-futures-dump-panic-selling-or"}, 568065},
{{646552, "how-to-install-polarr-on-kali-linux-bynp"}, 466235},
{{646586, "electoral-college-kids-civics-lesson"}, 430818},
{{646602, "grapes-full-90-minute-watercolour"}, 537108},
{{646602, "meizu-mx4-the-second-ubuntu-phone"}, 537109},
{{646609, "how-to-set-up-the-ledger-nano-x"}, 569992},
{{646609, "how-to-buy-ethereum"}, 482354},
{{646609, "how-to-install-setup-the-exodus-multi"}, 482356},
{{646609, "how-to-manage-your-passwords-using"}, 531987},
{{646609, "cryptodad-s-live-q-a-friday-may-3rd-2019"}, 562303},
{{646638, "resident-evil-ada-chapter-5-final"}, 605612},
{{646639, "taurus-june-2019-career-love-tarot"}, 586910},
{{646652, "digital-bullpen-ep-5-building-a-digital"}, 589274},
{{646661, "sunlight"}, 591076},
{{646661, "grasp-lab-nasa-open-mct-series"}, 589414},
{{646663, "bunnula-s-creepers-tim-pool-s-beanie-a"}, 599669},
{{646663, "bunnula-music-hey-ya-by-outkast"}, 605685},
{{646663, "bunnula-tv-s-music-television-eunoia"}, 644437},
{{646663, "the-pussy-centipede-40-sneakers-and"}, 587265},
{{646663, "bunnula-reacts-ashton-titty-whitty"}, 596988},
{{646677, "filip-reviews-jeromes-dream-cataracts-so"}, 589751},
{{646691, "fascism-and-its-mobilizing-passions"}, 464342},
{{646692, "hsb-color-layers-action-for-adobe"}, 586533},
{{646692, "master-colorist-action-pack-extracting"}, 631830},
{{646693, "how-to-protect-your-garden-from-animals"}, 588476},
{{646693, "gardening-for-the-apocalypse-epic"}, 588472},
{{646693, "my-first-bee-hive-foundationless-natural"}, 588469},
{{646693, "dragon-fruit-and-passion-fruit-planting"}, 588470},
{{646693, "installing-my-first-foundationless"}, 588469},
{{646705, "first-naza-fpv"}, 590411},
{{646717, "first-burning-man-2019-detour-034"}, 630247},
{{646717, "why-bob-marley-was-an-idiot-test-driving"}, 477558},
{{646717, "we-are-addicted-to-gambling-ufc-207-w"}, 481398},
{{646717, "ghetto-swap-meet-selling-storage-lockers"}, 498291},
{{646738, "1-kings-chapter-7-summary-and-what-god"}, 586599},
{{646814, "brand-spanking-new-junior-high-school"}, 592378},
{{646814, "lupe-fiasco-freestyle-at-end-of-the-weak"}, 639535},
{{646824, "how-to-one-stroke-painting-doodles-mixed"}, 592404},
{{646824, "acrylic-pouring-landscape-with-a-tree"}, 592404},
{{646824, "how-to-make-a-diy-concrete-paste-planter"}, 595976},
{{646824, "how-to-make-a-rustic-sand-planter-sand"}, 592404},
{{646833, "3-day-festival-at-the-galilee-lake-and"}, 592842},
{{646833, "rainbow-circle-around-the-noon-sun-above"}, 592842},
{{646833, "energetic-self-control-demonstration"}, 623811},
{{646833, "bees-congregating"}, 592842},
{{646856, "formula-offroad-honefoss-sunday-track2"}, 592872},
{{646862, "h3video1-dc-vs-mb-1"}, 593237},
{{646862, "h3video1-iwasgoingto-load-up-gmod-but"}, 593237},
{{646883, "watch-this-game-developer-make-a-video"}, 592593},
{{646883, "how-to-write-secure-javascript"}, 592593},
{{646883, "blockchain-technology-explained-2-hour"}, 592593},
{{646888, "fl-studio-bits"}, 608155},
{{646914, "andy-s-shed-live-s03e02-the-longest"}, 592200},
{{646914, "gpo-telephone-776-phone-restoration"}, 592201},
{{646916, "toxic-studios-co-stream-pubg"}, 597126},
{{646916, "hyperlapse-of-prague-praha-from-inside"}, 597109},
{{646933, "videobits-1"}, 597378},
{{646933, "clouds-developing-daytime-8"}, 597378},
{{646933, "slechtvalk-in-watertoren-bodegraven"}, 597378},
{{646933, "timelapse-maansverduistering-16-juli"}, 605880},
{{646933, "startrails-27"}, 597378},
{{646933, "passing-clouds-daytime-3"}, 597378},
{{646940, "nerdgasm-unboxing-massive-playing-cards"}, 597421},
{{646946, "debunking-cops-volume-3-the-murder-of"}, 630570},
{{646961, "kingsong-ks16x-electric-unicycle-250km"}, 636725},
{{646968, "wild-mountain-goats-amazing-rock"}, 621940},
{{646968, "no-shelter-backcountry-camping-in"}, 621940},
{{646968, "can-i-live-in-this-through-winter-lets"}, 645750},
{{646968, "why-i-wear-a-chest-rig-backcountry-or"}, 621940},
{{646989, "marc-ivan-o-gorman-promo-producer-editor"}, 645656},
{{647045, "@moraltis"}, 646367},
{{647045, "moraltis-twitch-highlights-first-edit"}, 646368},
{{647075, "the-3-massive-tinder-convo-mistakes"}, 629464},
{{647075, "how-to-get-friend-zoned-via-text"}, 592298},
{{647075, "don-t-do-this-on-tinder"}, 624591},
{{647322, "world-of-tanks-7-kills"}, 609905},
{{647322, "the-tier-6-auto-loading-swedish-meatball"}, 591338},
{{647416, "hypnotic-soundscapes-garden-of-the"}, 596923},
{{647416, "hypnotic-soundscapes-the-cauldron-sacred"}, 596928},
{{647416, "schumann-resonance-to-theta-sweep"}, 596920},
{{647416, "conversational-indirect-hypnosis-why"}, 596913},
{{647493, "mimirs-brunnr"}, 590498},
{{648143, "live-ita-completiamo-the-evil-within-2"}, 646568},
{{648203, "why-we-love-people-that-hurt-us"}, 591128},
{{648203, "i-didn-t-like-my-baby-and-considered"}, 591128},
{{648220, "trade-talk-001-i-m-a-vlogger-now-fielder"}, 597303},
{{648220, "vise-restoration-record-no-6-vise"}, 597303},
{{648540, "amv-reign"}, 571863},
{{648540, "amv-virus"}, 571863},
{{648588, "audial-drift-(a-journey-into-sound)"}, 630217},
{{648616, "quick-zbrush-tip-transpose-master-scale"}, 463205},
{{648616, "how-to-create-3d-horns-maya-to-zbrush-2"}, 463205},
{{648815, "arduino-based-cartridge-game-handheld"}, 593252},
{{648815, "a-maze-update-3-new-game-modes-amazing"}, 593252},
{{649209, "denmark-trip"}, 591428},
{{649209, "stunning-4k-drone-footage"}, 591428},
{{649215, "how-to-create-a-channel-and-publish-a"}, 414908},
{{649215, "lbryclass-11-how-to-get-your-deposit"}, 632420},
{{649543, "spring-break-madness-at-universal"}, 599698},
{{649921, "navegador-brave-navegador-da-web-seguro"}, 649261},
{{650191, "stream-intro"}, 591301},
{{650946, "platelet-chan-fan-art"}, 584601},
{{650946, "aqua-fanart"}, 584601},
{{650946, "virginmedia-stores-password-in-plain"}, 619537},
{{650946, "running-linux-on-android-teaser"}, 604441},
{{650946, "hatsune-miku-ievan-polka"}, 600126},
{{650946, "digital-security-and-privacy-2-and-a-new"}, 600135},
{{650993, "my-editorial-comment-on-recent-youtube"}, 590305},
{{650993, "drive-7-18-2018"}, 590305},
{{651011, "old-world-put-on-realm-realms-gg"}, 591899},
{{651011, "make-your-own-soundboard-with-autohotkey"}, 591899},
{{651011, "ark-survival-https-discord-gg-ad26xa"}, 637680},
{{651011, "minecraft-featuring-seus-8-just-came-4"}, 596488},
{{651057, "found-footage-bikinis-at-the-beach-with"}, 593586},
{{651057, "found-footage-sexy-mom-a-mink-stole"}, 593586},
{{651067, "who-are-the-gentiles-gomer"}, 597094},
{{651067, "take-back-the-kingdom-ep-2-450-million"}, 597094},
{{651067, "mmxtac-implemented-footstep-sounds-and"}, 597094},
{{651067, "dynasoul-s-blender-to-unreal-animated"}, 597094},
{{651103, "calling-a-scammer-syntax-error"}, 612532},
{{651103, "quick-highlight-of-my-day"}, 647651},
{{651103, "calling-scammers-and-singing-christmas"}, 612531},
{{651109, "@livingtzm"}, 637322},
{{651109, "living-tzm-juuso-from-finland-september"}, 643412},
{{651373, "se-voc-rir-ou-sorrir-reinicie-o-v-deo"}, 649302},
{{651476, "what-is-pagan-online-polished-new-arpg"}, 592157},
{{651476, "must-have-elder-scrolls-online-addons"}, 592156},
{{651476, "who-should-play-albion-online"}, 592156},
{{651730, "person-detection-with-keras-tensorflow"}, 621276},
{{651730, "youtube-censorship-take-two"}, 587249},
{{651730, "new-red-tail-shark-and-two-silver-sharks"}, 587251},
{{651730, "around-auckland"}, 587250},
{{651730, "humanism-in-islam"}, 587250},
{{651730, "tigers-at-auckland-zoo"}, 587250},
{{651730, "gravity-demonstration"}, 587250},
{{651730, "copyright-question"}, 587249},
{{651730, "uberg33k-the-ultimate-software-developer"}, 599522},
{{651730, "chl-e-swarbrick-auckland-mayoral"}, 587250},
{{651730, "code-reviews"}, 587249},
{{651730, "raising-robots"}, 587251},
{{651730, "teaching-python"}, 587250},
{{651730, "kelly-tarlton-2016"}, 587250},
{{652172, "where-is-everything"}, 589491},
{{652172, "some-guy-and-his-camera"}, 617062},
{{652172, "practical-information-pt-1"}, 589491},
{{652172, "latent-vibrations"}, 589491},
{{652172, "maldek-compilation"}, 589491},
{{652444, "thank-you-etika-thank-you-desmond"}, 652121},
{{652611, "plants-vs-zombies-gw2-20190827183609"}, 624339},
{{652611, "wolfenstein-the-new-order-playthrough-6"}, 650299},
{{652887, "a-codeigniter-cms-open-source-download"}, 652737},
{{652966, "@pokesadventures"}, 632391},
{{653009, "flat-earth-uk-convention-is-a-bust"}, 585786},
{{653009, "flat-earth-reset-flat-earth-money-tree"}, 585786},
{{653011, "veil-of-thorns-dispirit-brutal-leech-3"}, 652475},
{{653069, "being-born-after-9-11"}, 632218},
{{653069, "8-years-on-youtube-what-it-has-done-for"}, 637130},
{{653069, "answering-questions-how-original"}, 521447},
{{653069, "talking-about-my-first-comedy-stand-up"}, 583450},
{{653069, "doing-push-ups-in-public"}, 650920},
{{653069, "vlog-extra"}, 465997},
{{653069, "crying-myself"}, 465997},
{{653069, "xbox-rejection"}, 465992},
{{653354, "msps-how-to-find-a-linux-job-where-no"}, 642537},
{{653354, "windows-is-better-than-linux-vlog-it-and"}, 646306},
{{653354, "luke-smith-is-wrong-about-everything"}, 507717},
{{653354, "advice-for-those-starting-out-in-tech"}, 612452},
{{653354, "treating-yourself-to-make-studying-more"}, 623561},
{{653354, "lpi-linux-essential-dns-tools-vlog-what"}, 559464},
{{653354, "is-learning-linux-worth-it-in-2019-vlog"}, 570886},
{{653354, "huawei-linux-and-cellphones-in-2019-vlog"}, 578501},
{{653354, "how-to-use-webmin-to-manage-linux"}, 511507},
{{653354, "latency-concurrency-and-the-best-value"}, 596857},
{{653354, "how-to-use-the-pomodoro-method-in-it"}, 506632},
{{653354, "negotiating-compensation-vlog-it-and"}, 542317},
{{653354, "procedural-goals-vs-outcome-goals-vlog"}, 626785},
{{653354, "intro-to-raid-understanding-how-raid"}, 529341},
{{653354, "smokeping"}, 574693},
{{653354, "richard-stallman-should-not-be-fired"}, 634928},
{{653354, "unusual-or-specialty-certifications-vlog"}, 620146},
{{653354, "gratitude-and-small-projects-vlog-it"}, 564900},
{{653354, "why-linux-on-the-smartphone-is-important"}, 649543},
{{653354, "opportunity-costs-vlog-it-devops-career"}, 549708},
{{653354, "double-giveaway-lpi-class-dates-and"}, 608129},
{{653354, "linux-on-the-smartphone-in-2019-librem"}, 530426},
{{653524, "celtic-folk-music-full-live-concert-mps"}, 589762},
{{653745, "aftermath-of-the-mac"}, 592768},
{{653745, "b-c-a-glock-17-threaded-barrel"}, 592770},
{{653800, "middle-earth-shadow-of-mordor-by"}, 590229},
{{654079, "tomand-jeremy-chirs45"}, 614296},
{{654096, "achamos-carteira-com-grana-olha-o-que"}, 466262},
{{654096, "viagem-bizarra-e-cansativa-ao-nordeste"}, 466263},
{{654096, "tedio-na-tailandia-limpeza-de-area"}, 466265},
{{654425, "schau-bung-2014-in-windischgarsten"}, 654410},
{{654425, "mitternachtseinlage-ball-rk"}, 654410},
{{654425, "zugabe-ball-rk-windischgarsten"}, 654412},
{{654722, "skytrain-in-korea"}, 463145},
{{654722, "luwak-coffee-the-shit-coffee"}, 463155},
{{654722, "puppet-show-in-bangkok-thailand"}, 462812},
{{654722, "kyaito-market-myanmar"}, 462813},
{{654724, "wipeout-zombies-bo3-custom-zombies-1st"}, 589569},
{{654724, "the-street-bo3-custom-zombies"}, 589544},
{{654880, "wwii-airsoft-pow"}, 586968},
{{654880, "dueling-geese-fight-to-the-death"}, 586968},
{{654880, "wwii-airsoft-torgau-raw-footage-part4"}, 586968},
{{655173, "april-2019-q-and-a"}, 554032},
{{655173, "the-meaning-and-reality-of-individual"}, 607892},
{{655173, "steven-pinker-progress-despite"}, 616984},
{{655173, "we-make-stories-out-of-totem-poles"}, 549090},
{{655173, "jamil-jivani-author-of-why-young-men"}, 542035},
{{655173, "commentaries-on-jb-peterson-rebel-wisdom"}, 528898},
{{655173, "auckland-clip-4-on-cain-and-abel"}, 629242},
{{655173, "peterson-vs-zizek-livestream-tickets"}, 545285},
{{655173, "auckland-clip-3-the-dawning-of-the-moral"}, 621154},
{{655173, "religious-belief-and-the-enlightenment"}, 606269},
{{655173, "auckland-lc-highlight-1-the-presumption"}, 565783},
{{655173, "q-a-sir-roger-scruton-dr-jordan-b"}, 544184},
{{655173, "cancellation-polish-national-foundation"}, 562529},
{{655173, "the-coddling-of-the-american-mind-haidt"}, 440185},
{{655173, "02-harris-weinstein-peterson-discussion"}, 430896},
{{655173, "jordan-peterson-threatens-everything-of"}, 519737},
{{655173, "on-claiming-belief-in-god-commentary"}, 581738},
{{655173, "how-to-make-the-world-better-really-with"}, 482317},
{{655173, "quillette-discussion-with-founder-editor"}, 413749},
{{655173, "jb-peterson-on-free-thought-and-speech"}, 462849},
{{655173, "marxism-zizek-peterson-official-video"}, 578453},
{{655173, "patreon-problem-solution-dave-rubin-dr"}, 490394},
{{655173, "next-week-st-louis-salt-lake-city"}, 445933},
{{655173, "conversations-with-john-anderson-jordan"}, 529981},
{{655173, "nz-australia-12-rules-tour-next-2-weeks"}, 518649},
{{655173, "a-call-to-rebellion-for-ontario-legal"}, 285451},
{{655173, "2016-personality-lecture-12"}, 578465},
{{655173, "on-the-vital-necessity-of-free-speech"}, 427404},
{{655173, "2017-01-23-social-justice-freedom-of"}, 578465},
{{655173, "discussion-sam-harris-the-idw-and-the"}, 423332},
{{655173, "march-2018-patreon-q-a"}, 413749},
{{655173, "take-aim-even-badly"}, 490395},
{{655173, "jp-f-wwbgo6a2w"}, 539940},
{{655173, "patreon-account-deletion"}, 503477},
{{655173, "canada-us-europe-tour-august-dec-2018"}, 413749},
{{655173, "leaders-myth-reality-general-stanley"}, 514333},
{{655173, "jp-ifi5kkxig3s"}, 539940},
{{655173, "documentary-a-glitch-in-the-matrix-david"}, 413749},
{{655173, "2017-08-14-patreon-q-and-a"}, 285451},
{{655173, "postmodernism-history-and-diagnosis"}, 285451},
{{655173, "23-minutes-from-maps-of-meaning-the"}, 413749},
{{655173, "milo-forbidden-conversation"}, 578493},
{{655173, "jp-wnjbasba-qw"}, 539940},
{{655173, "uk-12-rules-tour-october-and-november"}, 462849},
{{655173, "2015-maps-of-meaning-10-culture-anomaly"}, 578465},
{{655173, "ayaan-hirsi-ali-islam-mecca-vs-medina"}, 285452},
{{655173, "jp-f9393el2z1i"}, 539940},
{{655173, "campus-indoctrination-the-parasitization"}, 285453},
{{655173, "jp-owgc63khcl8"}, 539940},
{{655173, "the-death-and-resurrection-of-christ-a"}, 413749},
{{655173, "01-harris-weinstein-peterson-discussion"}, 430896},
{{655173, "enlightenment-now-steven-pinker-jb"}, 413749},
{{655173, "the-lindsay-shepherd-affair-update"}, 413749},
{{655173, "jp-g3fwumq5k8i"}, 539940},
{{655173, "jp-evvs3l-abv4"}, 539940},
{{655173, "former-australian-deputy-pm-john"}, 413750},
{{655173, "message-to-my-korean-readers-90-seconds"}, 477424},
{{655173, "jp--0xbomwjkgm"}, 539940},
{{655173, "ben-shapiro-jordan-peterson-and-a-12"}, 413749},
{{655173, "jp-91jwsb7zyhw"}, 539940},
{{655173, "deconstruction-the-lindsay-shepherd"}, 299272},
{{655173, "september-patreon-q-a"}, 285451},
{{655173, "jp-2c3m0tt5kce"}, 539940},
{{655173, "australia-s-john-anderson-dr-jordan-b"}, 413749},
{{655173, "jp-hdrlq7dpiws"}, 539940},
{{655173, "stephen-hicks-postmodernism-reprise"}, 578480},
{{655173, "october-patreon-q-a"}, 285451},
{{655173, "an-animated-intro-to-truth-order-and"}, 413749},
{{655173, "jp-bsh37-x5rny"}, 539940},
{{655173, "january-2019-q-a"}, 503477},
{{655173, "comedians-canaries-and-coalmines"}, 498586},
{{655173, "the-democrats-apology-and-promise"}, 465433},
{{655173, "jp-s4c-jodptn8"}, 539940},
{{655173, "2014-personality-lecture-16-extraversion"}, 578465},
{{655173, "dr-jordan-b-peterson-on-femsplainers"}, 490395},
{{655173, "higher-ed-our-cultural-inflection-point"}, 527291},
{{655173, "archetype-reality-friendship-and"}, 519736},
{{655173, "sir-roger-scruton-dr-jordan-b-peterson"}, 490395},
{{655173, "jp-cf2nqmqifxc"}, 539940},
{{655173, "penguin-uk-12-rules-for-life"}, 413749},
{{655173, "march-2019-q-and-a"}, 537138},
{{655173, "jp-ne5vbomsqjc"}, 539940},
{{655173, "dublin-london-harris-murray-new-usa-12"}, 413749},
{{655173, "12-rules-12-cities-tickets-now-available"}, 413749},
{{655173, "jp-j9j-bvdrgdi"}, 539940},
{{655173, "responsibility-conscience-and-meaning"}, 499123},
{{655173, "04-harris-murray-peterson-discussion"}, 436678},
{{655173, "jp-ayhaz9k008q"}, 539940},
{{655173, "with-jocko-willink-the-catastrophe-of"}, 490395},
{{655173, "interview-with-the-grievance-studies"}, 501296},
{{655173, "russell-brand-jordan-b-peterson-under"}, 413750},
{{655173, "goodbye-to-patreon"}, 496771},
{{655173, "revamped-podcast-announcement-with"}, 540943},
{{655173, "swedes-want-to-know"}, 285453},
{{655173, "auckland-clip-2-the-four-fundamental"}, 607892},
{{655173, "jp-dtirzqmgbdm"}, 539940},
{{655173, "political-correctness-a-force-for-good-a"}, 413750},
{{655173, "sean-plunket-full-interview-new-zealand"}, 597638},
{{655173, "q-a-the-meaning-and-reality-of"}, 616984},
{{655173, "lecture-and-q-a-with-jordan-peterson-the"}, 413749},
{{655173, "2017-personality-07-carl-jung-and-the"}, 578465},
{{655173, "nina-paley-animator-extraordinaire"}, 413750},
{{655173, "truth-as-the-antidote-to-suffering-with"}, 455127},
{{655173, "bishop-barron-word-on-fire"}, 599814},
{{655173, "zizek-vs-peterson-april-19"}, 527291},
{{655173, "revamped-podcast-with-westwood-one"}, 540943},
{{655173, "2016-11-19-university-of-toronto-free"}, 578465},
{{655173, "jp-1emrmtrj5jc"}, 539940},
{{655173, "who-is-joe-rogan-with-jordan-peterson"}, 585578},
{{655173, "who-dares-say-he-believes-in-god"}, 581738},
{{655252, "games-with-live2d"}, 589978},
{{655252, "kaenbyou-rin-live2d"}, 589978},
{{655374, "steam-groups-are-crazy"}, 607590},
{{655379, "asmr-captain-falcon-happily-beats-you-up"}, 644574},
{{655379, "pixel-art-series-5-link-holding-the"}, 442952},
{{655379, "who-can-cross-the-planck-length-the-hero"}, 610830},
{{655379, "ssbb-the-yoshi-grab-release-crash"}, 609747},
{{655379, "tas-captain-falcon-s-bizarre-adventure"}, 442958},
{{655379, "super-smash-bros-in-360-test"}, 442963},
{{655379, "what-if-luigi-was-b-u-f-f"}, 442971},
{{655803, "sun-time-lapse-test-7"}, 610634},
{{655952, "upper-build-complete"}, 591728},
{{656758, "cryptocurrency-awareness-adoption-the"}, 541770},
{{656829, "3d-printing-technologies-comparison"}, 462685},
{{656829, "3d-printing-for-everyone"}, 462685},
{{657052, "tni-punya-ilmu-kanuragan-gaya-baru"}, 657045},
{{657052, "papa-sunimah-nelpon-sri-utami-emon"}, 657045},
{{657274, "rapforlife-4-win"}, 656856},
{{657274, "bizzilion-proof-of-withdrawal"}, 656856},
{{657420, "quick-drawing-prince-tribute-colored"}, 605630},
{{657453, "white-boy-tom-mcdonald-facts"}, 597169},
{{657453, "is-it-ok-to-look-when-you-with-your-girl"}, 610508},
{{657584, "need-for-speed-ryzen-5-1600-gtx-1050-ti"}, 657161},
{{657584, "quantum-break-ryzen-5-1600-gtx-1050-ti-4"}, 657161},
{{657584, "nightcore-legends-never-die"}, 657161},
{{657706, "mtb-enduro-ferragosto-2019-sestri"}, 638904},
{{657706, "warface-free-for-all"}, 638908},
{{657782, "nick-warren-at-loveland-but-not-really"}, 444299},
{{658098, "le-temps-nous-glisse-entre-les-doigts"}, 600099},
};
#endif // TAKEOVERWORKAROUNDS_H

953
src/claimtrie/trie.cpp Normal file
View file

@ -0,0 +1,953 @@
#include <forks.h>
#include <hashes.h>
#include <log.h>
#include <sqlite.h>
#include <trie.h>
#include <algorithm>
#include <memory>
#define logPrint CLogPrint::global()
static const auto one = uint256S("0000000000000000000000000000000000000000000000000000000000000001");
std::vector<unsigned char> heightToVch(int n)
{
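// encode the height as an 8-byte big-endian integer; the first four bytes stay zero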
std::vector<uint8_t> vchHeight(8, 0);
vchHeight[4] = n >> 24;
vchHeight[5] = n >> 16;
vchHeight[6] = n >> 8;
vchHeight[7] = n;
return vchHeight;
}
uint256 getValueHash(const COutPoint& outPoint, int nHeightOfLastTakeover)
{
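// combine the hashes of the txid (hash1), the output index as a decimal string (hash2),
// and the height of the last takeover (hash3) into a single value hash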
auto hash1 = Hash(outPoint.hash.begin(), outPoint.hash.end());
auto snOut = std::to_string(outPoint.n);
auto hash2 = Hash(snOut.begin(), snOut.end());
auto vchHash = heightToVch(nHeightOfLastTakeover);
auto hash3 = Hash(vchHash.begin(), vchHash.end());
return Hash(hash1.begin(), hash1.end(), hash2.begin(), hash2.end(), hash3.begin(), hash3.end());
}
static const sqlite::sqlite_config sharedConfig {
sqlite::OpenFlags::READWRITE | sqlite::OpenFlags::CREATE,
nullptr, sqlite::Encoding::UTF8
};
void applyPragmas(sqlite::database& db, std::size_t cache)
{
db << "PRAGMA cache_size=-" + std::to_string(cache); // in -KB
db << "PRAGMA synchronous=OFF"; // don't disk sync after transaction commit
db << "PRAGMA journal_mode=WAL";
db << "PRAGMA temp_store=MEMORY";
db << "PRAGMA case_sensitive_like=true";
db.define("POPS", [](std::string s) -> std::string { if (!s.empty()) s.pop_back(); return s; });
db.define("REVERSE", [](std::vector<uint8_t> s) -> std::vector<uint8_t> { std::reverse(s.begin(), s.end()); return s; });
}
CClaimTrie::CClaimTrie(std::size_t cacheBytes, bool fWipe, int height,
const std::string& dataDir,
int nNormalizedNameForkHeight,
int nMinRemovalWorkaroundHeight,
int nMaxRemovalWorkaroundHeight,
int64_t nOriginalClaimExpirationTime,
int64_t nExtendedClaimExpirationTime,
int64_t nExtendedClaimExpirationForkHeight,
int64_t nAllClaimsInMerkleForkHeight,
int proportionalDelayFactor) :
nNextHeight(height),
dbCacheBytes(cacheBytes),
dbFile(dataDir + "/claims.sqlite"), db(dbFile, sharedConfig),
nProportionalDelayFactor(proportionalDelayFactor),
nNormalizedNameForkHeight(nNormalizedNameForkHeight),
nMinRemovalWorkaroundHeight(nMinRemovalWorkaroundHeight),
nMaxRemovalWorkaroundHeight(nMaxRemovalWorkaroundHeight),
nOriginalClaimExpirationTime(nOriginalClaimExpirationTime),
nExtendedClaimExpirationTime(nExtendedClaimExpirationTime),
nExtendedClaimExpirationForkHeight(nExtendedClaimExpirationForkHeight),
nAllClaimsInMerkleForkHeight(nAllClaimsInMerkleForkHeight)
{
applyPragmas(db, 5U * 1024U); // in KB
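// schema: node stores the trie structure (name, parent, cached hash); claim and support
// store claim/support rows keyed by nodeName; takeover records the takeover history per name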
db << "CREATE TABLE IF NOT EXISTS node (name BLOB NOT NULL PRIMARY KEY, "
"parent BLOB REFERENCES node(name) DEFERRABLE INITIALLY DEFERRED, "
"hash BLOB)";
db << "CREATE TABLE IF NOT EXISTS claim (claimID BLOB NOT NULL PRIMARY KEY, name BLOB NOT NULL, "
"nodeName BLOB NOT NULL REFERENCES node(name) DEFERRABLE INITIALLY DEFERRED, "
"txID BLOB NOT NULL, txN INTEGER NOT NULL, originalHeight INTEGER NOT NULL, updateHeight INTEGER NOT NULL, "
"validHeight INTEGER NOT NULL, activationHeight INTEGER NOT NULL, "
"expirationHeight INTEGER NOT NULL, amount INTEGER NOT NULL);";
db << "CREATE TABLE IF NOT EXISTS support (txID BLOB NOT NULL, txN INTEGER NOT NULL, "
"supportedClaimID BLOB NOT NULL, name BLOB NOT NULL, nodeName BLOB NOT NULL, "
"blockHeight INTEGER NOT NULL, validHeight INTEGER NOT NULL, activationHeight INTEGER NOT NULL, "
"expirationHeight INTEGER NOT NULL, amount INTEGER NOT NULL, PRIMARY KEY(txID, txN));";
db << "CREATE TABLE IF NOT EXISTS takeover (name BLOB NOT NULL, height INTEGER NOT NULL, "
"claimID BLOB, PRIMARY KEY(name, height DESC));";
if (fWipe) {
db << "DELETE FROM node";
db << "DELETE FROM claim";
db << "DELETE FROM support";
db << "DELETE FROM takeover";
}
db << "CREATE INDEX IF NOT EXISTS node_hash_len_name ON node (hash, LENGTH(name) DESC)";
// db << "CREATE UNIQUE INDEX IF NOT EXISTS node_parent_name ON node (parent, name)"; // no apparent gain
db << "CREATE INDEX IF NOT EXISTS node_parent ON node (parent)";
db << "CREATE INDEX IF NOT EXISTS takeover_height ON takeover (height)";
db << "CREATE INDEX IF NOT EXISTS claim_activationHeight ON claim (activationHeight)";
db << "CREATE INDEX IF NOT EXISTS claim_expirationHeight ON claim (expirationHeight)";
db << "CREATE INDEX IF NOT EXISTS claim_nodeName ON claim (nodeName)";
db << "CREATE INDEX IF NOT EXISTS support_supportedClaimID ON support (supportedClaimID)";
db << "CREATE INDEX IF NOT EXISTS support_activationHeight ON support (activationHeight)";
db << "CREATE INDEX IF NOT EXISTS support_expirationHeight ON support (expirationHeight)";
db << "CREATE INDEX IF NOT EXISTS support_nodeName ON support (nodeName)";
db << "INSERT OR IGNORE INTO node(name, hash) VALUES(x'', ?)" << one; // ensure that we always have our root node
}
CClaimTrieCacheBase::~CClaimTrieCacheBase()
{
if (transacting) {
db << "ROLLBACK";
transacting = false;
}
claimHashQuery.used(true);
childHashQuery.used(true);
claimHashQueryLimit.used(true);
}
bool CClaimTrie::SyncToDisk()
{
// alternatively, switch to full sync after we are caught up on the chain
auto rc = sqlite3_wal_checkpoint_v2(db.connection().get(), nullptr, SQLITE_CHECKPOINT_FULL, nullptr, nullptr);
return rc == SQLITE_OK;
}
bool CClaimTrie::empty()
{
int64_t count;
db << "SELECT COUNT(*) FROM (SELECT 1 FROM claim WHERE activationHeight < ?1 AND expirationHeight >= ?1 LIMIT 1)" << nNextHeight >> count;
return count == 0;
}
bool CClaimTrieCacheBase::haveClaim(const std::string& name, const COutPoint& outPoint) const
{
auto query = db << "SELECT 1 FROM claim WHERE nodeName = ?1 AND txID = ?2 AND txN = ?3 "
"AND activationHeight < ?4 AND expirationHeight >= ?4 LIMIT 1"
<< name << outPoint.hash << outPoint.n << nNextHeight;
return query.begin() != query.end();
}
bool CClaimTrieCacheBase::haveSupport(const std::string& name, const COutPoint& outPoint) const
{
auto query = db << "SELECT 1 FROM support WHERE nodeName = ?1 AND txID = ?2 AND txN = ?3 "
"AND activationHeight < ?4 AND expirationHeight >= ?4 LIMIT 1"
<< name << outPoint.hash << outPoint.n << nNextHeight;
return query.begin() != query.end();
}
supportEntryType CClaimTrieCacheBase::getSupportsForName(const std::string& name) const
{
// includes values that are not yet valid
auto query = db << "SELECT supportedClaimID, txID, txN, blockHeight, activationHeight, amount "
"FROM support WHERE nodeName = ? AND expirationHeight >= ?" << name << nNextHeight;
supportEntryType ret;
for (auto&& row: query) {
CSupportValue value;
row >> value.supportedClaimId >> value.outPoint.hash >> value.outPoint.n
>> value.nHeight >> value.nValidAtHeight >> value.nAmount;
ret.push_back(std::move(value));
}
return ret;
}
bool CClaimTrieCacheBase::haveClaimInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const
{
auto query = db << "SELECT activationHeight FROM claim WHERE nodeName = ? AND txID = ? AND txN = ? "
"AND activationHeight >= ? AND expirationHeight >= activationHeight LIMIT 1"
<< name << outPoint.hash << outPoint.n << nNextHeight;
for (auto&& row: query) {
row >> nValidAtHeight;
return true;
}
return false;
}
bool CClaimTrieCacheBase::haveSupportInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const
{
auto query = db << "SELECT activationHeight FROM support WHERE nodeName = ? AND txID = ? AND txN = ? "
"AND activationHeight >= ? AND expirationHeight >= activationHeight LIMIT 1"
<< name << outPoint.hash << outPoint.n << nNextHeight;
for (auto&& row: query) {
row >> nValidAtHeight;
return true;
}
return false;
}
bool CClaimTrieCacheBase::deleteNodeIfPossible(const std::string& name, std::string& parent, int64_t& claims)
{
if (name.empty()) return false;
// to remove a node it must have one or less children and no claims
db << "SELECT COUNT(*) FROM (SELECT 1 FROM claim WHERE nodeName = ?1 AND activationHeight < ?2 AND expirationHeight >= ?2 LIMIT 1)"
<< name << nNextHeight >> claims;
if (claims > 0) return false; // still has claims
// we now know it has no claims, but we need to check its children
int64_t count;
std::string childName;
// this line assumes that the parent field on child nodes has already been set,
// which means our caller is working through names in descending length order
db << "SELECT COUNT(*), MAX(name) FROM node WHERE parent = ?" << name >> std::tie(count, childName);
if (count > 1) return false; // still has multiple children
logPrint << "Removing node " << name << " with " << count << " children" << Clog::endl;
// okay. it's going away
auto query = db << "SELECT parent FROM node WHERE name = ?" << name;
auto it = query.begin();
if (it == query.end())
return true; // we'll assume that whoever deleted this node previously cleaned things up correctly
*it >> parent;
db << "DELETE FROM node WHERE name = ?" << name;
auto ret = db.rows_modified() > 0;
if (ret && count == 1) // make the child skip us and point to its grandparent:
db << "UPDATE node SET parent = ? WHERE name = ?" << parent << childName;
if (ret)
db << "UPDATE node SET hash = NULL WHERE name = ?" << parent;
return ret;
}
void CClaimTrieCacheBase::ensureTreeStructureIsUpToDate()
{
if (!transacting) return;
// a node's children are the nodes whose keys match its key but run at least one byte longer,
// and share no additional prefix with the other nodes in that set -- a hard query without the parent field
// when we get into this method, we have some claims that have been added, removed, and updated
// those each have a corresponding node in the list with a null hash
// some of our nodes will go away, some new ones will be added, some will be reparented
// the plan: update all the claim hashes first
std::vector<std::string> names;
db << "SELECT name FROM node WHERE hash IS NULL"
>> [&names](std::string name) {
names.push_back(std::move(name));
};
if (names.empty()) return; // nothing to do
std::sort(names.begin(), names.end()); // guessing this is faster than "ORDER BY name"
// there's an assumption that all nodes with claims are here; we do that as claims are inserted
// assume parents are not set correctly here:
auto parentQuery = db << "SELECT MAX(name) FROM node WHERE "
"name IN (WITH RECURSIVE prefix(p) AS (VALUES(?) UNION ALL "
"SELECT POPS(p) FROM prefix WHERE p != x'') SELECT p FROM prefix)";
auto insertQuery = db << "INSERT INTO node(name, parent, hash) VALUES(?, ?, NULL) "
"ON CONFLICT(name) DO UPDATE SET parent = excluded.parent, hash = NULL";
auto nodeQuery = db << "SELECT name FROM node WHERE parent = ?";
auto updateQuery = db << "UPDATE node SET parent = ? WHERE name = ?";
for (auto& name: names) {
int64_t claims;
std::string parent, node;
for (node = name; deleteNodeIfPossible(node, parent, claims);)
node = parent;
if (node != name || name.empty() || claims <= 0)
continue; // if you have no claims but we couldn't delete you, you must have legitimate children
parentQuery << name.substr(0, name.size() - 1);
auto queryIt = parentQuery.begin();
if (queryIt != parentQuery.end())
*queryIt >> parent;
else
parent.clear();
parentQuery++; // reusing knocks off about 10% of the query time
// we know now that we need to insert it,
// but we may need to insert a parent node for it first (also called a split)
const auto psize = parent.size() + 1;
for (auto&& row : nodeQuery << parent) {
std::string sibling; row >> sibling;
if (sibling.compare(0, psize, name, 0, psize) != 0)
continue;
auto splitPos = psize;
while(splitPos < sibling.size() && splitPos < name.size() && sibling[splitPos] == name[splitPos])
++splitPos;
auto newNodeName = name.substr(0, splitPos);
// update the to-be-fostered sibling:
updateQuery << newNodeName << sibling;
updateQuery++;
if (splitPos == name.size())
// our new node is the same as the one we wanted to insert
break;
// insert the split node:
logPrint << "Inserting split node " << newNodeName << " near " << sibling << ", parent " << parent << Clog::endl;
insertQuery << newNodeName << parent;
insertQuery++;
parent = std::move(newNodeName);
break;
}
nodeQuery++;
logPrint << "Inserting or updating node " << name << ", parent " << parent << Clog::endl;
insertQuery << name << parent;
insertQuery++;
}
nodeQuery.used(true);
updateQuery.used(true);
parentQuery.used(true);
insertQuery.used(true);
// now we need to percolate the nulls up the tree
// parents should all be set right
db << "UPDATE node SET hash = NULL WHERE name IN (WITH RECURSIVE prefix(p) AS "
"(SELECT parent FROM node WHERE hash IS NULL UNION SELECT parent FROM prefix, node "
"WHERE name = prefix.p AND prefix.p != x'') SELECT p FROM prefix)";
}
std::size_t CClaimTrieCacheBase::getTotalNamesInTrie() const
{
// you could do this select from the node table, but you would have to ensure it is not dirty first
std::size_t ret;
db << "SELECT COUNT(DISTINCT nodeName) FROM claim WHERE activationHeight < ?1 AND expirationHeight >= ?1"
<< nNextHeight >> ret;
return ret;
}
std::size_t CClaimTrieCacheBase::getTotalClaimsInTrie() const
{
std::size_t ret;
db << "SELECT COUNT(*) FROM claim WHERE activationHeight < ?1 AND expirationHeight >= ?1"
<< nNextHeight >> ret;
return ret;
}
int64_t CClaimTrieCacheBase::getTotalValueOfClaimsInTrie(bool fControllingOnly) const
{
int64_t ret = 0;
const std::string query = fControllingOnly ?
"SELECT SUM(amount) FROM (SELECT c.amount as amount, "
"(SELECT(SELECT IFNULL(SUM(s.amount),0)+c.amount FROM support s "
"WHERE s.supportedClaimID = c.claimID AND c.nodeName = s.nodeName "
"AND s.activationHeight < ?1 AND s.expirationHeight >= ?1) as effective "
"ORDER BY effective DESC LIMIT 1) as winner FROM claim c "
"WHERE c.activationHeight < ?1 AND c.expirationHeight >= ?1 GROUP BY c.nodeName)"
:
"SELECT SUM(amount) FROM (SELECT c.amount as amount "
"FROM claim c WHERE c.activationHeight < ?1 AND c.expirationHeight >= ?1)";
db << query << nNextHeight >> ret;
return ret;
}
bool CClaimTrieCacheBase::getInfoForName(const std::string& name, CClaimValue& claim, int heightOffset) const
{
auto ret = false;
auto nextHeight = nNextHeight + heightOffset;
for (auto&& row: claimHashQueryLimit << nextHeight << name) {
row >> claim.outPoint.hash >> claim.outPoint.n >> claim.claimId
>> claim.nHeight >> claim.nValidAtHeight >> claim.nAmount >> claim.nEffectiveAmount;
ret = true;
break;
}
claimHashQueryLimit++;
return ret;
}
CClaimSupportToName CClaimTrieCacheBase::getClaimsForName(const std::string& name) const
{
uint160 claimId;
int nLastTakeoverHeight = 0;
getLastTakeoverForName(name, claimId, nLastTakeoverHeight);
auto supports = getSupportsForName(name);
auto find = [&supports](decltype(supports)::iterator& it, const CClaimValue& claim) {
it = std::find_if(it, supports.end(), [&claim](const CSupportValue& support) {
return claim.claimId == support.supportedClaimId;
});
return it != supports.end();
};
auto query = db << "SELECT claimID, txID, txN, originalHeight, updateHeight, activationHeight, amount "
"FROM claim WHERE nodeName = ? AND expirationHeight >= ?"
<< name << nNextHeight;
// match support to claim
std::vector<CClaimNsupports> claimsNsupports;
for (auto &&row: query) {
CClaimValue claim;
int originalHeight;
row >> claim.claimId >> claim.outPoint.hash >> claim.outPoint.n
>> originalHeight >> claim.nHeight >> claim.nValidAtHeight >> claim.nAmount;
int64_t nAmount = claim.nValidAtHeight < nNextHeight ? claim.nAmount : 0;
auto ic = claimsNsupports.emplace(claimsNsupports.end(), claim, nAmount, originalHeight);
for (auto it = supports.begin(); find(it, claim); it = supports.erase(it)) {
if (it->nValidAtHeight < nNextHeight)
ic->effectiveAmount += it->nAmount;
ic->supports.push_back(std::move(*it));
}
ic->claim.nEffectiveAmount = ic->effectiveAmount;
}
std::sort(claimsNsupports.rbegin(), claimsNsupports.rend());
return {name, nLastTakeoverHeight, std::move(claimsNsupports), std::move(supports)};
}
void completeHash(uint256& partialHash, const std::string& key, int to)
{
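// fold the bytes of the key beyond position `to`, from the end of the key toward the front,
// into the partial hash one byte at a time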
for (auto it = key.rbegin(); std::distance(it, key.rend()) > to + 1; ++it)
partialHash = Hash(it, it + 1, partialHash.begin(), partialHash.end());
}
uint256 CClaimTrieCacheBase::computeNodeHash(const std::string& name, int takeoverHeight)
{
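// a node's hash commits to each child's first distinguishing byte plus that child's suffix-folded hash,
// and, if the node has a winning claim, that claim's value hash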
const auto pos = name.size();
std::vector<uint8_t> vchToHash;
// we have to free up the hash query so it can be reused by a child
childHashQuery << name >> [&vchToHash, pos](std::string name, uint256 hash) {
completeHash(hash, name, pos);
vchToHash.push_back(name[pos]);
vchToHash.insert(vchToHash.end(), hash.begin(), hash.end());
};
childHashQuery++;
CClaimValue claim;
if (getInfoForName(name, claim)) {
auto valueHash = getValueHash(claim.outPoint, takeoverHeight);
vchToHash.insert(vchToHash.end(), valueHash.begin(), valueHash.end());
}
return vchToHash.empty() ? one : Hash(vchToHash.begin(), vchToHash.end());
}
bool CClaimTrieCacheBase::checkConsistency()
{
// verify that all claims hash to the values on the nodes
auto query = db << "SELECT n.name, n.hash, "
"IFNULL((SELECT CASE WHEN t.claimID IS NULL THEN 0 ELSE t.height END "
"FROM takeover t WHERE t.name = n.name ORDER BY t.height DESC LIMIT 1), 0) FROM node n";
for (auto&& row: query) {
std::string name;
uint256 hash;
int takeoverHeight;
row >> name >> hash >> takeoverHeight;
auto computedHash = computeNodeHash(name, takeoverHeight);
if (computedHash != hash)
return false;
}
return true;
}
bool CClaimTrieCacheBase::validateDb(int height, const uint256& rootHash)
{
base->nNextHeight = nNextHeight = height + 1;
if (checkConsistency()) {
if (rootHash != getMerkleHash()) {
logPrint << "CClaimTrieCacheBase::" << __func__ << "(): the block's root claim hash doesn't match the persisted claim root hash." << Clog::endl;
return false;
}
if (nNextHeight > base->nAllClaimsInMerkleForkHeight) // index not used as part of sync:
db << "CREATE UNIQUE INDEX IF NOT EXISTS claim_reverseClaimID ON claim (REVERSE(claimID))";
return true;
}
return false;
}
bool CClaimTrieCacheBase::flush()
{
if (transacting) {
getMerkleHash();
auto code = sqlite::commit(db);
if (code != SQLITE_OK) {
logPrint << "ERROR in CClaimTrieCacheBase::" << __func__ << "(): SQLite code: " << code << Clog::endl;
return false;
}
transacting = false;
}
base->nNextHeight = nNextHeight;
removalWorkaround.clear();
return true;
}
const std::string childHashQuery_s = "SELECT name, hash FROM node WHERE parent = ? ORDER BY name";
const std::string claimHashQuery_s =
"SELECT c.txID, c.txN, c.claimID, c.updateHeight, c.activationHeight, c.amount, "
"(SELECT IFNULL(SUM(s.amount),0)+c.amount FROM support s "
"WHERE s.supportedClaimID = c.claimID AND s.nodeName = c.nodeName "
"AND s.activationHeight < ?1 AND s.expirationHeight >= ?1) as effectiveAmount "
"FROM claim c WHERE c.nodeName = ?2 AND c.activationHeight < ?1 AND c.expirationHeight >= ?1 "
"ORDER BY effectiveAmount DESC, c.updateHeight, c.txID, c.txN";
const std::string claimHashQueryLimit_s = claimHashQuery_s + " LIMIT 1";
extern const std::string proofClaimQuery_s =
"SELECT n.name, IFNULL((SELECT CASE WHEN t.claimID IS NULL THEN 0 ELSE t.height END "
"FROM takeover t WHERE t.name = n.name ORDER BY t.height DESC LIMIT 1), 0) FROM node n "
"WHERE n.name IN (WITH RECURSIVE prefix(p) AS (VALUES(?) UNION ALL "
"SELECT POPS(p) FROM prefix WHERE p != x'') SELECT p FROM prefix) "
"ORDER BY n.name";
CClaimTrieCacheBase::CClaimTrieCacheBase(CClaimTrie* base)
: base(base), db(base->dbFile, sharedConfig), transacting(false),
childHashQuery(db << childHashQuery_s),
claimHashQuery(db << claimHashQuery_s),
claimHashQueryLimit(db << claimHashQueryLimit_s)
{
assert(base);
nNextHeight = base->nNextHeight;
applyPragmas(db, base->dbCacheBytes >> 10U); // in KB
}
void CClaimTrieCacheBase::ensureTransacting()
{
if (!transacting) {
transacting = true;
db << "BEGIN";
}
}
int CClaimTrieCacheBase::expirationTime() const
{
return base->nOriginalClaimExpirationTime;
}
uint256 CClaimTrieCacheBase::getMerkleHash()
{
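// return the cached root hash if present; otherwise recompute hashes for all invalidated
// (NULL-hash) nodes, longest names first, so the last hash computed is the new root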
ensureTreeStructureIsUpToDate();
uint256 hash;
db << "SELECT hash FROM node WHERE name = x''"
>> [&hash](std::unique_ptr<uint256> rootHash) {
if (rootHash)
hash = std::move(*rootHash);
};
if (!hash.IsNull())
return hash;
assert(transacting); // no data changed but we didn't have the root hash there already?
auto updateQuery = db << "UPDATE node SET hash = ? WHERE name = ?";
db << "SELECT n.name, IFNULL((SELECT CASE WHEN t.claimID IS NULL THEN 0 ELSE t.height END FROM takeover t WHERE t.name = n.name "
"ORDER BY t.height DESC LIMIT 1), 0) FROM node n WHERE n.hash IS NULL ORDER BY LENGTH(n.name) DESC" // assumes n.name is blob
>> [this, &hash, &updateQuery](const std::string& name, int takeoverHeight) {
hash = computeNodeHash(name, takeoverHeight);
updateQuery << hash << name;
updateQuery++;
};
updateQuery.used(true);
return hash;
}
bool CClaimTrieCacheBase::getLastTakeoverForName(const std::string& name, uint160& claimId, int& takeoverHeight) const
{
auto query = db << "SELECT t.height, t.claimID FROM takeover t "
"WHERE t.name = ?1 ORDER BY t.height DESC LIMIT 1" << name;
auto it = query.begin();
if (it != query.end()) {
std::unique_ptr<uint160> claimIdOrNull;
*it >> takeoverHeight >> claimIdOrNull;
if (claimIdOrNull) {
claimId = *claimIdOrNull;
return true;
}
}
return false;
}
bool CClaimTrieCacheBase::addClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId,
int64_t nAmount, int nHeight, int nValidHeight, int originalHeight)
{
ensureTransacting();
// in the update scenario the previous one should be removed already
// in the downgrade scenario, the one ahead will be removed already and the old one's valid height is input
// revisiting the update scenario we have two options:
// 1. let callers pull the old one first, in which case they are responsible for passing in validHeight (since we can't determine whether it's a 0 delay)
// 2. don't remove the old one; have this method handle it as a gentler "update" operation.
// Option 2 has the issue that we don't actually update when there is no existing match,
// and there is no way to know that here without an 'update' flag
// In addition, since we currently do option 1, callers use the removal to get the old valid height and store it for undo;
// we would have to make this method return that value if we dropped the removal
// The other problem with option 1 is that the outer shell needs to know whether the claim it removed was a winner or not
if (nValidHeight <= 0)
nValidHeight = nHeight + getDelayForName(name, claimId); // sets nValidHeight to the old value
if (originalHeight <= 0)
originalHeight = nHeight;
auto nodeName = adjustNameForValidHeight(name, nValidHeight);
auto expires = expirationTime() + nHeight;
db << "INSERT INTO claim(claimID, name, nodeName, txID, txN, amount, originalHeight, updateHeight, "
"validHeight, activationHeight, expirationHeight) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
<< claimId << name << nodeName << outPoint.hash << outPoint.n << nAmount
<< originalHeight << nHeight << nValidHeight << nValidHeight << expires;
if (nValidHeight < nNextHeight)
db << "INSERT INTO node(name) VALUES(?) ON CONFLICT(name) DO UPDATE SET hash = NULL" << nodeName;
return true;
}
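
As a rough sketch only (hypothetical caller; the real flow lives in the validation code and its undo records), this is how removeClaim and addClaim pair up when unwinding a claim update:

// Illustrative sketch, not part of this changeset.
static void undoClaimUpdate(CClaimTrieCacheBase& cache, const uint160& claimId, const std::string& name,
                            const COutPoint& updateOutPoint, const COutPoint& previousOutPoint,
                            int64_t previousAmount, int previousHeight,
                            int storedValidHeight, int storedOriginalHeight)
{
    std::string nodeName;
    int removedValidHeight = 0, removedOriginalHeight = 0;
    // going backwards: first remove the update...
    cache.removeClaim(claimId, updateOutPoint, nodeName, removedValidHeight, removedOriginalHeight);
    // ...then restore the previous claim, feeding back the valid/original heights saved in the undo data
    cache.addClaim(name, previousOutPoint, claimId, previousAmount,
                   previousHeight, storedValidHeight, storedOriginalHeight);
}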
bool CClaimTrieCacheBase::addSupport(const std::string& name, const COutPoint& outPoint, const uint160& supportedClaimId,
int64_t nAmount, int nHeight, int nValidHeight)
{
ensureTransacting();
if (nValidHeight < 0)
nValidHeight = nHeight + getDelayForName(name, supportedClaimId);
auto nodeName = adjustNameForValidHeight(name, nValidHeight);
auto expires = expirationTime() + nHeight;
db << "INSERT INTO support(supportedClaimID, name, nodeName, txID, txN, amount, blockHeight, validHeight, activationHeight, expirationHeight) "
"VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?)"
<< supportedClaimId << name << nodeName << outPoint.hash << outPoint.n << nAmount << nHeight << nValidHeight << nValidHeight << expires;
if (nValidHeight < nNextHeight)
db << "UPDATE node SET hash = NULL WHERE name = ?" << nodeName;
return true;
}
bool CClaimTrieCacheBase::removeClaim(const uint160& claimId, const COutPoint& outPoint, std::string& nodeName,
int& validHeight, int& originalHeight)
{
// This gets tricky because we may be removing an update:
// going forward, we spend a claim (removeClaim) before updating it (addClaim);
// going backward, we first remove the update (removeClaim),
// then undo the spend of the previous claim by calling addClaim with the original data.
// To maintain the proper takeover height, the updater needs to use the height we return here.
{
auto query = db << "SELECT nodeName, originalHeight, activationHeight FROM claim "
"WHERE claimID = ? AND txID = ? AND txN = ? AND expirationHeight >= ?"
<< claimId << outPoint.hash << outPoint.n << nNextHeight;
auto it = query.begin();
if (it == query.end())
return false;
*it >> nodeName >> originalHeight >> validHeight;
}
ensureTransacting();
db << "DELETE FROM claim WHERE claimID = ? AND txID = ? AND txN = ?"
<< claimId << outPoint.hash << outPoint.n;
if (!db.rows_modified())
return false;
db << "UPDATE node SET hash = NULL WHERE name = ?" << nodeName;
// When a node should be deleted from the cache but is kept anyway
// (because it is a parent node and therefore cannot really be erased),
// the old code had a bug that forced a zero delay on re-add in that situation.
if (nNextHeight >= base->nMinRemovalWorkaroundHeight
&& nNextHeight < base->nMaxRemovalWorkaroundHeight) { // TODO: hard fork this out (which we already tried once but failed)
// neither LIKE nor SUBSTR will use an index on a blob, but BETWEEN is a good, fast alternative
auto end = nodeName + std::string( 256, std::numeric_limits<char>::max()); // 256 == MAX_CLAIM_NAME_SIZE + 1
auto innerQuery = db << "SELECT nodeName FROM claim WHERE nodeName BETWEEN ?1 AND ?2 "
"AND activationHeight < ?3 AND expirationHeight >= ?3 ORDER BY nodeName LIMIT 1"
<< nodeName << end << nNextHeight;
for (auto&& row: innerQuery) {
std::string shortestMatch;
row >> shortestMatch;
if (shortestMatch != nodeName)
// no remaining claim sits exactly on this name, but a descendant name still has one, so the node survives as a parent
removalWorkaround.insert(nodeName);
}
}
return true;
}
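
The BETWEEN bounds above rely on bytewise blob ordering: every name that starts with nodeName sorts at or after nodeName and at or before nodeName padded out past the maximum claim-name length. A rough C++ equivalent of that range test (illustrative only, not part of this changeset):

#include <limits>
#include <string>

// true iff `candidate` falls inside the [prefix, prefix + padding] range used by the query above
static bool inPrefixRange(const std::string& prefix, const std::string& candidate)
{
    const auto upper = prefix + std::string(256, std::numeric_limits<char>::max()); // 256 == MAX_CLAIM_NAME_SIZE + 1
    return candidate >= prefix && candidate <= upper;
}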
bool CClaimTrieCacheBase::removeSupport(const COutPoint& outPoint, std::string& nodeName, int& validHeight)
{
{
auto query = db << "SELECT nodeName, activationHeight FROM support "
"WHERE txID = ? AND txN = ? AND expirationHeight >= ?"
<< outPoint.hash << outPoint.n << nNextHeight;
auto it = query.begin();
if (it == query.end())
return false;
*it >> nodeName >> validHeight;
}
ensureTransacting();
db << "DELETE FROM support WHERE txID = ? AND txN = ?" << outPoint.hash << outPoint.n;
if (!db.rows_modified())
return false;
db << "UPDATE node SET hash = NULL WHERE name = ?" << nodeName;
return true;
}
// hardcoded claims that should trigger a takeover
#include <takeoverworkarounds.h>
bool CClaimTrieCacheBase::incrementBlock()
{
// the plan:
// for every claim and support that becomes active this block, set its node hash to null (i.e., mark it dirty)
// for every claim and support that expires this block, set its node hash to null and record it for the expiration undo data
// for all dirty nodes, look for new takeovers
ensureTransacting();
db << "INSERT INTO node(name) SELECT nodeName FROM claim INDEXED BY claim_activationHeight "
"WHERE activationHeight = ?1 AND expirationHeight > ?1 "
"ON CONFLICT(name) DO UPDATE SET hash = NULL"
<< nNextHeight;
// use UPDATE here (not INSERT): supports, and items that expire this block, must not create nodes for names that have no claim
db << "UPDATE node SET hash = NULL WHERE name IN "
"(SELECT nodeName FROM claim WHERE expirationHeight = ?1 "
"UNION SELECT nodeName FROM support WHERE expirationHeight = ?1 OR activationHeight = ?1)"
<< nNextHeight;
auto insertTakeoverQuery = db << "INSERT INTO takeover(name, height, claimID) VALUES(?, ?, ?)";
// takeover handling:
db << "SELECT name FROM node WHERE hash IS NULL"
>> [this, &insertTakeoverQuery](const std::string& nameWithTakeover) {
// if somebody activates on this block and they are the new best, then everybody activates on this block
CClaimValue candidateValue;
auto hasCandidate = getInfoForName(nameWithTakeover, candidateValue, 1);
// now that they're all in get the winner:
uint160 existingID;
int existingHeight = 0;
auto hasCurrentWinner = getLastTakeoverForName(nameWithTakeover, existingID, existingHeight);
// a takeover happens if there is no candidate, no current winner, or the winner is changing
auto takeoverHappening = !hasCandidate || !hasCurrentWinner || existingID != candidateValue.claimId;
if (takeoverHappening && activateAllFor(nameWithTakeover))
hasCandidate = getInfoForName(nameWithTakeover, candidateValue, 1);
// This is an ugly hack to work around a bug in the old code.
// The bug: un/support a name and then update it; this resets its takeover height to the current height,
// because the old code would add to the cache without setting block originals when dealing with supports.
if (nNextHeight < 658300) {
auto wit = takeoverWorkarounds.find(std::make_pair(nNextHeight, nameWithTakeover));
takeoverHappening |= wit != takeoverWorkarounds.end();
}
logPrint << "Takeover on " << nameWithTakeover << " at " << nNextHeight << ", happening: " << takeoverHappening << ", set before: " << hasCurrentWinner << Clog::endl;
if (takeoverHappening) {
if (hasCandidate)
insertTakeoverQuery << nameWithTakeover << nNextHeight << candidateValue.claimId;
else
insertTakeoverQuery << nameWithTakeover << nNextHeight << nullptr;
insertTakeoverQuery++;
}
};
insertTakeoverQuery.used(true);
nNextHeight++;
return true;
}
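
A hedged sketch of how a caller might drive this per connected block and collect what changed via the query helpers further down (the real flow, including undo handling, lives in the validation code):

// Illustrative only, not part of this changeset.
static void connectClaimTrieBlock(CClaimTrieCacheBase& cache, int blockHeight)
{
    cache.incrementBlock();                                  // activate/expire claims and supports, record takeovers
    auto activated = cache.getActivatedClaims(blockHeight);  // claims that became active at this height
    auto expired = cache.getExpiredClaims(blockHeight);      // claims that expired at this height
    // a real caller would stash these for the disconnect/undo path
    (void)activated;
    (void)expired;
}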
bool CClaimTrieCacheBase::activateAllFor(const std::string& name)
{
// now that we know a takeover is happening, we bring everybody in:
auto ret = false;
// all to activate now:
db << "UPDATE claim SET activationHeight = ?1 WHERE nodeName = ?2 AND activationHeight > ?1 AND expirationHeight > ?1" << nNextHeight << name;
ret |= db.rows_modified() > 0;
// then do the same for supports:
db << "UPDATE support SET activationHeight = ?1 WHERE nodeName = ?2 AND activationHeight > ?1 AND expirationHeight > ?1" << nNextHeight << name;
ret |= db.rows_modified() > 0;
return ret;
}
bool CClaimTrieCacheBase::decrementBlock()
{
ensureTransacting();
nNextHeight--;
db << "INSERT INTO node(name) SELECT nodeName FROM claim "
"WHERE expirationHeight = ? ON CONFLICT(name) DO UPDATE SET hash = NULL"
<< nNextHeight;
db << "UPDATE node SET hash = NULL WHERE name IN("
"SELECT nodeName FROM support WHERE expirationHeight = ?1 OR activationHeight = ?1 "
"UNION SELECT nodeName FROM claim WHERE activationHeight = ?1)"
<< nNextHeight;
db << "UPDATE claim SET activationHeight = validHeight WHERE activationHeight = ?"
<< nNextHeight;
db << "UPDATE support SET activationHeight = validHeight WHERE activationHeight = ?"
<< nNextHeight;
return true;
}
bool CClaimTrieCacheBase::finalizeDecrement()
{
db << "UPDATE node SET hash = NULL WHERE name IN "
"(SELECT nodeName FROM claim WHERE activationHeight = ?1 AND expirationHeight > ?1 "
"UNION SELECT nodeName FROM support WHERE activationHeight = ?1 AND expirationHeight > ?1 "
"UNION SELECT name FROM takeover WHERE height = ?1)" << nNextHeight;
db << "DELETE FROM takeover WHERE height >= ?" << nNextHeight;
return true;
}
int CClaimTrieCacheBase::getDelayForName(const std::string& name, const uint160& claimId) const
{
uint160 winningClaimId;
int winningTakeoverHeight;
auto hasCurrentWinner = getLastTakeoverForName(name, winningClaimId, winningTakeoverHeight);
if (hasCurrentWinner && winningClaimId == claimId) {
assert(winningTakeoverHeight <= nNextHeight);
return 0;
}
// NOTE: the old code had a bug where nodes with no claims but with children were left in the cache after removal.
// That made getNumBlocksOfContinuousOwnership return zero, producing an incorrect takeover-height calculation.
auto hit = removalWorkaround.find(name);
if (hit != removalWorkaround.end()) {
removalWorkaround.erase(hit);
return 0;
}
return hasCurrentWinner ? std::min((nNextHeight - winningTakeoverHeight) / base->nProportionalDelayFactor, 4032) : 0;
}
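
To put the formula above in concrete terms (using the proportionalDelayFactor default of 32 declared in trie.h): a competing claim on a name whose current winner took over 3200 blocks earlier is delayed by 3200 / 32 = 100 blocks before it activates, and the delay never exceeds the 4032-block cap (roughly one week at LBRY's 2.5-minute block target).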
std::string CClaimTrieCacheBase::adjustNameForValidHeight(const std::string& name, int validHeight) const
{
return name;
}
bool CClaimTrieCacheBase::getProofForName(const std::string& name, const uint160& finalClaim, CClaimTrieProof& proof)
{
// cache the parent nodes
getMerkleHash();
proof = CClaimTrieProof();
for (auto&& row: db << proofClaimQuery_s << name) {
CClaimValue claim;
std::string key;
int takeoverHeight;
row >> key >> takeoverHeight;
bool fNodeHasValue = getInfoForName(key, claim);
uint256 valueHash;
if (fNodeHasValue)
valueHash = getValueHash(claim.outPoint, takeoverHeight);
const auto pos = key.size();
std::vector<std::pair<unsigned char, uint256>> children;
for (auto&& child : childHashQuery << key) {
std::string childKey;
uint256 hash;
child >> childKey >> hash;
if (name.find(childKey) == 0) {
for (auto i = pos; i + 1 < childKey.size(); ++i) {
children.emplace_back(childKey[i], uint256{});
proof.nodes.emplace_back(children, fNodeHasValue, valueHash);
children.clear();
valueHash.SetNull();
fNodeHasValue = false;
}
children.emplace_back(childKey.back(), uint256{});
continue;
}
completeHash(hash, childKey, pos);
children.emplace_back(childKey[pos], hash);
}
childHashQuery++;
if (key == name) {
proof.hasValue = fNodeHasValue && claim.claimId == finalClaim;
if (proof.hasValue) {
proof.outPoint = claim.outPoint;
proof.nHeightOfLastTakeover = takeoverHeight;
}
valueHash.SetNull();
}
proof.nodes.emplace_back(std::move(children), fNodeHasValue, valueHash);
}
return true;
}
bool CClaimTrieCacheBase::findNameForClaim(std::vector<unsigned char> claim, CClaimValue& value, std::string& name) const
{
if (claim.size() > 20)
return false;
auto maximum = claim;
maximum.insert(maximum.end(), 20 - claim.size(), std::numeric_limits<unsigned char>::max());
auto query = db << "SELECT nodeName, claimID, txID, txN, amount, activationHeight, updateHeight "
"FROM claim WHERE REVERSE(claimID) BETWEEN ?1 AND ?2 "
"AND activationHeight < ?3 AND expirationHeight >= ?3 LIMIT 2"
<< claim << maximum << nNextHeight;
auto hit = false;
for (auto&& row: query) {
if (hit) return false;
row >> name >> value.claimId >> value.outPoint.hash >> value.outPoint.n
>> value.nAmount >> value.nValidAtHeight >> value.nHeight;
hit = true;
}
return hit;
}
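
A minimal usage sketch (illustrative only; the prefix bytes are hypothetical and the cache is assumed to be an existing CClaimTrieCacheBase):

// Resolve a claim from an abbreviated claimID; fails when zero or more than one active claim matches the prefix.
static bool lookupByClaimIdPrefix(const CClaimTrieCacheBase& cache, std::string& nameOut, CClaimValue& valueOut)
{
    std::vector<unsigned char> prefix{0xab, 0xcd, 0xef}; // hypothetical leading bytes of a claimID
    return cache.findNameForClaim(prefix, valueOut, nameOut);
}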
void CClaimTrieCacheBase::getNamesInTrie(std::function<void(const std::string&)> callback) const
{
db << "SELECT DISTINCT nodeName FROM claim WHERE activationHeight < ?1 AND expirationHeight >= ?1"
<< nNextHeight >> [&callback](const std::string& name) {
callback(name);
};
}
std::vector<uint160> CClaimTrieCacheBase::getActivatedClaims(int height) {
std::vector<uint160> ret;
auto query = db << "SELECT DISTINCT claimID FROM claim WHERE activationHeight = ?1 AND updateHeight < ?1" << height;
for (auto&& row: query) {
ret.emplace_back();
row >> ret.back();
}
return ret;
}
std::vector<uint160> CClaimTrieCacheBase::getClaimsWithActivatedSupports(int height) {
std::vector<uint160> ret;
auto query = db << "SELECT DISTINCT supportedClaimID FROM support WHERE activationHeight = ?1 AND blockHeight < ?1" << height;
for (auto&& row: query) {
ret.emplace_back();
row >> ret.back();
}
return ret;
}
std::vector<uint160> CClaimTrieCacheBase::getExpiredClaims(int height) {
std::vector<uint160> ret;
auto query = db << "SELECT DISTINCT claimID FROM claim WHERE expirationHeight = ?1 AND updateHeight < ?1" << height;
for (auto&& row: query) {
ret.emplace_back();
row >> ret.back();
}
return ret;
}
std::vector<uint160> CClaimTrieCacheBase::getClaimsWithExpiredSupports(int height) {
std::vector<uint160> ret;
auto query = db << "SELECT DISTINCT supportedClaimID FROM support WHERE expirationHeight = ?1 AND blockHeight < ?1" << height;
for (auto&& row: query) {
ret.emplace_back();
row >> ret.back();
}
return ret;
}

140
src/claimtrie/trie.h Normal file
View file

@ -0,0 +1,140 @@
#ifndef CLAIMTRIE_TRIE_H
#define CLAIMTRIE_TRIE_H
#include <data.h>
#include <sqlite.h>
#include <txoutpoint.h>
#include <uints.h>
#include <functional>
#include <memory>
#include <string>
#include <vector>
#include <unordered_set>
#include <utility>
uint256 getValueHash(const COutPoint& outPoint, int nHeightOfLastTakeover);
class CClaimTrie
{
friend class CClaimTrieCacheBase;
friend class ClaimTrieChainFixture;
friend class CClaimTrieCacheHashFork;
friend class CClaimTrieCacheExpirationFork;
friend class CClaimTrieCacheNormalizationFork;
public:
CClaimTrie() = delete;
CClaimTrie(CClaimTrie&&) = delete;
CClaimTrie(const CClaimTrie&) = delete;
CClaimTrie(std::size_t cacheBytes, bool fWipe, int height = 0,
const std::string& dataDir = ".",
int nNormalizedNameForkHeight = 1,
int nMinRemovalWorkaroundHeight = 1,
int nMaxRemovalWorkaroundHeight = -1,
int64_t nOriginalClaimExpirationTime = 1,
int64_t nExtendedClaimExpirationTime = 1,
int64_t nExtendedClaimExpirationForkHeight = 1,
int64_t nAllClaimsInMerkleForkHeight = 1,
int proportionalDelayFactor = 32);
CClaimTrie& operator=(CClaimTrie&&) = delete;
CClaimTrie& operator=(const CClaimTrie&) = delete;
bool empty();
bool SyncToDisk();
protected:
int nNextHeight;
std::size_t dbCacheBytes;
const std::string dbFile;
sqlite::database db;
const int nProportionalDelayFactor;
const int nNormalizedNameForkHeight;
const int nMinRemovalWorkaroundHeight, nMaxRemovalWorkaroundHeight;
const int64_t nOriginalClaimExpirationTime;
const int64_t nExtendedClaimExpirationTime;
const int64_t nExtendedClaimExpirationForkHeight;
const int64_t nAllClaimsInMerkleForkHeight;
};
class CClaimTrieCacheBase
{
public:
explicit CClaimTrieCacheBase(CClaimTrie* base);
virtual ~CClaimTrieCacheBase();
bool flush();
bool checkConsistency();
uint256 getMerkleHash();
bool validateDb(int height, const uint256& rootHash);
std::size_t getTotalNamesInTrie() const;
std::size_t getTotalClaimsInTrie() const;
int64_t getTotalValueOfClaimsInTrie(bool fControllingOnly) const;
bool haveClaim(const std::string& name, const COutPoint& outPoint) const;
bool haveClaimInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool haveSupport(const std::string& name, const COutPoint& outPoint) const;
bool haveSupportInQueue(const std::string& name, const COutPoint& outPoint, int& nValidAtHeight) const;
bool addClaim(const std::string& name, const COutPoint& outPoint, const uint160& claimId, int64_t nAmount,
int nHeight, int nValidHeight = -1, int originalHeight = -1);
bool addSupport(const std::string& name, const COutPoint& outPoint, const uint160& supportedClaimId, int64_t nAmount,
int nHeight, int nValidHeight = -1);
bool removeClaim(const uint160& claimId, const COutPoint& outPoint, std::string& nodeName,
int& validHeight, int& originalHeight);
bool removeSupport(const COutPoint& outPoint, std::string& nodeName, int& validHeight);
virtual bool incrementBlock();
virtual bool decrementBlock();
virtual bool finalizeDecrement();
virtual int expirationTime() const;
virtual bool getProofForName(const std::string& name, const uint160& claim, CClaimTrieProof& proof);
virtual bool getInfoForName(const std::string& name, CClaimValue& claim, int heightOffset = 0) const;
virtual CClaimSupportToName getClaimsForName(const std::string& name) const;
virtual std::string adjustNameForValidHeight(const std::string& name, int validHeight) const;
void getNamesInTrie(std::function<void(const std::string&)> callback) const;
bool getLastTakeoverForName(const std::string& name, uint160& claimId, int& takeoverHeight) const;
bool findNameForClaim(std::vector<unsigned char> claim, CClaimValue& value, std::string& name) const;
std::vector<uint160> getActivatedClaims(int height);
std::vector<uint160> getClaimsWithActivatedSupports(int height);
std::vector<uint160> getExpiredClaims(int height);
std::vector<uint160> getClaimsWithExpiredSupports(int height);
protected:
int nNextHeight; // height of the block currently being worked on, i.e. one greater than the height of the chain tip
CClaimTrie* base;
mutable sqlite::database db;
mutable std::unordered_set<std::string> removalWorkaround;
mutable sqlite::database_binder claimHashQuery, childHashQuery, claimHashQueryLimit;
virtual uint256 computeNodeHash(const std::string& name, int takeoverHeight);
supportEntryType getSupportsForName(const std::string& name) const;
virtual int getDelayForName(const std::string& name, const uint160& claimId) const;
bool deleteNodeIfPossible(const std::string& name, std::string& parent, int64_t& claims);
void ensureTreeStructureIsUpToDate();
void ensureTransacting();
private:
bool transacting;
// for unit test
friend struct ClaimTrieChainFixture;
friend class CClaimTrieCacheTest;
bool activateAllFor(const std::string& name);
};
#endif // CLAIMTRIE_TRIE_H
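
A hedged construction sketch; in the daemon the cache size, data directory, and consensus heights come from the configuration and chainparams, so the values below are placeholders:

#include <trie.h>
#include <data.h>

// Illustrative only: open the claim-trie database and query it through a cache.
static void claimTrieSmokeTest()
{
    CClaimTrie trie(32u << 20 /* dbCacheBytes */, false /* fWipe */, 0 /* height */, "./claimtrie_data");
    CClaimTrieCacheBase cache(&trie);
    CClaimValue winner;
    if (cache.getInfoForName("example-name", winner)) {
        // `winner` now holds the controlling claim for that name
    }
}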

View file

@ -0,0 +1,42 @@
#include <txoutpoint.h>
#include <sstream>
COutPoint::COutPoint(uint256 hashIn, uint32_t nIn) : hash(std::move(hashIn)), n(nIn)
{
}
void COutPoint::SetNull()
{
hash.SetNull();
n = uint32_t(-1);
}
bool COutPoint::IsNull() const
{
return hash.IsNull() && n == uint32_t(-1);
}
bool COutPoint::operator<(const COutPoint& b) const
{
int cmp = hash.Compare(b.hash);
return cmp < 0 || (cmp == 0 && n < b.n);
}
bool COutPoint::operator==(const COutPoint& b) const
{
return hash == b.hash && n == b.n;
}
bool COutPoint::operator!=(const COutPoint& b) const
{
return !(*this == b);
}
std::string COutPoint::ToString() const
{
std::stringstream ss;
ss << "COutPoint(" << hash.ToString().substr(0, 10) << ", " << n << ')';
return ss.str();
}

View file

@ -0,0 +1,38 @@
#ifndef CLAIMTRIE_TXOUTPUT_H
#define CLAIMTRIE_TXOUTPUT_H
#include <uints.h>
#include <algorithm>
#include <string>
#include <type_traits>
#include <vector>
#include <utility>
/** An outpoint - a combination of a transaction hash and an index n into its vout */
class COutPoint
{
public:
uint256 hash;
uint32_t n = uint32_t(-1);
COutPoint() = default;
COutPoint(COutPoint&&) = default;
COutPoint(const COutPoint&) = default;
COutPoint(uint256 hashIn, uint32_t nIn);
COutPoint& operator=(COutPoint&&) = default;
COutPoint& operator=(const COutPoint&) = default;
void SetNull();
bool IsNull() const;
bool operator<(const COutPoint& b) const;
bool operator==(const COutPoint& b) const;
bool operator!=(const COutPoint& b) const;
std::string ToString() const;
};
#endif // CLAIMTRIE_TXOUTPUT_H

39
src/claimtrie/uints.cpp Normal file
View file

@ -0,0 +1,39 @@
#include <uints.h>
#include <cstring>
uint160::uint160(const std::vector<uint8_t>& vec) : CBaseBlob<160>(vec)
{
}
uint256::uint256(const std::vector<uint8_t>& vec) : CBaseBlob<256>(vec)
{
}
uint256::uint256(int64_t value) : CBaseBlob<256>() {
std::memcpy(this->begin(), &value, sizeof(value)); // TODO: fix the endianness here
}
uint160 uint160S(const char* str)
{
uint160 s;
s.SetHex(str);
return s;
}
uint160 uint160S(const std::string& s)
{
return uint160S(s.c_str());
}
uint256 uint256S(const char* str)
{
uint256 s;
s.SetHex(str);
return s;
}
uint256 uint256S(const std::string& s)
{
return uint256S(s.c_str());
}

43
src/claimtrie/uints.h Normal file
View file

@ -0,0 +1,43 @@
#ifndef CLAIMTRIE_UINTS_H
#define CLAIMTRIE_UINTS_H
#include <blob.h>
#include <string>
class uint160 : public CBaseBlob<160>
{
public:
uint160() = default;
explicit uint160(const std::vector<uint8_t>& vec);
uint160(uint160&&) = default;
uint160& operator=(uint160&&) = default;
uint160(const uint160&) = default;
uint160& operator=(const uint160&) = default;
};
class uint256 : public CBaseBlob<256>
{
public:
uint256() = default;
explicit uint256(const std::vector<uint8_t>& vec);
explicit uint256(int64_t value);
uint256(uint256&&) = default;
uint256& operator=(uint256&&) = default;
uint256(const uint256&) = default;
uint256& operator=(const uint256&) = default;
};
uint160 uint160S(const char* str);
uint160 uint160S(const std::string& s);
uint256 uint256S(const char* str);
uint256 uint256S(const std::string& s);
#endif // CLAIMTRIE_UINTS_H
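
A small illustrative snippet of the helpers above (the hex strings are placeholders, and IsNull() is assumed to come from CBaseBlob, as uint256 usage elsewhere in this changeset suggests):

#include <uints.h>
#include <cassert>

static void uintHelpersExample()
{
    uint160 claimId = uint160S("0123456789abcdef0123456789abcdef01234567"); // 40 hex chars -> 20 bytes
    uint256 txHash = uint256S("0000000000000000000000000000000000000000000000000000000000000002");
    assert(!claimId.IsNull() && !txHash.IsNull());
}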

103
src/claimtrie_serial.h Normal file
View file

@ -0,0 +1,103 @@
#ifndef CLAIMTRIE_SERIAL_H
#define CLAIMTRIE_SERIAL_H
#include <claimtrie/data.h>
#include <claimtrie/txoutpoint.h>
#include <claimtrie/uints.h>
template<typename Stream>
void Serialize(Stream& s, const uint160& u)
{
s.write((const char*)u.begin(), u.size());
}
template<typename Stream>
void Serialize(Stream& s, const uint256& u)
{
s.write((const char*)u.begin(), u.size());
}
template<typename Stream>
void Unserialize(Stream& s, uint160& u)
{
s.read((char*)u.begin(), u.size());
}
template<typename Stream>
void Unserialize(Stream& s, uint256& u)
{
s.read((char*)u.begin(), u.size());
}
template<typename Stream>
void Serialize(Stream& s, const COutPoint& u)
{
Serialize(s, u.hash);
Serialize(s, u.n);
}
template<typename Stream>
void Unserialize(Stream& s, COutPoint& u)
{
Unserialize(s, u.hash);
Unserialize(s, u.n);
}
template<typename Stream>
void Serialize(Stream& s, const CClaimValue& u)
{
Serialize(s, u.outPoint);
Serialize(s, u.claimId);
Serialize(s, u.nAmount);
Serialize(s, u.nHeight);
Serialize(s, u.nValidAtHeight);
}
template<typename Stream>
void Unserialize(Stream& s, CClaimValue& u)
{
Unserialize(s, u.outPoint);
Unserialize(s, u.claimId);
Unserialize(s, u.nAmount);
Unserialize(s, u.nHeight);
Unserialize(s, u.nValidAtHeight);
}
template<typename Stream>
void Serialize(Stream& s, const CSupportValue& u)
{
Serialize(s, u.outPoint);
Serialize(s, u.supportedClaimId);
Serialize(s, u.nAmount);
Serialize(s, u.nHeight);
Serialize(s, u.nValidAtHeight);
}
template<typename Stream>
void Unserialize(Stream& s, CSupportValue& u)
{
Unserialize(s, u.outPoint);
Unserialize(s, u.supportedClaimId);
Unserialize(s, u.nAmount);
Unserialize(s, u.nHeight);
Unserialize(s, u.nValidAtHeight);
}
template<typename Stream>
void Serialize(Stream& s, const CNameOutPointHeightType& u)
{
Serialize(s, u.name);
Serialize(s, u.outPoint);
Serialize(s, u.nValidHeight);
}
template<typename Stream>
void Unserialize(Stream& s, CNameOutPointHeightType& u)
{
Unserialize(s, u.name);
Unserialize(s, u.outPoint);
Unserialize(s, u.nValidHeight);
}
#endif // CLAIMTRIE_SERIAL_H
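
An illustrative round trip through a Bitcoin Core CDataStream (assuming streams.h and clientversion.h are available and that CClaimValue is default-constructible):

#include <claimtrie_serial.h>
#include <clientversion.h>
#include <streams.h>
#include <cassert>

static void claimValueSerializationRoundTrip()
{
    CDataStream ss(SER_DISK, CLIENT_VERSION);
    CClaimValue original, restored;
    Serialize(ss, original);
    Unserialize(ss, restored);
    assert(original.claimId == restored.claimId && original.outPoint == restored.outPoint);
}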

View file

@ -1,481 +0,0 @@
#include <consensus/merkle.h>
#include <chainparams.h>
#include <claimtrie.h>
#include <hash.h>
#include <boost/locale.hpp>
#include <boost/locale/conversion.hpp>
#include <boost/locale/localization_backend.hpp>
#include <boost/scope_exit.hpp>
#include <boost/scoped_ptr.hpp>
CClaimTrieCacheExpirationFork::CClaimTrieCacheExpirationFork(CClaimTrie* base)
: CClaimTrieCacheBase(base)
{
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
}
void CClaimTrieCacheExpirationFork::setExpirationTime(int time)
{
nExpirationTime = time;
}
int CClaimTrieCacheExpirationFork::expirationTime() const
{
return nExpirationTime;
}
bool CClaimTrieCacheExpirationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
if (CClaimTrieCacheBase::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo)) {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
return true;
}
return false;
}
bool CClaimTrieCacheExpirationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo)
{
if (CClaimTrieCacheBase::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo)) {
setExpirationTime(Params().GetConsensus().GetExpirationTime(nNextHeight));
return true;
}
return false;
}
void CClaimTrieCacheExpirationFork::initializeIncrement()
{
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nExtendedClaimExpirationForkHeight)
return;
forkForExpirationChange(true);
}
bool CClaimTrieCacheExpirationFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheBase::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nExtendedClaimExpirationForkHeight)
forkForExpirationChange(false);
return ret;
}
bool CClaimTrieCacheExpirationFork::forkForExpirationChange(bool increment)
{
/*
If increment is true, we have forked to extend the expiration time, so items in the expiration queue
will have their expiration extended by "new expiration time - original expiration time".
If increment is false, we are decrementing a block to reverse the fork, so items in the expiration queue
will have that expiration extension removed.
*/
// look through the db for expiration queue rows, if we haven't already found them in the dirty expiration queue
boost::scoped_ptr<CDBIterator> pcursor(base->db->NewIterator());
for (pcursor->SeekToFirst(); pcursor->Valid(); pcursor->Next()) {
std::pair<uint8_t, int> key;
if (!pcursor->GetKey(key))
continue;
int height = key.second;
if (key.first == CLAIM_EXP_QUEUE_ROW) {
expirationQueueRowType row;
if (pcursor->GetValue(row)) {
reactivateClaim(row, height, increment);
} else {
return error("%s(): error reading expiration queue rows from disk", __func__);
}
} else if (key.first == SUPPORT_EXP_QUEUE_ROW) {
expirationQueueRowType row;
if (pcursor->GetValue(row)) {
reactivateSupport(row, height, increment);
} else {
return error("%s(): error reading support expiration queue rows from disk", __func__);
}
}
}
return true;
}
bool CClaimTrieCacheNormalizationFork::shouldNormalize() const
{
return nNextHeight > Params().GetConsensus().nNormalizedNameForkHeight;
}
std::string CClaimTrieCacheNormalizationFork::normalizeClaimName(const std::string& name, bool force) const
{
if (!force && !shouldNormalize())
return name;
static std::locale utf8;
static bool initialized = false;
if (!initialized) {
static boost::locale::localization_backend_manager manager =
boost::locale::localization_backend_manager::global();
manager.select("icu");
static boost::locale::generator curLocale(manager);
utf8 = curLocale("en_US.UTF8");
initialized = true;
}
std::string normalized;
try {
// Check if it is a valid utf-8 string. If not, it will throw a
// boost::locale::conv::conversion_error exception which we catch later
normalized = boost::locale::conv::to_utf<char>(name, "UTF-8", boost::locale::conv::stop);
if (normalized.empty())
return name;
// these methods supposedly only use the "UTF8" portion of the locale object:
normalized = boost::locale::normalize(normalized, boost::locale::norm_nfd, utf8);
normalized = boost::locale::fold_case(normalized, utf8);
} catch (const boost::locale::conv::conversion_error& e) {
return name;
} catch (const std::bad_cast& e) {
LogPrintf("%s() is invalid or dependencies are missing: %s\n", __func__, e.what());
throw;
} catch (const std::exception& e) { // TODO: change to use ... with current_exception() in c++11
LogPrintf("%s() had an unexpected exception: %s\n", __func__, e.what());
return name;
}
return normalized;
}
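
For illustration: past the normalization fork, names such as "LBRY", "Lbry", and "lbry" all fold to the same byte string ("lbry", after NFD decomposition and case folding) and therefore contend for a single trie node, while a name that is not valid UTF-8 is returned unchanged.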
bool CClaimTrieCacheNormalizationFork::insertClaimIntoTrie(const std::string& name, const CClaimValue& claim, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::insertClaimIntoTrie(normalizeClaimName(name, overrideInsertNormalization), claim, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::removeClaimFromTrie(const std::string& name, const COutPoint& outPoint, CClaimValue& claim, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::removeClaimFromTrie(normalizeClaimName(name, overrideRemoveNormalization), outPoint, claim, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::insertSupportIntoMap(const std::string& name, const CSupportValue& support, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::insertSupportIntoMap(normalizeClaimName(name, overrideInsertNormalization), support, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::removeSupportFromMap(const std::string& name, const COutPoint& outPoint, CSupportValue& support, bool fCheckTakeover)
{
return CClaimTrieCacheExpirationFork::removeSupportFromMap(normalizeClaimName(name, overrideRemoveNormalization), outPoint, support, fCheckTakeover);
}
bool CClaimTrieCacheNormalizationFork::normalizeAllNamesInTrieIfNecessary(insertUndoType& insertUndo, claimQueueRowType& removeUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
if (nNextHeight != Params().GetConsensus().nNormalizedNameForkHeight)
return false;
// run the one-time upgrade of all names that need to change
// it modifies the (cache) trie as it goes, so we need to grab everything to be modified first
for (auto it = base->cbegin(); it != base->cend(); ++it) {
const std::string normalized = normalizeClaimName(it.key(), true);
if (normalized == it.key())
continue;
auto& name = it.key();
auto supports = getSupportsForName(name);
for (auto support : supports) {
// if it's already going to expire just skip it
if (support.nHeight + expirationTime() <= nNextHeight)
continue;
assert(removeSupportFromMap(name, support.outPoint, support, false));
expireSupportUndo.emplace_back(name, support);
assert(insertSupportIntoMap(normalized, support, false));
insertSupportUndo.emplace_back(name, support.outPoint, -1);
}
namesToCheckForTakeover.insert(normalized);
auto cached = cacheData(name, false);
if (!cached || cached->empty())
continue;
auto claimsCopy = cached->claims;
auto takeoverHeightCopy = cached->nHeightOfLastTakeover;
for (auto claim : claimsCopy) {
if (claim.nHeight + expirationTime() <= nNextHeight)
continue;
assert(removeClaimFromTrie(name, claim.outPoint, claim, false));
removeUndo.emplace_back(name, claim);
assert(insertClaimIntoTrie(normalized, claim, true));
insertUndo.emplace_back(name, claim.outPoint, -1);
}
takeoverHeightUndo.emplace_back(name, takeoverHeightCopy);
}
return true;
}
bool CClaimTrieCacheNormalizationFork::incrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo, std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
overrideInsertNormalization = normalizeAllNamesInTrieIfNecessary(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
BOOST_SCOPE_EXIT(&overrideInsertNormalization) { overrideInsertNormalization = false; }
BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::incrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo, takeoverHeightUndo);
}
bool CClaimTrieCacheNormalizationFork::decrementBlock(insertUndoType& insertUndo, claimQueueRowType& expireUndo, insertUndoType& insertSupportUndo, supportQueueRowType& expireSupportUndo)
{
overrideRemoveNormalization = shouldNormalize();
BOOST_SCOPE_EXIT(&overrideRemoveNormalization) { overrideRemoveNormalization = false; }
BOOST_SCOPE_EXIT_END
return CClaimTrieCacheExpirationFork::decrementBlock(insertUndo, expireUndo, insertSupportUndo, expireSupportUndo);
}
bool CClaimTrieCacheNormalizationFork::getProofForName(const std::string& name, CClaimTrieProof& proof)
{
return CClaimTrieCacheExpirationFork::getProofForName(normalizeClaimName(name), proof);
}
bool CClaimTrieCacheNormalizationFork::getInfoForName(const std::string& name, CClaimValue& claim) const
{
return CClaimTrieCacheExpirationFork::getInfoForName(normalizeClaimName(name), claim);
}
CClaimSupportToName CClaimTrieCacheNormalizationFork::getClaimsForName(const std::string& name) const
{
return CClaimTrieCacheExpirationFork::getClaimsForName(normalizeClaimName(name));
}
int CClaimTrieCacheNormalizationFork::getDelayForName(const std::string& name, const uint160& claimId) const
{
return CClaimTrieCacheExpirationFork::getDelayForName(normalizeClaimName(name), claimId);
}
std::string CClaimTrieCacheNormalizationFork::adjustNameForValidHeight(const std::string& name, int validHeight) const
{
return normalizeClaimName(name, validHeight > Params().GetConsensus().nNormalizedNameForkHeight);
}
CClaimTrieCacheHashFork::CClaimTrieCacheHashFork(CClaimTrie* base) : CClaimTrieCacheNormalizationFork(base)
{
}
static const uint256 leafHash = uint256S("0000000000000000000000000000000000000000000000000000000000000002");
static const uint256 emptyHash = uint256S("0000000000000000000000000000000000000000000000000000000000000003");
std::vector<uint256> getClaimHashes(const CClaimTrieData& data)
{
std::vector<uint256> hashes;
for (auto& claim : data.claims)
hashes.push_back(getValueHash(claim.outPoint, data.nHeightOfLastTakeover));
return hashes;
}
template <typename T>
using iCbType = std::function<uint256(T&)>;
template <typename TIterator>
uint256 recursiveBinaryTreeHash(TIterator& it, const iCbType<TIterator>& process)
{
std::vector<uint256> childHashes;
for (auto& child : it.children())
childHashes.emplace_back(process(child));
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
else if (!it.hasChildren())
return {};
auto left = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
auto right = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
return Hash(left.begin(), left.end(), right.begin(), right.end());
}
uint256 CClaimTrieCacheHashFork::recursiveComputeMerkleHash(CClaimTrie::iterator& it)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveComputeMerkleHash(it);
using iterator = CClaimTrie::iterator;
iCbType<iterator> process = [&process](iterator& it) -> uint256 {
if (it->hash.IsNull())
it->hash = recursiveBinaryTreeHash(it, process);
assert(!it->hash.IsNull());
return it->hash;
};
return process(it);
}
bool CClaimTrieCacheHashFork::recursiveCheckConsistency(CClaimTrie::const_iterator& it, std::string& failed) const
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::recursiveCheckConsistency(it, failed);
struct CRecursiveBreak {};
using iterator = CClaimTrie::const_iterator;
iCbType<iterator> process = [&failed, &process](iterator& it) -> uint256 {
if (it->hash.IsNull() || it->hash != recursiveBinaryTreeHash(it, process)) {
failed = it.key();
throw CRecursiveBreak();
}
return it->hash;
};
try {
process(it);
} catch (const CRecursiveBreak&) {
return false;
}
return true;
}
std::vector<uint256> ComputeMerklePath(const std::vector<uint256>& hashes, uint32_t idx)
{
uint32_t count = 0;
int matchlevel = -1;
bool matchh = false;
uint256 inner[32], h;
const uint32_t one = 1;
std::vector<uint256> res;
const auto iterateInner = [&](int& level) {
for (; !(count & (one << level)); level++) {
const auto& ihash = inner[level];
if (matchh) {
res.push_back(ihash);
} else if (matchlevel == level) {
res.push_back(h);
matchh = true;
}
h = Hash(ihash.begin(), ihash.end(), h.begin(), h.end());
}
};
while (count < hashes.size()) {
h = hashes[count];
matchh = count == idx;
count++;
int level = 0;
iterateInner(level);
// Store the resulting hash at inner position level.
inner[level] = h;
if (matchh)
matchlevel = level;
}
int level = 0;
while (!(count & (one << level)))
level++;
h = inner[level];
matchh = matchlevel == level;
while (count != (one << level)) {
// If we reach this point, h is an inner value that is not the top.
if (matchh)
res.push_back(h);
h = Hash(h.begin(), h.end(), h.begin(), h.end());
// Increment count to the value it would have if two entries at this level had existed.
count += (one << level);
level++;
iterateInner(level);
}
return res;
}
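
The matching verification step, used by a proof checker, folds the returned path back into the root; this mirrors Bitcoin Core's ComputeMerkleRootFromBranch and is shown here only as an illustrative sketch:

// Recompute the merkle root from a leaf hash, its index, and the path produced above.
static uint256 ComputeMerkleRootFromPath(uint256 hash, const std::vector<uint256>& path, uint32_t idx)
{
    for (const auto& sibling : path) {
        if (idx & 1)
            hash = Hash(sibling.begin(), sibling.end(), hash.begin(), hash.end());
        else
            hash = Hash(hash.begin(), hash.end(), sibling.begin(), sibling.end());
        idx >>= 1;
    }
    return hash;
}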
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof)
{
return getProofForName(name, proof, nullptr);
}
bool CClaimTrieCacheHashFork::getProofForName(const std::string& name, CClaimTrieProof& proof, const std::function<bool(const CClaimValue&)>& comp)
{
if (nNextHeight < Params().GetConsensus().nAllClaimsInMerkleForkHeight)
return CClaimTrieCacheNormalizationFork::getProofForName(name, proof);
auto fillPairs = [&proof](const std::vector<uint256>& hashes, uint32_t idx) {
auto partials = ComputeMerklePath(hashes, idx);
for (int i = partials.size() - 1; i >= 0; --i)
proof.pairs.emplace_back((idx >> i) & 1, partials[i]);
};
// cache the parent nodes
cacheData(name, false);
getMerkleHash();
proof = CClaimTrieProof();
for (auto& it : static_cast<const CClaimTrie&>(nodesToAddOrUpdate).nodes(name)) {
std::vector<uint256> childHashes;
uint32_t nextCurrentIdx = 0;
for (auto& child : it.children()) {
if (name.find(child.key()) == 0)
nextCurrentIdx = uint32_t(childHashes.size());
childHashes.push_back(child->hash);
}
std::vector<uint256> claimHashes;
if (!it->empty())
claimHashes = getClaimHashes(it.data());
// I am on a node; the node hash is hash(children, claims).
// If I am the last node on the path (the requested name), the proof records hash(children, x) and the verifier recomputes the claims side;
// otherwise it records hash(x, claims) and the verifier recomputes the children side.
if (it.key() == name) {
uint32_t nClaimIndex = 0;
auto& claims = it->claims;
auto itClaim = !comp ? claims.begin() : std::find_if(claims.begin(), claims.end(), comp);
if (itClaim != claims.end()) {
proof.hasValue = true;
proof.outPoint = itClaim->outPoint;
proof.nHeightOfLastTakeover = it->nHeightOfLastTakeover;
nClaimIndex = std::distance(claims.begin(), itClaim);
}
auto hash = childHashes.empty() ? leafHash : ComputeMerkleRoot(childHashes);
proof.pairs.emplace_back(true, hash);
if (!claimHashes.empty())
fillPairs(claimHashes, nClaimIndex);
} else {
auto hash = claimHashes.empty() ? emptyHash : ComputeMerkleRoot(claimHashes);
proof.pairs.emplace_back(false, hash);
if (!childHashes.empty())
fillPairs(childHashes, nextCurrentIdx);
}
}
std::reverse(proof.pairs.begin(), proof.pairs.end());
return true;
}
void CClaimTrieCacheHashFork::copyAllBaseToCache()
{
for (auto it = base->cbegin(); it != base->cend(); ++it)
if (nodesAlreadyCached.insert(it.key()).second)
nodesToAddOrUpdate.insert(it.key(), it.data());
for (auto it = nodesToAddOrUpdate.begin(); it != nodesToAddOrUpdate.end(); ++it)
it->hash.SetNull();
}
void CClaimTrieCacheHashFork::initializeIncrement()
{
CClaimTrieCacheNormalizationFork::initializeIncrement();
// we could do this in the constructor, but that would not allow for multiple increments in a row (as done in unit tests)
if (nNextHeight != Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
return;
// if we are forking, we load the entire base trie into the cache trie
// we reset its hash computation so it can be recomputed completely
copyAllBaseToCache();
}
bool CClaimTrieCacheHashFork::finalizeDecrement(std::vector<std::pair<std::string, int>>& takeoverHeightUndo)
{
auto ret = CClaimTrieCacheNormalizationFork::finalizeDecrement(takeoverHeightUndo);
if (ret && nNextHeight == Params().GetConsensus().nAllClaimsInMerkleForkHeight - 1)
copyAllBaseToCache();
return ret;
}
bool CClaimTrieCacheHashFork::allowSupportMetadata() const
{
return nNextHeight >= Params().GetConsensus().nAllClaimsInMerkleForkHeight;
}

View file

@ -10,7 +10,7 @@
bool CCoinsView::GetCoin(const COutPoint &outpoint, Coin &coin) const { return false; } bool CCoinsView::GetCoin(const COutPoint &outpoint, Coin &coin) const { return false; }
uint256 CCoinsView::GetBestBlock() const { return uint256(); } uint256 CCoinsView::GetBestBlock() const { return uint256(); }
std::vector<uint256> CCoinsView::GetHeadBlocks() const { return std::vector<uint256>(); } std::vector<uint256> CCoinsView::GetHeadBlocks() const { return std::vector<uint256>(); }
bool CCoinsView::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock) { return false; } bool CCoinsView::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock, bool sync) { return false; }
CCoinsViewCursor *CCoinsView::Cursor() const { return nullptr; } CCoinsViewCursor *CCoinsView::Cursor() const { return nullptr; }
bool CCoinsView::HaveCoin(const COutPoint &outpoint) const bool CCoinsView::HaveCoin(const COutPoint &outpoint) const
@ -25,7 +25,7 @@ bool CCoinsViewBacked::HaveCoin(const COutPoint &outpoint) const { return base->
uint256 CCoinsViewBacked::GetBestBlock() const { return base->GetBestBlock(); } uint256 CCoinsViewBacked::GetBestBlock() const { return base->GetBestBlock(); }
std::vector<uint256> CCoinsViewBacked::GetHeadBlocks() const { return base->GetHeadBlocks(); } std::vector<uint256> CCoinsViewBacked::GetHeadBlocks() const { return base->GetHeadBlocks(); }
void CCoinsViewBacked::SetBackend(CCoinsView &viewIn) { base = &viewIn; } void CCoinsViewBacked::SetBackend(CCoinsView &viewIn) { base = &viewIn; }
bool CCoinsViewBacked::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock) { return base->BatchWrite(mapCoins, hashBlock); } bool CCoinsViewBacked::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock, bool sync) { return base->BatchWrite(mapCoins, hashBlock, sync); }
CCoinsViewCursor *CCoinsViewBacked::Cursor() const { return base->Cursor(); } CCoinsViewCursor *CCoinsViewBacked::Cursor() const { return base->Cursor(); }
size_t CCoinsViewBacked::EstimateSize() const { return base->EstimateSize(); } size_t CCoinsViewBacked::EstimateSize() const { return base->EstimateSize(); }
@ -142,7 +142,7 @@ void CCoinsViewCache::SetBestBlock(const uint256 &hashBlockIn) {
hashBlock = hashBlockIn; hashBlock = hashBlockIn;
} }
bool CCoinsViewCache::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlockIn) { bool CCoinsViewCache::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlockIn, bool sync) {
for (CCoinsMap::iterator it = mapCoins.begin(); it != mapCoins.end(); it = mapCoins.erase(it)) { for (CCoinsMap::iterator it = mapCoins.begin(); it != mapCoins.end(); it = mapCoins.erase(it)) {
// Ignore non-dirty entries (optimization). // Ignore non-dirty entries (optimization).
if (!(it->second.flags & CCoinsCacheEntry::DIRTY)) { if (!(it->second.flags & CCoinsCacheEntry::DIRTY)) {
@ -200,8 +200,8 @@ bool CCoinsViewCache::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlockIn
return true; return true;
} }
bool CCoinsViewCache::Flush() { bool CCoinsViewCache::Flush(bool sync) {
bool fOk = base->BatchWrite(cacheCoins, hashBlock); bool fOk = base->BatchWrite(cacheCoins, hashBlock, sync);
cacheCoins.clear(); cacheCoins.clear();
cachedCoinsUsage = 0; cachedCoinsUsage = 0;
return fOk; return fOk;

View file

@ -124,21 +124,19 @@ typedef std::unordered_map<COutPoint, CCoinsCacheEntry, SaltedOutpointHasher> CC
/** Cursor for iterating over CoinsView state */ /** Cursor for iterating over CoinsView state */
class CCoinsViewCursor class CCoinsViewCursor
{ {
uint256 hashBlock;
public: public:
CCoinsViewCursor(const uint256 &hashBlockIn): hashBlock(hashBlockIn) {} CCoinsViewCursor(const uint256 &hashBlockIn): hashBlock(hashBlockIn) {}
virtual ~CCoinsViewCursor() {} virtual ~CCoinsViewCursor() noexcept {}
virtual bool GetKey(COutPoint &key) const = 0; virtual bool GetKey(COutPoint &key) const = 0;
virtual bool GetValue(Coin &coin) const = 0; virtual bool GetValue(Coin &coin) const = 0;
virtual unsigned int GetValueSize() const = 0;
virtual bool Valid() const = 0; virtual bool Valid() const = 0;
virtual void Next() = 0; virtual void Next() = 0;
//! Get best block at the time this cursor was created //! Get best block at the time this cursor was created
const uint256 &GetBestBlock() const { return hashBlock; } const uint256 &GetBestBlock() const { return hashBlock; }
private:
uint256 hashBlock;
}; };
/** Abstract view on the open txout dataset. */ /** Abstract view on the open txout dataset. */
@ -165,7 +163,7 @@ public:
//! Do a bulk modification (multiple Coin changes + BestBlock change). //! Do a bulk modification (multiple Coin changes + BestBlock change).
//! The passed mapCoins can be modified. //! The passed mapCoins can be modified.
virtual bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock); virtual bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock, bool sync);
//! Get a cursor to iterate over the whole state //! Get a cursor to iterate over the whole state
virtual CCoinsViewCursor *Cursor() const; virtual CCoinsViewCursor *Cursor() const;
@ -191,7 +189,7 @@ public:
uint256 GetBestBlock() const override; uint256 GetBestBlock() const override;
std::vector<uint256> GetHeadBlocks() const override; std::vector<uint256> GetHeadBlocks() const override;
void SetBackend(CCoinsView &viewIn); void SetBackend(CCoinsView &viewIn);
bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock) override; bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock, bool sync) override;
CCoinsViewCursor *Cursor() const override; CCoinsViewCursor *Cursor() const override;
size_t EstimateSize() const override; size_t EstimateSize() const override;
}; };
@ -224,7 +222,7 @@ public:
bool HaveCoin(const COutPoint &outpoint) const override; bool HaveCoin(const COutPoint &outpoint) const override;
uint256 GetBestBlock() const override; uint256 GetBestBlock() const override;
void SetBestBlock(const uint256 &hashBlock); void SetBestBlock(const uint256 &hashBlock);
bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock) override; bool BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlock, bool sync) override;
CCoinsViewCursor* Cursor() const override { CCoinsViewCursor* Cursor() const override {
throw std::logic_error("CCoinsViewCache cursor iteration not supported."); throw std::logic_error("CCoinsViewCache cursor iteration not supported.");
} }
@ -266,7 +264,7 @@ public:
* Failure to call this method before destruction will cause the changes to be forgotten. * Failure to call this method before destruction will cause the changes to be forgotten.
* If false is returned, the state of this cache (and its backing view) will be undefined. * If false is returned, the state of this cache (and its backing view) will be undefined.
*/ */
bool Flush(); bool Flush(bool sync = false);
/** /**
* Removes the UTXO with the given outpoint from the cache, if it is * Removes the UTXO with the given outpoint from the cache, if it is

View file

@ -6,6 +6,9 @@
#include <hash.h> #include <hash.h>
#include <utilstrencodings.h> #include <utilstrencodings.h>
#include <functional>
#include <vector>
/* WARNING! If you're reading this because you're learning about crypto /* WARNING! If you're reading this because you're learning about crypto
and/or designing a new system that will use merkle trees, keep in mind and/or designing a new system that will use merkle trees, keep in mind
that the following merkle tree algorithm has a serious flaw related to that the following merkle tree algorithm has a serious flaw related to
@ -42,6 +45,7 @@
root. root.
*/ */
extern std::function<void(std::vector<uint256>&)> sha256n_way;
uint256 ComputeMerkleRoot(std::vector<uint256> hashes, bool* mutated) { uint256 ComputeMerkleRoot(std::vector<uint256> hashes, bool* mutated) {
bool mutation = false; bool mutation = false;
@ -54,7 +58,7 @@ uint256 ComputeMerkleRoot(std::vector<uint256> hashes, bool* mutated) {
if (hashes.size() & 1) { if (hashes.size() & 1) {
hashes.push_back(hashes.back()); hashes.push_back(hashes.back());
} }
SHA256D64(hashes[0].begin(), hashes[0].begin(), hashes.size() / 2); sha256n_way(hashes);
hashes.resize(hashes.size() / 2); hashes.resize(hashes.size() / 2);
} }
if (mutated) *mutated = mutation; if (mutated) *mutated = mutation;

View file

@ -78,8 +78,8 @@ struct Params {
int nAllowMinDiffMaxHeight; int nAllowMinDiffMaxHeight;
int nNormalizedNameForkHeight; int nNormalizedNameForkHeight;
int nMinTakeoverWorkaroundHeight; int nMinRemovalWorkaroundHeight;
int nMaxTakeoverWorkaroundHeight; int nMaxRemovalWorkaroundHeight;
int nWitnessForkHeight; int nWitnessForkHeight;

View file

@ -1,267 +0,0 @@
// Copyright (c) 2012-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <dbwrapper.h>
#include <memory>
#include <random.h>
#include <leveldb/cache.h>
#include <leveldb/env.h>
#include <leveldb/filter_policy.h>
#include <memenv.h>
#include <stdint.h>
#include <algorithm>
class CBitcoinLevelDBLogger : public leveldb::Logger {
public:
// This code is adapted from posix_logger.h, which is why it is using vsprintf.
// Please do not do this in normal code
void Logv(const char * format, va_list ap) override {
if (!LogAcceptCategory(BCLog::LEVELDB)) {
return;
}
char buffer[500];
for (int iter = 0; iter < 2; iter++) {
char* base;
int bufsize;
if (iter == 0) {
bufsize = sizeof(buffer);
base = buffer;
}
else {
bufsize = 30000;
base = new char[bufsize];
}
char* p = base;
char* limit = base + bufsize;
// Print the message
if (p < limit) {
va_list backup_ap;
va_copy(backup_ap, ap);
// Do not use vsnprintf elsewhere in bitcoin source code, see above.
p += vsnprintf(p, limit - p, format, backup_ap);
va_end(backup_ap);
}
// Truncate to available space if necessary
if (p >= limit) {
if (iter == 0) {
continue; // Try again with larger buffer
}
else {
p = limit - 1;
}
}
// Add newline if necessary
if (p == base || p[-1] != '\n') {
*p++ = '\n';
}
assert(p <= limit);
base[std::min(bufsize - 1, (int)(p - base))] = '\0';
LogPrintf("leveldb: %s", base); /* Continued */
if (base != buffer) {
delete[] base;
}
break;
}
}
};
static void SetMaxOpenFiles(leveldb::Options *options) {
// On most platforms the default setting of max_open_files (which is 1000)
// is optimal. On Windows using a large file count is OK because the handles
// do not interfere with select() loops. On 64-bit Unix hosts this value is
// also OK, because up to that amount LevelDB will use an mmap
// implementation that does not use extra file descriptors (the fds are
// closed after being mmaped).
//
// Increasing the value beyond the mmap count is dangerous because LevelDB will
// fall back to a non-mmap implementation when the file count is too large (thus contending select()).
// On 32-bit Unix host we should decrease the value because the handles use
// up real fds, and we want to avoid fd exhaustion issues.
//
// See PR #12495 for further discussion.
int default_open_files = 400;
#ifndef WIN32
if (sizeof(void*) < 8) {
options->max_open_files = 64;
}
#endif
LogPrint(BCLog::LEVELDB, "LevelDB using max_open_files=%d (default=%d)\n",
options->max_open_files, default_open_files);
}
static leveldb::Options GetOptions(size_t nCacheSize)
{
leveldb::Options options;
auto write_cache = std::min(nCacheSize / 4, size_t(32 * 1024 * 1024)); // cap write_cache
options.block_cache = leveldb::NewLRUCache(nCacheSize - write_cache * 2);
options.write_buffer_size = write_cache; // up to two write buffers may be held in memory simultaneously
options.filter_policy = leveldb::NewBloomFilterPolicy(12);
options.compression = leveldb::kNoCompression;
options.info_log = new CBitcoinLevelDBLogger();
if (leveldb::kMajorVersion > 1 || (leveldb::kMajorVersion == 1 && leveldb::kMinorVersion >= 16)) {
// LevelDB versions before 1.16 consider short writes to be corruption. Only trigger error
// on corruption in later versions.
options.paranoid_checks = true;
}
SetMaxOpenFiles(&options);
return options;
}
CDBWrapper::CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory, bool fWipe, bool obfuscate)
: m_name(fs::basename(path)), ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION)
{
penv = nullptr;
readoptions.verify_checksums = true;
iteroptions.verify_checksums = true;
iteroptions.fill_cache = false;
syncoptions.sync = true;
options = GetOptions(nCacheSize);
options.create_if_missing = true;
if (fMemory) {
penv = leveldb::NewMemEnv(leveldb::Env::Default());
options.env = penv;
} else {
if (fWipe) {
LogPrintf("Wiping LevelDB in %s\n", path.string());
leveldb::Status result = leveldb::DestroyDB(path.string(), options);
dbwrapper_private::HandleError(result);
}
TryCreateDirectories(path);
LogPrintf("Opening LevelDB in %s\n", path.string());
}
leveldb::Status status = leveldb::DB::Open(options, path.string(), &pdb);
dbwrapper_private::HandleError(status);
LogPrintf("Opened LevelDB successfully\n");
if (gArgs.GetBoolArg("-forcecompactdb", false)) {
LogPrintf("Starting database compaction of %s\n", path.string());
pdb->CompactRange(nullptr, nullptr);
LogPrintf("Finished database compaction of %s\n", path.string());
}
// The base-case obfuscation key, which is a noop.
obfuscate_key = std::vector<unsigned char>(OBFUSCATE_KEY_NUM_BYTES, '\000');
bool key_exists = Read(OBFUSCATE_KEY_KEY, obfuscate_key);
if (!key_exists && obfuscate && IsEmpty()) {
// Initialize non-degenerate obfuscation if it won't upset
// existing, non-obfuscated data.
std::vector<unsigned char> new_key = CreateObfuscateKey();
// Write `new_key` so we don't obfuscate the key with itself
Write(OBFUSCATE_KEY_KEY, new_key);
obfuscate_key = new_key;
LogPrintf("Wrote new obfuscate key for %s: %s\n", path.string(), HexStr(obfuscate_key));
}
LogPrintf("Using obfuscation key for %s: %s\n", path.string(), HexStr(obfuscate_key));
}
CDBWrapper::~CDBWrapper()
{
delete pdb;
pdb = nullptr;
delete options.filter_policy;
options.filter_policy = nullptr;
delete options.info_log;
options.info_log = nullptr;
delete options.block_cache;
options.block_cache = nullptr;
delete penv;
options.env = nullptr;
}
bool CDBWrapper::Sync() {
CDBBatch batch(*this);
return WriteBatch(batch, true);
}
bool CDBWrapper::WriteBatch(CDBBatch& batch, bool fSync)
{
if (!pdb)
return false;
const bool log_memory = LogAcceptCategory(BCLog::LEVELDB);
double mem_before = 0;
if (log_memory) {
mem_before = DynamicMemoryUsage() / 1024.0 / 1024;
}
leveldb::Status status = pdb->Write(fSync ? syncoptions : writeoptions, &batch.batch);
dbwrapper_private::HandleError(status);
if (log_memory) {
double mem_after = DynamicMemoryUsage() / 1024.0 / 1024;
LogPrint(BCLog::LEVELDB, "WriteBatch memory usage: db=%s, before=%.1fMiB, after=%.1fMiB\n",
m_name, mem_before, mem_after);
}
return true;
}
size_t CDBWrapper::DynamicMemoryUsage() const {
std::string memory;
if (!pdb->GetProperty("leveldb.approximate-memory-usage", &memory)) {
LogPrint(BCLog::LEVELDB, "Failed to get approximate-memory-usage property\n");
return 0;
}
return stoul(memory);
}
// Prefixed with null character to avoid collisions with other keys
//
// We must use a string constructor which specifies length so that we copy
// past the null-terminator.
const std::string CDBWrapper::OBFUSCATE_KEY_KEY("\000obfuscate_key", 14);
const unsigned int CDBWrapper::OBFUSCATE_KEY_NUM_BYTES = 8;
/**
* Returns a string (consisting of 8 random bytes) suitable for use as an
* obfuscating XOR key.
*/
std::vector<unsigned char> CDBWrapper::CreateObfuscateKey() const
{
unsigned char buff[OBFUSCATE_KEY_NUM_BYTES];
GetRandBytes(buff, OBFUSCATE_KEY_NUM_BYTES);
return std::vector<unsigned char>(&buff[0], &buff[OBFUSCATE_KEY_NUM_BYTES]);
}
bool CDBWrapper::IsEmpty()
{
std::unique_ptr<CDBIterator> it(NewIterator());
it->SeekToFirst();
return !(it->Valid());
}
CDBIterator::~CDBIterator() { delete piter; }
bool CDBIterator::Valid() const { return piter->Valid(); }
void CDBIterator::SeekToFirst() { piter->SeekToFirst(); }
void CDBIterator::Next() { piter->Next(); }
namespace dbwrapper_private {
void HandleError(const leveldb::Status& status)
{
if (status.ok())
return;
const std::string errmsg = "Fatal LevelDB error: " + status.ToString();
LogPrintf("%s\n", errmsg);
LogPrintf("You can use -debug=leveldb to get more complete diagnostic messages\n");
throw dbwrapper_error(errmsg);
}
const std::vector<unsigned char>& GetObfuscateKey(const CDBWrapper &w)
{
return w.obfuscate_key;
}
} // namespace dbwrapper_private

View file

@ -1,350 +0,0 @@
// Copyright (c) 2012-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_DBWRAPPER_H
#define BITCOIN_DBWRAPPER_H
#include <clientversion.h>
#include <fs.h>
#include <serialize.h>
#include <streams.h>
#include <util.h>
#include <utilstrencodings.h>
#include <version.h>
#include <leveldb/db.h>
#include <leveldb/write_batch.h>
class dbwrapper_error : public std::runtime_error
{
public:
explicit dbwrapper_error(const std::string& msg) : std::runtime_error(msg) {}
};
class CDBWrapper;
/** These should be considered an implementation detail of the specific database.
*/
namespace dbwrapper_private {
/** Handle database error by throwing dbwrapper_error exception.
*/
void HandleError(const leveldb::Status& status);
/** Work around circular dependency, as well as for testing in dbwrapper_tests.
* Database obfuscation should be considered an implementation detail of the
* specific database.
*/
const std::vector<unsigned char>& GetObfuscateKey(const CDBWrapper &w);
};
class CDBIterator
{
private:
const CDBWrapper &parent;
leveldb::Iterator *piter;
public:
/**
* @param[in] _parent Parent CDBWrapper instance.
* @param[in] _piter The original leveldb iterator.
*/
CDBIterator(const CDBWrapper &_parent, leveldb::Iterator *_piter) :
parent(_parent), piter(_piter) { };
~CDBIterator();
bool Valid() const;
void SeekToFirst();
template<typename K> void Seek(const K& key) {
CDataStream ssKey(SER_DISK, CLIENT_VERSION);
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
piter->Seek(slKey);
}
void Next();
template<typename K> bool GetKey(K& key) {
leveldb::Slice slKey = piter->key();
try {
CDataStream ssKey(slKey.data(), slKey.data() + slKey.size(), SER_DISK, CLIENT_VERSION);
ssKey >> key;
} catch (const std::exception&) {
return false;
}
return true;
}
template<typename V> bool GetValue(V& value) {
leveldb::Slice slValue = piter->value();
try {
CDataStream ssValue(slValue.data(), slValue.data() + slValue.size(), SER_DISK, CLIENT_VERSION);
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
ssValue >> value;
} catch (const std::exception&) {
return false;
}
return true;
}
unsigned int GetValueSize() {
return piter->value().size();
}
};
class CDBBatch;
class CDBWrapper
{
friend const std::vector<unsigned char>& dbwrapper_private::GetObfuscateKey(const CDBWrapper &w);
private:
//! custom environment this database is using (may be nullptr in case of default environment)
leveldb::Env* penv;
//! database options used
leveldb::Options options;
//! options used when reading from the database
leveldb::ReadOptions readoptions;
//! options used when iterating over values of the database
leveldb::ReadOptions iteroptions;
//! options used when writing to the database
leveldb::WriteOptions writeoptions;
//! options used when sync writing to the database
leveldb::WriteOptions syncoptions;
//! the database itself
leveldb::DB* pdb;
//! the name of this database
std::string m_name;
//! a key used for optional XOR-obfuscation of the database
std::vector<unsigned char> obfuscate_key;
//! the key under which the obfuscation key is stored
static const std::string OBFUSCATE_KEY_KEY;
//! the length of the obfuscate key in number of bytes
static const unsigned int OBFUSCATE_KEY_NUM_BYTES;
std::vector<unsigned char> CreateObfuscateKey() const;
public:
mutable CDataStream ssKey, ssValue;
/**
* @param[in] path Location in the filesystem where leveldb data will be stored.
* @param[in] nCacheSize Configures various leveldb cache settings.
* @param[in] fMemory If true, use leveldb's memory environment.
* @param[in] fWipe If true, remove all existing data.
* @param[in] obfuscate If true, store data obfuscated via simple XOR. If false, XOR
* with a zero'd byte array.
*/
CDBWrapper(const fs::path& path, size_t nCacheSize, bool fMemory = false, bool fWipe = false, bool obfuscate = false);
virtual ~CDBWrapper();
CDBWrapper(const CDBWrapper&) = delete;
/* CDBWrapper& operator=(const CDBWrapper&) = delete; */
template <typename K, typename V>
bool Read(const K& key, V& value) const
{
assert(ssKey.empty());
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) {
if (status.IsNotFound())
return false;
LogPrintf("LevelDB read failure: %s\n", status.ToString());
dbwrapper_private::HandleError(status);
}
try {
CDataStream ssValue(strValue.data(), strValue.data() + strValue.size(), SER_DISK, CLIENT_VERSION);
ssValue.Xor(obfuscate_key);
ssValue >> value;
} catch (const std::exception&) {
return false;
}
return true;
}
template <typename K, typename V>
bool Write(const K& key, const V& value, bool fSync = false);
template <typename K>
bool Exists(const K& key) const
{
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
std::string strValue;
leveldb::Status status = pdb->Get(readoptions, slKey, &strValue);
ssKey.clear();
if (!status.ok()) {
if (status.IsNotFound())
return false;
LogPrintf("LevelDB read failure: %s\n", status.ToString());
dbwrapper_private::HandleError(status);
}
return true;
}
template <typename K>
bool Erase(const K& key, bool fSync = false);
bool WriteBatch(CDBBatch& batch, bool fSync = false);
// Get an estimate of LevelDB memory usage (in bytes).
size_t DynamicMemoryUsage() const;
// not available for LevelDB; provide for compatibility with BDB
bool Flush()
{
return true;
}
bool Sync();
CDBIterator *NewIterator()
{
return new CDBIterator(*this, pdb->NewIterator(iteroptions));
}
/**
* Return true if the database managed by this class contains no entries.
*/
bool IsEmpty();
template<typename K>
size_t EstimateSize(const K& key_begin, const K& key_end) const
{
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(ssKey.capacity());
ssKey2.reserve(ssKey.capacity());
ssKey1 << key_begin;
ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
leveldb::Slice slKey2(ssKey2.data(), ssKey2.size());
uint64_t size = 0;
leveldb::Range range(slKey1, slKey2);
pdb->GetApproximateSizes(&range, 1, &size);
return size;
}
/**
* Compact a certain range of keys in the database.
*/
template<typename K>
void CompactRange(const K& key_begin, const K& key_end) const
{
CDataStream ssKey1(SER_DISK, CLIENT_VERSION), ssKey2(SER_DISK, CLIENT_VERSION);
ssKey1.reserve(ssKey.capacity());
ssKey2.reserve(ssKey.capacity());
ssKey1 << key_begin;
ssKey2 << key_end;
leveldb::Slice slKey1(ssKey1.data(), ssKey1.size());
leveldb::Slice slKey2(ssKey2.data(), ssKey2.size());
pdb->CompactRange(&slKey1, &slKey2);
}
};
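// Hypothetical usage sketch (not part of this changeset): iterate over all
// entries whose key starts with an arbitrary one-byte prefix 'c'. The function
// name, prefix, and key/value types are assumptions made for this illustration.
static void DumpPrefixedEntries(CDBWrapper& db)
{
    std::unique_ptr<CDBIterator> it(db.NewIterator());
    for (it->Seek('c'); it->Valid(); it->Next()) {
        std::pair<char, int> key;
        if (!it->GetKey(key) || key.first != 'c') break; // left the prefix range
        std::string value;
        if (it->GetValue(value)) {
            // value has been de-obfuscated and deserialized at this point
        }
    }
}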
/** Batch of changes queued to be written to a CDBWrapper */
class CDBBatch
{
friend class CDBWrapper;
const CDBWrapper &parent;
leveldb::WriteBatch batch;
size_t size_estimate;
CDataStream ssKey, ssValue;
public:
/**
* @param[in] _parent CDBWrapper that this batch is to be submitted to
*/
explicit CDBBatch(const CDBWrapper &_parent) : parent(_parent), size_estimate(0),
ssKey(SER_DISK, CLIENT_VERSION), ssValue(SER_DISK, CLIENT_VERSION) {
ssKey.reserve(parent.ssKey.capacity());
ssValue.reserve(parent.ssValue.capacity());
};
void Clear()
{
batch.Clear();
size_estimate = 0;
}
template <typename K, typename V>
void Write(const K& key, const V& value)
{
assert(ssKey.empty());
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
assert(ssValue.empty());
ssValue << value;
ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
leveldb::Slice slValue(ssValue.data(), ssValue.size());
batch.Put(slKey, slValue);
// LevelDB serializes writes as:
// - byte: header
// - varint: key length (1 byte up to 127B, 2 bytes up to 16383B, ...)
// - byte[]: key
// - varint: value length
// - byte[]: value
// The formula below assumes the key and value are both less than 16k.
size_estimate += 3 + (slKey.size() > 127) + slKey.size() + (slValue.size() > 127) + slValue.size();
ssKey.clear();
ssValue.clear();
}
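// Illustrative worked example of the size_estimate formula in Write() above
// (not part of this changeset): a 33-byte key with a 200-byte value adds
// 3 + 0 + 33 + 1 + 200 = 237 bytes, because only the value needs a
// two-byte varint length (200 > 127).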
template <typename K>
void Erase(const K& key)
{
ssKey << key;
leveldb::Slice slKey(ssKey.data(), ssKey.size());
batch.Delete(slKey);
// LevelDB serializes erases as:
// - byte: header
// - varint: key length
// - byte[]: key
// The formula below assumes the key is less than 16kB.
size_estimate += 2 + (slKey.size() > 127) + slKey.size();
ssKey.clear();
}
size_t SizeEstimate() const { return size_estimate; }
};
template<typename K>
bool CDBWrapper::Erase(const K &key, bool fSync) {
CDBBatch batch(*this);
batch.Erase(key);
return WriteBatch(batch, fSync);
}
template<typename K, typename V>
bool CDBWrapper::Write(const K &key, const V &value, bool fSync) {
CDBBatch batch(*this);
batch.Write(key, value);
return WriteBatch(batch, fSync);
}
#endif // BITCOIN_DBWRAPPER_H
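// Hypothetical usage sketch (not part of this changeset): open a throwaway
// in-memory database, write one record, and read it back. The database name,
// key prefix 'e', and stored string are arbitrary choices for this illustration.
static void DBWrapperExample()
{
    CDBWrapper db(GetDataDir() / "example_db", /*nCacheSize=*/ 1 << 20,
                  /*fMemory=*/ true, /*fWipe=*/ true, /*obfuscate=*/ true);
    db.Write(std::make_pair('e', 123), std::string("hello"));

    std::string value;
    if (db.Read(std::make_pair('e', 123), value)) {
        // value == "hello"; the XOR obfuscation is transparent to callers
    }
}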

View file

@ -1,278 +0,0 @@
// Copyright (c) 2017-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <chainparams.h>
#include <index/base.h>
#include <shutdown.h>
#include <tinyformat.h>
#include <ui_interface.h>
#include <util.h>
#include <validation.h>
#include <warnings.h>
constexpr char DB_BEST_BLOCK = 'B';
constexpr int64_t SYNC_LOG_INTERVAL = 30; // seconds
constexpr int64_t SYNC_LOCATOR_WRITE_INTERVAL = 30; // seconds
template<typename... Args>
static void FatalError(const char* fmt, const Args&... args)
{
std::string strMessage = tfm::format(fmt, args...);
SetMiscWarning(strMessage);
LogPrintf("*** %s\n", strMessage);
uiInterface.ThreadSafeMessageBox(
"Error: A fatal internal error occurred, see debug.log for details",
"", CClientUIInterface::MSG_ERROR);
StartShutdown();
}
BaseIndex::DB::DB(const fs::path& path, size_t n_cache_size, bool f_memory, bool f_wipe, bool f_obfuscate) :
CDBWrapper(path, n_cache_size, f_memory, f_wipe, f_obfuscate)
{}
bool BaseIndex::DB::ReadBestBlock(CBlockLocator& locator) const
{
bool success = Read(DB_BEST_BLOCK, locator);
if (!success) {
locator.SetNull();
}
return success;
}
bool BaseIndex::DB::WriteBestBlock(const CBlockLocator& locator)
{
return Write(DB_BEST_BLOCK, locator);
}
BaseIndex::~BaseIndex()
{
Interrupt();
Stop();
}
bool BaseIndex::Init()
{
CBlockLocator locator;
if (!GetDB().ReadBestBlock(locator)) {
locator.SetNull();
}
LOCK(cs_main);
m_best_block_index = FindForkInGlobalIndex(chainActive, locator);
m_synced = m_best_block_index.load() == chainActive.Tip();
return true;
}
static const CBlockIndex* NextSyncBlock(const CBlockIndex* pindex_prev)
{
AssertLockHeld(cs_main);
if (!pindex_prev) {
return chainActive.Genesis();
}
const CBlockIndex* pindex = chainActive.Next(pindex_prev);
if (pindex) {
return pindex;
}
return chainActive.Next(chainActive.FindFork(pindex_prev));
}
void BaseIndex::ThreadSync()
{
const CBlockIndex* pindex = m_best_block_index.load();
if (!m_synced) {
auto& consensus_params = Params().GetConsensus();
int64_t last_log_time = 0;
int64_t last_locator_write_time = 0;
while (true) {
if (m_interrupt) {
WriteBestBlock(pindex);
return;
}
{
LOCK(cs_main);
const CBlockIndex* pindex_next = NextSyncBlock(pindex);
if (!pindex_next) {
WriteBestBlock(pindex);
m_best_block_index = pindex;
m_synced = true;
break;
}
pindex = pindex_next;
}
int64_t current_time = GetTime();
if (last_log_time + SYNC_LOG_INTERVAL < current_time) {
LogPrintf("Syncing %s with block chain from height %d\n",
GetName(), pindex->nHeight);
last_log_time = current_time;
}
if (last_locator_write_time + SYNC_LOCATOR_WRITE_INTERVAL < current_time) {
WriteBestBlock(pindex);
last_locator_write_time = current_time;
}
CBlock block;
if (!ReadBlockFromDisk(block, pindex, consensus_params)) {
FatalError("%s: Failed to read block %s from disk",
__func__, pindex->GetBlockHash().ToString());
return;
}
if (!WriteBlock(block, pindex)) {
FatalError("%s: Failed to write block %s to index database",
__func__, pindex->GetBlockHash().ToString());
return;
}
}
}
if (pindex) {
LogPrintf("%s is enabled at height %d\n", GetName(), pindex->nHeight);
} else {
LogPrintf("%s is enabled\n", GetName());
}
}
bool BaseIndex::WriteBestBlock(const CBlockIndex* block_index)
{
LOCK(cs_main);
if (!GetDB().WriteBestBlock(chainActive.GetLocator(block_index))) {
return error("%s: Failed to write locator to disk", __func__);
}
return true;
}
void BaseIndex::BlockConnected(const std::shared_ptr<const CBlock>& block, const CBlockIndex* pindex,
const std::vector<CTransactionRef>& txn_conflicted)
{
if (!m_synced) {
return;
}
const CBlockIndex* best_block_index = m_best_block_index.load();
if (!best_block_index) {
if (pindex->nHeight != 0) {
FatalError("%s: First block connected is not the genesis block (height=%d)",
__func__, pindex->nHeight);
return;
}
} else {
// Ensure block connects to an ancestor of the current best block. This should be the case
// most of the time, but may not be immediately after the sync thread catches up and sets
// m_synced. Consider the case where there is a reorg and the blocks on the stale branch are
// in the ValidationInterface queue backlog even after the sync thread has caught up to the
// new chain tip. In this unlikely event, log a warning and let the queue clear.
if (best_block_index->GetAncestor(pindex->nHeight - 1) != pindex->pprev) {
LogPrintf("%s: WARNING: Block %s does not connect to an ancestor of " /* Continued */
"known best chain (tip=%s); not updating index\n",
__func__, pindex->GetBlockHash().ToString(),
best_block_index->GetBlockHash().ToString());
return;
}
}
if (WriteBlock(*block, pindex)) {
m_best_block_index = pindex;
} else {
FatalError("%s: Failed to write block %s to index",
__func__, pindex->GetBlockHash().ToString());
return;
}
}
void BaseIndex::ChainStateFlushed(const CBlockLocator& locator)
{
if (!m_synced) {
return;
}
const uint256& locator_tip_hash = locator.vHave.front();
const CBlockIndex* locator_tip_index;
{
LOCK(cs_main);
locator_tip_index = LookupBlockIndex(locator_tip_hash);
}
if (!locator_tip_index) {
FatalError("%s: First block (hash=%s) in locator was not found",
__func__, locator_tip_hash.ToString());
return;
}
// This checks that ChainStateFlushed callbacks are received after BlockConnected. The check may fail
// immediately after the sync thread catches up and sets m_synced. Consider the case where
// there is a reorg and the blocks on the stale branch are in the ValidationInterface queue
// backlog even after the sync thread has caught up to the new chain tip. In this unlikely
// event, log a warning and let the queue clear.
const CBlockIndex* best_block_index = m_best_block_index.load();
if (best_block_index->GetAncestor(locator_tip_index->nHeight) != locator_tip_index) {
LogPrintf("%s: WARNING: Locator contains block (hash=%s) not on known best " /* Continued */
"chain (tip=%s); not writing index locator\n",
__func__, locator_tip_hash.ToString(),
best_block_index->GetBlockHash().ToString());
return;
}
if (!GetDB().WriteBestBlock(locator)) {
error("%s: Failed to write locator to disk", __func__);
}
}
bool BaseIndex::BlockUntilSyncedToCurrentChain()
{
AssertLockNotHeld(cs_main);
if (!m_synced) {
return false;
}
{
// Skip the queue-draining stuff if we know we're caught up with
// chainActive.Tip().
LOCK(cs_main);
const CBlockIndex* chain_tip = chainActive.Tip();
const CBlockIndex* best_block_index = m_best_block_index.load();
if (best_block_index->GetAncestor(chain_tip->nHeight) == chain_tip) {
return true;
}
}
LogPrintf("%s: %s is catching up on block notifications\n", __func__, GetName());
SyncWithValidationInterfaceQueue();
return true;
}
void BaseIndex::Interrupt()
{
m_interrupt();
}
void BaseIndex::Start()
{
// Need to register this ValidationInterface before running Init(), so that
// callbacks are not missed if Init sets m_synced to true.
RegisterValidationInterface(this);
if (!Init()) {
FatalError("%s: %s failed to initialize", __func__, GetName());
return;
}
m_thread_sync = std::thread(&TraceThread<std::function<void()>>, GetName(),
std::bind(&BaseIndex::ThreadSync, this));
}
void BaseIndex::Stop()
{
UnregisterValidationInterface(this);
if (m_thread_sync.joinable()) {
m_thread_sync.join();
}
}

View file

@ -1,100 +0,0 @@
// Copyright (c) 2017-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_INDEX_BASE_H
#define BITCOIN_INDEX_BASE_H
#include <dbwrapper.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <threadinterrupt.h>
#include <uint256.h>
#include <validationinterface.h>
class CBlockIndex;
/**
* Base class for indices of blockchain data. This implements
* CValidationInterface and ensures blocks are indexed sequentially according
* to their position in the active chain.
*/
class BaseIndex : public CValidationInterface
{
protected:
class DB : public CDBWrapper
{
public:
DB(const fs::path& path, size_t n_cache_size,
bool f_memory = false, bool f_wipe = false, bool f_obfuscate = false);
~DB() override {}
/// Read block locator of the chain that the index is in sync with.
bool ReadBestBlock(CBlockLocator& locator) const;
/// Write block locator of the chain that the index is in sync with.
bool WriteBestBlock(const CBlockLocator& locator);
};
private:
/// Whether the index is in sync with the main chain. The flag is flipped
/// from false to true once, after which point this starts processing
/// ValidationInterface notifications to stay in sync.
std::atomic<bool> m_synced{false};
/// The last block in the chain that the index is in sync with.
std::atomic<const CBlockIndex*> m_best_block_index{nullptr};
std::thread m_thread_sync;
CThreadInterrupt m_interrupt;
/// Sync the index with the block index starting from the current best block.
/// Intended to be run in its own thread, m_thread_sync, and can be
/// interrupted with m_interrupt. Once the index gets in sync, the m_synced
/// flag is set and the BlockConnected ValidationInterface callback takes
/// over and the sync thread exits.
void ThreadSync();
/// Write the current chain block locator to the DB.
bool WriteBestBlock(const CBlockIndex* block_index);
protected:
void BlockConnected(const std::shared_ptr<const CBlock>& block, const CBlockIndex* pindex,
const std::vector<CTransactionRef>& txn_conflicted) override;
void ChainStateFlushed(const CBlockLocator& locator) override;
/// Initialize internal state from the database and block index.
virtual bool Init();
/// Write index entries for a newly connected block.
virtual bool WriteBlock(const CBlock& block, const CBlockIndex* pindex) { return true; }
virtual DB& GetDB() const = 0;
/// Get the name of the index for display in logs.
virtual const char* GetName() const = 0;
public:
/// Destructor interrupts sync thread if running and blocks until it exits.
virtual ~BaseIndex();
/// Blocks the current thread until the index is caught up to the current
/// state of the block chain. This only blocks if the index has gotten in
/// sync once and only needs to process blocks in the ValidationInterface
/// queue. If the index is catching up from far behind, this method does
/// not block and immediately returns false.
bool BlockUntilSyncedToCurrentChain();
void Interrupt();
/// Start initializes the sync state and registers the instance as a
/// ValidationInterface so that it stays in sync with blockchain updates.
void Start();
/// Stops the instance from staying in sync with blockchain updates.
void Stop();
};
#endif // BITCOIN_INDEX_BASE_H
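// Hypothetical sketch (not part of this changeset): a minimal index built on
// BaseIndex that records each connected block's hash, keyed by height. The
// class name, the 'h' key prefix, and the data directory layout are assumptions
// made for this illustration; it also assumes <chain.h> is available for the
// full CBlockIndex definition.
class BlockHashIndex final : public BaseIndex
{
    const std::unique_ptr<DB> m_db;

protected:
    bool WriteBlock(const CBlock& block, const CBlockIndex* pindex) override
    {
        CDBBatch batch(*m_db);
        batch.Write(std::make_pair('h', pindex->nHeight), pindex->GetBlockHash());
        return m_db->WriteBatch(batch);
    }
    DB& GetDB() const override { return *m_db; }
    const char* GetName() const override { return "blockhashindex"; }

public:
    explicit BlockHashIndex(const fs::path& path, size_t n_cache_size)
        : m_db(new DB(path, n_cache_size)) {}
};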

View file

@ -1,262 +0,0 @@
// Copyright (c) 2017-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <index/txindex.h>
#include <shutdown.h>
#include <ui_interface.h>
#include <util.h>
#include <validation.h>
#include <boost/thread.hpp>
constexpr char DB_BEST_BLOCK = 'B';
constexpr char DB_TXINDEX = 't';
constexpr char DB_TXINDEX_BLOCK = 'T';
std::unique_ptr<TxIndex> g_txindex;
/**
* Access to the txindex database (indexes/txindex/)
*
* The database stores a block locator of the chain the database is synced to
* so that the TxIndex can efficiently determine the point it last stopped at.
* A locator is used instead of a simple hash of the chain tip because blocks
* and block index entries may not be flushed to disk until after this database
* is updated.
*/
class TxIndex::DB : public BaseIndex::DB
{
public:
explicit DB(size_t n_cache_size, bool f_memory = false, bool f_wipe = false);
~DB() override {}
/// Read the disk location of the transaction data with the given hash. Returns false if the
/// transaction hash is not indexed.
bool ReadTxPos(const uint256& txid, CDiskTxPos& pos) const;
/// Write a batch of transaction positions to the DB.
bool WriteTxs(const std::vector<std::pair<uint256, CDiskTxPos>>& v_pos);
/// Migrate txindex data from the block tree DB, where it may still reside for nodes that have
/// not yet been upgraded, to the new database.
bool MigrateData(CBlockTreeDB& block_tree_db, const CBlockLocator& best_locator);
};
TxIndex::DB::DB(size_t n_cache_size, bool f_memory, bool f_wipe) :
BaseIndex::DB(GetDataDir() / "indexes" / "txindex", n_cache_size, f_memory, f_wipe)
{}
bool TxIndex::DB::ReadTxPos(const uint256 &txid, CDiskTxPos& pos) const
{
return Read(std::make_pair(DB_TXINDEX, txid), pos);
}
bool TxIndex::DB::WriteTxs(const std::vector<std::pair<uint256, CDiskTxPos>>& v_pos)
{
CDBBatch batch(*this);
for (const auto& tuple : v_pos) {
batch.Write(std::make_pair(DB_TXINDEX, tuple.first), tuple.second);
}
return WriteBatch(batch);
}
/*
* Safely persist a transfer of data from the old txindex database to the new one, and compact the
* range of keys updated. This is used internally by MigrateData.
*/
static void WriteTxIndexMigrationBatches(CDBWrapper& newdb, CDBWrapper& olddb,
CDBBatch& batch_newdb, CDBBatch& batch_olddb,
const std::pair<unsigned char, uint256>& begin_key,
const std::pair<unsigned char, uint256>& end_key)
{
// Sync new DB changes to disk before deleting from old DB.
newdb.WriteBatch(batch_newdb, /*fSync=*/ true);
olddb.WriteBatch(batch_olddb);
olddb.CompactRange(begin_key, end_key);
batch_newdb.Clear();
batch_olddb.Clear();
}
bool TxIndex::DB::MigrateData(CBlockTreeDB& block_tree_db, const CBlockLocator& best_locator)
{
// The prior implementation of txindex was always in sync with block index
// and presence was indicated with a boolean DB flag. If the flag is set,
// this means the txindex from a previous version is valid and in sync with
// the chain tip. The first step of the migration is to unset the flag and
// write the chain hash to a separate key, DB_TXINDEX_BLOCK. After that, the
// index entries are copied over in batches to the new database. Finally,
// DB_TXINDEX_BLOCK is erased from the old database and the block hash is
// written to the new database.
//
// Unsetting the boolean flag ensures that if the node is downgraded to a
// previous version, it will not see a corrupted, partially migrated index
// -- it will see that the txindex is disabled. When the node is upgraded
// again, the migration will pick up where it left off and sync to the block
// with hash DB_TXINDEX_BLOCK.
bool f_legacy_flag = false;
block_tree_db.ReadFlag("txindex", f_legacy_flag);
if (f_legacy_flag) {
if (!block_tree_db.Write(DB_TXINDEX_BLOCK, best_locator)) {
return error("%s: cannot write block indicator", __func__);
}
if (!block_tree_db.WriteFlag("txindex", false)) {
return error("%s: cannot write block index db flag", __func__);
}
}
CBlockLocator locator;
if (!block_tree_db.Read(DB_TXINDEX_BLOCK, locator)) {
return true;
}
int64_t count = 0;
LogPrintf("Upgrading txindex database... [0%%]\n");
uiInterface.ShowProgress(_("Upgrading txindex database"), 0, true);
int report_done = 0;
const size_t batch_size = 1 << 24; // 16 MiB
CDBBatch batch_newdb(*this);
CDBBatch batch_olddb(block_tree_db);
std::pair<unsigned char, uint256> key;
std::pair<unsigned char, uint256> begin_key{DB_TXINDEX, uint256()};
std::pair<unsigned char, uint256> prev_key = begin_key;
bool interrupted = false;
std::unique_ptr<CDBIterator> cursor(block_tree_db.NewIterator());
for (cursor->Seek(begin_key); cursor->Valid(); cursor->Next()) {
boost::this_thread::interruption_point();
if (ShutdownRequested()) {
interrupted = true;
break;
}
if (!cursor->GetKey(key)) {
return error("%s: cannot get key from valid cursor", __func__);
}
if (key.first != DB_TXINDEX) {
break;
}
// Log progress every 10%.
if (++count % 256 == 0) {
// Since txids are uniformly random and traversed in increasing order, the high 16 bits
// of the hash can be used to estimate the current progress.
const uint256& txid = key.second;
uint32_t high_nibble =
(static_cast<uint32_t>(*(txid.begin() + 0)) << 8) +
(static_cast<uint32_t>(*(txid.begin() + 1)) << 0);
int percentage_done = (int)(high_nibble * 100.0 / 65536.0 + 0.5);
uiInterface.ShowProgress(_("Upgrading txindex database"), percentage_done, true);
if (report_done < percentage_done/10) {
LogPrintf("Upgrading txindex database... [%d%%]\n", percentage_done);
report_done = percentage_done/10;
}
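// Illustrative example (not part of this changeset): a txid whose first two
// bytes are 0x80 0x00 gives high_nibble = 0x8000 = 32768, which reports
// roughly 50% progress (32768 * 100 / 65536).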
}
CDiskTxPos value;
if (!cursor->GetValue(value)) {
return error("%s: cannot parse txindex record", __func__);
}
batch_newdb.Write(key, value);
batch_olddb.Erase(key);
if (batch_newdb.SizeEstimate() > batch_size || batch_olddb.SizeEstimate() > batch_size) {
// NOTE: it's OK to delete the key pointed at by the current DB cursor while iterating
// because LevelDB iterators are guaranteed to provide a consistent view of the
// underlying data, like a lightweight snapshot.
WriteTxIndexMigrationBatches(*this, block_tree_db,
batch_newdb, batch_olddb,
prev_key, key);
prev_key = key;
}
}
// If these final DB batches complete the migration, write the best block
// hash marker to the new database and delete from the old one. This signals
// that the former is fully caught up to that point in the blockchain and
// that all txindex entries have been removed from the latter.
if (!interrupted) {
batch_olddb.Erase(DB_TXINDEX_BLOCK);
batch_newdb.Write(DB_BEST_BLOCK, locator);
}
WriteTxIndexMigrationBatches(*this, block_tree_db,
batch_newdb, batch_olddb,
begin_key, key);
if (interrupted) {
LogPrintf("[CANCELLED].\n");
return false;
}
uiInterface.ShowProgress("", 100, false);
LogPrintf("[DONE].\n");
return true;
}
TxIndex::TxIndex(size_t n_cache_size, bool f_memory, bool f_wipe)
: m_db(MakeUnique<TxIndex::DB>(n_cache_size, f_memory, f_wipe))
{}
TxIndex::~TxIndex() {}
bool TxIndex::Init()
{
LOCK(cs_main);
// Attempt to migrate txindex from the old database to the new one. Even if
// chain_tip is null, the node could be reindexing and we still want to
// delete txindex records in the old database.
if (!m_db->MigrateData(*pblocktree, chainActive.GetLocator())) {
return false;
}
return BaseIndex::Init();
}
bool TxIndex::WriteBlock(const CBlock& block, const CBlockIndex* pindex)
{
CDiskTxPos pos(pindex->GetBlockPos(), GetSizeOfCompactSize(block.vtx.size()));
std::vector<std::pair<uint256, CDiskTxPos>> vPos;
vPos.reserve(block.vtx.size());
for (const auto& tx : block.vtx) {
vPos.emplace_back(tx->GetHash(), pos);
pos.nTxOffset += ::GetSerializeSize(*tx, SER_DISK, CLIENT_VERSION);
}
return m_db->WriteTxs(vPos);
}
BaseIndex::DB& TxIndex::GetDB() const { return *m_db; }
bool TxIndex::FindTx(const uint256& tx_hash, uint256& block_hash, CTransactionRef& tx) const
{
CDiskTxPos postx;
if (!m_db->ReadTxPos(tx_hash, postx)) {
return false;
}
CAutoFile file(OpenBlockFile(postx, true), SER_DISK, CLIENT_VERSION);
if (file.IsNull()) {
return error("%s: OpenBlockFile failed", __func__);
}
CBlockHeader header;
try {
file >> header;
if (fseek(file.Get(), postx.nTxOffset, SEEK_CUR)) {
return error("%s: fseek(...) failed", __func__);
}
file >> tx;
} catch (const std::exception& e) {
return error("%s: Deserialize or I/O error - %s", __func__, e.what());
}
if (tx->GetHash() != tx_hash) {
return error("%s: txid mismatch", __func__);
}
block_hash = header.GetHash();
return true;
}

View file

@ -1,79 +0,0 @@
// Copyright (c) 2017-2018 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_INDEX_TXINDEX_H
#define BITCOIN_INDEX_TXINDEX_H
#include <chain.h>
#include <index/base.h>
#include <txdb.h>
struct CDiskTxPos : public CDiskBlockPos
{
unsigned int nTxOffset; // after header
ADD_SERIALIZE_METHODS;
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
READWRITEAS(CDiskBlockPos, *this);
READWRITE(VARINT(nTxOffset));
}
CDiskTxPos(const CDiskBlockPos &blockIn, unsigned int nTxOffsetIn) : CDiskBlockPos(blockIn.nFile, blockIn.nPos), nTxOffset(nTxOffsetIn) {
}
CDiskTxPos() {
SetNull();
}
void SetNull() {
CDiskBlockPos::SetNull();
nTxOffset = 0;
}
};
/**
* TxIndex is used to look up transactions included in the blockchain by hash.
* The index is written to a LevelDB database and records the filesystem
* location of each transaction by transaction hash.
*/
class TxIndex final : public BaseIndex
{
protected:
class DB;
private:
const std::unique_ptr<DB> m_db;
protected:
/// Override base class init to migrate from old database.
bool Init() override;
bool WriteBlock(const CBlock& block, const CBlockIndex* pindex) override;
BaseIndex::DB& GetDB() const override;
const char* GetName() const override { return "txindex"; }
public:
/// Constructs the index, which becomes available to be queried.
explicit TxIndex(size_t n_cache_size, bool f_memory = false, bool f_wipe = false);
// Destructor is declared because this class contains a unique_ptr to an incomplete type.
virtual ~TxIndex() override;
/// Look up a transaction by hash.
///
/// @param[in] tx_hash The hash of the transaction to be returned.
/// @param[out] block_hash The hash of the block the transaction is found in.
/// @param[out] tx The transaction itself.
/// @return true if transaction is found, false otherwise
bool FindTx(const uint256& tx_hash, uint256& block_hash, CTransactionRef& tx) const;
};
/// The global transaction index, used in GetTransaction. May be null.
extern std::unique_ptr<TxIndex> g_txindex;
#endif // BITCOIN_INDEX_TXINDEX_H
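// Hypothetical caller-side sketch (not part of this changeset): look up a
// transaction through the global index, mirroring the pattern an RPC such as
// getrawtransaction would follow when the index is available. The helper name
// is made up for this illustration.
static bool LookupTransaction(const uint256& txid, CTransactionRef& tx, uint256& block_hash)
{
    if (!g_txindex) return false;                 // index not enabled
    g_txindex->BlockUntilSyncedToCurrentChain();  // drain queued block notifications
    return g_txindex->FindTx(txid, block_hash, tx);
}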

View file

@ -14,13 +14,14 @@
#include <chain.h> #include <chain.h>
#include <chainparams.h> #include <chainparams.h>
#include <checkpoints.h> #include <checkpoints.h>
#include <claimtrie.h> #include <claimtrie/forks.h>
#include <claimtrie/hashes.h>
#include <clientversion.h>
#include <compat/sanity.h> #include <compat/sanity.h>
#include <consensus/validation.h> #include <consensus/validation.h>
#include <fs.h> #include <fs.h>
#include <httpserver.h> #include <httpserver.h>
#include <httprpc.h> #include <httprpc.h>
#include <index/txindex.h>
#include <key.h> #include <key.h>
#include <lbry.h> #include <lbry.h>
#include <validation.h> #include <validation.h>
@ -36,6 +37,7 @@
#include <rpc/blockchain.h> #include <rpc/blockchain.h>
#include <script/standard.h> #include <script/standard.h>
#include <script/sigcache.h> #include <script/sigcache.h>
#include <sqlite/hdr/sqlite_modern_cpp/log.h>
#include <scheduler.h> #include <scheduler.h>
#include <shutdown.h> #include <shutdown.h>
#include <timedata.h> #include <timedata.h>
@ -46,11 +48,10 @@
#include <uint256.h> #include <uint256.h>
#include <util.h> #include <util.h>
#include <utilmoneystr.h> #include <utilmoneystr.h>
#include <utilstrencodings.h>
#include <validationinterface.h> #include <validationinterface.h>
#include <warnings.h> #include <warnings.h>
#include <walletinitinterface.h> #include <walletinitinterface.h>
#include <stdint.h>
#include <stdio.h>
#ifndef WIN32 #ifndef WIN32
#include <signal.h> #include <signal.h>
@ -181,9 +182,6 @@ void Interrupt()
InterruptMapPort(); InterruptMapPort();
if (g_connman) if (g_connman)
g_connman->Interrupt(); g_connman->Interrupt();
if (g_txindex) {
g_txindex->Interrupt();
}
} }
void Shutdown() void Shutdown()
@ -212,7 +210,6 @@ void Shutdown()
// using the other before destroying them. // using the other before destroying them.
if (peerLogic) UnregisterValidationInterface(peerLogic.get()); if (peerLogic) UnregisterValidationInterface(peerLogic.get());
if (g_connman) g_connman->Stop(); if (g_connman) g_connman->Stop();
if (g_txindex) g_txindex->Stop();
StopTorControl(); StopTorControl();
@ -225,7 +222,6 @@ void Shutdown()
// destruct and reset all to nullptr. // destruct and reset all to nullptr.
peerLogic.reset(); peerLogic.reset();
g_connman.reset(); g_connman.reset();
g_txindex.reset();
if (g_is_mempool_loaded && gArgs.GetArg("-persistmempool", DEFAULT_PERSIST_MEMPOOL)) { if (g_is_mempool_loaded && gArgs.GetArg("-persistmempool", DEFAULT_PERSIST_MEMPOOL)) {
DumpMempool(); DumpMempool();
@ -353,7 +349,7 @@ void SetupServerArgs()
// Hidden Options // Hidden Options
std::vector<std::string> hidden_args = {"-rpcssl", "-benchmark", "-h", "-help", "-socks", "-tor", "-debugnet", "-whitelistalwaysrelay", std::vector<std::string> hidden_args = {"-rpcssl", "-benchmark", "-h", "-help", "-socks", "-tor", "-debugnet", "-whitelistalwaysrelay",
"-prematurewitness", "-walletprematurewitness", "-promiscuousmempoolflags", "-blockminsize", "-dbcrashratio", "-forcecompactdb", "-usehd", "-prematurewitness", "-walletprematurewitness", "-promiscuousmempoolflags", "-blockminsize", "-forcecompactdb", "-usehd",
// GUI args. These will be overwritten by SetupUIArgs for the GUI // GUI args. These will be overwritten by SetupUIArgs for the GUI
"-allowselfsignedrootcertificates", "-choosedatadir", "-lang=<lang>", "-min", "-resetguisettings", "-rootcertificates=<file>", "-splash", "-uiplatform"}; "-allowselfsignedrootcertificates", "-choosedatadir", "-lang=<lang>", "-min", "-resetguisettings", "-rootcertificates=<file>", "-splash", "-uiplatform"};
@ -369,9 +365,7 @@ void SetupServerArgs()
gArgs.AddArg("-blocksonly", strprintf("Whether to operate in a blocks only mode (default: %u)", DEFAULT_BLOCKSONLY), true, OptionsCategory::OPTIONS); gArgs.AddArg("-blocksonly", strprintf("Whether to operate in a blocks only mode (default: %u)", DEFAULT_BLOCKSONLY), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-conf=<file>", strprintf("Specify configuration file. Relative paths will be prefixed by datadir location. (default: %s)", BITCOIN_CONF_FILENAME), false, OptionsCategory::OPTIONS); gArgs.AddArg("-conf=<file>", strprintf("Specify configuration file. Relative paths will be prefixed by datadir location. (default: %s)", BITCOIN_CONF_FILENAME), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-datadir=<dir>", "Specify data directory", false, OptionsCategory::OPTIONS); gArgs.AddArg("-datadir=<dir>", "Specify data directory", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbbatchsize", strprintf("Maximum database write batch size in bytes (default: %u)", nDefaultDbBatchSize), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-dbcache=<n>", strprintf("Set database cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS); gArgs.AddArg("-dbcache=<n>", strprintf("Set database cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-claimtriecache=<n>", strprintf("Set claim trie cache size in megabytes (%d to %d, default: %d)", nMinDbCache, nMaxDbCache, nDefaultDbCache), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), false, OptionsCategory::OPTIONS); gArgs.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-feefilter", strprintf("Tell other nodes to filter invs to us by our mempool min fee (default: %u)", DEFAULT_FEEFILTER), true, OptionsCategory::OPTIONS); gArgs.AddArg("-feefilter", strprintf("Tell other nodes to filter invs to us by our mempool min fee (default: %u)", DEFAULT_FEEFILTER), true, OptionsCategory::OPTIONS);
gArgs.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", false, OptionsCategory::OPTIONS); gArgs.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", false, OptionsCategory::OPTIONS);
@ -388,7 +382,7 @@ void SetupServerArgs()
#else #else
hidden_args.emplace_back("-pid"); hidden_args.emplace_back("-pid");
#endif #endif
gArgs.AddArg("-prune=<n>", strprintf("Reduce storage requirements by enabling pruning (deleting) of old blocks. This allows the pruneblockchain RPC to be called to delete specific blocks, and enables automatic pruning of old blocks if a target size in MiB is provided. This mode is incompatible with -txindex and -rescan. " gArgs.AddArg("-prune=<n>", strprintf("Reduce storage requirements by enabling pruning (deleting) of old blocks. This allows the pruneblockchain RPC to be called to delete specific blocks, and enables automatic pruning of old blocks if a target size in MiB is provided. This mode is incompatible with -rescan. "
"Warning: Reverting this setting requires re-downloading the entire blockchain. " "Warning: Reverting this setting requires re-downloading the entire blockchain. "
"(default: 0 = disable pruning blocks, 1 = allow manual pruning via RPC, >=%u = automatically prune block files to stay under the specified target size in MiB)", MIN_DISK_SPACE_FOR_BLOCK_FILES / 1024 / 1024), false, OptionsCategory::OPTIONS); "(default: 0 = disable pruning blocks, 1 = allow manual pruning via RPC, >=%u = automatically prune block files to stay under the specified target size in MiB)", MIN_DISK_SPACE_FOR_BLOCK_FILES / 1024 / 1024), false, OptionsCategory::OPTIONS);
gArgs.AddArg("-reindex", "Rebuild chain state and block index from the blk*.dat files on disk", false, OptionsCategory::OPTIONS); gArgs.AddArg("-reindex", "Rebuild chain state and block index from the blk*.dat files on disk", false, OptionsCategory::OPTIONS);
@ -398,9 +392,7 @@ void SetupServerArgs()
#else #else
hidden_args.emplace_back("-sysperms"); hidden_args.emplace_back("-sysperms");
#endif #endif
gArgs.AddArg("-txindex", strprintf("Maintain a full transaction index, used by the getrawtransaction rpc call (default: %u)", DEFAULT_TXINDEX), false, OptionsCategory::OPTIONS); gArgs.AddArg("-txindex", "Deprecated", false, OptionsCategory::HIDDEN);
gArgs.AddArg("-memfile=<GiB>", "Use a memory mapped file for the claimtrie allocations (default: use RAM instead)", false, OptionsCategory::OPTIONS);
gArgs.AddArg("-addnode=<ip>", "Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.", false, OptionsCategory::CONNECTION); gArgs.AddArg("-addnode=<ip>", "Add a node to connect to and attempt to keep the connection open (see the `addnode` RPC command help for more info). This option can be specified multiple times to add multiple nodes.", false, OptionsCategory::CONNECTION);
gArgs.AddArg("-banscore=<n>", strprintf("Threshold for disconnecting misbehaving peers (default: %u)", DEFAULT_BANSCORE_THRESHOLD), false, OptionsCategory::CONNECTION); gArgs.AddArg("-banscore=<n>", strprintf("Threshold for disconnecting misbehaving peers (default: %u)", DEFAULT_BANSCORE_THRESHOLD), false, OptionsCategory::CONNECTION);
gArgs.AddArg("-bantime=<n>", strprintf("Number of seconds to keep misbehaving peers from reconnecting (default: %u)", DEFAULT_MISBEHAVING_BANTIME), false, OptionsCategory::CONNECTION); gArgs.AddArg("-bantime=<n>", strprintf("Number of seconds to keep misbehaving peers from reconnecting (default: %u)", DEFAULT_MISBEHAVING_BANTIME), false, OptionsCategory::CONNECTION);
@ -628,7 +620,7 @@ static void CleanupBlockRevFiles()
// start removing block files. // start removing block files.
int nContigCounter = 0; int nContigCounter = 0;
for (const std::pair<const std::string, fs::path>& item : mapBlockFiles) { for (const std::pair<const std::string, fs::path>& item : mapBlockFiles) {
if (atoi(item.first) == nContigCounter) { if (std::atoi(item.first.c_str()) == nContigCounter) {
nContigCounter++; nContigCounter++;
continue; continue;
} }
@ -936,12 +928,6 @@ bool AppInitParameterInteraction()
return InitError(strprintf(_("Specified blocks directory \"%s\" does not exist."), gArgs.GetArg("-blocksdir", "").c_str())); return InitError(strprintf(_("Specified blocks directory \"%s\" does not exist."), gArgs.GetArg("-blocksdir", "").c_str()));
} }
// if using block pruning, then disallow txindex
if (gArgs.GetArg("-prune", 0)) {
if (gArgs.GetBoolArg("-txindex", DEFAULT_TXINDEX))
return InitError(_("Prune mode is incompatible with -txindex."));
}
// -bind and -whitebind can't be set when not listening // -bind and -whitebind can't be set when not listening
size_t nUserBind = gArgs.GetArgs("-bind").size() + gArgs.GetArgs("-whitebind").size(); size_t nUserBind = gArgs.GetArgs("-bind").size() + gArgs.GetArgs("-whitebind").size();
if (nUserBind != 0 && !gArgs.GetBoolArg("-listen", DEFAULT_LISTEN)) { if (nUserBind != 0 && !gArgs.GetBoolArg("-listen", DEFAULT_LISTEN)) {
@ -1227,6 +1213,8 @@ bool AppInitLockDataDirectory()
return true; return true;
} }
extern CChainState g_chainstate;
bool AppInitMain() bool AppInitMain()
{ {
const CChainParams& chainparams = Params(); const CChainParams& chainparams = Params();
@ -1257,6 +1245,15 @@ bool AppInitMain()
gArgs.GetArg("-datadir", ""), fs::current_path().string()); gArgs.GetArg("-datadir", ""), fs::current_path().string());
} }
sqlite::error_log(
[](sqlite::sqlite_exception& e) {
LogPrintf("Error with Sqlite: %s\n", e.what());
},
[](sqlite::errors::misuse& e) {
// You can behave differently to specific errors
}
);
InitSignatureCache(); InitSignatureCache();
InitScriptExecutionCache(); InitScriptExecutionCache();
@ -1419,29 +1416,28 @@ bool AppInitMain()
int64_t nTotalCache = (gArgs.GetArg("-dbcache", nDefaultDbCache) << 20); int64_t nTotalCache = (gArgs.GetArg("-dbcache", nDefaultDbCache) << 20);
nTotalCache = std::max(nTotalCache, nMinDbCache << 20); // total cache cannot be less than nMinDbCache nTotalCache = std::max(nTotalCache, nMinDbCache << 20); // total cache cannot be less than nMinDbCache
nTotalCache = std::min(nTotalCache, nMaxDbCache << 20); // total cache cannot be greater than nMaxDbcache nTotalCache = std::min(nTotalCache, nMaxDbCache << 20); // total cache cannot be greater than nMaxDbcache
int64_t nBlockTreeDBCache = std::min(nTotalCache / 8, nMaxBlockDBCache << 20);
// we're going to chop the cache into three pieces:
// the coin cache, the block cache, the claimtrie cache
// however, we want the claimtrie cache to be larger than the others
int64_t nBlockTreeDBCache = std::min(nTotalCache / 4, nMaxBlockDBCache << 20);
int64_t nCoinDBCache = std::min(nTotalCache / 8, nMaxCoinsDBCache << 20);
int64_t nClaimtrieCache = nTotalCache / 4;
nTotalCache -= nBlockTreeDBCache; nTotalCache -= nBlockTreeDBCache;
int64_t nTxIndexCache = std::min(nTotalCache / 8, gArgs.GetBoolArg("-txindex", DEFAULT_TXINDEX) ? nMaxTxIndexCache << 20 : 0);
nTotalCache -= nTxIndexCache;
int64_t nCoinDBCache = std::min(nTotalCache / 2, (nTotalCache / 4) + (1 << 23)); // use 25%-50% of the remainder for disk cache
nCoinDBCache = std::min(nCoinDBCache, nMaxCoinsDBCache << 20); // cap total coins db cache
nTotalCache -= nCoinDBCache; nTotalCache -= nCoinDBCache;
nTotalCache -= nClaimtrieCache;
nCoinCacheUsage = nTotalCache; // the rest goes to in-memory cache nCoinCacheUsage = nTotalCache; // the rest goes to in-memory cache
std::cout << "nTotalCache: " << nTotalCache << ", nCoinCacheUsage: " << nCoinCacheUsage << ", nCoinDBCache: " << nCoinDBCache << std::endl; std::cout << "nTotalCache: " << nTotalCache << ", nCoinCacheUsage: " << nCoinCacheUsage << ", nCoinDBCache: " << nCoinDBCache << std::endl;
int64_t nMempoolSizeMax = gArgs.GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000; int64_t nMempoolSizeMax = gArgs.GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000;
LogPrintf("Cache configuration:\n"); LogPrintf("Cache configuration:\n");
LogPrintf("* Using %.1fMiB for block index database\n", nBlockTreeDBCache * (1.0 / 1024 / 1024)); LogPrintf("* Using %.1fMiB for block index database cache\n", nBlockTreeDBCache * (1.0 / 1024 / 1024));
if (gArgs.GetBoolArg("-txindex", DEFAULT_TXINDEX)) { LogPrintf("* Using %.1fMiB for chain state database cache\n", nCoinDBCache * (1.0 / 1024 / 1024));
LogPrintf("* Using %.1fMiB for transaction index database\n", nTxIndexCache * (1.0 / 1024 / 1024)); LogPrintf("* Using %.1fMiB for claimtrie database cache\n", nClaimtrieCache * (1.0 / 1024 / 1024));
}
LogPrintf("* Using %.1fMiB for chain state database\n", nCoinDBCache * (1.0 / 1024 / 1024));
LogPrintf("* Using %.1fMiB for in-memory UTXO set (plus up to %.1fMiB of unused mempool space)\n", nCoinCacheUsage * (1.0 / 1024 / 1024), nMempoolSizeMax * (1.0 / 1024 / 1024)); LogPrintf("* Using %.1fMiB for in-memory UTXO set (plus up to %.1fMiB of unused mempool space)\n", nCoinCacheUsage * (1.0 / 1024 / 1024), nMempoolSizeMax * (1.0 / 1024 / 1024));
g_memfileSize = gArgs.GetArg("-memfile", 0u);
bool fLoaded = false; bool fLoaded = false;
while (!fLoaded && !ShutdownRequested()) { while (!fLoaded && !ShutdownRequested()) {
bool fReset = fReindex;
std::string strLoadError; std::string strLoadError;
uiInterface.InitMessage(_("Loading block index...")); uiInterface.InitMessage(_("Loading block index..."));
@ -1458,14 +1454,16 @@ bool AppInitMain()
// new CBlockTreeDB tries to delete the existing file, which // new CBlockTreeDB tries to delete the existing file, which
// fails if it's still open from the previous loop. Close it first: // fails if it's still open from the previous loop. Close it first:
pblocktree.reset(); pblocktree.reset();
pblocktree.reset(new CBlockTreeDB(nBlockTreeDBCache, false, fReset)); pblocktree.reset(new CBlockTreeDB(nBlockTreeDBCache, false, fReindex));
delete pclaimTrie;
int64_t trieCacheMB = gArgs.GetArg("-claimtriecache", nDefaultDbCache);
trieCacheMB = std::min(trieCacheMB, nMaxDbCache);
trieCacheMB = std::max(trieCacheMB, nMinDbCache);
pclaimTrie = new CClaimTrie(false, fReindex || fReindexChainState, 32, trieCacheMB);
if (fReset) { // use faster N way hash function
// NOTE: it assumes memory is continuous
// that means uint256 itself should use std::array or raw pointer
sha256n_way = [](std::vector<uint256>& hashes) {
SHA256D64(hashes[0].begin(), hashes[0].begin(), hashes.size() / 2);
};
if (fReindex) {
pblocktree->WriteReindexing(true); pblocktree->WriteReindexing(true);
//If we're reindexing in prune mode, wipe away unusable block files and all undo data files //If we're reindexing in prune mode, wipe away unusable block files and all undo data files
if (fPruneMode) if (fPruneMode)
@ -1479,13 +1477,13 @@ bool AppInitMain()
// Note that it also sets fReindex based on the disk flag! // Note that it also sets fReindex based on the disk flag!
// From here on out fReindex and fReset mean something different! // From here on out fReindex and fReset mean something different!
if (!LoadBlockIndex(chainparams)) { if (!LoadBlockIndex(chainparams)) {
strLoadError = _("Error loading block database"); strLoadError = _("Error loading block index database");
break; break;
} }
// If the loaded chain has a wrong genesis, bail out immediately // If the loaded chain has a wrong genesis, bail out immediately
// (we're likely using a testnet datadir, or the other way around). // (we're likely using a testnet datadir, or the other way around).
if (!mapBlockIndex.empty() && !LookupBlockIndex(chainparams.GetConsensus().hashGenesisBlock)) { if (!g_chainstate.mapBlockIndex.empty() && !LookupBlockIndex(chainparams.GetConsensus().hashGenesisBlock)) {
return InitError(_("Incorrect or no genesis block found. Wrong datadir for network?")); return InitError(_("Incorrect or no genesis block found. Wrong datadir for network?"));
} }
@ -1508,15 +1506,24 @@ bool AppInitMain()
// At this point we're either in reindex or we've loaded a useful // At this point we're either in reindex or we've loaded a useful
// block tree into mapBlockIndex! // block tree into mapBlockIndex!
pcoinsdbview.reset(new CCoinsViewDB(nCoinDBCache, false, fReset || fReindexChainState)); pcoinsdbview.reset(new CCoinsViewDB(nCoinDBCache, false, fReindex || fReindexChainState));
pcoinscatcher.reset(new CCoinsViewErrorCatcher(pcoinsdbview.get())); pcoinscatcher.reset(new CCoinsViewErrorCatcher(pcoinsdbview.get()));
// If necessary, upgrade from older database format. if (g_logger->Enabled() && LogAcceptCategory(BCLog::CLAIMS))
// This is a no-op if we cleared the coinsviewdb with -reindex or -reindex-chainstate CLogPrint::global().setLogger(g_logger);
if (!pcoinsdbview->Upgrade()) {
strLoadError = _("Error upgrading chainstate database"); delete pclaimTrie;
break; auto& consensus = chainparams.GetConsensus();
} pclaimTrie = new CClaimTrie(nClaimtrieCache, fReindex || fReindexChainState, 0,
GetDataDir().string(),
consensus.nNormalizedNameForkHeight,
consensus.nMinRemovalWorkaroundHeight,
consensus.nMaxRemovalWorkaroundHeight,
consensus.nOriginalClaimExpirationTime,
consensus.nExtendedClaimExpirationTime,
consensus.nExtendedClaimExpirationForkHeight,
consensus.nAllClaimsInMerkleForkHeight,
32);
// ReplayBlocks is a no-op if we cleared the coinsviewdb with -reindex or -reindex-chainstate // ReplayBlocks is a no-op if we cleared the coinsviewdb with -reindex or -reindex-chainstate
if (!ReplayBlocks(chainparams, pcoinsdbview.get())) { if (!ReplayBlocks(chainparams, pcoinsdbview.get())) {
@ -1527,7 +1534,7 @@ bool AppInitMain()
// The on-disk coinsdb is now in a good state, create the cache // The on-disk coinsdb is now in a good state, create the cache
pcoinsTip.reset(new CCoinsViewCache(pcoinscatcher.get())); pcoinsTip.reset(new CCoinsViewCache(pcoinscatcher.get()));
bool is_coinsview_empty = fReset || fReindexChainState || pcoinsTip->GetBestBlock().IsNull(); bool is_coinsview_empty = fReindex || fReindexChainState || pcoinsTip->GetBestBlock().IsNull();
if (!is_coinsview_empty) { if (!is_coinsview_empty) {
// LoadChainTip sets chainActive based on pcoinsTip's best block // LoadChainTip sets chainActive based on pcoinsTip's best block
if (!LoadChainTip(chainparams)) { if (!LoadChainTip(chainparams)) {
@ -1537,14 +1544,14 @@ bool AppInitMain()
assert(chainActive.Tip() != nullptr); assert(chainActive.Tip() != nullptr);
} }
CClaimTrieCache trieCache(pclaimTrie); auto tip = chainActive.Tip();
if (!trieCache.ReadFromDisk(chainActive.Tip())) LogPrintf("Checking existing claim trie consistency...\n");
{ if (tip && !CClaimTrieCache(pclaimTrie).validateDb(tip->nHeight, tip->hashClaimTrie)) {
strLoadError = _("Error loading the claim trie from disk"); strLoadError = _("Error validating the stored claim trie");
break; break;
} }
if (!fReset) { if (!fReindex) {
// Note that RewindBlockIndex MUST run even if we're about to -reindex-chainstate. // Note that RewindBlockIndex MUST run even if we're about to -reindex-chainstate.
// It both disconnects blocks based on chainActive, and drops block data in // It both disconnects blocks based on chainActive, and drops block data in
// mapBlockIndex based on lack of available witness data. // mapBlockIndex based on lack of available witness data.
@ -1589,10 +1596,10 @@ bool AppInitMain()
if (!fLoaded && !ShutdownRequested()) { if (!fLoaded && !ShutdownRequested()) {
// first suggest a reindex // first suggest a reindex
if (!fReset) { if (!fReindex) {
bool fRet = uiInterface.ThreadSafeQuestion( bool fRet = uiInterface.ThreadSafeQuestion(
strLoadError + ".\n\n" + _("Do you want to rebuild the block database now?"), strLoadError + ".\n\n" + _("Do you want to rebuild the block database now?"),
strLoadError + ".\nPlease restart with -reindex or -reindex-chainstate to recover.", strLoadError + ".\nPlease restart with -reindex to recover.",
"", CClientUIInterface::MSG_ERROR | CClientUIInterface::BTN_ABORT); "", CClientUIInterface::MSG_ERROR | CClientUIInterface::BTN_ABORT);
if (fRet) { if (fRet) {
fReindex = true; fReindex = true;
@ -1605,6 +1612,14 @@ bool AppInitMain()
return InitError(strLoadError); return InitError(strLoadError);
} }
} }
if (fReindex) {
// remove old LevelDB indexes
boost::system::error_code ec;
fs::remove_all(GetDataDir() / "blocks" / "index", ec);
fs::remove_all(GetDataDir() / "chainstate", ec);
fs::remove_all(GetDataDir() / "claimtrie", ec);
}
} }
// As LoadBlockIndex can take several minutes, it's possible the user // As LoadBlockIndex can take several minutes, it's possible the user
@ -1624,8 +1639,7 @@ bool AppInitMain()
// ********************************************************* Step 8: start indexers // ********************************************************* Step 8: start indexers
if (gArgs.GetBoolArg("-txindex", DEFAULT_TXINDEX)) { if (gArgs.GetBoolArg("-txindex", DEFAULT_TXINDEX)) {
g_txindex = MakeUnique<TxIndex>(nTxIndexCache, false, fReindex); LogPrintf("The txindex parameter is no longer necessary. It is always on.\n");
g_txindex->Start();
} }
// ********************************************************* Step 9: load wallet // ********************************************************* Step 9: load wallet
@ -1699,7 +1713,6 @@ bool AppInitMain()
//// debug print //// debug print
{ {
LOCK(cs_main); LOCK(cs_main);
LogPrintf("mapBlockIndex.size() = %u\n", mapBlockIndex.size());
chain_active_height = chainActive.Height(); chain_active_height = chainActive.Height();
} }
LogPrintf("nBestHeight = %d\n", chain_active_height); LogPrintf("nBestHeight = %d\n", chain_active_height);

View file

@ -88,8 +88,7 @@ WalletTx MakeWalletTx(CWallet& wallet, const CWalletTx& wtx)
WalletTxStatus MakeWalletTxStatus(const CWalletTx& wtx) WalletTxStatus MakeWalletTxStatus(const CWalletTx& wtx)
{ {
WalletTxStatus result; WalletTxStatus result;
auto mi = ::mapBlockIndex.find(wtx.hashBlock); CBlockIndex* block = LookupBlockIndex(wtx.hashBlock);
CBlockIndex* block = mi != ::mapBlockIndex.end() ? mi->second : nullptr;
result.block_height = (block ? block->nHeight : std::numeric_limits<int>::max()); result.block_height = (block ? block->nHeight : std::numeric_limits<int>::max());
result.blocks_to_maturity = wtx.GetBlocksToMaturity(); result.blocks_to_maturity = wtx.GetBlocksToMaturity();
result.depth_in_main_chain = wtx.GetDepthInMainChain(); result.depth_in_main_chain = wtx.GetDepthInMainChain();

View file

@ -3,8 +3,6 @@
#include <cstdio> #include <cstdio>
uint32_t g_memfileSize = 0;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params) unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nFirstBlockTime, const Consensus::Params& params)
{ {
if (params.fPowNoRetargeting) if (params.fPowNoRetargeting)

View file

@ -4,7 +4,6 @@
#include <chain.h> #include <chain.h>
#include <chainparams.h> #include <chainparams.h>
extern uint32_t g_memfileSize;
unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nLastRetargetTime, const Consensus::Params& params); unsigned int CalculateLbryNextWorkRequired(const CBlockIndex* pindexLast, int64_t nLastRetargetTime, const Consensus::Params& params);
#endif #endif

View file

@ -1,13 +0,0 @@
build_config.mk
*.a
*.o
*.dylib*
*.so
*.so.*
*_test
db_bench
leveldbutil
Release
Debug
Benchmark
vs2010.*

View file

@ -1,13 +0,0 @@
language: cpp
compiler:
- clang
- gcc
os:
- linux
- osx
sudo: false
before_install:
- echo $LANG
- echo $LC_ALL
script:
- make -j 4 check

View file

@ -1,12 +0,0 @@
# Names should be added to this file like so:
# Name or Organization <email address>
Google Inc.
# Initial version authors:
Jeffrey Dean <jeff@google.com>
Sanjay Ghemawat <sanjay@google.com>
# Partial list of contributors:
Kevin Regan <kevin.d.regan@gmail.com>
Johan Bilien <jobi@litl.com>

View file

@ -1,36 +0,0 @@
# Contributing
We'd love to accept your code patches! However, before we can take them, we
have to jump a couple of legal hurdles.
## Contributor License Agreements
Please fill out either the individual or corporate Contributor License
Agreement as appropriate.
* If you are an individual writing original source code and you're sure you
own the intellectual property, then sign an [individual CLA](https://developers.google.com/open-source/cla/individual).
* If you work for a company that wants to allow you to contribute your work,
then sign a [corporate CLA](https://developers.google.com/open-source/cla/corporate).
Follow either of the two links above to access the appropriate CLA and
instructions for how to sign and return it.
## Submitting a Patch
1. Sign the contributors license agreement above.
2. Decide which code you want to submit. A submission should be a set of changes
that addresses one issue in the [issue tracker](https://github.com/google/leveldb/issues).
Please don't mix more than one logical change per submission, because it makes
the history hard to follow. If you want to make a change
(e.g. add a sample or feature) that doesn't have a corresponding issue in the
issue tracker, please create one.
3. **Submitting**: When you are ready to submit, send us a Pull Request. Be
sure to include the issue number you fixed and the name you used to sign
the CLA.
## Writing Code ##
If your contribution contains code, please make sure that it follows
[the style guide](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml).
Otherwise we will have to ask you to make changes, and that's no fun for anyone.

View file

@ -1,27 +0,0 @@
Copyright (c) 2011 The LevelDB Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View file

@ -1,424 +0,0 @@
# Copyright (c) 2011 The LevelDB Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file. See the AUTHORS file for names of contributors.
#-----------------------------------------------
# Uncomment exactly one of the lines labelled (A), (B), and (C) below
# to switch between compilation modes.
# (A) Production use (optimized mode)
OPT ?= -O2 -DNDEBUG
# (B) Debug mode, w/ full line-level debugging symbols
# OPT ?= -g2
# (C) Profiling mode: opt, but w/debugging symbols
# OPT ?= -O2 -g2 -DNDEBUG
#-----------------------------------------------
# detect what platform we're building on
$(shell CC="$(CC)" CXX="$(CXX)" TARGET_OS="$(TARGET_OS)" \
./build_detect_platform build_config.mk ./)
# this file is generated by the previous line to set build flags and sources
include build_config.mk
TESTS = \
db/autocompact_test \
db/c_test \
db/corruption_test \
db/db_test \
db/dbformat_test \
db/fault_injection_test \
db/filename_test \
db/log_test \
db/recovery_test \
db/skiplist_test \
db/version_edit_test \
db/version_set_test \
db/write_batch_test \
helpers/memenv/memenv_test \
issues/issue178_test \
issues/issue200_test \
table/filter_block_test \
table/table_test \
util/arena_test \
util/bloom_test \
util/cache_test \
util/coding_test \
util/crc32c_test \
util/env_posix_test \
util/env_test \
util/hash_test
UTILS = \
db/db_bench \
db/leveldbutil
# Put the object files in a subdirectory, but the application at the top of the object dir.
PROGNAMES := $(notdir $(TESTS) $(UTILS))
# On Linux may need libkyotocabinet-dev for dependency.
BENCHMARKS = \
doc/bench/db_bench_sqlite3 \
doc/bench/db_bench_tree_db
CFLAGS += -I. -I./include $(PLATFORM_CCFLAGS) $(OPT)
CXXFLAGS += -I. -I./include $(PLATFORM_CXXFLAGS) $(OPT)
LDFLAGS += $(PLATFORM_LDFLAGS)
LIBS += $(PLATFORM_LIBS)
SIMULATOR_OUTDIR=out-ios-x86
DEVICE_OUTDIR=out-ios-arm
ifeq ($(PLATFORM), IOS)
# Note: iOS should probably be using libtool, not ar.
AR=xcrun ar
SIMULATORSDK=$(shell xcrun -sdk iphonesimulator --show-sdk-path)
DEVICESDK=$(shell xcrun -sdk iphoneos --show-sdk-path)
DEVICE_CFLAGS = -isysroot "$(DEVICESDK)" -arch armv6 -arch armv7 -arch armv7s -arch arm64
SIMULATOR_CFLAGS = -isysroot "$(SIMULATORSDK)" -arch i686 -arch x86_64
STATIC_OUTDIR=out-ios-universal
else
STATIC_OUTDIR=out-static
SHARED_OUTDIR=out-shared
STATIC_PROGRAMS := $(addprefix $(STATIC_OUTDIR)/, $(PROGNAMES))
SHARED_PROGRAMS := $(addprefix $(SHARED_OUTDIR)/, db_bench)
endif
STATIC_LIBOBJECTS := $(addprefix $(STATIC_OUTDIR)/, $(SOURCES:.cc=.o))
STATIC_MEMENVOBJECTS := $(addprefix $(STATIC_OUTDIR)/, $(MEMENV_SOURCES:.cc=.o))
DEVICE_LIBOBJECTS := $(addprefix $(DEVICE_OUTDIR)/, $(SOURCES:.cc=.o))
DEVICE_MEMENVOBJECTS := $(addprefix $(DEVICE_OUTDIR)/, $(MEMENV_SOURCES:.cc=.o))
SIMULATOR_LIBOBJECTS := $(addprefix $(SIMULATOR_OUTDIR)/, $(SOURCES:.cc=.o))
SIMULATOR_MEMENVOBJECTS := $(addprefix $(SIMULATOR_OUTDIR)/, $(MEMENV_SOURCES:.cc=.o))
SHARED_LIBOBJECTS := $(addprefix $(SHARED_OUTDIR)/, $(SOURCES:.cc=.o))
SHARED_MEMENVOBJECTS := $(addprefix $(SHARED_OUTDIR)/, $(MEMENV_SOURCES:.cc=.o))
TESTUTIL := $(STATIC_OUTDIR)/util/testutil.o
TESTHARNESS := $(STATIC_OUTDIR)/util/testharness.o $(TESTUTIL)
STATIC_TESTOBJS := $(addprefix $(STATIC_OUTDIR)/, $(addsuffix .o, $(TESTS)))
STATIC_UTILOBJS := $(addprefix $(STATIC_OUTDIR)/, $(addsuffix .o, $(UTILS)))
STATIC_ALLOBJS := $(STATIC_LIBOBJECTS) $(STATIC_MEMENVOBJECTS) $(STATIC_TESTOBJS) $(STATIC_UTILOBJS) $(TESTHARNESS)
DEVICE_ALLOBJS := $(DEVICE_LIBOBJECTS) $(DEVICE_MEMENVOBJECTS)
SIMULATOR_ALLOBJS := $(SIMULATOR_LIBOBJECTS) $(SIMULATOR_MEMENVOBJECTS)
default: all
# Should we build shared libraries?
ifneq ($(PLATFORM_SHARED_EXT),)
# Many leveldb test apps use non-exported API's. Only build a subset for testing.
SHARED_ALLOBJS := $(SHARED_LIBOBJECTS) $(SHARED_MEMENVOBJECTS) $(TESTHARNESS)
ifneq ($(PLATFORM_SHARED_VERSIONED),true)
SHARED_LIB1 = libleveldb.$(PLATFORM_SHARED_EXT)
SHARED_LIB2 = $(SHARED_LIB1)
SHARED_LIB3 = $(SHARED_LIB1)
SHARED_LIBS = $(SHARED_LIB1)
SHARED_MEMENVLIB = $(SHARED_OUTDIR)/libmemenv.a
else
# Update db.h if you change these.
SHARED_VERSION_MAJOR = 1
SHARED_VERSION_MINOR = 20
SHARED_LIB1 = libleveldb.$(PLATFORM_SHARED_EXT)
SHARED_LIB2 = $(SHARED_LIB1).$(SHARED_VERSION_MAJOR)
SHARED_LIB3 = $(SHARED_LIB1).$(SHARED_VERSION_MAJOR).$(SHARED_VERSION_MINOR)
SHARED_LIBS = $(SHARED_OUTDIR)/$(SHARED_LIB1) $(SHARED_OUTDIR)/$(SHARED_LIB2) $(SHARED_OUTDIR)/$(SHARED_LIB3)
$(SHARED_OUTDIR)/$(SHARED_LIB1): $(SHARED_OUTDIR)/$(SHARED_LIB3)
ln -fs $(SHARED_LIB3) $(SHARED_OUTDIR)/$(SHARED_LIB1)
$(SHARED_OUTDIR)/$(SHARED_LIB2): $(SHARED_OUTDIR)/$(SHARED_LIB3)
ln -fs $(SHARED_LIB3) $(SHARED_OUTDIR)/$(SHARED_LIB2)
SHARED_MEMENVLIB = $(SHARED_OUTDIR)/libmemenv.a
endif
$(SHARED_OUTDIR)/$(SHARED_LIB3): $(SHARED_LIBOBJECTS)
$(CXX) $(LDFLAGS) $(PLATFORM_SHARED_LDFLAGS)$(SHARED_LIB2) $(SHARED_LIBOBJECTS) -o $(SHARED_OUTDIR)/$(SHARED_LIB3) $(LIBS)
endif # PLATFORM_SHARED_EXT
all: $(SHARED_LIBS) $(SHARED_PROGRAMS) $(STATIC_OUTDIR)/libleveldb.a $(STATIC_OUTDIR)/libmemenv.a $(STATIC_PROGRAMS)
check: $(STATIC_PROGRAMS)
for t in $(notdir $(TESTS)); do echo "***** Running $$t"; $(STATIC_OUTDIR)/$$t || exit 1; done
clean:
-rm -rf out-static out-shared out-ios-x86 out-ios-arm out-ios-universal
-rm -f build_config.mk
-rm -rf ios-x86 ios-arm
$(STATIC_OUTDIR):
mkdir $@
$(STATIC_OUTDIR)/db: | $(STATIC_OUTDIR)
mkdir $@
$(STATIC_OUTDIR)/helpers/memenv: | $(STATIC_OUTDIR)
mkdir -p $@
$(STATIC_OUTDIR)/port: | $(STATIC_OUTDIR)
mkdir $@
$(STATIC_OUTDIR)/table: | $(STATIC_OUTDIR)
mkdir $@
$(STATIC_OUTDIR)/util: | $(STATIC_OUTDIR)
mkdir $@
.PHONY: STATIC_OBJDIRS
STATIC_OBJDIRS: \
$(STATIC_OUTDIR)/db \
$(STATIC_OUTDIR)/port \
$(STATIC_OUTDIR)/table \
$(STATIC_OUTDIR)/util \
$(STATIC_OUTDIR)/helpers/memenv
$(SHARED_OUTDIR):
mkdir $@
$(SHARED_OUTDIR)/db: | $(SHARED_OUTDIR)
mkdir $@
$(SHARED_OUTDIR)/helpers/memenv: | $(SHARED_OUTDIR)
mkdir -p $@
$(SHARED_OUTDIR)/port: | $(SHARED_OUTDIR)
mkdir $@
$(SHARED_OUTDIR)/table: | $(SHARED_OUTDIR)
mkdir $@
$(SHARED_OUTDIR)/util: | $(SHARED_OUTDIR)
mkdir $@
.PHONY: SHARED_OBJDIRS
SHARED_OBJDIRS: \
$(SHARED_OUTDIR)/db \
$(SHARED_OUTDIR)/port \
$(SHARED_OUTDIR)/table \
$(SHARED_OUTDIR)/util \
$(SHARED_OUTDIR)/helpers/memenv
$(DEVICE_OUTDIR):
mkdir $@
$(DEVICE_OUTDIR)/db: | $(DEVICE_OUTDIR)
mkdir $@
$(DEVICE_OUTDIR)/helpers/memenv: | $(DEVICE_OUTDIR)
mkdir -p $@
$(DEVICE_OUTDIR)/port: | $(DEVICE_OUTDIR)
mkdir $@
$(DEVICE_OUTDIR)/table: | $(DEVICE_OUTDIR)
mkdir $@
$(DEVICE_OUTDIR)/util: | $(DEVICE_OUTDIR)
mkdir $@
.PHONY: DEVICE_OBJDIRS
DEVICE_OBJDIRS: \
$(DEVICE_OUTDIR)/db \
$(DEVICE_OUTDIR)/port \
$(DEVICE_OUTDIR)/table \
$(DEVICE_OUTDIR)/util \
$(DEVICE_OUTDIR)/helpers/memenv
$(SIMULATOR_OUTDIR):
mkdir $@
$(SIMULATOR_OUTDIR)/db: | $(SIMULATOR_OUTDIR)
mkdir $@
$(SIMULATOR_OUTDIR)/helpers/memenv: | $(SIMULATOR_OUTDIR)
mkdir -p $@
$(SIMULATOR_OUTDIR)/port: | $(SIMULATOR_OUTDIR)
mkdir $@
$(SIMULATOR_OUTDIR)/table: | $(SIMULATOR_OUTDIR)
mkdir $@
$(SIMULATOR_OUTDIR)/util: | $(SIMULATOR_OUTDIR)
mkdir $@
.PHONY: SIMULATOR_OBJDIRS
SIMULATOR_OBJDIRS: \
$(SIMULATOR_OUTDIR)/db \
$(SIMULATOR_OUTDIR)/port \
$(SIMULATOR_OUTDIR)/table \
$(SIMULATOR_OUTDIR)/util \
$(SIMULATOR_OUTDIR)/helpers/memenv
$(STATIC_ALLOBJS): | STATIC_OBJDIRS
$(DEVICE_ALLOBJS): | DEVICE_OBJDIRS
$(SIMULATOR_ALLOBJS): | SIMULATOR_OBJDIRS
$(SHARED_ALLOBJS): | SHARED_OBJDIRS
ifeq ($(PLATFORM), IOS)
$(DEVICE_OUTDIR)/libleveldb.a: $(DEVICE_LIBOBJECTS)
rm -f $@
$(AR) -rs $@ $(DEVICE_LIBOBJECTS)
$(SIMULATOR_OUTDIR)/libleveldb.a: $(SIMULATOR_LIBOBJECTS)
rm -f $@
$(AR) -rs $@ $(SIMULATOR_LIBOBJECTS)
$(DEVICE_OUTDIR)/libmemenv.a: $(DEVICE_MEMENVOBJECTS)
rm -f $@
$(AR) -rs $@ $(DEVICE_MEMENVOBJECTS)
$(SIMULATOR_OUTDIR)/libmemenv.a: $(SIMULATOR_MEMENVOBJECTS)
rm -f $@
$(AR) -rs $@ $(SIMULATOR_MEMENVOBJECTS)
# For iOS, create universal object libraries to be used on both the simulator and
# a device.
$(STATIC_OUTDIR)/libleveldb.a: $(STATIC_OUTDIR) $(DEVICE_OUTDIR)/libleveldb.a $(SIMULATOR_OUTDIR)/libleveldb.a
lipo -create $(DEVICE_OUTDIR)/libleveldb.a $(SIMULATOR_OUTDIR)/libleveldb.a -output $@
$(STATIC_OUTDIR)/libmemenv.a: $(STATIC_OUTDIR) $(DEVICE_OUTDIR)/libmemenv.a $(SIMULATOR_OUTDIR)/libmemenv.a
lipo -create $(DEVICE_OUTDIR)/libmemenv.a $(SIMULATOR_OUTDIR)/libmemenv.a -output $@
else
$(STATIC_OUTDIR)/libleveldb.a:$(STATIC_LIBOBJECTS)
rm -f $@
$(AR) -rs $@ $(STATIC_LIBOBJECTS)
$(STATIC_OUTDIR)/libmemenv.a:$(STATIC_MEMENVOBJECTS)
rm -f $@
$(AR) -rs $@ $(STATIC_MEMENVOBJECTS)
endif
$(SHARED_MEMENVLIB):$(SHARED_MEMENVOBJECTS)
rm -f $@
$(AR) -rs $@ $(SHARED_MEMENVOBJECTS)
$(STATIC_OUTDIR)/db_bench:db/db_bench.cc $(STATIC_LIBOBJECTS) $(TESTUTIL)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/db_bench.cc $(STATIC_LIBOBJECTS) $(TESTUTIL) -o $@ $(LIBS)
$(STATIC_OUTDIR)/db_bench_sqlite3:doc/bench/db_bench_sqlite3.cc $(STATIC_LIBOBJECTS) $(TESTUTIL)
$(CXX) $(LDFLAGS) $(CXXFLAGS) doc/bench/db_bench_sqlite3.cc $(STATIC_LIBOBJECTS) $(TESTUTIL) -o $@ -lsqlite3 $(LIBS)
$(STATIC_OUTDIR)/db_bench_tree_db:doc/bench/db_bench_tree_db.cc $(STATIC_LIBOBJECTS) $(TESTUTIL)
$(CXX) $(LDFLAGS) $(CXXFLAGS) doc/bench/db_bench_tree_db.cc $(STATIC_LIBOBJECTS) $(TESTUTIL) -o $@ -lkyotocabinet $(LIBS)
$(STATIC_OUTDIR)/leveldbutil:db/leveldbutil.cc $(STATIC_LIBOBJECTS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/leveldbutil.cc $(STATIC_LIBOBJECTS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/arena_test:util/arena_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/arena_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/autocompact_test:db/autocompact_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/autocompact_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/bloom_test:util/bloom_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/bloom_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/c_test:$(STATIC_OUTDIR)/db/c_test.o $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(STATIC_OUTDIR)/db/c_test.o $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/cache_test:util/cache_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/cache_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/coding_test:util/coding_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/coding_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/corruption_test:db/corruption_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/corruption_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/crc32c_test:util/crc32c_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/crc32c_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/db_test:db/db_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/db_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/dbformat_test:db/dbformat_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/dbformat_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/env_posix_test:util/env_posix_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/env_posix_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/env_test:util/env_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/env_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/fault_injection_test:db/fault_injection_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/fault_injection_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/filename_test:db/filename_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/filename_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/filter_block_test:table/filter_block_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) table/filter_block_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/hash_test:util/hash_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) util/hash_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/issue178_test:issues/issue178_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) issues/issue178_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/issue200_test:issues/issue200_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) issues/issue200_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/log_test:db/log_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/log_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/recovery_test:db/recovery_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/recovery_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/table_test:table/table_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) table/table_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/skiplist_test:db/skiplist_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/skiplist_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/version_edit_test:db/version_edit_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/version_edit_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/version_set_test:db/version_set_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/version_set_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/write_batch_test:db/write_batch_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS)
$(CXX) $(LDFLAGS) $(CXXFLAGS) db/write_batch_test.cc $(STATIC_LIBOBJECTS) $(TESTHARNESS) -o $@ $(LIBS)
$(STATIC_OUTDIR)/memenv_test:$(STATIC_OUTDIR)/helpers/memenv/memenv_test.o $(STATIC_OUTDIR)/libmemenv.a $(STATIC_OUTDIR)/libleveldb.a $(TESTHARNESS)
$(XCRUN) $(CXX) $(LDFLAGS) $(STATIC_OUTDIR)/helpers/memenv/memenv_test.o $(STATIC_OUTDIR)/libmemenv.a $(STATIC_OUTDIR)/libleveldb.a $(TESTHARNESS) -o $@ $(LIBS)
$(SHARED_OUTDIR)/db_bench:$(SHARED_OUTDIR)/db/db_bench.o $(SHARED_LIBS) $(TESTUTIL)
$(XCRUN) $(CXX) $(LDFLAGS) $(CXXFLAGS) $(PLATFORM_SHARED_CFLAGS) $(SHARED_OUTDIR)/db/db_bench.o $(TESTUTIL) $(SHARED_OUTDIR)/$(SHARED_LIB3) -o $@ $(LIBS)
.PHONY: run-shared
run-shared: $(SHARED_OUTDIR)/db_bench
LD_LIBRARY_PATH=$(SHARED_OUTDIR) $(SHARED_OUTDIR)/db_bench
$(SIMULATOR_OUTDIR)/%.o: %.cc
xcrun -sdk iphonesimulator $(CXX) $(CXXFLAGS) $(SIMULATOR_CFLAGS) -c $< -o $@
$(DEVICE_OUTDIR)/%.o: %.cc
xcrun -sdk iphoneos $(CXX) $(CXXFLAGS) $(DEVICE_CFLAGS) -c $< -o $@
$(SIMULATOR_OUTDIR)/%.o: %.c
xcrun -sdk iphonesimulator $(CC) $(CFLAGS) $(SIMULATOR_CFLAGS) -c $< -o $@
$(DEVICE_OUTDIR)/%.o: %.c
xcrun -sdk iphoneos $(CC) $(CFLAGS) $(DEVICE_CFLAGS) -c $< -o $@
$(STATIC_OUTDIR)/%.o: %.cc
$(CXX) $(CXXFLAGS) -c $< -o $@
$(STATIC_OUTDIR)/%.o: %.c
$(CC) $(CFLAGS) -c $< -o $@
$(SHARED_OUTDIR)/%.o: %.cc
$(CXX) $(CXXFLAGS) $(PLATFORM_SHARED_CFLAGS) -c $< -o $@
$(SHARED_OUTDIR)/%.o: %.c
$(CC) $(CFLAGS) $(PLATFORM_SHARED_CFLAGS) -c $< -o $@
$(STATIC_OUTDIR)/port/port_posix_sse.o: port/port_posix_sse.cc
$(CXX) $(CXXFLAGS) $(PLATFORM_SSEFLAGS) -c $< -o $@
$(SHARED_OUTDIR)/port/port_posix_sse.o: port/port_posix_sse.cc
$(CXX) $(CXXFLAGS) $(PLATFORM_SHARED_CFLAGS) $(PLATFORM_SSEFLAGS) -c $< -o $@

View file

@ -1,17 +0,0 @@
Release 1.2 2011-05-16
----------------------
Fixes for larger databases (tested up to one billion 100-byte entries,
i.e., ~100GB).
(1) Place hard limit on number of level-0 files. This fixes errors
of the form "too many open files".
(2) Fixed memtable management. Before the fix, a heavy write burst
could cause unbounded memory usage.
A fix for a logging bug where the reader would incorrectly complain
about corruption.
Allow public access to WriteBatch contents so that users can easily
wrap a DB.

View file

@ -1,174 +0,0 @@
**LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.**
[![Build Status](https://travis-ci.org/google/leveldb.svg?branch=master)](https://travis-ci.org/google/leveldb)
Authors: Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com)
# Features
* Keys and values are arbitrary byte arrays.
* Data is stored sorted by key.
* Callers can provide a custom comparison function to override the sort order.
* The basic operations are `Put(key,value)`, `Get(key)`, `Delete(key)` (a short usage sketch follows this feature list).
* Multiple changes can be made in one atomic batch.
* Users can create a transient snapshot to get a consistent view of data.
* Forward and backward iteration is supported over the data.
* Data is automatically compressed using the [Snappy compression library](http://google.github.io/snappy/).
* External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.
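For orientation, here is a minimal usage sketch of the operations listed above, built only on the public headers `leveldb/db.h` and `leveldb/write_batch.h`; the database path is illustrative:

```cpp
#include <cassert>
#include <string>

#include "leveldb/db.h"
#include "leveldb/write_batch.h"

int main() {
    leveldb::DB* db = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
    assert(status.ok());

    // Basic operations: Put, Get, Delete.
    status = db->Put(leveldb::WriteOptions(), "key1", "value1");
    std::string value;
    status = db->Get(leveldb::ReadOptions(), "key1", &value);

    // Several changes applied as one atomic batch.
    leveldb::WriteBatch batch;
    batch.Delete("key1");
    batch.Put("key2", value);
    status = db->Write(leveldb::WriteOptions(), &batch);

    delete db;
    return 0;
}
```

String literals and `std::string` convert implicitly to `leveldb::Slice`, so no explicit `Slice` construction is needed here.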
# Documentation
[LevelDB library documentation](https://github.com/google/leveldb/blob/master/doc/index.md) is online and bundled with the source code.
# Limitations
* This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
* Only a single process (possibly multi-threaded) can access a particular database at a time.
* There is no client-server support builtin to the library. An application that needs such support will have to wrap their own server around the library.
# Contributing to the leveldb Project
The leveldb project welcomes contributions. leveldb's primary goal is to be
a reliable and fast key/value store. Changes that are in line with the
features/limitations outlined above, and meet the requirements below,
will be considered.
Contribution requirements:
1. **POSIX only**. We _generally_ will only accept changes that are both
compiled, and tested on a POSIX platform - usually Linux. Very small
changes will sometimes be accepted, but consider that more of an
exception than the rule.
2. **Stable API**. We strive very hard to maintain a stable API. Changes that
require changes for projects using leveldb _might_ be rejected without
sufficient benefit to the project.
3. **Tests**: All changes must be accompanied by a new (or changed) test, or
a sufficient explanation as to why a new (or changed) test is not required.
## Submitting a Pull Request
Before any pull request will be accepted the author must first sign a
Contributor License Agreement (CLA) at https://cla.developers.google.com/.
In order to keep the commit timeline linear
[squash](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#Squashing-Commits)
your changes down to a single commit and [rebase](https://git-scm.com/docs/git-rebase)
on google/leveldb/master. This keeps the commit timeline linear and more easily sync'ed
with the internal repository at Google. More information at GitHub's
[About Git rebase](https://help.github.com/articles/about-git-rebase/) page.
# Performance
Here is a performance report (with explanations) from the run of the
included db_bench program. The results are somewhat noisy, but should
be enough to get a ballpark performance estimate.
## Setup
We use a database with a million entries. Each entry has a 16 byte
key, and a 100 byte value. Values used by the benchmark compress to
about half their original size.
LevelDB: version 1.1
Date: Sun May 1 12:11:26 2011
CPU: 4 x Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
CPUCache: 4096 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
Raw Size: 110.6 MB (estimated)
File Size: 62.9 MB (estimated)
## Write performance
The "fill" benchmarks create a brand new database, in either
sequential, or random order. The "fillsync" benchmark flushes data
from the operating system to the disk after every operation; the other
write operations leave the data sitting in the operating system buffer
cache for a while. The "overwrite" benchmark does random writes that
update existing keys in the database.
fillseq : 1.765 micros/op; 62.7 MB/s
fillsync : 268.409 micros/op; 0.4 MB/s (10000 ops)
fillrandom : 2.460 micros/op; 45.0 MB/s
overwrite : 2.380 micros/op; 46.5 MB/s
Each "op" above corresponds to a write of a single key/value pair.
I.e., a random write benchmark goes at approximately 400,000 writes per second.
Each "fillsync" operation costs much less (0.3 millisecond)
than a disk seek (typically 10 milliseconds). We suspect that this is
because the hard disk itself is buffering the update in its memory and
responding before the data has been written to the platter. This may
or may not be safe based on whether or not the hard disk has enough
power to save its memory in the event of a power failure.
## Read performance
We list the performance of reading sequentially in both the forward
and reverse direction, and also the performance of a random lookup.
Note that the database created by the benchmark is quite small.
Therefore the report characterizes the performance of leveldb when the
working set fits in memory. The cost of reading a piece of data that
is not present in the operating system buffer cache will be dominated
by the one or two disk seeks needed to fetch the data from disk.
Write performance will be mostly unaffected by whether or not the
working set fits in memory.
readrandom : 16.677 micros/op; (approximately 60,000 reads per second)
readseq : 0.476 micros/op; 232.3 MB/s
readreverse : 0.724 micros/op; 152.9 MB/s
LevelDB compacts its underlying storage data in the background to
improve read performance. The results listed above were done
immediately after a lot of random writes. The results after
compactions (which are usually triggered automatically) are better.
readrandom : 11.602 micros/op; (approximately 85,000 reads per second)
readseq : 0.423 micros/op; 261.8 MB/s
readreverse : 0.663 micros/op; 166.9 MB/s
Some of the high cost of reads comes from repeated decompression of blocks
read from disk. If we supply enough cache to the leveldb so it can hold the
uncompressed blocks in memory, the read performance improves again:
readrandom : 9.775 micros/op; (approximately 100,000 reads per second before compaction)
readrandom : 5.215 micros/op; (approximately 190,000 reads per second after compaction)
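A small sketch of the cache tuning described above, assuming the standard `leveldb/cache.h` interface; the 100 MB capacity and the path are illustrative values, not recommendations:

```cpp
#include <cassert>

#include "leveldb/cache.h"
#include "leveldb/db.h"

int main() {
    leveldb::Options options;
    options.create_if_missing = true;
    // Keep up to ~100 MB of uncompressed blocks resident in memory.
    options.block_cache = leveldb::NewLRUCache(100 * 1048576);

    leveldb::DB* db = nullptr;
    leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
    assert(status.ok());

    // ... random reads now hit the larger block cache ...

    delete db;                   // close the database first
    delete options.block_cache;  // the caller owns and frees the cache
    return 0;
}
```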
## Repository contents
See [doc/index.md](doc/index.md) for more explanation. See
[doc/impl.md](doc/impl.md) for a brief overview of the implementation.
The public interface is in include/*.h. Callers should not include or
rely on the details of any other header files in this package. Those
internal APIs may be changed without warning.
Guide to header files:
* **include/db.h**: Main interface to the DB: Start here
* **include/options.h**: Control over the behavior of an entire database,
and also control over the behavior of individual reads and writes.
* **include/comparator.h**: Abstraction for user-specified comparison function.
If you want just bytewise comparison of keys, you can use the default
comparator, but clients can write their own comparator implementations if they
want custom ordering (e.g. to handle different character encodings, etc.)
* **include/iterator.h**: Interface for iterating over data. You can get
an iterator from a DB object (an iteration and snapshot sketch follows this list).
* **include/write_batch.h**: Interface for atomically applying multiple
updates to a database.
* **include/slice.h**: A simple module for maintaining a pointer and a
length into some other byte array.
* **include/status.h**: Status is returned from many of the public interfaces
and is used to report success and various kinds of errors.
* **include/env.h**:
Abstraction of the OS environment. A posix implementation of this interface is
in util/env_posix.cc
* **include/table.h, include/table_builder.h**: Lower-level modules that most
clients probably won't use directly
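To show how the iterator and snapshot pieces above fit together, here is a short sketch (the helper name `DumpAll` is illustrative, and `db` is assumed to have been opened as in the earlier example):

```cpp
#include <cassert>
#include <cstdio>

#include "leveldb/db.h"

// Scan every key/value pair against a consistent, transient snapshot.
void DumpAll(leveldb::DB* db) {
    leveldb::ReadOptions read_options;
    read_options.snapshot = db->GetSnapshot();  // reads see a frozen view of the data

    leveldb::Iterator* it = db->NewIterator(read_options);
    for (it->SeekToFirst(); it->Valid(); it->Next()) {
        std::printf("%s -> %s\n",
                    it->key().ToString().c_str(),
                    it->value().ToString().c_str());
    }
    assert(it->status().ok());  // surface any error hit during the scan

    delete it;
    db->ReleaseSnapshot(read_options.snapshot);
}
```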

View file

@ -1,14 +0,0 @@
ss
- Stats
db
- Maybe implement DB::BulkDeleteForRange(start_key, end_key)
that would blow away files whose ranges are entirely contained
within [start_key..end_key]? For Chrome, deletion of obsolete
object stores, etc. can be done in the background anyway, so
probably not that important.
- There have been requests for MultiGet.
After a range is completely deleted, what gets rid of the
corresponding files if we do no future changes to that range. Make
the conditions for triggering compactions fire in more situations?

Some files were not shown because too many files have changed in this diff.