Updated Setting up a Hub (markdown)
parent bc754f218e
commit 6fa4df1900
1 changed file with 43 additions and 69 deletions

## Installation

Scribe may be run from source, a binary, or a docker image.

Our [releases page](https://github.com/lbryio/hub/releases) contains pre-built binaries of the latest release, pre-releases, and past releases for macOS and Debian-based Linux.

Prebuilt [docker images](https://hub.docker.com/r/lbry/hub/tags) are also available.

If you are deploying on a fresh server, first create an AWS instance (see the screenshots for details) and prepare it as follows.

### Create an lbry user

```
sudo adduser lbry
```

### Let the lbry user use sudo without a password

See https://www.atlantic.net/vps-hosting/how-to-setup-passwordless-sudo-for-a-specific-user/

### Install docker

Follow https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository:

```
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo service docker start
sudo usermod -aG docker $USER
newgrp docker
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```

### Prebuilt docker image

`docker pull lbry/hub:master`

### Build your own docker image

```
git clone https://github.com/lbryio/hub.git
cd hub
docker build -t lbry/hub:development .
```

### Install from source

Scribe has been tested with Python 3.7-3.9. Higher versions probably work but have not yet been tested.

1. Clone the scribe repo
```
git clone https://github.com/lbryio/hub.git
cd hub
```
2. Make a virtual env
```
python3.9 -m venv hub-venv
```
3. From the virtual env, install scribe
```
source hub-venv/bin/activate
pip install -e .
```

That completes the installation; you should now have the commands `scribe`, `scribe-elastic-sync`, and `herald`.

These can also optionally be run with `python -m hub.scribe`, `python -m hub.elastic_sync`, and `python -m hub.herald`.
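
What a first run of each command looks like is sketched below; the data directory, RPC credentials, and lbcd address are placeholders rather than defaults, and only flags documented in the options section further down are used.

```
# scribe builds the rocksdb database in --db_dir from the blockchain daemon
scribe --db_dir /mnt/hub/db --daemon_url rpcuser:rpcpassword@127.0.0.1:9245 --chain mainnet

# scribe-elastic-sync populates the elasticsearch index from that database
scribe-elastic-sync --db_dir /mnt/hub/db

# herald serves wallet clients over the electrum protocol
herald --db_dir /mnt/hub/db --daemon_url rpcuser:rpcpassword@127.0.0.1:9245 --host 0.0.0.0 --tcp_port 50001
```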

## Usage

### Requirements

Scribe needs elasticsearch and either the [lbrycrd](https://github.com/lbryio/lbrycrd) or [lbcd](https://github.com/lbryio/lbcd) blockchain daemon to be running.

With high-performance settings, everything can be run on the same machine if it has 64GB of memory and 12 cores. The recommended layout, however, is elasticsearch on one instance with 8GB of memory and at least 4 cores dedicated to it, the blockchain daemon on another with 16GB of memory and at least 4 cores, and the scribe hub services on their own instance with between 16GB and 32GB of memory (depending on settings) and 8 cores.

As of block 1147423 (4/21/22) the size of the scribe rocksdb database is 120GB and the size of the elasticsearch volume is 63GB.

### docker-compose

The recommended way to run a scribe hub is with docker. See [this guide](https://github.com/lbryio/hub/blob/master/docs/cluster_guide.md) for instructions.

If you have the resources to run all of the services on one machine (at least 300GB of fast storage, preferably NVMe, 64GB of RAM, and 12 fast cores), see [this](https://github.com/lbryio/hub/blob/master/docs/docker_examples/docker-compose.yml) docker-compose example.

#### Create lbcd and rocksdb volumes

```
mkdir /home/lbry/docker-volumes
sudo chown -R 999:999 /home/lbry/docker-volumes
docker volume create --driver local --opt type=none --opt device=/home/lbry/docker-volumes --opt o=bind lbcd
docker volume create --driver local --opt type=none --opt device=/home/lbry/docker-volumes --opt o=bind lbry_rocksdb
```

#### Download an lbcd snapshot

```
sudo apt install zstd
cd /home/lbry/docker-volumes/lbcd
wget -c https://snapshots.lbry.com/blockchain/lbcd_snapshot_1238238_v0.22.116_2022-10-07.tar.zst -O - | tar --zstd -x
sudo chown -R 999:999 .
```
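
Elasticsearch also has to be reachable before `scribe-elastic-sync` and `herald` start. If it is not one of the services in your docker-compose file, the sketch below is one way to run it with docker; the 7.x image tag, heap size, and single-node setting are assumptions to adjust for your deployment (the 8GB-of-memory recommendation above leaves room for roughly a 4GB heap).

```
# single-node elasticsearch with a persistent data volume; values are illustrative
docker run -d --name es01 \
  -p 9200:9200 \
  -e discovery.type=single-node \
  -e "ES_JAVA_OPTS=-Xms4g -Xmx4g" \
  -v es01_data:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.10
```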

Bring the services up in this order (a docker compose sketch follows the list):

1. Start lbcd first and let it catch up.
2. Then start scribe and let it sync (this takes about two days).
3. Then start the rest of the docker-compose services.
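
The sketch below shows that start order with the docker compose CLI; the service names (`lbcd`, `scribe`, and so on) are assumptions, so check the actual names in the docker-compose.yml you are using.

```
# start the blockchain daemon first and let it catch up
docker compose up -d lbcd

# once lbcd has caught up, start scribe and let it sync the rocksdb database
docker compose up -d scribe

# finally bring up the remaining services (elasticsearch sync, herald, ...)
docker compose up -d
```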

### From source

### Federation

### Options

#### Content blocking and filtering

For various reasons it may be desirable to block or filter content from claim search and resolve results. [Here](https://github.com/lbryio/hub/blob/master/docs/blocking.md) are instructions for how to configure and use this feature, as well as information about the recommended defaults.
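
For example, using the environment-variable form of the `herald` options documented below, blocking and filtering channels are given as space separated channel claim ids; the ids here are placeholders.

```
# claims reposted by the blocking channels can't be resolved or returned in search results;
# claims reposted by the filtering channels are only hidden from search results
export BLOCKING_CHANNEL_IDS="0000000000000000000000000000000000000000"
export FILTERING_CHANNEL_IDS="1111111111111111111111111111111111111111 2222222222222222222222222222222222222222"
```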

#### Common options across `scribe`, `herald`, and `scribe-elastic-sync`

- `--db_dir` (required) Path of the directory containing lbry-rocksdb. Set from the environment with `DB_DIRECTORY`.
- `--daemon_url` (required for `scribe` and `herald`) URL for RPC to lbrycrd or lbcd, in the form `<rpcuser>:<rpcpassword>@<lbrycrd rpc ip>:<lbrycrd rpc port>`.
- `--reorg_limit` Max reorg depth, defaults to 200. Set from the environment with `REORG_LIMIT`.
- `--chain` Which blockchain to use - either `mainnet`, `testnet`, or `regtest`. Set from the environment with `NET`.
- `--max_query_workers` Size of the thread pool. Set from the environment with `MAX_QUERY_WORKERS`.
- `--cache_all_tx_hashes` If this flag is set, all tx hashes will be stored in memory. For `scribe`, this speeds up applying blocks and processing mempool. For `herald`, this speeds up syncing address histories. This setting uses 10+ GB of memory. It can be set from the environment with `CACHE_ALL_TX_HASHES=Yes`.
- `--cache_all_claim_txos` If this flag is set, all claim txos will be indexed in memory. Set from the environment with `CACHE_ALL_CLAIM_TXOS=Yes`.
- `--prometheus_port` If provided, this port will be used to serve prometheus metrics. Set from the environment with `PROMETHEUS_PORT`.

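
Because each of these has an environment-variable equivalent, a deployment can set them once in the environment (for example in a systemd unit or a docker-compose `environment:` block) rather than repeating command-line flags. The values below are illustrative only.

```
# environment-variable equivalents of the common options above (illustrative values)
export DB_DIRECTORY=/mnt/hub/db
export NET=mainnet
export REORG_LIMIT=200
export MAX_QUERY_WORKERS=4
export CACHE_ALL_TX_HASHES=Yes    # uses 10+ GB of memory
export CACHE_ALL_CLAIM_TXOS=Yes
export PROMETHEUS_PORT=2112
```
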
#### Options for `scribe`

- `--db_max_open_files` This setting translates into the max_open_files option given to rocksdb. A higher number will use more memory. Defaults to 64.
- `--address_history_cache_size` The number of items in the address history cache used for processing blocks and mempool updates. A higher number will use more memory; it shouldn't ever need to be higher than 10000. Defaults to 1000.
- `--index_address_statuses` Maintain an index of the statuses of address transaction histories. This makes handling notifications for transactions in a block uniformly fast, at the expense of more time to process new blocks and somewhat more disk space (~10 GB as of block 1161417).

#### Options for `scribe-elastic-sync`

- `--reindex` If this flag is set, drop and rebuild the elasticsearch index.

#### Options for `herald`

- `--host` Interface for the server to listen on; use 0.0.0.0 to listen on the external interface. Can be set from the environment with `HOST`.
- `--tcp_port` Electrum TCP port for the hub server to listen on. Can be set from the environment with `TCP_PORT`.
- `--udp_port` UDP port for the hub server to listen on. Can be set from the environment with `UDP_PORT`.
- `--elastic_host` Hostname or IP address of the elasticsearch instance to connect to. Can be set from the environment with `ELASTIC_HOST`.
- `--elastic_port` Elasticsearch port to connect to. Can be set from the environment with `ELASTIC_PORT`.
- `--elastic_notifier_host` Elastic sync notifier host to connect to, defaults to localhost. Can be set from the environment with `ELASTIC_NOTIFIER_HOST`.
- `--elastic_notifier_port` Elastic sync notifier port to connect to. Can be set from the environment with `ELASTIC_NOTIFIER_PORT`.
- `--query_timeout_ms` Timeout for claim searches in elasticsearch, in milliseconds. Can be set from the environment with `QUERY_TIMEOUT_MS`.
- `--blocking_channel_ids` Space separated list of channel claim ids used for blocking. Claims that are reposted by these channels can't be resolved or returned in search results. Can be set from the environment with `BLOCKING_CHANNEL_IDS`.
- `--filtering_channel_ids` Space separated list of channel claim ids used for filtering. Claims that are reposted by these channels aren't returned in search results. Can be set from the environment with `FILTERING_CHANNEL_IDS`.
- `--index_address_statuses` Use the address history status index; this makes handling notifications for transactions in a block uniformly fast (must also be turned on in `scribe`).
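
Putting these together, a full `herald` invocation might look like the sketch below; the addresses, ports, and paths are placeholders rather than recommended values.

```
# serve electrum clients on the external interface and query a remote elasticsearch
herald --db_dir /mnt/hub/db \
  --daemon_url rpcuser:rpcpassword@127.0.0.1:9245 \
  --host 0.0.0.0 --tcp_port 50001 --udp_port 50001 \
  --elastic_host 10.0.0.3 --elastic_port 9200 \
  --elastic_notifier_host 10.0.0.4 \
  --query_timeout_ms 3000 \
  --prometheus_port 2112
```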