LBRY on Kubernetes with Helm
Contributing Author: EnigmaCurry
Last Update: June 14 2019
Deploy lbrycrd, lbrynet, chainquery, mysql, and spee.ch on your Kubernetes cluster.
- Requirements
- Security Notice
- Installation
- Services
- Extra commands that k8s-lbry (run.sh) provides
- TLS / SSL / HTTPS
- Cloud specific notes
- Uninstall
Requirements
- A Kubernetes cluster.
- Tested on DigitalOcean managed Kubernetes cluster on nodes with 8GB of RAM, on kubernetes 1.14.1.
- Tested on AWS with the Charmed Kubernetes Distribution - See AWS specific notes.
- Tested on minikube for a self-contained virtual machine running kubernetes in VirtualBox - See minikube specific notes.
- Local development machine dependencies:
- Optional: for TLS / HTTPs support, you will also need an internet domain name, and the ability to update its DNS.
Your cloud provider should have instructions for setting up `kubectl` to talk to your cluster. This usually involves downloading a config file and putting it in `$HOME/.kube/config`. (The file has to be renamed `config` and put in the `$HOME/.kube` directory.)
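As a sketch (the download filename below is hypothetical; your provider's will differ), placing the file looks like:

```shell
# Hypothetical download location; substitute the file your provider gives you.
KUBECONFIG_DOWNLOAD=$HOME/Downloads/my-cluster-kubeconfig.yaml

mkdir -p $HOME/.kube                    # create the config directory if missing
if [ -f "$KUBECONFIG_DOWNLOAD" ]; then
  cp "$KUBECONFIG_DOWNLOAD" $HOME/.kube/config   # must be named exactly "config"
  chmod 600 $HOME/.kube/config                   # the file contains cluster credentials
fi
```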
Test that your `kubectl` can talk to your cluster by querying for a list of running nodes:
kubectl get nodes
If everything is working, you should see a list of one or more nodes running and showing `STATUS=Ready`.
Security Notice
Any cryptocurrency wallet that is online is a security concern. For any real-world production deployment, you will need to review this architecture closely to see if it fits with your chosen platform and network environment.
This system is currently designed for a kubernetes cluster that has a single administrator (or possibly a small team of trusted users). It will not support untrusted multi-tenancy out of the box.
All of the services are created in their own namespace, but no Security Policies have been applied to the pods.
The Helm configuration file contains all of the configuration for the system, including passwords in plain text.
The lbrynet SDK wallets are individually stored unencrypted in their own persistent volumes.
“We take security seriously. Please contact security@lbry.io regarding any security issues. Our PGP key is here if you need it.” If you find vulnerabilities, especially any that might increase the risk of someone losing their cryptocurrency, responsible disclosure is appreciated.
Installation
This system is installed via Helm, the package manager for Kubernetes. Helm Charts are the basis for packages in Helm. This directory is a Helm chart itself.
All of the helm and kubectl commands necessary to install, upgrade, and maintain your deployments are wrapped in the included `run.sh` script. For debugging purposes, this wrapper also prints to stdout the full underlying commands (helm, kubectl, etc.) as they are run.
Create a project directory
Create a new directory someplace to store your deployment configuration. For the rest of this tutorial, you will work from this directory:
mkdir $HOME/k8s-lbry-test
cd $HOME/k8s-lbry-test
Download `run.sh` to this same directory:
curl -Lo run.sh https://raw.githubusercontent.com/EnigmaCurry/lbry-docker/k8s-lbry/contrib/k8s-lbry/run.sh
chmod a+x run.sh
It's a good idea to always read any script you download from the internet before running it.
Setup alias and tab completion
`run.sh` can be run directly without any setup. However, without adding it to your `PATH`, you need to specify the full path to the script each time. Setting a bash alias for `run.sh` is the quickest way of setting it up to run from anywhere, as well as activating support for bash tab completion.
One-time setup to install the alias to `$HOME/.bashrc`:
./run.sh setup-alias
It should prompt you if it is OK for the script to edit `$HOME/.bashrc`. Once you confirm, close your terminal session, then reopen it.
Verify that the new `k8s-lbry` alias to `run.sh` is working:
k8s-lbry kubectl get nodes
Notice that tab completion should work throughout typing the above command.
k8s-lbry setup
Setup will check for dependencies, update helm repositories, and create an initial config file (`values-dev.yaml`).
k8s-lbry setup
k8s-lbry install-nginx-ingress
An Ingress Controller (nginx-ingress) will help you to route outside internet traffic into your cluster. nginx-ingress will also help terminate TLS connections (SSL) so that your containers don't need to worry about encryption of traffic.
Install nginx-ingress into the `k8s-lbry` namespace:
k8s-lbry install-nginx-ingress
k8s-lbry install-cert-manager
cert-manager will provide TLS certificates (SSL) for your cluster, using Let's Encrypt.
Install cert-manager into the `cert-manager` namespace:
k8s-lbry install-cert-manager
k8s-lbry install
Once nginx-ingress and cert-manager are installed, the main helm chart can be installed. This installs lbrycrd, chainquery, lbrynet, spee.ch, and mysql, depending on what you enable in `values-dev.yaml`.
Find the External IP address for your load balancer:
k8s-lbry kubectl get svc nginx-ingress-controller -o wide
If you find a hostname instead of an IP address, this means your load balancer has multiple IP addresses. In this case, you will need to resolve the domain name to find the IP addresses. If this affects you, run `dig` with the hostname as the second argument. If your system does not have `dig` installed, you can paste the hostname into an online DNS lookup tool.
Look for the `;ANSWER` section and you should see two or more IP addresses listed. Since lbrycrd will only advertise one IP address, pick just one of the IP addresses to use for the purposes of this tutorial.
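For example (the hostname below is a placeholder; substitute the one reported by `kubectl get svc`), you could resolve the name and keep just the first address:

```shell
LB_HOSTNAME=a1b2c3.elb.example.com   # placeholder load balancer hostname

# dig may return several A records; lbrycrd can only advertise one
# address, so take the first line of the answer.
if command -v dig >/dev/null; then
  EXTERNAL_IP=$(dig +short "$LB_HOSTNAME" A | head -n 1)
  echo "External IP: $EXTERNAL_IP"
fi
```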
You must edit your own `values-dev.yaml`. (The setup procedure created an initial configuration in the same directory as `run.sh`.) To use a different config file, export the `VALUES` environment variable before subsequent commands, specifying the full path to your values file.
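For instance (the path here is hypothetical), to point subsequent commands at a different values file:

```shell
# Hypothetical alternate config; use the full path to your own values file.
export VALUES=$HOME/k8s-lbry-test/values-prod.yaml

# Later run.sh commands (e.g. `k8s-lbry upgrade`) will now read this file.
echo "Using values file: $VALUES"
```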
Edit `values-dev.yaml`. You only need to change one thing right now:
- In `lbrycrd.configurationFile.lbrycrd.conf`, at the bottom of this section find `externalip=` and set it equal to the External IP address of the Load Balancer obtained above. (Example: `externalip=123.123.123.123`)
Save `values-dev.yaml`.
Now run the install script to create the new release:
k8s-lbry install
k8s-lbry upgrade
For helm, `upgrade` does not necessarily mean you are upgrading to a new version of any particular software; `upgrade` just means to apply your configuration file to the cluster. If you edit `values-dev.yaml`, you then need to apply your changes with `k8s-lbry upgrade`.
You can make changes to `values-dev.yaml` at any time. You can apply your configuration to your cluster by upgrading the release:
k8s-lbry upgrade
You can upgrade as often as you want. Each time you upgrade the release, helm increases the `REVISION` number:
k8s-lbry helm ls
Services
lbrycrd
After running the installation above, you should now have a running lbrycrd pod. Verify this by listing the pods for the `k8s-lbry` namespace:
k8s-lbry kubectl get pods
You should see a pod listed with a name that starts with `lbrycrd`.
Check the lbrycrd logs:
k8s-lbry logs lbrycrd
Press Ctrl-C to stop viewing the log.
It is advisable to wait for lbrycrd to synchronize with the full blockchain before starting other services, so watch the logs until synchronization completes (`progress=1.0`).
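One way to wait for the sync (a sketch: `grep -m 1` exits at the first match, which stops the stream; the guard makes this a no-op anywhere the `k8s-lbry` alias is unavailable):

```shell
# Follow the lbrycrd log and stop at the first line reporting full sync.
if command -v k8s-lbry >/dev/null; then
  k8s-lbry logs lbrycrd | grep -m 1 'progress=1.0'
fi
```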
You can utilize `lbrycrd-cli` as well:
k8s-lbry lbrycrd-cli --help
chainquery
MySQL for chainquery
MySQL is used as the database chainquery talks to.
Edit `values-dev.yaml` and set `chainquery-mysql.enabled` to `true`.
Upgrade the release to turn on mysql for chainquery:
k8s-lbry upgrade
You can try logging into the mysql shell if you like:
k8s-lbry chainquery-mysql-client
You can view the mysql logs:
k8s-lbry logs chainquery-mysql
Press Ctrl-C to stop viewing the log.
Start chainquery
Edit `values-dev.yaml` and set `chainquery.enabled` to `true`.
Upgrade the release to turn on chainquery:
k8s-lbry upgrade
You can view the chainquery logs:
k8s-lbry logs chainquery
Startup chainquery with a database snapshot
If chainquery is starting with a blank MySQL database, it will take several days to synchronize with the full lbrycrd blockchain. If this is OK, you can just watch the chainquery logs and wait for it to get to the current block height.
If you cannot wait that long, you can scrap your existing chainquery database and restart from a more recent database snapshot:
k8s-lbry chainquery-override-snapshot
This will prompt whether you really wish to destroy the current chainquery database. If you confirm, the existing chainquery and chainquery-mysql deployments will be deleted, their pods will be terminated, and the contents of the Persistent Volume Claim (PVC) for chainquery-mysql will be deleted. The snapshot will be downloaded and restored in its place.
Once the snapshot is restored, upgrade the release to restore the chainquery and chainquery-mysql deployments, and restart pods:
k8s-lbry upgrade
You can verify that the database now has data up to the height of the database snapshot. Log in to the mysql shell:
k8s-lbry chainquery-mysql-client
Then query for the latest block height:
mysql> select height from chainquery.block order by id desc limit 1;
+--------+
| height |
+--------+
| 561080 |
+--------+
1 row in set (0.00 sec)
Also verify that chainquery is again happy. View the chainquery logs:
k8s-lbry logs chainquery
Press Ctrl-C to quit viewing the logs.
lbrynet API service (not for spee.ch)
This is for a standalone lbrynet API service inside your cluster. Blob storage goes to its own persistent volume, but is configured with `save_files=false`. There is no outside access to the Downloads directory provided. You can stream blobs from lbrynet via `http://lbrynet:5279/get/CLAIM_NAME/CLAIM_ID`.
This particular lbrynet configuration won't work for spee.ch (v0.5.12). spee.ch needs direct access to the Downloads directory of lbrynet. If you want lbrynet for spee.ch, skip this section and head directly to the spee.ch section, which implements its own lbrynet sidecar.
Edit `values-dev.yaml` and set `lbrynet.enabled` to `true`.
Upgrade the release to turn on lbrynet:
k8s-lbry upgrade
You can view the lbrynet logs:
k8s-lbry logs lbrynet
IMPORTANT - Backup your cluster wallet
The wallet is created inside the `lbrynet` persistent volume.
Copy the wallet in case the volume gets destroyed:
k8s-lbry lbrynet-copy-wallet /tmp/k8s-lbry-lbrynet-wallet-backup.json
Check the contents of `/tmp/k8s-lbry-lbrynet-wallet-backup.json` and move the file to a safe place for backup (make sure to delete the temporary file).
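For example (the destination path is arbitrary; pick your own safe location), you might restrict permissions and move the file out of `/tmp` in one step:

```shell
BACKUP=/tmp/k8s-lbry-lbrynet-wallet-backup.json

if [ -f "$BACKUP" ]; then
  chmod 600 "$BACKUP"                             # the wallet is unencrypted: restrict access
  mv "$BACKUP" $HOME/lbrynet-wallet-backup.json   # hypothetical safe location
fi
```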
Once your wallet is backed up, you can generate a receiving address in order to deposit LBC:
k8s-lbry lbrynet address unused
spee.ch (and lbrynet sidecar and mysql)
Note: Throughout this deployment, the unstylized name `speech` is used.
Speech needs three containers, running in two pods:
- `speech` pod:
  - speech, the nodejs server container.
  - lbrynet, running in the same pod as speech, so as to share one downloads directory. (This is called a 'sidecar' container, which is guaranteed to run on the same kubernetes node as the spee.ch container.)
- `speech-mysql` pod:
  - mysql for storing the speech database.
Edit `values-dev.yaml`:
- Set `speech-mysql.enabled` to `true`.
- Set `speech.enabled` to `true`.
- Set `speech.service.hostname` to your subdomain name for speech.
- Set `speech.site.details.host` to your subdomain name for speech.
- Set `speech.site.details.ipAddress` to your Load Balancer external IP address.
- Set `speech.site.details.title`.
Upgrade the release to turn on `speech`, `speech-lbrynet`, and `speech-mysql`:
k8s-lbry upgrade
Speech will not work yet! Continue on through the next sections.
IMPORTANT - Backup your speech wallet
The wallet for speech is created inside the `speech-lbrynet` persistent volume.
Copy the wallet in case the volume gets destroyed:
k8s-lbry speech-lbrynet-copy-wallet /tmp/k8s-lbry-speech-lbrynet-wallet-backup.json
Check the contents of `/tmp/k8s-lbry-speech-lbrynet-wallet-backup.json` and move the file to a safe place for backup (make sure to delete the temporary file).
Fund your speech wallet
Once your wallet is backed up, you can generate a receiving address in order to deposit LBC:
k8s-lbry speech-lbrynet address unused
Now send at least 5 LBC to your new speech wallet address.
Verify your speech wallet balance:
k8s-lbry speech-lbrynet account balance
Create a thumbnail channel
Create the LBRY channel for hosting speech thumbnails. Replace `@YOUR_NAME_HERE` with your chosen (unique) channel name to create. Amount is how much LBC to reserve for the claim:
k8s-lbry speech-lbrynet channel new @YOUR_NAME_HERE --amount=1.0
Get the claim id for the channel:
k8s-lbry speech-lbrynet channel list
The `claim_id` field is your `thumbnailChannelId` used in the next section.
Finish speech setup
Edit `values-dev.yaml` again:
- Set `speech.site.publishing.primaryClaimAddress` to the fresh wallet address generated above.
- Set `speech.site.publishing.thumbnailChannel` to the name of the channel to publish thumbnails.
- Set `speech.site.publishing.thumbnailChannelId` to the claim id of the channel to publish thumbnails (see `k8s-lbry speech-lbrynet channel list`).
- Set `speech.site.publishing.serviceOnlyApproved` if you want to limit the channels served.
- Set `speech.site.publishing.approvedChannels` if you want to limit the channels served.
- Set `speech.site.analytics.googleId`. See the speech settings docs for more info.
Upgrade the release to apply the new settings:
k8s-lbry upgrade
Restart the speech pod:
k8s-lbry restart speech
Extra commands that k8s-lbry (run.sh) provides
You can run `k8s-lbry` without any arguments, and it will provide you some help.
k8s-lbry helm
This script encapsulates helm so that it can run its own local instance of tiller through helm-tiller. As a convenience, run.sh can start tiller locally, pass through any helm commands to your cluster, and then shut down tiller:
Example:
k8s-lbry helm ls
If you try to run `helm` without the `run.sh helm` wrapper, you should expect to see this error:
Error: could not find tiller
By design, tiller is not running on your cluster; it just runs locally for the duration that `run.sh` needs it, then shuts down.
k8s-lbry kubectl
This script encapsulates kubectl so that you do not have to keep typing `--namespace k8s-lbry` all the time. All kubectl commands will default to the `k8s-lbry` namespace, or the `NAMESPACE` environment variable if set.
Example:
k8s-lbry kubectl get pods
k8s-lbry logs
Stream the logs for a pod into your terminal, given the helm app name. If the pod contains more than one container, you must specify the container name as the third argument.
Examples:
k8s-lbry logs lbrycrd
k8s-lbry logs speech speech-lbrynet
Press Ctrl-C to stop streaming the logs. If the logs seemingly hang forever, press Ctrl-C and try the command again.
k8s-lbry shell
When doing maintenance tasks, it is often useful to be able to attach a shell to a running pod. This is a convenience wrapper that uses the helm app name to connect to the correct pod.
This will connect to the pod running the `lbrynet` service.
Example:
k8s-lbry shell lbrynet
Once in the shell, do whatever maintenance is necessary, and press `Ctrl-D` or type `exit` when done.
k8s-lbry shell-pvc
When doing maintenance tasks, it is often useful to be able to run a utility container that mounts the persistent volume claim (PVC) of another container. This is especially useful in scenarios where the pod will not start, and therefore you cannot use the `run.sh shell` command from the previous section.
This will run a shell in a new utility container, mounting the lbrynet PVC to `/pvcs/lbrynet`.
Example:
k8s-lbry shell-pvc lbrynet
Once in the shell, do whatever maintenance is necessary, and press `Ctrl-D` or type `exit` when done.
k8s-lbry restart
Delete a pod for a given app name. The existing deployment will immediately restart a new pod.
Example:
k8s-lbry restart speech
k8s-lbry lbrynet
You can run the `lbrynet` client from within the running pod, redirecting output to your local console.
Example:
k8s-lbry lbrynet --help
k8s-lbry chainquery-mysql-client
Run the mysql shell for the chainquery database.
k8s-lbry speech-mysql-client
Run the mysql shell for the speech database.
k8s-lbry lbrynet-copy-wallet <local_backup_path>
Backup the lbrynet wallet to a local path.
Example:
k8s-lbry lbrynet-copy-wallet /tmp/k8s-lbry-lbrynet-wallet-backup.json
k8s-lbry package
This is for the developer of this package to build and maintain the helm package releases, and upload them to the S3 package repository. Requires `s3cmd` to be installed.
Example:
k8s-lbry package 0.1.1
TLS / SSL / HTTPS
You have already installed cert-manager for your cluster, but HTTPS is not turned on out of the box. Setup is easy:
- You need to create a DNS A record for your domain pointing to the External IP address of the nginx-ingress Load Balancer. (Preferably create a wildcard record for an entire subdomain [`*.example.com` or `*.lbry.example.com`]; that way you only have to set this up once, no matter how many sub-domains you end up needing.) Refer to the install section for how to retrieve the IP address.
- Edit `values-dev.yaml`:
  - Change `cert-manager-issuer.email` from the example email address to your own. Let's Encrypt is a free TLS certificate issuer, and they will send you important emails about your domain and certificate expirations.
  - You can turn on the echo service to test with:
    - Change `echo-http-server.enabled` to `true`.
    - Change `echo-http-server.hostname` to a hostname you've configured the DNS for.
- Upgrade nginx-ingress, turning on HTTPS support (copy the command including the wrapping parentheses):
(NGINX_ENABLE_HTTPS=true k8s-lbry upgrade-nginx-ingress)
And upgrade `k8s-lbry`:
k8s-lbry upgrade
If you turned on the echo service, try it out with curl:
curl -L https://echo.example.com
It should return the name of the service: `echo-http-server`.
If you get any certificate validation errors, then you may need to wait for up to 20 minutes for the certificate to be issued, and then retry.
If you run into problems with certificates, check out the cert-manager logs:
kubectl -n cert-manager logs -l app=cert-manager -f
Also check the certificate resources:
k8s-lbry kubectl get certificates
You should see the `echo-http-server-tls` certificate resource listed. The `READY` status indicates whether the certificate has been issued yet or not.
Cloud specific notes
AWS
Deployment on AWS requires these modifications:
Following the CDK on AWS docs, install the StorageClass for EBS:
kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
EOF
In `values-dev.yaml`, all of your persistence configurations need to add `storageClass: ebs-gp2`. There are commented versions in the default config file which you can simply uncomment.
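As a sketch only (the exact key paths depend on the chart's values layout; the commented examples in your generated values-dev.yaml are the authoritative reference), an uncommented persistence block might look like:

```yaml
lbrycrd:
  persistence:
    storageClass: ebs-gp2   # assumed key name; mirror the commented example in your file
```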
minikube
minikube lets you run kubernetes on your development machine, in VirtualBox.
Make sure you start minikube with sufficient RAM for testing:
minikube start --memory 4096
In order for minikube to route the LoadBalancer correctly, you need to add a route on the host, and install a patch to your cluster:
## ONLY RUN THESE COMMANDS IF YOU ARE USING MINIKUBE:
sudo ip route add $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") via $(minikube ip)
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
If it works correctly, after you run `k8s-lbry install-nginx-ingress`, the External IP address for the LoadBalancer should no longer be `pending`:
k8s-lbry kubectl get svc nginx-ingress-controller
Uninstall
If you wish to uninstall k8s-lbry from your cluster, here are the steps:
- Delete the helm releases:
  k8s-lbry helm delete k8s-lbry
  k8s-lbry helm delete cert-manager
  k8s-lbry helm delete nginx-ingress
- By deleting the `nginx-ingress` release, the Load Balancer resource should be automatically cleaned up. You can verify yourself in your cloud provider's console that no Load Balancer is still running.
- Delete the Persistent Volume Claims:
  - In `values-dev.yaml`, all of the persistence claims are labeled with `"helm.sh/resource-policy": keep`. This means that helm will not automatically delete the volume when it deletes the release.
  - List all of your Persistent Volume Claims:
    k8s-lbry kubectl get pvc
  - Then delete each one you no longer want:
    k8s-lbry kubectl delete pvc [name-of-pvc]
  - Deleting the claim should delete the volume. You can verify yourself in your cloud provider's console that no Volumes exist.