Compare commits


8 commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Victor Shyba | 574c2ce794 | adds integration test for release_time | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | 7f5541eb5c | adds release_time parameter to RPC publish | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | 7b49241c38 | adjust balance assertions for integration test using new signature model due new fees from new size | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | da716b2e3d | remove new signature model integration test as its the default now | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | a144fae8b2 | activate new signature model | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | 0d90b82287 | clear releaseTime when not set | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | 567c23e10f | regen for metadata releaseTime field | 2019-01-23 11:16:43 -03:00 |
| Victor Shyba | 16970b7d37 | regen proto from lbry/types, removes local .proto files | 2019-01-23 11:16:43 -03:00 |
516 changed files with 39335 additions and 73723 deletions
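The headline change in these commits is the new `release_time` parameter on the `publish` RPC (commits 7f5541eb5c and 574c2ce794 above). As a rough sketch of how a client might pass it, assuming the daemon's JSON-RPC endpoint on port 5279 that the issue template below also queries; the claim name, bid, file path, and exact parameter shape are illustrative assumptions, not taken from this diff:

```python
# Hypothetical sketch: publish with the new release_time parameter,
# via the lbrynet daemon's JSON-RPC interface on localhost:5279.
import json
import urllib.request

payload = {
    "method": "publish",
    "params": {
        "name": "my-claim",            # assumed claim name
        "bid": "0.1",                  # assumed bid amount
        "file_path": "/path/to/file",  # assumed local file
        "release_time": 1548250000,    # Unix timestamp; the field added here
    },
}
req = urllib.request.Request(
    "http://localhost:5279",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```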

@@ -2,6 +2,6 @@
.tox
__pycache__
dist
lbry.egg-info
lbrynet.egg-info
docs
tests

.gitattributes (new file)

@@ -0,0 +1 @@
/CHANGELOG.md merge=union
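# Illustration (not part of the diff): with merge=union, conflicting
# hunks in CHANGELOG.md are resolved by keeping the lines from both
# branches, so concurrent changelog entries merge without conflict markers.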

.github/ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,74 @@
<!--
Thanks for reporting an issue to LBRY and helping us improve!
To make it possible for us to help you, please fill out below information carefully.
Before reporting any issues, please make sure that you're using the latest version.
- App: https://github.com/lbryio/lbry-desktop/releases
- Daemon: https://github.com/lbryio/lbry/releases
We are also available on Discord at https://chat.lbry.io
-->
## The Issue
In order to <achieve some value>,
as a <type of user>,
I want <some functionality>.
### Steps to reproduce
1.
2.
3.
### Expected behaviour
Tell us what should happen
### Actual behaviour
Tell us what happens instead
## System Configuration
<!-- For the app, this info is in the About section at the bottom of the Help page.
You can include a screenshot instead of typing it out -->
<!-- For the daemon, run:
curl 'http://localhost:5279' --data '{"method":"version"}'
and include the full output -->
- LBRY Daemon version:
- LBRY App version:
- LBRY Installation ID:
- Operating system:
## Anything Else
<!-- Include anything else that does not fit into the above sections -->
## Screenshots
<!-- If a screenshot would help explain the bug, please include one or two here -->
## Internal Use
### Acceptance Criteria
1.
2.
3.
### Definition of Done
- [ ] Tested against acceptance criteria
- [ ] Tested against the assumptions of user story
- [ ] The project builds without errors
- [ ] Unit tests are written and passing
- [ ] Tests on devices/browsers listed in the issue have passed
- [ ] QA performed & issues resolved
- [ ] Refactoring completed
- [ ] Any configuration or build changes documented
- [ ] Documentation updated
- [ ] Peer Code Review performed

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,38 @@
## PR Checklist
Please check all that apply to this PR using "x":
- [ ] I have checked that this PR is not a duplicate of an existing PR (open, closed or merged)
- [ ] I have checked that this PR does not introduce a breaking change
- [ ] This PR introduces breaking changes and I have provided a detailed explanation below
## PR Type
What kind of change does this PR introduce?
Why is this change necessary?
<!-- Please check all that apply to this PR using "x". -->
- [ ] Bugfix
- [ ] Feature
- [ ] Breaking changes (bugfix or feature that introduces breaking changes)
- [ ] Code style update (formatting)
- [ ] Refactoring (no functional changes)
- [ ] Documentation changes
- [ ] Other - Please describe:
## Fixes
Issue Number: N/A
## What is the current behavior?
## What is the new behavior?
## Other information
<!-- If this PR contains a breaking change, please describe the impact and solution strategy for existing applications below. -->

@@ -1,206 +0,0 @@
name: ci
on: ["push", "pull_request", "workflow_dispatch"]
jobs:
lint:
name: lint
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: extract pip cache
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- run: pip install --user --upgrade pip wheel
- run: pip install -e .[lint]
- run: make lint
tests-unit:
name: "tests / unit"
strategy:
matrix:
os:
- ubuntu-20.04
- macos-latest
- windows-latest
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: set pip cache dir
shell: bash
run: echo "PIP_CACHE_DIR=$(pip cache dir)" >> $GITHUB_ENV
- name: extract pip cache
uses: actions/cache@v3
with:
path: ${{ env.PIP_CACHE_DIR }}
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- id: os-name
uses: ASzc/change-string-case-action@v5
with:
string: ${{ runner.os }}
- run: python -m pip install --user --upgrade pip wheel
- if: startsWith(runner.os, 'linux')
run: pip install -e .[test]
- if: startsWith(runner.os, 'linux')
env:
HOME: /tmp
run: make test-unit-coverage
- if: startsWith(runner.os, 'linux') != true
run: pip install -e .[test]
- if: startsWith(runner.os, 'linux') != true
env:
HOME: /tmp
run: coverage run --source=lbry -m unittest tests/unit/test_conf.py
- name: submit coverage report
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_FLAG_NAME: tests-unit-${{ steps.os-name.outputs.lowercase }}
COVERALLS_PARALLEL: true
run: |
pip install coveralls
coveralls --service=github
tests-integration:
name: "tests / integration"
runs-on: ubuntu-20.04
strategy:
matrix:
test:
- datanetwork
- blockchain
- claims
- takeovers
- transactions
- other
steps:
- name: Configure sysctl limits
run: |
sudo swapoff -a
sudo sysctl -w vm.swappiness=1
sudo sysctl -w fs.file-max=262144
sudo sysctl -w vm.max_map_count=262144
- name: Runs Elasticsearch
uses: elastic/elastic-github-actions/elasticsearch@master
with:
stack-version: 7.12.1
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.9'
- if: matrix.test == 'other'
run: |
sudo apt-get update
sudo apt-get install -y --no-install-recommends ffmpeg
- name: extract pip cache
uses: actions/cache@v3
with:
path: ./.tox
key: tox-integration-${{ matrix.test }}-${{ hashFiles('setup.py') }}
restore-keys: txo-integration-${{ matrix.test }}-
- run: pip install tox coverage coveralls
- if: matrix.test == 'claims'
run: rm -rf .tox
- run: tox -e ${{ matrix.test }}
- name: submit coverage report
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_FLAG_NAME: tests-integration-${{ matrix.test }}
COVERALLS_PARALLEL: true
run: |
coverage combine tests
coveralls --service=github
coverage:
needs: ["tests-unit", "tests-integration"]
runs-on: ubuntu-20.04
steps:
- name: finalize coverage report submission
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
pip install coveralls
coveralls --service=github --finish
build:
needs: ["lint", "tests-unit", "tests-integration"]
name: "build / binary"
strategy:
matrix:
os:
- ubuntu-20.04
- macos-latest
- windows-latest
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.9'
- id: os-name
uses: ASzc/change-string-case-action@v5
with:
string: ${{ runner.os }}
- name: set pip cache dir
shell: bash
run: echo "PIP_CACHE_DIR=$(pip cache dir)" >> $GITHUB_ENV
- name: extract pip cache
uses: actions/cache@v3
with:
path: ${{ env.PIP_CACHE_DIR }}
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- run: pip install pyinstaller==4.6
- run: pip install -e .
- if: startsWith(github.ref, 'refs/tags/v')
run: python docker/set_build.py
- if: startsWith(runner.os, 'linux') || startsWith(runner.os, 'mac')
name: Build & Run (Unix)
run: |
pyinstaller --onefile --name lbrynet lbry/extras/cli.py
dist/lbrynet --version
- if: startsWith(runner.os, 'windows')
name: Build & Run (Windows)
run: |
pip install pywin32==301
pyinstaller --additional-hooks-dir=scripts/. --icon=icons/lbry256.ico --onefile --name lbrynet lbry/extras/cli.py
dist/lbrynet.exe --version
- uses: actions/upload-artifact@v3
with:
name: lbrynet-${{ steps.os-name.outputs.lowercase }}
path: dist/
release:
name: "release"
if: startsWith(github.ref, 'refs/tags/v')
needs: ["build"]
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v1
- uses: actions/download-artifact@v2
- name: upload binaries
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_API_TOKEN }}
run: |
pip install githubrelease
chmod +x lbrynet-macos/lbrynet
chmod +x lbrynet-linux/lbrynet
zip --junk-paths lbrynet-mac.zip lbrynet-macos/lbrynet
zip --junk-paths lbrynet-linux.zip lbrynet-linux/lbrynet
zip --junk-paths lbrynet-windows.zip lbrynet-windows/lbrynet.exe
ls -lh
githubrelease release lbryio/lbry-sdk info ${GITHUB_REF#refs/tags/}
githubrelease asset lbryio/lbry-sdk upload ${GITHUB_REF#refs/tags/} \
lbrynet-mac.zip lbrynet-linux.zip lbrynet-windows.zip
githubrelease release lbryio/lbry-sdk publish ${GITHUB_REF#refs/tags/}

@@ -1,22 +0,0 @@
name: slack
on:
release:
types: [published]
jobs:
release:
name: "slack notification"
runs-on: ubuntu-20.04
steps:
- uses: LoveToKnow/slackify-markdown-action@v1.0.0
id: markdown
with:
text: "There is a new SDK release: ${{github.event.release.html_url}}\n${{ github.event.release.body }}"
- uses: slackapi/slack-github-action@v1.14.0
env:
CHANGELOG: '<!channel> ${{ steps.markdown.outputs.text }}'
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_RELEASE_BOT_WEBHOOK }}
with:
payload: '{"type": "mrkdwn", "text": ${{ toJSON(env.CHANGELOG) }} }'

.gitignore

@@ -1,22 +1,9 @@
/.idea
/.DS_Store
/build
/dist
/.tox
/.coverage*
/lbry-venv
/venv
/lbry/blockchain
/.idea
/.coverage
lbry.egg-info
lbrynet.egg-info
__pycache__
_trial_temp/
trending*.log
/tests/integration/claims/files
/tests/.coverage.*
/lbry/wallet/bin
/.vscode
/.gitignore

.pylintrc (new file)

@@ -0,0 +1,447 @@
[MASTER]
# Specify a configuration file.
#rcfile=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS,schema
# Add files or directories matching the regex patterns to the
# blacklist. The regex matches against base names, not paths.
# `\.#.*` - add emacs tmp files to the blacklist
ignore-patterns=\.#.*
# Pickle collected data for later comparisons.
persistent=yes
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
# Use multiple processes to speed up Pylint.
jobs=1
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code
extension-pkg-whitelist=
miniupnpc,
unqlite
# Allow optimization of some AST trees. This will activate a peephole AST
# optimizer, which will apply various small optimizations. For instance, it can
# be used to obtain the result of joining multiple strings with the addition
# operator. Joining a lot of strings can lead to a maximum recursion error in
# Pylint and this flag can prevent that. It has one side effect, the resulting
# AST will be different than the one from reality.
optimize-ast=no
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED
confidence=
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
#enable=
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once).You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use"--disable=all --enable=classes
# --disable=W"
disable=
anomalous-backslash-in-string,
arguments-differ,
attribute-defined-outside-init,
bad-continuation,
bare-except,
broad-except,
cell-var-from-loop,
consider-iterating-dictionary,
dangerous-default-value,
duplicate-code,
fixme,
global-statement,
inherit-non-class,
invalid-name,
len-as-condition,
locally-disabled,
logging-not-lazy,
missing-docstring,
no-else-return,
no-init,
no-member,
no-self-use,
protected-access,
redefined-builtin,
redefined-outer-name,
redefined-variable-type,
relative-import,
signature-differs,
super-init-not-called,
too-few-public-methods,
too-many-arguments,
too-many-branches,
too-many-instance-attributes,
too-many-lines,
too-many-locals,
too-many-nested-blocks,
too-many-public-methods,
too-many-return-statements,
too-many-statements,
trailing-newlines,
undefined-loop-variable,
ungrouped-imports,
unnecessary-lambda,
unused-argument,
unused-variable,
wildcard-import,
wrong-import-order,
wrong-import-position,
deprecated-lambda,
simplifiable-if-statement,
unidiomatic-typecheck,
global-at-module-level,
inconsistent-return-statements,
keyword-arg-before-vararg,
assignment-from-no-return,
useless-return,
assignment-from-none,
stop-iteration-return
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html. You can also give a reporter class, eg
# mypackage.mymodule.MyReporterClass.
output-format=text
# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no
# Tells whether to display a full report or only the messages
reports=no
# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details
#msg-template=
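# Illustration (not part of the original file): with 2 errors, 8 warnings,
# 0 refactors and 0 conventions over 400 statements, the evaluation above
# gives 10.0 - ((5*2 + 8 + 0 + 0) / 400) * 10 = 9.55.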
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=no
# A regular expression matching the name of dummy variables (i.e. expectedly
# not used).
dummy-variables-rgx=_$|dummy
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,_cb
[LOGGING]
# Logging modules to check that the string format arguments are in logging
# function parameter format
logging-modules=logging
[BASIC]
# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter,input
# Good variable names which should always be accepted, separated by a comma
# allow `d` as its used frequently for deferred callback chains
good-names=i,j,k,ex,Run,_,d
# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Include a hint for the correct naming format with invalid-name
include-naming-hint=no
# Regular expression matching correct function names
function-rgx=[a-z_][a-z0-9_]{2,30}$
# Naming hint for function names
function-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression matching correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$
# Naming hint for variable names
variable-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression matching correct constant names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
# Naming hint for constant names
const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$
# Regular expression matching correct attribute names
attr-rgx=[a-z_][a-z0-9_]{2,30}$
# Naming hint for attribute names
attr-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression matching correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$
# Naming hint for argument names
argument-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression matching correct class attribute names
class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
# Naming hint for class attribute names
class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
# Regular expression matching correct inline iteration names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
# Naming hint for inline iteration names
inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$
# Regular expression matching correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$
# Naming hint for class names
class-name-hint=[A-Z_][a-zA-Z0-9]+$
# Regular expression matching correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Naming hint for module names
module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression matching correct method names
method-rgx=[a-z_][a-z0-9_]{2,30}$
# Naming hint for method names
method-name-hint=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
[ELIF]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
[SPELLING]
# Spelling dictionary name. Available dictionaries: none. To make it working
# install python-enchant package.
spelling-dict=
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to indicated private dictionary in
# --spelling-private-dict-file option instead of raising a message.
spelling-store-unknown-words=no
[FORMAT]
# Maximum number of characters on a single line.
max-line-length=120
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
# List of optional constructs for which whitespace checking is disabled. `dict-
# separator` is used to allow tabulation in dicts, etc.: {1 : 1,\n222: 2}.
# `trailing-comma` allows a space between comma and closing bracket: (a, ).
# `empty-line` allows space-only lines.
no-space-check=trailing-comma,dict-separator
# Maximum number of lines in a module
max-module-lines=1000
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
[SIMILARITIES]
# Minimum lines number of a similarity.
min-similarity-lines=4
# Ignore comments when computing similarities.
ignore-comments=yes
# Ignore docstrings when computing similarities.
ignore-docstrings=yes
# Ignore imports when computing similarities.
ignore-imports=no
[TYPECHECK]
# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes
# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis. It
# supports qualified module names, as well as Unix pattern matching.
ignored-modules=twisted.internet.reactor,leveldb,distutils
# Ignoring distutils because: https://github.com/PyCQA/pylint/issues/73
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set). This supports can work
# with qualified names.
ignored-classes=twisted.internet.reactor,RequestMessage
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=lbrynet.lbrynet_daemon.LBRYDaemon.Parameters
[IMPORTS]
# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,TERMIOS,Bastion,rexec
# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=
# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=
# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=
[DESIGN]
# Maximum number of arguments for function / method
max-args=10
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branches=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
max-parents=8
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of boolean expressions in a if statement
max-bool-expr=5
[CLASSES]
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception

.travis.yml (new file)

@@ -0,0 +1,130 @@
sudo: required
dist: xenial
language: python
python: "3.7"
jobs:
include:
- stage: code quality
name: "pylint lbrynet"
install:
- pip install astroid==2.0.4 aiohttp==3.4.4
# newer astroid and aiohttp fails in pylint so we pre-install older version
- pip install pylint
- pip install git+https://github.com/lbryio/torba.git#egg=torba
- pip install -e .
script: pylint lbrynet
- &tests
stage: test
name: "Unit Tests"
install:
- pip install coverage
- pip install git+https://github.com/lbryio/torba.git#egg=torba
- pip install -e .[test]
script:
- HOME=/tmp coverage run -p --source=lbrynet -m twisted.trial --reactor=asyncio tests.unit.core tests.unit.cryptstream tests.unit.database tests.unit.dht tests.unit.lbryfilemanager tests.unit.lbrynet_daemon tests.unit.schema tests.unit.wallet tests.unit.components tests.unit.test_conf
- HOME=/tmp coverage run -p --source=lbrynet -m twisted.trial --reactor=asyncio tests.unit.test_cli
#- HOME=/tmp coverage run -p --source=lbrynet -m twisted.trial --reactor=asyncio tests.unit.analytics
after_success:
- coverage combine
- bash <(curl -s https://codecov.io/bash)
- <<: *tests
name: "DHT Tests"
script: HOME=/tmp coverage run --source=lbrynet -m twisted.trial --reactor=asyncio tests.functional
- name: "Integration Tests"
install:
- pip install tox-travis coverage
- pushd .. && git clone https://github.com/lbryio/torba.git && popd
script: tox
after_success:
- coverage combine tests/
- bash <(curl -s https://codecov.io/bash)
- stage: build
name: "Windows"
language: generic
services:
- docker
install:
- docker pull lbry/pyinstaller34_32bits:py371
script:
- python scripts/set_build.py
- docker run -v "$(pwd):/src/lbry" lbry/pyinstaller34_32bits:py371 lbry/scripts/wine_build.sh
- sudo zip -j dist/lbrynet-windows.zip dist/lbrynet.exe
deploy:
provider: releases
api_key: $GITHUB_OAUTH_TOKEN
file: dist/lbrynet-windows.zip
skip_cleanup: true
overwrite: true
draft: true
on:
tags: true
addons:
artifacts:
working_dir: dist
paths:
- lbrynet-windows.zip
target_paths:
- /daemon/build-${TRAVIS_BUILD_NUMBER}_commit-${TRAVIS_COMMIT:0:7}_branch-${TRAVIS_BRANCH}$([ ! -z ${TRAVIS_TAG} ] && echo _tag-${TRAVIS_TAG})
- &build
name: "Linux"
env: OS=linux
install:
- pip3 install pyinstaller
- pip3 install git+https://github.com/lbryio/torba.git
- python scripts/set_build.py
- pip3 install -e .
script:
- pyinstaller -F -n lbrynet lbrynet/extras/cli.py
- chmod +x dist/lbrynet
- zip -j dist/lbrynet-${OS}.zip dist/lbrynet
- ./dist/lbrynet --version
deploy:
provider: releases
api_key: $GITHUB_OAUTH_TOKEN
file: dist/lbrynet-${OS}.zip
skip_cleanup: true
overwrite: true
draft: true
on:
tags: true
addons:
artifacts:
working_dir: dist
paths:
- lbrynet-${OS}.zip
# artifact uploader thinks lbrynet is a directory, https://github.com/travis-ci/artifacts/issues/78
target_paths:
- /daemon/build-${TRAVIS_BUILD_NUMBER}_commit-${TRAVIS_COMMIT:0:7}_branch-${TRAVIS_BRANCH}$([ ! -z ${TRAVIS_TAG} ] && echo _tag-${TRAVIS_TAG})
- <<: *build
name: "Mac"
os: osx
osx_image: xcode8.3
language: generic
env: OS=mac
cache: false
before_install:
- brew upgrade python || true
- brew upgrade python || true
install:
- python3 --version
- pip3 --version
- pip3 install pyinstaller
- git clone https://github.com/lbryio/torba.git --depth 1
- sed -i -e "s/'plyvel',//" torba/setup.py
- cd torba && pip3 install -e . && cd ..
- python3 scripts/set_build.py
- pip3 install -e .
cache:
directories:
- $HOME/.cache/pip
- $HOME/Library/Caches/pip
- $TRAVIS_BUILD_DIR/.tox

File diff suppressed because it is too large.

@@ -1,3 +1,3 @@
## Contributing to LBRY
https://lbry.tech/contribute
https://lbry.io/faq/contributing

Dangerfile (new file)

@@ -0,0 +1,6 @@
# Add a CHANGELOG entry for app changes
has_app_changes = !git.modified_files.grep(/lbrynet/).empty?
if !git.modified_files.include?("CHANGELOG.md") && has_app_changes
fail("Please include a CHANGELOG entry.")
message "See http://keepachangelog.com/en/0.3.0/ for details on good changelog guidelines"
end

@@ -1,6 +1,6 @@
# Installing LBRY
If only the JSON-RPC API server is needed, the recommended way to install LBRY is to use a pre-built binary. We provide binaries for all major operating systems. See the [README](README.md)!
If only the JSON-RPC API server is needed, the recommended way to install LBRY is to use a pre-built binary. We provide binaries for all major operating systems. See the [README](README.md).
These instructions are for installing LBRY from source, which is recommended if you are interested in doing development work or LBRY is not available on your operating system (godspeed, TempleOS users).
@@ -9,47 +9,36 @@ Here's a video walkthrough of this setup, which is itself hosted by the LBRY net
## Prerequisites
Running `lbrynet` from source requires Python 3.7. Get the installer for your OS [here](https://www.python.org/downloads/release/python-370/).
Running `lbrynet` from source requires Python 3.6 or higher (3.7 is preferred). Get the installer for your OS [here](https://www.python.org/downloads/release/python-370/)
After installing Python 3.7, you'll need to install some additional libraries depending on your operating system.
After installing python 3, you'll need to install some additional libraries depending on your operating system.
### Virtualenv
Once python 3 is installed run `python3 -m pip install virtualenv` to install virtualenv.
### Windows
Windows users will need to install `Visual C++ Build Tools`, which can be installed by [Microsoft Build Tools](Microsoft Build Tools 2015)
Because of [issue #2769](https://github.com/lbryio/lbry-sdk/issues/2769)
at the moment the `lbrynet` daemon will only work correctly with Python 3.7.
If Python 3.8+ is used, the daemon will start but the RPC server
may not accept messages, returning the following:
```
Could not connect to daemon. Are you sure it's running?
```
### macOS
macOS users will need to install [xcode command line tools](https://developer.xamarin.com/guides/testcloud/calabash/configuring/osx/install-xcode-command-line-tools/) and [homebrew](http://brew.sh/).
These environment variables also need to be set:
```
PYTHONUNBUFFERED=1
EVENT_NOKQUEUE=1
```
Remaining dependencies can then be installed by running:
```
brew install python protobuf
```
Assistance installing Python3: https://docs.python-guide.org/starting/install3/osx/.
```
brew install python3 protobuf
```
### Linux
On Ubuntu (we recommend 18.04 or 20.04), install the following:
```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install build-essential python3.7 python3.7-dev git python3.7-venv libssl-dev python-protobuf
```
On Ubuntu (we recommend 18.04), install the following:
The [deadsnakes PPA](https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa) provides Python 3.7
for those Ubuntu distributions that no longer have it in their
official repositories.
```
sudo apt-get install build-essential python3.7 python3.7-dev git python-virtualenv libssl-dev python-protobuf
```
On Raspbian, you will also need to install `python-pyparsing`.
@@ -57,121 +46,32 @@ If you're running another Linux distro, install the equivalent of the above pack
## Installation
### Linux/Mac
To install:
Clone the repository:
```bash
git clone https://github.com/lbryio/lbry-sdk.git
cd lbry-sdk
```
```
git clone https://github.com/lbryio/lbry.git
cd lbry
Create a Python virtual environment for lbry-sdk:
```bash
python3.7 -m venv lbry-venv
```
virtualenv lbry-venv --python=python3.7
source lbry-venv/bin/activate
Activate virtual environment:
```bash
source lbry-venv/bin/activate
```
python --version # Python 2 is not supported. Make sure you're on Python 3.7
Make sure you're on Python 3.7+ as default in the virtual environment:
```bash
python --version
```
pip install -e .
```
Install packages:
```bash
make install
```
If you are on Linux and using PyCharm, generates initial configs:
```bash
make idea
```
To verify your installation, `which lbrynet` should return a path inside
of the `lbry-venv` folder.
```bash
(lbry-venv) $ which lbrynet
/opt/lbry-sdk/lbry-venv/bin/lbrynet
```
To exit the virtual environment simply use the command `deactivate`.
### Windows
Clone the repository:
```bash
git clone https://github.com/lbryio/lbry-sdk.git
cd lbry-sdk
```
Create a Python virtual environment for lbry-sdk:
```bash
python -m venv lbry-venv
```
Activate virtual environment:
```bash
lbry-venv\Scripts\activate
```
Install packages:
```bash
pip install -e .
```
To verify your installation, `which lbrynet` should return a path inside of the `lbry-venv` folder created by the `virtualenv` command.
## Run the tests
### Elasticsearch
For running integration tests, Elasticsearch is required to be available at localhost:9200/
The easiest way to start it is using docker with:
```bash
make elastic-docker
```
Alternative installation methods are available [at Elasticsearch website](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html).
To run the unit and integration tests from the repo directory:
```
python -m unittest discover tests.unit
python -m unittest discover tests.integration
```
To run the unit tests from the repo directory:
```
trial --reactor=asyncio tests.unit
```
## Usage
To start the API server:
```
lbrynet start
```
`lbrynet start`
Whenever the code inside [lbry-sdk/lbry](./lbry)
is modified we should run `make install` to recompile the `lbrynet`
executable with the newest code.
## Development
When developing, remember to enter the environment,
and if you wish start the server interactively.
```bash
$ source lbry-venv/bin/activate
(lbry-venv) $ python lbry/extras/cli.py start
```
Parameters can be passed in the same way.
```bash
(lbry-venv) $ python lbry/extras/cli.py wallet balance
```
If a Python debugger (`pdb` or `ipdb`) is installed we can also start it
in this way, set up break points, and step through the code.
```bash
(lbry-venv) $ pip install ipdb
(lbry-venv) $ ipdb lbry/extras/cli.py
```
Happy hacking!

@@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2015-2022 LBRY Inc
Copyright (c) 2015-2018 LBRY Inc
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish,

@@ -1,4 +0,0 @@
include README.md
include CHANGELOG.md
include LICENSE
recursive-include lbry *.txt *.py

@@ -1,26 +0,0 @@
.PHONY: install tools lint test test-unit test-unit-coverage test-integration idea
install:
pip install -e .
lint:
pylint --rcfile=setup.cfg lbry
#mypy --ignore-missing-imports lbry
test: test-unit test-integration
test-unit:
python -m unittest discover tests.unit
test-unit-coverage:
coverage run --source=lbry -m unittest discover -vv tests.unit
test-integration:
tox
idea:
mkdir -p .idea
cp -r scripts/idea/* .idea
elastic-docker:
docker run -d -v lbryhub:/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 -e"ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.12.1

@@ -1,19 +1,19 @@
# <img src="https://raw.githubusercontent.com/lbryio/lbry-sdk/master/lbry.png" alt="LBRY" width="48" height="36" /> LBRY SDK [![build](https://github.com/lbryio/lbry-sdk/actions/workflows/main.yml/badge.svg)](https://github.com/lbryio/lbry-sdk/actions/workflows/main.yml) [![coverage](https://coveralls.io/repos/github/lbryio/lbry-sdk/badge.svg)](https://coveralls.io/github/lbryio/lbry-sdk)
# <img src="https://raw.githubusercontent.com/lbryio/lbry/master/lbry.png" alt="LBRY" width="48" height="36" /> LBRY SDK [![Build Status](https://travis-ci.org/lbryio/lbry.svg?branch=master)](https://travis-ci.org/lbryio/lbry) [![Test Coverage](https://codecov.io/gh/lbryio/lbry/branch/master/graph/badge.svg)](https://codecov.io/gh/lbryio/lbry)
LBRY is a decentralized peer-to-peer protocol for publishing and accessing digital content. It utilizes the [LBRY blockchain](https://github.com/lbryio/lbrycrd) as a global namespace and database of digital content. Blockchain entries contain searchable content metadata, identities, rights and access rules. LBRY also provides a data network that consists of peers (seeders) uploading and downloading data from other peers, possibly in exchange for payments, as well as a distributed hash table used by peers to discover other peers.
LBRY is a decentralized peer-to-peer network providing distribution, discovery, and purchase of digital content (data). It utilizes the [LBRY blockchain](https://github.com/lbryio/lbrycrd) as a global namespace and database of digital content. Blockchain entries contain searchable content metadata, identities, rights and access rules. LBRY also provides a data network that consists of peers (seeders) uploading and downloading data from other peers, possibly in exchange for payments, as well as a distributed hash table used by peers to discover other peers.
LBRY SDK for Python is currently the most fully featured implementation of the LBRY Network protocols and includes many useful components and tools for building decentralized applications. Primary features and components include:
LBRY SDK for Python is currently the most full featured implementation of the LBRY Network protocols and includes many useful components and tools for building decentralized applications. Primary features and components:
* Built on Python 3.7 and `asyncio`.
* Kademlia DHT (Distributed Hash Table) implementation for finding peers to download from and announcing to peers what we have to host ([lbry.dht](https://github.com/lbryio/lbry-sdk/tree/master/lbry/dht)).
* Blob exchange protocol for transferring encrypted blobs of content and negotiating payments ([lbry.blob_exchange](https://github.com/lbryio/lbry-sdk/tree/master/lbry/blob_exchange)).
* Protobuf schema for encoding and decoding metadata stored on the blockchain ([lbry.schema](https://github.com/lbryio/lbry-sdk/tree/master/lbry/schema)).
* Wallet implementation for the LBRY blockchain ([lbry.wallet](https://github.com/lbryio/lbry-sdk/tree/master/lbry/wallet)).
* Daemon with a JSON-RPC API to ease building end user applications in any language and for automating various tasks ([lbry.extras.daemon](https://github.com/lbryio/lbry-sdk/tree/master/lbry/extras/daemon)).
* Built on Python 3.7+ and `asyncio`.
* DHT (Distributed Hash Table) implementation for finding peers ([lbrynet.dht](https://github.com/lbryio/lbry/tree/master/lbrynet/dht)).
* Blob exchange protocol for downloading content and negotiating payments ([lbrynet.blob_exchange](https://github.com/lbryio/lbry/tree/master/lbrynet/blob_exchange)).
* Protobuf schema for encoding and decoding metadata stored on the blockchain ([lbrynet.schema](https://github.com/lbryio/lbry/tree/master/lbrynet/schema)).
* Wallet implementation for the LBRY blockchain ([lbrynet.extras.wallet](https://github.com/lbryio/lbry/tree/master/lbrynet/extras/wallet)).
* Daemon with a JSON-RPC API to ease building end user applications in any language and for automating various tasks ([lbrynet.extras.daemon](https://github.com/lbryio/lbry/tree/master/lbrynet/extras/daemon)).
## Installation
Our [releases page](https://github.com/lbryio/lbry-sdk/releases) contains pre-built binaries of the latest release, pre-releases, and past releases for macOS, Debian-based Linux, and Windows. [Automated travis builds](http://build.lbry.io/daemon/) are also available for testing.
Our [releases page](https://github.com/lbryio/lbry/releases) contains pre-built binaries of the latest release, pre-releases, and past releases for macOS, Debian-based Linux, and Windows. [Automated travis builds](http://build.lbry.io/daemon/) are also available for testing.
## Usage
@@ -33,7 +33,7 @@ Installing from source is also relatively painless. Full instructions are in [IN
## Contributing
Contributions to this project are welcome, encouraged, and compensated. For more details, please check [this](https://lbry.tech/contribute) link.
Contributions to this project are welcome, encouraged, and compensated. For more details, please check [this](https://lbry.io/faq/contributing) link.
## License
@@ -41,11 +41,11 @@ This project is MIT licensed. For the full license, see [LICENSE](LICENSE).
## Security
We take security seriously. Please contact security@lbry.com regarding any security issues. [Our PGP key is here](https://lbry.com/faq/pgp-key) if you need it.
We take security seriously. Please contact security@lbry.io regarding any security issues. [Our GPG key is here](https://lbry.io/faq/gpg-key) if you need it.
## Contact
The primary contact for this project is [@eukreign](mailto:lex@lbry.com).
The primary contact for this project is [@eukreign](mailto:lex@lbry.io).
## Additional information and links
@@ -53,4 +53,4 @@ The documentation for the API can be found [here](https://lbry.tech/api/sdk).
Daemon defaults, ports, and other settings are documented [here](https://lbry.tech/resources/daemon-settings).
Settings can be configured using a daemon-settings.yml file. An example can be found [here](https://github.com/lbryio/lbry-sdk/blob/master/example_daemon_settings.yml).
Settings can be configured using a daemon-settings.yml file. An example can be found [here](https://github.com/lbryio/lbry/blob/master/example_daemon_settings.yml).

@@ -1,9 +0,0 @@
# Security Policy
## Supported Versions
While we are not at v1.0 yet, only the latest release will be supported.
## Reporting a Vulnerability
See https://lbry.com/faq/security

@@ -1,43 +0,0 @@
FROM debian:10-slim
ARG user=lbry
ARG projects_dir=/home/$user
ARG db_dir=/database
ARG DOCKER_TAG
ARG DOCKER_COMMIT=docker
ENV DOCKER_TAG=$DOCKER_TAG DOCKER_COMMIT=$DOCKER_COMMIT
RUN apt-get update && \
apt-get -y --no-install-recommends install \
wget \
automake libtool \
tar unzip \
build-essential \
pkg-config \
libleveldb-dev \
python3.7 \
python3-dev \
python3-pip \
python3-wheel \
python3-setuptools && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
rm -rf /var/lib/apt/lists/*
RUN groupadd -g 999 $user && useradd -m -u 999 -g $user $user
COPY . $projects_dir
RUN chown -R $user:$user $projects_dir
RUN mkdir -p $db_dir
RUN chown -R $user:$user $db_dir
USER $user
WORKDIR $projects_dir
RUN python3 -m pip install -U setuptools pip
RUN make install
RUN python3 docker/set_build.py
RUN rm ~/.cache -rf
VOLUME $db_dir
ENTRYPOINT ["python3", "scripts/dht_node.py"]

@@ -1,56 +0,0 @@
FROM debian:10-slim
ARG user=lbry
ARG db_dir=/database
ARG projects_dir=/home/$user
ARG DOCKER_TAG
ARG DOCKER_COMMIT=docker
ENV DOCKER_TAG=$DOCKER_TAG DOCKER_COMMIT=$DOCKER_COMMIT
RUN apt-get update && \
apt-get -y --no-install-recommends install \
wget \
tar unzip \
build-essential \
automake libtool \
pkg-config \
libleveldb-dev \
python3.7 \
python3-dev \
python3-pip \
python3-wheel \
python3-cffi \
python3-setuptools && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
rm -rf /var/lib/apt/lists/*
RUN groupadd -g 999 $user && useradd -m -u 999 -g $user $user
RUN mkdir -p $db_dir
RUN chown -R $user:$user $db_dir
COPY . $projects_dir
RUN chown -R $user:$user $projects_dir
USER $user
WORKDIR $projects_dir
RUN pip install uvloop
RUN make install
RUN python3 docker/set_build.py
RUN rm ~/.cache -rf
# entry point
ARG host=0.0.0.0
ARG tcp_port=50001
ARG daemon_url=http://lbry:lbry@localhost:9245/
VOLUME $db_dir
ENV TCP_PORT=$tcp_port
ENV HOST=$host
ENV DAEMON_URL=$daemon_url
ENV DB_DIRECTORY=$db_dir
ENV MAX_SESSIONS=1000000000
ENV MAX_SEND=1000000000000000000
ENV EVENT_LOOP_POLICY=uvloop
COPY ./docker/wallet_server_entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

@@ -1,45 +0,0 @@
FROM debian:10-slim
ARG user=lbry
ARG downloads_dir=/database
ARG projects_dir=/home/$user
ARG DOCKER_TAG
ARG DOCKER_COMMIT=docker
ENV DOCKER_TAG=$DOCKER_TAG DOCKER_COMMIT=$DOCKER_COMMIT
RUN apt-get update && \
apt-get -y --no-install-recommends install \
wget \
automake libtool \
tar unzip \
build-essential \
pkg-config \
libleveldb-dev \
python3.7 \
python3-dev \
python3-pip \
python3-wheel \
python3-setuptools && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
rm -rf /var/lib/apt/lists/*
RUN groupadd -g 999 $user && useradd -m -u 999 -g $user $user
RUN mkdir -p $downloads_dir
RUN chown -R $user:$user $downloads_dir
COPY . $projects_dir
RUN chown -R $user:$user $projects_dir
USER $user
WORKDIR $projects_dir
RUN pip install uvloop
RUN make install
RUN python3 docker/set_build.py
RUN rm ~/.cache -rf
# entry point
VOLUME $downloads_dir
COPY ./docker/webconf.yaml /webconf.yaml
ENTRYPOINT ["/home/lbry/.local/bin/lbrynet", "start", "--config=/webconf.yaml"]

@@ -1,9 +0,0 @@
### How to run with docker-compose
1. Edit config file and after that fix permissions with
```
sudo chown -R 999:999 webconf.yaml
```
2. Start SDK with
```
docker-compose up -d
```

@@ -1,49 +0,0 @@
version: "3"
volumes:
wallet_server:
es01:
services:
wallet_server:
depends_on:
- es01
image: lbry/wallet-server:${WALLET_SERVER_TAG:-latest-release}
restart: always
network_mode: host
ports:
- "50001:50001" # rpc port
- "2112:2112" # uncomment to enable prometheus
volumes:
- "wallet_server:/database"
environment:
- DAEMON_URL=http://lbry:lbry@127.0.0.1:9245
- MAX_QUERY_WORKERS=4
- CACHE_MB=1024
- CACHE_ALL_TX_HASHES=
- CACHE_ALL_CLAIM_TXOS=
- MAX_SEND=1000000000000000000
- MAX_RECEIVE=1000000000000000000
- MAX_SESSIONS=100000
- HOST=0.0.0.0
- TCP_PORT=50001
- PROMETHEUS_PORT=2112
- FILTERING_CHANNEL_IDS=770bd7ecba84fd2f7607fb15aedd2b172c2e153f 95e5db68a3101df19763f3a5182e4b12ba393ee8
- BLOCKING_CHANNEL_IDS=dd687b357950f6f271999971f43c785e8067c3a9 06871aa438032244202840ec59a469b303257cad b4a2528f436eca1bf3bf3e10ff3f98c57bd6c4c6
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
container_name: es01
environment:
- node.name=es01
- discovery.type=single-node
- indices.query.bool.max_clause_count=8192
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms4g -Xmx4g" # no more than 32, remember to disable swap
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- es01:/usr/share/elasticsearch/data
ports:
- 127.0.0.1:9200:9200

@@ -1,9 +0,0 @@
version: '3'
services:
websdk:
image: vshyba/websdk
ports:
- '5279:5279'
- '5280:5280'
volumes:
- ./webconf.yaml:/webconf.yaml

@@ -1,7 +0,0 @@
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
cd "$DIR/../.." ## make sure we're in the right place. Docker Hub screws this up sometimes
echo "docker build dir: $(pwd)"
docker build --build-arg DOCKER_TAG=$DOCKER_TAG --build-arg DOCKER_COMMIT=$SOURCE_COMMIT -f $DOCKERFILE_PATH -t $IMAGE_NAME .

@@ -1,11 +0,0 @@
# requires powershell and .NET 4+. see https://chocolatey.org/install for more info.
$chocoVersion = powershell choco -v
if(-not($chocoVersion)){
Write-Output "Chocolatey is not installed, installing now"
Write-Output "IF YOU KEEP GETTING THIS MESSAGE ON EVERY BUILD, TRY RESTARTING THE GITLAB RUNNER SO IT GETS CHOCO INTO IT'S ENV"
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
}
else{
Write-Output "Chocolatey version $chocoVersion is already installed"
}

@@ -1,44 +0,0 @@
import sys
import os
import re
import logging
import lbry.build_info as build_info_mod
log = logging.getLogger()
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)
def _check_and_set(d: dict, key: str, value: str):
try:
d[key]
except KeyError:
raise Exception(f"{key} var does not exist in {build_info_mod.__file__}")
d[key] = value
def main():
build_info = {item: build_info_mod.__dict__[item] for item in dir(build_info_mod) if not item.startswith("__")}
commit_hash = os.getenv('DOCKER_COMMIT', os.getenv('GITHUB_SHA'))
if commit_hash is None:
raise ValueError("Commit hash not found in env vars")
_check_and_set(build_info, "COMMIT_HASH", commit_hash[:6])
docker_tag = os.getenv('DOCKER_TAG')
if docker_tag:
_check_and_set(build_info, "DOCKER_TAG", docker_tag)
_check_and_set(build_info, "BUILD", "docker")
else:
if re.match(r'refs/tags/v\d+\.\d+\.\d+$', str(os.getenv('GITHUB_REF'))):
_check_and_set(build_info, "BUILD", "release")
else:
_check_and_set(build_info, "BUILD", "qa")
log.debug("build info: %s", ", ".join([f"{k}={v}" for k, v in build_info.items()]))
with open(build_info_mod.__file__, 'w') as f:
f.write("\n".join([f"{k} = \"{v}\"" for k, v in build_info.items()]) + "\n")
if __name__ == '__main__':
sys.exit(main())
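
For context, a minimal sketch of how this script is driven (the env var names and the `docker/set_build.py` path appear in the Dockerfiles and CI workflow elsewhere in this diff; the tag and commit values are made up):

```python
# Sketch: invoke docker/set_build.py the way the Docker builds above do.
import os
import subprocess

env = dict(os.environ)
env["DOCKER_TAG"] = "v0.0.1-example"    # assumed tag; marks BUILD as "docker"
env["DOCKER_COMMIT"] = "0123456789ab"   # script stores the first 6 characters
subprocess.run(["python3", "docker/set_build.py"], env=env, check=True)
```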

@@ -1,25 +0,0 @@
#!/bin/bash
# entrypoint for wallet server Docker image
set -euo pipefail
SNAPSHOT_URL="${SNAPSHOT_URL:-}" #off by default. latest snapshot at https://lbry.com/snapshot/wallet
if [[ -n "$SNAPSHOT_URL" ]] && [[ ! -f /database/lbry-leveldb ]]; then
files="$(ls)"
echo "Downloading wallet snapshot from $SNAPSHOT_URL"
wget --no-verbose --trust-server-names --content-disposition "$SNAPSHOT_URL"
echo "Extracting snapshot..."
filename="$(grep -vf <(echo "$files") <(ls))" # finds the file that was not there before
case "$filename" in
*.tgz|*.tar.gz|*.tar.bz2 ) tar xvf "$filename" --directory /database ;;
*.zip ) unzip "$filename" -d /database ;;
* ) echo "Don't know how to extract ${filename}. SNAPSHOT COULD NOT BE LOADED" && exit 1 ;;
esac
rm "$filename"
fi
/home/lbry/.local/bin/lbry-hub-elastic-sync
echo 'starting server'
/home/lbry/.local/bin/lbry-hub "$@"

@@ -1,9 +0,0 @@
allowed_origin: "*"
max_key_fee: "0.0 USD"
save_files: false
save_blobs: false
streaming_server: "0.0.0.0:5280"
api: "0.0.0.0:5279"
data_dir: /tmp
download_dir: /tmp
wallet_dir: /tmp

docs/404.html (new file)

@@ -0,0 +1,307 @@
<!DOCTYPE html>
<html lang="en" class="no-js">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width,initial-scale=1">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="lang:clipboard.copy" content="Copy to clipboard">
<meta name="lang:clipboard.copied" content="Copied to clipboard">
<meta name="lang:search.language" content="en">
<meta name="lang:search.pipeline.stopwords" content="True">
<meta name="lang:search.pipeline.trimmer" content="True">
<meta name="lang:search.result.none" content="No matching documents">
<meta name="lang:search.result.one" content="1 matching document">
<meta name="lang:search.result.other" content="# matching documents">
<meta name="lang:search.tokenizer" content="[\s\-]+">
<link rel="shortcut icon" href="/assets/images/favicon.png">
<meta name="generator" content="mkdocs-0.17.3, mkdocs-material-2.7.0">
<title>LBRY</title>
<link rel="stylesheet" href="/assets/stylesheets/application.78aab2dc.css">
<link rel="stylesheet" href="/assets/stylesheets/application-palette.6079476c.css">
<script src="/assets/javascripts/modernizr.1aa3b519.js"></script>
<link href="https://fonts.gstatic.com" rel="preconnect" crossorigin>
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,400i,700|Roboto+Mono">
<style>body,input{font-family:"Roboto","Helvetica Neue",Helvetica,Arial,sans-serif}code,kbd,pre{font-family:"Roboto Mono","Courier New",Courier,monospace}</style>
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body dir="ltr" data-md-color-primary="teal" data-md-color-accent="green">
<svg class="md-svg">
<defs>
<svg xmlns="http://www.w3.org/2000/svg" width="416" height="448"
viewBox="0 0 416 448" id="github">
<path fill="currentColor" d="M160 304q0 10-3.125 20.5t-10.75 19-18.125
8.5-18.125-8.5-10.75-19-3.125-20.5 3.125-20.5 10.75-19 18.125-8.5
18.125 8.5 10.75 19 3.125 20.5zM320 304q0 10-3.125 20.5t-10.75
19-18.125 8.5-18.125-8.5-10.75-19-3.125-20.5 3.125-20.5 10.75-19
18.125-8.5 18.125 8.5 10.75 19 3.125 20.5zM360
304q0-30-17.25-51t-46.75-21q-10.25 0-48.75 5.25-17.75 2.75-39.25
2.75t-39.25-2.75q-38-5.25-48.75-5.25-29.5 0-46.75 21t-17.25 51q0 22 8
38.375t20.25 25.75 30.5 15 35 7.375 37.25 1.75h42q20.5 0
37.25-1.75t35-7.375 30.5-15 20.25-25.75 8-38.375zM416 260q0 51.75-15.25
82.75-9.5 19.25-26.375 33.25t-35.25 21.5-42.5 11.875-42.875 5.5-41.75
1.125q-19.5 0-35.5-0.75t-36.875-3.125-38.125-7.5-34.25-12.875-30.25-20.25-21.5-28.75q-15.5-30.75-15.5-82.75
0-59.25 34-99-6.75-20.5-6.75-42.5 0-29 12.75-54.5 27 0 47.5 9.875t47.25
30.875q36.75-8.75 77.25-8.75 37 0 70 8 26.25-20.5
46.75-30.25t47.25-9.75q12.75 25.5 12.75 54.5 0 21.75-6.75 42 34 40 34
99.5z" />
</svg>
</defs>
</svg>
<input class="md-toggle" data-md-toggle="drawer" type="checkbox" id="drawer">
<input class="md-toggle" data-md-toggle="search" type="checkbox" id="search">
<label class="md-overlay" data-md-component="overlay" for="drawer"></label>
<header class="md-header" data-md-component="header">
<nav class="md-header-nav md-grid">
<div class="md-flex">
<div class="md-flex__cell md-flex__cell--shrink">
<a href="/" title="LBRY" class="md-header-nav__button md-logo">
<img src="https://s3.amazonaws.com/files.lbry.io/logo-square-white-bookonly.png" alt="LBRY logo" width="24" height="24">
</a>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<label class="md-icon md-icon--menu md-header-nav__button" for="drawer"></label>
</div>
<div class="md-flex__cell md-flex__cell--stretch">
<div class="md-flex__ellipsis md-header-nav__title" data-md-component="title">
<span class="md-header-nav__topic">
LBRY
</span>
<span class="md-header-nav__topic">
</span>
</div>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<label class="md-icon md-icon--search md-header-nav__button" for="search"></label>
<div class="md-search" data-md-component="search" role="dialog">
<label class="md-search__overlay" for="search"></label>
<div class="md-search__inner" role="search">
<form class="md-search__form" name="search">
<input type="text" class="md-search__input" name="query" placeholder="Search" autocapitalize="off" autocorrect="off" autocomplete="off" spellcheck="false" data-md-component="query" data-md-state="active">
<label class="md-icon md-search__icon" for="search"></label>
<button type="reset" class="md-icon md-search__icon" data-md-component="reset" tabindex="-1">
&#xE5CD;
</button>
</form>
<div class="md-search__output">
<div class="md-search__scrollwrap" data-md-scrollfix>
<div class="md-search-result" data-md-component="result">
<div class="md-search-result__meta">
Type to start searching
</div>
<ol class="md-search-result__list"></ol>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
<div class="md-header-nav__source">
<a href="https://github.com/lbryio/lbry/" title="Go to repository" class="md-source" data-md-source="github">
<div class="md-source__icon">
<svg viewBox="0 0 24 24" width="24" height="24">
<use xlink:href="#github" width="24" height="24"></use>
</svg>
</div>
<div class="md-source__repository">
GitHub
</div>
</a>
</div>
</div>
</div>
</nav>
</header>
<div class="md-container">
<main class="md-main">
<div class="md-main__inner md-grid" data-md-component="container">
<div class="md-sidebar md-sidebar--primary" data-md-component="navigation">
<div class="md-sidebar__scrollwrap">
<div class="md-sidebar__inner">
<nav class="md-nav md-nav--primary" data-md-level="0">
<label class="md-nav__title md-nav__title--site" for="drawer">
<span class="md-nav__button md-logo">
<img src="https://s3.amazonaws.com/files.lbry.io/logo-square-white-bookonly.png" alt="LBRY logo" width="48" height="48">
</span>
LBRY
</label>
<div class="md-nav__source">
<a href="https://github.com/lbryio/lbry/" title="Go to repository" class="md-source" data-md-source="github">
<div class="md-source__icon">
<svg viewBox="0 0 24 24" width="24" height="24">
<use xlink:href="#github" width="24" height="24"></use>
</svg>
</div>
<div class="md-source__repository">
GitHub
</div>
</a>
</div>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="/" title="API" class="md-nav__link">
API
</a>
</li>
<li class="md-nav__item">
<a href="/cli/" title="CLI" class="md-nav__link">
CLI
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<div class="md-content">
<article class="md-content__inner md-typeset">
<h1>404 - Not found</h1>
</article>
</div>
</div>
</main>
<footer class="md-footer">
<div class="md-footer-meta md-typeset">
<div class="md-footer-meta__inner md-grid">
<div class="md-footer-copyright">
powered by
<a href="http://www.mkdocs.org">MkDocs</a>
and
<a href="https://squidfunk.github.io/mkdocs-material/">
Material for MkDocs</a>
</div>
</div>
</div>
</footer>
</div>
<script src="/assets/javascripts/application.8eb9be28.js"></script>
<script>app.initialize({version:"0.17.3",url:{base:""}})</script>
<script>!function(e,a,t,n,o,c,i){e.GoogleAnalyticsObject=o,e.ga=e.ga||function(){(e.ga.q=e.ga.q||[]).push(arguments)},e.ga.l=1*new Date,c=a.createElement(t),i=a.getElementsByTagName(t)[0],c.async=1,c.src="https://www.google-analytics.com/analytics.js",i.parentNode.insertBefore(c,i)}(window,document,"script",0,"ga"),ga("create","UA-60403362-1","auto"),ga("set","anonymizeIp",!0),ga("send","pageview");var links=document.getElementsByTagName("a");if(Array.prototype.map.call(links,function(e){e.host!=document.location.host&&e.addEventListener("click",function(){var a=e.getAttribute("data-md-action")||"follow";ga("send","event","outbound",a,e.href)})}),document.forms.search){var query=document.forms.search.query;query.addEventListener("blur",function(){if(this.value){var e=document.location.pathname;ga("send","pageview",e+"?q="+this.value)}})}</script>
</body>
</html>

File diff suppressed because one or more lines are too long.

Binary file not shown (new file, 521 B).

View file

@@ -0,0 +1,20 @@
<svg xmlns="http://www.w3.org/2000/svg" width="352" height="448"
viewBox="0 0 352 448" id="bitbucket">
<path fill="currentColor" d="M203.75 214.75q2 15.75-12.625 25.25t-27.875
1.5q-9.75-4.25-13.375-14.5t-0.125-20.5 13-14.5q9-4.5 18.125-3t16 8.875
6.875 16.875zM231.5 209.5q-3.5-26.75-28.25-41t-49.25-3.25q-15.75
7-25.125 22.125t-8.625 32.375q1 22.75 19.375 38.75t41.375 14q22.75-2
38-21t12.5-42zM291.25
74q-5-6.75-14-11.125t-14.5-5.5-17.75-3.125q-72.75-11.75-141.5 0.5-10.75
1.75-16.5 3t-13.75 5.5-12.5 10.75q7.5 7 19 11.375t18.375 5.5 21.875
2.875q57 7.25 112 0.25 15.75-2 22.375-3t18.125-5.375 18.75-11.625zM305.5
332.75q-2 6.5-3.875 19.125t-3.5 21-7.125 17.5-14.5 14.125q-21.5
12-47.375 17.875t-50.5 5.5-50.375-4.625q-11.5-2-20.375-4.5t-19.125-6.75-18.25-10.875-13-15.375q-6.25-24-14.25-73l1.5-4
4.5-2.25q55.75 37 126.625 37t126.875-37q5.25 1.5 6 5.75t-1.25 11.25-2
9.25zM350.75 92.5q-6.5 41.75-27.75 163.75-1.25 7.5-6.75 14t-10.875
10-13.625 7.75q-63 31.5-152.5
22-62-6.75-98.5-34.75-3.75-3-6.375-6.625t-4.25-8.75-2.25-8.5-1.5-9.875-1.375-8.75q-2.25-12.5-6.625-37.5t-7-40.375-5.875-36.875-5.5-39.5q0.75-6.5
4.375-12.125t7.875-9.375 11.25-7.5 11.5-5.625 12-4.625q31.25-11.5
78.25-16 94.75-9.25 169 12.5 38.75 11.5 53.75 30.5 4 5 4.125
12.75t-1.375 13.5z" />
</svg>

(new file, 1.4 KiB)

View file

@@ -0,0 +1,18 @@
<svg xmlns="http://www.w3.org/2000/svg" width="416" height="448"
viewBox="0 0 416 448" id="github">
<path fill="currentColor" d="M160 304q0 10-3.125 20.5t-10.75 19-18.125
8.5-18.125-8.5-10.75-19-3.125-20.5 3.125-20.5 10.75-19 18.125-8.5
18.125 8.5 10.75 19 3.125 20.5zM320 304q0 10-3.125 20.5t-10.75
19-18.125 8.5-18.125-8.5-10.75-19-3.125-20.5 3.125-20.5 10.75-19
18.125-8.5 18.125 8.5 10.75 19 3.125 20.5zM360
304q0-30-17.25-51t-46.75-21q-10.25 0-48.75 5.25-17.75 2.75-39.25
2.75t-39.25-2.75q-38-5.25-48.75-5.25-29.5 0-46.75 21t-17.25 51q0 22 8
38.375t20.25 25.75 30.5 15 35 7.375 37.25 1.75h42q20.5 0
37.25-1.75t35-7.375 30.5-15 20.25-25.75 8-38.375zM416 260q0 51.75-15.25
82.75-9.5 19.25-26.375 33.25t-35.25 21.5-42.5 11.875-42.875 5.5-41.75
1.125q-19.5 0-35.5-0.75t-36.875-3.125-38.125-7.5-34.25-12.875-30.25-20.25-21.5-28.75q-15.5-30.75-15.5-82.75
0-59.25 34-99-6.75-20.5-6.75-42.5 0-29 12.75-54.5 27 0 47.5 9.875t47.25
30.875q36.75-8.75 77.25-8.75 37 0 70 8 26.25-20.5
46.75-30.25t47.25-9.75q12.75 25.5 12.75 54.5 0 21.75-6.75 42 34 40 34
99.5z" />
</svg>

(new file, 1.2 KiB)

View file

@@ -0,0 +1,38 @@
<svg xmlns="http://www.w3.org/2000/svg" width="500" height="500"
viewBox="0 0 500 500" id="gitlab">
<g transform="translate(156.197863, 1.160267)">
<path fill="currentColor"
d="M93.667,473.347L93.667,473.347l90.684-279.097H2.983L93.667,
473.347L93.667,473.347z" />
</g>
<g transform="translate(28.531199, 1.160800)" opacity="0.7">
<path fill="currentColor"
d="M221.333,473.345L130.649,194.25H3.557L221.333,473.345L221.333,
473.345z" />
</g>
<g transform="translate(0.088533, 0.255867)" opacity="0.5">
<path fill="currentColor"
d="M32,195.155L32,195.155L4.441,279.97c-2.513,7.735,0.24,16.21,6.821,
20.99l238.514,173.29 L32,195.155L32,195.155z" />
</g>
<g transform="translate(29.421866, 280.255593)">
<path fill="currentColor"
d="M2.667-84.844h127.092L75.14-252.942c-2.811-8.649-15.047-8.649-17.856,
0L2.667-84.844 L2.667-84.844z" />
</g>
<g transform="translate(247.197860, 1.160800)" opacity="0.7">
<path fill="currentColor"
d="M2.667,473.345L93.351,194.25h127.092L2.667,473.345L2.667,
473.345z" />
</g>
<g transform="translate(246.307061, 0.255867)" opacity="0.5">
<path fill="currentColor"
d="M221.334,195.155L221.334,195.155l27.559,84.815c2.514,7.735-0.24,
16.21-6.821,20.99 L3.557,474.25L221.334,195.155L221.334,195.155z" />
</g>
<g transform="translate(336.973725, 280.255593)">
<path fill="currentColor"
d="M130.667-84.844H3.575l54.618-168.098c2.811-8.649,15.047-8.649,
17.856,0L130.667-84.844 L130.667-84.844z" />
</g>
</svg>

(new file, 1.6 KiB)

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1 @@
!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r,i,n;e.da=function(){this.pipeline.reset(),this.pipeline.add(e.da.trimmer,e.da.stopWordFilter,e.da.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.da.stemmer))},e.da.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA--",e.da.trimmer=e.trimmerSupport.generateTrimmer(e.da.wordCharacters),e.Pipeline.registerFunction(e.da.trimmer,"trimmer-da"),e.da.stemmer=(r=e.stemmerSupport.Among,i=e.stemmerSupport.SnowballProgram,n=new function(){var e,n,t,s=[new r("hed",-1,1),new r("ethed",0,1),new r("ered",-1,1),new r("e",-1,1),new r("erede",3,1),new r("ende",3,1),new r("erende",5,1),new r("ene",3,1),new r("erne",3,1),new r("ere",3,1),new r("en",-1,1),new r("heden",10,1),new r("eren",10,1),new r("er",-1,1),new r("heder",13,1),new r("erer",13,1),new r("s",-1,2),new r("heds",16,1),new r("es",16,1),new r("endes",18,1),new r("erendes",19,1),new r("enes",18,1),new r("ernes",18,1),new r("eres",18,1),new r("ens",16,1),new r("hedens",24,1),new r("erens",24,1),new r("ers",16,1),new r("ets",16,1),new r("erets",28,1),new r("et",-1,1),new r("eret",30,1)],o=[new r("gd",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1)],a=[new r("ig",-1,1),new r("lig",0,1),new r("elig",1,1),new r("els",-1,1),new r("løst",-1,2)],d=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],u=[239,254,42,3,0,0,0,0,0,0,0,0,0,0,0,0,16],c=new i;function l(){var e,r=c.limit-c.cursor;c.cursor>=n&&(e=c.limit_backward,c.limit_backward=n,c.ket=c.cursor,c.find_among_b(o,4)?(c.bra=c.cursor,c.limit_backward=e,c.cursor=c.limit-r,c.cursor>c.limit_backward&&(c.cursor--,c.bra=c.cursor,c.slice_del())):c.limit_backward=e)}this.setCurrent=function(e){c.setCurrent(e)},this.getCurrent=function(){return c.getCurrent()},this.stem=function(){var r,i=c.cursor;return function(){var r,i=c.cursor+3;if(n=c.limit,0<=i&&i<=c.limit){for(e=i;;){if(r=c.cursor,c.in_grouping(d,97,248)){c.cursor=r;break}if(c.cursor=r,r>=c.limit)return;c.cursor++}for(;!c.out_grouping(d,97,248);){if(c.cursor>=c.limit)return;c.cursor++}(n=c.cursor)<e&&(n=e)}}(),c.limit_backward=i,c.cursor=c.limit,function(){var e,r;if(c.cursor>=n&&(r=c.limit_backward,c.limit_backward=n,c.ket=c.cursor,e=c.find_among_b(s,32),c.limit_backward=r,e))switch(c.bra=c.cursor,e){case 1:c.slice_del();break;case 2:c.in_grouping_b(u,97,229)&&c.slice_del()}}(),c.cursor=c.limit,l(),c.cursor=c.limit,function(){var e,r,i,t=c.limit-c.cursor;if(c.ket=c.cursor,c.eq_s_b(2,"st")&&(c.bra=c.cursor,c.eq_s_b(2,"ig")&&c.slice_del()),c.cursor=c.limit-t,c.cursor>=n&&(r=c.limit_backward,c.limit_backward=n,c.ket=c.cursor,e=c.find_among_b(a,5),c.limit_backward=r,e))switch(c.bra=c.cursor,e){case 1:c.slice_del(),i=c.limit-c.cursor,l(),c.cursor=c.limit-i;break;case 2:c.slice_from("løs")}}(),c.cursor=c.limit,c.cursor>=n&&(r=c.limit_backward,c.limit_backward=n,c.ket=c.cursor,c.out_grouping_b(d,97,248)?(c.bra=c.cursor,t=c.slice_to(t),c.limit_backward=r,c.eq_v_b(t)&&c.slice_del()):c.limit_backward=r),!0}},function(e){return"function"==typeof e.update?e.update(function(e){return 
n.setCurrent(e),n.stem(),n.getCurrent()}):(n.setCurrent(e),n.stem(),n.getCurrent())}),e.Pipeline.registerFunction(e.da.stemmer,"stemmer-da"),e.da.stopWordFilter=e.generateStopWordFilter("ad af alle alt anden at blev blive bliver da de dem den denne der deres det dette dig din disse dog du efter eller en end er et for fra ham han hans har havde have hende hendes her hos hun hvad hvis hvor i ikke ind jeg jer jo kunne man mange med meget men mig min mine mit mod ned noget nogle nu når og også om op os over på selv sig sin sine sit skal skulle som sådan thi til ud under var vi vil ville vor være været".split(" ")),e.Pipeline.registerFunction(e.da.stopWordFilter,"stopWordFilter-da")}});

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1 @@
!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r="2"==e.version[0];e.jp=function(){this.pipeline.reset(),this.pipeline.add(e.jp.stopWordFilter,e.jp.stemmer),r?this.tokenizer=e.jp.tokenizer:(e.tokenizer&&(e.tokenizer=e.jp.tokenizer),this.tokenizerFn&&(this.tokenizerFn=e.jp.tokenizer))};var t=new e.TinySegmenter;e.jp.tokenizer=function(n){if(!arguments.length||null==n||null==n)return[];if(Array.isArray(n))return n.map(function(t){return r?new e.Token(t.toLowerCase()):t.toLowerCase()});for(var i=n.toString().toLowerCase().replace(/^\s+/,""),o=i.length-1;o>=0;o--)if(/\S/.test(i.charAt(o))){i=i.substring(0,o+1);break}return t.segment(i).filter(function(e){return!!e}).map(function(t){return r?new e.Token(t):t})},e.jp.stemmer=function(e){return e},e.Pipeline.registerFunction(e.jp.stemmer,"stemmer-jp"),e.jp.wordCharacters="一二三四五六七八九十百千万億兆一-龠々〆ヵヶぁ-んァ-ヴーア-ン゙a-zA-Z--0-9-",e.jp.stopWordFilter=function(t){if(-1===e.jp.stopWordFilter.stopWords.indexOf(r?t.toString():t))return t},e.jp.stopWordFilter=e.generateStopWordFilter("これ それ あれ この その あの ここ そこ あそこ こちら どこ だれ なに なん 何 私 貴方 貴方方 我々 私達 あの人 あのかた 彼女 彼 です あります おります います は が の に を で え から まで より も どの と し それで しかし".split(" ")),e.Pipeline.registerFunction(e.jp.stopWordFilter,"stopWordFilter-jp")}});

View file

@@ -0,0 +1 @@
!function(e,i){"function"==typeof define&&define.amd?define(i):"object"==typeof exports?module.exports=i():i()(e.lunr)}(this,function(){return function(e){e.multiLanguage=function(){for(var i=Array.prototype.slice.call(arguments),t=i.join("-"),r="",n=[],s=[],p=0;p<i.length;++p)"en"==i[p]?(r+="\\w",n.unshift(e.stopWordFilter),n.push(e.stemmer),s.push(e.stemmer)):(r+=e[i[p]].wordCharacters,n.unshift(e[i[p]].stopWordFilter),n.push(e[i[p]].stemmer),s.push(e[i[p]].stemmer));var o=e.trimmerSupport.generateTrimmer(r);return e.Pipeline.registerFunction(o,"lunr-multi-trimmer-"+t),n.unshift(o),function(){this.pipeline.reset(),this.pipeline.add.apply(this.pipeline,n),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add.apply(this.searchPipeline,s))}}}});

View file

@@ -0,0 +1 @@
!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r,n,i;e.no=function(){this.pipeline.reset(),this.pipeline.add(e.no.trimmer,e.no.stopWordFilter,e.no.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.no.stemmer))},e.no.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA--",e.no.trimmer=e.trimmerSupport.generateTrimmer(e.no.wordCharacters),e.Pipeline.registerFunction(e.no.trimmer,"trimmer-no"),e.no.stemmer=(r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,i=new function(){var e,i,t=[new r("a",-1,1),new r("e",-1,1),new r("ede",1,1),new r("ande",1,1),new r("ende",1,1),new r("ane",1,1),new r("ene",1,1),new r("hetene",6,1),new r("erte",1,3),new r("en",-1,1),new r("heten",9,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",12,1),new r("s",-1,2),new r("as",14,1),new r("es",14,1),new r("edes",16,1),new r("endes",16,1),new r("enes",16,1),new r("hetenes",19,1),new r("ens",14,1),new r("hetens",21,1),new r("ers",14,1),new r("ets",14,1),new r("et",-1,1),new r("het",25,1),new r("ert",-1,3),new r("ast",-1,1)],o=[new r("dt",-1,-1),new r("vt",-1,-1)],s=[new r("leg",-1,1),new r("eleg",0,1),new r("ig",-1,1),new r("eig",2,1),new r("lig",2,1),new r("elig",4,1),new r("els",-1,1),new r("lov",-1,1),new r("elov",7,1),new r("slov",7,1),new r("hetslov",9,1)],a=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,48,0,128],m=[119,125,149,1],l=new n;this.setCurrent=function(e){l.setCurrent(e)},this.getCurrent=function(){return l.getCurrent()},this.stem=function(){var r,n,u,d,c=l.cursor;return function(){var r,n=l.cursor+3;if(i=l.limit,0<=n||n<=l.limit){for(e=n;;){if(r=l.cursor,l.in_grouping(a,97,248)){l.cursor=r;break}if(r>=l.limit)return;l.cursor=r+1}for(;!l.out_grouping(a,97,248);){if(l.cursor>=l.limit)return;l.cursor++}(i=l.cursor)<e&&(i=e)}}(),l.limit_backward=c,l.cursor=l.limit,function(){var e,r,n;if(l.cursor>=i&&(r=l.limit_backward,l.limit_backward=i,l.ket=l.cursor,e=l.find_among_b(t,29),l.limit_backward=r,e))switch(l.bra=l.cursor,e){case 1:l.slice_del();break;case 2:n=l.limit-l.cursor,l.in_grouping_b(m,98,122)?l.slice_del():(l.cursor=l.limit-n,l.eq_s_b(1,"k")&&l.out_grouping_b(a,97,248)&&l.slice_del());break;case 3:l.slice_from("er")}}(),l.cursor=l.limit,n=l.limit-l.cursor,l.cursor>=i&&(r=l.limit_backward,l.limit_backward=i,l.ket=l.cursor,l.find_among_b(o,2)?(l.bra=l.cursor,l.limit_backward=r,l.cursor=l.limit-n,l.cursor>l.limit_backward&&(l.cursor--,l.bra=l.cursor,l.slice_del())):l.limit_backward=r),l.cursor=l.limit,l.cursor>=i&&(d=l.limit_backward,l.limit_backward=i,l.ket=l.cursor,(u=l.find_among_b(s,11))?(l.bra=l.cursor,l.limit_backward=d,1==u&&l.slice_del()):l.limit_backward=d),!0}},function(e){return"function"==typeof e.update?e.update(function(e){return i.setCurrent(e),i.stem(),i.getCurrent()}):(i.setCurrent(e),i.stem(),i.getCurrent())}),e.Pipeline.registerFunction(e.no.stemmer,"stemmer-no"),e.no.stopWordFilter=e.generateStopWordFilter("alle at av bare begge ble blei bli blir blitt både båe da de deg dei deim deira deires dem den denne der dere deres det dette di din disse ditt du dykk dykkar då eg ein eit eitt eller elles en enn er et ett etter for fordi fra før ha hadde han 
hans har hennar henne hennes her hjå ho hoe honom hoss hossen hun hva hvem hver hvilke hvilken hvis hvor hvordan hvorfor i ikke ikkje ikkje ingen ingi inkje inn inni ja jeg kan kom korleis korso kun kunne kva kvar kvarhelst kven kvi kvifor man mange me med medan meg meget mellom men mi min mine mitt mot mykje ned no noe noen noka noko nokon nokor nokre nå når og også om opp oss over på samme seg selv si si sia sidan siden sin sine sitt sjøl skal skulle slik so som som somme somt så sånn til um upp ut uten var vart varte ved vere verte vi vil ville vore vors vort vår være være vært å".split(" ")),e.Pipeline.registerFunction(e.no.stopWordFilter,"stopWordFilter-no")}});

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1 @@
!function(r,t){"function"==typeof define&&define.amd?define(t):"object"==typeof exports?module.exports=t():t()(r.lunr)}(this,function(){return function(r){r.stemmerSupport={Among:function(r,t,i,s){if(this.toCharArray=function(r){for(var t=r.length,i=new Array(t),s=0;s<t;s++)i[s]=r.charCodeAt(s);return i},!r&&""!=r||!t&&0!=t||!i)throw"Bad Among initialisation: s:"+r+", substring_i: "+t+", result: "+i;this.s_size=r.length,this.s=this.toCharArray(r),this.substring_i=t,this.result=i,this.method=s},SnowballProgram:function(){var r;return{bra:0,ket:0,limit:0,cursor:0,limit_backward:0,setCurrent:function(t){r=t,this.cursor=0,this.limit=t.length,this.limit_backward=0,this.bra=this.cursor,this.ket=this.limit},getCurrent:function(){var t=r;return r=null,t},in_grouping:function(t,i,s){if(this.cursor<this.limit){var e=r.charCodeAt(this.cursor);if(e<=s&&e>=i&&t[(e-=i)>>3]&1<<(7&e))return this.cursor++,!0}return!1},in_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e<=s&&e>=i&&t[(e-=i)>>3]&1<<(7&e))return this.cursor--,!0}return!1},out_grouping:function(t,i,s){if(this.cursor<this.limit){var e=r.charCodeAt(this.cursor);if(e>s||e<i)return this.cursor++,!0;if(!(t[(e-=i)>>3]&1<<(7&e)))return this.cursor++,!0}return!1},out_grouping_b:function(t,i,s){if(this.cursor>this.limit_backward){var e=r.charCodeAt(this.cursor-1);if(e>s||e<i)return this.cursor--,!0;if(!(t[(e-=i)>>3]&1<<(7&e)))return this.cursor--,!0}return!1},eq_s:function(t,i){if(this.limit-this.cursor<t)return!1;for(var s=0;s<t;s++)if(r.charCodeAt(this.cursor+s)!=i.charCodeAt(s))return!1;return this.cursor+=t,!0},eq_s_b:function(t,i){if(this.cursor-this.limit_backward<t)return!1;for(var s=0;s<t;s++)if(r.charCodeAt(this.cursor-t+s)!=i.charCodeAt(s))return!1;return this.cursor-=t,!0},find_among:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o<h?o:h,_=t[a],m=l;m<_.s_size;m++){if(n+l==u){f=-1;break}if(f=r.charCodeAt(n+l)-_.s[m])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){if(o>=(_=t[s]).s_size){if(this.cursor=n+_.s_size,!_.method)return _.result;var b=_.method();if(this.cursor=n+_.s_size,b)return _.result}if((s=_.substring_i)<0)return 0}},find_among_b:function(t,i){for(var s=0,e=i,n=this.cursor,u=this.limit_backward,o=0,h=0,c=!1;;){for(var a=s+(e-s>>1),f=0,l=o<h?o:h,_=(m=t[a]).s_size-1-l;_>=0;_--){if(n-l==u){f=-1;break}if(f=r.charCodeAt(n-1-l)-m.s[_])break;l++}if(f<0?(e=a,h=l):(s=a,o=l),e-s<=1){if(s>0||e==s||c)break;c=!0}}for(;;){var m;if(o>=(m=t[s]).s_size){if(this.cursor=n-m.s_size,!m.method)return m.result;var b=m.method();if(this.cursor=n-m.s_size,b)return m.result}if((s=m.substring_i)<0)return 0}},replace_s:function(t,i,s){var e=s.length-(i-t),n=r.substring(0,t),u=r.substring(i);return r=n+s+u,this.limit+=e,this.cursor>=i?this.cursor+=e:this.cursor>t&&(this.cursor=t),e},slice_check:function(){if(this.bra<0||this.bra>this.ket||this.ket>this.limit||this.limit>r.length)throw"faulty slice operation"},slice_from:function(r){this.slice_check(),this.replace_s(this.bra,this.ket,r)},slice_del:function(){this.slice_from("")},insert:function(r,t,i){var s=this.replace_s(r,t,i);r<=this.bra&&(this.bra+=s),r<=this.ket&&(this.ket+=s)},slice_to:function(){return this.slice_check(),r.substring(this.bra,this.ket)},eq_v_b:function(r){return this.eq_s_b(r.length,r)}}}},r.trimmerSupport={generateTrimmer:function(r){var t=new RegExp("^[^"+r+"]+"),i=new RegExp("[^"+r+"]+$");return function(r){return"function"==typeof 
r.update?r.update(function(r){return r.replace(t,"").replace(i,"")}):r.replace(t,"").replace(i,"")}}}}});

View file

@@ -0,0 +1 @@
!function(e,r){"function"==typeof define&&define.amd?define(r):"object"==typeof exports?module.exports=r():r()(e.lunr)}(this,function(){return function(e){if(void 0===e)throw new Error("Lunr is not present. Please include / require Lunr before this script.");if(void 0===e.stemmerSupport)throw new Error("Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.");var r,n,t;e.sv=function(){this.pipeline.reset(),this.pipeline.add(e.sv.trimmer,e.sv.stopWordFilter,e.sv.stemmer),this.searchPipeline&&(this.searchPipeline.reset(),this.searchPipeline.add(e.sv.stemmer))},e.sv.wordCharacters="A-Za-zªºÀ-ÖØ-öø-ʸˠ-ˤᴀ-ᴥᴬ-ᵜᵢ-ᵥᵫ-ᵷᵹ-ᶾḀ-ỿⁱⁿₐ-ₜKÅℲⅎⅠ-ↈⱠ-ⱿꜢ-ꞇꞋ-ꞭꞰ-ꞷꟷ-ꟿꬰ-ꭚꭜ-ꭤff-stA--",e.sv.trimmer=e.trimmerSupport.generateTrimmer(e.sv.wordCharacters),e.Pipeline.registerFunction(e.sv.trimmer,"trimmer-sv"),e.sv.stemmer=(r=e.stemmerSupport.Among,n=e.stemmerSupport.SnowballProgram,t=new function(){var e,t,i=[new r("a",-1,1),new r("arna",0,1),new r("erna",0,1),new r("heterna",2,1),new r("orna",0,1),new r("ad",-1,1),new r("e",-1,1),new r("ade",6,1),new r("ande",6,1),new r("arne",6,1),new r("are",6,1),new r("aste",6,1),new r("en",-1,1),new r("anden",12,1),new r("aren",12,1),new r("heten",12,1),new r("ern",-1,1),new r("ar",-1,1),new r("er",-1,1),new r("heter",18,1),new r("or",-1,1),new r("s",-1,2),new r("as",21,1),new r("arnas",22,1),new r("ernas",22,1),new r("ornas",22,1),new r("es",21,1),new r("ades",26,1),new r("andes",26,1),new r("ens",21,1),new r("arens",29,1),new r("hetens",29,1),new r("erns",21,1),new r("at",-1,1),new r("andet",-1,1),new r("het",-1,1),new r("ast",-1,1)],s=[new r("dd",-1,-1),new r("gd",-1,-1),new r("nn",-1,-1),new r("dt",-1,-1),new r("gt",-1,-1),new r("kt",-1,-1),new r("tt",-1,-1)],a=[new r("ig",-1,1),new r("lig",0,1),new r("els",-1,1),new r("fullt",-1,3),new r("löst",-1,2)],o=[17,65,16,1,0,0,0,0,0,0,0,0,0,0,0,0,24,0,32],u=[119,127,149],m=new n;this.setCurrent=function(e){m.setCurrent(e)},this.getCurrent=function(){return m.getCurrent()},this.stem=function(){var r,n=m.cursor;return function(){var r,n=m.cursor+3;if(t=m.limit,0<=n||n<=m.limit){for(e=n;;){if(r=m.cursor,m.in_grouping(o,97,246)){m.cursor=r;break}if(m.cursor=r,m.cursor>=m.limit)return;m.cursor++}for(;!m.out_grouping(o,97,246);){if(m.cursor>=m.limit)return;m.cursor++}(t=m.cursor)<e&&(t=e)}}(),m.limit_backward=n,m.cursor=m.limit,function(){var e,r=m.limit_backward;if(m.cursor>=t&&(m.limit_backward=t,m.cursor=m.limit,m.ket=m.cursor,e=m.find_among_b(i,37),m.limit_backward=r,e))switch(m.bra=m.cursor,e){case 1:m.slice_del();break;case 2:m.in_grouping_b(u,98,121)&&m.slice_del()}}(),m.cursor=m.limit,r=m.limit_backward,m.cursor>=t&&(m.limit_backward=t,m.cursor=m.limit,m.find_among_b(s,7)&&(m.cursor=m.limit,m.ket=m.cursor,m.cursor>m.limit_backward&&(m.bra=--m.cursor,m.slice_del())),m.limit_backward=r),m.cursor=m.limit,function(){var e,r;if(m.cursor>=t){if(r=m.limit_backward,m.limit_backward=t,m.cursor=m.limit,m.ket=m.cursor,e=m.find_among_b(a,5))switch(m.bra=m.cursor,e){case 1:m.slice_del();break;case 2:m.slice_from("lös");break;case 3:m.slice_from("full")}m.limit_backward=r}}(),!0}},function(e){return"function"==typeof e.update?e.update(function(e){return t.setCurrent(e),t.stem(),t.getCurrent()}):(t.setCurrent(e),t.stem(),t.getCurrent())}),e.Pipeline.registerFunction(e.sv.stemmer,"stemmer-sv"),e.sv.stopWordFilter=e.generateStopWordFilter("alla allt att av blev bli blir blivit de dem den denna deras dess dessa det detta dig din dina ditt du där då efter ej eller en er era ert ett från för ha hade han 
hans har henne hennes hon honom hur här i icke ingen inom inte jag ju kan kunde man med mellan men mig min mina mitt mot mycket ni nu när någon något några och om oss på samma sedan sig sin sina sitta själv skulle som så sådan sådana sådant till under upp ut utan vad var vara varför varit varje vars vart vem vi vid vilka vilkas vilken vilket vår våra vårt än är åt över".split(" ")),e.Pipeline.registerFunction(e.sv.stopWordFilter,"stopWordFilter-sv")}});

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

docs/cli/index.html (new file, 2589 lines)

File diff suppressed because it is too large

docs/index.html (new file, 2315 lines)

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View file

@@ -4,22 +4,20 @@
share_usage_data: True
lbryum_servers:
- lbryumx1.lbry.com:50001
- lbryumx2.lbry.com:50001
- lbryumx4.lbry.com:50001
- lbryumx1.lbry.io:50001
- lbryumx2.lbry.io:50001
blockchain_name: lbrycrd_main
data_dir: /home/lbry/.lbrynet
download_directory: /home/lbry/downloads
save_blobs: true
save_files: false
delete_blobs_on_remove: True
dht_node_port: 4444
peer_port: 3333
use_upnp: true
use_upnp: True
#components_to_skip:
# - peer_protocol_server
# - hash_announcer
# - blob_server
# - dht

View file

@@ -1,2 +0,0 @@
__version__ = "0.113.0"
version = tuple(map(int, __version__.split('.'))) # pylint: disable=invalid-name
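# e.g. "0.113.0" parses to the tuple (0, 113, 0), so releases can be compared
# as plain tuples: version >= (0, 112, 0)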

View file

@@ -1,6 +0,0 @@
from lbry.utils import get_lbry_hash_obj
MAX_BLOB_SIZE = 2 * 2 ** 20
# digest_size is in bytes, and blob hashes are hex encoded
BLOBHASH_LENGTH = get_lbry_hash_obj().digest_size * 2
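# with sha384 this works out to a 48-byte digest, i.e. 96 hex characters;
# MAX_BLOB_SIZE is 2 MiB (2097152 bytes)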

View file

@@ -1,366 +0,0 @@
import os
import re
import time
import asyncio
import binascii
import logging
import typing
import contextlib
from io import BytesIO
from cryptography.hazmat.primitives.ciphers import Cipher, modes
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.padding import PKCS7
from cryptography.hazmat.backends import default_backend
from lbry.utils import get_lbry_hash_obj
from lbry.error import DownloadCancelledError, InvalidBlobHashError, InvalidDataError
from lbry.blob import MAX_BLOB_SIZE, BLOBHASH_LENGTH
from lbry.blob.blob_info import BlobInfo
from lbry.blob.writer import HashBlobWriter
log = logging.getLogger(__name__)
HEXMATCH = re.compile("^[a-f0-9]+$")
BACKEND = default_backend()
def is_valid_blobhash(blobhash: str) -> bool:
"""Checks whether the blobhash is the correct length and contains only
valid characters (0-9, a-f)
@param blobhash: string, the blobhash to check
@return: True/False
"""
return len(blobhash) == BLOBHASH_LENGTH and HEXMATCH.match(blobhash) is not None
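# Illustrative check (not part of the original module): a hex-encoded sha384
# digest is 96 lowercase hex characters.
assert is_valid_blobhash("ab" * 48)
assert not is_valid_blobhash("not-a-valid-hash")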
def encrypt_blob_bytes(key: bytes, iv: bytes, unencrypted: bytes) -> typing.Tuple[bytes, str]:
cipher = Cipher(AES(key), modes.CBC(iv), backend=BACKEND)
padder = PKCS7(AES.block_size).padder()
encryptor = cipher.encryptor()
encrypted = encryptor.update(padder.update(unencrypted) + padder.finalize()) + encryptor.finalize()
digest = get_lbry_hash_obj()
digest.update(encrypted)
return encrypted, digest.hexdigest()
def decrypt_blob_bytes(data: bytes, length: int, key: bytes, iv: bytes) -> bytes:
if len(data) != length:
raise ValueError("unexpected length")
cipher = Cipher(AES(key), modes.CBC(iv), backend=BACKEND)
unpadder = PKCS7(AES.block_size).unpadder()
decryptor = cipher.decryptor()
return unpadder.update(decryptor.update(data) + decryptor.finalize()) + unpadder.finalize()
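# Round-trip sketch (not part of the original module; the zeroed key and IV
# are invented): encrypt_blob_bytes and decrypt_blob_bytes invert each other.
_key, _iv = bytes(16), bytes(16)
_encrypted, _ = encrypt_blob_bytes(_key, _iv, b"plaintext")
assert decrypt_blob_bytes(_encrypted, len(_encrypted), _key, _iv) == b"plaintext"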
class AbstractBlob:
"""
A chunk of data (up to 2 MB) available on the network, identified by the sha384 hash of its contents.
This base class is I/O-agnostic; subclasses decide where the bytes live.
"""
__slots__ = [
'loop',
'blob_hash',
'length',
'blob_completed_callback',
'blob_directory',
'writers',
'verified',
'writing',
'readers',
'added_on',
'is_mine',
]
def __init__(
self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False,
):
self.loop = loop
self.blob_hash = blob_hash
self.length = length
self.blob_completed_callback = blob_completed_callback
self.blob_directory = blob_directory
self.writers: typing.Dict[typing.Tuple[typing.Optional[str], typing.Optional[int]], HashBlobWriter] = {}
self.verified: asyncio.Event = asyncio.Event()
self.writing: asyncio.Event = asyncio.Event()
self.readers: typing.List[typing.BinaryIO] = []
self.added_on = added_on or time.time()
self.is_mine = is_mine
if not is_valid_blobhash(blob_hash):
raise InvalidBlobHashError(blob_hash)
def __del__(self):
if self.writers or self.readers:
log.warning("%s not closed before being garbage collected", self.blob_hash)
self.close()
@contextlib.contextmanager
def _reader_context(self) -> typing.ContextManager[typing.BinaryIO]:
raise NotImplementedError()
@contextlib.contextmanager
def reader_context(self) -> typing.ContextManager[typing.BinaryIO]:
if not self.is_readable():
raise OSError(f"{str(type(self))} not readable, {len(self.readers)} readers {len(self.writers)} writers")
with self._reader_context() as reader:
try:
self.readers.append(reader)
yield reader
finally:
if reader in self.readers:
self.readers.remove(reader)
def _write_blob(self, blob_bytes: bytes) -> asyncio.Task:
raise NotImplementedError()
def set_length(self, length) -> None:
if self.length is not None and length == self.length:
return
if self.length is None and 0 <= length <= MAX_BLOB_SIZE:
self.length = length
return
log.warning("Got an invalid length. Previous length: %s, Invalid length: %s", self.length, length)
def get_length(self) -> typing.Optional[int]:
return self.length
def get_is_verified(self) -> bool:
return self.verified.is_set()
def is_readable(self) -> bool:
return self.verified.is_set()
def is_writeable(self) -> bool:
return not self.writing.is_set()
def write_blob(self, blob_bytes: bytes):
if not self.is_writeable():
raise OSError("cannot open blob for writing")
try:
self.writing.set()
self._write_blob(blob_bytes)
finally:
self.writing.clear()
def close(self):
while self.writers:
_, writer = self.writers.popitem()
if writer and writer.finished and not writer.finished.done() and not self.loop.is_closed():
writer.finished.cancel()
while self.readers:
reader = self.readers.pop()
if reader:
reader.close()
def delete(self):
self.close()
self.verified.clear()
self.length = None
async def sendfile(self, writer: asyncio.StreamWriter) -> int:
"""
Read and send the file to the writer and return the number of bytes sent
"""
if not self.is_readable():
raise OSError('blob files cannot be read')
with self.reader_context() as handle:
try:
return await self.loop.sendfile(writer.transport, handle, count=self.get_length())
except (ConnectionError, BrokenPipeError, RuntimeError, OSError, AttributeError):
return -1
def decrypt(self, key: bytes, iv: bytes) -> bytes:
"""
Decrypt a BlobFile to plaintext bytes
"""
with self.reader_context() as reader:
return decrypt_blob_bytes(reader.read(), self.length, key, iv)
@classmethod
async def create_from_unencrypted(
cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes,
unencrypted: bytes, blob_num: int, added_on: int, is_mine: bool,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], None]] = None,
) -> BlobInfo:
"""
Create an encrypted BlobFile from plaintext bytes
"""
blob_bytes, blob_hash = encrypt_blob_bytes(key, iv, unencrypted)
length = len(blob_bytes)
blob = cls(loop, blob_hash, length, blob_completed_callback, blob_dir, added_on, is_mine)
writer = blob.get_blob_writer()
writer.write(blob_bytes)
await blob.verified.wait()
return BlobInfo(blob_num, length, binascii.hexlify(iv).decode(), added_on, blob_hash, is_mine)
def save_verified_blob(self, verified_bytes: bytes):
if self.verified.is_set():
return
def update_events(_):
self.verified.set()
self.writing.clear()
if self.is_writeable():
self.writing.set()
task = self._write_blob(verified_bytes)
task.add_done_callback(update_events)
if self.blob_completed_callback:
task.add_done_callback(lambda _: self.blob_completed_callback(self))
def get_blob_writer(self, peer_address: typing.Optional[str] = None,
peer_port: typing.Optional[int] = None) -> HashBlobWriter:
if (peer_address, peer_port) in self.writers and not self.writers[(peer_address, peer_port)].closed():
raise OSError(f"attempted to download blob twice from {peer_address}:{peer_port}")
fut = asyncio.Future()
writer = HashBlobWriter(self.blob_hash, self.get_length, fut)
self.writers[(peer_address, peer_port)] = writer
def remove_writer(_):
if (peer_address, peer_port) in self.writers:
del self.writers[(peer_address, peer_port)]
fut.add_done_callback(remove_writer)
def writer_finished_callback(finished: asyncio.Future):
try:
err = finished.exception()
if err:
raise err
verified_bytes = finished.result()
while self.writers:
_, other = self.writers.popitem()
if other is not writer:
other.close_handle()
self.save_verified_blob(verified_bytes)
except (InvalidBlobHashError, InvalidDataError) as error:
log.warning("writer error downloading %s: %s", self.blob_hash[:8], str(error))
except (DownloadCancelledError, asyncio.CancelledError, asyncio.TimeoutError):
pass
fut.add_done_callback(writer_finished_callback)
return writer
class BlobBuffer(AbstractBlob):
"""
An in-memory only blob
"""
def __init__(
self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False
):
self._verified_bytes: typing.Optional[BytesIO] = None
super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory, added_on, is_mine)
@contextlib.contextmanager
def _reader_context(self) -> typing.ContextManager[typing.BinaryIO]:
if not self.is_readable():
raise OSError("cannot open blob for reading")
try:
yield self._verified_bytes
finally:
if self._verified_bytes:
self._verified_bytes.close()
self._verified_bytes = None
self.verified.clear()
def _write_blob(self, blob_bytes: bytes):
async def write():
if self._verified_bytes:
raise OSError("already have bytes for blob")
self._verified_bytes = BytesIO(blob_bytes)
return self.loop.create_task(write())
def delete(self):
if self._verified_bytes:
self._verified_bytes.close()
self._verified_bytes = None
return super().delete()
def __del__(self):
super().__del__()
if self._verified_bytes:
self.delete()
class BlobFile(AbstractBlob):
"""
A blob existing on the local file system
"""
def __init__(
self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False
):
super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory, added_on, is_mine)
if not blob_directory or not os.path.isdir(blob_directory):
raise OSError(f"invalid blob directory '{blob_directory}'")
self.file_path = os.path.join(self.blob_directory, self.blob_hash)
if self.file_exists:
file_size = int(os.stat(self.file_path).st_size)
if length and length != file_size:
log.warning("expected %s to be %s bytes, file has %s", self.blob_hash, length, file_size)
self.delete()
else:
self.length = file_size
self.verified.set()
@property
def file_exists(self):
return os.path.isfile(self.file_path)
def is_writeable(self) -> bool:
return super().is_writeable() and not os.path.isfile(self.file_path)
def get_blob_writer(self, peer_address: typing.Optional[str] = None,
peer_port: typing.Optional[int] = None) -> HashBlobWriter:
if self.file_exists:
raise OSError(f"File already exists '{self.file_path}'")
return super().get_blob_writer(peer_address, peer_port)
@contextlib.contextmanager
def _reader_context(self) -> typing.ContextManager[typing.BinaryIO]:
handle = open(self.file_path, 'rb')
try:
yield handle
finally:
handle.close()
def _write_blob(self, blob_bytes: bytes):
def _write_blob():
with open(self.file_path, 'wb') as f:
f.write(blob_bytes)
async def write_blob():
await self.loop.run_in_executor(None, _write_blob)
return self.loop.create_task(write_blob())
def delete(self):
super().delete()
if os.path.isfile(self.file_path):
os.remove(self.file_path)
@classmethod
async def create_from_unencrypted(
cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes,
unencrypted: bytes, blob_num: int, added_on: float, is_mine: bool,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None
) -> BlobInfo:
if not blob_dir or not os.path.isdir(blob_dir):
raise OSError(f"cannot create blob in directory: '{blob_dir}'")
return await super().create_from_unencrypted(
loop, blob_dir, key, iv, unencrypted, blob_num, added_on, is_mine, blob_completed_callback
)
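# Hypothetical end-to-end sketch (the directory name and plaintext are
# invented; not part of the original module): create an encrypted, verified
# BlobFile from plaintext bytes.
async def _example(blob_dir: str) -> BlobInfo:
    loop = asyncio.get_running_loop()
    key, iv = os.urandom(16), os.urandom(16)
    return await BlobFile.create_from_unencrypted(
        loop, blob_dir, key, iv, b"some plaintext",
        blob_num=0, added_on=time.time(), is_mine=True
    )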

View file

@@ -1,32 +0,0 @@
import typing
class BlobInfo:
__slots__ = [
'blob_hash',
'blob_num',
'length',
'iv',
'added_on',
'is_mine'
]
def __init__(
self, blob_num: int, length: int, iv: str, added_on,
blob_hash: typing.Optional[str] = None, is_mine=False):
self.blob_hash = blob_hash
self.blob_num = blob_num
self.length = length
self.iv = iv
self.added_on = added_on
self.is_mine = is_mine
def as_dict(self) -> typing.Dict:
d = {
'length': self.length,
'blob_num': self.blob_num,
'iv': self.iv,
}
if self.blob_hash: # non-terminator blobs have a blob hash
d['blob_hash'] = self.blob_hash
return d
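# Illustrative serialization (values invented): added_on and is_mine are
# local bookkeeping and are deliberately left out of the wire dict.
_info = BlobInfo(blob_num=0, length=1024, iv="00" * 16, added_on=0, blob_hash="ab" * 48)
assert _info.as_dict() == {"length": 1024, "blob_num": 0, "iv": "00" * 16, "blob_hash": "ab" * 48}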

View file

@@ -1,148 +0,0 @@
import os
import typing
import asyncio
import logging
from lbry.utils import LRUCacheWithMetrics
from lbry.blob.blob_file import is_valid_blobhash, BlobFile, BlobBuffer, AbstractBlob
from lbry.stream.descriptor import StreamDescriptor
from lbry.connection_manager import ConnectionManager
if typing.TYPE_CHECKING:
from lbry.conf import Config
from lbry.dht.protocol.data_store import DictDataStore
from lbry.extras.daemon.storage import SQLiteStorage
log = logging.getLogger(__name__)
class BlobManager:
def __init__(self, loop: asyncio.AbstractEventLoop, blob_dir: str, storage: 'SQLiteStorage', config: 'Config',
node_data_store: typing.Optional['DictDataStore'] = None):
"""
This class stores blobs on the hard disk
blob_dir - directory where blobs are stored
storage - SQLiteStorage object
"""
self.loop = loop
self.blob_dir = blob_dir
self.storage = storage
self._node_data_store = node_data_store
self.completed_blob_hashes: typing.Set[str] = set() if not self._node_data_store\
else self._node_data_store.completed_blobs
self.blobs: typing.Dict[str, AbstractBlob] = {}
self.config = config
self.decrypted_blob_lru_cache = None if not self.config.blob_lru_cache_size else LRUCacheWithMetrics(
self.config.blob_lru_cache_size)
self.connection_manager = ConnectionManager(loop)
def _get_blob(self, blob_hash: str, length: typing.Optional[int] = None, is_mine: bool = False):
if self.config.save_blobs or (
is_valid_blobhash(blob_hash) and os.path.isfile(os.path.join(self.blob_dir, blob_hash))):
return BlobFile(
self.loop, blob_hash, length, self.blob_completed, self.blob_dir, is_mine=is_mine
)
return BlobBuffer(
self.loop, blob_hash, length, self.blob_completed, self.blob_dir, is_mine=is_mine
)
def get_blob(self, blob_hash, length: typing.Optional[int] = None, is_mine: bool = False):
if blob_hash in self.blobs:
if self.config.save_blobs and isinstance(self.blobs[blob_hash], BlobBuffer):
buffer = self.blobs.pop(blob_hash)
if blob_hash in self.completed_blob_hashes:
self.completed_blob_hashes.remove(blob_hash)
self.blobs[blob_hash] = self._get_blob(blob_hash, length, is_mine)
if buffer.is_readable():
with buffer.reader_context() as reader:
self.blobs[blob_hash].write_blob(reader.read())
if length and self.blobs[blob_hash].length is None:
self.blobs[blob_hash].set_length(length)
else:
self.blobs[blob_hash] = self._get_blob(blob_hash, length, is_mine)
return self.blobs[blob_hash]
def is_blob_verified(self, blob_hash: str, length: typing.Optional[int] = None) -> bool:
if not is_valid_blobhash(blob_hash):
raise ValueError(blob_hash)
if not os.path.isfile(os.path.join(self.blob_dir, blob_hash)):
return False
if blob_hash in self.blobs:
return self.blobs[blob_hash].get_is_verified()
return self._get_blob(blob_hash, length).get_is_verified()
async def setup(self) -> bool:
def get_files_in_blob_dir() -> typing.Set[str]:
if not self.blob_dir:
return set()
return {
item.name for item in os.scandir(self.blob_dir) if is_valid_blobhash(item.name)
}
in_blobfiles_dir = await self.loop.run_in_executor(None, get_files_in_blob_dir)
to_add = await self.storage.sync_missing_blobs(in_blobfiles_dir)
if to_add:
self.completed_blob_hashes.update(to_add)
# check blobs that aren't set as finished but were seen on disk
await self.ensure_completed_blobs_status(in_blobfiles_dir - to_add)
if self.config.track_bandwidth:
self.connection_manager.start()
return True
def stop(self):
self.connection_manager.stop()
while self.blobs:
_, blob = self.blobs.popitem()
blob.close()
self.completed_blob_hashes.clear()
def get_stream_descriptor(self, sd_hash):
return StreamDescriptor.from_stream_descriptor_blob(self.loop, self.blob_dir, self.get_blob(sd_hash))
def blob_completed(self, blob: AbstractBlob) -> asyncio.Task:
if blob.blob_hash is None:
raise Exception("Blob hash is None")
if not blob.length:
raise Exception("Blob has a length of 0")
if isinstance(blob, BlobFile):
if blob.blob_hash not in self.completed_blob_hashes:
self.completed_blob_hashes.add(blob.blob_hash)
return self.loop.create_task(self.storage.add_blobs(
(blob.blob_hash, blob.length, blob.added_on, blob.is_mine), finished=True)
)
else:
return self.loop.create_task(self.storage.add_blobs(
(blob.blob_hash, blob.length, blob.added_on, blob.is_mine), finished=False)
)
async def ensure_completed_blobs_status(self, blob_hashes: typing.Iterable[str]):
"""Ensures that completed blobs from a given list of blob hashes are set as 'finished' in the database."""
to_add = []
for blob_hash in blob_hashes:
if not self.is_blob_verified(blob_hash):
continue
blob = self.get_blob(blob_hash)
to_add.append((blob.blob_hash, blob.length, blob.added_on, blob.is_mine))
if len(to_add) > 500:
await self.storage.add_blobs(*to_add, finished=True)
to_add.clear()
return await self.storage.add_blobs(*to_add, finished=True)
def delete_blob(self, blob_hash: str):
if not is_valid_blobhash(blob_hash):
raise Exception("invalid blob hash to delete")
if blob_hash not in self.blobs:
if self.blob_dir and os.path.isfile(os.path.join(self.blob_dir, blob_hash)):
os.remove(os.path.join(self.blob_dir, blob_hash))
else:
self.blobs.pop(blob_hash).delete()
if blob_hash in self.completed_blob_hashes:
self.completed_blob_hashes.remove(blob_hash)
async def delete_blobs(self, blob_hashes: typing.List[str], delete_from_db: typing.Optional[bool] = True):
for blob_hash in blob_hashes:
self.delete_blob(blob_hash)
if delete_from_db:
await self.storage.delete_blobs_from_db(blob_hashes)
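# Hypothetical read path (a constructed BlobManager is assumed): fetch or
# create the handle, wait until the bytes are verified, then read them.
async def _read_verified(blob_manager: BlobManager, blob_hash: str) -> bytes:
    blob = blob_manager.get_blob(blob_hash)
    await blob.verified.wait()
    with blob.reader_context() as reader:
        return reader.read()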

View file

@@ -1,77 +0,0 @@
import asyncio
import logging
log = logging.getLogger(__name__)
class DiskSpaceManager:
def __init__(self, config, db, blob_manager, cleaning_interval=30 * 60, analytics=None):
self.config = config
self.db = db
self.blob_manager = blob_manager
self.cleaning_interval = cleaning_interval
self.running = False
self.task = None
self.analytics = analytics
self._used_space_bytes = None
async def get_free_space_mb(self, is_network_blob=False):
limit_mb = self.config.network_storage_limit if is_network_blob else self.config.blob_storage_limit
space_used_mb = await self.get_space_used_mb()
space_used_mb = space_used_mb['network_storage'] if is_network_blob else space_used_mb['content_storage']
return max(0, limit_mb - space_used_mb)
async def get_space_used_bytes(self):
self._used_space_bytes = await self.db.get_stored_blob_disk_usage()
return self._used_space_bytes
async def get_space_used_mb(self, cached=True):
cached = cached and self._used_space_bytes is not None
space_used_bytes = self._used_space_bytes if cached else await self.get_space_used_bytes()
return {key: int(value/1024.0/1024.0) for key, value in space_used_bytes.items()}
async def clean(self):
await self._clean(False)
await self._clean(True)
async def _clean(self, is_network_blob=False):
space_used_mb = await self.get_space_used_mb(cached=False)
if is_network_blob:
space_used_mb = space_used_mb['network_storage']
else:
space_used_mb = space_used_mb['content_storage'] + space_used_mb['private_storage']
storage_limit_mb = self.config.network_storage_limit if is_network_blob else self.config.blob_storage_limit
if self.analytics:
asyncio.create_task(
self.analytics.send_disk_space_used(space_used_mb, storage_limit_mb, is_network_blob)
)
delete = []
available = storage_limit_mb - space_used_mb
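# content storage: skip cleaning when no limit is set (0 means unlimited);
# network storage: skip as soon as usage is already within the limit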
if storage_limit_mb == 0 if not is_network_blob else available >= 0:
return 0
for blob_hash, file_size, _ in await self.db.get_stored_blobs(is_mine=False, is_network_blob=is_network_blob):
delete.append(blob_hash)
available += int(file_size/1024.0/1024.0)
if available >= 0:
break
if delete:
await self.db.stop_all_files()
await self.blob_manager.delete_blobs(delete, delete_from_db=True)
self._used_space_bytes = None
return len(delete)
async def cleaning_loop(self):
while self.running:
await asyncio.sleep(self.cleaning_interval)
await self.clean()
async def start(self):
self.running = True
self.task = asyncio.create_task(self.cleaning_loop())
self.task.add_done_callback(lambda _: log.info("Stopping blob cleanup service."))
async def stop(self):
if self.running:
self.running = False
self.task.cancel()
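# Hypothetical wiring inside an async context (config, db and blob_manager
# are assumed constructed elsewhere): start the periodic cleaner, then query
# the remaining headroom for content and network storage.
async def _run_cleaner(config, db, blob_manager):
    manager = DiskSpaceManager(config, db, blob_manager)
    await manager.start()
    content_mb = await manager.get_free_space_mb()
    network_mb = await manager.get_free_space_mb(is_network_blob=True)
    return content_mb, network_mb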

View file

@@ -1,68 +0,0 @@
import typing
import logging
import asyncio
from io import BytesIO
from lbry.error import InvalidBlobHashError, InvalidDataError
from lbry.utils import get_lbry_hash_obj
log = logging.getLogger(__name__)
class HashBlobWriter:
def __init__(self, expected_blob_hash: str, get_length: typing.Callable[[], int],
finished: asyncio.Future):
self.expected_blob_hash = expected_blob_hash
self.get_length = get_length
self.buffer = BytesIO()
self.finished = finished
self.finished.add_done_callback(lambda *_: self.close_handle())
self._hashsum = get_lbry_hash_obj()
self.len_so_far = 0
def __del__(self):
if self.buffer is not None:
log.warning("Garbage collection was called, but writer was not closed yet")
self.close_handle()
def calculate_blob_hash(self) -> str:
return self._hashsum.hexdigest()
def closed(self):
return self.buffer is None or self.buffer.closed
def write(self, data: bytes):
expected_length = self.get_length()
if not expected_length:
raise OSError("unknown blob length")
if self.buffer is None:
log.warning("writer has already been closed")
if not self.finished.done():
self.finished.cancel()
return
raise OSError('I/O operation on closed file')
self._hashsum.update(data)
self.len_so_far += len(data)
if self.len_so_far > expected_length:
self.finished.set_exception(InvalidDataError(
f'Length so far is greater than the expected length. {self.len_so_far} to {expected_length}'
))
self.close_handle()
return
self.buffer.write(data)
if self.len_so_far == expected_length:
blob_hash = self.calculate_blob_hash()
if blob_hash != self.expected_blob_hash:
self.finished.set_exception(InvalidBlobHashError(
f"blob hash is {blob_hash} vs expected {self.expected_blob_hash}"
))
elif self.finished and not (self.finished.done() or self.finished.cancelled()):
self.finished.set_result(self.buffer.getvalue())
self.close_handle()
def close_handle(self):
if not self.finished.done():
self.finished.cancel()
if self.buffer is not None:
self.buffer.close()
self.buffer = None
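# Contract sketch (payload invented; run inside an event loop): feeding bytes
# whose sha384 digest matches the expected hash resolves `finished` with them.
async def _example():
    payload = b"x" * 100
    digest = get_lbry_hash_obj()
    digest.update(payload)
    finished = asyncio.Future()
    writer = HashBlobWriter(digest.hexdigest(), lambda: len(payload), finished)
    writer.write(payload)
    assert (await finished) == payload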

View file

@@ -1,255 +0,0 @@
import asyncio
import time
import logging
import typing
import binascii
from typing import Optional
from lbry.error import InvalidBlobHashError, InvalidDataError
from lbry.blob_exchange.serialization import BlobResponse, BlobRequest
from lbry.utils import cache_concurrent
if typing.TYPE_CHECKING:
from lbry.blob.blob_file import AbstractBlob
from lbry.blob.writer import HashBlobWriter
from lbry.connection_manager import ConnectionManager
log = logging.getLogger(__name__)
class BlobExchangeClientProtocol(asyncio.Protocol):
def __init__(self, loop: asyncio.AbstractEventLoop, peer_timeout: typing.Optional[float] = 10,
connection_manager: typing.Optional['ConnectionManager'] = None):
self.loop = loop
self.peer_port: typing.Optional[int] = None
self.peer_address: typing.Optional[str] = None
self.transport: typing.Optional[asyncio.Transport] = None
self.peer_timeout = peer_timeout
self.connection_manager = connection_manager
self.writer: typing.Optional['HashBlobWriter'] = None
self.blob: typing.Optional['AbstractBlob'] = None
self._blob_bytes_received = 0
self._response_fut: typing.Optional[asyncio.Future] = None
self.buf = b''
# this is here to handle the race when the downloader is closed right as response_fut gets a result
self.closed = asyncio.Event()
def data_received(self, data: bytes):
if self.connection_manager:
if not self.peer_address:
addr_info = self.transport.get_extra_info('peername')
self.peer_address, self.peer_port = addr_info
# assert self.peer_address is not None
self.connection_manager.received_data(f"{self.peer_address}:{self.peer_port}", len(data))
if not self.transport or self.transport.is_closing():
log.warning("transport closing, but got more bytes from %s:%i\n%s", self.peer_address, self.peer_port,
binascii.hexlify(data))
if self._response_fut and not self._response_fut.done():
self._response_fut.cancel()
return
if not self._response_fut:
log.warning("Protocol received data before expected, probable race on keep alive. Closing transport.")
return self.close()
if self._blob_bytes_received and not self.writer.closed():
return self._write(data)
response = BlobResponse.deserialize(self.buf + data)
if not response.responses and not self._response_fut.done():
self.buf += data
return
else:
self.buf = b''
if response.responses and self.blob:
blob_response = response.get_blob_response()
if blob_response and not blob_response.error and blob_response.blob_hash == self.blob.blob_hash:
# set the expected length for the incoming blob if we didn't know it
self.blob.set_length(blob_response.length)
elif blob_response and not blob_response.error and self.blob.blob_hash != blob_response.blob_hash:
# the server started sending a blob we didn't request
log.warning("%s started sending blob we didn't request %s instead of %s", self.peer_address,
blob_response.blob_hash, self.blob.blob_hash)
return
if response.responses:
log.debug("got response from %s:%i <- %s", self.peer_address, self.peer_port, response.to_dict())
# fire the Future with the response to our request
self._response_fut.set_result(response)
if response.blob_data and self.writer and not self.writer.closed():
# log.debug("got %i blob bytes from %s:%i", len(response.blob_data), self.peer_address, self.peer_port)
# write blob bytes if we're writing a blob and have blob bytes to write
self._write(response.blob_data)
def _write(self, data: bytes):
if len(data) > (self.blob.get_length() - self._blob_bytes_received):
data = data[:(self.blob.get_length() - self._blob_bytes_received)]
log.warning("got more than asked from %s:%d, probable sendfile bug", self.peer_address, self.peer_port)
self._blob_bytes_received += len(data)
try:
self.writer.write(data)
except OSError as err:
log.error("error downloading blob from %s:%i: %s", self.peer_address, self.peer_port, err)
if self._response_fut and not self._response_fut.done():
self._response_fut.set_exception(err)
except asyncio.TimeoutError as err:
log.error("%s downloading blob from %s:%i", str(err), self.peer_address, self.peer_port)
if self._response_fut and not self._response_fut.done():
self._response_fut.set_exception(err)
async def _download_blob(self) -> typing.Tuple[int, Optional['BlobExchangeClientProtocol']]: # pylint: disable=too-many-return-statements
"""
:return: number of bytes received (int), connected protocol (BlobExchangeClientProtocol) or None
"""
start_time = time.perf_counter()
request = BlobRequest.make_request_for_blob_hash(self.blob.blob_hash)
blob_hash = self.blob.blob_hash
if not self.peer_address:
addr_info = self.transport.get_extra_info('peername')
self.peer_address, self.peer_port = addr_info
try:
msg = request.serialize()
log.debug("send request to %s:%i -> %s", self.peer_address, self.peer_port, msg.decode())
self.transport.write(msg)
if self.connection_manager:
self.connection_manager.sent_data(f"{self.peer_address}:{self.peer_port}", len(msg))
response: BlobResponse = await asyncio.wait_for(self._response_fut, self.peer_timeout)
availability_response = response.get_availability_response()
price_response = response.get_price_response()
blob_response = response.get_blob_response()
if self.closed.is_set():
msg = f"cancelled blob request for {blob_hash} immediately after we got a response"
log.warning(msg)
raise asyncio.CancelledError(msg)
if (not blob_response or blob_response.error) and\
(not availability_response or not availability_response.available_blobs):
log.warning("%s not in availability response from %s:%i", self.blob.blob_hash, self.peer_address,
self.peer_port)
log.warning(response.to_dict())
return self._blob_bytes_received, self.close()
elif availability_response and availability_response.available_blobs and \
availability_response.available_blobs != [self.blob.blob_hash]:
log.warning("blob availability response doesn't match our request from %s:%i",
self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
elif not availability_response:
log.warning("response from %s:%i did not include an availability response (we requested %s)",
self.peer_address, self.peer_port, blob_hash)
return self._blob_bytes_received, self.close()
if not price_response or price_response.blob_data_payment_rate != 'RATE_ACCEPTED':
log.warning("data rate rejected by %s:%i", self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
if not blob_response or blob_response.error:
log.warning("blob can't be downloaded from %s:%i", self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
if not blob_response.error and blob_response.blob_hash != self.blob.blob_hash:
log.warning("incoming blob hash mismatch from %s:%i", self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
if self.blob.length is not None and self.blob.length != blob_response.length:
log.warning("incoming blob unexpected length from %s:%i", self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
msg = f"downloading {self.blob.blob_hash[:8]} from {self.peer_address}:{self.peer_port}," \
f" timeout in {self.peer_timeout}"
log.debug(msg)
msg = f"downloaded {self.blob.blob_hash[:8]} from {self.peer_address}:{self.peer_port}"
await asyncio.wait_for(self.writer.finished, self.peer_timeout)
# wait for the io to finish
await self.blob.verified.wait()
log.info("%s at %fMB/s", msg,
round((float(self._blob_bytes_received) /
float(time.perf_counter() - start_time)) / 1000000.0, 2))
# await self.blob.finished_writing.wait() not necessary, but a dangerous change. TODO: is it needed?
return self._blob_bytes_received, self
except asyncio.TimeoutError:
return self._blob_bytes_received, self.close()
except (InvalidBlobHashError, InvalidDataError):
log.warning("invalid blob from %s:%i", self.peer_address, self.peer_port)
return self._blob_bytes_received, self.close()
def close(self):
self.closed.set()
if self._response_fut and not self._response_fut.done():
self._response_fut.cancel()
if self.writer and not self.writer.closed():
self.writer.close_handle()
self._response_fut = None
self.writer = None
self.blob = None
if self.transport:
self.transport.close()
self.transport = None
self.buf = b''
async def download_blob(self, blob: 'AbstractBlob') -> typing.Tuple[int, Optional['BlobExchangeClientProtocol']]:
self.closed.clear()
blob_hash = blob.blob_hash
if blob.get_is_verified() or not blob.is_writeable():
return 0, self
try:
self._blob_bytes_received = 0
self.blob, self.writer = blob, blob.get_blob_writer(self.peer_address, self.peer_port)
self._response_fut = asyncio.Future()
return await self._download_blob()
except OSError:
# i'm not sure how to fix this race condition - jack
log.warning("race happened downloading %s from %s:%s", blob_hash, self.peer_address, self.peer_port)
# return self._blob_bytes_received, self.transport
raise
except asyncio.TimeoutError:
if self._response_fut and not self._response_fut.done():
self._response_fut.cancel()
self.close()
return self._blob_bytes_received, None
except asyncio.CancelledError:
self.close()
raise
finally:
if self.writer and not self.writer.closed():
self.writer.close_handle()
self.writer = None
def connection_made(self, transport: asyncio.Transport):
addr = transport.get_extra_info('peername')
self.peer_address, self.peer_port = addr[0], addr[1]
self.transport = transport
if self.connection_manager:
self.connection_manager.connection_made(f"{self.peer_address}:{self.peer_port}")
log.debug("connection made to %s:%i", self.peer_address, self.peer_port)
def connection_lost(self, exc):
if self.connection_manager:
self.connection_manager.outgoing_connection_lost(f"{self.peer_address}:{self.peer_port}")
log.debug("connection lost to %s:%i (reason: %s, %s)", self.peer_address, self.peer_port, str(exc),
str(type(exc)))
self.close()
@cache_concurrent
async def request_blob(loop: asyncio.AbstractEventLoop, blob: Optional['AbstractBlob'], address: str,
tcp_port: int, peer_connect_timeout: float, blob_download_timeout: float,
connected_protocol: Optional['BlobExchangeClientProtocol'] = None,
connection_id: int = 0, connection_manager: Optional['ConnectionManager'] = None)\
-> typing.Tuple[int, Optional['BlobExchangeClientProtocol']]:
"""
Returns [<amount of bytes received>, <client protocol if connected>]
"""
protocol = connected_protocol
if not connected_protocol or not connected_protocol.transport or connected_protocol.transport.is_closing():
connected_protocol = None
protocol = BlobExchangeClientProtocol(
loop, blob_download_timeout, connection_manager
)
else:
log.debug("reusing connection for %s:%d", address, tcp_port)
try:
if not connected_protocol:
await asyncio.wait_for(loop.create_connection(lambda: protocol, address, tcp_port),
peer_connect_timeout)
connected_protocol = protocol
if blob is None or blob.get_is_verified() or not blob.is_writeable():
# blob is None happens when we are just opening a connection
# the file exists but is not verified, meaning someone is writing it right now; give it time and come back later
return 0, connected_protocol
return await connected_protocol.download_blob(blob)
except (asyncio.TimeoutError, ConnectionRefusedError, ConnectionAbortedError, OSError):
return 0, None
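# A minimal usage sketch, added for illustration (not part of the original file).
# Assumes an event loop and an AbstractBlob from a BlobManager; the address and
# port below are placeholders.
async def probe_and_download(loop, blob, address='203.0.113.5', tcp_port=4444):
    bytes_received, proto = await request_blob(
        loop, blob, address, tcp_port,
        peer_connect_timeout=3.0, blob_download_timeout=30.0
    )
    if proto is not None:
        proto.close()  # or keep it to reuse later via connected_protocol=
    return bytes_received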


@ -1,141 +0,0 @@
import asyncio
import typing
import logging
from lbry.utils import cache_concurrent
from lbry.blob_exchange.client import request_blob
from lbry.dht.node import get_kademlia_peers_from_hosts
if typing.TYPE_CHECKING:
from lbry.conf import Config
from lbry.dht.node import Node
from lbry.dht.peer import KademliaPeer
from lbry.blob.blob_manager import BlobManager
from lbry.blob.blob_file import AbstractBlob
from lbry.blob_exchange.client import BlobExchangeClientProtocol
log = logging.getLogger(__name__)
class BlobDownloader:
BAN_FACTOR = 2.0 # fixme: when connection manager gets implemented, move it out from here
def __init__(self, loop: asyncio.AbstractEventLoop, config: 'Config', blob_manager: 'BlobManager',
peer_queue: asyncio.Queue):
self.loop = loop
self.config = config
self.blob_manager = blob_manager
self.peer_queue = peer_queue
self.active_connections: typing.Dict['KademliaPeer', asyncio.Task] = {} # active request_blob calls
self.ignored: typing.Dict['KademliaPeer', int] = {}
self.scores: typing.Dict['KademliaPeer', int] = {}
self.failures: typing.Dict['KademliaPeer', int] = {}
self.connection_failures: typing.Set['KademliaPeer'] = set()
self.connections: typing.Dict['KademliaPeer', 'BlobExchangeClientProtocol'] = {}
self.is_running = asyncio.Event()
def should_race_continue(self, blob: 'AbstractBlob'):
max_probes = self.config.max_connections_per_download * (1 if self.connections else 10)
if len(self.active_connections) >= max_probes:
return False
return not (blob.get_is_verified() or not blob.is_writeable())
async def request_blob_from_peer(self, blob: 'AbstractBlob', peer: 'KademliaPeer', connection_id: int = 0,
just_probe: bool = False):
if blob.get_is_verified():
return
start = self.loop.time()
bytes_received, protocol = await request_blob(
self.loop, blob if not just_probe else None, peer.address, peer.tcp_port, self.config.peer_connect_timeout,
self.config.blob_download_timeout, connected_protocol=self.connections.get(peer),
connection_id=connection_id, connection_manager=self.blob_manager.connection_manager
)
if not bytes_received and not protocol and peer not in self.connection_failures:
self.connection_failures.add(peer)
if not protocol and peer not in self.ignored:
self.ignored[peer] = self.loop.time()
log.debug("drop peer %s:%i", peer.address, peer.tcp_port)
self.failures[peer] = self.failures.get(peer, 0) + 1
if peer in self.connections:
del self.connections[peer]
elif protocol:
log.debug("keep peer %s:%i", peer.address, peer.tcp_port)
self.failures[peer] = 0
self.connections[peer] = protocol
elapsed = self.loop.time() - start
self.scores[peer] = bytes_received / elapsed if bytes_received and elapsed else 1
async def new_peer_or_finished(self):
active_tasks = list(self.active_connections.values()) + [asyncio.create_task(asyncio.sleep(1))]
await asyncio.wait(active_tasks, return_when='FIRST_COMPLETED')
def cleanup_active(self):
if not self.active_connections and not self.connections:
self.clearbanned()
to_remove = [peer for (peer, task) in self.active_connections.items() if task.done()]
for peer in to_remove:
del self.active_connections[peer]
def clearbanned(self):
now = self.loop.time()
self.ignored = {
peer: when for (peer, when) in self.ignored.items()
if (now - when) < min(30.0, (self.failures.get(peer, 0) ** self.BAN_FACTOR))
}
@cache_concurrent
async def download_blob(self, blob_hash: str, length: typing.Optional[int] = None,
connection_id: int = 0) -> 'AbstractBlob':
blob = self.blob_manager.get_blob(blob_hash, length)
if blob.get_is_verified():
return blob
self.is_running.set()
try:
while not blob.get_is_verified() and self.is_running.is_set():
batch: typing.Set['KademliaPeer'] = set(self.connections.keys())
while not self.peer_queue.empty():
batch.update(self.peer_queue.get_nowait())
log.debug(
"%s running, %d peers, %d ignored, %d active, %s connections", blob_hash[:6],
len(batch), len(self.ignored), len(self.active_connections), len(self.connections)
)
for peer in sorted(batch, key=lambda peer: self.scores.get(peer, 0), reverse=True):
if peer in self.ignored:
continue
if peer in self.active_connections or not self.should_race_continue(blob):
continue
log.debug("request %s from %s:%i", blob_hash[:8], peer.address, peer.tcp_port)
t = self.loop.create_task(self.request_blob_from_peer(blob, peer, connection_id))
self.active_connections[peer] = t
self.peer_queue.put_nowait(list(batch))
await self.new_peer_or_finished()
self.cleanup_active()
log.debug("downloaded %s", blob_hash[:8])
return blob
finally:
blob.close()
if self.loop.is_running():
self.loop.call_soon(self.cleanup_active)
def close(self):
self.connection_failures.clear()
self.scores.clear()
self.ignored.clear()
self.is_running.clear()
for protocol in self.connections.values():
protocol.close()
async def download_blob(loop, config: 'Config', blob_manager: 'BlobManager', dht_node: 'Node',
blob_hash: str) -> 'AbstractBlob':
search_queue = asyncio.Queue(maxsize=config.max_connections_per_download)
search_queue.put_nowait(blob_hash)
peer_queue, accumulate_task = dht_node.accumulate_peers(search_queue)
fixed_peers = None if not config.fixed_peers else await get_kademlia_peers_from_hosts(config.fixed_peers)
if fixed_peers:
loop.call_later(config.fixed_peer_delay, peer_queue.put_nowait, fixed_peers)
downloader = BlobDownloader(loop, config, blob_manager, peer_queue)
try:
return await downloader.download_blob(blob_hash)
finally:
if accumulate_task and not accumulate_task.done():
accumulate_task.cancel()
downloader.close()
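# Hedged usage sketch (an addition, not original code): fetching a single blob
# end to end with the module-level helper above. Assumes an initialized Config,
# BlobManager and a joined DHT Node; the 96-character hex hash is a placeholder.
async def fetch_one(loop, config, blob_manager, dht_node):
    blob = await download_blob(loop, config, blob_manager, dht_node, 'ff' * 48)
    return blob.get_is_verified()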


@ -1,282 +0,0 @@
import typing
import json
import logging
log = logging.getLogger(__name__)
class BlobMessage:
key = ''
def to_dict(self) -> typing.Dict:
raise NotImplementedError()
class BlobPriceRequest(BlobMessage):
key = 'blob_data_payment_rate'
def __init__(self, blob_data_payment_rate: float, **kwargs) -> None:
self.blob_data_payment_rate = blob_data_payment_rate
def to_dict(self) -> typing.Dict:
return {
self.key: self.blob_data_payment_rate
}
class BlobPriceResponse(BlobMessage):
key = 'blob_data_payment_rate'
rate_accepted = 'RATE_ACCEPTED'
rate_too_low = 'RATE_TOO_LOW'
rate_unset = 'RATE_UNSET'
def __init__(self, blob_data_payment_rate: str, **kwargs) -> None:
if blob_data_payment_rate not in (self.rate_accepted, self.rate_too_low, self.rate_unset):
raise ValueError(blob_data_payment_rate)
self.blob_data_payment_rate = blob_data_payment_rate
def to_dict(self) -> typing.Dict:
return {
self.key: self.blob_data_payment_rate
}
class BlobAvailabilityRequest(BlobMessage):
key = 'requested_blobs'
def __init__(self, requested_blobs: typing.List[str], lbrycrd_address: typing.Optional[bool] = True,
**kwargs) -> None:
assert len(requested_blobs) > 0
self.requested_blobs = requested_blobs
self.lbrycrd_address = lbrycrd_address
def to_dict(self) -> typing.Dict:
return {
self.key: self.requested_blobs,
'lbrycrd_address': self.lbrycrd_address
}
class BlobAvailabilityResponse(BlobMessage):
key = 'available_blobs'
def __init__(self, available_blobs: typing.List[str], lbrycrd_address: typing.Optional[str] = True,
**kwargs) -> None:
self.available_blobs = available_blobs
self.lbrycrd_address = lbrycrd_address
def to_dict(self) -> typing.Dict:
d = {
self.key: self.available_blobs
}
if self.lbrycrd_address:
d['lbrycrd_address'] = self.lbrycrd_address
return d
class BlobDownloadRequest(BlobMessage):
key = 'requested_blob'
def __init__(self, requested_blob: str, **kwargs) -> None:
self.requested_blob = requested_blob
def to_dict(self) -> typing.Dict:
return {
self.key: self.requested_blob
}
class BlobDownloadResponse(BlobMessage):
key = 'incoming_blob'
def __init__(self, **response: typing.Dict) -> None:
incoming_blob = response[self.key]
self.error = None
self.incoming_blob = None
if 'error' in incoming_blob:
self.error = incoming_blob['error']
else:
self.incoming_blob = {'blob_hash': incoming_blob['blob_hash'], 'length': incoming_blob['length']}
self.length = None if not self.incoming_blob else self.incoming_blob['length']
self.blob_hash = None if not self.incoming_blob else self.incoming_blob['blob_hash']
def to_dict(self) -> typing.Dict:
return {
self.key if not self.error else 'error': self.incoming_blob or self.error,
}
class BlobPaymentAddressRequest(BlobMessage):
key = 'lbrycrd_address'
def __init__(self, lbrycrd_address: str, **kwargs) -> None:
self.lbrycrd_address = lbrycrd_address
def to_dict(self) -> typing.Dict:
return {
self.key: self.lbrycrd_address
}
class BlobPaymentAddressResponse(BlobPaymentAddressRequest):
pass
class BlobErrorResponse(BlobMessage):
key = 'error'
def __init__(self, error: str, **kwargs) -> None:
self.error = error
def to_dict(self) -> typing.Dict:
return {
self.key: self.error
}
blob_request_types = typing.Union[BlobPriceRequest, BlobAvailabilityRequest, BlobDownloadRequest, # pylint: disable=invalid-name
BlobPaymentAddressRequest]
blob_response_types = typing.Union[BlobPriceResponse, BlobAvailabilityResponse, BlobDownloadResponse, # pylint: disable=invalid-name
BlobErrorResponse, BlobPaymentAddressResponse]
def _parse_blob_response(response_msg: bytes) -> typing.Tuple[typing.Optional[typing.Dict], bytes]:
# scenarios:
# <json>
# <blob bytes>
# <json><blob bytes>
curr_pos = 0
while True:
next_close_paren = response_msg.find(b'}', curr_pos)
if next_close_paren == -1:
return None, response_msg
curr_pos = next_close_paren + 1
try:
response = json.loads(response_msg[:curr_pos])
except ValueError:
continue
possible_response_keys = {
BlobPaymentAddressResponse.key,
BlobAvailabilityResponse.key,
BlobPriceResponse.key,
BlobDownloadResponse.key
}
if isinstance(response, dict) and response.keys():
if set(response.keys()).issubset(possible_response_keys):
return response, response_msg[curr_pos:]
return None, response_msg
class BlobRequest:
def __init__(self, requests: typing.List[blob_request_types]) -> None:
self.requests = requests
def to_dict(self):
d = {}
for request in self.requests:
d.update(request.to_dict())
return d
def _get_request(self, request_type: blob_request_types):
request = tuple(filter(lambda r: type(r) == request_type, self.requests)) # pylint: disable=unidiomatic-typecheck
if request:
return request[0]
def get_availability_request(self) -> typing.Optional[BlobAvailabilityRequest]:
response = self._get_request(BlobAvailabilityRequest)
if response:
return response
def get_price_request(self) -> typing.Optional[BlobPriceRequest]:
response = self._get_request(BlobPriceRequest)
if response:
return response
def get_blob_request(self) -> typing.Optional[BlobDownloadRequest]:
response = self._get_request(BlobDownloadRequest)
if response:
return response
def get_address_request(self) -> typing.Optional[BlobPaymentAddressRequest]:
response = self._get_request(BlobPaymentAddressRequest)
if response:
return response
def serialize(self) -> bytes:
return json.dumps(self.to_dict()).encode()
@classmethod
def deserialize(cls, data: bytes) -> 'BlobRequest':
request = json.loads(data)
return cls([
request_type(**request)
for request_type in (BlobPriceRequest, BlobAvailabilityRequest, BlobDownloadRequest,
BlobPaymentAddressRequest)
if request_type.key in request
])
@classmethod
def make_request_for_blob_hash(cls, blob_hash: str) -> 'BlobRequest':
return cls(
[BlobAvailabilityRequest([blob_hash]), BlobPriceRequest(0.0), BlobDownloadRequest(blob_hash)]
)
class BlobResponse:
def __init__(self, responses: typing.List[blob_response_types], blob_data: typing.Optional[bytes] = None) -> None:
self.responses = responses
self.blob_data = blob_data
def to_dict(self):
d = {}
for response in self.responses:
d.update(response.to_dict())
return d
def _get_response(self, response_type: blob_response_types):
response = tuple(filter(lambda r: type(r) == response_type, self.responses)) # pylint: disable=unidiomatic-typecheck
if response:
return response[0]
def get_error_response(self) -> typing.Optional[BlobErrorResponse]:
error = self._get_response(BlobErrorResponse)
if error:
log.error(error)
return error
def get_availability_response(self) -> typing.Optional[BlobAvailabilityResponse]:
response = self._get_response(BlobAvailabilityResponse)
if response:
return response
def get_price_response(self) -> typing.Optional[BlobPriceResponse]:
response = self._get_response(BlobPriceResponse)
if response:
return response
def get_blob_response(self) -> typing.Optional[BlobDownloadResponse]:
response = self._get_response(BlobDownloadResponse)
if response:
return response
def get_address_response(self) -> typing.Optional[BlobPaymentAddressResponse]:
response = self._get_response(BlobPaymentAddressResponse)
if response:
return response
def serialize(self) -> bytes:
return json.dumps(self.to_dict()).encode()
@classmethod
def deserialize(cls, data: bytes) -> 'BlobResponse':
response, extra = _parse_blob_response(data)
requests = []
if response:
requests.extend([
response_type(**response)
for response_type in (BlobPriceResponse, BlobAvailabilityResponse, BlobDownloadResponse,
BlobErrorResponse, BlobPaymentAddressResponse)
if response_type.key in response
])
return cls(requests, extra)
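# Illustrative round trip, added for clarity (not part of the original file):
# the combined request built by make_request_for_blob_hash survives
# serialize()/deserialize() intact.
def _serialization_example():
    blob_hash = 'ff' * 48  # placeholder; real blob hashes are 96 hex characters
    request = BlobRequest.make_request_for_blob_hash(blob_hash)
    decoded = BlobRequest.deserialize(request.serialize())
    assert decoded.get_blob_request().requested_blob == blob_hash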


@ -1,194 +0,0 @@
import asyncio
import binascii
import logging
import socket
import typing
from json.decoder import JSONDecodeError
from lbry.blob_exchange.serialization import BlobResponse, BlobRequest, blob_response_types
from lbry.blob_exchange.serialization import BlobAvailabilityResponse, BlobPriceResponse, BlobDownloadResponse, \
BlobPaymentAddressResponse
if typing.TYPE_CHECKING:
from lbry.blob.blob_manager import BlobManager
log = logging.getLogger(__name__)
# a standard request will be 295 bytes
MAX_REQUEST_SIZE = 1200
class BlobServerProtocol(asyncio.Protocol):
def __init__(self, loop: asyncio.AbstractEventLoop, blob_manager: 'BlobManager', lbrycrd_address: str,
idle_timeout: float = 30.0, transfer_timeout: float = 60.0):
self.loop = loop
self.blob_manager = blob_manager
self.idle_timeout = idle_timeout
self.transfer_timeout = transfer_timeout
self.server_task: typing.Optional[asyncio.Task] = None
self.started_listening = asyncio.Event()
self.buf = b''
self.transport: typing.Optional[asyncio.Transport] = None
self.lbrycrd_address = lbrycrd_address
self.peer_address_and_port: typing.Optional[str] = None
self.started_transfer = asyncio.Event()
self.transfer_finished = asyncio.Event()
self.close_on_idle_task: typing.Optional[asyncio.Task] = None
async def close_on_idle(self):
while self.transport:
try:
await asyncio.wait_for(self.started_transfer.wait(), self.idle_timeout)
except asyncio.TimeoutError:
log.debug("closing idle connection from %s", self.peer_address_and_port)
return self.close()
self.started_transfer.clear()
await self.transfer_finished.wait()
self.transfer_finished.clear()
def close(self):
if self.transport:
self.transport.close()
def connection_made(self, transport):
self.transport = transport
self.close_on_idle_task = self.loop.create_task(self.close_on_idle())
self.peer_address_and_port = "%s:%i" % self.transport.get_extra_info('peername')
self.blob_manager.connection_manager.connection_received(self.peer_address_and_port)
log.debug("received connection from %s", self.peer_address_and_port)
def connection_lost(self, exc: typing.Optional[Exception]) -> None:
log.debug("lost connection from %s", self.peer_address_and_port)
self.blob_manager.connection_manager.incoming_connection_lost(self.peer_address_and_port)
self.transport = None
if self.close_on_idle_task and not self.close_on_idle_task.done():
self.close_on_idle_task.cancel()
self.close_on_idle_task = None
def send_response(self, responses: typing.List[blob_response_types]):
to_send = []
while responses:
to_send.append(responses.pop())
serialized = BlobResponse(to_send).serialize()
self.transport.write(serialized)
self.blob_manager.connection_manager.sent_data(self.peer_address_and_port, len(serialized))
async def handle_request(self, request: BlobRequest):
addr = self.transport.get_extra_info('peername')
peer_address, peer_port = addr
responses = []
address_request = request.get_address_request()
if address_request:
responses.append(BlobPaymentAddressResponse(lbrycrd_address=self.lbrycrd_address))
availability_request = request.get_availability_request()
if availability_request:
responses.append(BlobAvailabilityResponse(available_blobs=list(set(
filter(lambda blob_hash: blob_hash in self.blob_manager.completed_blob_hashes,
availability_request.requested_blobs)
))))
price_request = request.get_price_request()
if price_request:
responses.append(BlobPriceResponse(blob_data_payment_rate='RATE_ACCEPTED'))
download_request = request.get_blob_request()
if download_request:
blob = self.blob_manager.get_blob(download_request.requested_blob)
if blob.get_is_verified():
incoming_blob = {'blob_hash': blob.blob_hash, 'length': blob.length}
responses.append(BlobDownloadResponse(incoming_blob=incoming_blob))
self.send_response(responses)
blob_hash = blob.blob_hash[:8]
log.debug("send %s to %s:%i", blob_hash, peer_address, peer_port)
self.started_transfer.set()
try:
sent = await asyncio.wait_for(blob.sendfile(self), self.transfer_timeout)
if sent and sent > 0:
self.blob_manager.connection_manager.sent_data(self.peer_address_and_port, sent)
log.info("sent %s (%i bytes) to %s:%i", blob_hash, sent, peer_address, peer_port)
else:
self.close()
log.debug("stopped sending %s to %s:%i", blob_hash, peer_address, peer_port)
return
except (OSError, ValueError, asyncio.TimeoutError) as err:
if isinstance(err, asyncio.TimeoutError):
log.debug("timed out sending blob %s to %s", blob_hash, peer_address)
else:
log.warning("could not read blob %s to send %s:%i", blob_hash, peer_address, peer_port)
self.close()
return
finally:
self.transfer_finished.set()
else:
log.info("don't have %s to send %s:%i", blob.blob_hash[:8], peer_address, peer_port)
if responses and not self.transport.is_closing():
self.send_response(responses)
def data_received(self, data):
request = None
if len(self.buf) + len(data or b'') >= MAX_REQUEST_SIZE:
log.warning("request from %s is too large", self.peer_address_and_port)
self.close()
return
if data:
self.blob_manager.connection_manager.received_data(self.peer_address_and_port, len(data))
_, separator, remainder = data.rpartition(b'}')
if not separator:
self.buf += data
return
try:
request = BlobRequest.deserialize(self.buf + data)
self.buf = remainder
except (UnicodeDecodeError, JSONDecodeError):
log.error("request from %s is not valid json (%i bytes): %s", self.peer_address_and_port,
len(self.buf + data), '' if not data else binascii.hexlify(self.buf + data).decode())
self.close()
return
if not request.requests:
log.error("failed to decode request from %s (%i bytes): %s", self.peer_address_and_port,
len(self.buf + data), '' if not data else binascii.hexlify(self.buf + data).decode())
self.close()
return
self.loop.create_task(self.handle_request(request))
class BlobServer:
def __init__(self, loop: asyncio.AbstractEventLoop, blob_manager: 'BlobManager', lbrycrd_address: str,
idle_timeout: float = 30.0, transfer_timeout: float = 60.0):
self.loop = loop
self.blob_manager = blob_manager
self.server_task: typing.Optional[asyncio.Task] = None
self.started_listening = asyncio.Event()
self.lbrycrd_address = lbrycrd_address
self.idle_timeout = idle_timeout
self.transfer_timeout = transfer_timeout
self.server_protocol_class = BlobServerProtocol
def start_server(self, port: int, interface: typing.Optional[str] = '0.0.0.0'):
if self.server_task is not None:
raise Exception("already running")
async def _start_server():
# checking if the port is in use
# thx https://stackoverflow.com/a/52872579
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(('localhost', port)) == 0:
# the port is already in use!
log.error("Failed to bind TCP %s:%d", interface, port)
server = await self.loop.create_server(
lambda: self.server_protocol_class(self.loop, self.blob_manager, self.lbrycrd_address,
self.idle_timeout, self.transfer_timeout),
interface, port
)
self.started_listening.set()
log.info("Blob server listening on TCP %s:%i", interface, port)
async with server:
await server.serve_forever()
self.server_task = self.loop.create_task(_start_server())
def stop_server(self):
if self.server_task:
self.server_task.cancel()
self.server_task = None
log.info("Stopped blob server")


@ -1,835 +0,0 @@
import os
import re
import sys
import logging
from typing import List, Dict, Tuple, Union, TypeVar, Generic, Optional
from argparse import ArgumentParser
from contextlib import contextmanager
from appdirs import user_data_dir, user_config_dir
import yaml
from lbry.error import InvalidCurrencyError
from lbry.dht import constants
from lbry.wallet.coinselection import STRATEGIES
log = logging.getLogger(__name__)
NOT_SET = type('NOT_SET', (object,), {}) # pylint: disable=invalid-name
T = TypeVar('T')
CURRENCIES = {
'BTC': {'type': 'crypto'},
'LBC': {'type': 'crypto'},
'USD': {'type': 'fiat'},
}
class Setting(Generic[T]):
def __init__(self, doc: str, default: Optional[T] = None,
previous_names: Optional[List[str]] = None,
metavar: Optional[str] = None):
self.doc = doc
self.default = default
self.previous_names = previous_names or []
self.metavar = metavar
def __set_name__(self, owner, name):
self.name = name # pylint: disable=attribute-defined-outside-init
@property
def cli_name(self):
return f"--{self.name.replace('_', '-')}"
@property
def no_cli_name(self):
return f"--no-{self.name.replace('_', '-')}"
def __get__(self, obj: Optional['BaseConfig'], owner) -> T:
if obj is None:
return self
for location in obj.search_order:
if self.name in location:
return location[self.name]
return self.default
def __set__(self, obj: 'BaseConfig', val: Union[T, NOT_SET]):
if val == NOT_SET:
for location in obj.modify_order:
if self.name in location:
del location[self.name]
else:
self.validate(val)
for location in obj.modify_order:
location[self.name] = val
def is_set(self, obj: 'BaseConfig') -> bool:
for location in obj.search_order:
if self.name in location:
return True
return False
def is_set_to_default(self, obj: 'BaseConfig') -> bool:
for location in obj.search_order:
if self.name in location:
return location[self.name] == self.default
return False
def validate(self, value):
raise NotImplementedError()
def deserialize(self, value): # pylint: disable=no-self-use
return value
def serialize(self, value): # pylint: disable=no-self-use
return value
def contribute_to_argparse(self, parser: ArgumentParser):
parser.add_argument(
self.cli_name,
help=self.doc,
metavar=self.metavar,
default=NOT_SET
)
class String(Setting[str]):
def validate(self, value):
assert isinstance(value, str), \
f"Setting '{self.name}' must be a string."
# TODO: remove this after pylint starts to understand generics
def __get__(self, obj: Optional['BaseConfig'], owner) -> str: # pylint: disable=useless-super-delegation
return super().__get__(obj, owner)
class Integer(Setting[int]):
def validate(self, value):
assert isinstance(value, int), \
f"Setting '{self.name}' must be an integer."
def deserialize(self, value):
return int(value)
class Float(Setting[float]):
def validate(self, value):
assert isinstance(value, float), \
f"Setting '{self.name}' must be a decimal."
def deserialize(self, value):
return float(value)
class Toggle(Setting[bool]):
def validate(self, value):
assert isinstance(value, bool), \
f"Setting '{self.name}' must be a true/false value."
def contribute_to_argparse(self, parser: ArgumentParser):
parser.add_argument(
self.cli_name,
help=self.doc,
action="store_true",
default=NOT_SET
)
parser.add_argument(
self.no_cli_name,
help=f"Opposite of {self.cli_name}",
dest=self.name,
action="store_false",
default=NOT_SET
)
class Path(String):
def __init__(self, doc: str, *args, default: str = '', **kwargs):
super().__init__(doc, default, *args, **kwargs)
def __get__(self, obj, owner) -> str:
value = super().__get__(obj, owner)
if isinstance(value, str):
return os.path.expanduser(os.path.expandvars(value))
return value
class MaxKeyFee(Setting[dict]):
def validate(self, value):
if value is not None:
assert isinstance(value, dict) and set(value) == {'currency', 'amount'}, \
f"Setting '{self.name}' must be a dict like \"{{'amount': 50.0, 'currency': 'USD'}}\"."
if value["currency"] not in CURRENCIES:
raise InvalidCurrencyError(value["currency"])
@staticmethod
def _parse_list(l):
if l == ['null']:
return None
assert len(l) == 2, (
'Max key fee is either two values, "AMOUNT CURRENCY", '
'or "null" (to set no limit)'
)
try:
amount = float(l[0])
except ValueError:
raise AssertionError('First value in max key fee must be a decimal: "AMOUNT CURRENCY"')
currency = str(l[1]).upper()
if currency not in CURRENCIES:
raise InvalidCurrencyError(currency)
return {'amount': amount, 'currency': currency}
def deserialize(self, value):
if value is None:
return
if isinstance(value, dict):
return {
'currency': value['currency'],
'amount': float(value['amount']),
}
if isinstance(value, str):
value = value.split()
if isinstance(value, list):
return self._parse_list(value)
raise AssertionError('Invalid max key fee.')
def contribute_to_argparse(self, parser: ArgumentParser):
parser.add_argument(
self.cli_name,
help=self.doc,
nargs='+',
metavar=('AMOUNT', 'CURRENCY'),
default=NOT_SET
)
parser.add_argument(
self.no_cli_name,
help="Disable maximum key fee check.",
dest=self.name,
const=None,
action="store_const",
default=NOT_SET
)
class StringChoice(String):
def __init__(self, doc: str, valid_values: List[str], default: str, *args, **kwargs):
super().__init__(doc, default, *args, **kwargs)
if not valid_values:
raise ValueError("No valid values provided")
if default not in valid_values:
raise ValueError(f"Default value must be one of: {', '.join(valid_values)}")
self.valid_values = valid_values
def validate(self, value):
super().validate(value)
if value not in self.valid_values:
raise ValueError(f"Setting '{self.name}' value must be one of: {', '.join(self.valid_values)}")
class ListSetting(Setting[list]):
def validate(self, value):
assert isinstance(value, (tuple, list)), \
f"Setting '{self.name}' must be a tuple or list."
def contribute_to_argparse(self, parser: ArgumentParser):
parser.add_argument(
self.cli_name,
help=self.doc,
action='append'
)
class Servers(ListSetting):
def validate(self, value):
assert isinstance(value, (tuple, list)), \
f"Setting '{self.name}' must be a tuple or list of servers."
for idx, server in enumerate(value):
assert isinstance(server, (tuple, list)) and len(server) == 2, \
f"Server defined '{server}' at index {idx} in setting " \
f"'{self.name}' must be a tuple or list of two items."
assert isinstance(server[0], str), \
f"Server defined '{server}' at index {idx} in setting " \
f"'{self.name}' must have its hostname as a string in the first position."
assert isinstance(server[1], int), \
f"Server defined '{server}' at index {idx} in setting " \
f"'{self.name}' must have its port as an int in the second position."
def deserialize(self, value):
servers = []
if isinstance(value, list):
for server in value:
if isinstance(server, str) and server.count(':') == 1:
host, port = server.split(':')
try:
servers.append((host, int(port)))
except ValueError:
pass
return servers
def serialize(self, value):
if value:
return [f"{host}:{port}" for host, port in value]
return value
class Strings(ListSetting):
def validate(self, value):
assert isinstance(value, (tuple, list)), \
f"Setting '{self.name}' must be a tuple or list of strings."
for idx, string in enumerate(value):
assert isinstance(string, str), \
f"Value of '{string}' at index {idx} in setting " \
f"'{self.name}' must be a string."
class KnownHubsList:
def __init__(self, config: 'Config' = None, file_name: str = 'known_hubs.yml'):
self.file_name = file_name
self.path = os.path.join(config.wallet_dir, self.file_name) if config else None
self.hubs: Dict[Tuple[str, int], Dict] = {}
if self.exists:
self.load()
@property
def exists(self):
return self.path and os.path.exists(self.path)
@property
def serialized(self) -> Dict[str, Dict]:
return {f"{host}:{port}": details for (host, port), details in self.hubs.items()}
def filter(self, match_none=False, **kwargs):
if not kwargs:
return self.hubs
result = {}
for hub, details in self.hubs.items():
for key, constraint in kwargs.items():
value = details.get(key)
if value == constraint or (match_none and value is None):
result[hub] = details
break
return result
def load(self):
if self.path:
with open(self.path, 'r') as known_hubs_file:
raw = known_hubs_file.read()
for hub, details in yaml.safe_load(raw).items():
self.set(hub, details)
def save(self):
if self.path:
with open(self.path, 'w') as known_hubs_file:
known_hubs_file.write(yaml.safe_dump(self.serialized, default_flow_style=False))
def set(self, hub: str, details: Dict):
if hub and hub.count(':') == 1:
host, port = hub.split(':')
hub_parts = (host, int(port))
if hub_parts not in self.hubs:
self.hubs[hub_parts] = details
return hub
def add_hubs(self, hubs: List[str]):
added = False
for hub in hubs:
if self.set(hub, {}) is not None:
added = True
return added
def items(self):
return self.hubs.items()
def __bool__(self):
return len(self) > 0
def __len__(self):
return self.hubs.__len__()
def __iter__(self):
return iter(self.hubs)
class EnvironmentAccess:
PREFIX = 'LBRY_'
def __init__(self, config: 'BaseConfig', environ: dict):
self.configuration = config
self.data = {}
if environ:
self.load(environ)
def load(self, environ):
for setting in self.configuration.get_settings():
value = environ.get(f'{self.PREFIX}{setting.name.upper()}', NOT_SET)
if value != NOT_SET and not (isinstance(setting, ListSetting) and value is None):
self.data[setting.name] = setting.deserialize(value)
def __contains__(self, item: str):
return item in self.data
def __getitem__(self, item: str):
return self.data[item]
class ArgumentAccess:
def __init__(self, config: 'BaseConfig', args: dict):
self.configuration = config
self.args = {}
if args:
self.load(args)
def load(self, args):
for setting in self.configuration.get_settings():
value = getattr(args, setting.name, NOT_SET)
if value != NOT_SET and not (isinstance(setting, ListSetting) and value is None):
self.args[setting.name] = setting.deserialize(value)
def __contains__(self, item: str):
return item in self.args
def __getitem__(self, item: str):
return self.args[item]
class ConfigFileAccess:
def __init__(self, config: 'BaseConfig', path: str):
self.configuration = config
self.path = path
self.data = {}
if self.exists:
self.load()
@property
def exists(self):
return self.path and os.path.exists(self.path)
def load(self):
cls = type(self.configuration)
with open(self.path, 'r') as config_file:
raw = config_file.read()
serialized = yaml.safe_load(raw) or {}
for key, value in serialized.items():
attr = getattr(cls, key, None)
if attr is None:
for setting in self.configuration.settings:
if key in setting.previous_names:
attr = setting
break
if attr is not None:
self.data[key] = attr.deserialize(value)
def save(self):
cls = type(self.configuration)
serialized = {}
for key, value in self.data.items():
attr = getattr(cls, key)
serialized[key] = attr.serialize(value)
with open(self.path, 'w') as config_file:
config_file.write(yaml.safe_dump(serialized, default_flow_style=False))
def upgrade(self) -> bool:
upgraded = False
for key in list(self.data):
for setting in self.configuration.settings:
if key in setting.previous_names:
self.data[setting.name] = self.data[key]
del self.data[key]
upgraded = True
break
return upgraded
def __contains__(self, item: str):
return item in self.data
def __getitem__(self, item: str):
return self.data[item]
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
TBC = TypeVar('TBC', bound='BaseConfig')
class BaseConfig:
config = Path("Path to configuration file.", metavar='FILE')
def __init__(self, **kwargs):
self.runtime = {} # set internally or by various API calls
self.arguments = {} # from command line arguments
self.environment = {} # from environment variables
self.persisted = {} # from config file
self._updating_config = False
for key, value in kwargs.items():
setattr(self, key, value)
@contextmanager
def update_config(self):
self._updating_config = True
yield self
self._updating_config = False
if isinstance(self.persisted, ConfigFileAccess):
self.persisted.save()
@property
def modify_order(self):
locations = [self.runtime]
if self._updating_config:
locations.append(self.persisted)
return locations
@property
def search_order(self):
return [
self.runtime,
self.arguments,
self.environment,
self.persisted
]
@classmethod
def get_settings(cls):
for attr in dir(cls):
setting = getattr(cls, attr)
if isinstance(setting, Setting):
yield setting
@property
def settings(self):
return self.get_settings()
@property
def settings_dict(self):
return {
setting.name: getattr(self, setting.name) for setting in self.settings
}
@classmethod
def create_from_arguments(cls, args) -> TBC:
conf = cls()
conf.set_arguments(args)
conf.set_environment()
conf.set_persisted()
return conf
@classmethod
def contribute_to_argparse(cls, parser: ArgumentParser):
for setting in cls.get_settings():
setting.contribute_to_argparse(parser)
def set_arguments(self, args):
self.arguments = ArgumentAccess(self, args)
def set_environment(self, environ=None):
self.environment = EnvironmentAccess(self, environ or os.environ)
def set_persisted(self, config_file_path=None):
if config_file_path is None:
config_file_path = self.config
if not config_file_path:
return
ext = os.path.splitext(config_file_path)[1]
assert ext in ('.yml', '.yaml'),\
f"File extension '{ext}' is not supported, " \
f"configuration file must be in YAML (.yml or .yaml)."
self.persisted = ConfigFileAccess(self, config_file_path)
if self.persisted.upgrade():
self.persisted.save()
class TranscodeConfig(BaseConfig):
ffmpeg_path = String('A list of places to check for ffmpeg and ffprobe. '
f'$data_dir/ffmpeg/bin and $PATH are checked afterward. Separator: {os.pathsep}',
'', previous_names=['ffmpeg_folder'])
video_encoder = String('FFmpeg codec and parameters for the video encoding. '
'Example: libaom-av1 -crf 25 -b:v 0 -strict experimental',
'libx264 -crf 24 -preset faster -pix_fmt yuv420p')
video_bitrate_maximum = Integer('Maximum bits per second allowed for video streams (0 to disable).', 5_000_000)
video_scaler = String('FFmpeg scaling parameters for reducing bitrate. '
'Example: -vf "scale=-2:720,fps=24" -maxrate 5M -bufsize 3M',
r'-vf "scale=if(gte(iw\,ih)\,min(1920\,iw)\,-2):if(lt(iw\,ih)\,min(1920\,ih)\,-2)" '
r'-maxrate 5500K -bufsize 5000K')
audio_encoder = String('FFmpeg codec and parameters for the audio encoding. '
'Example: libopus -b:a 128k',
'aac -b:a 160k')
volume_filter = String('FFmpeg filter for audio normalization. Example: -af loudnorm', '')
volume_analysis_time = Integer('Maximum seconds into the file that we examine audio volume (0 to disable).', 240)
class CLIConfig(TranscodeConfig):
api = String('Host name and port for lbrynet daemon API.', 'localhost:5279', metavar='HOST:PORT')
@property
def api_connection_url(self) -> str:
return f"http://{self.api}/lbryapi"
@property
def api_host(self):
return self.api.split(':')[0]
@property
def api_port(self):
return int(self.api.split(':')[1])
class Config(CLIConfig):
jurisdiction = String("Limit interactions to wallet server in this jurisdiction.")
# directories
data_dir = Path("Directory path to store blobs.", metavar='DIR')
download_dir = Path(
"Directory path to place assembled files downloaded from LBRY.",
previous_names=['download_directory'], metavar='DIR'
)
wallet_dir = Path(
"Directory containing a 'wallets' subdirectory with 'default_wallet' file.",
previous_names=['lbryum_wallet_dir'], metavar='DIR'
)
wallets = Strings(
"Wallet files in 'wallet_dir' to load at startup.",
['default_wallet']
)
# network
use_upnp = Toggle(
"Use UPnP to setup temporary port redirects for the DHT and the hosting of blobs. If you manually forward"
"ports or have firewall rules you likely want to disable this.", True
)
udp_port = Integer("UDP port for communicating on the LBRY DHT", 4444, previous_names=['dht_node_port'])
tcp_port = Integer("TCP port to listen for incoming blob requests", 4444, previous_names=['peer_port'])
prometheus_port = Integer("Port to expose prometheus metrics (off by default)", 0)
network_interface = String("Interface to use for the DHT and blob exchange", '0.0.0.0')
# routing table
split_buckets_under_index = Integer(
"Routing table bucket index below which we always split the bucket if given a new key to add to it and "
"the bucket is full. As this value is raised the depth of the routing table (and number of peers in it) "
"will increase. This setting is used by seed nodes, you probably don't want to change it during normal "
"use.", 2
)
is_bootstrap_node = Toggle(
"When running as a bootstrap node, disable all logic related to balancing the routing table, so we can "
"add as many peers as possible and better help first-runs.", False
)
# protocol timeouts
download_timeout = Float("Cumulative timeout for a stream to begin downloading before giving up", 30.0)
blob_download_timeout = Float("Timeout to download a blob from a peer", 30.0)
hub_timeout = Float("Timeout when making a hub request", 30.0)
peer_connect_timeout = Float("Timeout to establish a TCP connection to a peer", 3.0)
node_rpc_timeout = Float("Timeout when making a DHT request", constants.RPC_TIMEOUT)
# blob announcement and download
save_blobs = Toggle("Save encrypted blob files for hosting, otherwise download blobs to memory only.", True)
network_storage_limit = Integer("Disk space in MB to be allocated for helping the P2P network. 0 = disable", 0)
blob_storage_limit = Integer("Disk space in MB to be allocated for blob storage. 0 = no limit", 0)
blob_lru_cache_size = Integer(
"LRU cache size for decrypted downloaded blobs used to minimize re-downloading the same blobs when "
"replying to a range request. Set to 0 to disable.", 32
)
announce_head_and_sd_only = Toggle(
"Announce only the descriptor and first (rather than all) data blob for a stream to the DHT", True,
previous_names=['announce_head_blobs_only']
)
concurrent_blob_announcers = Integer(
"Number of blobs to iteratively announce at once, set to 0 to disable", 10,
previous_names=['concurrent_announcers']
)
max_connections_per_download = Integer(
"Maximum number of peers to connect to while downloading a blob", 4,
previous_names=['max_connections_per_stream']
)
concurrent_hub_requests = Integer("Maximum number of concurrent hub requests", 32)
fixed_peer_delay = Float(
"Amount of seconds before adding the reflector servers as potential peers to download from in case dht"
"peers are not found or are slow", 2.0
)
max_key_fee = MaxKeyFee(
"Don't download streams with fees exceeding this amount. When set to "
"null, the amount is unbounded.", {'currency': 'USD', 'amount': 50.0}
)
max_wallet_server_fee = String("Maximum daily LBC amount allowed as payment for wallet servers.", "0.0")
# reflector settings
reflect_streams = Toggle(
"Upload completed streams (published and downloaded) reflector in order to re-host them", True,
previous_names=['reflect_uploads']
)
concurrent_reflector_uploads = Integer(
"Maximum number of streams to upload to a reflector server at a time", 10
)
# servers
reflector_servers = Servers("Reflector re-hosting servers for mirroring publishes", [
('reflector.lbry.com', 5566)
])
fixed_peers = Servers("Fixed peers to fall back to if none are found on P2P for a blob", [
('cdn.reflector.lbry.com', 5567)
])
tracker_servers = Servers("BitTorrent-compatible (BEP15) UDP trackers for helping P2P discovery", [
('tracker.lbry.com', 9252),
('tracker.lbry.grin.io', 9252),
('tracker.lbry.pigg.es', 9252),
('tracker.lizard.technology', 9252),
('s1.lbry.network', 9252),
])
lbryum_servers = Servers("SPV wallet servers", [
('spv11.lbry.com', 50001),
('spv12.lbry.com', 50001),
('spv13.lbry.com', 50001),
('spv14.lbry.com', 50001),
('spv15.lbry.com', 50001),
('spv16.lbry.com', 50001),
('spv17.lbry.com', 50001),
('spv18.lbry.com', 50001),
('spv19.lbry.com', 50001),
('hub.lbry.grin.io', 50001),
('hub.lizard.technology', 50001),
('s1.lbry.network', 50001),
])
known_dht_nodes = Servers("Known nodes for bootstrapping connection to the DHT", [
('dht.lbry.grin.io', 4444), # Grin
('dht.lbry.madiator.com', 4444), # Madiator
('dht.lbry.pigg.es', 4444), # Pigges
('lbrynet1.lbry.com', 4444), # US EAST
('lbrynet2.lbry.com', 4444), # US WEST
('lbrynet3.lbry.com', 4444), # EU
('lbrynet4.lbry.com', 4444), # ASIA
('dht.lizard.technology', 4444), # Jack
('s2.lbry.network', 4444),
])
# blockchain
blockchain_name = String("Blockchain name - lbrycrd_main, lbrycrd_regtest, or lbrycrd_testnet", 'lbrycrd_main')
# daemon
save_files = Toggle("Save downloaded files when calling `get` by default", False)
components_to_skip = Strings("Components which will be skipped during start-up of the daemon", [])
share_usage_data = Toggle(
"Whether to share usage stats and diagnostic info with LBRY.", False,
previous_names=['upload_log', 'share_debug_info']
)
track_bandwidth = Toggle("Track bandwidth usage", True)
allowed_origin = String(
"Allowed `Origin` header value for API request (sent by browser), use * to allow "
"all hosts; default is to only allow API requests with no `Origin` value.", "")
# media server
streaming_server = String('Host name and port to serve streaming media over range requests',
'localhost:5280', metavar='HOST:PORT')
streaming_get = Toggle("Enable the /get endpoint for the streaming media server. "
"Disable to prevent new streams from being added.", True)
coin_selection_strategy = StringChoice(
"Strategy to use when selecting UTXOs for a transaction",
STRATEGIES, "prefer_confirmed"
)
transaction_cache_size = Integer("Transaction cache size", 2 ** 17)
save_resolved_claims = Toggle(
"Save content claims to the database when they are resolved to keep file_list up to date, "
"only disable this if file_x commands are not needed", True
)
@property
def streaming_host(self):
return self.streaming_server.split(':')[0]
@property
def streaming_port(self):
return int(self.streaming_server.split(':')[1])
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.set_default_paths()
self.known_hubs = KnownHubsList(self)
def set_default_paths(self):
if 'darwin' in sys.platform.lower():
get_directories = get_darwin_directories
elif 'win' in sys.platform.lower():
get_directories = get_windows_directories
elif 'linux' in sys.platform.lower():
get_directories = get_linux_directories
else:
return
cls = type(self)
cls.data_dir.default, cls.wallet_dir.default, cls.download_dir.default = get_directories()
cls.config.default = os.path.join(
self.data_dir, 'daemon_settings.yml'
)
@property
def log_file_path(self):
return os.path.join(self.data_dir, 'lbrynet.log')
def get_windows_directories() -> Tuple[str, str, str]:
from lbry.winpaths import get_path, FOLDERID, UserHandle, \
PathNotFoundException # pylint: disable=import-outside-toplevel
try:
download_dir = get_path(FOLDERID.Downloads, UserHandle.current)
except PathNotFoundException:
download_dir = os.getcwd()
# old
appdata = get_path(FOLDERID.RoamingAppData, UserHandle.current)
data_dir = os.path.join(appdata, 'lbrynet')
lbryum_dir = os.path.join(appdata, 'lbryum')
if os.path.isdir(data_dir) or os.path.isdir(lbryum_dir):
return data_dir, lbryum_dir, download_dir
# new
data_dir = user_data_dir('lbrynet', 'lbry')
lbryum_dir = user_data_dir('lbryum', 'lbry')
return data_dir, lbryum_dir, download_dir
def get_darwin_directories() -> Tuple[str, str, str]:
data_dir = user_data_dir('LBRY')
lbryum_dir = os.path.expanduser('~/.lbryum')
download_dir = os.path.expanduser('~/Downloads')
return data_dir, lbryum_dir, download_dir
def get_linux_directories() -> Tuple[str, str, str]:
try:
with open(os.path.join(user_config_dir(), 'user-dirs.dirs'), 'r') as xdg:
down_dir = re.search(r'XDG_DOWNLOAD_DIR=(.+)', xdg.read())
if down_dir:
down_dir = re.sub(r'\$HOME', os.getenv('HOME') or os.path.expanduser("~/"), down_dir.group(1))
download_dir = re.sub('\"', '', down_dir)
except OSError:
download_dir = os.getenv('XDG_DOWNLOAD_DIR')
if not download_dir:
download_dir = os.path.expanduser('~/Downloads')
# old
data_dir = os.path.expanduser('~/.lbrynet')
lbryum_dir = os.path.expanduser('~/.lbryum')
if os.path.isdir(data_dir) or os.path.isdir(lbryum_dir):
return data_dir, lbryum_dir, download_dir
# new
return user_data_dir('lbry/lbrynet'), user_data_dir('lbry/lbryum'), download_dir
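# Hedged example (an addition, not part of the original conf.py): values resolve
# in runtime > arguments > environment > persisted order, and assignments made
# inside update_config() are also written back to the YAML config file.
def _config_example():
    conf = Config()       # platform-specific default paths are filled in
    conf.set_persisted()  # loads $data_dir/daemon_settings.yml if it exists
    with conf.update_config() as c:
        c.tcp_port = 4445
    return conf.tcp_port  # -> 4445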


@ -1,105 +0,0 @@
import time
import asyncio
import typing
import collections
import logging
log = logging.getLogger(__name__)
CONNECTED_EVENT = "connected"
DISCONNECTED_EVENT = "disconnected"
TRANSFERRED_EVENT = "transferred"
class ConnectionManager:
def __init__(self, loop: asyncio.AbstractEventLoop):
self.loop = loop
self.incoming_connected: typing.Set[str] = set()
self.incoming: typing.DefaultDict[str, int] = collections.defaultdict(int)
self.outgoing_connected: typing.Set[str] = set()
self.outgoing: typing.DefaultDict[str, int] = collections.defaultdict(int)
self._max_incoming_mbs = 0.0
self._max_outgoing_mbs = 0.0
self._status = {}
self._running = False
self._task: typing.Optional[asyncio.Task] = None
@property
def status(self):
return self._status
def sent_data(self, host_and_port: str, size: int):
if self._running:
self.outgoing[host_and_port] += size
def received_data(self, host_and_port: str, size: int):
if self._running:
self.incoming[host_and_port] += size
def connection_made(self, host_and_port: str):
if self._running:
self.outgoing_connected.add(host_and_port)
def connection_received(self, host_and_port: str):
# self.incoming_connected.add(host_and_port)
pass
def outgoing_connection_lost(self, host_and_port: str):
if self._running and host_and_port in self.outgoing_connected:
self.outgoing_connected.remove(host_and_port)
def incoming_connection_lost(self, host_and_port: str):
if self._running and host_and_port in self.incoming_connected:
self.incoming_connected.remove(host_and_port)
async def _update(self):
self._status = {
'incoming_bps': {},
'outgoing_bps': {},
'total_incoming_mbs': 0.0,
'total_outgoing_mbs': 0.0,
'total_sent': 0,
'total_received': 0,
'max_incoming_mbs': 0.0,
'max_outgoing_mbs': 0.0
}
while True:
last = time.perf_counter()
await asyncio.sleep(0.1)
self._status['incoming_bps'].clear()
self._status['outgoing_bps'].clear()
now = time.perf_counter()
while self.outgoing:
k, sent = self.outgoing.popitem()
self._status['total_sent'] += sent
self._status['outgoing_bps'][k] = sent / (now - last)
while self.incoming:
k, received = self.incoming.popitem()
self._status['total_received'] += received
self._status['incoming_bps'][k] = received / (now - last)
self._status['total_outgoing_mbs'] = int(sum(self._status['outgoing_bps'].values())) / 1000000.0
self._status['total_incoming_mbs'] = int(sum(self._status['incoming_bps'].values())) / 1000000.0
self._max_incoming_mbs = max(self._max_incoming_mbs, self._status['total_incoming_mbs'])
self._max_outgoing_mbs = max(self._max_outgoing_mbs, self._status['total_outgoing_mbs'])
self._status['max_incoming_mbs'] = self._max_incoming_mbs
self._status['max_outgoing_mbs'] = self._max_outgoing_mbs
def stop(self):
if self._task:
self._task.cancel()
self._task = None
self.outgoing.clear()
self.outgoing_connected.clear()
self.incoming.clear()
self.incoming_connected.clear()
self._status.clear()
self._running = False
def start(self):
self.stop()
self._running = True
self._task = self.loop.create_task(self._update())
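# Usage sketch (added; not in the original file): the manager samples its
# counters every 100ms, so rates show up in .status shortly after traffic.
def _bandwidth_example(loop):
    manager = ConnectionManager(loop)
    manager.start()
    manager.received_data('203.0.113.5:3333', 2 * 1024 * 1024)  # 2 MiB inbound
    return manager.status  # populated by the background _update() task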


@ -1,2 +0,0 @@
CENT = 1000000
COIN = 100*CENT
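# Added note (an assumption, mirroring the bitcoin satoshi convention): these
# express amounts in the smallest integer unit, so COIN == 100 * CENT ==
# 100,000,000 base units per whole coin.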


@ -1,86 +0,0 @@
from lbry.crypto.hash import double_sha256
from lbry.crypto.util import bytes_to_int, int_to_bytes
class Base58Error(Exception):
""" Exception used for Base58 errors. """
class Base58:
""" Class providing base 58 functionality. """
chars = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
assert len(chars) == 58
char_map = {c: n for n, c in enumerate(chars)}
@classmethod
def char_value(cls, c):
val = cls.char_map.get(c)
if val is None:
raise Base58Error(f'invalid base 58 character "{c}"')
return val
@classmethod
def decode(cls, txt):
""" Decodes txt into a big-endian bytearray. """
if isinstance(txt, memoryview):
txt = bytes(txt)
if isinstance(txt, bytes):
txt = txt.decode()
if not isinstance(txt, str):
raise TypeError('a string is required')
if not txt:
raise Base58Error('string cannot be empty')
value = 0
for c in txt:
value = value * 58 + cls.char_value(c)
result = int_to_bytes(value)
# Prepend leading zero bytes if necessary
count = 0
for c in txt:
if c != '1':
break
count += 1
if count:
result = bytes((0,)) * count + result
return result
@classmethod
def encode(cls, be_bytes):
"""Converts a big-endian bytearray into a base58 string."""
value = bytes_to_int(be_bytes)
txt = ''
while value:
value, mod = divmod(value, 58)
txt += cls.chars[mod]
for byte in be_bytes:
if byte != 0:
break
txt += '1'
return txt[::-1]
@classmethod
def decode_check(cls, txt, hash_fn=double_sha256):
""" Decodes a Base58Check-encoded string to a payload. The version prefixes it. """
be_bytes = cls.decode(txt)
result, check = be_bytes[:-4], be_bytes[-4:]
if check != hash_fn(result)[:4]:
raise Base58Error(f'invalid base 58 checksum for {txt}')
return result
@classmethod
def encode_check(cls, payload, hash_fn=double_sha256):
""" Encodes a payload bytearray (which includes the version byte(s))
into a Base58Check string."""
be_bytes = payload + hash_fn(payload)[:4]
return cls.encode(be_bytes)
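# Round-trip sketch (added; not original code): encode_check appends a 4-byte
# double_sha256 checksum that decode_check verifies and strips.
def _base58_example():
    payload = b'\x55' + b'\x00' * 20  # illustrative version byte + hash160-sized body
    encoded = Base58.encode_check(payload)
    assert Base58.decode_check(encoded) == payload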


@ -1,71 +0,0 @@
import os
import base64
import typing
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.ciphers import Cipher, modes
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.padding import PKCS7
from cryptography.hazmat.backends import default_backend
from lbry.error import InvalidPasswordError
from lbry.crypto.hash import double_sha256
def aes_encrypt(secret: str, value: str, init_vector: bytes = None) -> str:
if init_vector is not None:
assert len(init_vector) == 16
else:
init_vector = os.urandom(16)
key = double_sha256(secret.encode())
encryptor = Cipher(AES(key), modes.CBC(init_vector), default_backend()).encryptor()
padder = PKCS7(AES.block_size).padder()
padded_data = padder.update(value.encode()) + padder.finalize()
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
return base64.b64encode(init_vector + encrypted_data).decode()
def aes_decrypt(secret: str, value: str) -> typing.Tuple[str, bytes]:
try:
data = base64.b64decode(value.encode())
key = double_sha256(secret.encode())
init_vector, data = data[:16], data[16:]
decryptor = Cipher(AES(key), modes.CBC(init_vector), default_backend()).decryptor()
unpadder = PKCS7(AES.block_size).unpadder()
result = unpadder.update(decryptor.update(data)) + unpadder.finalize()
return result.decode(), init_vector
except UnicodeDecodeError:
raise InvalidPasswordError()
except ValueError as e:
if e.args[0] == 'Invalid padding bytes.':
raise InvalidPasswordError()
raise
def better_aes_encrypt(secret: str, value: bytes) -> bytes:
init_vector = os.urandom(16)
key = scrypt(secret.encode(), salt=init_vector)
encryptor = Cipher(AES(key), modes.CBC(init_vector), default_backend()).encryptor()
padder = PKCS7(AES.block_size).padder()
padded_data = padder.update(value) + padder.finalize()
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
return base64.b64encode(b's:8192:16:1:' + init_vector + encrypted_data)
def better_aes_decrypt(secret: str, value: bytes) -> bytes:
try:
data = base64.b64decode(value)
_, scrypt_n, scrypt_r, scrypt_p, data = data.split(b':', maxsplit=4)
init_vector, data = data[:16], data[16:]
key = scrypt(secret.encode(), init_vector, int(scrypt_n), int(scrypt_r), int(scrypt_p))
decryptor = Cipher(AES(key), modes.CBC(init_vector), default_backend()).decryptor()
unpadder = PKCS7(AES.block_size).unpadder()
return unpadder.update(decryptor.update(data)) + unpadder.finalize()
except ValueError as e:
if e.args[0] == 'Invalid padding bytes.':
raise InvalidPasswordError()
raise
def scrypt(passphrase, salt, scrypt_n=1<<13, scrypt_r=16, scrypt_p=1):
kdf = Scrypt(salt, length=32, n=scrypt_n, r=scrypt_r, p=scrypt_p, backend=default_backend())
return kdf.derive(passphrase)
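# Round-trip sketch (an addition, not original code): better_aes_encrypt derives
# its key with scrypt, salted by the random IV, and prefixes the scrypt
# parameters so better_aes_decrypt can re-derive the same key.
def _encryption_example():
    token = better_aes_encrypt('hunter2', b'secret payload')
    assert better_aes_decrypt('hunter2', token) == b'secret payload'
    # a wrong password almost always breaks the PKCS7 padding and surfaces as
    # InvalidPasswordError rather than returning garbage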


@ -1,47 +0,0 @@
import hashlib
import hmac
from binascii import hexlify, unhexlify
def sha256(x):
""" Simple wrapper of hashlib sha256. """
return hashlib.sha256(x).digest()
def sha512(x):
""" Simple wrapper of hashlib sha512. """
return hashlib.sha512(x).digest()
def ripemd160(x):
""" Simple wrapper of hashlib ripemd160. """
h = hashlib.new('ripemd160')
h.update(x)
return h.digest()
def double_sha256(x):
""" SHA-256 of SHA-256, as used extensively in bitcoin. """
return sha256(sha256(x))
def hmac_sha512(key, msg):
""" Use SHA-512 to provide an HMAC. """
return hmac.new(key, msg, hashlib.sha512).digest()
def hash160(x):
""" RIPEMD-160 of SHA-256.
Used to make bitcoin addresses from pubkeys. """
return ripemd160(sha256(x))
def hash_to_hex_str(x):
""" Convert a big-endian binary hash to a displayed hex string.
Display form of a binary hash is reversed and converted to hex. """
return hexlify(x[::-1]).decode()
def hex_str_to_hash(x):
""" Convert a displayed hex string to a binary hash. """
return unhexlify(x)[::-1]
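# Illustrative usage (added; not part of the original file): hash160 produces
# the 20-byte address payload and double_sha256 backs checksums and txids.
def _hash_example(pubkey: bytes) -> bytes:
    assert len(double_sha256(pubkey)) == 32
    return hash160(pubkey)  # ripemd160(sha256(pubkey))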


@ -1,13 +0,0 @@
from binascii import unhexlify, hexlify
def bytes_to_int(be_bytes):
""" Interprets a big-endian sequence of bytes as an integer. """
return int(hexlify(be_bytes), 16)
def int_to_bytes(value):
""" Converts an integer to a big-endian sequence of bytes. """
length = (value.bit_length() + 7) // 8
s = '%x' % value
return unhexlify(('0' * (len(s) % 2) + s).zfill(length * 2))
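# Round-trip sketch (added): the two helpers are big-endian inverses.
def _int_bytes_example():
    assert int_to_bytes(0x1234) == b'\x12\x34'
    assert bytes_to_int(b'\x12\x34') == 0x1234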


@ -1,78 +0,0 @@
import asyncio
import typing
import logging
from prometheus_client import Counter, Gauge
if typing.TYPE_CHECKING:
from lbry.dht.node import Node
from lbry.extras.daemon.storage import SQLiteStorage
log = logging.getLogger(__name__)
class BlobAnnouncer:
announcements_sent_metric = Counter(
"announcements_sent", "Number of announcements sent and their respective status.", namespace="dht_node",
labelnames=("peers", "error"),
)
announcement_queue_size_metric = Gauge(
"announcement_queue_size", "Number of hashes waiting to be announced.", namespace="dht_node",
labelnames=("scope",)
)
def __init__(self, loop: asyncio.AbstractEventLoop, node: 'Node', storage: 'SQLiteStorage'):
self.loop = loop
self.node = node
self.storage = storage
self.announce_task: asyncio.Task = None
self.announce_queue: typing.List[str] = []
self._done = asyncio.Event()
self.announced = set()
async def _run_consumer(self):
while self.announce_queue:
try:
blob_hash = self.announce_queue.pop()
peers = len(await self.node.announce_blob(blob_hash))
self.announcements_sent_metric.labels(peers=peers, error=False).inc()
if peers > 4:
self.announced.add(blob_hash)
else:
log.debug("failed to announce %s, could only find %d peers, retrying soon.", blob_hash[:8], peers)
except Exception as err:
self.announcements_sent_metric.labels(peers=0, error=True).inc()
log.warning("error announcing %s: %s", blob_hash[:8], str(err))
async def _announce(self, batch_size: typing.Optional[int] = 10):
while batch_size:
if not self.node.joined.is_set():
await self.node.joined.wait()
await asyncio.sleep(60)
if not self.node.protocol.routing_table.get_peers():
log.warning("No peers in DHT, announce round skipped")
continue
self.announce_queue.extend(await self.storage.get_blobs_to_announce())
self.announcement_queue_size_metric.labels(scope="global").set(len(self.announce_queue))
log.debug("announcer task wake up, %d blobs to announce", len(self.announce_queue))
while len(self.announce_queue) > 0:
log.info("%i blobs to announce", len(self.announce_queue))
await asyncio.gather(*[self._run_consumer() for _ in range(batch_size)])
announced = list(filter(None, self.announced))
if announced:
await self.storage.update_last_announced_blobs(announced)
log.info("announced %i blobs", len(announced))
self.announced.clear()
self._done.set()
self._done.clear()
def start(self, batch_size: typing.Optional[int] = 10):
assert not self.announce_task or self.announce_task.done(), "already running"
self.announce_task = self.loop.create_task(self._announce(batch_size))
def stop(self):
if self.announce_task and not self.announce_task.done():
self.announce_task.cancel()
def wait(self):
return self._done.wait()
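A hedged usage sketch of the announcer lifecycle; `node` and `storage` are assumed to be already-started Node and SQLiteStorage instances:

import asyncio

async def announce_once(node, storage):
    announcer = BlobAnnouncer(asyncio.get_event_loop(), node, storage)
    announcer.start(batch_size=10)  # spawns the _announce loop
    try:
        await announcer.wait()      # resolves when one announce round completes
    finally:
        announcer.stop()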

View file

@ -1,40 +0,0 @@
import hashlib
import os
HASH_CLASS = hashlib.sha384 # pylint: disable=invalid-name
HASH_LENGTH = HASH_CLASS().digest_size
HASH_BITS = HASH_LENGTH * 8
ALPHA = 5
K = 8
SPLIT_BUCKETS_UNDER_INDEX = 1
REPLACEMENT_CACHE_SIZE = 8
RPC_TIMEOUT = 5.0
RPC_ATTEMPTS = 5
RPC_ATTEMPTS_PRUNING_WINDOW = 600
ITERATIVE_LOOKUP_DELAY = RPC_TIMEOUT / 2.0 # TODO: use config val / 2 if rpc timeout is provided
REFRESH_INTERVAL = 3600 # 1 hour
REPLICATE_INTERVAL = REFRESH_INTERVAL
DATA_EXPIRATION = 86400 # 24 hours
TOKEN_SECRET_REFRESH_INTERVAL = 300 # 5 minutes
MAYBE_PING_DELAY = 300 # 5 minutes
CHECK_REFRESH_INTERVAL = REFRESH_INTERVAL / 5
RPC_ID_LENGTH = 20
PROTOCOL_VERSION = 1
MSG_SIZE_LIMIT = 1400
def digest(data: bytes) -> bytes:
h = HASH_CLASS()
h.update(data)
return h.digest()
def generate_id(num=None) -> bytes:
if num is not None:
return digest(str(num).encode())
else:
return digest(os.urandom(32))
def generate_rpc_id(num=None) -> bytes:
return generate_id(num)[:RPC_ID_LENGTH]
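A small sketch of the ID helpers, assuming this module is importable as lbry.dht.constants: node ids are 48-byte SHA-384 digests and RPC ids are 20-byte prefixes of one.

from lbry.dht import constants

node_id = constants.generate_id()     # SHA-384 over os.urandom(32) -> 48 bytes
rpc_id = constants.generate_rpc_id()  # first 20 bytes of a generated id
assert len(node_id) == constants.HASH_LENGTH == 48
assert len(rpc_id) == constants.RPC_ID_LENGTH == 20
assert constants.generate_id(42) == constants.generate_id(42)  # deterministic when seeded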

View file

@ -1,23 +0,0 @@
class BaseKademliaException(Exception):
pass
class DecodeError(BaseKademliaException):
"""
Should be raised by an ``Encoding`` implementation if the decode operation
fails
"""
class BucketFull(BaseKademliaException):
"""
Raised when the bucket is full
"""
class RemoteException(BaseKademliaException):
pass
class TransportNotConnected(BaseKademliaException):
pass
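A hedged sketch of how callers typically treat these; `rpc_peer` is assumed to be a RemoteKademliaRPC obtained from protocol.get_rpc_peer():

import asyncio

async def safe_ping(rpc_peer):
    try:
        return await rpc_peer.ping()
    except (RemoteException, asyncio.TimeoutError):
        return None  # remote error or timeout: caller records the failure
    except TransportNotConnected:
        raise        # our own UDP transport is gone; not peer-specific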

View file

@ -1,282 +0,0 @@
import logging
import asyncio
import typing
import socket
from prometheus_client import Gauge
from lbry.utils import aclosing, resolve_host
from lbry.dht import constants
from lbry.dht.peer import make_kademlia_peer
from lbry.dht.protocol.distance import Distance
from lbry.dht.protocol.iterative_find import IterativeNodeFinder, IterativeValueFinder
from lbry.dht.protocol.protocol import KademliaProtocol
if typing.TYPE_CHECKING:
from lbry.dht.peer import PeerManager
from lbry.dht.peer import KademliaPeer
from lbry.extras.daemon.storage import SQLiteStorage
log = logging.getLogger(__name__)
class Node:
storing_peers_metric = Gauge(
"storing_peers", "Number of peers storing blobs announced to this node", namespace="dht_node",
labelnames=("scope",),
)
stored_blob_with_x_bytes_colliding = Gauge(
"stored_blobs_x_bytes_colliding", "Number of blobs with at least X bytes colliding with this node id prefix",
namespace="dht_node", labelnames=("amount",)
)
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', node_id: bytes, udp_port: int,
internal_udp_port: int, peer_port: int, external_ip: str, rpc_timeout: float = constants.RPC_TIMEOUT,
split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_bootstrap_node: bool = False,
storage: typing.Optional['SQLiteStorage'] = None):
self.loop = loop
self.internal_udp_port = internal_udp_port
self.protocol = KademliaProtocol(loop, peer_manager, node_id, external_ip, udp_port, peer_port, rpc_timeout,
split_buckets_under_index, is_bootstrap_node)
self.listening_port: asyncio.DatagramTransport = None
self.joined = asyncio.Event()
self._join_task: asyncio.Task = None
self._refresh_task: asyncio.Task = None
self._storage = storage
@property
def stored_blob_hashes(self):
return self.protocol.data_store.keys()
async def refresh_node(self, force_once=False):
while True:
# remove peers with expired blob announcements from the datastore
self.protocol.data_store.removed_expired_peers()
total_peers: typing.List['KademliaPeer'] = []
# add all peers in the routing table
total_peers.extend(self.protocol.routing_table.get_peers())
# add all the peers who have announced blobs to us
storing_peers = self.protocol.data_store.get_storing_contacts()
self.storing_peers_metric.labels("global").set(len(storing_peers))
total_peers.extend(storing_peers)
counts = {0: 0, 1: 0, 2: 0}
node_id = self.protocol.node_id
for blob_hash in self.protocol.data_store.keys():
bytes_colliding = 0 if blob_hash[0] != node_id[0] else 2 if blob_hash[1] == node_id[1] else 1
counts[bytes_colliding] += 1
self.stored_blob_with_x_bytes_colliding.labels(amount=0).set(counts[0])
self.stored_blob_with_x_bytes_colliding.labels(amount=1).set(counts[1])
self.stored_blob_with_x_bytes_colliding.labels(amount=2).set(counts[2])
# get ids falling in the midpoint of each bucket that hasn't been recently updated
node_ids = self.protocol.routing_table.get_refresh_list(0, True)
if self.protocol.routing_table.get_peers():
# if we have node ids to look up, perform the iterative search until we have k results
while node_ids:
peers = await self.peer_search(node_ids.pop())
total_peers.extend(peers)
else:
if force_once:
break
fut = asyncio.Future()
self.loop.call_later(constants.REFRESH_INTERVAL // 4, fut.set_result, None)
await fut
continue
# ping the set of peers; upon success/failure the routing table and last replied/failed time will be updated
to_ping = [peer for peer in set(total_peers) if self.protocol.peer_manager.peer_is_good(peer) is not True]
if to_ping:
self.protocol.ping_queue.enqueue_maybe_ping(*to_ping, delay=0)
if self._storage:
await self._storage.save_kademlia_peers(self.protocol.routing_table.get_peers())
if force_once:
break
fut = asyncio.Future()
self.loop.call_later(constants.REFRESH_INTERVAL, fut.set_result, None)
await fut
async def announce_blob(self, blob_hash: str) -> typing.List[bytes]:
hash_value = bytes.fromhex(blob_hash)
assert len(hash_value) == constants.HASH_LENGTH
peers = await self.peer_search(hash_value)
if not self.protocol.external_ip:
raise Exception("Cannot determine external IP")
log.debug("Store to %i peers", len(peers))
for peer in peers:
log.debug("store to %s %s %s", peer.address, peer.udp_port, peer.tcp_port)
stored_to_tup = await asyncio.gather(
*(self.protocol.store_to_peer(hash_value, peer) for peer in peers)
)
stored_to = [node_id for node_id, contacted in stored_to_tup if contacted]
if stored_to:
log.debug(
"Stored %s to %i of %i attempted peers", hash_value.hex()[:8],
len(stored_to), len(peers)
)
else:
log.debug("Failed announcing %s, stored to 0 peers", blob_hash[:8])
return stored_to
def stop(self) -> None:
if self.joined.is_set():
self.joined.clear()
if self._join_task:
self._join_task.cancel()
if self._refresh_task and not (self._refresh_task.done() or self._refresh_task.cancelled()):
self._refresh_task.cancel()
if self.protocol and self.protocol.ping_queue.running:
self.protocol.ping_queue.stop()
self.protocol.stop()
if self.listening_port is not None:
self.listening_port.close()
self._join_task = None
self.listening_port = None
log.info("Stopped DHT node")
async def start_listening(self, interface: str = '0.0.0.0') -> None:
if not self.listening_port:
self.listening_port, _ = await self.loop.create_datagram_endpoint(
lambda: self.protocol, (interface, self.internal_udp_port)
)
log.info("DHT node listening on UDP %s:%i", interface, self.internal_udp_port)
self.protocol.start()
else:
log.warning("Already bound to port %s", self.listening_port)
async def join_network(self, interface: str = '0.0.0.0',
known_node_urls: typing.Optional[typing.List[typing.Tuple[str, int]]] = None):
def peers_from_urls(urls: typing.Optional[typing.List[typing.Tuple[bytes, str, int, int]]]):
peer_addresses = []
for node_id, address, udp_port, tcp_port in urls:
if (node_id, address, udp_port, tcp_port) not in peer_addresses and \
(address, udp_port) != (self.protocol.external_ip, self.protocol.udp_port):
peer_addresses.append((node_id, address, udp_port, tcp_port))
return [make_kademlia_peer(*peer_address) for peer_address in peer_addresses]
if not self.listening_port:
await self.start_listening(interface)
self.protocol.ping_queue.start()
self._refresh_task = self.loop.create_task(self.refresh_node())
while True:
if self.protocol.routing_table.get_peers():
if not self.joined.is_set():
self.joined.set()
log.info(
"joined dht, %i peers known in %i buckets", len(self.protocol.routing_table.get_peers()),
self.protocol.routing_table.buckets_with_contacts()
)
else:
if self.joined.is_set():
self.joined.clear()
seed_peers = peers_from_urls(
await self._storage.get_persisted_kademlia_peers()
) if self._storage else []
if not seed_peers:
try:
seed_peers.extend(peers_from_urls([
(None, await resolve_host(address, udp_port, 'udp'), udp_port, None)
for address, udp_port in known_node_urls or []
]))
except socket.gaierror:
await asyncio.sleep(30)
continue
self.protocol.peer_manager.reset()
self.protocol.ping_queue.enqueue_maybe_ping(*seed_peers, delay=0.0)
await self.peer_search(self.protocol.node_id, shortlist=seed_peers, count=32)
await asyncio.sleep(1)
def start(self, interface: str, known_node_urls: typing.Optional[typing.List[typing.Tuple[str, int]]] = None):
self._join_task = self.loop.create_task(self.join_network(interface, known_node_urls))
def get_iterative_node_finder(self, key: bytes, shortlist: typing.Optional[typing.List['KademliaPeer']] = None,
max_results: int = constants.K) -> IterativeNodeFinder:
shortlist = shortlist or self.protocol.routing_table.find_close_peers(key)
return IterativeNodeFinder(self.loop, self.protocol, key, max_results, shortlist)
def get_iterative_value_finder(self, key: bytes, shortlist: typing.Optional[typing.List['KademliaPeer']] = None,
max_results: int = -1) -> IterativeValueFinder:
shortlist = shortlist or self.protocol.routing_table.find_close_peers(key)
return IterativeValueFinder(self.loop, self.protocol, key, max_results, shortlist)
async def peer_search(self, node_id: bytes, count=constants.K, max_results=constants.K * 2,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None
) -> typing.List['KademliaPeer']:
peers = []
async with aclosing(self.get_iterative_node_finder(
node_id, shortlist=shortlist, max_results=max_results)) as node_finder:
async for iteration_peers in node_finder:
peers.extend(iteration_peers)
distance = Distance(node_id)
peers.sort(key=lambda peer: distance(peer.node_id))
return peers[:count]
async def _accumulate_peers_for_value(self, search_queue: asyncio.Queue, result_queue: asyncio.Queue):
tasks = []
try:
while True:
blob_hash = await search_queue.get()
tasks.append(self.loop.create_task(self._peers_for_value_producer(blob_hash, result_queue)))
finally:
for task in tasks:
task.cancel()
async def _peers_for_value_producer(self, blob_hash: str, result_queue: asyncio.Queue):
async def put_into_result_queue_after_pong(_peer):
try:
await self.protocol.get_rpc_peer(_peer).ping()
result_queue.put_nowait([_peer])
log.debug("pong from %s:%i for %s", _peer.address, _peer.udp_port, blob_hash)
except asyncio.TimeoutError:
pass
# prioritize peers who reply to a dht ping first
# this minimizes attempting to make tcp connections that won't work later to dead or unreachable peers
async with aclosing(self.get_iterative_value_finder(bytes.fromhex(blob_hash))) as value_finder:
async for results in value_finder:
to_put = []
for peer in results:
if peer.address == self.protocol.external_ip and self.protocol.peer_port == peer.tcp_port:
continue
is_good = self.protocol.peer_manager.peer_is_good(peer)
if is_good:
# the peer has replied recently over UDP, it can probably be reached on the TCP port
to_put.append(peer)
elif is_good is None:
if not peer.udp_port:
# TODO: use the same port for TCP and UDP
# the udp port must be guessed
# default to the ports being the same. if the TCP port appears to be a <=0.48.0
# default, including on a network with several nodes, assume the UDP port is
# offset proportionately from a starting port of 4444
udp_port_to_try = peer.tcp_port
if 3400 > peer.tcp_port > 3332:
udp_port_to_try = (peer.tcp_port - 3333) + 4444
self.loop.create_task(put_into_result_queue_after_pong(
make_kademlia_peer(peer.node_id, peer.address, udp_port_to_try, peer.tcp_port)
))
else:
self.loop.create_task(put_into_result_queue_after_pong(peer))
else:
# the peer is known to be bad/unreachable, skip trying to connect to it over TCP
log.debug("skip bad peer %s:%i for %s", peer.address, peer.tcp_port, blob_hash)
if to_put:
result_queue.put_nowait(to_put)
def accumulate_peers(self, search_queue: asyncio.Queue,
peer_queue: typing.Optional[asyncio.Queue] = None
) -> typing.Tuple[asyncio.Queue, asyncio.Task]:
queue = peer_queue or asyncio.Queue()
return queue, self.loop.create_task(self._accumulate_peers_for_value(search_queue, queue))
async def get_kademlia_peers_from_hosts(peer_list: typing.List[typing.Tuple[str, int]]) -> typing.List['KademliaPeer']:
peer_address_list = [(await resolve_host(url, port, proto='tcp'), port) for url, port in peer_list]
kademlia_peer_list = [make_kademlia_peer(None, address, None, tcp_port=port, allow_localhost=True)
for address, port in peer_address_list]
return kademlia_peer_list
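A hedged lifecycle sketch; the node id is freshly generated, and the ip, ports and bootstrap host below are placeholders, not real defaults:

import asyncio
from lbry.dht import constants
from lbry.dht.peer import PeerManager

async def run_node():
    loop = asyncio.get_event_loop()
    node = Node(loop, PeerManager(loop), constants.generate_id(),
                udp_port=4444, internal_udp_port=4444,
                peer_port=3333, external_ip='1.2.3.4')
    node.start('0.0.0.0', known_node_urls=[('dht.example.com', 4444)])
    await node.joined.wait()  # set once the routing table has peers
    try:
        stored_to = await node.announce_blob('ab' * 48)  # 96-char hex blob hash
        print('stored to %i peers' % len(stored_to))
    finally:
        node.stop()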

View file

@ -1,199 +0,0 @@
import typing
import asyncio
import logging
from dataclasses import dataclass, field
from functools import lru_cache
from prometheus_client import Gauge
from lbry.utils import is_valid_public_ipv4 as _is_valid_public_ipv4, LRUCache
from lbry.dht import constants
from lbry.dht.serialization.datagram import make_compact_address, make_compact_ip, decode_compact_address
ALLOW_LOCALHOST = False
CACHE_SIZE = 16384
log = logging.getLogger(__name__)
@lru_cache(CACHE_SIZE)
def make_kademlia_peer(node_id: typing.Optional[bytes], address: typing.Optional[str],
udp_port: typing.Optional[int] = None,
tcp_port: typing.Optional[int] = None,
allow_localhost: bool = False) -> 'KademliaPeer':
return KademliaPeer(address, node_id, udp_port, tcp_port=tcp_port, allow_localhost=allow_localhost)
def is_valid_public_ipv4(address, allow_localhost: bool = False):
allow_localhost = bool(allow_localhost or ALLOW_LOCALHOST)
return _is_valid_public_ipv4(address, allow_localhost)
class PeerManager:
peer_manager_keys_metric = Gauge(
"peer_manager_keys", "Number of keys tracked by PeerManager dicts (sum)", namespace="dht_node",
labelnames=("scope",)
)
def __init__(self, loop: asyncio.AbstractEventLoop):
self._loop = loop
self._rpc_failures: typing.Dict[
typing.Tuple[str, int], typing.Tuple[typing.Optional[float], typing.Optional[float]]
] = LRUCache(CACHE_SIZE)
self._last_replied: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
self._last_sent: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
self._last_requested: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
self._node_id_mapping: typing.Dict[typing.Tuple[str, int], bytes] = LRUCache(CACHE_SIZE)
self._node_id_reverse_mapping: typing.Dict[bytes, typing.Tuple[str, int]] = LRUCache(CACHE_SIZE)
self._node_tokens: typing.Dict[bytes, typing.Tuple[float, bytes]] = LRUCache(CACHE_SIZE)
def count_cache_keys(self):
return len(self._rpc_failures) + len(self._last_replied) + len(self._last_sent) + len(
self._last_requested) + len(self._node_id_mapping) + len(self._node_id_reverse_mapping) + len(
self._node_tokens)
def reset(self):
for statistic in (self._rpc_failures, self._last_replied, self._last_sent, self._last_requested):
statistic.clear()
def report_failure(self, address: str, udp_port: int):
now = self._loop.time()
_, previous = self._rpc_failures.pop((address, udp_port), (None, None))
self._rpc_failures[(address, udp_port)] = (previous, now)
def report_last_sent(self, address: str, udp_port: int):
now = self._loop.time()
self._last_sent[(address, udp_port)] = now
def report_last_replied(self, address: str, udp_port: int):
now = self._loop.time()
self._last_replied[(address, udp_port)] = now
def report_last_requested(self, address: str, udp_port: int):
now = self._loop.time()
self._last_requested[(address, udp_port)] = now
def clear_token(self, node_id: bytes):
self._node_tokens.pop(node_id, None)
def update_token(self, node_id: bytes, token: bytes):
now = self._loop.time()
self._node_tokens[node_id] = (now, token)
def get_node_token(self, node_id: bytes) -> typing.Optional[bytes]:
ts, token = self._node_tokens.get(node_id, (0, None))
if ts and ts > self._loop.time() - constants.TOKEN_SECRET_REFRESH_INTERVAL:
return token
def get_last_replied(self, address: str, udp_port: int) -> typing.Optional[float]:
return self._last_replied.get((address, udp_port))
def update_contact_triple(self, node_id: bytes, address: str, udp_port: int):
"""
Update the mapping of node_id -> address tuple and that of address tuple -> node_id
This is to handle peers changing addresses and ids while assuring that we only ever have
one node id / address tuple mapped to each other
"""
if (address, udp_port) in self._node_id_mapping:
self._node_id_reverse_mapping.pop(self._node_id_mapping.pop((address, udp_port)))
if node_id in self._node_id_reverse_mapping:
self._node_id_mapping.pop(self._node_id_reverse_mapping.pop(node_id))
self._node_id_mapping[(address, udp_port)] = node_id
self._node_id_reverse_mapping[node_id] = (address, udp_port)
self.peer_manager_keys_metric.labels("global").set(self.count_cache_keys())
def get_node_id_for_endpoint(self, address, port):
return self._node_id_mapping.get((address, port))
def prune(self): # TODO: periodically call this
now = self._loop.time()
to_pop = []
for (address, udp_port), (_, last_failure) in self._rpc_failures.items():
if last_failure and last_failure < now - constants.RPC_ATTEMPTS_PRUNING_WINDOW:
to_pop.append((address, udp_port))
while to_pop:
del self._rpc_failures[to_pop.pop()]
to_pop = []
for node_id, (age, token) in self._node_tokens.items(): # pylint: disable=unused-variable
if age < now - constants.TOKEN_SECRET_REFRESH_INTERVAL:
to_pop.append(node_id)
while to_pop:
del self._node_tokens[to_pop.pop()]
def contact_triple_is_good(self, node_id: bytes, address: str, udp_port: int): # pylint: disable=too-many-return-statements
"""
:return: False if peer is bad, None if peer is unknown, or True if peer is good
"""
delay = self._loop.time() - constants.CHECK_REFRESH_INTERVAL
# fixme: find a way to re-enable that without breaking other parts
# if node_id not in self._node_id_reverse_mapping or (address, udp_port) not in self._node_id_mapping:
# return
# addr_tup = (address, udp_port)
# if self._node_id_reverse_mapping[node_id] != addr_tup or self._node_id_mapping[addr_tup] != node_id:
# return
previous_failure, most_recent_failure = self._rpc_failures.get((address, udp_port), (None, None))
last_requested = self._last_requested.get((address, udp_port))
last_replied = self._last_replied.get((address, udp_port))
if node_id is None:
return None
if most_recent_failure and last_replied:
if delay < last_replied > most_recent_failure:
return True
elif last_replied > most_recent_failure:
return
return False
elif previous_failure and most_recent_failure and most_recent_failure > delay:
return False
elif last_replied and last_replied > delay:
return True
elif last_requested and last_requested > delay:
return None
return
def peer_is_good(self, peer: 'KademliaPeer'):
return self.contact_triple_is_good(peer.node_id, peer.address, peer.udp_port)
def decode_tcp_peer_from_compact_address(compact_address: bytes) -> 'KademliaPeer':
node_id, address, tcp_port = decode_compact_address(compact_address)
return make_kademlia_peer(node_id, address, udp_port=None, tcp_port=tcp_port)
@dataclass(unsafe_hash=True)
class KademliaPeer:
address: str = field(hash=True)
_node_id: typing.Optional[bytes] = field(hash=True)
udp_port: typing.Optional[int] = field(hash=True)
tcp_port: typing.Optional[int] = field(compare=False, hash=False)
protocol_version: typing.Optional[int] = field(default=1, compare=False, hash=False)
allow_localhost: bool = field(default=False, compare=False, hash=False)
def __post_init__(self):
if self._node_id is not None:
if not len(self._node_id) == constants.HASH_LENGTH:
raise ValueError("invalid node_id: {}".format(self._node_id.hex()))
if self.udp_port is not None and not 1024 <= self.udp_port <= 65535:
raise ValueError(f"invalid udp port: {self.address}:{self.udp_port}")
if self.tcp_port is not None and not 1024 <= self.tcp_port <= 65535:
raise ValueError(f"invalid tcp port: {self.address}:{self.tcp_port}")
if not is_valid_public_ipv4(self.address, self.allow_localhost):
raise ValueError(f"invalid ip address: '{self.address}'")
def update_tcp_port(self, tcp_port: int):
self.tcp_port = tcp_port
@property
def node_id(self) -> bytes:
return self._node_id
def compact_address_udp(self) -> bytearray:
return make_compact_address(self.node_id, self.address, self.udp_port)
def compact_address_tcp(self) -> bytearray:
return make_compact_address(self.node_id, self.address, self.tcp_port)
def compact_ip(self):
return make_compact_ip(self.address)
def __str__(self):
return f"{self.__class__.__name__}({self.node_id.hex()[:8]}@{self.address}:{self.udp_port}-{self.tcp_port})"

View file

@ -1,76 +0,0 @@
import asyncio
import typing
from lbry.dht import constants
if typing.TYPE_CHECKING:
from lbry.dht.peer import KademliaPeer, PeerManager
class DictDataStore:
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager'):
# Dictionary format:
# { <key>: [(<contact>, <age>), ...] }
self._data_store: typing.Dict[bytes, typing.List[typing.Tuple['KademliaPeer', float]]] = {}
self.loop = loop
self._peer_manager = peer_manager
self.completed_blobs: typing.Set[str] = set()
def keys(self):
return self._data_store.keys()
def __len__(self):
return self._data_store.__len__()
def removed_expired_peers(self):
now = self.loop.time()
keys = list(self._data_store.keys())
for key in keys:
to_remove = []
for (peer, ts) in self._data_store[key]:
if ts + constants.DATA_EXPIRATION < now or self._peer_manager.peer_is_good(peer) is False:
to_remove.append((peer, ts))
for item in to_remove:
self._data_store[key].remove(item)
if not self._data_store[key]:
del self._data_store[key]
def filter_bad_and_expired_peers(self, key: bytes) -> typing.Iterator['KademliaPeer']:
"""
Returns only non-expired and unknown/good peers
"""
for peer in self.filter_expired_peers(key):
if self._peer_manager.peer_is_good(peer) is not False:
yield peer
def filter_expired_peers(self, key: bytes) -> typing.Iterator['KademliaPeer']:
"""
Returns only non-expired peers
"""
now = self.loop.time()
for (peer, ts) in self._data_store.get(key, []):
if ts + constants.DATA_EXPIRATION > now:
yield peer
def has_peers_for_blob(self, key: bytes) -> bool:
return key in self._data_store
def add_peer_to_blob(self, contact: 'KademliaPeer', key: bytes) -> None:
now = self.loop.time()
if key in self._data_store:
current = list(filter(lambda x: x[0] == contact, self._data_store[key]))
if len(current) > 0:
self._data_store[key][self._data_store[key].index(current[0])] = contact, now
else:
self._data_store[key].append((contact, now))
else:
self._data_store[key] = [(contact, now)]
def get_peers_for_blob(self, key: bytes) -> typing.List['KademliaPeer']:
return list(self.filter_bad_and_expired_peers(key))
def get_storing_contacts(self) -> typing.List['KademliaPeer']:
peers = set()
for _, stored in self._data_store.items():
peers.update(set(map(lambda tup: tup[0], stored)))
return list(peers)
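A sketch of the store's bookkeeping, reusing the same kind of illustrative peer as above:

import asyncio
from lbry.dht.peer import PeerManager, make_kademlia_peer

loop = asyncio.new_event_loop()
store = DictDataStore(loop, PeerManager(loop))
peer = make_kademlia_peer(b'\x01' * 48, '8.8.8.8', udp_port=4444, tcp_port=3333)
blob = bytes.fromhex('ff' * 48)
store.add_peer_to_blob(peer, blob)
assert store.has_peers_for_blob(blob)
assert store.get_peers_for_blob(blob) == [peer]  # fresh and not known-bad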

View file

@ -1,25 +0,0 @@
from lbry.dht import constants
class Distance:
"""Calculate the XOR result between two string variables.
Frequently we re-use one of the points so as an optimization
we pre-calculate the value of that point.
"""
def __init__(self, key: bytes):
if len(key) != constants.HASH_LENGTH:
raise ValueError(f"invalid key length: {len(key)}")
self.key = key
self.val_key_one = int.from_bytes(key, 'big')
def __call__(self, key_two: bytes) -> int:
if len(key_two) != constants.HASH_LENGTH:
raise ValueError(f"invalid length of key to compare: {len(key_two)}")
val_key_two = int.from_bytes(key_two, 'big')
return self.val_key_one ^ val_key_two
def is_closer(self, key_a: bytes, key_b: bytes) -> bool:
"""Returns true is `key_a` is closer to `key` than `key_b` is"""
return self(key_a) < self(key_b)
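Worked example: keys compare as big-endian integers under XOR, so a low-order bit flip is "closer" than a high-order one.

d = Distance(b'\x00' * 48)
near = b'\x00' * 47 + b'\x01'  # XOR distance 1
far = b'\x80' + b'\x00' * 47   # XOR distance 2**383
assert d(near) == 1 and d(far) == 2 ** 383
assert d.is_closer(near, far)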

View file

@ -1,361 +0,0 @@
import asyncio
from itertools import chain
from collections import defaultdict, OrderedDict
from collections.abc import AsyncIterator
import typing
import logging
from typing import TYPE_CHECKING
from lbry.dht import constants
from lbry.dht.error import RemoteException, TransportNotConnected
from lbry.dht.protocol.distance import Distance
from lbry.dht.peer import make_kademlia_peer, decode_tcp_peer_from_compact_address
from lbry.dht.serialization.datagram import PAGE_KEY
if TYPE_CHECKING:
from lbry.dht.protocol.protocol import KademliaProtocol
from lbry.dht.peer import PeerManager, KademliaPeer
log = logging.getLogger(__name__)
class FindResponse:
@property
def found(self) -> bool:
raise NotImplementedError()
def get_close_triples(self) -> typing.List[typing.Tuple[bytes, str, int]]:
raise NotImplementedError()
def get_close_kademlia_peers(self, peer_info) -> typing.Generator['KademliaPeer', None, None]:
for contact_triple in self.get_close_triples():
node_id, address, udp_port = contact_triple
try:
yield make_kademlia_peer(node_id, address, udp_port)
except ValueError:
log.warning("misbehaving peer %s:%i returned peer with reserved ip %s:%i", peer_info.address,
peer_info.udp_port, address, udp_port)
class FindNodeResponse(FindResponse):
def __init__(self, key: bytes, close_triples: typing.List[typing.Tuple[bytes, str, int]]):
self.key = key
self.close_triples = close_triples
@property
def found(self) -> bool:
return self.key in [triple[0] for triple in self.close_triples]
def get_close_triples(self) -> typing.List[typing.Tuple[bytes, str, int]]:
return self.close_triples
class FindValueResponse(FindResponse):
def __init__(self, key: bytes, result_dict: typing.Dict):
self.key = key
self.token = result_dict[b'token']
self.close_triples: typing.List[typing.Tuple[bytes, bytes, int]] = result_dict.get(b'contacts', [])
self.found_compact_addresses = result_dict.get(key, [])
self.pages = int(result_dict.get(PAGE_KEY, 0))
@property
def found(self) -> bool:
return len(self.found_compact_addresses) > 0
def get_close_triples(self) -> typing.List[typing.Tuple[bytes, str, int]]:
return [(node_id, address.decode(), port) for node_id, address, port in self.close_triples]
class IterativeFinder(AsyncIterator):
def __init__(self, loop: asyncio.AbstractEventLoop,
protocol: 'KademliaProtocol', key: bytes,
max_results: typing.Optional[int] = constants.K,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
if len(key) != constants.HASH_LENGTH:
raise ValueError("invalid key length: %i" % len(key))
self.loop = loop
self.peer_manager = protocol.peer_manager
self.protocol = protocol
self.key = key
self.max_results = max(constants.K, max_results)
self.active: typing.Dict['KademliaPeer', int] = OrderedDict() # peer: distance, sorted
self.contacted: typing.Set['KademliaPeer'] = set()
self.distance = Distance(key)
self.iteration_queue = asyncio.Queue()
self.running_probes: typing.Dict['KademliaPeer', asyncio.Task] = {}
self.iteration_count = 0
self.running = False
self.tasks: typing.List[asyncio.Task] = []
for peer in shortlist:
if peer.node_id:
self._add_active(peer, force=True)
else:
# seed nodes
self._schedule_probe(peer)
async def send_probe(self, peer: 'KademliaPeer') -> FindResponse:
"""
Send the rpc request to the peer and return an object with the FindResponse interface
"""
raise NotImplementedError()
def search_exhausted(self):
"""
This method ends the iterator when there are no more peers to contact.
Override to provide final results.
"""
self.iteration_queue.put_nowait(None)
def check_result_ready(self, response: FindResponse):
"""
Called after adding peers from an rpc result to the shortlist.
This method is responsible for putting a result for the generator into the Queue
"""
raise NotImplementedError()
def get_initial_result(self) -> typing.List['KademliaPeer']: #pylint: disable=no-self-use
"""
Get an initial or cached result to be put into the Queue. Used for findValue requests where the blob
has peers in the local data store of blobs announced to us
"""
return []
def _add_active(self, peer, force=False):
if not force and self.peer_manager.peer_is_good(peer) is False:
return
if peer in self.contacted:
return
if peer not in self.active and peer.node_id and peer.node_id != self.protocol.node_id:
self.active[peer] = self.distance(peer.node_id)
self.active = OrderedDict(sorted(self.active.items(), key=lambda item: item[1]))
async def _handle_probe_result(self, peer: 'KademliaPeer', response: FindResponse):
self._add_active(peer)
for new_peer in response.get_close_kademlia_peers(peer):
self._add_active(new_peer)
self.check_result_ready(response)
self._log_state(reason="check result")
def _reset_closest(self, peer):
if peer in self.active:
del self.active[peer]
async def _send_probe(self, peer: 'KademliaPeer'):
try:
response = await self.send_probe(peer)
except asyncio.TimeoutError:
self._reset_closest(peer)
return
except asyncio.CancelledError:
log.debug("%s[%x] cancelled probe",
type(self).__name__, id(self))
raise
except ValueError as err:
log.warning(str(err))
self._reset_closest(peer)
return
except TransportNotConnected:
await self._aclose(reason="not connected")
return
except RemoteException:
self._reset_closest(peer)
return
return await self._handle_probe_result(peer, response)
def _search_round(self):
"""
Send up to constants.ALPHA (5) probes to the closest active peers
"""
added = 0
for index, peer in enumerate(self.active.keys()):
if index == 0:
log.debug("%s[%x] closest to probe: %s",
type(self).__name__, id(self),
peer.node_id.hex()[:8])
if peer in self.contacted:
continue
if len(self.running_probes) >= constants.ALPHA:
break
if index > (constants.K + len(self.running_probes)):
break
origin_address = (peer.address, peer.udp_port)
if peer.node_id == self.protocol.node_id:
continue
if origin_address == (self.protocol.external_ip, self.protocol.udp_port):
continue
self._schedule_probe(peer)
added += 1
log.debug("%s[%x] running %d probes for key %s",
type(self).__name__, id(self),
len(self.running_probes), self.key.hex()[:8])
if not added and not self.running_probes:
log.debug("%s[%x] search for %s exhausted",
type(self).__name__, id(self),
self.key.hex()[:8])
self.search_exhausted()
def _schedule_probe(self, peer: 'KademliaPeer'):
self.contacted.add(peer)
t = self.loop.create_task(self._send_probe(peer))
def callback(_):
self.running_probes.pop(peer, None)
if self.running:
self._search_round()
t.add_done_callback(callback)
self.running_probes[peer] = t
def _log_state(self, reason="?"):
log.debug("%s[%x] [%s] %s: %i active nodes %i contacted %i produced %i queued",
type(self).__name__, id(self), self.key.hex()[:8],
reason, len(self.active), len(self.contacted),
self.iteration_count, self.iteration_queue.qsize())
def __aiter__(self):
if self.running:
raise Exception("already running")
self.running = True
self.loop.call_soon(self._search_round)
return self
async def __anext__(self) -> typing.List['KademliaPeer']:
try:
if self.iteration_count == 0:
result = self.get_initial_result() or await self.iteration_queue.get()
else:
result = await self.iteration_queue.get()
if not result:
raise StopAsyncIteration
self.iteration_count += 1
return result
except asyncio.CancelledError:
await self._aclose(reason="cancelled")
raise
except StopAsyncIteration:
await self._aclose(reason="no more results")
raise
async def _aclose(self, reason="?"):
log.debug("%s[%x] [%s] shutdown because %s: %i active nodes %i contacted %i produced %i queued",
type(self).__name__, id(self), self.key.hex()[:8],
reason, len(self.active), len(self.contacted),
self.iteration_count, self.iteration_queue.qsize())
self.running = False
self.iteration_queue.put_nowait(None)
for task in chain(self.tasks, self.running_probes.values()):
task.cancel()
self.tasks.clear()
self.running_probes.clear()
async def aclose(self):
if self.running:
await self._aclose(reason="aclose")
log.debug("%s[%x] [%s] async close completed",
type(self).__name__, id(self), self.key.hex()[:8])
class IterativeNodeFinder(IterativeFinder):
def __init__(self, loop: asyncio.AbstractEventLoop,
protocol: 'KademliaProtocol', key: bytes,
max_results: typing.Optional[int] = constants.K,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
super().__init__(loop, protocol, key, max_results, shortlist)
self.yielded_peers: typing.Set['KademliaPeer'] = set()
async def send_probe(self, peer: 'KademliaPeer') -> FindNodeResponse:
log.debug("probe %s:%d (%s) for NODE %s",
peer.address, peer.udp_port, peer.node_id.hex()[:8] if peer.node_id else '', self.key.hex()[:8])
response = await self.protocol.get_rpc_peer(peer).find_node(self.key)
return FindNodeResponse(self.key, response)
def search_exhausted(self):
self.put_result(self.active.keys(), finish=True)
def put_result(self, from_iter: typing.Iterable['KademliaPeer'], finish=False):
not_yet_yielded = [
peer for peer in from_iter
if peer not in self.yielded_peers
and peer.node_id != self.protocol.node_id
and self.peer_manager.peer_is_good(peer) is True # return only peers who answered
]
not_yet_yielded.sort(key=lambda peer: self.distance(peer.node_id))
to_yield = not_yet_yielded[:max(constants.K, self.max_results)]
if to_yield:
self.yielded_peers.update(to_yield)
self.iteration_queue.put_nowait(to_yield)
if finish:
self.iteration_queue.put_nowait(None)
def check_result_ready(self, response: FindNodeResponse):
found = response.found and self.key != self.protocol.node_id
if found:
log.debug("found")
return self.put_result(self.active.keys(), finish=True)
class IterativeValueFinder(IterativeFinder):
def __init__(self, loop: asyncio.AbstractEventLoop,
protocol: 'KademliaProtocol', key: bytes,
max_results: typing.Optional[int] = constants.K,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
super().__init__(loop, protocol, key, max_results, shortlist)
self.blob_peers: typing.Set['KademliaPeer'] = set()
# this tracks the index of the most recent page we requested from each peer
self.peer_pages: typing.DefaultDict['KademliaPeer', int] = defaultdict(int)
# this tracks the set of blob peers returned by each peer
self.discovered_peers: typing.Dict['KademliaPeer', typing.Set['KademliaPeer']] = defaultdict(set)
async def send_probe(self, peer: 'KademliaPeer') -> FindValueResponse:
log.debug("probe %s:%d (%s) for VALUE %s",
peer.address, peer.udp_port, peer.node_id.hex()[:8], self.key.hex()[:8])
page = self.peer_pages[peer]
response = await self.protocol.get_rpc_peer(peer).find_value(self.key, page=page)
parsed = FindValueResponse(self.key, response)
if not parsed.found:
return parsed
already_known = len(self.discovered_peers[peer])
decoded_peers = set()
for compact_addr in parsed.found_compact_addresses:
try:
decoded_peers.add(decode_tcp_peer_from_compact_address(compact_addr))
except ValueError:
log.warning("misbehaving peer %s:%i returned invalid peer for blob",
peer.address, peer.udp_port)
self.peer_manager.report_failure(peer.address, peer.udp_port)
parsed.found_compact_addresses.clear()
return parsed
self.discovered_peers[peer].update(decoded_peers)
log.debug("probed %s:%i page %i, %i known", peer.address, peer.udp_port, page,
already_known + len(parsed.found_compact_addresses))
if len(self.discovered_peers[peer]) != already_known + len(parsed.found_compact_addresses):
log.warning("misbehaving peer %s:%i returned duplicate peers for blob", peer.address, peer.udp_port)
elif len(parsed.found_compact_addresses) >= constants.K and self.peer_pages[peer] < parsed.pages:
# the peer returned a full page and indicates it has more
self.peer_pages[peer] += 1
if peer in self.contacted:
# the peer must be removed from self.contacted so that it will be probed for the next page
self.contacted.remove(peer)
return parsed
def check_result_ready(self, response: FindValueResponse):
if response.found:
blob_peers = [decode_tcp_peer_from_compact_address(compact_addr)
for compact_addr in response.found_compact_addresses]
to_yield = []
for blob_peer in blob_peers:
if blob_peer not in self.blob_peers:
self.blob_peers.add(blob_peer)
to_yield.append(blob_peer)
if to_yield:
self.iteration_queue.put_nowait(to_yield)
def get_initial_result(self) -> typing.List['KademliaPeer']:
if self.protocol.data_store.has_peers_for_blob(self.key):
return self.protocol.data_store.get_peers_for_blob(self.key)
return []
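A hedged sketch of driving a finder from the Node API above; a finder is an async iterator yielding batches of peers until the search is exhausted, and aclosing guarantees cleanup:

from lbry.utils import aclosing

async def find_blob_peers(node, blob_hash: str):
    peers = []
    async with aclosing(node.get_iterative_value_finder(bytes.fromhex(blob_hash))) as finder:
        async for batch in finder:
            peers.extend(batch)  # KademliaPeers with tcp_port set for blob transfer
    return peers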

View file

@ -1,666 +0,0 @@
import logging
import socket
import functools
import hashlib
import asyncio
import time
import typing
import random
from asyncio.protocols import DatagramProtocol
from asyncio.transports import DatagramTransport
from prometheus_client import Gauge, Counter, Histogram
from lbry.dht import constants
from lbry.dht.serialization.bencoding import DecodeError
from lbry.dht.serialization.datagram import decode_datagram, ErrorDatagram, ResponseDatagram, RequestDatagram
from lbry.dht.serialization.datagram import RESPONSE_TYPE, ERROR_TYPE, PAGE_KEY
from lbry.dht.error import RemoteException, TransportNotConnected
from lbry.dht.protocol.routing_table import TreeRoutingTable
from lbry.dht.protocol.data_store import DictDataStore
from lbry.dht.peer import make_kademlia_peer
if typing.TYPE_CHECKING:
from lbry.dht.peer import PeerManager, KademliaPeer
log = logging.getLogger(__name__)
OLD_PROTOCOL_ERRORS = {
"findNode() takes exactly 2 arguments (5 given)": "0.19.1",
"findValue() takes exactly 2 arguments (5 given)": "0.19.1"
}
class KademliaRPC:
stored_blob_metric = Gauge(
"stored_blobs", "Number of blobs announced by other peers", namespace="dht_node",
labelnames=("scope",),
)
def __init__(self, protocol: 'KademliaProtocol', loop: asyncio.AbstractEventLoop, peer_port: int = 3333):
self.protocol = protocol
self.loop = loop
self.peer_port = peer_port
self.old_token_secret: bytes = None
self.token_secret = constants.generate_id()
def compact_address(self):
compact_ip = functools.reduce(lambda buff, x: buff + bytearray([int(x)]),
self.protocol.external_ip.split('.'), bytearray())
compact_port = self.peer_port.to_bytes(2, 'big')
return compact_ip + compact_port + self.protocol.node_id
@staticmethod
def ping():
return b'pong'
def store(self, rpc_contact: 'KademliaPeer', blob_hash: bytes, token: bytes, port: int) -> bytes:
if len(blob_hash) != constants.HASH_BITS // 8:
raise ValueError(f"invalid length of blob hash: {len(blob_hash)}")
if not 0 < port < 65535:
raise ValueError(f"invalid tcp port: {port}")
rpc_contact.update_tcp_port(port)
if not self.verify_token(token, rpc_contact.compact_ip()):
if self.loop.time() - self.protocol.started_listening_time < constants.TOKEN_SECRET_REFRESH_INTERVAL:
pass
else:
raise ValueError("Invalid token")
self.protocol.data_store.add_peer_to_blob(
rpc_contact, blob_hash
)
self.stored_blob_metric.labels("global").set(len(self.protocol.data_store))
return b'OK'
def find_node(self, rpc_contact: 'KademliaPeer', key: bytes) -> typing.List[typing.Tuple[bytes, str, int]]:
if len(key) != constants.HASH_LENGTH:
raise ValueError("invalid contact node_id length: %i" % len(key))
contacts = self.protocol.routing_table.find_close_peers(key, sender_node_id=rpc_contact.node_id)
contact_triples = []
for contact in contacts[:constants.K * 2]:
contact_triples.append((contact.node_id, contact.address, contact.udp_port))
return contact_triples
def find_value(self, rpc_contact: 'KademliaPeer', key: bytes, page: int = 0):
page = page if page > 0 else 0
if len(key) != constants.HASH_LENGTH:
raise ValueError("invalid blob_exchange hash length: %i" % len(key))
response = {
b'token': self.make_token(rpc_contact.compact_ip()),
}
if not page:
response[b'contacts'] = self.find_node(rpc_contact, key)[:constants.K]
if self.protocol.protocol_version:
response[b'protocolVersion'] = self.protocol.protocol_version
# get peers we have stored for this blob_exchange
peers = [
peer.compact_address_tcp()
for peer in self.protocol.data_store.get_peers_for_blob(key)
if not rpc_contact.tcp_port or peer.compact_address_tcp() != rpc_contact.compact_address_tcp()
]
# if we don't have k storing peers to return and we have this hash locally, include our contact information
if len(peers) < constants.K and key.hex() in self.protocol.data_store.completed_blobs:
peers.append(self.compact_address())
if not peers:
response[PAGE_KEY] = 0
else:
response[PAGE_KEY] = (len(peers) // (constants.K + 1)) + 1 # how many pages of peers we have for the blob
if len(peers) > constants.K:
random.Random(self.protocol.node_id).shuffle(peers)
if page * constants.K < len(peers):
response[key] = peers[page * constants.K:page * constants.K + constants.K]
return response
def refresh_token(self): # TODO: this needs to be called periodically
self.old_token_secret = self.token_secret
self.token_secret = constants.generate_id()
def make_token(self, compact_ip):
h = hashlib.new('sha384')
h.update(self.token_secret + compact_ip)
return h.digest()
def verify_token(self, token, compact_ip):
h = hashlib.new('sha384')
h.update(self.token_secret + compact_ip)
if self.old_token_secret and not token == h.digest(): # TODO: why should we be accepting the previous token?
h = hashlib.new('sha384')
h.update(self.old_token_secret + compact_ip)
if not token == h.digest():
return False
return True
class RemoteKademliaRPC:
"""
Encapsulates RPC calls to remote Peers
"""
def __init__(self, loop: asyncio.AbstractEventLoop, peer_tracker: 'PeerManager', protocol: 'KademliaProtocol',
peer: 'KademliaPeer'):
self.loop = loop
self.peer_tracker = peer_tracker
self.protocol = protocol
self.peer = peer
async def ping(self) -> bytes:
"""
:return: b'pong'
"""
response = await self.protocol.send_request(
self.peer, RequestDatagram.make_ping(self.protocol.node_id)
)
return response.response
async def store(self, blob_hash: bytes) -> bytes:
"""
:param blob_hash: blob hash as bytes
:return: b'OK'
"""
if len(blob_hash) != constants.HASH_BITS // 8:
raise ValueError(f"invalid length of blob hash: {len(blob_hash)}")
if not self.protocol.peer_port or not 0 < self.protocol.peer_port < 65535:
raise ValueError(f"invalid tcp port: {self.protocol.peer_port}")
token = self.peer_tracker.get_node_token(self.peer.node_id)
if not token:
find_value_resp = await self.find_value(blob_hash)
token = find_value_resp[b'token']
response = await self.protocol.send_request(
self.peer, RequestDatagram.make_store(self.protocol.node_id, blob_hash, token, self.protocol.peer_port)
)
return response.response
async def find_node(self, key: bytes) -> typing.List[typing.Tuple[bytes, str, int]]:
"""
:return: [(node_id, address, udp_port), ...]
"""
if len(key) != constants.HASH_BITS // 8:
raise ValueError(f"invalid length of find node key: {len(key)}")
response = await self.protocol.send_request(
self.peer, RequestDatagram.make_find_node(self.protocol.node_id, key)
)
return [(node_id, address.decode(), udp_port) for node_id, address, udp_port in response.response]
async def find_value(self, key: bytes, page: int = 0) -> typing.Dict:
"""
:return: {
b'token': <token bytes>,
b'contacts': [(node_id, address, udp_port), ...]
<key bytes>: [<blob_peer_compact_address, ...]
}
"""
if len(key) != constants.HASH_BITS // 8:
raise ValueError(f"invalid length of find value key: {len(key)}")
response = await self.protocol.send_request(
self.peer, RequestDatagram.make_find_value(self.protocol.node_id, key, page=page)
)
self.peer_tracker.update_token(self.peer.node_id, response.response[b'token'])
return response.response
class PingQueue:
def __init__(self, loop: asyncio.AbstractEventLoop, protocol: 'KademliaProtocol'):
self._loop = loop
self._protocol = protocol
self._pending_contacts: typing.Dict['KademliaPeer', float] = {}
self._process_task: asyncio.Task = None
self._running = False
self._running_pings: typing.Set[asyncio.Task] = set()
self._default_delay = constants.MAYBE_PING_DELAY
@property
def running(self):
return self._running
@property
def busy(self):
return self._running and (any(self._running_pings) or any(self._pending_contacts))
def enqueue_maybe_ping(self, *peers: 'KademliaPeer', delay: typing.Optional[float] = None):
delay = delay if delay is not None else self._default_delay
now = self._loop.time()
for peer in peers:
if peer not in self._pending_contacts or now + delay < self._pending_contacts[peer]:
self._pending_contacts[peer] = delay + now
def maybe_ping(self, peer: 'KademliaPeer'):
async def ping_task():
try:
if self._protocol.peer_manager.peer_is_good(peer):
if not self._protocol.routing_table.get_peer(peer.node_id):
self._protocol.add_peer(peer)
return
await self._protocol.get_rpc_peer(peer).ping()
except (asyncio.TimeoutError, RemoteException):
pass
task = self._loop.create_task(ping_task())
task.add_done_callback(lambda _: None if task not in self._running_pings else self._running_pings.remove(task))
self._running_pings.add(task)
async def _process(self): # send up to 1 ping per second
while True:
enqueued = list(self._pending_contacts.keys())
now = self._loop.time()
for peer in enqueued:
if self._pending_contacts[peer] <= now:
del self._pending_contacts[peer]
self.maybe_ping(peer)
break
await asyncio.sleep(1)
def start(self):
assert not self._running
self._running = True
if not self._process_task:
self._process_task = self._loop.create_task(self._process())
def stop(self):
assert self._running
self._running = False
if self._process_task:
self._process_task.cancel()
self._process_task = None
while self._running_pings:
self._running_pings.pop().cancel()
class KademliaProtocol(DatagramProtocol):
request_sent_metric = Counter(
"request_sent", "Number of requests send from DHT RPC protocol", namespace="dht_node",
labelnames=("method",),
)
request_success_metric = Counter(
"request_success", "Number of successful requests", namespace="dht_node",
labelnames=("method",),
)
request_error_metric = Counter(
"request_error", "Number of errors returned from request to other peers", namespace="dht_node",
labelnames=("method",),
)
HISTOGRAM_BUCKETS = (
.005, .01, .025, .05, .075, .1, .25, .5, .75, 1.0, 2.5, 3.0, 3.5, 4.0, 4.50, 5.0, 5.50, 6.0, float('inf')
)
response_time_metric = Histogram(
"response_time", "Response times of DHT RPC requests", namespace="dht_node", buckets=HISTOGRAM_BUCKETS,
labelnames=("method",)
)
received_request_metric = Counter(
"received_request", "Number of received DHT RPC requests", namespace="dht_node",
labelnames=("method",),
)
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', node_id: bytes, external_ip: str,
udp_port: int, peer_port: int, rpc_timeout: float = constants.RPC_TIMEOUT,
split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_bootstrap_node: bool = False):
self.peer_manager = peer_manager
self.loop = loop
self.node_id = node_id
self.external_ip = external_ip
self.udp_port = udp_port
self.peer_port = peer_port
self.is_seed_node = False
self.partial_messages: typing.Dict[bytes, typing.Dict[bytes, bytes]] = {}
self.sent_messages: typing.Dict[bytes, typing.Tuple['KademliaPeer', asyncio.Future, RequestDatagram]] = {}
self.protocol_version = constants.PROTOCOL_VERSION
self.started_listening_time = 0
self.transport: DatagramTransport = None
self.old_token_secret = constants.generate_id()
self.token_secret = constants.generate_id()
self.routing_table = TreeRoutingTable(
self.loop, self.peer_manager, self.node_id, split_buckets_under_index, is_bootstrap_node=is_bootstrap_node)
self.data_store = DictDataStore(self.loop, self.peer_manager)
self.ping_queue = PingQueue(self.loop, self)
self.node_rpc = KademliaRPC(self, self.loop, self.peer_port)
self.rpc_timeout = rpc_timeout
self._split_lock = asyncio.Lock()
self._to_remove: typing.Set['KademliaPeer'] = set()
self._to_add: typing.Set['KademliaPeer'] = set()
self._wakeup_routing_task = asyncio.Event()
self.maintaining_routing_task: typing.Optional[asyncio.Task] = None
@functools.lru_cache(128)
def get_rpc_peer(self, peer: 'KademliaPeer') -> RemoteKademliaRPC:
return RemoteKademliaRPC(self.loop, self.peer_manager, self, peer)
def start(self):
self.maintaining_routing_task = self.loop.create_task(self.routing_table_task())
def stop(self):
if self.maintaining_routing_task:
self.maintaining_routing_task.cancel()
if self.transport:
self.disconnect()
def disconnect(self):
self.transport.close()
def connection_made(self, transport: DatagramTransport):
self.transport = transport
def connection_lost(self, exc):
self.stop()
@staticmethod
def _migrate_incoming_rpc_args(peer: 'KademliaPeer', method: bytes, *args) -> typing.Tuple[typing.Tuple,
typing.Dict]:
if method == b'store' and peer.protocol_version == 0:
if isinstance(args[1], dict):
blob_hash = args[0]
token = args[1].pop(b'token', None)
port = args[1].pop(b'port', -1)
original_publisher_id = args[1].pop(b'lbryid', None)
age = 0
return (blob_hash, token, port, original_publisher_id, age), {}
return args, {}
async def _add_peer(self, peer: 'KademliaPeer'):
async def probe(some_peer: 'KademliaPeer'):
rpc_peer = self.get_rpc_peer(some_peer)
await rpc_peer.ping()
return await self.routing_table.add_peer(peer, probe)
def add_peer(self, peer: 'KademliaPeer'):
if peer.node_id == self.node_id:
return False
self._to_add.add(peer)
self._wakeup_routing_task.set()
def remove_peer(self, peer: 'KademliaPeer'):
self._to_remove.add(peer)
self._wakeup_routing_task.set()
async def routing_table_task(self):
while True:
while self._to_remove:
async with self._split_lock:
peer = self._to_remove.pop()
self.routing_table.remove_peer(peer)
while self._to_add:
async with self._split_lock:
await self._add_peer(self._to_add.pop())
await asyncio.gather(self._wakeup_routing_task.wait(), asyncio.sleep(.1))
self._wakeup_routing_task.clear()
def _handle_rpc(self, sender_contact: 'KademliaPeer', message: RequestDatagram):
assert sender_contact.node_id != self.node_id, (sender_contact.node_id.hex()[:8],
self.node_id.hex()[:8])
method = message.method
if method not in [b'ping', b'store', b'findNode', b'findValue']:
raise AttributeError('Invalid method: %s' % message.method.decode())
if message.args and isinstance(message.args[-1], dict) and b'protocolVersion' in message.args[-1]:
# args don't need reformatting
args, kwargs = tuple(message.args[:-1]), message.args[-1]
else:
args, kwargs = self._migrate_incoming_rpc_args(sender_contact, message.method, *message.args)
log.debug("%s:%i RECV CALL %s %s:%i", self.external_ip, self.udp_port, message.method.decode(),
sender_contact.address, sender_contact.udp_port)
if method == b'ping':
result = self.node_rpc.ping()
elif method == b'store':
blob_hash, token, port, original_publisher_id, age = args[:5] # pylint: disable=unused-variable
result = self.node_rpc.store(sender_contact, blob_hash, token, port)
else:
key = args[0]
page = kwargs.get(PAGE_KEY, 0)
if method == b'findNode':
result = self.node_rpc.find_node(sender_contact, key)
else:
assert method == b'findValue'
result = self.node_rpc.find_value(sender_contact, key, page)
self.send_response(
sender_contact, ResponseDatagram(RESPONSE_TYPE, message.rpc_id, self.node_id, result),
)
def handle_request_datagram(self, address: typing.Tuple[str, int], request_datagram: RequestDatagram):
# This is an RPC method request
self.received_request_metric.labels(method=request_datagram.method).inc()
self.peer_manager.report_last_requested(address[0], address[1])
peer = self.routing_table.get_peer(request_datagram.node_id)
if not peer:
try:
peer = make_kademlia_peer(request_datagram.node_id, address[0], address[1])
except ValueError as err:
log.warning("error replying to %s: %s", address[0], str(err))
return
try:
self._handle_rpc(peer, request_datagram)
# if the contact is not known to be bad (yet) and we haven't yet queried it, send it a ping so that it
# will be added to our routing table if successful
is_good = self.peer_manager.peer_is_good(peer)
if is_good is None:
self.ping_queue.enqueue_maybe_ping(peer)
# only add a requesting contact to the routing table if it has replied to one of our requests
elif is_good is True:
self.add_peer(peer)
except ValueError as err:
log.debug("error raised handling %s request from %s:%i - %s(%s)",
request_datagram.method, peer.address, peer.udp_port, str(type(err)),
str(err))
self.send_error(
peer,
ErrorDatagram(ERROR_TYPE, request_datagram.rpc_id, self.node_id, str(type(err)).encode(),
str(err).encode())
)
except Exception as err:
log.warning("error raised handling %s request from %s:%i - %s(%s)",
request_datagram.method, peer.address, peer.udp_port, str(type(err)),
str(err))
self.send_error(
peer,
ErrorDatagram(ERROR_TYPE, request_datagram.rpc_id, self.node_id, str(type(err)).encode(),
str(err).encode())
)
def handle_response_datagram(self, address: typing.Tuple[str, int], response_datagram: ResponseDatagram):
# Find the message that triggered this response
if response_datagram.rpc_id in self.sent_messages:
peer, future, _ = self.sent_messages[response_datagram.rpc_id]
if peer.address != address[0]:
future.set_exception(
RemoteException(f"response from {address[0]}, expected {peer.address}")
)
return
# We got a result from the RPC
if peer.node_id == self.node_id:
future.set_exception(RemoteException("node has our node id"))
return
elif response_datagram.node_id == self.node_id:
future.set_exception(RemoteException("incoming message is from our node id"))
return
peer = make_kademlia_peer(response_datagram.node_id, address[0], address[1])
self.peer_manager.report_last_replied(address[0], address[1])
self.peer_manager.update_contact_triple(peer.node_id, address[0], address[1])
if not future.cancelled():
future.set_result(response_datagram)
self.add_peer(peer)
else:
log.warning("%s:%i replied, but after we cancelled the request attempt",
peer.address, peer.udp_port)
else:
# If the original message isn't found, it must have timed out
# TODO: we should probably do something with this...
pass
def handle_error_datagram(self, address, error_datagram: ErrorDatagram):
# The RPC request raised a remote exception; raise it locally
remote_exception = RemoteException(f"{error_datagram.exception_type}({error_datagram.response})")
if error_datagram.rpc_id in self.sent_messages:
peer, future, request = self.sent_messages.pop(error_datagram.rpc_id)
if (peer.address, peer.udp_port) != address:
future.set_exception(
RemoteException(
f"response from {address[0]}:{address[1]}, "
f"expected {peer.address}:{peer.udp_port}"
)
)
return
error_msg = f"" \
f"Error sending '{request.method}' to {peer.address}:{peer.udp_port}\n" \
f"Args: {request.args}\n" \
f"Raised: {str(remote_exception)}"
if 'Invalid token' in error_msg:
log.debug(error_msg)
elif error_datagram.response not in OLD_PROTOCOL_ERRORS:
log.warning(error_msg)
else:
log.debug(
"known dht protocol backwards compatibility error with %s:%i (lbrynet v%s)",
peer.address, peer.udp_port, OLD_PROTOCOL_ERRORS[error_datagram.response]
)
future.set_exception(remote_exception)
return
else:
if error_datagram.response not in OLD_PROTOCOL_ERRORS:
msg = f"Received error from {address[0]}:{address[1]}, but it isn't in response to a " \
f"pending request: {str(remote_exception)}"
log.warning(msg)
else:
log.debug(
"known dht protocol backwards compatibility error with %s:%i (lbrynet v%s)",
address[0], address[1], OLD_PROTOCOL_ERRORS[error_datagram.response]
)
def datagram_received(self, datagram: bytes, address: typing.Tuple[str, int]) -> None: # pylint: disable=arguments-renamed
try:
message = decode_datagram(datagram)
except (ValueError, TypeError, DecodeError):
self.peer_manager.report_failure(address[0], address[1])
log.warning("Couldn't decode dht datagram from %s: %s", address, datagram.hex())
return
if isinstance(message, RequestDatagram):
self.handle_request_datagram(address, message)
elif isinstance(message, ErrorDatagram):
self.handle_error_datagram(address, message)
else:
assert isinstance(message, ResponseDatagram), "sanity"
self.handle_response_datagram(address, message)
async def send_request(self, peer: 'KademliaPeer', request: RequestDatagram) -> ResponseDatagram:
self._send(peer, request)
response_fut = self.sent_messages[request.rpc_id][1]
try:
self.request_sent_metric.labels(method=request.method).inc()
start = time.perf_counter()
response = await asyncio.wait_for(response_fut, self.rpc_timeout)
self.response_time_metric.labels(method=request.method).observe(time.perf_counter() - start)
self.peer_manager.report_last_replied(peer.address, peer.udp_port)
self.request_success_metric.labels(method=request.method).inc()
return response
except asyncio.CancelledError:
if not response_fut.done():
response_fut.cancel()
raise
except (asyncio.TimeoutError, RemoteException):
self.request_error_metric.labels(method=request.method).inc()
self.peer_manager.report_failure(peer.address, peer.udp_port)
if self.peer_manager.peer_is_good(peer) is False:
self.remove_peer(peer)
raise
def send_response(self, peer: 'KademliaPeer', response: ResponseDatagram):
self._send(peer, response)
def send_error(self, peer: 'KademliaPeer', error: ErrorDatagram):
self._send(peer, error)
def _send(self, peer: 'KademliaPeer', message: typing.Union[RequestDatagram, ResponseDatagram, ErrorDatagram]):
if not self.transport or self.transport.is_closing():
raise TransportNotConnected()
data = message.bencode()
if len(data) > constants.MSG_SIZE_LIMIT:
log.warning("cannot send datagram larger than %i bytes (packet is %i bytes)",
constants.MSG_SIZE_LIMIT, len(data))
log.debug("Packet is too large to send: %s", data[:3500].hex())
raise ValueError(
f"cannot send datagram larger than {constants.MSG_SIZE_LIMIT} bytes (packet is {len(data)} bytes)"
)
if isinstance(message, (RequestDatagram, ResponseDatagram)):
assert message.node_id == self.node_id, message
if isinstance(message, RequestDatagram):
assert self.node_id != peer.node_id
def pop_from_sent_messages(_):
if message.rpc_id in self.sent_messages:
self.sent_messages.pop(message.rpc_id)
if isinstance(message, RequestDatagram):
response_fut = self.loop.create_future()
response_fut.add_done_callback(pop_from_sent_messages)
self.sent_messages[message.rpc_id] = (peer, response_fut, message)
try:
self.transport.sendto(data, (peer.address, peer.udp_port))
except OSError as err:
# TODO: handle ENETUNREACH
if err.errno == socket.EWOULDBLOCK:
# i'm scared this may swallow important errors, but i get a million of these
# on Linux and it doesn't seem to affect anything -grin
log.warning("Can't send data to dht: EWOULDBLOCK")
else:
log.error("DHT socket error sending %i bytes to %s:%i - %s (code %i)",
len(data), peer.address, peer.udp_port, str(err), err.errno)
if isinstance(message, RequestDatagram):
self.sent_messages[message.rpc_id][1].set_exception(err)
else:
raise err
if isinstance(message, RequestDatagram):
self.peer_manager.report_last_sent(peer.address, peer.udp_port)
elif isinstance(message, ErrorDatagram):
self.peer_manager.report_failure(peer.address, peer.udp_port)
def change_token(self):
self.old_token_secret = self.token_secret
self.token_secret = constants.generate_id()
def make_token(self, compact_ip):
return constants.digest(self.token_secret + compact_ip)
def verify_token(self, token, compact_ip):
h = constants.HASH_CLASS()
h.update(self.token_secret + compact_ip)
if self.old_token_secret and token != h.digest():
# accept tokens issued under the previous secret, so a token handed out
# just before change_token() rotates remains valid for one more interval
h = constants.HASH_CLASS()
h.update(self.old_token_secret + compact_ip)
if token != h.digest():
return False
return True
async def store_to_peer(self, hash_value: bytes, peer: 'KademliaPeer', # pylint: disable=too-many-return-statements
retry: bool = True) -> typing.Tuple[bytes, bool]:
async def __store():
res = await self.get_rpc_peer(peer).store(hash_value)
if res != b"OK":
raise ValueError(res)
log.debug("Stored %s to %s", hash_value.hex()[:8], peer)
return peer.node_id, True
try:
return await __store()
except asyncio.TimeoutError:
log.debug("Timeout while storing blob_hash %s at %s", hash_value.hex()[:8], peer)
return peer.node_id, False
except ValueError as err:
log.error("Unexpected response: %s", err)
return peer.node_id, False
except RemoteException as err:
if 'findValue() takes exactly 2 arguments (5 given)' in str(err):
log.debug("peer %s:%i is running an incompatible version of lbrynet", peer.address, peer.udp_port)
return peer.node_id, False
if 'Invalid token' not in str(err):
log.warning("Unexpected error while storing blob_hash: %s", err)
return peer.node_id, False
self.peer_manager.clear_token(peer.node_id)
if not retry:
return peer.node_id, False
return await self.store_to_peer(hash_value, peer, retry=False)
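A minimal sketch of the token rotation implemented by `change_token`/`make_token`/`verify_token` above. The `TokenHolder` stand-in is hypothetical; `constants.generate_id` and `constants.digest` are the helpers already used in this file. A token handed out just before a rotation still verifies against the old secret for one interval, which is what the fallback in `verify_token` is for:

```python
from lbry.dht import constants

class TokenHolder:  # hypothetical stand-in for the protocol's token state
    def __init__(self):
        self.token_secret = constants.generate_id()
        self.old_token_secret = None

    def change_token(self):
        self.old_token_secret = self.token_secret
        self.token_secret = constants.generate_id()

holder = TokenHolder()
compact_ip = bytes([127, 0, 0, 1])
token = constants.digest(holder.token_secret + compact_ip)  # what make_token does
holder.change_token()
# the current secret no longer matches, but the previous one still does
assert token != constants.digest(holder.token_secret + compact_ip)
assert token == constants.digest(holder.old_token_secret + compact_ip)
```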

View file

@ -1,404 +0,0 @@
import asyncio
import random
import logging
import typing
import itertools
from prometheus_client import Gauge
from lbry import utils
from lbry.dht import constants
from lbry.dht.error import RemoteException
from lbry.dht.protocol.distance import Distance
if typing.TYPE_CHECKING:
from lbry.dht.peer import KademliaPeer, PeerManager
log = logging.getLogger(__name__)
class KBucket:
"""
Kademlia K-bucket implementation.
"""
peer_in_routing_table_metric = Gauge(
"peers_in_routing_table", "Number of peers on routing table", namespace="dht_node",
labelnames=("scope",)
)
peer_with_x_bit_colliding_metric = Gauge(
"peer_x_bit_colliding", "Number of peers with at least X bits colliding with this node id",
namespace="dht_node", labelnames=("amount",)
)
def __init__(self, peer_manager: 'PeerManager', range_min: int, range_max: int,
node_id: bytes, capacity: int = constants.K):
"""
@param range_min: The lower boundary for the range in the n-bit ID
space covered by this k-bucket
@param range_max: The upper boundary for the range in the ID space
covered by this k-bucket
"""
self._peer_manager = peer_manager
self.range_min = range_min
self.range_max = range_max
self.peers: typing.List['KademliaPeer'] = []
self._node_id = node_id
self._distance_to_self = Distance(node_id)
self.capacity = capacity
def add_peer(self, peer: 'KademliaPeer') -> bool:
""" Add contact to _contact list in the right order. This will move the
contact to the end of the k-bucket if it is already present.
@param peer: The contact to add
@type peer: KademliaPeer
@return: True if the contact was added or moved to the end of the
bucket, False if the bucket is full and the contact isn't
already present
if peer in self.peers:
# Move the existing contact to the end of the list
# - using the new contact to allow add-on data
# (e.g. optimization-specific stuff) to be updated as well
self.peers.remove(peer)
self.peers.append(peer)
return True
else:
for i, _ in enumerate(self.peers):
local_peer = self.peers[i]
if local_peer.node_id == peer.node_id:
self.peers.remove(local_peer)
self.peers.append(peer)
return True
if len(self.peers) < self.capacity:
self.peers.append(peer)
self.peer_in_routing_table_metric.labels("global").inc()
bits_colliding = utils.get_colliding_prefix_bits(peer.node_id, self._node_id)
self.peer_with_x_bit_colliding_metric.labels(amount=bits_colliding).inc()
return True
else:
return False
def get_peer(self, node_id: bytes) -> 'KademliaPeer':
for peer in self.peers:
if peer.node_id == node_id:
return peer
def get_peers(self, count=-1, exclude_contact=None, sort_distance_to=None) -> typing.List['KademliaPeer']:
""" Returns a list containing up to the first count number of contacts
@param count: The number of contacts to return (if 0 or less, return
all contacts)
@type count: int
@param exclude_contact: A node_id to exclude; if this contact is in
the list of returned values, it will be
discarded before returning. If a C{str} is
passed as this argument, it must be the
contact's ID.
@type exclude_contact: str
@param sort_distance_to: Sort distance to the node_id, defaulting to the parent node node_id. If False don't
sort the contacts
@note: Requests for more than constants.K contacts are capped at K
@return: Return up to the first count number of contacts in a list
If no contacts are present, an empty list is returned
@rtype: list
"""
peers = [peer for peer in self.peers if peer.node_id != exclude_contact]
# Return all contacts in bucket
if count <= 0:
count = len(peers)
# Get current contact number
current_len = len(peers)
# If count greater than k - return only k contacts
if count > constants.K:
count = constants.K
if not current_len:
return peers
if sort_distance_to is False:
pass
else:
sort_distance_to = sort_distance_to or self._node_id
peers.sort(key=lambda c: Distance(sort_distance_to)(c.node_id))
return peers[:min(current_len, count)]
def get_bad_or_unknown_peers(self) -> typing.List['KademliaPeer']:
peers = self.get_peers(sort_distance_to=False)
return [
peer for peer in peers
if self._peer_manager.contact_triple_is_good(peer.node_id, peer.address, peer.udp_port) is not True
]
def remove_peer(self, peer: 'KademliaPeer') -> None:
self.peers.remove(peer)
self.peer_in_routing_table_metric.labels("global").dec()
bits_colliding = utils.get_colliding_prefix_bits(peer.node_id, self._node_id)
self.peer_with_x_bit_colliding_metric.labels(amount=bits_colliding).dec()
def key_in_range(self, key: bytes) -> bool:
""" Tests whether the specified key (i.e. node ID) is in the range
of the n-bit ID space covered by this k-bucket (in other words, it
returns whether or not the specified key should be placed in this
k-bucket)
@param key: The key to test
@type key: bytes
@return: C{True} if the key is in this k-bucket's range, or C{False}
if not.
@rtype: bool
"""
return self.range_min <= self._distance_to_self(key) < self.range_max
def __len__(self) -> int:
return len(self.peers)
def __contains__(self, item) -> bool:
return item in self.peers
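Since `key_in_range` tests the XOR distance to our own node id rather than the raw key value, our own id is always at distance 0 (the closest bucket), and keys differing only in low-order bits stay close. A small sketch using the `Distance` helper imported above (ids are 48 bytes, per `constants.HASH_LENGTH`):

```python
from lbry.dht.protocol.distance import Distance

our_id = bytes(48)                 # 384-bit id of all zero bits
near = bytes(47) + b'\x01'         # differs only in the lowest bit
distance = Distance(our_id)
assert distance(our_id) == 0       # we are distance 0 from ourselves
assert distance(near) == 1         # so `near` falls in the closest bucket
```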
class TreeRoutingTable:
""" This class implements a routing table used by a Node class.
The Kademlia routing table is a binary tree whose leaves are k-buckets,
where each k-bucket contains nodes with some common prefix of their IDs.
This prefix is the k-bucket's position in the binary tree; it therefore
covers some range of ID values, and together all of the k-buckets cover
the entire n-bit ID (or key) space (with no overlap).
@note: In this implementation, nodes in the tree (the k-buckets) are
added dynamically, as needed; this technique is described in the 13-page
version of the Kademlia paper, in section 2.4. It does, however, use the
ping RPC-based k-bucket eviction algorithm described in section 2.2 of
that paper.
BOOTSTRAP MODE: if set to True, we always add all peers. This is so a
bootstrap node does not get a bias towards its own node id and replies are
the best it can provide (joining peer knows its neighbors immediately).
Over time this will need to be optimized to spill to disk, since holding
everything in memory won't stay feasible.
See: https://github.com/bittorrent/bootstrap-dht
"""
bucket_in_routing_table_metric = Gauge(
"buckets_in_routing_table", "Number of buckets on routing table", namespace="dht_node",
labelnames=("scope",)
)
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', parent_node_id: bytes,
split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_bootstrap_node: bool = False):
self._loop = loop
self._peer_manager = peer_manager
self._parent_node_id = parent_node_id
self._split_buckets_under_index = split_buckets_under_index
self.buckets: typing.List[KBucket] = [
KBucket(
self._peer_manager, range_min=0, range_max=2 ** constants.HASH_BITS, node_id=self._parent_node_id,
capacity=1 << 32 if is_bootstrap_node else constants.K
)
]
def get_peers(self) -> typing.List['KademliaPeer']:
return list(itertools.chain.from_iterable(map(lambda bucket: bucket.peers, self.buckets)))
def _should_split(self, bucket_index: int, to_add: bytes) -> bool:
# https://stackoverflow.com/questions/32129978/highly-unbalanced-kademlia-routing-table/32187456#32187456
if bucket_index < self._split_buckets_under_index:
return True
contacts = self.get_peers()
distance = Distance(self._parent_node_id)
contacts.sort(key=lambda c: distance(c.node_id))
kth_contact = contacts[-1] if len(contacts) < constants.K else contacts[constants.K - 1]
return distance(to_add) < distance(kth_contact.node_id)
def find_close_peers(self, key: bytes, count: typing.Optional[int] = None,
sender_node_id: typing.Optional[bytes] = None) -> typing.List['KademliaPeer']:
exclude = [self._parent_node_id]
if sender_node_id:
exclude.append(sender_node_id)
count = count or constants.K
distance = Distance(key)
contacts = self.get_peers()
contacts = [c for c in contacts if c.node_id not in exclude]
if contacts:
contacts.sort(key=lambda c: distance(c.node_id))
return contacts[:min(count, len(contacts))]
return []
def get_peer(self, contact_id: bytes) -> 'KademliaPeer':
return self.buckets[self._kbucket_index(contact_id)].get_peer(contact_id)
def get_refresh_list(self, start_index: int = 0, force: bool = False) -> typing.List[bytes]:
refresh_ids = []
for offset, _ in enumerate(self.buckets[start_index:]):
refresh_ids.append(self._midpoint_id_in_bucket_range(start_index + offset))
# if we have 3 or fewer populated buckets get two random ids in the range of each to try and
# populate/split the buckets further
buckets_with_contacts = self.buckets_with_contacts()
if buckets_with_contacts <= 3:
for i in range(buckets_with_contacts):
refresh_ids.append(self._random_id_in_bucket_range(i))
refresh_ids.append(self._random_id_in_bucket_range(i))
return refresh_ids
def remove_peer(self, peer: 'KademliaPeer') -> None:
if not peer.node_id:
return
bucket_index = self._kbucket_index(peer.node_id)
try:
self.buckets[bucket_index].remove_peer(peer)
self._join_buckets()
except ValueError:
return
def _kbucket_index(self, key: bytes) -> int:
for i, bucket in enumerate(self.buckets):
if bucket.key_in_range(key):
return i
return len(self.buckets)
def _random_id_in_bucket_range(self, bucket_index: int) -> bytes:
random_id = int(random.randrange(self.buckets[bucket_index].range_min, self.buckets[bucket_index].range_max))
return Distance(
self._parent_node_id
)(random_id.to_bytes(constants.HASH_LENGTH, 'big')).to_bytes(constants.HASH_LENGTH, 'big')
def _midpoint_id_in_bucket_range(self, bucket_index: int) -> bytes:
half = int((self.buckets[bucket_index].range_max - self.buckets[bucket_index].range_min) // 2)
return Distance(self._parent_node_id)(
int(self.buckets[bucket_index].range_min + half).to_bytes(constants.HASH_LENGTH, 'big')
).to_bytes(constants.HASH_LENGTH, 'big')
def _split_bucket(self, old_bucket_index: int) -> None:
""" Splits the specified k-bucket into two new buckets which together
cover the same range in the key/ID space
@param old_bucket_index: The index of k-bucket to split (in this table's
list of k-buckets)
@type old_bucket_index: int
"""
# Resize the range of the current (old) k-bucket
old_bucket = self.buckets[old_bucket_index]
split_point = old_bucket.range_max - (old_bucket.range_max - old_bucket.range_min) // 2
# Create a new k-bucket to cover the range split off from the old bucket
new_bucket = KBucket(self._peer_manager, split_point, old_bucket.range_max, self._parent_node_id)
old_bucket.range_max = split_point
# Now, add the new bucket into the routing table tree
self.buckets.insert(old_bucket_index + 1, new_bucket)
# Finally, copy all nodes that belong to the new k-bucket into it...
for contact in old_bucket.peers:
if new_bucket.key_in_range(contact.node_id):
new_bucket.add_peer(contact)
# ...and remove them from the old bucket
for contact in new_bucket.peers:
old_bucket.remove_peer(contact)
self.bucket_in_routing_table_metric.labels("global").set(len(self.buckets))
def _join_buckets(self):
if len(self.buckets) == 1:
return
to_pop = [i for i, bucket in enumerate(self.buckets) if len(bucket) == 0]
if not to_pop:
return
log.info("join buckets %i", len(to_pop))
bucket_index_to_pop = to_pop[0]
assert len(self.buckets[bucket_index_to_pop]) == 0
can_go_lower = bucket_index_to_pop - 1 >= 0
can_go_higher = bucket_index_to_pop + 1 < len(self.buckets)
assert can_go_higher or can_go_lower
bucket = self.buckets[bucket_index_to_pop]
if can_go_lower and can_go_higher:
midpoint = ((bucket.range_max - bucket.range_min) // 2) + bucket.range_min
self.buckets[bucket_index_to_pop - 1].range_max = midpoint - 1
self.buckets[bucket_index_to_pop + 1].range_min = midpoint
elif can_go_lower:
self.buckets[bucket_index_to_pop - 1].range_max = bucket.range_max
elif can_go_higher:
self.buckets[bucket_index_to_pop + 1].range_min = bucket.range_min
self.buckets.remove(bucket)
self.bucket_in_routing_table_metric.labels("global").set(len(self.buckets))
return self._join_buckets()
def buckets_with_contacts(self) -> int:
count = 0
for bucket in self.buckets:
if len(bucket) > 0:
count += 1
return count
async def add_peer(self, peer: 'KademliaPeer', probe: typing.Callable[['KademliaPeer'], typing.Awaitable]):
if not peer.node_id:
log.warning("Tried adding a peer with no node id!")
return False
for my_peer in self.get_peers():
if (my_peer.address, my_peer.udp_port) == (peer.address, peer.udp_port) and my_peer.node_id != peer.node_id:
self.remove_peer(my_peer)
self._join_buckets()
bucket_index = self._kbucket_index(peer.node_id)
if self.buckets[bucket_index].add_peer(peer):
return True
# The bucket is full; see if it can be split (by checking if its range includes the host node's node_id)
if self._should_split(bucket_index, peer.node_id):
self._split_bucket(bucket_index)
# Retry the insertion attempt
result = await self.add_peer(peer, probe)
self._join_buckets()
return result
else:
# We can't split the k-bucket
#
# The 13 page kademlia paper specifies that the least recently contacted node in the bucket
# shall be pinged. If it fails to reply it is replaced with the new contact. If the ping is successful
# the new contact is ignored and not added to the bucket (sections 2.2 and 2.4).
#
# A reasonable extension to this is BEP 0005, which extends the above:
#
# Not all nodes that we learn about are equal. Some are "good" and some are not.
# Many nodes using the DHT are able to send queries and receive responses,
# but are not able to respond to queries from other nodes. It is important that
# each node's routing table must contain only known good nodes. A good node is
# a node that has responded to one of our queries within the last 15 minutes. A node
# is also good if it has ever responded to one of our queries and has sent us a
# query within the last 15 minutes. After 15 minutes of inactivity, a node becomes
# questionable. Nodes become bad when they fail to respond to multiple queries
# in a row. Nodes that we know are good are given priority over nodes with unknown status.
#
# When there are bad or questionable nodes in the bucket, the least recent is selected for
# potential replacement (BEP 0005). When all nodes in the bucket are fresh, the head (least recent)
# contact is selected as described in section 2.2 of the kademlia paper. In both cases the new contact
# is ignored if the pinged node replies.
not_good_contacts = self.buckets[bucket_index].get_bad_or_unknown_peers()
not_recently_replied = []
for my_peer in not_good_contacts:
last_replied = self._peer_manager.get_last_replied(my_peer.address, my_peer.udp_port)
if not last_replied or last_replied + 60 < self._loop.time():
not_recently_replied.append(my_peer)
if not_recently_replied:
to_replace = not_recently_replied[0]
else:
to_replace = self.buckets[bucket_index].peers[0]
last_replied = self._peer_manager.get_last_replied(to_replace.address, to_replace.udp_port)
if last_replied and last_replied + 60 > self._loop.time():
return False
log.debug("pinging %s:%s", to_replace.address, to_replace.udp_port)
try:
await probe(to_replace)
return False
except (asyncio.TimeoutError, RemoteException):
log.debug("Replacing dead contact in bucket %i: %s:%i with %s:%i ", bucket_index,
to_replace.address, to_replace.udp_port, peer.address, peer.udp_port)
if to_replace in self.buckets[bucket_index]:
self.buckets[bucket_index].remove_peer(to_replace)
return await self.add_peer(peer, probe)
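How a caller wires the `probe` argument is not shown in this file; a plausible sketch (assuming a `protocol` exposing the `get_rpc_peer(...).ping()` RPC seen elsewhere in this changeset):

```python
# hypothetical helper: on a full, unsplittable bucket the table pings the
# least recently seen contact and only evicts it if that ping fails
async def try_add(routing_table, protocol, peer) -> bool:
    async def probe(candidate):
        return await protocol.get_rpc_peer(candidate).ping()
    return await routing_table.add_peer(peer, probe)
```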

View file

@ -1,205 +0,0 @@
import typing
from lbry.dht import constants
from lbry.dht.serialization.bencoding import bencode, bdecode
REQUEST_TYPE = 0
RESPONSE_TYPE = 1
ERROR_TYPE = 2
OPTIONAL_ARG_OFFSET = 100
# bencode representation of argument keys
PAGE_KEY = b'p'
OPTIONAL_FIELDS = ()
class KademliaDatagramBase:
"""
field names are used to unwrap/wrap the argument names to index integers that replace them in a datagram
all packets have an argument dictionary when bdecoded starting with {0: <int>, 1: <bytes>, 2: <bytes>, ...}
these correspond to the packet_type, rpc_id, and node_id args
"""
required_fields = [
'packet_type',
'rpc_id',
'node_id'
]
expected_packet_type = -1
def __init__(self, packet_type: int, rpc_id: bytes, node_id: bytes):
self.packet_type = packet_type
if self.expected_packet_type != packet_type:
raise ValueError(f"invalid packet type: {packet_type}, expected {self.expected_packet_type}")
if len(rpc_id) != constants.RPC_ID_LENGTH:
raise ValueError(f"invalid rpc_id length: {len(rpc_id)} bytes (expected {constants.RPC_ID_LENGTH})")
if len(node_id) != constants.HASH_LENGTH:
raise ValueError(f"invalid node_id length: {len(node_id)} bytes (expected {constants.HASH_LENGTH})")
self.rpc_id = rpc_id
self.node_id = node_id
def bencode(self) -> bytes:
datagram = {
i: getattr(self, k) for i, k in enumerate(self.required_fields)
}
for i, k in enumerate(OPTIONAL_FIELDS):
value = getattr(self, k, None)
if value is not None:
datagram[i + OPTIONAL_ARG_OFFSET] = value
return bencode(datagram)
class RequestDatagram(KademliaDatagramBase):
required_fields = [
'packet_type',
'rpc_id',
'node_id',
'method',
'args'
]
expected_packet_type = REQUEST_TYPE
def __init__(self, packet_type: int, rpc_id: bytes, node_id: bytes, method: bytes,
args: typing.Optional[typing.List] = None):
super().__init__(packet_type, rpc_id, node_id)
self.method = method
self.args = args or []
if not self.args:
self.args.append({})
if isinstance(self.args[-1], dict):
self.args[-1][b'protocolVersion'] = 1
else:
self.args.append({b'protocolVersion': 1})
@classmethod
def make_ping(cls, from_node_id: bytes, rpc_id: typing.Optional[bytes] = None) -> 'RequestDatagram':
rpc_id = rpc_id or constants.generate_id()[:constants.RPC_ID_LENGTH]
return cls(REQUEST_TYPE, rpc_id, from_node_id, b'ping')
@classmethod
def make_store(cls, from_node_id: bytes, blob_hash: bytes, token: bytes, port: int,
rpc_id: typing.Optional[bytes] = None) -> 'RequestDatagram':
rpc_id = rpc_id or constants.generate_id()[:constants.RPC_ID_LENGTH]
if len(blob_hash) != constants.HASH_BITS // 8:
raise ValueError(f"invalid blob hash length: {len(blob_hash)}")
if not 0 < port < 65536:
raise ValueError(f"invalid port: {port}")
if len(token) != constants.HASH_BITS // 8:
raise ValueError(f"invalid token length: {len(token)}")
store_args = [blob_hash, token, port, from_node_id, 0]
return cls(REQUEST_TYPE, rpc_id, from_node_id, b'store', store_args)
@classmethod
def make_find_node(cls, from_node_id: bytes, key: bytes,
rpc_id: typing.Optional[bytes] = None) -> 'RequestDatagram':
rpc_id = rpc_id or constants.generate_id()[:constants.RPC_ID_LENGTH]
if len(key) != constants.HASH_BITS // 8:
raise ValueError(f"invalid key length: {len(key)}")
return cls(REQUEST_TYPE, rpc_id, from_node_id, b'findNode', [key])
@classmethod
def make_find_value(cls, from_node_id: bytes, key: bytes,
rpc_id: typing.Optional[bytes] = None, page: int = 0) -> 'RequestDatagram':
rpc_id = rpc_id or constants.generate_id()[:constants.RPC_ID_LENGTH]
if len(key) != constants.HASH_BITS // 8:
raise ValueError(f"invalid key length: {len(key)}")
if page < 0:
raise ValueError(f"cannot request a negative page ({page})")
return cls(REQUEST_TYPE, rpc_id, from_node_id, b'findValue', [key, {PAGE_KEY: page}])
class ResponseDatagram(KademliaDatagramBase):
required_fields = [
'packet_type',
'rpc_id',
'node_id',
'response'
]
expected_packet_type = RESPONSE_TYPE
def __init__(self, packet_type: int, rpc_id: bytes, node_id: bytes, response):
super().__init__(packet_type, rpc_id, node_id)
self.response = response
class ErrorDatagram(KademliaDatagramBase):
required_fields = [
'packet_type',
'rpc_id',
'node_id',
'exception_type',
'response',
]
expected_packet_type = ERROR_TYPE
def __init__(self, packet_type: int, rpc_id: bytes, node_id: bytes, exception_type: bytes, response: bytes):
super().__init__(packet_type, rpc_id, node_id)
self.exception_type = exception_type.decode()
self.response = response.decode()
def _decode_datagram(datagram: bytes):
msg_types = {
REQUEST_TYPE: RequestDatagram,
RESPONSE_TYPE: ResponseDatagram,
ERROR_TYPE: ErrorDatagram
}
primitive: typing.Dict = bdecode(datagram)
converted = {
str(k).encode() if not isinstance(k, bytes) else k: v for k, v in primitive.items()
}
if converted[b'0'] in [REQUEST_TYPE, ERROR_TYPE, RESPONSE_TYPE]: # pylint: disable=unsubscriptable-object
datagram_type = converted[b'0'] # pylint: disable=unsubscriptable-object
else:
raise ValueError("invalid datagram type")
datagram_class = msg_types[datagram_type]
decoded = {
k: converted[str(i).encode()] # pylint: disable=unsubscriptable-object
for i, k in enumerate(datagram_class.required_fields)
if str(i).encode() in converted # pylint: disable=unsupported-membership-test
}
for i, k in enumerate(OPTIONAL_FIELDS):
if str(i + OPTIONAL_ARG_OFFSET).encode() in converted:
decoded[k] = converted[str(i + OPTIONAL_ARG_OFFSET).encode()]
return decoded, datagram_class
def decode_datagram(datagram: bytes) -> typing.Union[RequestDatagram, ResponseDatagram, ErrorDatagram]:
decoded, datagram_class = _decode_datagram(datagram)
return datagram_class(**decoded)
def make_compact_ip(address: str) -> bytearray:
compact_ip = bytearray(int(octet) for octet in address.split('.'))
if len(compact_ip) != 4:
raise ValueError("invalid IPv4 length")
return compact_ip
def make_compact_address(node_id: bytes, address: str, port: int) -> bytearray:
compact_ip = make_compact_ip(address)
if not 0 < port < 65536:
raise ValueError(f'Invalid port: {port}')
if len(node_id) != constants.HASH_BITS // 8:
raise ValueError("invalid node node_id length")
return compact_ip + port.to_bytes(2, 'big') + node_id
def decode_compact_address(compact_address: bytes) -> typing.Tuple[bytes, str, int]:
address = "{}.{}.{}.{}".format(*compact_address[:4])
port = int.from_bytes(compact_address[4:6], 'big')
node_id = compact_address[6:]
if not 0 < port < 65536:
raise ValueError(f'Invalid port: {port}')
if len(node_id) != constants.HASH_BITS // 8:
raise ValueError("invalid node node_id length")
return node_id, address, port
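A round-trip sketch exercising the helpers above (values illustrative; `constants.generate_id()` yields a 48-byte id, matching the length checks):

```python
from lbry.dht import constants

node_id, key = constants.generate_id(), constants.generate_id()
request = RequestDatagram.make_find_node(node_id, key)
decoded = decode_datagram(request.bencode())
assert decoded.method == b'findNode' and decoded.node_id == node_id

packed = make_compact_address(node_id, '127.0.0.1', 4444)
assert decode_compact_address(bytes(packed)) == (node_id, '127.0.0.1', 4444)
```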

View file

@ -1,5 +0,0 @@
generate:
python generate.py generate > __init__.py
analyze:
python generate.py analyze

View file

@ -1,95 +0,0 @@
# Exceptions
Exceptions in LBRY are defined and generated from the Markdown table at the end of this README.
## Guidelines
When possible, use [built-in Python exceptions](https://docs.python.org/3/library/exceptions.html) or `aiohttp` [general client](https://docs.aiohttp.org/en/latest/client_reference.html#client-exceptions) / [HTTP](https://docs.aiohttp.org/en/latest/web_exceptions.html) exceptions, unless:
1. You want to provide a better error message (extend the closest built-in/`aiohttp` exception in this case).
2. You need to represent a new situation.
When defining your own exceptions, consider:
1. Extending a built-in Python or `aiohttp` exception.
2. Using contextual variables in the error message.
## Table Column Definitions
Column | Meaning
---|---
Code | Codes are used only to define the hierarchy of exceptions and do not end up in the generated output; it is okay to re-number things as necessary at any time to achieve the desired hierarchy.
Name | Becomes the class name of the exception with "Error" appended to the end. Changing names of existing exceptions makes the API backwards incompatible. When extending other exceptions you must specify the full class name, manually adding "Error" as necessary (if extending another SDK exception).
Message | User-friendly error message explaining the exceptional event. Supports Python formatted strings: any variables used in the string will be generated as arguments in the `__init__` method. Use a space-surrounded ` -- ` to provide a doc string after the error message to be added to the class definition.
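For example, the leaf row `116 | MissingPublishedFile | File does not exist: {file_path}` below generates (as seen in the companion `__init__.py`) a class equivalent to:

```python
class MissingPublishedFileError(InputValueError):
    def __init__(self, file_path):
        self.file_path = file_path
        super().__init__(f"File does not exist: {file_path}")
```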
## Exceptions Table
Code | Name | Message
---:|---|---
**1xx** | UserInput | User input errors.
**10x** | Command | Errors preparing to execute commands.
101 | CommandDoesNotExist | Command '{command}' does not exist.
102 | CommandDeprecated | Command '{command}' is deprecated.
103 | CommandInvalidArgument | Invalid argument '{argument}' to command '{command}'.
104 | CommandTemporarilyUnavailable | Command '{command}' is temporarily unavailable. -- Such as waiting for required components to start.
105 | CommandPermanentlyUnavailable | Command '{command}' is permanently unavailable. -- Such as when a required component was intentionally configured not to start.
**11x** | InputValue(ValueError) | Invalid argument value provided to command.
111 | GenericInputValue | The value '{value}' for argument '{argument}' is not valid.
112 | InputValueIsNone | None or null is not valid value for argument '{argument}'.
113 | ConflictingInputValue | Only '{first_argument}' or '{second_argument}' is allowed, not both.
114 | InputStringIsBlank | {argument} cannot be blank.
115 | EmptyPublishedFile | Cannot publish empty file: {file_path}
116 | MissingPublishedFile | File does not exist: {file_path}
117 | InvalidStreamURL | Invalid LBRY stream URL: '{url}' -- When a URL cannot be downloaded, such as '@Channel/' or a collection
**2xx** | Configuration | Configuration errors.
201 | ConfigWrite | Cannot write configuration file '{path}'. -- When writing the default config fails on startup, such as due to permission issues.
202 | ConfigRead | Cannot find provided configuration file '{path}'. -- Can't open the config file user provided via command line args.
203 | ConfigParse | Failed to parse the configuration file '{path}'. -- Includes the syntax error / line number to help user fix it.
204 | ConfigMissing | Configuration file '{path}' is missing setting that has no default / fallback.
205 | ConfigInvalid | Configuration file '{path}' has setting with invalid value.
**3xx** | Network | **Networking**
301 | NoInternet | No internet connection.
302 | NoUPnPSupport | Router does not support UPnP.
**4xx** | Wallet | **Wallet Errors**
401 | TransactionRejected | Transaction rejected, unknown reason.
402 | TransactionFeeTooLow | Fee too low.
403 | TransactionInvalidSignature | Invalid signature.
404 | InsufficientFunds | Not enough funds to cover this transaction. -- determined by wallet prior to attempting to broadcast a tx; this is different for example from a TX being created and sent but then rejected by lbrycrd for unspendable utxos.
405 | ChannelKeyNotFound | Channel signing key not found.
406 | ChannelKeyInvalid | Channel signing key is out of date. -- For example, channel was updated but you don't have the updated key.
407 | DataDownload | Failed to download blob. -- Generic download failure.
408 | PrivateKeyNotFound | Couldn't find private key for {key} '{value}'.
410 | Resolve | Failed to resolve '{url}'.
411 | ResolveTimeout | Failed to resolve '{url}' within the timeout.
412 | ResolveCensored | Resolve of '{url}' was censored by channel with claim id '{censor_id}'.
420 | KeyFeeAboveMaxAllowed | {message}
421 | InvalidPassword | Password is invalid.
422 | IncompatibleWalletServer | '{server}:{port}' has an incompatibly old version.
423 | TooManyClaimSearchParameters | {key} can't have more than {limit} items.
424 | AlreadyPurchased | You already have a purchase for claim_id '{claim_id_hex}'. Use the --allow-duplicate-purchase flag to override.
431 | ServerPaymentInvalidAddress | Invalid address from wallet server: '{address}' - skipping payment round.
432 | ServerPaymentWalletLocked | Cannot spend funds with locked wallet, skipping payment round.
433 | ServerPaymentFeeAboveMaxAllowed | Daily server fee of {daily_fee} exceeds maximum configured of {max_fee} LBC.
434 | WalletNotLoaded | Wallet {wallet_id} is not loaded.
435 | WalletAlreadyLoaded | Wallet {wallet_path} is already loaded.
436 | WalletNotFound | Wallet not found at {wallet_path}.
437 | WalletAlreadyExists | Wallet {wallet_path} already exists, use `wallet_add` to load it.
**5xx** | Blob | **Blobs**
500 | BlobNotFound | Blob not found.
501 | BlobPermissionDenied | Permission denied to read blob.
502 | BlobTooBig | Blob is too big.
503 | BlobEmpty | Blob is empty.
510 | BlobFailedDecryption | Failed to decrypt blob.
511 | CorruptBlob | Blob is corrupted.
520 | BlobFailedEncryption | Failed to encrypt blob.
531 | DownloadCancelled | Download was canceled.
532 | DownloadSDTimeout | Failed to download sd blob {download} within timeout.
533 | DownloadDataTimeout | Failed to download data blobs for sd hash {download} within timeout.
534 | InvalidStreamDescriptor | {message}
535 | InvalidData | {message}
536 | InvalidBlobHash | {message}
**6xx** | Component | **Components**
601 | ComponentStartConditionNotMet | Unresolved dependencies for: {components}
602 | ComponentsNotStarted | {message}
**7xx** | CurrencyExchange | **Currency Exchange**
701 | InvalidExchangeRateResponse | Failed to get exchange rate from {source}: {reason}
702 | CurrencyConversion | {message}
703 | InvalidCurrency | Invalid currency: {currency} is not a supported currency.
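Because the codes define a hierarchy, callers can catch an entire family with one `except` clause while still special-casing leaves; a sketch (the `describe` helper is illustrative, not part of the library):

```python
from lbry.error import WalletError, InsufficientFundsError

def describe(exc: Exception) -> str:
    try:
        raise exc
    except InsufficientFundsError:
        return "not enough funds"          # the specific 404 case
    except WalletError as err:
        return f"wallet problem: {err}"    # any other 4xx wallet error

assert describe(InsufficientFundsError()) == "not enough funds"
```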

View file

@ -1,494 +0,0 @@
from .base import BaseError, claim_id
class UserInputError(BaseError):
"""
User input errors.
"""
class CommandError(UserInputError):
"""
Errors preparing to execute commands.
"""
class CommandDoesNotExistError(CommandError):
def __init__(self, command):
self.command = command
super().__init__(f"Command '{command}' does not exist.")
class CommandDeprecatedError(CommandError):
def __init__(self, command):
self.command = command
super().__init__(f"Command '{command}' is deprecated.")
class CommandInvalidArgumentError(CommandError):
def __init__(self, argument, command):
self.argument = argument
self.command = command
super().__init__(f"Invalid argument '{argument}' to command '{command}'.")
class CommandTemporarilyUnavailableError(CommandError):
"""
Such as waiting for required components to start.
"""
def __init__(self, command):
self.command = command
super().__init__(f"Command '{command}' is temporarily unavailable.")
class CommandPermanentlyUnavailableError(CommandError):
"""
Such as when a required component was intentionally configured not to start.
"""
def __init__(self, command):
self.command = command
super().__init__(f"Command '{command}' is permanently unavailable.")
class InputValueError(UserInputError, ValueError):
"""
Invalid argument value provided to command.
"""
class GenericInputValueError(InputValueError):
def __init__(self, value, argument):
self.value = value
self.argument = argument
super().__init__(f"The value '{value}' for argument '{argument}' is not valid.")
class InputValueIsNoneError(InputValueError):
def __init__(self, argument):
self.argument = argument
super().__init__(f"None or null is not valid value for argument '{argument}'.")
class ConflictingInputValueError(InputValueError):
def __init__(self, first_argument, second_argument):
self.first_argument = first_argument
self.second_argument = second_argument
super().__init__(f"Only '{first_argument}' or '{second_argument}' is allowed, not both.")
class InputStringIsBlankError(InputValueError):
def __init__(self, argument):
self.argument = argument
super().__init__(f"{argument} cannot be blank.")
class EmptyPublishedFileError(InputValueError):
def __init__(self, file_path):
self.file_path = file_path
super().__init__(f"Cannot publish empty file: {file_path}")
class MissingPublishedFileError(InputValueError):
def __init__(self, file_path):
self.file_path = file_path
super().__init__(f"File does not exist: {file_path}")
class InvalidStreamURLError(InputValueError):
"""
When a URL cannot be downloaded, such as '@Channel/' or a collection
"""
def __init__(self, url):
self.url = url
super().__init__(f"Invalid LBRY stream URL: '{url}'")
class ConfigurationError(BaseError):
"""
Configuration errors.
"""
class ConfigWriteError(ConfigurationError):
"""
When writing the default config fails on startup, such as due to permission issues.
"""
def __init__(self, path):
self.path = path
super().__init__(f"Cannot write configuration file '{path}'.")
class ConfigReadError(ConfigurationError):
"""
Can't open the config file user provided via command line args.
"""
def __init__(self, path):
self.path = path
super().__init__(f"Cannot find provided configuration file '{path}'.")
class ConfigParseError(ConfigurationError):
"""
Includes the syntax error / line number to help user fix it.
"""
def __init__(self, path):
self.path = path
super().__init__(f"Failed to parse the configuration file '{path}'.")
class ConfigMissingError(ConfigurationError):
def __init__(self, path):
self.path = path
super().__init__(f"Configuration file '{path}' is missing setting that has no default / fallback.")
class ConfigInvalidError(ConfigurationError):
def __init__(self, path):
self.path = path
super().__init__(f"Configuration file '{path}' has setting with invalid value.")
class NetworkError(BaseError):
"""
**Networking**
"""
class NoInternetError(NetworkError):
def __init__(self):
super().__init__("No internet connection.")
class NoUPnPSupportError(NetworkError):
def __init__(self):
super().__init__("Router does not support UPnP.")
class WalletError(BaseError):
"""
**Wallet Errors**
"""
class TransactionRejectedError(WalletError):
def __init__(self):
super().__init__("Transaction rejected, unknown reason.")
class TransactionFeeTooLowError(WalletError):
def __init__(self):
super().__init__("Fee too low.")
class TransactionInvalidSignatureError(WalletError):
def __init__(self):
super().__init__("Invalid signature.")
class InsufficientFundsError(WalletError):
"""
determined by wallet prior to attempting to broadcast a tx; this is different for example from a TX
being created and sent but then rejected by lbrycrd for unspendable utxos.
"""
def __init__(self):
super().__init__("Not enough funds to cover this transaction.")
class ChannelKeyNotFoundError(WalletError):
def __init__(self):
super().__init__("Channel signing key not found.")
class ChannelKeyInvalidError(WalletError):
"""
For example, channel was updated but you don't have the updated key.
"""
def __init__(self):
super().__init__("Channel signing key is out of date.")
class DataDownloadError(WalletError):
"""
Generic download failure.
"""
def __init__(self):
super().__init__("Failed to download blob.")
class PrivateKeyNotFoundError(WalletError):
def __init__(self, key, value):
self.key = key
self.value = value
super().__init__(f"Couldn't find private key for {key} '{value}'.")
class ResolveError(WalletError):
def __init__(self, url):
self.url = url
super().__init__(f"Failed to resolve '{url}'.")
class ResolveTimeoutError(WalletError):
def __init__(self, url):
self.url = url
super().__init__(f"Failed to resolve '{url}' within the timeout.")
class ResolveCensoredError(WalletError):
def __init__(self, url, censor_id, censor_row):
self.url = url
self.censor_id = censor_id
self.censor_row = censor_row
super().__init__(f"Resolve of '{url}' was censored by channel with claim id '{censor_id}'.")
class KeyFeeAboveMaxAllowedError(WalletError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class InvalidPasswordError(WalletError):
def __init__(self):
super().__init__("Password is invalid.")
class IncompatibleWalletServerError(WalletError):
def __init__(self, server, port):
self.server = server
self.port = port
super().__init__(f"'{server}:{port}' has an incompatibly old version.")
class TooManyClaimSearchParametersError(WalletError):
def __init__(self, key, limit):
self.key = key
self.limit = limit
super().__init__(f"{key} cant have more than {limit} items.")
class AlreadyPurchasedError(WalletError):
def __init__(self, claim_id_hex):
self.claim_id_hex = claim_id_hex
super().__init__(f"You already have a purchase for claim_id '{claim_id_hex}'. Use the --allow-duplicate-purchase flag to override.")
class ServerPaymentInvalidAddressError(WalletError):
def __init__(self, address):
self.address = address
super().__init__(f"Invalid address from wallet server: '{address}' - skipping payment round.")
class ServerPaymentWalletLockedError(WalletError):
def __init__(self):
super().__init__("Cannot spend funds with locked wallet, skipping payment round.")
class ServerPaymentFeeAboveMaxAllowedError(WalletError):
def __init__(self, daily_fee, max_fee):
self.daily_fee = daily_fee
self.max_fee = max_fee
super().__init__(f"Daily server fee of {daily_fee} exceeds maximum configured of {max_fee} LBC.")
class WalletNotLoadedError(WalletError):
def __init__(self, wallet_id):
self.wallet_id = wallet_id
super().__init__(f"Wallet {wallet_id} is not loaded.")
class WalletAlreadyLoadedError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet {wallet_path} is already loaded.")
class WalletNotFoundError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet not found at {wallet_path}.")
class WalletAlreadyExistsError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet {wallet_path} already exists, use `wallet_add` to load it.")
class BlobError(BaseError):
"""
**Blobs**
"""
class BlobNotFoundError(BlobError):
def __init__(self):
super().__init__("Blob not found.")
class BlobPermissionDeniedError(BlobError):
def __init__(self):
super().__init__("Permission denied to read blob.")
class BlobTooBigError(BlobError):
def __init__(self):
super().__init__("Blob is too big.")
class BlobEmptyError(BlobError):
def __init__(self):
super().__init__("Blob is empty.")
class BlobFailedDecryptionError(BlobError):
def __init__(self):
super().__init__("Failed to decrypt blob.")
class CorruptBlobError(BlobError):
def __init__(self):
super().__init__("Blobs is corrupted.")
class BlobFailedEncryptionError(BlobError):
def __init__(self):
super().__init__("Failed to encrypt blob.")
class DownloadCancelledError(BlobError):
def __init__(self):
super().__init__("Download was canceled.")
class DownloadSDTimeoutError(BlobError):
def __init__(self, download):
self.download = download
super().__init__(f"Failed to download sd blob {download} within timeout.")
class DownloadDataTimeoutError(BlobError):
def __init__(self, download):
self.download = download
super().__init__(f"Failed to download data blobs for sd hash {download} within timeout.")
class InvalidStreamDescriptorError(BlobError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class InvalidDataError(BlobError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class InvalidBlobHashError(BlobError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class ComponentError(BaseError):
"""
**Components**
"""
class ComponentStartConditionNotMetError(ComponentError):
def __init__(self, components):
self.components = components
super().__init__(f"Unresolved dependencies for: {components}")
class ComponentsNotStartedError(ComponentError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class CurrencyExchangeError(BaseError):
"""
**Currency Exchange**
"""
class InvalidExchangeRateResponseError(CurrencyExchangeError):
def __init__(self, source, reason):
self.source = source
self.reason = reason
super().__init__(f"Failed to get exchange rate from {source}: {reason}")
class CurrencyConversionError(CurrencyExchangeError):
def __init__(self, message):
self.message = message
super().__init__(f"{message}")
class InvalidCurrencyError(CurrencyExchangeError):
def __init__(self, currency):
self.currency = currency
super().__init__(f"Invalid currency: {currency} is not a supported currency.")

View file

@ -1,9 +0,0 @@
from binascii import hexlify
def claim_id(claim_hash):
return hexlify(claim_hash[::-1]).decode()
class BaseError(Exception):
pass
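`claim_id` reverses the bytes before hex-encoding because claim hashes travel in reverse byte order relative to their displayed hex form (the same convention as txids); for instance:

```python
from binascii import unhexlify

# a hypothetical 20-byte claim hash, stored reversed on the wire
assert claim_id(unhexlify('00' * 19 + 'ff')) == 'ff' + '00' * 19
```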

View file

@ -1,167 +0,0 @@
import re
import sys
import argparse
from pathlib import Path
from textwrap import fill, indent
INDENT = ' ' * 4
CLASS = """
class {name}({parents}):{doc}
"""
INIT = """
def __init__({args}):{fields}
super().__init__({format}"{message}")
"""
FUNCTIONS = ['claim_id']
class ErrorClass:
def __init__(self, hierarchy, name, message):
self.hierarchy = hierarchy.replace('**', '')
self.other_parents = []
if '(' in name:
assert ')' in name, f"Missing closing parenthesis in '{name}'."
self.other_parents = name[name.find('(')+1:name.find(')')].split(',')
name = name[:name.find('(')]
self.name = name
self.class_name = name+'Error'
self.message = message
self.comment = ""
if ' -- ' in message:
self.message, self.comment = message.split(' -- ', 1)
self.message = self.message.strip()
self.comment = self.comment.strip()
@property
def is_leaf(self):
return 'x' not in self.hierarchy
@property
def code(self):
return self.hierarchy.replace('x', '')
@property
def parent_codes(self):
return self.hierarchy[0:2], self.hierarchy[0]
def get_arguments(self):
args = ['self']
for arg in re.findall('{([a-z0-9_()]+)}', self.message):
for func in FUNCTIONS:
if arg.startswith(f'{func}('):
arg = arg[len(f'{func}('):-1]
break
args.append(arg)
return args
@staticmethod
def get_fields(args):
if len(args) > 1:
return ''.join(f'\n{INDENT*2}self.{field} = {field}' for field in args[1:])
return ''
@staticmethod
def get_doc_string(doc):
if doc:
return f'\n{INDENT}"""\n{indent(fill(doc, 100), INDENT)}\n{INDENT}"""'
return ""
def render(self, out, parent):
if not parent:
parents = ['BaseError']
else:
parents = [parent.class_name]
parents += self.other_parents
args = self.get_arguments()
if self.is_leaf:
out.write((CLASS + INIT).format(
name=self.class_name, parents=', '.join(parents),
args=', '.join(args), fields=self.get_fields(args),
message=self.message, doc=self.get_doc_string(self.comment), format='f' if len(args) > 1 else ''
))
else:
out.write(CLASS.format(
name=self.class_name, parents=', '.join(parents),
doc=self.get_doc_string(self.comment or self.message)
))
def get_errors():
with open('README.md', 'r') as readme:
lines = iter(readme.readlines())
for line in lines:
if line.startswith('## Exceptions Table'):
break
for line in lines:
if line.startswith('---:|'):
break
for line in lines:
if not line.strip():
break
yield ErrorClass(*[c.strip() for c in line.split('|')])
def find_parent(stack, child):
for parent_code in child.parent_codes:
parent = stack.get(parent_code)
if parent:
return parent
def generate(out):
out.write(f"from .base import BaseError, {', '.join(FUNCTIONS)}\n")
stack = {}
for error in get_errors():
error.render(out, find_parent(stack, error))
if not error.is_leaf:
assert error.code not in stack, f"Duplicate code: {error.code}"
stack[error.code] = error
def analyze():
errors = {e.class_name: [] for e in get_errors() if e.is_leaf}
here = Path(__file__).absolute().parents[0]
module = here.parent
for file_path in module.glob('**/*.py'):
if here in file_path.parents:
continue
with open(file_path) as src_file:
src = src_file.read()
for error in errors.keys():
found = src.count(error)
if found > 0:
errors[error].append((file_path, found))
print('Used Errors:\n')
for error, used in errors.items():
if used:
print(f' - {error}')
for use in used:
print(f' {use[0].relative_to(module.parent)} {use[1]}')
print('')
print('')
print('Unused Errors:')
for error, used in errors.items():
if not used:
print(f' - {error}')
def main():
parser = argparse.ArgumentParser()
parser.add_argument("action", choices=['generate', 'analyze'])
args = parser.parse_args()
if args.action == "analyze":
analyze()
elif args.action == "generate":
generate(sys.stdout)
if __name__ == "__main__":
main()
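A small sketch of what the parser extracts from one table row (row text taken from the README above):

```python
error = ErrorClass('101', 'CommandDoesNotExist', "Command '{command}' does not exist.")
assert error.is_leaf and error.code == '101'
assert error.class_name == 'CommandDoesNotExistError'
assert error.get_arguments() == ['self', 'command']
```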

View file

@ -1,339 +0,0 @@
import os
import sys
import shutil
import signal
import pathlib
import json
import asyncio
import argparse
import logging
import logging.handlers
import aiohttp
from aiohttp.web import GracefulExit
from docopt import docopt
from lbry import __version__ as lbrynet_version
from lbry.extras.daemon.daemon import Daemon
from lbry.conf import Config, CLIConfig
log = logging.getLogger('lbry')
def display(data):
print(json.dumps(data, indent=2))
async def execute_command(conf, method, params, callback=display):
async with aiohttp.ClientSession() as session:
try:
message = {'method': method, 'params': params}
async with session.get(conf.api_connection_url, json=message) as resp:
try:
data = await resp.json()
if 'result' in data:
return callback(data['result'])
elif 'error' in data:
return callback(data['error'])
except Exception as e:
log.exception('Could not process response from server:', exc_info=e)
except aiohttp.ClientConnectionError:
print("Could not connect to daemon. Are you sure it's running?")
def normalize_value(x, key=None):
if not isinstance(x, str):
return x
if key in ('uri', 'channel_name', 'name', 'file_name', 'claim_name', 'download_directory'):
return x
if x.lower() == 'true':
return True
if x.lower() == 'false':
return False
if x.isdigit():
return int(x)
return x
def remove_brackets(key):
if key.startswith("<") and key.endswith(">"):
return str(key[1:-1])
return key
def set_kwargs(parsed_args):
kwargs = {}
for key, arg in parsed_args.items():
if arg is None:
continue
k = None
if key.startswith("--") and remove_brackets(key[2:]) not in kwargs:
k = remove_brackets(key[2:])
elif remove_brackets(key) not in kwargs:
k = remove_brackets(key)
if k is not None:
kwargs[k] = normalize_value(arg, k)
return kwargs
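A sketch of how docopt output flows through `remove_brackets`/`normalize_value`/`set_kwargs` above (values illustrative):

```python
parsed = {'<name>': 'video.mp4', '--bid': '1.0', '--blocking': 'true', '--channel_id': None}
assert set_kwargs(parsed) == {'name': 'video.mp4', 'bid': '1.0', 'blocking': True}
```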
def split_subparser_argument(parent, original, name, condition):
new_sub_parser = argparse._SubParsersAction(
original.option_strings,
original._prog_prefix,
original._parser_class,
metavar=original.metavar
)
new_sub_parser._name_parser_map = original._name_parser_map
new_sub_parser._choices_actions = [
a for a in original._choices_actions if condition(original._name_parser_map[a.dest])
]
group = argparse._ArgumentGroup(parent, name)
group._group_actions = [new_sub_parser]
return group
class ArgumentParser(argparse.ArgumentParser):
def __init__(self, *args, group_name=None, **kwargs):
super().__init__(*args, formatter_class=HelpFormatter, add_help=False, **kwargs)
self.add_argument(
'--help', dest='help', action='store_true', default=False,
help='Show this help message and exit.'
)
self._optionals.title = 'Options'
if group_name is None:
self.epilog = (
"Run 'lbrynet COMMAND --help' for more information on a command or group."
)
else:
self.epilog = (
f"Run 'lbrynet {group_name} COMMAND --help' for more information on a command."
)
self.set_defaults(group=group_name, group_parser=self)
def format_help(self):
formatter = self._get_formatter()
formatter.add_usage(
self.usage, self._actions, self._mutually_exclusive_groups
)
formatter.add_text(self.description)
# positionals, optionals and user-defined groups
for action_group in self._granular_action_groups:
formatter.start_section(action_group.title)
formatter.add_text(action_group.description)
formatter.add_arguments(action_group._group_actions)
formatter.end_section()
formatter.add_text(self.epilog)
return formatter.format_help()
@property
def _granular_action_groups(self):
if self.prog != 'lbrynet':
yield from self._action_groups
return
yield self._optionals
action: argparse._SubParsersAction = self._positionals._group_actions[0]
yield split_subparser_argument(
self, action, "Grouped Commands", lambda parser: 'group' in parser._defaults
)
yield split_subparser_argument(
self, action, "Commands", lambda parser: 'group' not in parser._defaults
)
def error(self, message):
self.print_help(argparse._sys.stderr)
self.exit(2, f"\n{message}\n")
class HelpFormatter(argparse.HelpFormatter):
def add_usage(self, usage, actions, groups, prefix='Usage: '):
super().add_usage(
usage, [a for a in actions if a.option_strings != ['--help']], groups, prefix
)
def add_command_parser(parent, command):
subcommand = parent.add_parser(
command['name'],
help=command['doc'].strip().splitlines()[0]
)
subcommand.set_defaults(
api_method_name=command['api_method_name'],
command=command['name'],
doc=command['doc'],
replaced_by=command.get('replaced_by', None)
)
def get_argument_parser():
root = ArgumentParser(
'lbrynet', description='An interface to the LBRY Network.', allow_abbrev=False,
)
root.add_argument(
'-v', '--version', dest='cli_version', action="store_true",
help='Show lbrynet CLI version and exit.'
)
root.set_defaults(group=None, command=None)
CLIConfig.contribute_to_argparse(root)
sub = root.add_subparsers(metavar='COMMAND')
start = sub.add_parser(
'start',
usage='lbrynet start [--config FILE] [--data-dir DIR] [--wallet-dir DIR] [--download-dir DIR] ...',
help='Start LBRY Network interface.'
)
start.add_argument(
'--quiet', dest='quiet', action="store_true",
help='Disable all console output.'
)
start.add_argument(
'--no-logging', dest='no_logging', action="store_true",
help='Disable all logging of any kind.'
)
start.add_argument(
'--verbose', nargs="*",
help=('Enable debug output for lbry logger and event loop. Optionally specify loggers for which debug output '
'should selectively be applied.')
)
start.add_argument(
'--initial-headers', dest='initial_headers',
help='Specify path to initial blockchain headers, faster than downloading them on first run.'
)
Config.contribute_to_argparse(start)
start.set_defaults(command='start', start_parser=start, doc=start.format_help())
api = Daemon.get_api_definitions()
groups = {}
for group_name in sorted(api['groups']):
group_parser = sub.add_parser(group_name, group_name=group_name, help=api['groups'][group_name])
groups[group_name] = group_parser.add_subparsers(metavar='COMMAND')
nicer_order = ['stop', 'get', 'publish', 'resolve']
for command_name in sorted(api['commands']):
if command_name not in nicer_order:
nicer_order.append(command_name)
for command_name in nicer_order:
command = api['commands'][command_name]
if command['group'] is None:
add_command_parser(sub, command)
else:
add_command_parser(groups[command['group']], command)
return root
def ensure_directory_exists(path: str):
if not os.path.isdir(path):
pathlib.Path(path).mkdir(parents=True, exist_ok=True)
use_effective_ids = os.access in os.supports_effective_ids
if not os.access(path, os.W_OK, effective_ids=use_effective_ids):
raise PermissionError(f"The following directory is not writable: {path}")
LOG_MODULES = 'lbry', 'aioupnp'
def setup_logging(logger: logging.Logger, args: argparse.Namespace, conf: Config):
default_formatter = logging.Formatter("%(asctime)s %(levelname)-8s %(name)s:%(lineno)d: %(message)s")
file_handler = logging.handlers.RotatingFileHandler(conf.log_file_path, maxBytes=2097152, backupCount=5)
file_handler.setFormatter(default_formatter)
for module_name in LOG_MODULES:
logger.getChild(module_name).addHandler(file_handler)
if not args.quiet:
handler = logging.StreamHandler()
handler.setFormatter(default_formatter)
for module_name in LOG_MODULES:
logger.getChild(module_name).addHandler(handler)
logger.getChild('lbry').setLevel(logging.INFO)
logger.getChild('aioupnp').setLevel(logging.WARNING)
logger.getChild('aiohttp').setLevel(logging.CRITICAL)
if args.verbose is not None:
if len(args.verbose) > 0:
for module in args.verbose:
logger.getChild(module).setLevel(logging.DEBUG)
else:
logger.getChild('lbry').setLevel(logging.DEBUG)
def run_daemon(args: argparse.Namespace, conf: Config):
loop = asyncio.get_event_loop()
if args.verbose is not None:
loop.set_debug(True)
if not args.no_logging:
setup_logging(logging.getLogger(), args, conf)
daemon = Daemon(conf)
def __exit():
raise GracefulExit()
try:
loop.add_signal_handler(signal.SIGINT, __exit)
loop.add_signal_handler(signal.SIGTERM, __exit)
except NotImplementedError:
pass # Not implemented on Windows
try:
loop.run_until_complete(daemon.start())
loop.run_forever()
except (GracefulExit, KeyboardInterrupt, asyncio.CancelledError):
pass
finally:
loop.run_until_complete(daemon.stop())
logging.shutdown()
if hasattr(loop, 'shutdown_asyncgens'):
loop.run_until_complete(loop.shutdown_asyncgens())
def main(argv=None):
argv = argv or sys.argv[1:]
parser = get_argument_parser()
args, command_args = parser.parse_known_args(argv)
conf = Config.create_from_arguments(args)
for directory in (conf.data_dir, conf.download_dir, conf.wallet_dir):
ensure_directory_exists(directory)
if args.cli_version:
print(f"lbrynet {lbrynet_version}")
elif args.command == 'start':
if args.help:
args.start_parser.print_help()
else:
if args.initial_headers:
ledger_path = os.path.join(conf.wallet_dir, 'lbc_mainnet')
ensure_directory_exists(ledger_path)
current_size = 0
headers_path = os.path.join(ledger_path, 'headers')
if os.path.exists(headers_path):
current_size = os.stat(headers_path).st_size
if os.stat(args.initial_headers).st_size > current_size:
log.info('Copying header from %s to %s', args.initial_headers, headers_path)
shutil.copy(args.initial_headers, headers_path)
run_daemon(args, conf)
elif args.command is not None:
doc = args.doc
api_method_name = args.api_method_name
if args.replaced_by:
print(f"{args.api_method_name} is deprecated, using {args.replaced_by['api_method_name']}.")
doc = args.replaced_by['doc']
api_method_name = args.replaced_by['api_method_name']
if args.help:
print(doc)
else:
parsed = docopt(doc, command_args)
params = set_kwargs(parsed)
asyncio.get_event_loop().run_until_complete(execute_command(conf, api_method_name, params))
elif args.group is not None:
args.group_parser.print_help()
else:
parser.print_help()
return 0
if __name__ == "__main__":
sys.exit(main())

View file

@ -1,243 +0,0 @@
import asyncio
import collections
import logging
import typing
import aiohttp
from lbry import utils
from lbry.conf import Config
from lbry.extras import system_info
ANALYTICS_ENDPOINT = 'https://api.segment.io/v1'
ANALYTICS_TOKEN = 'Ax5LZzR1o3q3Z3WjATASDwR5rKyHH0qOIRIbLmMXn2H='
# Things We Track
SERVER_STARTUP = 'Server Startup'
SERVER_STARTUP_SUCCESS = 'Server Startup Success'
SERVER_STARTUP_ERROR = 'Server Startup Error'
DOWNLOAD_STARTED = 'Download Started'
DOWNLOAD_ERRORED = 'Download Errored'
DOWNLOAD_FINISHED = 'Download Finished'
HEARTBEAT = 'Heartbeat'
DISK_SPACE = 'Disk Space'
CLAIM_ACTION = 'Claim Action' # publish/create/update/abandon
NEW_CHANNEL = 'New Channel'
CREDITS_SENT = 'Credits Sent'
UPNP_SETUP = "UPnP Setup"
BLOB_BYTES_UPLOADED = 'Blob Bytes Uploaded'
TIME_TO_FIRST_BYTES = "Time To First Bytes"
log = logging.getLogger(__name__)
def _event_properties(installation_id: str, session_id: str,
event_properties: typing.Optional[typing.Dict]) -> typing.Dict:
properties = {
'lbry_id': installation_id,
'session_id': session_id,
}
properties.update(event_properties or {})
return properties
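# For example, _event_properties('id-1', 'sess-1', {'action': 'publish'})
# returns {'lbry_id': 'id-1', 'session_id': 'sess-1', 'action': 'publish'}.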
def _download_properties(conf: Config, external_ip: str, resolve_duration: float,
total_duration: typing.Optional[float], download_id: str, name: str,
outpoint: str, active_peer_count: typing.Optional[int],
tried_peers_count: typing.Optional[int], connection_failures_count: typing.Optional[int],
added_fixed_peers: bool, fixed_peer_delay: float, sd_hash: str,
sd_download_duration: typing.Optional[float] = None,
head_blob_hash: typing.Optional[str] = None,
head_blob_length: typing.Optional[int] = None,
head_blob_download_duration: typing.Optional[float] = None,
error: typing.Optional[str] = None, error_msg: typing.Optional[str] = None,
wallet_server: typing.Optional[str] = None) -> typing.Dict:
return {
"external_ip": external_ip,
"download_id": download_id,
"total_duration": round(total_duration, 4),
"resolve_duration": None if not resolve_duration else round(resolve_duration, 4),
"error": error,
"error_message": error_msg,
'name': name,
"outpoint": outpoint,
"node_rpc_timeout": conf.node_rpc_timeout,
"peer_connect_timeout": conf.peer_connect_timeout,
"blob_download_timeout": conf.blob_download_timeout,
"use_fixed_peers": len(conf.fixed_peers) > 0,
"fixed_peer_delay": fixed_peer_delay,
"added_fixed_peers": added_fixed_peers,
"active_peer_count": active_peer_count,
"tried_peers_count": tried_peers_count,
"sd_blob_hash": sd_hash,
"sd_blob_duration": None if not sd_download_duration else round(sd_download_duration, 4),
"head_blob_hash": head_blob_hash,
"head_blob_length": head_blob_length,
"head_blob_duration": None if not head_blob_download_duration else round(head_blob_download_duration, 4),
"connection_failures_count": connection_failures_count,
"wallet_server": wallet_server
}
def _make_context(platform):
# see https://segment.com/docs/spec/common/#context
# they say they'll ignore fields outside the spec, but evidently they don't
context = {
'app': {
'version': platform['lbrynet_version'],
'build': platform['build'],
},
# TODO: expand os info to give linux/osx specific info
'os': {
'name': platform['os_system'],
'version': platform['os_release']
},
}
if 'desktop' in platform and 'distro' in platform:
context['os']['desktop'] = platform['desktop']
context['os']['distro'] = platform['distro']
return context
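# Given a platform dict such as {'lbrynet_version': '0.35.0', 'build': 'release',
# 'os_system': 'Linux', 'os_release': '5.4.0'} (values illustrative), this returns:
#
#   {'app': {'version': '0.35.0', 'build': 'release'},
#    'os': {'name': 'Linux', 'version': '5.4.0'}}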
class AnalyticsManager:
def __init__(self, conf: Config, installation_id: str, session_id: str):
self.conf = conf
self.cookies = {}
self.url = ANALYTICS_ENDPOINT
self._write_key = utils.deobfuscate(ANALYTICS_TOKEN)
self._tracked_data = collections.defaultdict(list)
self.context = _make_context(system_info.get_platform())
self.installation_id = installation_id
self.session_id = session_id
self.task: typing.Optional[asyncio.Task] = None
self.external_ip: typing.Optional[str] = None
@property
def enabled(self):
return self.conf.share_usage_data
@property
def is_started(self):
return self.task is not None
async def start(self):
if self.task is None:
self.task = asyncio.create_task(self.run())
async def run(self):
while True:
if self.enabled:
self.external_ip, _ = await utils.get_external_ip(self.conf.lbryum_servers)
await self._send_heartbeat()
await asyncio.sleep(1800)
def stop(self):
if self.task is not None and not self.task.done():
self.task.cancel()
async def _post(self, data: typing.Dict):
request_kwargs = {
'method': 'POST',
'url': self.url + '/track',
'headers': {'Connection': 'Close'},
'auth': aiohttp.BasicAuth(self._write_key, ''),
'json': data,
'cookies': self.cookies
}
try:
async with utils.aiohttp_request(**request_kwargs) as response:
self.cookies.update(response.cookies)
except Exception as e:
log.debug('Encountered an exception while POSTing to %s: ', self.url + '/track', exc_info=e)
async def track(self, event: typing.Dict):
"""Send a single tracking event"""
if self.enabled:
log.debug('Sending track event: %s', event)
await self._post(event)
async def send_upnp_setup_success_fail(self, success, status):
await self.track(
self._event(UPNP_SETUP, {
'success': success,
'status': status,
})
)
async def send_disk_space_used(self, storage_used, storage_limit, is_from_network_quota):
await self.track(
self._event(DISK_SPACE, {
'used': storage_used,
'limit': storage_limit,
'from_network_quota': is_from_network_quota
})
)
async def send_server_startup(self):
await self.track(self._event(SERVER_STARTUP))
async def send_server_startup_success(self):
await self.track(self._event(SERVER_STARTUP_SUCCESS))
async def send_server_startup_error(self, message):
await self.track(self._event(SERVER_STARTUP_ERROR, {'message': message}))
async def send_time_to_first_bytes(self, resolve_duration: typing.Optional[float],
total_duration: typing.Optional[float], download_id: str,
name: str, outpoint: typing.Optional[str],
found_peers_count: typing.Optional[int],
tried_peers_count: typing.Optional[int],
connection_failures_count: typing.Optional[int],
added_fixed_peers: bool,
fixed_peers_delay: float, sd_hash: str,
sd_download_duration: typing.Optional[float] = None,
head_blob_hash: typing.Optional[str] = None,
head_blob_length: typing.Optional[int] = None,
head_blob_duration: typing.Optional[float] = None,
error: typing.Optional[str] = None,
error_msg: typing.Optional[str] = None,
wallet_server: typing.Optional[str] = None):
await self.track(self._event(TIME_TO_FIRST_BYTES, _download_properties(
self.conf, self.external_ip, resolve_duration, total_duration, download_id, name, outpoint,
found_peers_count, tried_peers_count, connection_failures_count, added_fixed_peers, fixed_peers_delay,
sd_hash, sd_download_duration, head_blob_hash, head_blob_length, head_blob_duration, error, error_msg,
wallet_server
)))
async def send_download_finished(self, download_id, name, sd_hash):
await self.track(
self._event(
DOWNLOAD_FINISHED, {
'download_id': download_id,
'name': name,
'stream_info': sd_hash
}
)
)
async def send_claim_action(self, action):
await self.track(self._event(CLAIM_ACTION, {'action': action}))
async def send_new_channel(self):
await self.track(self._event(NEW_CHANNEL))
async def send_credits_sent(self):
await self.track(self._event(CREDITS_SENT))
async def _send_heartbeat(self):
await self.track(self._event(HEARTBEAT))
def _event(self, event, properties: typing.Optional[typing.Dict] = None):
return {
'userId': 'lbry',
'event': event,
'properties': _event_properties(self.installation_id, self.session_id, properties),
'context': self.context,
'timestamp': utils.isonow()
}
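# A minimal usage sketch (the ids are illustrative placeholders; events are
# only POSTed when conf.share_usage_data is enabled):
#
#   async def example(conf: Config):
#       manager = AnalyticsManager(conf, installation_id='<id>', session_id='<session>')
#       await manager.start()                      # heartbeat every 30 minutes
#       await manager.send_claim_action('publish')
#       manager.stop()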

View file

@@ -1,6 +0,0 @@
from lbry.extras.cli import execute_command
from lbry.conf import Config
def daemon_rpc(conf: Config, method: str, **kwargs):
return execute_command(conf, method, kwargs, callback=lambda data: data)
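# daemon_rpc returns the coroutine produced by execute_command, so callers
# await it or drive it on a loop (the method name here is illustrative):
#
#   status = asyncio.get_event_loop().run_until_complete(daemon_rpc(Config(), 'status'))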

View file

@@ -1,750 +0,0 @@
import math
import os
import asyncio
import logging
import binascii
import typing
import base58
from aioupnp import __version__ as aioupnp_version
from aioupnp.upnp import UPnP
from aioupnp.fault import UPnPError
from lbry import utils
from lbry.dht.node import Node
from lbry.dht.peer import is_valid_public_ipv4
from lbry.dht.blob_announcer import BlobAnnouncer
from lbry.blob.blob_manager import BlobManager
from lbry.blob.disk_space_manager import DiskSpaceManager
from lbry.blob_exchange.server import BlobServer
from lbry.stream.background_downloader import BackgroundDownloader
from lbry.stream.stream_manager import StreamManager
from lbry.file.file_manager import FileManager
from lbry.extras.daemon.component import Component
from lbry.extras.daemon.exchange_rate_manager import ExchangeRateManager
from lbry.extras.daemon.storage import SQLiteStorage
from lbry.torrent.torrent_manager import TorrentManager
from lbry.wallet import WalletManager
from lbry.wallet.usage_payment import WalletServerPayer
from lbry.torrent.tracker import TrackerClient
from lbry.torrent.session import TorrentSession
log = logging.getLogger(__name__)
# settings must be initialized before this file is imported
DATABASE_COMPONENT = "database"
BLOB_COMPONENT = "blob_manager"
WALLET_COMPONENT = "wallet"
WALLET_SERVER_PAYMENTS_COMPONENT = "wallet_server_payments"
DHT_COMPONENT = "dht"
HASH_ANNOUNCER_COMPONENT = "hash_announcer"
FILE_MANAGER_COMPONENT = "file_manager"
DISK_SPACE_COMPONENT = "disk_space"
BACKGROUND_DOWNLOADER_COMPONENT = "background_downloader"
PEER_PROTOCOL_SERVER_COMPONENT = "peer_protocol_server"
UPNP_COMPONENT = "upnp"
EXCHANGE_RATE_MANAGER_COMPONENT = "exchange_rate_manager"
TRACKER_ANNOUNCER_COMPONENT = "tracker_announcer_component"
LIBTORRENT_COMPONENT = "libtorrent_component"
class DatabaseComponent(Component):
component_name = DATABASE_COMPONENT
def __init__(self, component_manager):
super().__init__(component_manager)
self.storage = None
@property
def component(self):
return self.storage
@staticmethod
def get_current_db_revision():
return 15
@property
def revision_filename(self):
return os.path.join(self.conf.data_dir, 'db_revision')
def _write_db_revision_file(self, version_num):
with open(self.revision_filename, mode='w') as db_revision:
db_revision.write(str(version_num))
async def start(self):
# check directories exist, create them if they don't
log.info("Loading databases")
if not os.path.exists(self.revision_filename):
log.info("db_revision file not found. Creating it")
self._write_db_revision_file(self.get_current_db_revision())
# check the db migration and run any needed migrations
with open(self.revision_filename, "r") as revision_read_handle:
old_revision = int(revision_read_handle.read().strip())
if old_revision > self.get_current_db_revision():
raise Exception('This version of lbrynet is not compatible with the database\n'
'Your database is revision %i, expected %i' %
(old_revision, self.get_current_db_revision()))
if old_revision < self.get_current_db_revision():
from lbry.extras.daemon.migrator import dbmigrator # pylint: disable=import-outside-toplevel
log.info("Upgrading your databases (revision %i to %i)", old_revision, self.get_current_db_revision())
await asyncio.get_event_loop().run_in_executor(
None, dbmigrator.migrate_db, self.conf, old_revision, self.get_current_db_revision()
)
self._write_db_revision_file(self.get_current_db_revision())
log.info("Finished upgrading the databases.")
self.storage = SQLiteStorage(
self.conf, os.path.join(self.conf.data_dir, "lbrynet.sqlite")
)
await self.storage.open()
async def stop(self):
await self.storage.close()
self.storage = None
class WalletComponent(Component):
component_name = WALLET_COMPONENT
depends_on = [DATABASE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.wallet_manager = None
@property
def component(self):
return self.wallet_manager
async def get_status(self):
if self.wallet_manager is None:
return
is_connected = self.wallet_manager.ledger.network.is_connected
sessions = []
connected = None
if is_connected:
addr, port = self.wallet_manager.ledger.network.client.server
connected = f"{addr}:{port}"
sessions.append(self.wallet_manager.ledger.network.client)
result = {
'connected': connected,
'connected_features': self.wallet_manager.ledger.network.server_features,
'servers': [
{
'host': session.server[0],
'port': session.server[1],
'latency': session.connection_latency,
'availability': session.available,
} for session in sessions
],
'known_servers': len(self.wallet_manager.ledger.network.known_hubs),
'available_servers': 1 if is_connected else 0
}
if self.wallet_manager.ledger.network.remote_height:
local_height = self.wallet_manager.ledger.local_height_including_downloaded_height
disk_height = len(self.wallet_manager.ledger.headers)
remote_height = self.wallet_manager.ledger.network.remote_height
download_height, target_height = local_height - disk_height, remote_height - disk_height
if target_height > 0:
progress = min(max(math.ceil(float(download_height) / float(target_height) * 100), 0), 100)
else:
progress = 100
best_hash = await self.wallet_manager.get_best_blockhash()
result.update({
'headers_synchronization_progress': progress,
'blocks': max(local_height, 0),
'blocks_behind': max(remote_height - local_height, 0),
'best_blockhash': best_hash,
})
return result
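# Worked example of the progress formula above: with local_height=600,
# disk_height=500 and remote_height=900, download_height=100 and
# target_height=400, so progress = min(max(ceil(100 / 400 * 100), 0), 100) = 25.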
async def start(self):
log.info("Starting wallet")
self.wallet_manager = await WalletManager.from_lbrynet_config(self.conf)
await self.wallet_manager.start()
async def stop(self):
await self.wallet_manager.stop()
self.wallet_manager = None
class WalletServerPaymentsComponent(Component):
component_name = WALLET_SERVER_PAYMENTS_COMPONENT
depends_on = [WALLET_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.usage_payment_service = WalletServerPayer(
max_fee=self.conf.max_wallet_server_fee, analytics_manager=self.component_manager.analytics_manager,
)
@property
def component(self) -> typing.Optional[WalletServerPayer]:
return self.usage_payment_service
async def start(self):
wallet_manager = self.component_manager.get_component(WALLET_COMPONENT)
await self.usage_payment_service.start(wallet_manager.ledger, wallet_manager.default_wallet)
async def stop(self):
await self.usage_payment_service.stop()
async def get_status(self):
return {
'max_fee': self.usage_payment_service.max_fee,
'running': self.usage_payment_service.running
}
class BlobComponent(Component):
component_name = BLOB_COMPONENT
depends_on = [DATABASE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.blob_manager: typing.Optional[BlobManager] = None
@property
def component(self) -> typing.Optional[BlobManager]:
return self.blob_manager
async def start(self):
storage = self.component_manager.get_component(DATABASE_COMPONENT)
data_store = None
if DHT_COMPONENT not in self.component_manager.skip_components:
dht_node: Node = self.component_manager.get_component(DHT_COMPONENT)
if dht_node:
data_store = dht_node.protocol.data_store
blob_dir = os.path.join(self.conf.data_dir, 'blobfiles')
if not os.path.isdir(blob_dir):
os.mkdir(blob_dir)
self.blob_manager = BlobManager(self.component_manager.loop, blob_dir, storage, self.conf, data_store)
return await self.blob_manager.setup()
async def stop(self):
self.blob_manager.stop()
async def get_status(self):
count = 0
if self.blob_manager:
count = len(self.blob_manager.completed_blob_hashes)
return {
'finished_blobs': count,
'connections': {} if not self.blob_manager else self.blob_manager.connection_manager.status
}
class DHTComponent(Component):
component_name = DHT_COMPONENT
depends_on = [UPNP_COMPONENT, DATABASE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.dht_node: typing.Optional[Node] = None
self.external_udp_port = None
self.external_peer_port = None
@property
def component(self) -> typing.Optional[Node]:
return self.dht_node
async def get_status(self):
return {
'node_id': None if not self.dht_node else binascii.hexlify(self.dht_node.protocol.node_id).decode(),
'peers_in_routing_table': 0 if not self.dht_node else len(self.dht_node.protocol.routing_table.get_peers())
}
def get_node_id(self):
node_id_filename = os.path.join(self.conf.data_dir, "node_id")
if os.path.isfile(node_id_filename):
with open(node_id_filename, "r") as node_id_file:
return base58.b58decode(str(node_id_file.read()).strip())
node_id = utils.generate_id()
with open(node_id_filename, "w") as node_id_file:
node_id_file.write(base58.b58encode(node_id).decode())
return node_id
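# Sketch of the round-trip above (utils.generate_id() is assumed to return
# random bytes, matching its use elsewhere in this file):
#
#   node_id = utils.generate_id()
#   encoded = base58.b58encode(node_id).decode()  # persisted to the node_id file
#   assert base58.b58decode(encoded) == node_id   # recovered on the next start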
async def start(self):
log.info("start the dht")
upnp_component = self.component_manager.get_component(UPNP_COMPONENT)
self.external_peer_port = upnp_component.upnp_redirects.get("TCP", self.conf.tcp_port)
self.external_udp_port = upnp_component.upnp_redirects.get("UDP", self.conf.udp_port)
external_ip = upnp_component.external_ip
storage = self.component_manager.get_component(DATABASE_COMPONENT)
if not external_ip:
external_ip, _ = await utils.get_external_ip(self.conf.lbryum_servers)
if not external_ip:
log.warning("failed to get external ip")
self.dht_node = Node(
self.component_manager.loop,
self.component_manager.peer_manager,
node_id=self.get_node_id(),
internal_udp_port=self.conf.udp_port,
udp_port=self.external_udp_port,
external_ip=external_ip,
peer_port=self.external_peer_port,
rpc_timeout=self.conf.node_rpc_timeout,
split_buckets_under_index=self.conf.split_buckets_under_index,
is_bootstrap_node=self.conf.is_bootstrap_node,
storage=storage
)
self.dht_node.start(self.conf.network_interface, self.conf.known_dht_nodes)
log.info("Started the dht")
async def stop(self):
self.dht_node.stop()
class HashAnnouncerComponent(Component):
component_name = HASH_ANNOUNCER_COMPONENT
depends_on = [DHT_COMPONENT, DATABASE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.hash_announcer: typing.Optional[BlobAnnouncer] = None
@property
def component(self) -> typing.Optional[BlobAnnouncer]:
return self.hash_announcer
async def start(self):
storage = self.component_manager.get_component(DATABASE_COMPONENT)
dht_node = self.component_manager.get_component(DHT_COMPONENT)
self.hash_announcer = BlobAnnouncer(self.component_manager.loop, dht_node, storage)
self.hash_announcer.start(self.conf.concurrent_blob_announcers)
log.info("Started blob announcer")
async def stop(self):
self.hash_announcer.stop()
log.info("Stopped blob announcer")
async def get_status(self):
return {
'announce_queue_size': 0 if not self.hash_announcer else len(self.hash_announcer.announce_queue)
}
class FileManagerComponent(Component):
component_name = FILE_MANAGER_COMPONENT
depends_on = [BLOB_COMPONENT, DATABASE_COMPONENT, WALLET_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.file_manager: typing.Optional[FileManager] = None
@property
def component(self) -> typing.Optional[FileManager]:
return self.file_manager
async def get_status(self):
if not self.file_manager:
return
return {
'managed_files': len(self.file_manager.get_filtered()),
}
async def start(self):
blob_manager = self.component_manager.get_component(BLOB_COMPONENT)
storage = self.component_manager.get_component(DATABASE_COMPONENT)
wallet = self.component_manager.get_component(WALLET_COMPONENT)
node = self.component_manager.get_component(DHT_COMPONENT) \
if self.component_manager.has_component(DHT_COMPONENT) else None
log.info('Starting the file manager')
loop = asyncio.get_event_loop()
self.file_manager = FileManager(
loop, self.conf, wallet, storage, self.component_manager.analytics_manager
)
self.file_manager.source_managers['stream'] = StreamManager(
loop, self.conf, blob_manager, wallet, storage, node,
)
if self.component_manager.has_component(LIBTORRENT_COMPONENT):
torrent = self.component_manager.get_component(LIBTORRENT_COMPONENT)
self.file_manager.source_managers['torrent'] = TorrentManager(
loop, self.conf, torrent, storage, self.component_manager.analytics_manager
)
await self.file_manager.start()
log.info('Done setting up file manager')
async def stop(self):
await self.file_manager.stop()
class BackgroundDownloaderComponent(Component):
MIN_PREFIX_COLLIDING_BITS = 8
component_name = BACKGROUND_DOWNLOADER_COMPONENT
depends_on = [DATABASE_COMPONENT, BLOB_COMPONENT, DISK_SPACE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.background_task: typing.Optional[asyncio.Task] = None
self.download_loop_delay_seconds = 60
self.ongoing_download: typing.Optional[asyncio.Task] = None
self.space_manager: typing.Optional[DiskSpaceManager] = None
self.blob_manager: typing.Optional[BlobManager] = None
self.background_downloader: typing.Optional[BackgroundDownloader] = None
self.dht_node: typing.Optional[Node] = None
self.space_available: typing.Optional[int] = None
@property
def is_busy(self):
return bool(self.ongoing_download and not self.ongoing_download.done())
@property
def component(self) -> 'BackgroundDownloaderComponent':
return self
async def get_status(self):
return {'running': self.background_task is not None and not self.background_task.done(),
'available_free_space_mb': self.space_available,
'ongoing_download': self.is_busy}
async def download_blobs_in_background(self):
while True:
self.space_available = await self.space_manager.get_free_space_mb(True)
if not self.is_busy and self.space_available > 10:
self._download_next_close_blob_hash()
await asyncio.sleep(self.download_loop_delay_seconds)
def _download_next_close_blob_hash(self):
node_id = self.dht_node.protocol.node_id
for blob_hash in self.dht_node.stored_blob_hashes:
if blob_hash.hex() in self.blob_manager.completed_blob_hashes:
continue
if utils.get_colliding_prefix_bits(node_id, blob_hash) >= self.MIN_PREFIX_COLLIDING_BITS:
self.ongoing_download = asyncio.create_task(self.background_downloader.download_blobs(blob_hash.hex()))
return
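# utils.get_colliding_prefix_bits is assumed to count the leading bits shared
# by two equal-length ids; a sketch of that semantics:
#
#   def colliding_prefix_bits(a: bytes, b: bytes) -> int:
#       xor = int.from_bytes(a, 'big') ^ int.from_bytes(b, 'big')
#       return len(a) * 8 - xor.bit_length()
#
# With MIN_PREFIX_COLLIDING_BITS = 8, only blobs whose hashes match at least
# the first 8 bits of this node's id are downloaded in the background.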
async def start(self):
self.space_manager: DiskSpaceManager = self.component_manager.get_component(DISK_SPACE_COMPONENT)
if not self.component_manager.has_component(DHT_COMPONENT):
return
self.dht_node = self.component_manager.get_component(DHT_COMPONENT)
self.blob_manager = self.component_manager.get_component(BLOB_COMPONENT)
storage = self.component_manager.get_component(DATABASE_COMPONENT)
self.background_downloader = BackgroundDownloader(self.conf, storage, self.blob_manager, self.dht_node)
self.background_task = asyncio.create_task(self.download_blobs_in_background())
async def stop(self):
if self.ongoing_download and not self.ongoing_download.done():
self.ongoing_download.cancel()
if self.background_task:
self.background_task.cancel()
class DiskSpaceComponent(Component):
component_name = DISK_SPACE_COMPONENT
depends_on = [DATABASE_COMPONENT, BLOB_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.disk_space_manager: typing.Optional[DiskSpaceManager] = None
@property
def component(self) -> typing.Optional[DiskSpaceManager]:
return self.disk_space_manager
async def get_status(self):
if self.disk_space_manager:
space_used = await self.disk_space_manager.get_space_used_mb(cached=True)
return {
'total_used_mb': space_used['total'],
'published_blobs_storage_used_mb': space_used['private_storage'],
'content_blobs_storage_used_mb': space_used['content_storage'],
'seed_blobs_storage_used_mb': space_used['network_storage'],
'running': self.disk_space_manager.running,
}
return {'total_used_mb': 0, 'published_blobs_storage_used_mb': 0,
'content_blobs_storage_used_mb': 0, 'seed_blobs_storage_used_mb': 0, 'running': False}
async def start(self):
db = self.component_manager.get_component(DATABASE_COMPONENT)
blob_manager = self.component_manager.get_component(BLOB_COMPONENT)
self.disk_space_manager = DiskSpaceManager(
self.conf, db, blob_manager,
analytics=self.component_manager.analytics_manager
)
await self.disk_space_manager.start()
async def stop(self):
await self.disk_space_manager.stop()
class TorrentComponent(Component):
component_name = LIBTORRENT_COMPONENT
def __init__(self, component_manager):
super().__init__(component_manager)
self.torrent_session = None
@property
def component(self) -> typing.Optional[TorrentSession]:
return self.torrent_session
async def get_status(self):
if not self.torrent_session:
return
return {
'running': True, # TODO: what to return here?
}
async def start(self):
self.torrent_session = TorrentSession(asyncio.get_event_loop(), None)
await self.torrent_session.bind() # TODO: specify host/port
async def stop(self):
if self.torrent_session:
await self.torrent_session.pause()
class PeerProtocolServerComponent(Component):
component_name = PEER_PROTOCOL_SERVER_COMPONENT
depends_on = [UPNP_COMPONENT, BLOB_COMPONENT, WALLET_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.blob_server: typing.Optional[BlobServer] = None
@property
def component(self) -> typing.Optional[BlobServer]:
return self.blob_server
async def start(self):
log.info("start blob server")
blob_manager: BlobManager = self.component_manager.get_component(BLOB_COMPONENT)
wallet: WalletManager = self.component_manager.get_component(WALLET_COMPONENT)
peer_port = self.conf.tcp_port
address = await wallet.get_unused_address()
self.blob_server = BlobServer(asyncio.get_event_loop(), blob_manager, address)
self.blob_server.start_server(peer_port, interface=self.conf.network_interface)
await self.blob_server.started_listening.wait()
async def stop(self):
if self.blob_server:
self.blob_server.stop_server()
class UPnPComponent(Component):
component_name = UPNP_COMPONENT
def __init__(self, component_manager):
super().__init__(component_manager)
self._int_peer_port = self.conf.tcp_port
self._int_dht_node_port = self.conf.udp_port
self.use_upnp = self.conf.use_upnp
self.upnp: typing.Optional[UPnP] = None
self.upnp_redirects = {}
self.external_ip: typing.Optional[str] = None
self._maintain_redirects_task = None
@property
def component(self) -> 'UPnPComponent':
return self
async def _repeatedly_maintain_redirects(self, now=True):
while True:
if now:
await self._maintain_redirects()
now = True  # only the first pass is skipped when called with now=False
await asyncio.sleep(360)
async def _maintain_redirects(self):
# setup the gateway if necessary
if not self.upnp:
try:
self.upnp = await UPnP.discover(loop=self.component_manager.loop)
log.info("found upnp gateway: %s", self.upnp.gateway.manufacturer_string)
except Exception as err:
log.warning("upnp discovery failed: %s", err)
self.upnp = None
# update the external ip
external_ip = None
if self.upnp:
try:
external_ip = await self.upnp.get_external_ip()
if external_ip != "0.0.0.0" and not self.external_ip:
log.info("got external ip from UPnP: %s", external_ip)
except (asyncio.TimeoutError, UPnPError, NotImplementedError):
pass
if external_ip and not is_valid_public_ipv4(external_ip):
log.warning("UPnP returned a private/reserved ip - %s, checking lbry.com fallback", external_ip)
external_ip, _ = await utils.get_external_ip(self.conf.lbryum_servers)
if self.external_ip and self.external_ip != external_ip:
log.info("external ip changed from %s to %s", self.external_ip, external_ip)
if external_ip:
self.external_ip = external_ip
dht_component = self.component_manager.get_component(DHT_COMPONENT)
if dht_component:
dht_node = dht_component.component
dht_node.protocol.external_ip = external_ip
# assert self.external_ip is not None # TODO: handle going/starting offline
if not self.upnp_redirects and self.upnp: # setup missing redirects
log.info("add UPnP port mappings")
upnp_redirects = {}
if PEER_PROTOCOL_SERVER_COMPONENT not in self.component_manager.skip_components:
try:
upnp_redirects["TCP"] = await self.upnp.get_next_mapping(
self._int_peer_port, "TCP", "LBRY peer port", self._int_peer_port
)
except (UPnPError, asyncio.TimeoutError, NotImplementedError):
pass
if DHT_COMPONENT not in self.component_manager.skip_components:
try:
upnp_redirects["UDP"] = await self.upnp.get_next_mapping(
self._int_dht_node_port, "UDP", "LBRY DHT port", self._int_dht_node_port
)
except (UPnPError, asyncio.TimeoutError, NotImplementedError):
pass
if upnp_redirects:
log.info("set up redirects: %s", upnp_redirects)
self.upnp_redirects.update(upnp_redirects)
elif self.upnp: # check existing redirects are still active
found = set()
mappings = await self.upnp.get_redirects()
for mapping in mappings:
proto = mapping.protocol
if proto in self.upnp_redirects and mapping.external_port == self.upnp_redirects[proto]:
if mapping.lan_address == self.upnp.lan_address:
found.add(proto)
if 'UDP' not in found and DHT_COMPONENT not in self.component_manager.skip_components:
try:
udp_port = await self.upnp.get_next_mapping(self._int_dht_node_port, "UDP", "LBRY DHT port")
self.upnp_redirects['UDP'] = udp_port
log.info("refreshed upnp redirect for dht port: %i", udp_port)
except (asyncio.TimeoutError, UPnPError, NotImplementedError):
del self.upnp_redirects['UDP']
if 'TCP' not in found and PEER_PROTOCOL_SERVER_COMPONENT not in self.component_manager.skip_components:
try:
tcp_port = await self.upnp.get_next_mapping(self._int_peer_port, "TCP", "LBRY peer port")
self.upnp_redirects['TCP'] = tcp_port
log.info("refreshed upnp redirect for peer port: %i", tcp_port)
except (asyncio.TimeoutError, UPnPError, NotImplementedError):
del self.upnp_redirects['TCP']
if ('TCP' in self.upnp_redirects and
PEER_PROTOCOL_SERVER_COMPONENT not in self.component_manager.skip_components) and \
('UDP' in self.upnp_redirects and DHT_COMPONENT not in self.component_manager.skip_components):
if self.upnp_redirects:
log.debug("upnp redirects are still active")
async def start(self):
log.info("detecting external ip")
if not self.use_upnp:
self.external_ip, _ = await utils.get_external_ip(self.conf.lbryum_servers)
return
success = False
await self._maintain_redirects()
if self.upnp:
if not self.upnp_redirects and not all(
x in self.component_manager.skip_components
for x in (DHT_COMPONENT, PEER_PROTOCOL_SERVER_COMPONENT)
):
log.error("failed to setup upnp")
else:
success = True
if self.upnp_redirects:
log.debug("set up upnp port redirects for gateway: %s", self.upnp.gateway.manufacturer_string)
else:
log.error("failed to setup upnp")
if not self.external_ip:
self.external_ip, probed_url = await utils.get_external_ip(self.conf.lbryum_servers)
if self.external_ip:
log.info("detected external ip using %s fallback", probed_url)
if self.component_manager.analytics_manager:
self.component_manager.loop.create_task(
self.component_manager.analytics_manager.send_upnp_setup_success_fail(
success, await self.get_status()
)
)
self._maintain_redirects_task = self.component_manager.loop.create_task(
self._repeatedly_maintain_redirects(now=False)
)
async def stop(self):
if self.upnp_redirects:
log.info("Removing upnp redirects: %s", self.upnp_redirects)
await asyncio.wait([
self.upnp.delete_port_mapping(port, protocol) for protocol, port in self.upnp_redirects.items()
])
if self._maintain_redirects_task and not self._maintain_redirects_task.done():
self._maintain_redirects_task.cancel()
async def get_status(self):
return {
'aioupnp_version': aioupnp_version,
'redirects': self.upnp_redirects,
'gateway': 'No gateway found' if not self.upnp else self.upnp.gateway.manufacturer_string,
'dht_redirect_set': 'UDP' in self.upnp_redirects,
'peer_redirect_set': 'TCP' in self.upnp_redirects,
'external_ip': self.external_ip
}
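# Condensed sketch of the aioupnp calls this component drives (the port
# number is illustrative; each call also appears in the methods above):
#
#   async def sketch():
#       upnp = await UPnP.discover()
#       print(await upnp.get_external_ip())
#       tcp_port = await upnp.get_next_mapping(3333, "TCP", "LBRY peer port")
#       await upnp.delete_port_mapping(tcp_port, "TCP")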
class ExchangeRateManagerComponent(Component):
component_name = EXCHANGE_RATE_MANAGER_COMPONENT
def __init__(self, component_manager):
super().__init__(component_manager)
self.exchange_rate_manager = ExchangeRateManager()
@property
def component(self) -> ExchangeRateManager:
return self.exchange_rate_manager
async def start(self):
self.exchange_rate_manager.start()
async def stop(self):
self.exchange_rate_manager.stop()
class TrackerAnnouncerComponent(Component):
component_name = TRACKER_ANNOUNCER_COMPONENT
depends_on = [FILE_MANAGER_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.file_manager = None
self.announce_task = None
self.tracker_client: typing.Optional[TrackerClient] = None
@property
def component(self):
return self.tracker_client
@property
def running(self):
return self._running and self.announce_task and not self.announce_task.done()
async def announce_forever(self):
while True:
sleep_seconds = 60.0
announce_sd_hashes = []
for file in self.file_manager.get_filtered():
if not file.downloader:
continue
announce_sd_hashes.append(bytes.fromhex(file.sd_hash))
await self.tracker_client.announce_many(*announce_sd_hashes)
await asyncio.sleep(sleep_seconds)
async def start(self):
node = self.component_manager.get_component(DHT_COMPONENT) \
if self.component_manager.has_component(DHT_COMPONENT) else None
node_id = node.protocol.node_id if node else None
self.tracker_client = TrackerClient(node_id, self.conf.tcp_port, lambda: self.conf.tracker_servers)
await self.tracker_client.start()
self.file_manager = self.component_manager.get_component(FILE_MANAGER_COMPONENT)
self.announce_task = asyncio.create_task(self.announce_forever())
async def stop(self):
self.file_manager = None
if self.announce_task and not self.announce_task.done():
self.announce_task.cancel()
self.announce_task = None
self.tracker_client.stop()

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff