initial commit
commit 7240ff6b1c
141 changed files with 19402 additions and 0 deletions
1  .gitignore  vendored  Normal file

@@ -0,0 +1 @@
*.pyc
34  INSTALL  Normal file

@@ -0,0 +1,34 @@
Prerequisites
-------------

To use the LBRYWallet, which enables spending and accepting LBRYcrds in exchange for data, the
LBRYcrd application (insert link to LBRYcrd website here) must be installed and running. If
this is not desired, the testing client, which is built into LBRYnet, can be used to simulate
trading points.

On Ubuntu:

sudo apt-get install libgmp3-dev build-essential python-dev python-pip

Getting the source
------------------

Don't you already have it?

Setting up the environment
--------------------------

It's recommended that you use a virtualenv

sudo apt-get install python-virtualenv
cd <source base directory>
virtualenv .
source bin/activate

(to deactivate the virtualenv, enter 'deactivate')

python setup.py install

This will install all of the libraries and a few applications

For running the file sharing application, see RUNNING
22  LICENSE  Normal file

@@ -0,0 +1,22 @@
Copyright (c) 2015, LBRY, Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
57  README  Normal file

@@ -0,0 +1,57 @@
LBRYnet
=======

LBRYnet is a fully decentralized network for distributing data. It consists of peers uploading
and downloading data from other peers, possibly in exchange for payments, and a distributed hash
table, used by peers to discover other peers.

Overview
--------

On LBRYnet, data is broken into chunks, and each chunk is specified by its sha384 hash sum. This
guarantees that peers can verify the correctness of each chunk without having to know anything
about its contents, and can confidently re-transmit the chunk to other peers. Peers wishing to
transmit chunks to other peers announce to the distributed hash table that they are associated
with the sha384 hash sum in question. When a peer wants to download that chunk from the network,
it asks the distributed hash table which peers are associated with that sha384 hash sum. The
distributed hash table can also be used more generally. It simply stores IP addresses and
ports which are associated with 384-bit numbers, and can be used by any type of application to
help peers find each other. For example, an application for which clients don't know all of the
necessary chunks may use some identifier, chosen by the application, to find clients which do
know all of the necessary chunks.
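The chunk verification described above can be sketched in a few lines of Python. This is an
illustration only, not code from the repository (the real client hashes data incrementally as it
arrives; see lbrynet/core/HashBlob.py):

    import hashlib

    def verify_chunk(chunk_data, expected_hash):
        # A chunk is valid if it hashes to the hex-encoded sha384 sum it was requested by.
        return hashlib.sha384(chunk_data).hexdigest() == expected_hash

    # A peer that shares a chunk announces its hash sum to the DHT...
    chunk = b"example chunk contents"
    announced_hash = hashlib.sha384(chunk).hexdigest()

    # ...and a downloader can verify the chunk it received from any peer
    # without knowing anything about its contents.
    assert verify_chunk(chunk, announced_hash)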
Running
-------

LBRYnet comes with a file sharing application, called 'lbrynet-console', which breaks
files into chunks, encrypts them with a symmetric key, computes their sha384 hash sum, generates
a special file called a 'stream descriptor' containing the hash sums and some other file metadata,
and makes the chunks available for download by other peers. A peer wishing to download the file
must first obtain the 'stream descriptor' and then may open it with his 'lbrynet-console' client,
download all of the chunks by locating peers with the chunks via the DHT, and then combine the
chunks into the original file, according to the metadata included in the 'stream descriptor'.

To install and use this client, see INSTALL and RUNNING

Installation
------------

See INSTALL

Developers
----------

Documentation: doc.lbry.io
Source code: trac.lbry.io/browser

To contribute to the development of LBRYnet or lbrynet-console, contact jimmy@lbry.io

Support
-------

Send all support requests to jimmy@lbry.io

License
-------

See LICENSE
52  RUNNING  Normal file

@@ -0,0 +1,52 @@
To install LBRYnet and lbrynet-console, see INSTALL

lbrynet-console is a console application which makes use of LBRYnet to share files.

In particular, lbrynet-console splits files into encrypted chunks of data compatible with
LBRYnet, groups all metadata into a 'stream descriptor file' which can be sent directly to
others wishing to obtain the file, or can itself be turned into a chunk compatible with
LBRYnet and downloaded via LBRYnet by anyone knowing its sha384 hashsum. lbrynet-console
also acts as a client which reads a stream descriptor file, downloads the chunks of data
specified by the hash sums found in the stream descriptor file, decrypts them according to
metadata found in the stream, and reconstructs the original file. lbrynet-console features
a server so that clients can connect to it and download the chunks and other data obtained
from files created locally and files that have been downloaded from LBRYnet.
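As an illustration of the splitting step described above, the sketch below breaks a file into
fixed-size chunks and collects their sha384 hash sums. The chunk size matches BLOB_SIZE from
lbrynet/conf.py, but the dictionary layout is an assumption made for this example; the real
stream descriptor also carries encryption metadata and is produced by the lbrynet package itself:

    import hashlib

    CHUNK_SIZE = 2 ** 21  # same value as BLOB_SIZE in lbrynet/conf.py

    def describe_file(path):
        # Split the file into chunks and record each chunk's sha384 hash sum.
        # No encryption is performed here, unlike the real application.
        chunk_hashes = []
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                chunk_hashes.append(hashlib.sha384(chunk).hexdigest())
        return {'file_name': path, 'chunk_hashes': chunk_hashes}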
lbrynet-console also has a plugin system. There are two plugins: a live stream proof of
concept which is currently far behind the development of the rest of the application and
therefore will not run, and a plugin which attempts to determine which chunks on the
network should be downloaded in order for the application to turn a profit. It will run,
but its usefulness is extremely limited.

Passing '--help' to lbrynet-console will cause it to print out a quick help message
describing other command line options to the application.

Once the application has been started, the user is presented with a numbered list of
actions which looks something like this:

...
[2] Toggle whether an LBRY File is running
[3] Create an LBRY File from file
[4] Publish a stream descriptor file to the DHT for an LBRY File
...

To perform an action, type the desired number and then hit enter. For example, if you wish
to create an LBRY file from a file as described in the beginning of this document, type 3 and
hit enter.

If the application needs more input in order for the action to be taken, the application
will continue to print prompts for input until it has received what it needs.

For example, when creating an LBRY file from a file, the application needs to know which file
it's supposed to use to create the LBRY file, so the user will be prompted for it:

File name:

The user should input the desired file name and hit enter, at which point the application
will go about splitting the file and making it available on the network.

Some actions will produce sub-menus of actions, which work the same way.

A more detailed user guide is available at doc.lbry.io

Any issues may be reported to jimmy@lbry.io
332  ez_setup.py  Normal file

@@ -0,0 +1,332 @@
|
|||
#!/usr/bin/env python
|
||||
"""Bootstrap setuptools installation
|
||||
|
||||
To use setuptools in your package's setup.py, include this
|
||||
file in the same directory and add this to the top of your setup.py::
|
||||
|
||||
from ez_setup import use_setuptools
|
||||
use_setuptools()
|
||||
|
||||
To require a specific version of setuptools, set a download
|
||||
mirror, or use an alternate download directory, simply supply
|
||||
the appropriate options to ``use_setuptools()``.
|
||||
|
||||
This file can also be run as a script to install or upgrade setuptools.
|
||||
"""
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
import tempfile
|
||||
import zipfile
|
||||
import optparse
|
||||
import subprocess
|
||||
import platform
|
||||
import textwrap
|
||||
import contextlib
|
||||
|
||||
from distutils import log
|
||||
|
||||
try:
|
||||
from urllib.request import urlopen
|
||||
except ImportError:
|
||||
from urllib2 import urlopen
|
||||
|
||||
try:
|
||||
from site import USER_SITE
|
||||
except ImportError:
|
||||
USER_SITE = None
|
||||
|
||||
DEFAULT_VERSION = "4.0.1"
|
||||
DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/"
|
||||
|
||||
def _python_cmd(*args):
|
||||
"""
|
||||
Return True if the command succeeded.
|
||||
"""
|
||||
args = (sys.executable,) + args
|
||||
return subprocess.call(args) == 0
|
||||
|
||||
|
||||
def _install(archive_filename, install_args=()):
|
||||
with archive_context(archive_filename):
|
||||
# installing
|
||||
log.warn('Installing Setuptools')
|
||||
if not _python_cmd('setup.py', 'install', *install_args):
|
||||
log.warn('Something went wrong during the installation.')
|
||||
log.warn('See the error message above.')
|
||||
# exitcode will be 2
|
||||
return 2
|
||||
|
||||
|
||||
def _build_egg(egg, archive_filename, to_dir):
|
||||
with archive_context(archive_filename):
|
||||
# building an egg
|
||||
log.warn('Building a Setuptools egg in %s', to_dir)
|
||||
_python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir)
|
||||
# returning the result
|
||||
log.warn(egg)
|
||||
if not os.path.exists(egg):
|
||||
raise IOError('Could not build the egg.')
|
||||
|
||||
|
||||
class ContextualZipFile(zipfile.ZipFile):
|
||||
"""
|
||||
Supplement ZipFile class to support context manager for Python 2.6
|
||||
"""
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, type, value, traceback):
|
||||
self.close()
|
||||
|
||||
def __new__(cls, *args, **kwargs):
|
||||
"""
|
||||
Construct a ZipFile or ContextualZipFile as appropriate
|
||||
"""
|
||||
if hasattr(zipfile.ZipFile, '__exit__'):
|
||||
return zipfile.ZipFile(*args, **kwargs)
|
||||
return super(ContextualZipFile, cls).__new__(cls)
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def archive_context(filename):
|
||||
# extracting the archive
|
||||
tmpdir = tempfile.mkdtemp()
|
||||
log.warn('Extracting in %s', tmpdir)
|
||||
old_wd = os.getcwd()
|
||||
try:
|
||||
os.chdir(tmpdir)
|
||||
with ContextualZipFile(filename) as archive:
|
||||
archive.extractall()
|
||||
|
||||
# going in the directory
|
||||
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
|
||||
os.chdir(subdir)
|
||||
log.warn('Now working in %s', subdir)
|
||||
yield
|
||||
|
||||
finally:
|
||||
os.chdir(old_wd)
|
||||
shutil.rmtree(tmpdir)
|
||||
|
||||
|
||||
def _do_download(version, download_base, to_dir, download_delay):
|
||||
egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg'
|
||||
% (version, sys.version_info[0], sys.version_info[1]))
|
||||
if not os.path.exists(egg):
|
||||
archive = download_setuptools(version, download_base,
|
||||
to_dir, download_delay)
|
||||
_build_egg(egg, archive, to_dir)
|
||||
sys.path.insert(0, egg)
|
||||
|
||||
# Remove previously-imported pkg_resources if present (see
|
||||
# https://bitbucket.org/pypa/setuptools/pull-request/7/ for details).
|
||||
if 'pkg_resources' in sys.modules:
|
||||
del sys.modules['pkg_resources']
|
||||
|
||||
import setuptools
|
||||
setuptools.bootstrap_install_from = egg
|
||||
|
||||
|
||||
def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
|
||||
to_dir=os.curdir, download_delay=15):
|
||||
to_dir = os.path.abspath(to_dir)
|
||||
rep_modules = 'pkg_resources', 'setuptools'
|
||||
imported = set(sys.modules).intersection(rep_modules)
|
||||
try:
|
||||
import pkg_resources
|
||||
except ImportError:
|
||||
return _do_download(version, download_base, to_dir, download_delay)
|
||||
try:
|
||||
pkg_resources.require("setuptools>=" + version)
|
||||
return
|
||||
except pkg_resources.DistributionNotFound:
|
||||
return _do_download(version, download_base, to_dir, download_delay)
|
||||
except pkg_resources.VersionConflict as VC_err:
|
||||
if imported:
|
||||
msg = textwrap.dedent("""
|
||||
The required version of setuptools (>={version}) is not available,
|
||||
and can't be installed while this script is running. Please
|
||||
install a more recent version first, using
|
||||
'easy_install -U setuptools'.
|
||||
|
||||
(Currently using {VC_err.args[0]!r})
|
||||
""").format(VC_err=VC_err, version=version)
|
||||
sys.stderr.write(msg)
|
||||
sys.exit(2)
|
||||
|
||||
# otherwise, reload ok
|
||||
del pkg_resources, sys.modules['pkg_resources']
|
||||
return _do_download(version, download_base, to_dir, download_delay)
|
||||
|
||||
def _clean_check(cmd, target):
|
||||
"""
|
||||
Run the command to download target. If the command fails, clean up before
|
||||
re-raising the error.
|
||||
"""
|
||||
try:
|
||||
subprocess.check_call(cmd)
|
||||
except subprocess.CalledProcessError:
|
||||
if os.access(target, os.F_OK):
|
||||
os.unlink(target)
|
||||
raise
|
||||
|
||||
def download_file_powershell(url, target):
|
||||
"""
|
||||
Download the file at url to target using Powershell (which will validate
|
||||
trust). Raise an exception if the command cannot complete.
|
||||
"""
|
||||
target = os.path.abspath(target)
|
||||
ps_cmd = (
|
||||
"[System.Net.WebRequest]::DefaultWebProxy.Credentials = "
|
||||
"[System.Net.CredentialCache]::DefaultCredentials; "
|
||||
"(new-object System.Net.WebClient).DownloadFile(%(url)r, %(target)r)"
|
||||
% vars()
|
||||
)
|
||||
cmd = [
|
||||
'powershell',
|
||||
'-Command',
|
||||
ps_cmd,
|
||||
]
|
||||
_clean_check(cmd, target)
|
||||
|
||||
def has_powershell():
|
||||
if platform.system() != 'Windows':
|
||||
return False
|
||||
cmd = ['powershell', '-Command', 'echo test']
|
||||
with open(os.path.devnull, 'wb') as devnull:
|
||||
try:
|
||||
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
|
||||
except Exception:
|
||||
return False
|
||||
return True
|
||||
|
||||
download_file_powershell.viable = has_powershell
|
||||
|
||||
def download_file_curl(url, target):
|
||||
cmd = ['curl', url, '--silent', '--output', target]
|
||||
_clean_check(cmd, target)
|
||||
|
||||
def has_curl():
|
||||
cmd = ['curl', '--version']
|
||||
with open(os.path.devnull, 'wb') as devnull:
|
||||
try:
|
||||
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
|
||||
except Exception:
|
||||
return False
|
||||
return True
|
||||
|
||||
download_file_curl.viable = has_curl
|
||||
|
||||
def download_file_wget(url, target):
|
||||
cmd = ['wget', url, '--quiet', '--output-document', target]
|
||||
_clean_check(cmd, target)
|
||||
|
||||
def has_wget():
|
||||
cmd = ['wget', '--version']
|
||||
with open(os.path.devnull, 'wb') as devnull:
|
||||
try:
|
||||
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
|
||||
except Exception:
|
||||
return False
|
||||
return True
|
||||
|
||||
download_file_wget.viable = has_wget
|
||||
|
||||
def download_file_insecure(url, target):
|
||||
"""
|
||||
Use Python to download the file, even though it cannot authenticate the
|
||||
connection.
|
||||
"""
|
||||
src = urlopen(url)
|
||||
try:
|
||||
# Read all the data in one block.
|
||||
data = src.read()
|
||||
finally:
|
||||
src.close()
|
||||
|
||||
# Write all the data in one block to avoid creating a partial file.
|
||||
with open(target, "wb") as dst:
|
||||
dst.write(data)
|
||||
|
||||
download_file_insecure.viable = lambda: True
|
||||
|
||||
def get_best_downloader():
|
||||
downloaders = (
|
||||
download_file_powershell,
|
||||
download_file_curl,
|
||||
download_file_wget,
|
||||
download_file_insecure,
|
||||
)
|
||||
viable_downloaders = (dl for dl in downloaders if dl.viable())
|
||||
return next(viable_downloaders, None)
|
||||
|
||||
def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
|
||||
to_dir=os.curdir, delay=15, downloader_factory=get_best_downloader):
|
||||
"""
|
||||
Download setuptools from a specified location and return its filename
|
||||
|
||||
`version` should be a valid setuptools version number that is available
|
||||
as an egg for download under the `download_base` URL (which should end
|
||||
with a '/'). `to_dir` is the directory where the egg will be downloaded.
|
||||
`delay` is the number of seconds to pause before an actual download
|
||||
attempt.
|
||||
|
||||
``downloader_factory`` should be a function taking no arguments and
|
||||
returning a function for downloading a URL to a target.
|
||||
"""
|
||||
# making sure we use the absolute path
|
||||
to_dir = os.path.abspath(to_dir)
|
||||
zip_name = "setuptools-%s.zip" % version
|
||||
url = download_base + zip_name
|
||||
saveto = os.path.join(to_dir, zip_name)
|
||||
if not os.path.exists(saveto): # Avoid repeated downloads
|
||||
log.warn("Downloading %s", url)
|
||||
downloader = downloader_factory()
|
||||
downloader(url, saveto)
|
||||
return os.path.realpath(saveto)
|
||||
|
||||
def _build_install_args(options):
|
||||
"""
|
||||
Build the arguments to 'python setup.py install' on the setuptools package
|
||||
"""
|
||||
return ['--user'] if options.user_install else []
|
||||
|
||||
def _parse_args():
|
||||
"""
|
||||
Parse the command line for options
|
||||
"""
|
||||
parser = optparse.OptionParser()
|
||||
parser.add_option(
|
||||
'--user', dest='user_install', action='store_true', default=False,
|
||||
help='install in user site package (requires Python 2.6 or later)')
|
||||
parser.add_option(
|
||||
'--download-base', dest='download_base', metavar="URL",
|
||||
default=DEFAULT_URL,
|
||||
help='alternative URL from where to download the setuptools package')
|
||||
parser.add_option(
|
||||
'--insecure', dest='downloader_factory', action='store_const',
|
||||
const=lambda: download_file_insecure, default=get_best_downloader,
|
||||
help='Use internal, non-validating downloader'
|
||||
)
|
||||
parser.add_option(
|
||||
'--version', help="Specify which version to download",
|
||||
default=DEFAULT_VERSION,
|
||||
)
|
||||
options, args = parser.parse_args()
|
||||
# positional arguments are ignored
|
||||
return options
|
||||
|
||||
def main():
|
||||
"""Install or upgrade setuptools and EasyInstall"""
|
||||
options = _parse_args()
|
||||
archive = download_setuptools(
|
||||
version=options.version,
|
||||
download_base=options.download_base,
|
||||
downloader_factory=options.downloader_factory,
|
||||
)
|
||||
return _install(archive, _build_install_args(options))
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(main())
|
0  lbrynet/__init__.py  Normal file
25  lbrynet/conf.py  Normal file

@@ -0,0 +1,25 @@
"""
Some network wide and also application specific parameters
"""


import os


MAX_HANDSHAKE_SIZE = 2**16
MAX_REQUEST_SIZE = 2**16
MAX_BLOB_REQUEST_SIZE = 2**16
MAX_RESPONSE_INFO_SIZE = 2**16
MAX_BLOB_INFOS_TO_REQUEST = 20
BLOBFILES_DIR = ".blobfiles"
BLOB_SIZE = 2**21
MIN_BLOB_DATA_PAYMENT_RATE = .5  # points/megabyte
MIN_BLOB_INFO_PAYMENT_RATE = 2.0  # points/1000 infos
MIN_VALUABLE_BLOB_INFO_PAYMENT_RATE = 5.0  # points/1000 infos
MIN_VALUABLE_BLOB_HASH_PAYMENT_RATE = 5.0  # points/1000 infos
MAX_CONNECTIONS_PER_STREAM = 5

POINTTRADER_SERVER = 'http://ec2-54-187-192-68.us-west-2.compute.amazonaws.com:2424'
#POINTTRADER_SERVER = 'http://127.0.0.1:2424'

CRYPTSD_FILE_EXTENSION = ".cryptsd"
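As a quick orientation to two of the constants above, the arithmetic below shows how many blobs a
file occupies and the minimum payment it would command at MIN_BLOB_DATA_PAYMENT_RATE, assuming
'megabyte' here means 2**20 bytes. The numbers are computed from the values in this file, not
taken from any documentation:

    BLOB_SIZE = 2 ** 21                  # 2 MiB per blob
    MIN_BLOB_DATA_PAYMENT_RATE = .5      # points per megabyte

    file_size = 100 * 2 ** 20            # a 100 MiB file...
    num_blobs = -(-file_size // BLOB_SIZE)                                # ...splits into 50 blobs
    min_cost = (file_size / float(2 ** 20)) * MIN_BLOB_DATA_PAYMENT_RATE  # and costs at least 50.0 points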
18  lbrynet/core/BlobInfo.py  Normal file

@@ -0,0 +1,18 @@
class BlobInfo(object):
    """
    This structure is used to represent the metadata of a blob.

    @ivar blob_hash: The sha384 hashsum of the blob's data.
    @type blob_hash: string, hex-encoded

    @ivar blob_num: For streams, the position of the blob in the stream.
    @type blob_num: integer

    @ivar length: The length of the blob in bytes.
    @type length: integer
    """

    def __init__(self, blob_hash, blob_num, length):
        self.blob_hash = blob_hash
        self.blob_num = blob_num
        self.length = length
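A hypothetical usage example of this structure; the hash below is computed on the spot and does
not refer to a real blob on the network:

    import hashlib
    from lbrynet.core.BlobInfo import BlobInfo

    data = b"some blob contents"
    info = BlobInfo(blob_hash=hashlib.sha384(data).hexdigest(), blob_num=0, length=len(data))
    print("%s %d %d" % (info.blob_hash, info.blob_num, info.length))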
438  lbrynet/core/BlobManager.py  Normal file

@@ -0,0 +1,438 @@
|
|||
import logging
|
||||
import os
|
||||
import leveldb
|
||||
import time
|
||||
import json
|
||||
from twisted.internet import threads, defer, reactor, task
|
||||
from twisted.python.failure import Failure
|
||||
from lbrynet.core.HashBlob import BlobFile, TempBlob, BlobFileCreator, TempBlobCreator
|
||||
from lbrynet.core.server.DHTHashAnnouncer import DHTHashSupplier
|
||||
from lbrynet.core.utils import is_valid_blobhash
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj
|
||||
|
||||
|
||||
class BlobManager(DHTHashSupplier):
|
||||
"""This class is subclassed by classes which keep track of which blobs are available
|
||||
and which give access to new/existing blobs"""
|
||||
def __init__(self, hash_announcer):
|
||||
DHTHashSupplier.__init__(self, hash_announcer)
|
||||
|
||||
def setup(self):
|
||||
pass
|
||||
|
||||
def get_blob(self, blob_hash, upload_allowed, length):
|
||||
pass
|
||||
|
||||
def get_blob_creator(self):
|
||||
pass
|
||||
|
||||
def _make_new_blob(self, blob_hash, upload_allowed, length):
|
||||
pass
|
||||
|
||||
def blob_completed(self, blob, next_announce_time=None):
|
||||
pass
|
||||
|
||||
def completed_blobs(self, blobs_to_check):
|
||||
pass
|
||||
|
||||
def hashes_to_announce(self):
|
||||
pass
|
||||
|
||||
def creator_finished(self, blob_creator):
|
||||
pass
|
||||
|
||||
def delete_blob(self, blob_hash):
|
||||
pass
|
||||
|
||||
def get_blob_length(self, blob_hash):
|
||||
pass
|
||||
|
||||
def check_consistency(self):
|
||||
pass
|
||||
|
||||
def blob_requested(self, blob_hash):
|
||||
pass
|
||||
|
||||
def blob_downloaded(self, blob_hash):
|
||||
pass
|
||||
|
||||
def blob_searched_on(self, blob_hash):
|
||||
pass
|
||||
|
||||
def blob_paid_for(self, blob_hash, amount):
|
||||
pass
|
||||
|
||||
|
||||
class DiskBlobManager(BlobManager):
|
||||
"""This class stores blobs on the hard disk"""
|
||||
def __init__(self, hash_announcer, blob_dir, db_dir):
|
||||
BlobManager.__init__(self, hash_announcer)
|
||||
self.blob_dir = blob_dir
|
||||
self.db_dir = db_dir
|
||||
self.db = None
|
||||
self.blob_type = BlobFile
|
||||
self.blob_creator_type = BlobFileCreator
|
||||
self.blobs = {}
|
||||
self.blob_hashes_to_delete = {} # {blob_hash: being_deleted (True/False)}
|
||||
self._next_manage_call = None
|
||||
|
||||
def setup(self):
|
||||
d = threads.deferToThread(self._open_db)
|
||||
d.addCallback(lambda _: self._manage())
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
if self._next_manage_call is not None and self._next_manage_call.active():
|
||||
self._next_manage_call.cancel()
|
||||
self._next_manage_call = None
|
||||
self.db = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_blob(self, blob_hash, upload_allowed, length=None):
|
||||
"""Return a blob identified by blob_hash, which may be a new blob or a blob that is already on the hard disk"""
|
||||
# TODO: if blob.upload_allowed and upload_allowed is False, change upload_allowed in blob and on disk
|
||||
if blob_hash in self.blobs:
|
||||
return defer.succeed(self.blobs[blob_hash])
|
||||
return self._make_new_blob(blob_hash, upload_allowed, length)
|
||||
|
||||
def get_blob_creator(self):
|
||||
return self.blob_creator_type(self, self.blob_dir)
|
||||
|
||||
def _make_new_blob(self, blob_hash, upload_allowed, length=None):
|
||||
blob = self.blob_type(self.blob_dir, blob_hash, upload_allowed, length)
|
||||
self.blobs[blob_hash] = blob
|
||||
d = threads.deferToThread(self._completed_blobs, [blob_hash])
|
||||
|
||||
def check_completed(completed_blobs):
|
||||
|
||||
def set_length(length):
|
||||
blob.length = length
|
||||
|
||||
if len(completed_blobs) == 1 and completed_blobs[0] == blob_hash:
|
||||
blob.verified = True
|
||||
inner_d = threads.deferToThread(self._get_blob_length, blob_hash)
|
||||
inner_d.addCallback(set_length)
|
||||
inner_d.addCallback(lambda _: blob)
|
||||
else:
|
||||
inner_d = defer.succeed(blob)
|
||||
return inner_d
|
||||
|
||||
d.addCallback(check_completed)
|
||||
return d
|
||||
|
||||
def blob_completed(self, blob, next_announce_time=None):
|
||||
if next_announce_time is None:
|
||||
next_announce_time = time.time()
|
||||
return threads.deferToThread(self._add_completed_blob, blob.blob_hash, blob.length,
|
||||
time.time(), next_announce_time)
|
||||
|
||||
def completed_blobs(self, blobs_to_check):
|
||||
return threads.deferToThread(self._completed_blobs, blobs_to_check)
|
||||
|
||||
def hashes_to_announce(self):
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
return threads.deferToThread(self._get_blobs_to_announce, next_announce_time)
|
||||
|
||||
def creator_finished(self, blob_creator):
|
||||
logging.debug("blob_creator.blob_hash: %s", blob_creator.blob_hash)
|
||||
assert blob_creator.blob_hash is not None
|
||||
assert blob_creator.blob_hash not in self.blobs
|
||||
assert blob_creator.length is not None
|
||||
new_blob = self.blob_type(self.blob_dir, blob_creator.blob_hash, True, blob_creator.length)
|
||||
new_blob.verified = True
|
||||
self.blobs[blob_creator.blob_hash] = new_blob
|
||||
if self.hash_announcer is not None:
|
||||
self.hash_announcer.immediate_announce([blob_creator.blob_hash])
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
d = self.blob_completed(new_blob, next_announce_time)
|
||||
else:
|
||||
d = self.blob_completed(new_blob)
|
||||
return d
|
||||
|
||||
def delete_blobs(self, blob_hashes):
|
||||
for blob_hash in blob_hashes:
|
||||
if not blob_hash in self.blob_hashes_to_delete:
|
||||
self.blob_hashes_to_delete[blob_hash] = False
|
||||
|
||||
def update_all_last_verified_dates(self, timestamp):
|
||||
return threads.deferToThread(self._update_all_last_verified_dates, timestamp)
|
||||
|
||||
def immediate_announce_all_blobs(self):
|
||||
d = threads.deferToThread(self._get_all_verified_blob_hashes)
|
||||
d.addCallback(self.hash_announcer.immediate_announce)
|
||||
return d
|
||||
|
||||
def get_blob_length(self, blob_hash):
|
||||
return threads.deferToThread(self._get_blob_length, blob_hash)
|
||||
|
||||
def check_consistency(self):
|
||||
return threads.deferToThread(self._check_consistency)
|
||||
|
||||
def _manage(self):
|
||||
from twisted.internet import reactor
|
||||
|
||||
d = self._delete_blobs_marked_for_deletion()
|
||||
|
||||
def set_next_manage_call():
|
||||
self._next_manage_call = reactor.callLater(1, self._manage)
|
||||
|
||||
d.addCallback(lambda _: set_next_manage_call())
|
||||
|
||||
def _delete_blobs_marked_for_deletion(self):
|
||||
|
||||
def remove_from_list(b_h):
|
||||
del self.blob_hashes_to_delete[b_h]
|
||||
return b_h
|
||||
|
||||
def set_not_deleting(err, b_h):
|
||||
logging.warning("Failed to delete blob %s. Reason: %s", str(b_h), err.getErrorMessage())
|
||||
self.blob_hashes_to_delete[b_h] = False
|
||||
return err
|
||||
|
||||
def delete_from_db(result):
|
||||
b_hs = [r[1] for r in result if r[0] is True]
|
||||
if b_hs:
|
||||
d = threads.deferToThread(self._delete_blobs_from_db, b_hs)
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
|
||||
def log_error(err):
|
||||
logging.warning("Failed to delete completed blobs from the db: %s", err.getErrorMessage())
|
||||
|
||||
d.addErrback(log_error)
|
||||
return d
|
||||
|
||||
def delete(blob, b_h):
|
||||
d = blob.delete()
|
||||
d.addCallbacks(lambda _: remove_from_list(b_h), set_not_deleting, errbackArgs=(b_h,))
|
||||
return d
|
||||
|
||||
ds = []
|
||||
for blob_hash, being_deleted in self.blob_hashes_to_delete.items():
|
||||
if being_deleted is False:
|
||||
self.blob_hashes_to_delete[blob_hash] = True
|
||||
d = self.get_blob(blob_hash, True)
|
||||
d.addCallbacks(delete, set_not_deleting, callbackArgs=(blob_hash,), errbackArgs=(blob_hash,))
|
||||
ds.append(d)
|
||||
dl = defer.DeferredList(ds, consumeErrors=True)
|
||||
dl.addCallback(delete_from_db)
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
######### database calls #########
|
||||
|
||||
def _open_db(self):
|
||||
self.db = leveldb.LevelDB(os.path.join(self.db_dir, "blobs.db"))
|
||||
|
||||
def _add_completed_blob(self, blob_hash, length, timestamp, next_announce_time=None):
|
||||
logging.debug("Adding a completed blob. blob_hash=%s, length=%s", blob_hash, str(length))
|
||||
if next_announce_time is None:
|
||||
next_announce_time = timestamp
|
||||
self.db.Put(blob_hash, json.dumps((length, timestamp, next_announce_time)), sync=True)
|
||||
|
||||
def _completed_blobs(self, blobs_to_check):
|
||||
blobs = []
|
||||
for b in blobs_to_check:
|
||||
if is_valid_blobhash(b):
|
||||
try:
|
||||
length, verified_time, next_announce_time = json.loads(self.db.Get(b))
|
||||
except KeyError:
|
||||
continue
|
||||
file_path = os.path.join(self.blob_dir, b)
|
||||
if os.path.isfile(file_path):
|
||||
if verified_time > os.path.getctime(file_path):
|
||||
blobs.append(b)
|
||||
return blobs
|
||||
|
||||
def _get_blob_length(self, blob):
|
||||
length, verified_time, next_announce_time = json.loads(self.db.Get(blob))
|
||||
return length
|
||||
|
||||
def _update_blob_verified_timestamp(self, blob, timestamp):
|
||||
length, old_verified_time, next_announce_time = json.loads(self.db.Get(blob))
|
||||
self.db.Put(blob, json.dumps((length, timestamp, next_announce_time)), sync=True)
|
||||
|
||||
def _get_blobs_to_announce(self, next_announce_time):
|
||||
# TODO: See if the following would be better for handling announce times:
|
||||
# TODO: Have a separate db for them, and read the whole thing into memory
|
||||
# TODO: on startup, and then write changes to db when they happen
|
||||
blobs = []
|
||||
batch = leveldb.WriteBatch()
|
||||
current_time = time.time()
|
||||
for blob_hash, blob_info in self.db.RangeIter():
|
||||
length, verified_time, announce_time = json.loads(blob_info)
|
||||
if announce_time < current_time:
|
||||
batch.Put(blob_hash, json.dumps((length, verified_time, next_announce_time)))
|
||||
blobs.append(blob_hash)
|
||||
self.db.Write(batch, sync=True)
|
||||
return blobs
|
||||
|
||||
def _update_all_last_verified_dates(self, timestamp):
|
||||
batch = leveldb.WriteBatch()
|
||||
for blob_hash, blob_info in self.db.RangeIter():
|
||||
length, verified_time, announce_time = json.loads(blob_info)
|
||||
batch.Put(blob_hash, json.dumps((length, timestamp, announce_time)))
|
||||
self.db.Write(batch, sync=True)
|
||||
|
||||
def _delete_blobs_from_db(self, blob_hashes):
|
||||
batch = leveldb.WriteBatch()
|
||||
for blob_hash in blob_hashes:
|
||||
batch.Delete(blob_hash)
|
||||
self.db.Write(batch, sync=True)
|
||||
|
||||
def _check_consistency(self):
|
||||
batch = leveldb.WriteBatch()
|
||||
current_time = time.time()
|
||||
for blob_hash, blob_info in self.db.RangeIter():
|
||||
length, verified_time, announce_time = json.loads(blob_info)
|
||||
file_path = os.path.join(self.blob_dir, blob_hash)
|
||||
if os.path.isfile(file_path):
|
||||
if verified_time < os.path.getctime(file_path):
|
||||
h = get_lbry_hash_obj()
|
||||
len_so_far = 0
|
||||
f = open(file_path, 'rb')
|
||||
while True:
|
||||
data = f.read(2**12)
|
||||
if not data:
|
||||
break
|
||||
h.update(data)
|
||||
len_so_far += len(data)
|
||||
if len_so_far == length and h.hexdigest() == blob_hash:
|
||||
batch.Put(blob_hash, json.dumps((length, current_time, announce_time)))
|
||||
self.db.Write(batch, sync=True)
|
||||
|
||||
def _get_all_verified_blob_hashes(self):
|
||||
blob_hashes = []
|
||||
for blob_hash, blob_info in self.db.RangeIter():
|
||||
length, verified_time, announce_time = json.loads(blob_info)
|
||||
file_path = os.path.join(self.blob_dir, blob_hash)
|
||||
if os.path.isfile(file_path):
|
||||
if verified_time > os.path.getctime(file_path):
|
||||
blob_hashes.append(blob_hash)
|
||||
return blob_hashes
|
||||
|
||||
|
||||
class TempBlobManager(BlobManager):
|
||||
"""This class stores blobs in memory"""
|
||||
def __init__(self, hash_announcer):
|
||||
BlobManager.__init__(self, hash_announcer)
|
||||
self.blob_type = TempBlob
|
||||
self.blob_creator_type = TempBlobCreator
|
||||
self.blobs = {}
|
||||
self.blob_next_announces = {}
|
||||
self.blob_hashes_to_delete = {} # {blob_hash: being_deleted (True/False)}
|
||||
self._next_manage_call = None
|
||||
|
||||
def setup(self):
|
||||
self._manage()
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
if self._next_manage_call is not None and self._next_manage_call.active():
|
||||
self._next_manage_call.cancel()
|
||||
self._next_manage_call = None
|
||||
|
||||
def get_blob(self, blob_hash, upload_allowed, length=None):
|
||||
if blob_hash in self.blobs:
|
||||
return defer.succeed(self.blobs[blob_hash])
|
||||
return self._make_new_blob(blob_hash, upload_allowed, length)
|
||||
|
||||
def get_blob_creator(self):
|
||||
return self.blob_creator_type(self)
|
||||
|
||||
def _make_new_blob(self, blob_hash, upload_allowed, length=None):
|
||||
blob = self.blob_type(blob_hash, upload_allowed, length)
|
||||
self.blobs[blob_hash] = blob
|
||||
return defer.succeed(blob)
|
||||
|
||||
def blob_completed(self, blob, next_announce_time=None):
|
||||
if next_announce_time is None:
|
||||
next_announce_time = time.time()
|
||||
self.blob_next_announces[blob.blob_hash] = next_announce_time
|
||||
return defer.succeed(True)
|
||||
|
||||
def completed_blobs(self, blobs_to_check):
|
||||
blobs = [b.blob_hash for b in self.blobs.itervalues() if b.blob_hash in blobs_to_check and b.is_validated()]
|
||||
return defer.succeed(blobs)
|
||||
|
||||
def hashes_to_announce(self):
|
||||
now = time.time()
|
||||
blobs = [blob_hash for blob_hash, announce_time in self.blob_next_announces.iteritems() if announce_time < now]
|
||||
next_announce_time = now + self.hash_reannounce_time
|
||||
for b in blobs:
|
||||
self.blob_next_announces[b] = next_announce_time
|
||||
return defer.succeed(blobs)
|
||||
|
||||
def creator_finished(self, blob_creator):
|
||||
assert blob_creator.blob_hash is not None
|
||||
assert blob_creator.blob_hash not in self.blobs
|
||||
assert blob_creator.length is not None
|
||||
new_blob = self.blob_type(blob_creator.blob_hash, True, blob_creator.length)
|
||||
new_blob.verified = True
|
||||
new_blob.data_buffer = blob_creator.data_buffer
|
||||
new_blob.length = blob_creator.length
|
||||
self.blobs[blob_creator.blob_hash] = new_blob
|
||||
if self.hash_announcer is not None:
|
||||
self.hash_announcer.immediate_announce([blob_creator.blob_hash])
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
d = self.blob_completed(new_blob, next_announce_time)
|
||||
else:
|
||||
d = self.blob_completed(new_blob)
|
||||
d.addCallback(lambda _: new_blob)
|
||||
return d
|
||||
|
||||
def delete_blobs(self, blob_hashes):
|
||||
for blob_hash in blob_hashes:
|
||||
if not blob_hash in self.blob_hashes_to_delete:
|
||||
self.blob_hashes_to_delete[blob_hash] = False
|
||||
|
||||
def get_blob_length(self, blob_hash):
|
||||
if blob_hash in self.blobs:
|
||||
if self.blobs[blob_hash].length is not None:
|
||||
return defer.succeed(self.blobs[blob_hash].length)
|
||||
return defer.fail(ValueError("No such blob hash is known"))
|
||||
|
||||
def immediate_announce_all_blobs(self):
|
||||
return self.hash_announcer.immediate_announce(self.blobs.iterkeys())
|
||||
|
||||
def _manage(self):
|
||||
from twisted.internet import reactor
|
||||
|
||||
d = self._delete_blobs_marked_for_deletion()
|
||||
|
||||
def set_next_manage_call():
|
||||
logging.info("Setting the next manage call in %s", str(self))
|
||||
self._next_manage_call = reactor.callLater(1, self._manage)
|
||||
|
||||
d.addCallback(lambda _: set_next_manage_call())
|
||||
|
||||
def _delete_blobs_marked_for_deletion(self):
|
||||
|
||||
def remove_from_list(b_h):
|
||||
del self.blob_hashes_to_delete[b_h]
|
||||
logging.info("Deleted blob %s", blob_hash)
|
||||
return b_h
|
||||
|
||||
def set_not_deleting(err, b_h):
|
||||
logging.warning("Failed to delete blob %s. Reason: %s", str(b_h), err.getErrorMessage())
|
||||
self.blob_hashes_to_delete[b_h] = False
|
||||
return b_h
|
||||
|
||||
ds = []
|
||||
for blob_hash, being_deleted in self.blob_hashes_to_delete.items():
|
||||
if being_deleted is False:
|
||||
if blob_hash in self.blobs:
|
||||
self.blob_hashes_to_delete[blob_hash] = True
|
||||
logging.info("Found a blob marked for deletion: %s", blob_hash)
|
||||
blob = self.blobs[blob_hash]
|
||||
d = blob.delete()
|
||||
|
||||
d.addCallbacks(lambda _: remove_from_list(blob_hash), set_not_deleting,
|
||||
errbackArgs=(blob_hash,))
|
||||
|
||||
ds.append(d)
|
||||
else:
|
||||
remove_from_list(blob_hash)
|
||||
d = defer.fail(Failure(ValueError("No such blob known")))
|
||||
logging.warning("Blob %s cannot be deleted because it is unknown")
|
||||
ds.append(d)
|
||||
return defer.DeferredList(ds)
|
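DiskBlobManager above persists one record per blob in leveldb, keyed by the blob hash, with a
JSON-encoded (length, last_verified_time, next_announce_time) tuple as the value. The sketch below
shows only that storage convention in isolation; the database path and the placeholder hash are
made up for the example:

    import json
    import time
    import leveldb

    db = leveldb.LevelDB("/tmp/example_blobs.db")   # hypothetical path

    blob_hash = "ab" * 48       # placeholder 96-character hex string, not a real hash
    length = 2 ** 21
    now = time.time()

    # Written the same way _add_completed_blob does it.
    db.Put(blob_hash, json.dumps((length, now, now)), sync=True)

    # Read back the same way _completed_blobs and _get_blob_length do it.
    stored_length, verified_time, next_announce_time = json.loads(db.Get(blob_hash))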
6  lbrynet/core/DownloadOption.py  Normal file

@@ -0,0 +1,6 @@
class DownloadOption(object):
    def __init__(self, option_types, long_description, short_description, default):
        self.option_types = option_types
        self.long_description = long_description
        self.short_description = short_description
        self.default = default
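A hypothetical example of constructing one of these option records; the field values are purely
illustrative and not taken from the application:

    from lbrynet.core.DownloadOption import DownloadOption

    rate_option = DownloadOption(
        option_types=[float, None],     # e.g. a specific rate, or None to accept the default
        long_description="Rate that will be paid for data, in points per megabyte",
        short_description="data payment rate",
        default=None,
    )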
48  lbrynet/core/Error.py  Normal file

@@ -0,0 +1,48 @@
class PriceDisagreementError(Exception):
    pass


class DuplicateStreamHashError(Exception):
    pass


class DownloadCanceledError(Exception):
    pass


class RequestCanceledError(Exception):
    pass


class InsufficientFundsError(Exception):
    pass


class ConnectionClosedBeforeResponseError(Exception):
    pass


class UnknownNameError(Exception):
    def __init__(self, name):
        self.name = name


class InvalidStreamInfoError(Exception):
    def __init__(self, name):
        self.name = name


class MisbehavingPeerError(Exception):
    pass


class InvalidDataError(MisbehavingPeerError):
    pass


class NoResponseError(MisbehavingPeerError):
    pass


class InvalidResponseError(MisbehavingPeerError):
    pass
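Because the rest of the codebase is built on Twisted, these exceptions usually travel through
deferreds wrapped in a Failure rather than being raised directly. A minimal sketch, with made-up
amounts:

    from twisted.internet import defer
    from twisted.python.failure import Failure
    from lbrynet.core.Error import InsufficientFundsError

    def reserve_or_fail(balance, amount):
        # Errback with InsufficientFundsError when the balance cannot cover the amount.
        if balance < amount:
            return defer.fail(Failure(InsufficientFundsError()))
        return defer.succeed(amount)

    d = reserve_or_fail(balance=1.0, amount=5.0)
    d.addErrback(lambda err: err.trap(InsufficientFundsError))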
15  lbrynet/core/HashAnnouncer.py  Normal file

@@ -0,0 +1,15 @@
class DummyHashAnnouncer(object):
    def __init__(self, *args):
        pass

    def run_manage_loop(self):
        pass

    def stop(self):
        pass

    def add_supplier(self, *args):
        pass

    def immediate_announce(self, *args):
        pass
391  lbrynet/core/HashBlob.py  Normal file

@@ -0,0 +1,391 @@
|
|||
from StringIO import StringIO
|
||||
import logging
|
||||
import os
|
||||
import tempfile
|
||||
import threading
|
||||
import shutil
|
||||
from twisted.internet import interfaces, defer, threads
|
||||
from twisted.protocols.basic import FileSender
|
||||
from twisted.python.failure import Failure
|
||||
from zope.interface import implements
|
||||
from lbrynet.conf import BLOB_SIZE
|
||||
from lbrynet.core.Error import DownloadCanceledError, InvalidDataError
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj
|
||||
|
||||
|
||||
class HashBlobReader(object):
|
||||
implements(interfaces.IConsumer)
|
||||
|
||||
def __init__(self, write_func):
|
||||
self.write_func = write_func
|
||||
|
||||
def registerProducer(self, producer, streaming):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.producer = producer
|
||||
self.streaming = streaming
|
||||
if self.streaming is False:
|
||||
reactor.callLater(0, self.producer.resumeProducing)
|
||||
|
||||
def unregisterProducer(self):
|
||||
pass
|
||||
|
||||
def write(self, data):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.write_func(data)
|
||||
if self.streaming is False:
|
||||
reactor.callLater(0, self.producer.resumeProducing)
|
||||
|
||||
|
||||
class HashBlobWriter(object):
|
||||
def __init__(self, write_handle, length_getter, finished_cb):
|
||||
self.write_handle = write_handle
|
||||
self.length_getter = length_getter
|
||||
self.finished_cb = finished_cb
|
||||
self.hashsum = get_lbry_hash_obj()
|
||||
self.len_so_far = 0
|
||||
|
||||
def write(self, data):
|
||||
self.hashsum.update(data)
|
||||
self.len_so_far += len(data)
|
||||
if self.len_so_far > self.length_getter():
|
||||
self.finished_cb(self, Failure(InvalidDataError("Length so far is greater than the expected length."
|
||||
" %s to %s" % (str(self.len_so_far),
|
||||
str(self.length_getter())))))
|
||||
else:
|
||||
self.write_handle.write(data)
|
||||
if self.len_so_far == self.length_getter():
|
||||
self.finished_cb(self)
|
||||
|
||||
def cancel(self, reason=None):
|
||||
if reason is None:
|
||||
reason = Failure(DownloadCanceledError())
|
||||
self.finished_cb(self, reason)
|
||||
|
||||
|
||||
class HashBlob(object):
|
||||
"""A chunk of data available on the network which is specified by a hashsum"""
|
||||
|
||||
def __init__(self, blob_hash, upload_allowed, length=None):
|
||||
self.blob_hash = blob_hash
|
||||
self.length = length
|
||||
self.writers = {} # {Peer: writer, finished_deferred}
|
||||
self.finished_deferred = None
|
||||
self.verified = False
|
||||
self.upload_allowed = upload_allowed
|
||||
self.readers = 0
|
||||
|
||||
def set_length(self, length):
|
||||
if self.length is not None and length == self.length:
|
||||
return True
|
||||
if self.length is None and 0 <= length <= BLOB_SIZE:
|
||||
self.length = length
|
||||
return True
|
||||
logging.warning("Got an invalid length. Previous length: %s, Invalid length: %s", str(self.length), str(length))
|
||||
return False
|
||||
|
||||
def get_length(self):
|
||||
return self.length
|
||||
|
||||
def is_validated(self):
|
||||
if self.verified:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
def is_downloading(self):
|
||||
if self.writers:
|
||||
return True
|
||||
return False
|
||||
|
||||
def read(self, write_func):
|
||||
|
||||
def close_self(*args):
|
||||
self.close_read_handle(file_handle)
|
||||
return args[0]
|
||||
|
||||
file_sender = FileSender()
|
||||
reader = HashBlobReader(write_func)
|
||||
file_handle = self.open_for_reading()
|
||||
if file_handle is not None:
|
||||
d = file_sender.beginFileTransfer(file_handle, reader)
|
||||
d.addCallback(close_self)
|
||||
else:
|
||||
d = defer.fail(ValueError("Could not read the blob"))
|
||||
return d
|
||||
|
||||
def writer_finished(self, writer, err=None):
|
||||
|
||||
def fire_finished_deferred():
|
||||
self.verified = True
|
||||
for p, (w, finished_deferred) in self.writers.items():
|
||||
if w == writer:
|
||||
finished_deferred.callback(self)
|
||||
del self.writers[p]
|
||||
return True
|
||||
logging.warning("Somehow, the writer that was accepted as being valid was already removed. writer: %s",
|
||||
str(writer))
|
||||
return False
|
||||
|
||||
def errback_finished_deferred(err):
|
||||
for p, (w, finished_deferred) in self.writers.items():
|
||||
if w == writer:
|
||||
finished_deferred.errback(err)
|
||||
del self.writers[p]
|
||||
|
||||
def cancel_other_downloads():
|
||||
for p, (w, finished_deferred) in self.writers.items():
|
||||
w.cancel()
|
||||
|
||||
if err is None:
|
||||
if writer.len_so_far == self.length and writer.hashsum.hexdigest() == self.blob_hash:
|
||||
if self.verified is False:
|
||||
d = self._save_verified_blob(writer)
|
||||
d.addCallbacks(lambda _: fire_finished_deferred(), errback_finished_deferred)
|
||||
d.addCallback(lambda _: cancel_other_downloads())
|
||||
else:
|
||||
errback_finished_deferred(Failure(DownloadCanceledError()))
|
||||
d = defer.succeed(True)
|
||||
else:
|
||||
err_string = "length vs expected: {0}, {1}, hash vs expected: {2}, {3}"
|
||||
err_string = err_string.format(self.length, writer.len_so_far, self.blob_hash,
|
||||
writer.hashsum.hexdigest())
|
||||
errback_finished_deferred(Failure(InvalidDataError(err_string)))
|
||||
d = defer.succeed(True)
|
||||
else:
|
||||
errback_finished_deferred(err)
|
||||
d = defer.succeed(True)
|
||||
|
||||
d.addBoth(lambda _: self._close_writer(writer))
|
||||
return d
|
||||
|
||||
def open_for_writing(self, peer):
|
||||
pass
|
||||
|
||||
def open_for_reading(self):
|
||||
pass
|
||||
|
||||
def delete(self):
|
||||
pass
|
||||
|
||||
def close_read_handle(self, file_handle):
|
||||
pass
|
||||
|
||||
def _close_writer(self, writer):
|
||||
pass
|
||||
|
||||
def _save_verified_blob(self, writer):
|
||||
pass
|
||||
|
||||
def __str__(self):
|
||||
return self.blob_hash[:16]
|
||||
|
||||
def __repr__(self):
|
||||
return str(self)
|
||||
|
||||
|
||||
class BlobFile(HashBlob):
|
||||
"""A HashBlob which will be saved to the hard disk of the downloader"""
|
||||
|
||||
def __init__(self, blob_dir, *args):
|
||||
HashBlob.__init__(self, *args)
|
||||
self.blob_dir = blob_dir
|
||||
self.file_path = os.path.join(blob_dir, self.blob_hash)
|
||||
self.setting_verified_blob_lock = threading.Lock()
|
||||
self.moved_verified_blob = False
|
||||
|
||||
def open_for_writing(self, peer):
|
||||
if not peer in self.writers:
|
||||
logging.debug("Opening %s to be written by %s", str(self), str(peer))
|
||||
write_file = tempfile.NamedTemporaryFile(delete=False, dir=self.blob_dir)
|
||||
finished_deferred = defer.Deferred()
|
||||
writer = HashBlobWriter(write_file, self.get_length, self.writer_finished)
|
||||
|
||||
self.writers[peer] = (writer, finished_deferred)
|
||||
return finished_deferred, writer.write, writer.cancel
|
||||
logging.warning("Tried to download the same file twice simultaneously from the same peer")
|
||||
return None, None, None
|
||||
|
||||
def open_for_reading(self):
|
||||
if self.verified is True:
|
||||
file_handle = None
|
||||
try:
|
||||
file_handle = open(self.file_path, 'rb')
|
||||
self.readers += 1
|
||||
return file_handle
|
||||
except IOError:
|
||||
self.close_read_handle(file_handle)
|
||||
return None
|
||||
|
||||
def delete(self):
|
||||
if not self.writers and not self.readers:
|
||||
self.verified = False
|
||||
self.moved_verified_blob = False
|
||||
|
||||
def delete_from_file_system():
|
||||
if os.path.isfile(self.file_path):
|
||||
os.remove(self.file_path)
|
||||
|
||||
d = threads.deferToThread(delete_from_file_system)
|
||||
|
||||
def log_error(err):
|
||||
logging.warning("An error occurred deleting %s: %s", str(self.file_path), err.getErrorMessage())
|
||||
return err
|
||||
|
||||
d.addErrback(log_error)
|
||||
return d
|
||||
else:
|
||||
return defer.fail(Failure(ValueError("File is currently being read or written and cannot be deleted")))
|
||||
|
||||
def close_read_handle(self, file_handle):
|
||||
if file_handle is not None:
|
||||
file_handle.close()
|
||||
self.readers -= 1
|
||||
|
||||
def _close_writer(self, writer):
|
||||
if writer.write_handle is not None:
|
||||
logging.debug("Closing %s", str(self))
|
||||
name = writer.write_handle.name
|
||||
writer.write_handle.close()
|
||||
threads.deferToThread(os.remove, name)
|
||||
writer.write_handle = None
|
||||
|
||||
def _save_verified_blob(self, writer):
|
||||
|
||||
def move_file():
|
||||
with self.setting_verified_blob_lock:
|
||||
if self.moved_verified_blob is False:
|
||||
temp_file_name = writer.write_handle.name
|
||||
writer.write_handle.close()
|
||||
shutil.move(temp_file_name, self.file_path)
|
||||
writer.write_handle = None
|
||||
self.moved_verified_blob = True
|
||||
return True
|
||||
else:
|
||||
raise DownloadCanceledError()
|
||||
|
||||
return threads.deferToThread(move_file)
|
||||
|
||||
|
||||
class TempBlob(HashBlob):
|
||||
"""A HashBlob which will only exist in memory"""
|
||||
def __init__(self, *args):
|
||||
HashBlob.__init__(self, *args)
|
||||
self.data_buffer = ""
|
||||
|
||||
def open_for_writing(self, peer):
|
||||
if not peer in self.writers:
|
||||
temp_buffer = StringIO()
|
||||
finished_deferred = defer.Deferred()
|
||||
writer = HashBlobWriter(temp_buffer, self.get_length, self.writer_finished)
|
||||
|
||||
self.writers[peer] = (writer, finished_deferred)
|
||||
return finished_deferred, writer.write, writer.cancel
|
||||
return None, None, None
|
||||
|
||||
def open_for_reading(self):
|
||||
if self.verified is True:
|
||||
return StringIO(self.data_buffer)
|
||||
return None
|
||||
|
||||
def delete(self):
|
||||
if not self.writers and not self.readers:
|
||||
self.verified = False
|
||||
self.data_buffer = ''
|
||||
return defer.succeed(True)
|
||||
else:
|
||||
return defer.fail(Failure(ValueError("Blob is currently being read or written and cannot be deleted")))
|
||||
|
||||
def close_read_handle(self, file_handle):
|
||||
file_handle.close()
|
||||
|
||||
def _close_writer(self, writer):
|
||||
if writer.write_handle is not None:
|
||||
writer.write_handle.close()
|
||||
writer.write_handle = None
|
||||
|
||||
def _save_verified_blob(self, writer):
|
||||
if not self.data_buffer:
|
||||
self.data_buffer = writer.write_handle.getvalue()
|
||||
writer.write_handle.close()
|
||||
writer.write_handle = None
|
||||
return defer.succeed(True)
|
||||
else:
|
||||
return defer.fail(Failure(DownloadCanceledError()))
|
||||
|
||||
|
||||
class HashBlobCreator(object):
|
||||
def __init__(self, blob_manager):
|
||||
self.blob_manager = blob_manager
|
||||
self.hashsum = get_lbry_hash_obj()
|
||||
self.len_so_far = 0
|
||||
self.blob_hash = None
|
||||
self.length = None
|
||||
|
||||
def open(self):
|
||||
pass
|
||||
|
||||
def close(self):
|
||||
self.length = self.len_so_far
|
||||
if self.length == 0:
|
||||
self.blob_hash = None
|
||||
else:
|
||||
self.blob_hash = self.hashsum.hexdigest()
|
||||
d = self._close()
|
||||
|
||||
if self.blob_hash is not None:
|
||||
d.addCallback(lambda _: self.blob_manager.creator_finished(self))
|
||||
d.addCallback(lambda _: self.blob_hash)
|
||||
else:
|
||||
d.addCallback(lambda _: None)
|
||||
return d
|
||||
|
||||
def write(self, data):
|
||||
self.hashsum.update(data)
|
||||
self.len_so_far += len(data)
|
||||
self._write(data)
|
||||
|
||||
def _close(self):
|
||||
pass
|
||||
|
||||
def _write(self, data):
|
||||
pass
|
||||
|
||||
|
||||
class BlobFileCreator(HashBlobCreator):
|
||||
def __init__(self, blob_manager, blob_dir):
|
||||
HashBlobCreator.__init__(self, blob_manager)
|
||||
self.blob_dir = blob_dir
|
||||
self.out_file = tempfile.NamedTemporaryFile(delete=False, dir=self.blob_dir)
|
||||
|
||||
def _close(self):
|
||||
temp_file_name = self.out_file.name
|
||||
self.out_file.close()
|
||||
|
||||
def change_file_name():
|
||||
shutil.move(temp_file_name, os.path.join(self.blob_dir, self.blob_hash))
|
||||
return True
|
||||
|
||||
if self.blob_hash is not None:
|
||||
d = threads.deferToThread(change_file_name)
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
return d
|
||||
|
||||
def _write(self, data):
|
||||
self.out_file.write(data)
|
||||
|
||||
|
||||
class TempBlobCreator(HashBlobCreator):
|
||||
def __init__(self, blob_manager):
|
||||
HashBlobCreator.__init__(self, blob_manager)
|
||||
self.data_buffer = ''
|
||||
|
||||
def _close(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def _write(self, data):
|
||||
self.data_buffer += data
|
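The write path above (open_for_writing handing back a finished deferred plus write and cancel
callables) can be exercised in isolation with a TempBlob. This sketch assumes, as a real download
would, that the blob's hash and length are known before any data is written:

    import hashlib
    from lbrynet.core.HashBlob import TempBlob

    data = b"example blob contents"
    blob_hash = hashlib.sha384(data).hexdigest()

    blob = TempBlob(blob_hash, True, len(data))

    # The peer object only serves as a dictionary key here, so a string stands in for it.
    finished_deferred, write_func, cancel_func = blob.open_for_writing(peer="example peer")
    write_func(data)   # writer_finished fires once the length and hash both match

    finished_deferred.addCallback(lambda b: b.is_validated())   # fires with the verified blob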
468  lbrynet/core/LBRYcrdWallet.py  Normal file

@@ -0,0 +1,468 @@
|
|||
from lbrynet.interfaces import IRequestCreator, IQueryHandlerFactory, IQueryHandler, ILBRYWallet
|
||||
from lbrynet.core.client.ClientRequest import ClientRequest
|
||||
from lbrynet.core.Error import UnknownNameError, InvalidStreamInfoError, RequestCanceledError
|
||||
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException
|
||||
from twisted.internet import threads, reactor, defer, task
|
||||
from twisted.python.failure import Failure
|
||||
from collections import defaultdict, deque
|
||||
from zope.interface import implements
|
||||
from decimal import Decimal
|
||||
import datetime
|
||||
import logging
|
||||
import json
|
||||
import subprocess
|
||||
import socket
|
||||
import time
|
||||
import os
|
||||
|
||||
|
||||
class ReservedPoints(object):
|
||||
def __init__(self, identifier, amount):
|
||||
self.identifier = identifier
|
||||
self.amount = amount
|
||||
|
||||
|
||||
class LBRYcrdWallet(object):
|
||||
"""This class implements the LBRYWallet interface for the LBRYcrd payment system"""
|
||||
implements(ILBRYWallet)
|
||||
|
||||
def __init__(self, rpc_user, rpc_pass, rpc_url, rpc_port, start_lbrycrdd=False,
|
||||
wallet_dir=None, wallet_conf=None):
|
||||
self.rpc_conn_string = "http://%s:%s@%s:%s" % (rpc_user, rpc_pass, rpc_url, str(rpc_port))
|
||||
self.next_manage_call = None
|
||||
self.wallet_balance = Decimal(0.0)
|
||||
self.total_reserved_points = Decimal(0.0)
|
||||
self.peer_addresses = {} # {Peer: string}
|
||||
self.queued_payments = defaultdict(Decimal) # {address(string): amount(Decimal)}
|
||||
self.expected_balances = defaultdict(Decimal) # {address(string): amount(Decimal)}
|
||||
self.current_address_given_to_peer = {} # {Peer: address(string)}
|
||||
self.expected_balance_at_time = deque() # (Peer, address(string), amount(Decimal), time(datetime), count(int),
|
||||
# incremental_amount(float))
|
||||
self.max_expected_payment_time = datetime.timedelta(minutes=3)
|
||||
self.stopped = True
|
||||
self.start_lbrycrdd = start_lbrycrdd
|
||||
self.started_lbrycrdd = False
|
||||
self.wallet_dir = wallet_dir
|
||||
self.wallet_conf = wallet_conf
|
||||
self.lbrycrdd = None
|
||||
self.manage_running = False
|
||||
|
||||
def start(self):
|
||||
|
||||
def make_connection():
|
||||
if self.start_lbrycrdd is True:
|
||||
self._start_daemon()
|
||||
logging.info("Trying to connect to %s", self.rpc_conn_string)
|
||||
self.rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
logging.info("Connected!")
|
||||
|
||||
def start_manage():
|
||||
self.stopped = False
|
||||
self.manage()
|
||||
return True
|
||||
|
||||
d = threads.deferToThread(make_connection)
|
||||
d.addCallback(lambda _: start_manage())
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
self.stopped = True
|
||||
# If self.next_manage_call is None, then manage is currently running or else
|
||||
# start has not been called, so set stopped and do nothing else.
|
||||
if self.next_manage_call is not None:
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
|
||||
d = self.manage()
|
||||
if self.start_lbrycrdd is True:
|
||||
d.addBoth(lambda _: self._stop_daemon())
|
||||
return d
|
||||
|
||||
def manage(self):
|
||||
logging.info("Doing manage")
|
||||
self.next_manage_call = None
|
||||
have_set_manage_running = [False]
|
||||
|
||||
def check_if_manage_running():
|
||||
|
||||
d = defer.Deferred()
|
||||
|
||||
def fire_if_not_running():
|
||||
if self.manage_running is False:
|
||||
self.manage_running = True
|
||||
have_set_manage_running[0] = True
|
||||
d.callback(True)
|
||||
else:
|
||||
task.deferLater(reactor, 1, fire_if_not_running)
|
||||
|
||||
fire_if_not_running()
|
||||
return d
|
||||
|
||||
d = check_if_manage_running()
|
||||
|
||||
d.addCallback(lambda _: self._check_expected_balances())
|
||||
|
||||
d.addCallback(lambda _: self._send_payments())
|
||||
|
||||
d.addCallback(lambda _: threads.deferToThread(self._get_wallet_balance))
|
||||
|
||||
def set_wallet_balance(balance):
|
||||
self.wallet_balance = balance
|
||||
|
||||
d.addCallback(set_wallet_balance)
|
||||
|
||||
def set_next_manage_call():
|
||||
if not self.stopped:
|
||||
self.next_manage_call = reactor.callLater(60, self.manage)
|
||||
|
||||
d.addCallback(lambda _: set_next_manage_call())
|
||||
|
||||
def log_error(err):
|
||||
logging.error("Something went wrong during manage. Error message: %s", err.getErrorMessage())
|
||||
return err
|
||||
|
||||
d.addErrback(log_error)
|
||||
|
||||
def set_manage_not_running(arg):
|
||||
if have_set_manage_running[0] is True:
|
||||
self.manage_running = False
|
||||
return arg
|
||||
|
||||
d.addBoth(set_manage_not_running)
|
||||
return d
|
||||
|
||||
def get_info_exchanger(self):
|
||||
return LBRYcrdAddressRequester(self)
|
||||
|
||||
def get_wallet_info_query_handler_factory(self):
|
||||
return LBRYcrdAddressQueryHandlerFactory(self)
|
||||
|
||||
def get_balance(self):
|
||||
d = threads.deferToThread(self._get_wallet_balance)
|
||||
return d
|
||||
|
||||
def reserve_points(self, peer, amount):
|
||||
"""
|
||||
Ensure a certain amount of points are available to be sent as payment, before the service is rendered
|
||||
|
||||
@param peer: The peer to which the payment will ultimately be sent
|
||||
|
||||
@param amount: The amount of points to reserve
|
||||
|
||||
@return: A ReservedPoints object which is given to send_points once the service has been rendered
|
||||
"""
|
||||
rounded_amount = Decimal(str(round(amount, 8)))
|
||||
#if peer in self.peer_addresses:
|
||||
if self.wallet_balance >= self.total_reserved_points + rounded_amount:
|
||||
self.total_reserved_points += rounded_amount
|
||||
return ReservedPoints(peer, rounded_amount)
|
||||
return None
|
||||
|
||||
def cancel_point_reservation(self, reserved_points):
|
||||
"""
|
||||
Return all of the points that were reserved previously for some ReservedPoints object
|
||||
|
||||
@param reserved_points: ReservedPoints previously returned by reserve_points
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self.total_reserved_points -= reserved_points.amount
|
||||
|
||||
def send_points(self, reserved_points, amount):
|
||||
"""
|
||||
Schedule a payment to be sent to a peer
|
||||
|
||||
@param reserved_points: ReservedPoints object previously returned by reserve_points
|
||||
|
||||
@param amount: amount of points to actually send, must be less than or equal to the
|
||||
amount reserved in reserved_points
|
||||
|
||||
@return: Deferred which fires when the payment has been scheduled
|
||||
"""
|
||||
rounded_amount = Decimal(str(round(amount, 8)))
|
||||
peer = reserved_points.identifier
|
||||
assert(rounded_amount <= reserved_points.amount)
|
||||
assert(peer in self.peer_addresses)
|
||||
self.queued_payments[self.peer_addresses[peer]] += rounded_amount
|
||||
# make any unused points available
|
||||
self.total_reserved_points -= (reserved_points.amount - rounded_amount)
|
||||
logging.info("ordering that %s points be sent to %s", str(rounded_amount),
|
||||
str(self.peer_addresses[peer]))
|
||||
peer.update_stats('points_sent', amount)
|
||||
return defer.succeed(True)
|
||||
|
||||
def add_expected_payment(self, peer, amount):
|
||||
"""Increase the number of points expected to be paid by a peer"""
|
||||
rounded_amount = Decimal(str(round(amount, 8)))
|
||||
assert(peer in self.current_address_given_to_peer)
|
||||
address = self.current_address_given_to_peer[peer]
|
||||
logging.info("expecting a payment at address %s in the amount of %s", str(address), str(rounded_amount))
|
||||
self.expected_balances[address] += rounded_amount
|
||||
expected_balance = self.expected_balances[address]
|
||||
expected_time = datetime.datetime.now() + self.max_expected_payment_time
|
||||
self.expected_balance_at_time.append((peer, address, expected_balance, expected_time, 0, amount))
|
||||
peer.update_stats('expected_points', amount)
|
||||
|
||||
def update_peer_address(self, peer, address):
|
||||
self.peer_addresses[peer] = address
|
||||
|
||||
def get_new_address_for_peer(self, peer):
|
||||
def set_address_for_peer(address):
|
||||
self.current_address_given_to_peer[peer] = address
|
||||
return address
|
||||
d = threads.deferToThread(self._get_new_address)
|
||||
d.addCallback(set_address_for_peer)
|
||||
return d
|
||||
|
||||
def get_stream_info_for_name(self, name):
|
||||
|
||||
def get_stream_info_from_value(result):
|
||||
r_dict = {}
|
||||
if 'value' in result:
|
||||
value = result['value']
|
||||
try:
|
||||
value_dict = json.loads(value)
|
||||
except ValueError:
|
||||
return Failure(InvalidStreamInfoError(name))
|
||||
if 'stream_hash' in value_dict:
|
||||
r_dict['stream_hash'] = value_dict['stream_hash']
|
||||
if 'name' in value_dict:
|
||||
r_dict['name'] = value_dict['name']
|
||||
if 'description' in value_dict:
|
||||
r_dict['description'] = value_dict['description']
|
||||
return r_dict
|
||||
return Failure(UnknownNameError(name))
|
||||
|
||||
d = threads.deferToThread(self._get_value_for_name, name)
|
||||
d.addCallback(get_stream_info_from_value)
|
||||
return d
|
||||
|
||||
def claim_name(self, stream_hash, name, amount):
|
||||
value = json.dumps({"stream_hash": stream_hash})
|
||||
d = threads.deferToThread(self._claim_name, name, value, amount)
|
||||
return d
|
||||
|
||||
def get_available_balance(self):
|
||||
return float(self.wallet_balance - self.total_reserved_points)
|
||||
|
||||
def get_new_address(self):
|
||||
return threads.deferToThread(self._get_new_address)
|
||||
|
||||
def _start_daemon(self):
|
||||
|
||||
if os.name == "nt":
|
||||
si = subprocess.STARTUPINFO()
|
||||
si.dwFlags = subprocess.STARTF_USESHOWWINDOW
|
||||
si.wShowWindow = subprocess.SW_HIDE
|
||||
self.lbrycrdd = subprocess.Popen(["lbrycrdd.exe", "-datadir=%s" % self.wallet_dir,
|
||||
"-conf=%s" % self.wallet_conf], startupinfo=si)
|
||||
else:
|
||||
self.lbrycrdd = subprocess.Popen(["./lbrycrdd", "-datadir=%s" % self.wallet_dir,
|
||||
"-conf=%s" % self.wallet_conf])
|
||||
self.started_lbrycrdd = True
|
||||
|
||||
tries = 0
|
||||
while tries < 5:
|
||||
try:
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
rpc_conn.getinfo()
|
||||
break
|
||||
except (socket.error, JSONRPCException):
|
||||
tries += 1
|
||||
logging.warning("Failed to connect to lbrycrdd.")
|
||||
if tries < 5:
|
||||
time.sleep(2 ** tries)
|
||||
logging.warning("Trying again in %d seconds", 2 ** tries)
|
||||
else:
|
||||
logging.warning("Giving up.")
|
||||
else:
|
||||
self.lbrycrdd.terminate()
|
||||
raise ValueError("Couldn't open lbrycrdd")
|
||||
|
||||
def _stop_daemon(self):
|
||||
if self.lbrycrdd is not None and self.started_lbrycrdd is True:
|
||||
d = threads.deferToThread(self._rpc_stop)
|
||||
return d
|
||||
return defer.succeed(True)
|
||||
|
||||
def _check_expected_balances(self):
|
||||
now = datetime.datetime.now()
|
||||
balances_to_check = []
|
||||
try:
|
||||
while self.expected_balance_at_time[0][3] < now:
|
||||
balances_to_check.append(self.expected_balance_at_time.popleft())
|
||||
except IndexError:
|
||||
pass
|
||||
ds = []
|
||||
for balance_to_check in balances_to_check:
|
||||
d = threads.deferToThread(self._check_expected_balance, balance_to_check)
|
||||
ds.append(d)
|
||||
dl = defer.DeferredList(ds)
|
||||
|
||||
def handle_checks(results):
|
||||
from future_builtins import zip
|
||||
for balance, (success, result) in zip(balances_to_check, results):
|
||||
peer = balance[0]
|
||||
if success is True:
|
||||
if result is False:
|
||||
if balance[4] <= 1: # first or second strike, give them another chance
|
||||
new_expected_balance = (balance[0],
|
||||
balance[1],
|
||||
balance[2],
|
||||
datetime.datetime.now() + self.max_expected_payment_time,
|
||||
balance[4] + 1,
|
||||
balance[5])
|
||||
self.expected_balance_at_time.append(new_expected_balance)
|
||||
peer.update_score(-5.0)
|
||||
else:
|
||||
peer.update_score(-50.0)
|
||||
else:
|
||||
if balance[4] == 0:
|
||||
peer.update_score(balance[5])
|
||||
peer.update_stats('points_received', balance[5])
|
||||
else:
|
||||
logging.warning("Something went wrong checking a balance. Peer: %s, account: %s,"
|
||||
"expected balance: %s, expected time: %s, count: %s, error: %s",
|
||||
str(balance[0]), str(balance[1]), str(balance[2]), str(balance[3]),
|
||||
str(balance[4]), str(result.getErrorMessage()))
|
||||
|
||||
dl.addCallback(handle_checks)
|
||||
return dl
|
||||
|
||||
def _check_expected_balance(self, expected_balance):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
logging.info("Checking balance of address %s", str(expected_balance[1]))
|
||||
balance = rpc_conn.getreceivedbyaddress(expected_balance[1])
|
||||
logging.debug("received balance: %s", str(balance))
|
||||
logging.debug("expected balance: %s", str(expected_balance[2]))
|
||||
return balance >= expected_balance[2]
|
||||
|
||||
def _send_payments(self):
|
||||
logging.info("Trying to send payments, if there are any to be sent")
|
||||
|
||||
def do_send(payments):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
rpc_conn.sendmany("", payments)
|
||||
|
||||
payments_to_send = {}
|
||||
for address, points in self.queued_payments.items():
|
||||
logging.info("Should be sending %s points to %s", str(points), str(address))
|
||||
payments_to_send[address] = float(points)
|
||||
self.total_reserved_points -= points
|
||||
self.wallet_balance -= points
|
||||
del self.queued_payments[address]
|
||||
if payments_to_send:
|
||||
logging.info("Creating a transaction with outputs %s", str(payments_to_send))
|
||||
return threads.deferToThread(do_send, payments_to_send)
|
||||
logging.info("There were no payments to send")
|
||||
return defer.succeed(True)
|
||||
|
||||
def _get_wallet_balance(self):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
return rpc_conn.getbalance("")
|
||||
|
||||
def _get_new_address(self):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
return rpc_conn.getnewaddress()
|
||||
|
||||
def _get_value_for_name(self, name):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
return rpc_conn.getvalueforname(name)
|
||||
|
||||
def _claim_name(self, name, value, amount):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
return str(rpc_conn.claimname(name, value, amount))
|
||||
|
||||
def _rpc_stop(self):
|
||||
rpc_conn = AuthServiceProxy(self.rpc_conn_string)
|
||||
rpc_conn.stop()
|
||||
self.lbrycrdd.wait()
|
||||
|
||||
|
||||
class LBRYcrdAddressRequester(object):
|
||||
implements(IRequestCreator)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
self._protocols = []
|
||||
|
||||
######### IRequestCreator #########
|
||||
|
||||
def send_next_request(self, peer, protocol):
|
||||
|
||||
if not protocol in self._protocols:
|
||||
r = ClientRequest({'lbrycrd_address': True}, 'lbrycrd_address')
|
||||
d = protocol.add_request(r)
|
||||
d.addCallback(self._handle_address_response, peer, r, protocol)
|
||||
d.addErrback(self._request_failed, peer)
|
||||
self._protocols.append(protocol)
|
||||
return defer.succeed(True)
|
||||
else:
|
||||
return defer.succeed(False)
|
||||
|
||||
######### internal calls #########
|
||||
|
||||
def _handle_address_response(self, response_dict, peer, request, protocol):
|
||||
assert request.response_identifier in response_dict, \
|
||||
"Expected %s in dict but did not get it" % request.response_identifier
|
||||
assert protocol in self._protocols, "Responding protocol is not in our list of protocols"
|
||||
address = response_dict[request.response_identifier]
|
||||
self.wallet.update_peer_address(peer, address)
|
||||
|
||||
def _request_failed(self, err, peer):
|
||||
if not err.check(RequestCanceledError):
|
||||
logging.warning("A peer failed to send a valid public key response. Error: %s, peer: %s",
|
||||
err.getErrorMessage(), str(peer))
|
||||
#return err
|
||||
|
||||
|
||||
class LBRYcrdAddressQueryHandlerFactory(object):
|
||||
implements(IQueryHandlerFactory)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
|
||||
######### IQueryHandlerFactory #########
|
||||
|
||||
def build_query_handler(self):
|
||||
q_h = LBRYcrdAddressQueryHandler(self.wallet)
|
||||
return q_h
|
||||
|
||||
def get_primary_query_identifier(self):
|
||||
return 'lbrycrd_address'
|
||||
|
||||
def get_description(self):
|
||||
return "LBRYcrd Address - an address for receiving payments via LBRYcrd"
|
||||
|
||||
|
||||
class LBRYcrdAddressQueryHandler(object):
|
||||
implements(IQueryHandler)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
self.query_identifiers = ['lbrycrd_address']
|
||||
self.address = None
|
||||
self.peer = None
|
||||
|
||||
######### IQueryHandler #########
|
||||
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
self.peer = peer
|
||||
request_handler.register_query_handler(self, self.query_identifiers)
|
||||
|
||||
def handle_queries(self, queries):
|
||||
|
||||
def create_response(address):
|
||||
self.address = address
|
||||
fields = {'lbrycrd_address': address}
|
||||
return fields
|
||||
|
||||
if self.query_identifiers[0] in queries:
|
||||
d = self.wallet.get_new_address_for_peer(self.peer)
|
||||
d.addCallback(create_response)
|
||||
return d
|
||||
if self.address is None:
|
||||
logging.warning("Expected a request for an address, but did not receive one")
|
||||
return defer.fail(Failure(ValueError("Expected but did not receive an address request")))
|
||||
else:
|
||||
return defer.succeed({})
|
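The three methods above (reserve_points, send_points, cancel_point_reservation) form the wallet's payment workflow. A minimal sketch of how a caller might drive it, assuming the wallet and peer objects already exist and the peer's payment address has already been exchanged; the function name and amounts are illustrative, not part of this code:

def pay_for_service(wallet, peer, max_price, actual_price):
    # Reserve the most we might pay before the service is rendered.
    reservation = wallet.reserve_points(peer, max_price)
    if reservation is None:
        raise ValueError("insufficient funds to reserve %s points" % max_price)
    try:
        # ... the download or other service happens here ...
        # Pay only what was actually owed; send_points releases the unused
        # portion of the reservation back into the available balance.
        return wallet.send_points(reservation, actual_price)
    except Exception:
        # Nothing was rendered, so release the whole reservation.
        wallet.cancel_point_reservation(reservation)
        raise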
315
lbrynet/core/PTCWallet.py
Normal file
|
@ -0,0 +1,315 @@
|
|||
from collections import defaultdict
|
||||
import logging
|
||||
import leveldb
|
||||
import os
|
||||
import time
|
||||
from Crypto.Hash import SHA512
|
||||
from Crypto.PublicKey import RSA
|
||||
from lbrynet.core.client.ClientRequest import ClientRequest
|
||||
from lbrynet.core.Error import RequestCanceledError
|
||||
from lbrynet.interfaces import IRequestCreator, IQueryHandlerFactory, IQueryHandler, ILBRYWallet
|
||||
from lbrynet.pointtraderclient import pointtraderclient
|
||||
from twisted.internet import defer, threads
|
||||
from zope.interface import implements
|
||||
from twisted.python.failure import Failure
|
||||
from lbrynet.core.LBRYcrdWallet import ReservedPoints
|
||||
|
||||
|
||||
class PTCWallet(object):
|
||||
"""This class sends payments to peers and also ensures that expected payments are received.
|
||||
This class is only intended to be used for testing."""
|
||||
implements(ILBRYWallet)
|
||||
|
||||
def __init__(self, db_dir):
|
||||
self.db_dir = db_dir
|
||||
self.db = None
|
||||
self.private_key = None
|
||||
self.encoded_public_key = None
|
||||
self.peer_pub_keys = {}
|
||||
self.queued_payments = defaultdict(int)
|
||||
self.expected_payments = defaultdict(list)
|
||||
self.received_payments = defaultdict(list)
|
||||
self.next_manage_call = None
|
||||
self.payment_check_window = 3 * 60 # 3 minutes
|
||||
self.new_payments_expected_time = time.time() - self.payment_check_window
|
||||
self.known_transactions = []
|
||||
self.total_reserved_points = 0.0
|
||||
self.wallet_balance = 0.0
|
||||
|
||||
def manage(self):
|
||||
"""Send payments, ensure expected payments are received"""
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if time.time() < self.new_payments_expected_time + self.payment_check_window:
|
||||
d1 = self._get_new_payments()
|
||||
else:
|
||||
d1 = defer.succeed(None)
|
||||
d1.addCallback(lambda _: self._check_good_standing())
|
||||
d2 = self._send_queued_points()
|
||||
self.next_manage_call = reactor.callLater(60, self.manage)
|
||||
dl = defer.DeferredList([d1, d2])
|
||||
dl.addCallback(lambda _: self.get_balance())
|
||||
|
||||
def set_balance(balance):
|
||||
self.wallet_balance = balance
|
||||
|
||||
dl.addCallback(set_balance)
|
||||
return dl
|
||||
|
||||
def stop(self):
|
||||
if self.next_manage_call is not None:
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
d = self.manage()
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
self.db = None
|
||||
return d
|
||||
|
||||
def start(self):
|
||||
|
||||
def save_key(success, private_key):
|
||||
if success is True:
|
||||
threads.deferToThread(self.save_private_key, private_key.exportKey())
|
||||
return True
|
||||
return False
|
||||
|
||||
def register_private_key(private_key):
|
||||
self.private_key = private_key
|
||||
self.encoded_public_key = self.private_key.publickey().exportKey()
|
||||
d_r = pointtraderclient.register_new_account(private_key)
|
||||
d_r.addCallback(save_key, private_key)
|
||||
return d_r
|
||||
|
||||
def ensure_private_key_exists(encoded_private_key):
|
||||
if encoded_private_key is not None:
|
||||
self.private_key = RSA.importKey(encoded_private_key)
|
||||
self.encoded_public_key = self.private_key.publickey().exportKey()
|
||||
return True
|
||||
else:
|
||||
create_d = threads.deferToThread(RSA.generate, 4096)
|
||||
create_d.addCallback(register_private_key)
|
||||
return create_d
|
||||
|
||||
def start_manage():
|
||||
self.manage()
|
||||
return True
|
||||
d = threads.deferToThread(self._open_db)
|
||||
d.addCallback(lambda _: threads.deferToThread(self.get_wallet_private_key))
|
||||
d.addCallback(ensure_private_key_exists)
|
||||
d.addCallback(lambda _: start_manage())
|
||||
return d
|
||||
|
||||
def get_info_exchanger(self):
|
||||
return PointTraderKeyExchanger(self)
|
||||
|
||||
def get_wallet_info_query_handler_factory(self):
|
||||
return PointTraderKeyQueryHandlerFactory(self)
|
||||
|
||||
def reserve_points(self, peer, amount):
|
||||
"""
|
||||
Ensure a certain amount of points are available to be sent as payment, before the service is rendered
|
||||
|
||||
@param peer: The peer to which the payment will ultimately be sent
|
||||
|
||||
@param amount: The amount of points to reserve
|
||||
|
||||
@return: A ReservedPoints object which is given to send_points once the service has been rendered
|
||||
"""
|
||||
if self.wallet_balance >= self.total_reserved_points + amount:
|
||||
self.total_reserved_points += amount
|
||||
return ReservedPoints(peer, amount)
|
||||
return None
|
||||
|
||||
def cancel_point_reservation(self, reserved_points):
|
||||
"""
|
||||
Return all of the points that were reserved previously for some ReservedPoints object
|
||||
|
||||
@param reserved_points: ReservedPoints previously returned by reserve_points
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self.total_reserved_points -= reserved_points.amount
|
||||
|
||||
def send_points(self, reserved_points, amount):
|
||||
"""
|
||||
Schedule a payment to be sent to a peer
|
||||
|
||||
@param reserved_points: ReservedPoints object previously returned by reserve_points
|
||||
|
||||
@param amount: amount of points to actually send, must be less than or equal to the
|
||||
amount reserved in reserved_points
|
||||
|
||||
@return: Deferred which fires when the payment has been scheduled
|
||||
"""
|
||||
self.queued_payments[reserved_points.identifier] += amount
|
||||
# make any unused points available
|
||||
self.total_reserved_points -= reserved_points.amount - amount
|
||||
reserved_points.identifier.update_stats('points_sent', amount)
|
||||
d = defer.succeed(True)
|
||||
return d
|
||||
|
||||
def _send_queued_points(self):
|
||||
ds = []
|
||||
for peer, points in self.queued_payments.items():
|
||||
if peer in self.peer_pub_keys:
|
||||
d = pointtraderclient.send_points(self.private_key, self.peer_pub_keys[peer], points)
|
||||
self.wallet_balance -= points
|
||||
self.total_reserved_points -= points
|
||||
ds.append(d)
|
||||
del self.queued_payments[peer]
|
||||
else:
|
||||
logging.warning("Don't have a payment address for peer %s. Can't send %s points.",
|
||||
str(peer), str(points))
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
def get_balance(self):
|
||||
"""Return the balance of this wallet"""
|
||||
d = pointtraderclient.get_balance(self.private_key)
|
||||
return d
|
||||
|
||||
def add_expected_payment(self, peer, amount):
|
||||
"""Increase the number of points expected to be paid by a peer"""
|
||||
self.expected_payments[peer].append((amount, time.time()))
|
||||
self.new_payments_expected_time = time.time()
|
||||
peer.update_stats('expected_points', amount)
|
||||
|
||||
def set_public_key_for_peer(self, peer, pub_key):
|
||||
self.peer_pub_keys[peer] = pub_key
|
||||
|
||||
def _get_new_payments(self):
|
||||
|
||||
def add_new_transactions(transactions):
|
||||
for transaction in transactions:
|
||||
if transaction[1] == self.encoded_public_key:
|
||||
t_hash = SHA512.new()
|
||||
t_hash.update(transaction[0])
|
||||
t_hash.update(transaction[1])
|
||||
t_hash.update(str(transaction[2]))
|
||||
t_hash.update(transaction[3])
|
||||
if t_hash.hexdigest() not in self.known_transactions:
|
||||
self.known_transactions.append(t_hash.hexdigest())
|
||||
self._add_received_payment(transaction[0], transaction[2])
|
||||
|
||||
d = pointtraderclient.get_recent_transactions(self.private_key)
|
||||
d.addCallback(add_new_transactions)
|
||||
return d
|
||||
|
||||
def _add_received_payment(self, encoded_other_public_key, amount):
|
||||
self.received_payments[encoded_other_public_key].append((amount, time.time()))
|
||||
|
||||
def _check_good_standing(self):
|
||||
for peer, expected_payments in self.expected_payments.iteritems():
|
||||
expected_cutoff = time.time() - 90
|
||||
min_expected_balance = sum([a[0] for a in expected_payments if a[1] < expected_cutoff])
|
||||
received_balance = 0
|
||||
if self.peer_pub_keys[peer] in self.received_payments:
|
||||
received_balance = sum([a[0] for a in self.received_payments[self.peer_pub_keys[peer]]])
|
||||
if min_expected_balance > received_balance:
|
||||
logging.warning("Account in bad standing: %s (pub_key: %s), expected amount = %s, received_amount = %s",
|
||||
str(peer), self.peer_pub_keys[peer], str(min_expected_balance), str(received_balance))
|
||||
|
||||
def _open_db(self):
|
||||
self.db = leveldb.LevelDB(os.path.join(self.db_dir, "ptcwallet.db"))
|
||||
|
||||
def save_private_key(self, private_key):
|
||||
self.db.Put("private_key", private_key)
|
||||
|
||||
def get_wallet_private_key(self):
|
||||
try:
|
||||
return self.db.Get("private_key")
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
|
||||
class PointTraderKeyExchanger(object):
|
||||
implements(IRequestCreator)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
self._protocols = []
|
||||
|
||||
######### IRequestCreator #########
|
||||
|
||||
def send_next_request(self, peer, protocol):
|
||||
if not protocol in self._protocols:
|
||||
r = ClientRequest({'public_key': self.wallet.encoded_public_key},
|
||||
'public_key')
|
||||
d = protocol.add_request(r)
|
||||
d.addCallback(self._handle_exchange_response, peer, r, protocol)
|
||||
d.addErrback(self._request_failed, peer)
|
||||
self._protocols.append(protocol)
|
||||
return defer.succeed(True)
|
||||
else:
|
||||
return defer.succeed(False)
|
||||
|
||||
######### internal calls #########
|
||||
|
||||
def _handle_exchange_response(self, response_dict, peer, request, protocol):
|
||||
assert request.response_identifier in response_dict, \
|
||||
"Expected %s in dict but did not get it" % request.response_identifier
|
||||
assert protocol in self._protocols, "Responding protocol is not in our list of protocols"
|
||||
peer_pub_key = response_dict[request.response_identifier]
|
||||
self.wallet.set_public_key_for_peer(peer, peer_pub_key)
|
||||
return True
|
||||
|
||||
def _request_failed(self, err, peer):
|
||||
if not err.check(RequestCanceledError):
|
||||
logging.warning("A peer failed to send a valid public key response. Error: %s, peer: %s",
|
||||
err.getErrorMessage(), str(peer))
|
||||
#return err
|
||||
|
||||
|
||||
class PointTraderKeyQueryHandlerFactory(object):
|
||||
implements(IQueryHandlerFactory)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
|
||||
######### IQueryHandlerFactory #########
|
||||
|
||||
def build_query_handler(self):
|
||||
q_h = PointTraderKeyQueryHandler(self.wallet)
|
||||
return q_h
|
||||
|
||||
def get_primary_query_identifier(self):
|
||||
return 'public_key'
|
||||
|
||||
def get_description(self):
|
||||
return "Point Trader Address - an address for receiving payments on the point trader testing network"
|
||||
|
||||
|
||||
class PointTraderKeyQueryHandler(object):
|
||||
implements(IQueryHandler)
|
||||
|
||||
def __init__(self, wallet):
|
||||
self.wallet = wallet
|
||||
self.query_identifiers = ['public_key']
|
||||
self.public_key = None
|
||||
self.peer = None
|
||||
|
||||
######### IQueryHandler #########
|
||||
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
self.peer = peer
|
||||
request_handler.register_query_handler(self, self.query_identifiers)
|
||||
|
||||
def handle_queries(self, queries):
|
||||
if self.query_identifiers[0] in queries:
|
||||
new_encoded_pub_key = queries[self.query_identifiers[0]]
|
||||
try:
|
||||
RSA.importKey(new_encoded_pub_key)
|
||||
except (ValueError, TypeError, IndexError):
|
||||
logging.warning("Client sent an invalid public key.")
|
||||
return defer.fail(Failure(ValueError("Client sent an invalid public key")))
|
||||
self.public_key = new_encoded_pub_key
|
||||
self.wallet.set_public_key_for_peer(self.peer, self.public_key)
|
||||
logging.debug("Received the client's public key: %s", str(self.public_key))
|
||||
fields = {'public_key': self.wallet.encoded_public_key}
|
||||
return defer.succeed(fields)
|
||||
if self.public_key is None:
|
||||
logging.warning("Expected a public key, but did not receive one")
|
||||
return defer.fail(Failure(ValueError("Expected but did not receive a public key")))
|
||||
else:
|
||||
return defer.succeed({})
|
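Both wallet classes implement the same ILBRYWallet interface, so an application can swap the point-trader testing wallet for the real LBRYcrd one without changing anything else. A hypothetical selection helper; the RPC credentials, port number, and file names below are placeholders rather than defaults taken from this code:

from lbrynet.core.LBRYcrdWallet import LBRYcrdWallet
from lbrynet.core.PTCWallet import PTCWallet

def make_wallet(db_dir, use_testing_points=True,
                rpc_user=None, rpc_pass=None, rpc_port=8332):
    if use_testing_points:
        # Simulated points via the point trader server; testing only.
        return PTCWallet(db_dir)
    # Real LBRYcrd wallet; it starts and stops lbrycrdd itself.
    return LBRYcrdWallet(rpc_user, rpc_pass, "127.0.0.1", rpc_port,
                         start_lbrycrdd=True, wallet_dir=db_dir,
                         wallet_conf="lbrycrd.conf")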
29
lbrynet/core/PaymentRateManager.py
Normal file
|
@ -0,0 +1,29 @@
|
|||
class BasePaymentRateManager(object):
|
||||
def __init__(self, rate):
|
||||
self.min_blob_data_payment_rate = rate
|
||||
|
||||
|
||||
class PaymentRateManager(object):
|
||||
def __init__(self, base, rate=None):
|
||||
"""
|
||||
@param base: a BasePaymentRateManager
|
||||
|
||||
@param rate: the min blob data payment rate
|
||||
"""
|
||||
self.base = base
|
||||
self.min_blob_data_payment_rate = rate
|
||||
self.points_paid = 0.0
|
||||
|
||||
def get_rate_blob_data(self, peer):
|
||||
return self.get_effective_min_blob_data_payment_rate()
|
||||
|
||||
def accept_rate_blob_data(self, peer, payment_rate):
|
||||
return payment_rate >= self.get_effective_min_blob_data_payment_rate()
|
||||
|
||||
def get_effective_min_blob_data_payment_rate(self):
|
||||
if self.min_blob_data_payment_rate is None:
|
||||
return self.base.min_blob_data_payment_rate
|
||||
return self.min_blob_data_payment_rate
|
||||
|
||||
def record_points_paid(self, amount):
|
||||
self.points_paid += amount
|
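The rate fallback is easy to miss in the code above: a PaymentRateManager only overrides the base rate when it was given one. A small illustration (the rates are made up):

from lbrynet.core.PaymentRateManager import BasePaymentRateManager, PaymentRateManager

base = BasePaymentRateManager(0.005)         # application-wide default rate
default_prm = PaymentRateManager(base)       # no override, inherits 0.005
cheap_prm = PaymentRateManager(base, 0.001)  # per-download override

assert default_prm.get_effective_min_blob_data_payment_rate() == 0.005
assert cheap_prm.get_effective_min_blob_data_payment_rate() == 0.001
assert cheap_prm.accept_rate_blob_data(None, 0.002)  # 0.002 >= 0.001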
36
lbrynet/core/Peer.py
Normal file
|
@ -0,0 +1,36 @@
|
|||
from collections import defaultdict
|
||||
import datetime
|
||||
|
||||
|
||||
class Peer(object):
|
||||
def __init__(self, host, port):
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.attempt_connection_at = None
|
||||
self.down_count = 0
|
||||
self.score = 0
|
||||
self.stats = defaultdict(float) # {string stat_type, float count}
|
||||
|
||||
def is_available(self):
|
||||
if (self.attempt_connection_at is None or
|
||||
datetime.datetime.today() > self.attempt_connection_at):
|
||||
return True
|
||||
return False
|
||||
|
||||
def report_up(self):
|
||||
self.down_count = 0
|
||||
self.attempt_connection_at = None
|
||||
|
||||
def report_down(self):
|
||||
self.down_count += 1
|
||||
timeout_time = datetime.timedelta(seconds=60 * self.down_count)
|
||||
self.attempt_connection_at = datetime.datetime.today() + timeout_time
|
||||
|
||||
def update_score(self, score_change):
|
||||
self.score += score_change
|
||||
|
||||
def update_stats(self, stat_type, count):
|
||||
self.stats[stat_type] += count
|
||||
|
||||
def __str__(self):
|
||||
return self.host + ":" + str(self.port)
|
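The report_down/report_up pair gives each peer a linear backoff: every consecutive failure adds another 60 seconds before the peer is considered available again. For example (the host and port are illustrative):

from lbrynet.core.Peer import Peer

peer = Peer("10.0.0.1", 3333)
assert peer.is_available()
peer.report_down()                # wait 60 seconds before the next attempt
peer.report_down()                # now 120 seconds
assert not peer.is_available()
peer.report_up()                  # a successful connection clears the backoff
assert peer.is_available()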
19
lbrynet/core/PeerFinder.py
Normal file
|
@ -0,0 +1,19 @@
|
|||
from twisted.internet import defer
|
||||
|
||||
|
||||
class DummyPeerFinder(object):
|
||||
"""This class finds peers which have announced to the DHT that they have certain blobs"""
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
def run_manage_loop(self):
|
||||
pass
|
||||
|
||||
def stop(self):
|
||||
pass
|
||||
|
||||
def find_peers_for_blob(self, blob_hash):
|
||||
return defer.succeed([])
|
||||
|
||||
def get_most_popular_hashes(self, num_to_return):
|
||||
return []
|
14
lbrynet/core/PeerManager.py
Normal file
|
@ -0,0 +1,14 @@
|
|||
from lbrynet.core.Peer import Peer
|
||||
|
||||
|
||||
class PeerManager(object):
|
||||
def __init__(self):
|
||||
self.peers = []
|
||||
|
||||
def get_peer(self, host, port):
|
||||
for peer in self.peers:
|
||||
if peer.host == host and peer.port == port:
|
||||
return peer
|
||||
peer = Peer(host, port)
|
||||
self.peers.append(peer)
|
||||
return peer
|
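PeerManager guarantees that a given host/port pair always maps to the same Peer object, so score and stats accumulate in one place:

from lbrynet.core.PeerManager import PeerManager

peer_manager = PeerManager()
a = peer_manager.get_peer("10.0.0.1", 3333)
b = peer_manager.get_peer("10.0.0.1", 3333)
assert a is b   # same object, so a.update_score(1.0) is visible through b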
206
lbrynet/core/RateLimiter.py
Normal file
|
@ -0,0 +1,206 @@
|
|||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IRateLimiter
|
||||
|
||||
|
||||
class DummyRateLimiter(object):
|
||||
def __init__(self):
|
||||
self.dl_bytes_this_second = 0
|
||||
self.ul_bytes_this_second = 0
|
||||
self.total_dl_bytes = 0
|
||||
self.total_ul_bytes = 0
|
||||
self.target_dl = 0
|
||||
self.target_ul = 0
|
||||
self.ul_delay = 0.00
|
||||
self.dl_delay = 0.00
|
||||
self.next_tick = None
|
||||
|
||||
def tick(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.dl_bytes_this_second = 0
|
||||
self.ul_bytes_this_second = 0
|
||||
self.next_tick = reactor.callLater(1.0, self.tick)
|
||||
|
||||
def stop(self):
|
||||
if self.next_tick is not None:
|
||||
self.next_tick.cancel()
|
||||
self.next_tick = None
|
||||
|
||||
def set_dl_limit(self, limit):
|
||||
pass
|
||||
|
||||
def set_ul_limit(self, limit):
|
||||
pass
|
||||
|
||||
def ul_wait_time(self):
|
||||
return self.ul_delay
|
||||
|
||||
def dl_wait_time(self):
|
||||
return self.dl_delay
|
||||
|
||||
def report_dl_bytes(self, num_bytes):
|
||||
self.dl_bytes_this_second += num_bytes
|
||||
self.total_dl_bytes += num_bytes
|
||||
|
||||
def report_ul_bytes(self, num_bytes):
|
||||
self.ul_bytes_this_second += num_bytes
|
||||
self.total_ul_bytes += num_bytes
|
||||
|
||||
|
||||
class RateLimiter(object):
|
||||
"""This class ensures that upload and download rates don't exceed specified maximums"""
|
||||
|
||||
implements(IRateLimiter)
|
||||
|
||||
#called by main application
|
||||
|
||||
def __init__(self, max_dl_bytes=None, max_ul_bytes=None):
|
||||
self.max_dl_bytes = max_dl_bytes
|
||||
self.max_ul_bytes = max_ul_bytes
|
||||
self.dl_bytes_this_second = 0
|
||||
self.ul_bytes_this_second = 0
|
||||
self.total_dl_bytes = 0
|
||||
self.total_ul_bytes = 0
|
||||
self.next_tick = None
|
||||
self.next_unthrottle_dl = None
|
||||
self.next_unthrottle_ul = None
|
||||
|
||||
self.next_dl_check = None
|
||||
self.next_ul_check = None
|
||||
|
||||
self.dl_check_interval = 1.0
|
||||
self.ul_check_interval = 1.0
|
||||
|
||||
self.dl_throttled = False
|
||||
self.ul_throttled = False
|
||||
|
||||
self.protocols = []
|
||||
|
||||
def tick(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
# happens once per second
|
||||
if self.next_dl_check is not None:
|
||||
self.next_dl_check.cancel()
|
||||
self.next_dl_check = None
|
||||
if self.next_ul_check is not None:
|
||||
self.next_ul_check.cancel()
|
||||
self.next_ul_check = None
|
||||
if self.max_dl_bytes is not None:
|
||||
if self.dl_bytes_this_second == 0:
|
||||
self.dl_check_interval = 1.0
|
||||
else:
|
||||
self.dl_check_interval = min(1.0, self.dl_check_interval *
|
||||
self.max_dl_bytes / self.dl_bytes_this_second)
|
||||
self.next_dl_check = reactor.callLater(self.dl_check_interval, self.check_dl)
|
||||
if self.max_ul_bytes is not None:
|
||||
if self.ul_bytes_this_second == 0:
|
||||
self.ul_check_interval = 1.0
|
||||
else:
|
||||
self.ul_check_interval = min(1.0, self.ul_check_interval *
|
||||
self.max_ul_bytes / self.ul_bytes_this_second)
|
||||
self.next_ul_check = reactor.callLater(self.ul_check_interval, self.check_ul)
|
||||
self.dl_bytes_this_second = 0
|
||||
self.ul_bytes_this_second = 0
|
||||
self.unthrottle_dl()
|
||||
self.unthrottle_ul()
|
||||
self.next_tick = reactor.callLater(1.0, self.tick)
|
||||
|
||||
def stop(self):
|
||||
if self.next_tick is not None:
|
||||
self.next_tick.cancel()
|
||||
self.next_tick = None
|
||||
if self.next_dl_check is not None:
|
||||
self.next_dl_check.cancel()
|
||||
self.next_dl_check = None
|
||||
if self.next_ul_check is not None:
|
||||
self.next_ul_check.cancel()
|
||||
self.next_ul_check = None
|
||||
|
||||
def set_dl_limit(self, limit):
|
||||
self.max_dl_bytes = limit
|
||||
|
||||
def set_ul_limit(self, limit):
|
||||
self.max_ul_bytes = limit
|
||||
|
||||
#throttling
|
||||
|
||||
def check_dl(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.next_dl_check = None
|
||||
|
||||
if self.dl_bytes_this_second > self.max_dl_bytes:
|
||||
self.throttle_dl()
|
||||
else:
|
||||
self.next_dl_check = reactor.callLater(self.dl_check_interval, self.check_dl)
|
||||
self.dl_check_interval = min(self.dl_check_interval * 2, 1.0)
|
||||
|
||||
def check_ul(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.next_ul_check = None
|
||||
|
||||
if self.ul_bytes_this_second > self.max_ul_bytes:
|
||||
self.throttle_ul()
|
||||
else:
|
||||
self.next_ul_check = reactor.callLater(self.ul_check_interval, self.check_ul)
|
||||
self.ul_check_interval = min(self.ul_check_interval * 2, 1.0)
|
||||
|
||||
def throttle_dl(self):
|
||||
if self.dl_throttled is False:
|
||||
for protocol in self.protocols:
|
||||
protocol.throttle_download()
|
||||
self.dl_throttled = True
|
||||
|
||||
def throttle_ul(self):
|
||||
if self.ul_throttled is False:
|
||||
for protocol in self.protocols:
|
||||
protocol.throttle_upload()
|
||||
self.ul_throttled = True
|
||||
|
||||
def unthrottle_dl(self):
|
||||
if self.dl_throttled is True:
|
||||
for protocol in self.protocols:
|
||||
protocol.unthrottle_download()
|
||||
self.dl_throttled = False
|
||||
|
||||
def unthrottle_ul(self):
|
||||
if self.ul_throttled is True:
|
||||
for protocol in self.protocols:
|
||||
protocol.unthrottle_upload()
|
||||
self.ul_throttled = False
|
||||
|
||||
#deprecated
|
||||
|
||||
def ul_wait_time(self):
|
||||
return 0
|
||||
|
||||
def dl_wait_time(self):
|
||||
return 0
|
||||
|
||||
#called by protocols
|
||||
|
||||
def report_dl_bytes(self, num_bytes):
|
||||
self.dl_bytes_this_second += num_bytes
|
||||
self.total_dl_bytes += num_bytes
|
||||
|
||||
def report_ul_bytes(self, num_bytes):
|
||||
self.ul_bytes_this_second += num_bytes
|
||||
self.total_ul_bytes += num_bytes
|
||||
|
||||
def register_protocol(self, protocol):
|
||||
if protocol not in self.protocols:
|
||||
self.protocols.append(protocol)
|
||||
if self.dl_throttled is True:
|
||||
protocol.throttle_download()
|
||||
if self.ul_throttled is True:
|
||||
protocol.throttle_upload()
|
||||
|
||||
def unregister_protocol(self, protocol):
|
||||
if protocol in self.protocols:
|
||||
self.protocols.remove(protocol)
|
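To use the RateLimiter, protocols register themselves, report the bytes they transfer, and get throttled or unthrottled as the per-second totals cross the configured maximums. A rough sketch; the StubProtocol class is an assumption for illustration only, standing in for the real client and server protocols that expose these throttle hooks:

from twisted.internet import reactor
from lbrynet.core.RateLimiter import RateLimiter

class StubProtocol(object):
    """Minimal stand-in exposing the throttle hooks the limiter calls."""
    def throttle_download(self): pass
    def unthrottle_download(self): pass
    def throttle_upload(self): pass
    def unthrottle_upload(self): pass

limiter = RateLimiter(max_dl_bytes=500000, max_ul_bytes=100000)  # per second
limiter.register_protocol(StubProtocol())
limiter.tick()                       # start the once-per-second loop
limiter.report_dl_bytes(65536)       # protocols call this as data arrives
reactor.callLater(5, limiter.stop)
reactor.callLater(6, reactor.stop)
reactor.run()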
245
lbrynet/core/Session.py
Normal file
|
@ -0,0 +1,245 @@
|
|||
import logging
|
||||
import miniupnpc
|
||||
from lbrynet.core.PTCWallet import PTCWallet
|
||||
from lbrynet.core.BlobManager import DiskBlobManager, TempBlobManager
|
||||
from lbrynet.dht import node
|
||||
from lbrynet.core.PeerManager import PeerManager
|
||||
from lbrynet.core.RateLimiter import RateLimiter
|
||||
from lbrynet.core.client.DHTPeerFinder import DHTPeerFinder
|
||||
from lbrynet.core.HashAnnouncer import DummyHashAnnouncer
|
||||
from lbrynet.core.server.DHTHashAnnouncer import DHTHashAnnouncer
|
||||
from lbrynet.core.utils import generate_id
|
||||
from lbrynet.core.PaymentRateManager import BasePaymentRateManager
|
||||
from twisted.internet import threads, defer
|
||||
|
||||
|
||||
class LBRYSession(object):
|
||||
"""This class manages all important services common to any application that uses the network:
|
||||
the hash announcer, which informs other peers that this peer is associated with some hash. Usually,
|
||||
this means this peer has a blob identified by the hash in question, but it can be used for other
|
||||
purposes.
|
||||
the peer finder, which finds peers that are associated with some hash.
|
||||
the blob manager, which keeps track of which blobs have been downloaded and provides access to them,
|
||||
the rate limiter, which attempts to ensure download and upload rates stay below a set maximum,
|
||||
and upnp, which opens holes in compatible firewalls so that remote peers can connect to this peer."""
|
||||
def __init__(self, blob_data_payment_rate, db_dir=None, lbryid=None, peer_manager=None, dht_node_port=None,
|
||||
known_dht_nodes=None, peer_finder=None, hash_announcer=None,
|
||||
blob_dir=None, blob_manager=None, peer_port=None, use_upnp=True,
|
||||
rate_limiter=None, wallet=None):
|
||||
"""
|
||||
@param blob_data_payment_rate: The default payment rate for blob data
|
||||
|
||||
@param db_dir: The directory in which levelDB files should be stored
|
||||
|
||||
@param lbryid: The unique ID of this node
|
||||
|
||||
@param peer_manager: An object which keeps track of all known peers. If None, a PeerManager will be created
|
||||
|
||||
@param dht_node_port: The port on which the dht node should listen for incoming connections
|
||||
|
||||
@param known_dht_nodes: A list of nodes which the dht node should use to bootstrap into the dht
|
||||
|
||||
@param peer_finder: An object which is used to look up peers that are associated with some hash. If None,
|
||||
a DHTPeerFinder will be used, which looks for peers in the distributed hash table.
|
||||
|
||||
@param hash_announcer: An object which announces to other peers that this peer is associated with some hash.
|
||||
If None, and peer_port is not None, a DHTHashAnnouncer will be used. If None and
|
||||
peer_port is None, a DummyHashAnnouncer will be used, which will not actually announce
|
||||
anything.
|
||||
|
||||
@param blob_dir: The directory in which blobs will be stored. If None and blob_manager is None, blobs will
|
||||
be stored in memory only.
|
||||
|
||||
@param blob_manager: An object which keeps track of downloaded blobs and provides access to them. If None,
|
||||
and blob_dir is not None, a DiskBlobManager will be used, with the given blob_dir.
|
||||
If None and blob_dir is None, a TempBlobManager will be used, which stores blobs in
|
||||
memory only.
|
||||
|
||||
@param peer_port: The port on which other peers should connect to this peer
|
||||
|
||||
@param use_upnp: Whether or not to try to open a hole in the firewall so that outside peers can connect to
|
||||
this peer's peer_port and dht_node_port
|
||||
|
||||
@param rate_limiter: An object which keeps track of the amount of data transferred to and from this peer,
|
||||
and can limit that rate if desired
|
||||
|
||||
@param wallet: An object which will be used to keep track of expected payments and which will pay peers.
|
||||
If None, a wallet which uses the Point Trader system will be used, which is meant for testing
|
||||
only
|
||||
|
||||
@return:
|
||||
"""
|
||||
self.db_dir = db_dir
|
||||
|
||||
self.lbryid = lbryid
|
||||
|
||||
self.peer_manager = peer_manager
|
||||
|
||||
self.dht_node_port = dht_node_port
|
||||
self.known_dht_nodes = known_dht_nodes
|
||||
if self.known_dht_nodes is None:
|
||||
self.known_dht_nodes = []
|
||||
self.peer_finder = peer_finder
|
||||
self.hash_announcer = hash_announcer
|
||||
|
||||
self.blob_dir = blob_dir
|
||||
self.blob_manager = blob_manager
|
||||
|
||||
self.peer_port = peer_port
|
||||
|
||||
self.use_upnp = use_upnp
|
||||
|
||||
self.rate_limiter = rate_limiter
|
||||
|
||||
self.external_ip = '127.0.0.1'
|
||||
self.upnp_handler = None
|
||||
self.upnp_redirects_set = False
|
||||
|
||||
self.wallet = wallet
|
||||
|
||||
self.dht_node = None
|
||||
|
||||
self.base_payment_rate_manager = BasePaymentRateManager(blob_data_payment_rate)
|
||||
|
||||
def setup(self):
|
||||
"""Create the blob directory and database if necessary, start all desired services"""
|
||||
|
||||
logging.debug("Setting up the lbry session")
|
||||
|
||||
if self.lbryid is None:
|
||||
self.lbryid = generate_id()
|
||||
|
||||
if self.wallet is None:
|
||||
self.wallet = PTCWallet(self.db_dir)
|
||||
|
||||
if self.peer_manager is None:
|
||||
self.peer_manager = PeerManager()
|
||||
|
||||
if self.use_upnp is True:
|
||||
d = self._try_upnp()
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
|
||||
if self.peer_finder is None:
|
||||
d.addCallback(lambda _: self._setup_dht())
|
||||
else:
|
||||
if self.hash_announcer is None and self.peer_port is not None:
|
||||
logging.warning("The server has no way to advertise its available blobs.")
|
||||
self.hash_announcer = DummyHashAnnouncer()
|
||||
|
||||
d.addCallback(lambda _: self._setup_other_components())
|
||||
return d
|
||||
|
||||
def shut_down(self):
|
||||
"""Stop all services"""
|
||||
ds = []
|
||||
if self.dht_node is not None:
|
||||
ds.append(defer.maybeDeferred(self.dht_node.stop))
|
||||
ds.append(defer.maybeDeferred(self.rate_limiter.stop))
|
||||
ds.append(defer.maybeDeferred(self.peer_finder.stop))
|
||||
ds.append(defer.maybeDeferred(self.hash_announcer.stop))
|
||||
ds.append(defer.maybeDeferred(self.wallet.stop))
|
||||
ds.append(defer.maybeDeferred(self.blob_manager.stop))
|
||||
if self.upnp_redirects_set is True:
|
||||
ds.append(defer.maybeDeferred(self._unset_upnp))
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
def _try_upnp(self):
|
||||
|
||||
logging.debug("In _try_upnp")
|
||||
|
||||
def threaded_try_upnp():
|
||||
if self.use_upnp is False:
|
||||
logging.debug("Not using upnp")
|
||||
return False
|
||||
u = miniupnpc.UPnP()
|
||||
num_devices_found = u.discover()
|
||||
if num_devices_found > 0:
|
||||
self.upnp_handler = u
|
||||
u.selectigd()
|
||||
external_ip = u.externalipaddress()
|
||||
if external_ip != '0.0.0.0':
|
||||
self.external_ip = external_ip
|
||||
if self.peer_port is not None:
|
||||
u.addportmapping(self.peer_port, 'TCP', u.lanaddr, self.peer_port, 'LBRY peer port', '')
|
||||
if self.dht_node_port is not None:
|
||||
u.addportmapping(self.dht_node_port, 'UDP', u.lanaddr, self.dht_node_port, 'LBRY DHT port', '')
|
||||
self.upnp_redirects_set = True
|
||||
return True
|
||||
return False
|
||||
|
||||
def upnp_failed(err):
|
||||
logging.warning("UPnP failed. Reason: %s", err.getErrorMessage())
|
||||
return False
|
||||
|
||||
d = threads.deferToThread(threaded_try_upnp)
|
||||
d.addErrback(upnp_failed)
|
||||
return d
|
||||
|
||||
def _setup_dht(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
logging.debug("Starting the dht")
|
||||
|
||||
def match_port(h, p):
|
||||
return h, p
|
||||
|
||||
def join_resolved_addresses(result):
|
||||
addresses = []
|
||||
for success, value in result:
|
||||
if success is True:
|
||||
addresses.append(value)
|
||||
return addresses
|
||||
|
||||
def start_dht(addresses):
|
||||
self.dht_node.joinNetwork(addresses)
|
||||
self.peer_finder.run_manage_loop()
|
||||
self.hash_announcer.run_manage_loop()
|
||||
return True
|
||||
|
||||
ds = []
|
||||
for host, port in self.known_dht_nodes:
|
||||
d = reactor.resolve(host)
|
||||
d.addCallback(match_port, port)
|
||||
ds.append(d)
|
||||
|
||||
self.dht_node = node.Node(udpPort=self.dht_node_port, lbryid=self.lbryid,
|
||||
externalIP=self.external_ip)
|
||||
self.peer_finder = DHTPeerFinder(self.dht_node, self.peer_manager)
|
||||
if self.hash_announcer is None:
|
||||
self.hash_announcer = DHTHashAnnouncer(self.dht_node, self.peer_port)
|
||||
|
||||
dl = defer.DeferredList(ds)
|
||||
dl.addCallback(join_resolved_addresses)
|
||||
dl.addCallback(start_dht)
|
||||
return dl
|
||||
|
||||
def _setup_other_components(self):
|
||||
logging.debug("Setting up the rest of the components")
|
||||
|
||||
if self.rate_limiter is None:
|
||||
self.rate_limiter = RateLimiter()
|
||||
|
||||
if self.blob_manager is None:
|
||||
if self.blob_dir is None:
|
||||
self.blob_manager = TempBlobManager(self.hash_announcer)
|
||||
else:
|
||||
self.blob_manager = DiskBlobManager(self.hash_announcer, self.blob_dir, self.db_dir)
|
||||
|
||||
self.rate_limiter.tick()
|
||||
d1 = self.blob_manager.setup()
|
||||
d2 = self.wallet.start()
|
||||
return defer.DeferredList([d1, d2], fireOnOneErrback=True)
|
||||
|
||||
def _unset_upnp(self):
|
||||
|
||||
def threaded_unset_upnp():
|
||||
u = self.upnp_handler
|
||||
if self.peer_port is not None:
|
||||
u.deleteportmapping(self.peer_port, 'TCP')
|
||||
if self.dht_node_port is not None:
|
||||
u.deleteportmapping(self.dht_node_port, 'UDP')
|
||||
self.upnp_redirects_set = False
|
||||
|
||||
return threads.deferToThread(threaded_unset_upnp)
|
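Putting the session together: a hypothetical, minimal bring-up using the defaults described in the constructor docstring (testing wallet, in-memory blobs, no UPnP). The directory, ports, and payment rate are placeholders; known_dht_nodes would normally list real bootstrap (host, port) pairs:

import logging
from twisted.internet import reactor
from lbrynet.core.Session import LBRYSession

session = LBRYSession(blob_data_payment_rate=0.005,
                      db_dir="/tmp/lbrynet",   # directory must already exist
                      dht_node_port=4444, peer_port=3333,
                      use_upnp=False)

def start():
    d = session.setup()
    d.addCallback(lambda _: logging.info("session is ready"))
    d.addErrback(lambda err: logging.error("setup failed: %s", err.getErrorMessage()))
    d.addCallback(lambda _: session.shut_down())
    d.addBoth(lambda _: reactor.stop())

reactor.callWhenRunning(start)
reactor.run()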
73
lbrynet/core/StreamCreator.py
Normal file
|
@ -0,0 +1,73 @@
|
|||
import logging
|
||||
from twisted.internet import interfaces, defer
|
||||
from zope.interface import implements
|
||||
|
||||
|
||||
class StreamCreator(object):
|
||||
"""Classes which derive from this class create a 'stream', which can be any
|
||||
collection of associated blobs and associated metadata. These classes
|
||||
use the IConsumer interface to get data from an IProducer and transform
|
||||
the data into a 'stream'"""
|
||||
|
||||
implements(interfaces.IConsumer)
|
||||
|
||||
def __init__(self, name):
|
||||
"""
|
||||
@param name: the name of the stream
|
||||
"""
|
||||
self.name = name
|
||||
self.stopped = True
|
||||
self.producer = None
|
||||
self.streaming = None
|
||||
self.blob_count = -1
|
||||
self.current_blob = None
|
||||
self.finished_deferreds = []
|
||||
|
||||
def _blob_finished(self, blob_info):
|
||||
pass
|
||||
|
||||
def registerProducer(self, producer, streaming):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.producer = producer
|
||||
self.streaming = streaming
|
||||
self.stopped = False
|
||||
if streaming is False:
|
||||
reactor.callLater(0, self.producer.resumeProducing)
|
||||
|
||||
def unregisterProducer(self):
|
||||
self.stopped = True
|
||||
self.producer = None
|
||||
|
||||
def stop(self):
|
||||
"""Stop creating the stream. Create the terminating zero-length blob."""
|
||||
logging.debug("stop has been called for StreamCreator")
|
||||
self.stopped = True
|
||||
if self.current_blob is not None:
|
||||
current_blob = self.current_blob
|
||||
d = current_blob.close()
|
||||
d.addCallback(self._blob_finished)
|
||||
self.finished_deferreds.append(d)
|
||||
self.current_blob = None
|
||||
self._finalize()
|
||||
dl = defer.DeferredList(self.finished_deferreds)
|
||||
dl.addCallback(lambda _: self._finished())
|
||||
return dl
|
||||
|
||||
def _finalize(self):
|
||||
pass
|
||||
|
||||
def _finished(self):
|
||||
pass
|
||||
|
||||
def write(self, data):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self._write(data)
|
||||
if self.stopped is False and self.streaming is False:
|
||||
reactor.callLater(0, self.producer.resumeProducing)
|
||||
|
||||
def _write(self, data):
|
||||
pass
|
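A toy subclass to show the shape of the consumer interface described above; it only counts bytes instead of cutting the data into real blobs, so it is an illustration rather than the real file-stream creator found elsewhere in this tree:

from lbrynet.core.StreamCreator import StreamCreator

class CountingStreamCreator(StreamCreator):
    """Counts what it is fed; _write is the hook a subclass fills in to
    receive the producer's data."""
    def __init__(self, name):
        StreamCreator.__init__(self, name)
        self.bytes_written = 0

    def _write(self, data):
        self.bytes_written += len(data)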
195
lbrynet/core/StreamDescriptor.py
Normal file
|
@ -0,0 +1,195 @@
|
|||
from collections import defaultdict
|
||||
import json
|
||||
import logging
|
||||
from twisted.internet import threads
|
||||
from lbrynet.core.client.StandaloneBlobDownloader import StandaloneBlobDownloader
|
||||
|
||||
|
||||
class StreamDescriptorReader(object):
|
||||
"""Classes which derive from this class read a stream descriptor file return
|
||||
a dictionary containing the fields in the file"""
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
def _get_raw_data(self):
|
||||
"""This method must be overridden by subclasses. It should return a deferred
|
||||
which fires with the raw data in the stream descriptor"""
|
||||
pass
|
||||
|
||||
def get_info(self):
|
||||
"""Return the fields contained in the file"""
|
||||
d = self._get_raw_data()
|
||||
d.addCallback(json.loads)
|
||||
return d
|
||||
|
||||
|
||||
class PlainStreamDescriptorReader(StreamDescriptorReader):
|
||||
"""Read a stream descriptor file which is not a blob but a regular file"""
|
||||
def __init__(self, stream_descriptor_filename):
|
||||
StreamDescriptorReader.__init__(self)
|
||||
self.stream_descriptor_filename = stream_descriptor_filename
|
||||
|
||||
def _get_raw_data(self):
|
||||
|
||||
def get_data():
|
||||
with open(self.stream_descriptor_filename) as file_handle:
|
||||
raw_data = file_handle.read()
|
||||
return raw_data
|
||||
|
||||
return threads.deferToThread(get_data)
|
||||
|
||||
|
||||
class BlobStreamDescriptorReader(StreamDescriptorReader):
|
||||
"""Read a stream descriptor file which is a blob"""
|
||||
def __init__(self, blob):
|
||||
StreamDescriptorReader.__init__(self)
|
||||
self.blob = blob
|
||||
|
||||
def _get_raw_data(self):
|
||||
|
||||
def get_data():
|
||||
f = self.blob.open_for_reading()
|
||||
if f is not None:
|
||||
raw_data = f.read()
|
||||
self.blob.close_read_handle(f)
|
||||
return raw_data
|
||||
else:
|
||||
raise ValueError("Could not open the blob for reading")
|
||||
|
||||
return threads.deferToThread(get_data)
|
||||
|
||||
|
||||
class StreamDescriptorWriter(object):
|
||||
"""Classes which derive from this class write fields from a dictionary
|
||||
of fields to a stream descriptor"""
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
def create_descriptor(self, sd_info):
|
||||
return self._write_stream_descriptor(json.dumps(sd_info))
|
||||
|
||||
def _write_stream_descriptor(self, raw_data):
|
||||
"""This method must be overridden by subclasses to write raw data to the stream descriptor"""
|
||||
pass
|
||||
|
||||
|
||||
class PlainStreamDescriptorWriter(StreamDescriptorWriter):
|
||||
def __init__(self, sd_file_name):
|
||||
StreamDescriptorWriter.__init__(self)
|
||||
self.sd_file_name = sd_file_name
|
||||
|
||||
def _write_stream_descriptor(self, raw_data):
|
||||
|
||||
def write_file():
|
||||
logging.debug("Writing the sd file to disk")
|
||||
with open(self.sd_file_name, 'w') as sd_file:
|
||||
sd_file.write(raw_data)
|
||||
return self.sd_file_name
|
||||
|
||||
return threads.deferToThread(write_file)
|
||||
|
||||
|
||||
class BlobStreamDescriptorWriter(StreamDescriptorWriter):
|
||||
def __init__(self, blob_manager):
|
||||
StreamDescriptorWriter.__init__(self)
|
||||
|
||||
self.blob_manager = blob_manager
|
||||
|
||||
def _write_stream_descriptor(self, raw_data):
|
||||
logging.debug("Creating the new blob for the stream descriptor")
|
||||
blob_creator = self.blob_manager.get_blob_creator()
|
||||
blob_creator.write(raw_data)
|
||||
logging.debug("Wrote the data to the new blob")
|
||||
return blob_creator.close()
|
||||
|
||||
|
||||
class StreamDescriptorIdentifier(object):
|
||||
"""Tries to determine the type of stream described by the stream descriptor using the
|
||||
'stream_type' field. Keeps a list of StreamDescriptorValidators and StreamDownloaderFactories
|
||||
and returns the appropriate ones based on the type of the stream descriptor given
|
||||
"""
|
||||
def __init__(self):
|
||||
self._sd_info_validators = {} # {stream_type: IStreamDescriptorValidator}
|
||||
self._stream_downloader_factories = defaultdict(list) # {stream_type: [IStreamDownloaderFactory]}
|
||||
|
||||
def add_stream_info_validator(self, stream_type, sd_info_validator):
|
||||
"""
|
||||
This is how the StreamDescriptorIdentifier learns about new types of stream descriptors.
|
||||
|
||||
There can only be one StreamDescriptorValidator for each type of stream.
|
||||
|
||||
@param stream_type: A string representing the type of stream descriptor. This must be unique to
|
||||
this stream descriptor.
|
||||
|
||||
@param sd_info_validator: A class implementing the IStreamDescriptorValidator interface. This class's
|
||||
constructor will be passed the raw metadata in the stream descriptor file and its 'validate' method
|
||||
will then be called. If the validation step fails, an exception will be thrown, preventing the stream
|
||||
descriptor from being further processed.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self._sd_info_validators[stream_type] = sd_info_validator
|
||||
|
||||
def add_stream_downloader_factory(self, stream_type, factory):
|
||||
"""
|
||||
Register a stream downloader factory with the StreamDescriptorIdentifier.
|
||||
|
||||
This is how the StreamDescriptorIdentifier determines what factories may be used to process different stream
|
||||
descriptor files. There must be at least one factory for each type of stream added via
|
||||
"add_stream_info_validator".
|
||||
|
||||
@param stream_type: A string representing the type of stream descriptor which the factory knows how to process.
|
||||
|
||||
@param factory: An object implementing the IStreamDownloaderFactory interface.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self._stream_downloader_factories[stream_type].append(factory)
|
||||
|
||||
def get_info_and_factories_for_sd_file(self, sd_path):
|
||||
|
||||
sd_reader = PlainStreamDescriptorReader(sd_path)
|
||||
d = sd_reader.get_info()
|
||||
d.addCallback(self._return_info_and_factories)
|
||||
return d
|
||||
|
||||
def get_info_and_factories_for_sd_blob(self, sd_blob):
|
||||
sd_reader = BlobStreamDescriptorReader(sd_blob)
|
||||
d = sd_reader.get_info()
|
||||
d.addCallback(self._return_info_and_factories)
|
||||
return d
|
||||
|
||||
def _get_factories(self, stream_type):
|
||||
assert stream_type in self._stream_downloader_factories, "Unrecognized stream type: " + str(stream_type)
|
||||
return self._stream_downloader_factories[stream_type]
|
||||
|
||||
def _get_validator(self, stream_type):
|
||||
assert stream_type in self._sd_info_validators, "Unrecognized stream type: " + str(stream_type)
|
||||
return self._sd_info_validators[stream_type]
|
||||
|
||||
def _return_info_and_factories(self, sd_info):
|
||||
assert 'stream_type' in sd_info, 'Invalid stream descriptor. No stream_type parameter.'
|
||||
stream_type = sd_info['stream_type']
|
||||
factories = self._get_factories(stream_type)
|
||||
validator = self._get_validator(stream_type)(sd_info)
|
||||
d = validator.validate()
|
||||
|
||||
d.addCallback(lambda _: (validator, factories))
|
||||
return d
|
||||
|
||||
|
||||
def download_sd_blob(session, blob_hash, payment_rate_manager):
|
||||
"""
|
||||
Downloads a single blob from the network
|
||||
|
||||
@param session:
|
||||
|
||||
@param blob_hash:
|
||||
|
||||
@param payment_rate_manager:
|
||||
|
||||
@return: An object of type HashBlob
|
||||
"""
|
||||
downloader = StandaloneBlobDownloader(blob_hash, session.blob_manager, session.peer_finder,
|
||||
session.rate_limiter, payment_rate_manager, session.wallet)
|
||||
return downloader.download()
|
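A round-trip sketch for the plain (non-blob) writer and reader; the field names in sd_info are illustrative rather than a required schema, and the output file name is a placeholder:

import logging
from twisted.internet import reactor
from lbrynet.core.StreamDescriptor import PlainStreamDescriptorReader, PlainStreamDescriptorWriter

def demo():
    sd_info = {"stream_type": "example_stream", "stream_name": "test"}
    writer = PlainStreamDescriptorWriter("example.cryptsd")
    d = writer.create_descriptor(sd_info)
    # create_descriptor fires with the path it wrote, which the reader accepts.
    d.addCallback(lambda sd_path: PlainStreamDescriptorReader(sd_path).get_info())
    d.addCallback(lambda info: logging.info("read back: %s", info))
    d.addBoth(lambda _: reactor.stop())

reactor.callWhenRunning(demo)
reactor.run()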
7
lbrynet/core/__init__.py
Normal file
|
@ -0,0 +1,7 @@
|
|||
"""
|
||||
Classes and functions which can be used by any application wishing to make use of the LBRY network.
|
||||
|
||||
This includes classes for connecting to other peers and downloading blobs from them, listening for
|
||||
connections from peers and responding to their requests, managing locally stored blobs, sending
|
||||
and receiving payments, and locating peers in the DHT.
|
||||
"""
|
307
lbrynet/core/client/BlobRequester.py
Normal file
|
@ -0,0 +1,307 @@
|
|||
from collections import defaultdict
|
||||
import logging
|
||||
from twisted.internet import defer
|
||||
from twisted.python.failure import Failure
|
||||
from zope.interface import implements
|
||||
from lbrynet.core.Error import PriceDisagreementError, DownloadCanceledError, InsufficientFundsError
|
||||
from lbrynet.core.Error import InvalidResponseError, RequestCanceledError, NoResponseError
|
||||
from lbrynet.core.client.ClientRequest import ClientRequest, ClientBlobRequest
|
||||
from lbrynet.interfaces import IRequestCreator
|
||||
|
||||
|
||||
class BlobRequester(object):
|
||||
implements(IRequestCreator)
|
||||
|
||||
def __init__(self, blob_manager, peer_finder, payment_rate_manager, wallet, download_manager):
|
||||
self.blob_manager = blob_manager
|
||||
self.peer_finder = peer_finder
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.wallet = wallet
|
||||
self.download_manager = download_manager
|
||||
self._peers = defaultdict(int) # {Peer: score}
|
||||
self._available_blobs = defaultdict(list) # {Peer: [blob_hash]}
|
||||
self._unavailable_blobs = defaultdict(list) # {Peer: [blob_hash]}
|
||||
self._protocol_prices = {} # {ClientProtocol: price}
|
||||
self._price_disagreements = [] # [Peer]
|
||||
self._incompatible_peers = []
|
||||
|
||||
######## IRequestCreator #########
|
||||
|
||||
def send_next_request(self, peer, protocol):
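# A single message to a peer can bundle up to three queries: an availability
# query for blobs we still need, a download request for one blob the peer has
# already advertised, and, if no rate has been settled on this protocol yet,
# a blob-data payment rate proposal.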
|
||||
sent_request = False
|
||||
if self._blobs_to_download() and self._should_send_request_to(peer):
|
||||
a_r = self._get_availability_request(peer)
|
||||
d_r = self._get_download_request(peer)
|
||||
p_r = None
|
||||
|
||||
if a_r is not None or d_r is not None:
|
||||
p_r = self._get_price_request(peer, protocol)
|
||||
|
||||
if a_r is not None:
|
||||
d1 = protocol.add_request(a_r)
|
||||
d1.addCallback(self._handle_availability, peer, a_r)
|
||||
d1.addErrback(self._request_failed, "availability request", peer)
|
||||
sent_request = True
|
||||
if d_r is not None:
|
||||
reserved_points = self._reserve_points(peer, protocol, d_r.max_pay_units)
|
||||
if reserved_points is not None:
|
||||
# Note: The following three callbacks will be called when the blob has been
|
||||
# fully downloaded or canceled
|
||||
d_r.finished_deferred.addCallbacks(self._download_succeeded, self._download_failed,
|
||||
callbackArgs=(peer, d_r.blob),
|
||||
errbackArgs=(peer,))
|
||||
d_r.finished_deferred.addBoth(self._pay_or_cancel_payment, protocol, reserved_points, d_r.blob)
|
||||
d_r.finished_deferred.addErrback(self._handle_download_error, peer, d_r.blob)
|
||||
|
||||
d2 = protocol.add_blob_request(d_r)
|
||||
# Note: The following two callbacks will be called as soon as the peer sends its
|
||||
# response, which will be before the blob has finished downloading, but may be
|
||||
# after the blob has been canceled. For example,
|
||||
# 1) client sends request to Peer A
|
||||
# 2) the blob is finished downloading from peer B, and therefore this one is canceled
|
||||
# 3) client receives response from Peer A
|
||||
# Therefore, these callbacks shouldn't rely on there being a blob about to be
|
||||
# downloaded.
|
||||
d2.addCallback(self._handle_incoming_blob, peer, d_r)
|
||||
d2.addErrback(self._request_failed, "download request", peer)
|
||||
|
||||
sent_request = True
|
||||
else:
|
||||
d_r.cancel(InsufficientFundsError())
|
||||
return defer.fail(InsufficientFundsError())
|
||||
if sent_request is True:
|
||||
if p_r is not None:
|
||||
d3 = protocol.add_request(p_r)
|
||||
d3.addCallback(self._handle_price_response, peer, p_r, protocol)
|
||||
d3.addErrback(self._request_failed, "price request", peer)
|
||||
return defer.succeed(sent_request)
|
||||
|
||||
def get_new_peers(self):
|
||||
d = self._get_hash_for_peer_search()
|
||||
d.addCallback(self._find_peers_for_hash)
|
||||
return d
|
||||
|
||||
######### internal calls #########
|
||||
|
||||
def _download_succeeded(self, arg, peer, blob):
|
||||
logging.info("Blob %s has been successfully downloaded from %s", str(blob), str(peer))
|
||||
self._update_local_score(peer, 5.0)
|
||||
peer.update_stats('blobs_downloaded', 1)
|
||||
peer.update_score(5.0)
|
||||
self.blob_manager.blob_completed(blob)
|
||||
return arg
|
||||
|
||||
def _download_failed(self, reason, peer):
|
||||
if not reason.check(DownloadCanceledError, PriceDisagreementError):
|
||||
self._update_local_score(peer, -10.0)
|
||||
return reason
|
||||
|
||||
def _pay_or_cancel_payment(self, arg, protocol, reserved_points, blob):
|
||||
if blob.length != 0 and (not isinstance(arg, Failure) or arg.check(DownloadCanceledError)):
|
||||
self._pay_peer(protocol, blob.length, reserved_points)
|
||||
else:
|
||||
self._cancel_points(reserved_points)
|
||||
return arg
|
||||
|
||||
def _handle_download_error(self, err, peer, blob_to_download):
|
||||
if not err.check(DownloadCanceledError, PriceDisagreementError, RequestCanceledError):
|
||||
logging.warning("An error occurred while downloading %s from %s. Error: %s",
|
||||
blob_to_download.blob_hash, str(peer), err.getTraceback())
|
||||
if err.check(PriceDisagreementError):
|
||||
# Don't kill the whole connection just because a price couldn't be agreed upon.
|
||||
# Other information might be desired by other request creators at a better rate.
|
||||
return True
|
||||
return err
|
||||
|
||||
def _get_hash_for_peer_search(self):
|
||||
r = None
|
||||
blobs_to_download = self._blobs_to_download()
|
||||
if blobs_to_download:
|
||||
blobs_without_sources = self._blobs_without_sources()
|
||||
if not blobs_without_sources:
|
||||
blob_hash = blobs_to_download[0].blob_hash
|
||||
else:
|
||||
blob_hash = blobs_without_sources[0].blob_hash
|
||||
r = blob_hash
|
||||
logging.debug("Blob requester peer search response: %s", str(r))
|
||||
return defer.succeed(r)
|
||||
|
||||
def _find_peers_for_hash(self, h):
|
||||
if h is None:
|
||||
return None
|
||||
else:
|
||||
d = self.peer_finder.find_peers_for_blob(h)
|
||||
|
||||
def choose_best_peers(peers):
|
||||
bad_peers = self._get_bad_peers()
|
||||
return [p for p in peers if not p in bad_peers]
|
||||
|
||||
d.addCallback(choose_best_peers)
|
||||
return d
|
||||
|
||||
def _should_send_request_to(self, peer):
|
||||
if self._peers[peer] < -5.0:
|
||||
return False
|
||||
if peer in self._price_disagreements:
|
||||
return False
|
||||
if peer in self._incompatible_peers:
|
||||
return False
|
||||
return True
|
||||
|
||||
def _get_bad_peers(self):
|
||||
return [p for p in self._peers.iterkeys() if not self._should_send_request_to(p)]
|
||||
|
||||
def _hash_available(self, blob_hash):
|
||||
for peer in self._available_blobs:
|
||||
if blob_hash in self._available_blobs[peer]:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _hash_available_on(self, blob_hash, peer):
|
||||
if blob_hash in self._available_blobs[peer]:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _blobs_to_download(self):
|
||||
needed_blobs = self.download_manager.needed_blobs()
|
||||
return sorted(needed_blobs, key=lambda b: b.is_downloading())
|
||||
|
||||
def _blobs_without_sources(self):
|
||||
return [b for b in self.download_manager.needed_blobs() if not self._hash_available(b.blob_hash)]
|
||||
|
||||
def _get_availability_request(self, peer):
|
||||
all_needed = [b.blob_hash for b in self._blobs_to_download() if not b.blob_hash in self._available_blobs[peer]]
|
||||
# sort them so that the peer is asked first about blobs it has not already reported as unavailable
|
||||
to_request = sorted(all_needed, key=lambda b: b in self._unavailable_blobs[peer])[:20]
|
||||
if to_request:
|
||||
r_dict = {'requested_blobs': to_request}
|
||||
response_identifier = 'available_blobs'
|
||||
request = ClientRequest(r_dict, response_identifier)
|
||||
return request
|
||||
return None
|
||||
|
||||
def _get_download_request(self, peer):
|
||||
request = None
|
||||
to_download = [b for b in self._blobs_to_download() if self._hash_available_on(b.blob_hash, peer)]
|
||||
while to_download and request is None:
|
||||
blob_to_download = to_download[0]
|
||||
to_download = to_download[1:]
|
||||
if not blob_to_download.is_validated():
|
||||
d, write_func, cancel_func = blob_to_download.open_for_writing(peer)
|
||||
|
||||
def counting_write_func(data):
|
||||
peer.update_stats('blob_bytes_downloaded', len(data))
|
||||
return write_func(data)
|
||||
|
||||
if d is not None:
|
||||
|
||||
request_dict = {'requested_blob': blob_to_download.blob_hash}
|
||||
response_identifier = 'incoming_blob'
|
||||
|
||||
request = ClientBlobRequest(request_dict, response_identifier, counting_write_func, d,
|
||||
cancel_func, blob_to_download)
|
||||
|
||||
logging.info("Requesting blob %s from %s", str(blob_to_download), str(peer))
|
||||
return request
|
||||
|
||||
def _price_settled(self, protocol):
|
||||
if protocol in self._protocol_prices:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _get_price_request(self, peer, protocol):
|
||||
request = None
|
||||
if not protocol in self._protocol_prices:
|
||||
self._protocol_prices[protocol] = self.payment_rate_manager.get_rate_blob_data(peer)
|
||||
request_dict = {'blob_data_payment_rate': self._protocol_prices[protocol]}
|
||||
request = ClientRequest(request_dict, 'blob_data_payment_rate')
|
||||
return request
|
||||
|
||||
def _update_local_score(self, peer, amount):
|
||||
self._peers[peer] += amount
|
||||
|
||||
def _reserve_points(self, peer, protocol, max_bytes):
|
||||
assert protocol in self._protocol_prices
|
||||
points_to_reserve = 1.0 * max_bytes * self._protocol_prices[protocol] / 2**20
|
||||
return self.wallet.reserve_points(peer, points_to_reserve)
|
||||
|
||||
def _pay_peer(self, protocol, num_bytes, reserved_points):
|
||||
assert num_bytes != 0
|
||||
assert protocol in self._protocol_prices
|
||||
point_amount = 1.0 * num_bytes * self._protocol_prices[protocol] / 2**20
|
||||
self.wallet.send_points(reserved_points, point_amount)
|
||||
self.payment_rate_manager.record_points_paid(point_amount)
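# Worked example: with a negotiated rate of 0.5 points per megabyte, finishing
# a 2**21-byte (2 MiB) blob costs 1.0 * 2**21 * 0.5 / 2**20 == 1.0 point, i.e.
# prices are expressed in points per 2**20 bytes of blob data.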
|
||||
|
||||
def _cancel_points(self, reserved_points):
|
||||
self.wallet.cancel_point_reservation(reserved_points)
|
||||
|
||||
def _handle_availability(self, response_dict, peer, request):
|
||||
if not request.response_identifier in response_dict:
|
||||
raise InvalidResponseError("response identifier not in response")
|
||||
logging.debug("Received a response to the availability request")
|
||||
blob_hashes = response_dict[request.response_identifier]
|
||||
for blob_hash in blob_hashes:
|
||||
if blob_hash in request.request_dict['requested_blobs']:
|
||||
logging.debug("The server has indicated it has the following blob available: %s", blob_hash)
|
||||
self._available_blobs[peer].append(blob_hash)
|
||||
if blob_hash in self._unavailable_blobs[peer]:
|
||||
self._unavailable_blobs[peer].remove(blob_hash)
|
||||
request.request_dict['requested_blobs'].remove(blob_hash)
|
||||
for blob_hash in request.request_dict['requested_blobs']:
|
||||
self._unavailable_blobs[peer].append(blob_hash)
|
||||
return True
|
||||
|
||||
def _handle_incoming_blob(self, response_dict, peer, request):
|
||||
if not request.response_identifier in response_dict:
|
||||
return InvalidResponseError("response identifier not in response")
|
||||
if not isinstance(response_dict[request.response_identifier], dict):
|
||||
return InvalidResponseError("response not a dict. got %s" %
|
||||
(type(response_dict[request.response_identifier]),))
|
||||
response = response_dict[request.response_identifier]
|
||||
if 'error' in response:
|
||||
# This means we're not getting our blob for some reason
|
||||
if response['error'] == "RATE_UNSET":
|
||||
# Stop the download with an error that won't penalize the peer
|
||||
request.cancel(PriceDisagreementError())
|
||||
else:
|
||||
# The peer has done something bad so we should get out of here
|
||||
return InvalidResponseError("Got an unknown error from the peer: %s" %
|
||||
(response['error'],))
|
||||
else:
|
||||
if not 'blob_hash' in response:
|
||||
return InvalidResponseError("Missing the required field 'blob_hash'")
|
||||
if not response['blob_hash'] == request.request_dict['requested_blob']:
|
||||
return InvalidResponseError("Incoming blob does not match expected. Incoming: %s. Expected: %s" %
|
||||
(response['blob_hash'], request.request_dict['requested_blob']))
|
||||
if not 'length' in response:
|
||||
return InvalidResponseError("Missing the required field 'length'")
|
||||
if not request.blob.set_length(response['length']):
|
||||
return InvalidResponseError("Could not set the length of the blob")
|
||||
return True
|
||||
|
||||
def _handle_price_response(self, response_dict, peer, request, protocol):
|
||||
if not request.response_identifier in response_dict:
|
||||
return InvalidResponseError("response identifier not in response")
|
||||
assert protocol in self._protocol_prices
|
||||
response = response_dict[request.response_identifier]
|
||||
if response == "RATE_ACCEPTED":
|
||||
return True
|
||||
else:
|
||||
del self._protocol_prices[protocol]
|
||||
self._price_disagreements.append(peer)
|
||||
return True
|
||||
|
||||
def _request_failed(self, reason, request_type, peer):
|
||||
if reason.check(RequestCanceledError):
|
||||
return
|
||||
if reason.check(NoResponseError):
|
||||
self._incompatible_peers.append(peer)
|
||||
return
|
||||
logging.warning("Blob requester: a request of type '%s' failed. Reason: %s, Error type: %s",
|
||||
str(request_type), reason.getErrorMessage(), reason.type)
|
||||
self._update_local_score(peer, -10.0)
|
||||
if reason.check(InvalidResponseError):
|
||||
peer.update_score(-10.0)
|
||||
else:
|
||||
peer.update_score(-2.0)
|
||||
return reason
|
235
lbrynet/core/client/ClientProtocol.py
Normal file
|
@@ -0,0 +1,235 @@
|
|||
import json
|
||||
import logging
|
||||
from twisted.internet import error, defer, reactor
|
||||
from twisted.internet.protocol import Protocol, ClientFactory
|
||||
from twisted.python import failure
|
||||
from lbrynet.conf import MAX_RESPONSE_INFO_SIZE as MAX_RESPONSE_SIZE
|
||||
from lbrynet.core.Error import ConnectionClosedBeforeResponseError, NoResponseError
|
||||
from lbrynet.core.Error import DownloadCanceledError, MisbehavingPeerError
|
||||
from lbrynet.core.Error import RequestCanceledError
|
||||
from lbrynet.interfaces import IRequestSender, IRateLimited
|
||||
from zope.interface import implements
|
||||
|
||||
|
||||
class ClientProtocol(Protocol):
|
||||
implements(IRequestSender, IRateLimited)
|
||||
|
||||
######### Protocol #########
|
||||
|
||||
def connectionMade(self):
|
||||
self._connection_manager = self.factory.connection_manager
|
||||
self._rate_limiter = self.factory.rate_limiter
|
||||
self.peer = self.factory.peer
|
||||
self._response_deferreds = {}
|
||||
self._response_buff = ''
|
||||
self._downloading_blob = False
|
||||
self._blob_download_request = None
|
||||
self._next_request = {}
|
||||
self.connection_closed = False
|
||||
self.connection_closing = False
|
||||
|
||||
self.peer.report_up()
|
||||
|
||||
self._ask_for_request()
|
||||
|
||||
def dataReceived(self, data):
|
||||
self._rate_limiter.report_dl_bytes(len(data))
|
||||
if self._downloading_blob is True:
|
||||
self._blob_download_request.write(data)
|
||||
else:
|
||||
self._response_buff += data
|
||||
if len(self._response_buff) > MAX_RESPONSE_SIZE:
|
||||
logging.warning("Response is too large. Size %s", len(self._response_buff))
|
||||
self.transport.loseConnection()
|
||||
response, extra_data = self._get_valid_response(self._response_buff)
|
||||
if response is not None:
|
||||
self._response_buff = ''
|
||||
self._handle_response(response)
|
||||
if self._downloading_blob is True and len(extra_data) != 0:
|
||||
self._blob_download_request.write(extra_data)
|
||||
|
||||
def connectionLost(self, reason):
|
||||
self.connection_closed = True
|
||||
if reason.check(error.ConnectionDone):
|
||||
err = failure.Failure(ConnectionClosedBeforeResponseError())
|
||||
else:
|
||||
err = reason
|
||||
#if self._response_deferreds:
|
||||
# logging.warning("Lost connection with active response deferreds. %s", str(self._response_deferreds))
|
||||
for key, d in self._response_deferreds.items():
|
||||
del self._response_deferreds[key]
|
||||
d.errback(err)
|
||||
if self._blob_download_request is not None:
|
||||
self._blob_download_request.cancel(err)
|
||||
self._connection_manager.protocol_disconnected(self.peer, self)
|
||||
|
||||
######### IRequestSender #########
|
||||
|
||||
def add_request(self, request):
|
||||
if request.response_identifier in self._response_deferreds:
|
||||
return defer.fail(failure.Failure(ValueError("There is already a request for that response active")))
|
||||
self._next_request.update(request.request_dict)
|
||||
d = defer.Deferred()
|
||||
logging.debug("Adding a request. Request: %s", str(request))
|
||||
self._response_deferreds[request.response_identifier] = d
|
||||
return d
|
||||
|
||||
def add_blob_request(self, blob_request):
|
||||
if self._blob_download_request is None:
|
||||
d = self.add_request(blob_request)
|
||||
self._blob_download_request = blob_request
|
||||
blob_request.finished_deferred.addCallbacks(self._downloading_finished,
|
||||
self._downloading_failed)
|
||||
blob_request.finished_deferred.addErrback(self._handle_response_error)
|
||||
return d
|
||||
else:
|
||||
return defer.fail(failure.Failure(ValueError("There is already a blob download request active")))
|
||||
|
||||
def cancel_requests(self):
|
||||
self.connection_closing = True
|
||||
ds = []
|
||||
err = failure.Failure(RequestCanceledError())
|
||||
for key, d in self._response_deferreds.items():
|
||||
del self._response_deferreds[key]
|
||||
d.errback(err)
|
||||
ds.append(d)
|
||||
if self._blob_download_request is not None:
|
||||
self._blob_download_request.cancel(err)
|
||||
ds.append(self._blob_download_request.finished_deferred)
|
||||
self._blob_download_request = None
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
######### Internal request handling #########
|
||||
|
||||
def _handle_request_error(self, err):
|
||||
logging.error("An unexpected error occurred creating or sending a request to %s. Error message: %s",
|
||||
str(self.peer), err.getTraceback())
|
||||
self.transport.loseConnection()
|
||||
|
||||
def _ask_for_request(self):
|
||||
|
||||
if self.connection_closed is True or self.connection_closing is True:
|
||||
return
|
||||
|
||||
def send_request_or_close(do_request):
|
||||
if do_request is True:
|
||||
request_msg, self._next_request = self._next_request, {}
|
||||
self._send_request_message(request_msg)
|
||||
else:
|
||||
# The connection manager has indicated that this connection should be terminated
|
||||
logging.info("Closing the connection to %s due to having no further requests to send", str(self.peer))
|
||||
self.transport.loseConnection()
|
||||
|
||||
d = self._connection_manager.get_next_request(self.peer, self)
|
||||
d.addCallback(send_request_or_close)
|
||||
d.addErrback(self._handle_request_error)
|
||||
|
||||
def _send_request_message(self, request_msg):
|
||||
# TODO: compare this message to the last one. If they're the same,
|
||||
# TODO: incrementally delay this message.
|
||||
m = json.dumps(request_msg)
|
||||
self.transport.write(m)
|
||||
|
||||
def _get_valid_response(self, response_msg):
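# The server's response is a JSON object that may be followed immediately by raw
# blob bytes in the same buffer. Scan forward one '}' at a time and try to parse
# the prefix as JSON; once a prefix parses, everything after it is returned as
# extra_data and handed to the blob download request by the caller.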
|
||||
extra_data = None
|
||||
response = None
|
||||
curr_pos = 0
|
||||
while True:
|
||||
next_close_paren = response_msg.find('}', curr_pos)
|
||||
if next_close_paren != -1:
|
||||
curr_pos = next_close_paren + 1
|
||||
try:
|
||||
response = json.loads(response_msg[:curr_pos])
|
||||
except ValueError:
|
||||
pass
|
||||
else:
|
||||
extra_data = response_msg[curr_pos:]
|
||||
break
|
||||
else:
|
||||
break
|
||||
return response, extra_data
|
||||
|
||||
def _handle_response_error(self, err):
|
||||
# If an error gets to this point, log it and kill the connection.
|
||||
if not err.check(MisbehavingPeerError, ConnectionClosedBeforeResponseError, DownloadCanceledError,
|
||||
RequestCanceledError):
|
||||
logging.error("The connection to %s is closing due to an unexpected error: %s", str(self.peer),
|
||||
err.getErrorMessage())
|
||||
if not err.check(RequestCanceledError):
|
||||
self.transport.loseConnection()
|
||||
|
||||
def _handle_response(self, response):
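# Match each key in the response against the pending request deferreds and fire
# them; any deferred whose key is missing from the response gets a NoResponseError.
# If a blob request is pending, switch into blob-download mode, and once every
# deferred has resolved ask the connection manager for the next request.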
|
||||
ds = []
|
||||
logging.debug("Handling a response. Current expected responses: %s", str(self._response_deferreds))
|
||||
for key, val in response.items():
|
||||
if key in self._response_deferreds:
|
||||
d = self._response_deferreds[key]
|
||||
del self._response_deferreds[key]
|
||||
d.callback({key: val})
|
||||
ds.append(d)
|
||||
for k, d in self._response_deferreds.items():
|
||||
del self._response_deferreds[k]
|
||||
d.errback(failure.Failure(NoResponseError()))
|
||||
ds.append(d)
|
||||
|
||||
if self._blob_download_request is not None:
|
||||
self._downloading_blob = True
|
||||
d = self._blob_download_request.finished_deferred
|
||||
d.addErrback(self._handle_response_error)
|
||||
ds.append(d)
|
||||
|
||||
dl = defer.DeferredList(ds)
|
||||
|
||||
dl.addCallback(lambda _: self._ask_for_request())
|
||||
|
||||
def _downloading_finished(self, arg):
|
||||
logging.debug("The blob has finished downloading")
|
||||
self._blob_download_request = None
|
||||
self._downloading_blob = False
|
||||
return arg
|
||||
|
||||
def _downloading_failed(self, err):
|
||||
if err.check(DownloadCanceledError):
|
||||
# TODO: (wish-list) it seems silly to close the connection over this, and it shouldn't
|
||||
# TODO: always be this way. it's done this way now because the client has no other way
|
||||
# TODO: of telling the server it wants the download to stop. It would be great if the
|
||||
# TODO: protocol had such a mechanism.
|
||||
logging.info("Closing the connection to %s because the download of blob %s was canceled",
|
||||
str(self.peer), str(self._blob_download_request.blob))
|
||||
#self.transport.loseConnection()
|
||||
#return True
|
||||
return err
|
||||
|
||||
######### IRateLimited #########
|
||||
|
||||
def throttle_upload(self):
|
||||
pass
|
||||
|
||||
def unthrottle_upload(self):
|
||||
pass
|
||||
|
||||
def throttle_download(self):
|
||||
self.transport.pauseProducing()
|
||||
|
||||
def unthrottle_download(self):
|
||||
self.transport.resumeProducing()
|
||||
|
||||
|
||||
class ClientProtocolFactory(ClientFactory):
|
||||
protocol = ClientProtocol
|
||||
|
||||
def __init__(self, peer, rate_limiter, connection_manager):
|
||||
self.peer = peer
|
||||
self.rate_limiter = rate_limiter
|
||||
self.connection_manager = connection_manager
|
||||
self.p = None
|
||||
|
||||
def clientConnectionFailed(self, connector, reason):
|
||||
self.peer.report_down()
|
||||
self.connection_manager.protocol_disconnected(self.peer, connector)
|
||||
|
||||
def buildProtocol(self, addr):
|
||||
p = self.protocol()
|
||||
p.factory = self
|
||||
self.p = p
|
||||
return p
|
27
lbrynet/core/client/ClientRequest.py
Normal file
|
@@ -0,0 +1,27 @@
|
|||
from lbrynet.conf import BLOB_SIZE
|
||||
|
||||
|
||||
class ClientRequest(object):
|
||||
def __init__(self, request_dict, response_identifier=None):
|
||||
self.request_dict = request_dict
|
||||
self.response_identifier = response_identifier
|
||||
|
||||
|
||||
class ClientPaidRequest(ClientRequest):
|
||||
def __init__(self, request_dict, response_identifier, max_pay_units):
|
||||
ClientRequest.__init__(self, request_dict, response_identifier)
|
||||
self.max_pay_units = max_pay_units
|
||||
|
||||
|
||||
class ClientBlobRequest(ClientPaidRequest):
|
||||
def __init__(self, request_dict, response_identifier, write_func, finished_deferred,
|
||||
cancel_func, blob):
|
||||
if blob.length is None:
|
||||
max_pay_units = BLOB_SIZE
|
||||
else:
|
||||
max_pay_units = blob.length
|
||||
ClientPaidRequest.__init__(self, request_dict, response_identifier, max_pay_units)
|
||||
self.write = write_func
|
||||
self.finished_deferred = finished_deferred
|
||||
self.cancel = cancel_func
|
||||
self.blob = blob
|
177
lbrynet/core/client/ConnectionManager.py
Normal file
|
@@ -0,0 +1,177 @@
|
|||
import logging
|
||||
from twisted.internet import defer
|
||||
from zope.interface import implements
|
||||
from lbrynet import interfaces
|
||||
from lbrynet.conf import MAX_CONNECTIONS_PER_STREAM
|
||||
from lbrynet.core.client.ClientProtocol import ClientProtocolFactory
|
||||
from lbrynet.core.Error import InsufficientFundsError
|
||||
|
||||
|
||||
class ConnectionManager(object):
|
||||
implements(interfaces.IConnectionManager)
|
||||
|
||||
def __init__(self, downloader, rate_limiter, primary_request_creators, secondary_request_creators):
|
||||
self.downloader = downloader
|
||||
self.rate_limiter = rate_limiter
|
||||
self.primary_request_creators = primary_request_creators
|
||||
self.secondary_request_creators = secondary_request_creators
|
||||
self.peer_connections = {} # {Peer: {'connection': connection,
|
||||
# 'request_creators': [IRequestCreator if using this connection],
# 'factory': ClientProtocolFactory}}
|
||||
self.connections_closing = {} # {Peer: deferred (fired when the connection is closed)}
|
||||
self.next_manage_call = None
|
||||
|
||||
def start(self):
|
||||
from twisted.internet import reactor
|
||||
|
||||
if self.next_manage_call is not None and self.next_manage_call.active() is True:
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = reactor.callLater(0, self._manage)
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
if self.next_manage_call is not None and self.next_manage_call.active() is True:
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
closing_deferreds = []
|
||||
for peer in self.peer_connections.keys():
|
||||
|
||||
def close_connection(p):
|
||||
logging.info("Abruptly closing a connection to %s due to downloading being paused",
|
||||
str(p))
|
||||
|
||||
if self.peer_connections[p]['factory'].p is not None:
|
||||
d = self.peer_connections[p]['factory'].p.cancel_requests()
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
|
||||
def disconnect_peer():
|
||||
self.peer_connections[p]['connection'].disconnect()
|
||||
if p in self.peer_connections:
|
||||
del self.peer_connections[p]
|
||||
d = defer.Deferred()
|
||||
self.connections_closing[p] = d
|
||||
return d
|
||||
|
||||
d.addBoth(lambda _: disconnect_peer())
|
||||
return d
|
||||
|
||||
closing_deferreds.append(close_connection(peer))
|
||||
return defer.DeferredList(closing_deferreds)
|
||||
|
||||
def get_next_request(self, peer, protocol):
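# Give every primary request creator a chance to queue a request on this
# protocol; if at least one did, let the secondary creators (e.g. the wallet's
# info exchanger) piggyback on the same message. The returned deferred fires
# with True if anything was sent and False if the connection should be closed.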
|
||||
|
||||
logging.debug("Trying to get the next request for peer %s", str(peer))
|
||||
|
||||
if not peer in self.peer_connections:
|
||||
logging.debug("The peer has already been told to shut down.")
|
||||
return defer.succeed(False)
|
||||
|
||||
def handle_error(err):
|
||||
if err.check(InsufficientFundsError):
|
||||
self.downloader.insufficient_funds()
|
||||
return False
|
||||
else:
|
||||
return err
|
||||
|
||||
def check_if_request_sent(request_sent, request_creator):
|
||||
if request_sent is False:
|
||||
if request_creator in self.peer_connections[peer]['request_creators']:
|
||||
self.peer_connections[peer]['request_creators'].remove(request_creator)
|
||||
else:
|
||||
if not request_creator in self.peer_connections[peer]['request_creators']:
|
||||
self.peer_connections[peer]['request_creators'].append(request_creator)
|
||||
return request_sent
|
||||
|
||||
def check_requests(requests):
|
||||
have_request = True in [r[1] for r in requests if r[0] is True]
|
||||
return have_request
|
||||
|
||||
def get_secondary_requests_if_necessary(have_request):
|
||||
if have_request is True:
|
||||
ds = []
|
||||
for s_r_c in self.secondary_request_creators:
|
||||
d = s_r_c.send_next_request(peer, protocol)
|
||||
ds.append(d)
|
||||
dl = defer.DeferredList(ds)
|
||||
else:
|
||||
dl = defer.succeed(None)
|
||||
dl.addCallback(lambda _: have_request)
|
||||
return dl
|
||||
|
||||
ds = []
|
||||
|
||||
for p_r_c in self.primary_request_creators:
|
||||
d = p_r_c.send_next_request(peer, protocol)
|
||||
d.addErrback(handle_error)
|
||||
d.addCallback(check_if_request_sent, p_r_c)
|
||||
ds.append(d)
|
||||
|
||||
dl = defer.DeferredList(ds, fireOnOneErrback=True)
|
||||
dl.addCallback(check_requests)
|
||||
dl.addCallback(get_secondary_requests_if_necessary)
|
||||
return dl
|
||||
|
||||
def protocol_disconnected(self, peer, protocol):
|
||||
if peer in self.peer_connections:
|
||||
del self.peer_connections[peer]
|
||||
if peer in self.connections_closing:
|
||||
d = self.connections_closing[peer]
|
||||
del self.connections_closing[peer]
|
||||
d.callback(True)
|
||||
|
||||
def _rank_request_creator_connections(self):
|
||||
"""
|
||||
@return: an ordered list of our request creators, ranked according to which has the least number of
|
||||
connections open that it likes
|
||||
"""
|
||||
def count_peers(request_creator):
|
||||
return len([p for p in self.peer_connections.itervalues() if request_creator in p['request_creators']])
|
||||
|
||||
return sorted(self.primary_request_creators, key=count_peers)
|
||||
|
||||
def _connect_to_peer(self, peer):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if peer is not None:
|
||||
logging.debug("Trying to connect to %s", str(peer))
|
||||
factory = ClientProtocolFactory(peer, self.rate_limiter, self)
|
||||
connection = reactor.connectTCP(peer.host, peer.port, factory)
|
||||
self.peer_connections[peer] = {'connection': connection,
|
||||
'request_creators': self.primary_request_creators[:],
|
||||
'factory': factory}
|
||||
|
||||
def _manage(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
def get_new_peers(request_creators):
|
||||
logging.debug("Trying to get a new peer to connect to")
|
||||
if len(request_creators) > 0:
|
||||
logging.debug("Got a creator to check: %s", str(request_creators[0]))
|
||||
d = request_creators[0].get_new_peers()
|
||||
d.addCallback(lambda h: h if h is not None else get_new_peers(request_creators[1:]))
|
||||
return d
|
||||
else:
|
||||
return defer.succeed(None)
|
||||
|
||||
def pick_best_peer(peers):
|
||||
# TODO: Eventually rank them based on past performance/reputation. For now
|
||||
# TODO: just pick the first to which we don't have an open connection
|
||||
logging.debug("Got a list of peers to choose from: %s", str(peers))
|
||||
if peers is None:
|
||||
return None
|
||||
for peer in peers:
|
||||
if not peer in self.peer_connections:
|
||||
logging.debug("Got a good peer. Returning peer %s", str(peer))
|
||||
return peer
|
||||
logging.debug("Couldn't find a good peer to connect to")
|
||||
return None
|
||||
|
||||
if len(self.peer_connections) < MAX_CONNECTIONS_PER_STREAM:
|
||||
ordered_request_creators = self._rank_request_creator_connections()
|
||||
d = get_new_peers(ordered_request_creators)
|
||||
d.addCallback(pick_best_peer)
|
||||
d.addCallback(self._connect_to_peer)
|
||||
|
||||
self.next_manage_call = reactor.callLater(1, self._manage)
|
47
lbrynet/core/client/DHTPeerFinder.py
Normal file
|
@@ -0,0 +1,47 @@
|
|||
import binascii
|
||||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IPeerFinder
|
||||
|
||||
|
||||
class DHTPeerFinder(object):
|
||||
"""This class finds peers which have announced to the DHT that they have certain blobs"""
|
||||
implements(IPeerFinder)
|
||||
|
||||
def __init__(self, dht_node, peer_manager):
|
||||
self.dht_node = dht_node
|
||||
self.peer_manager = peer_manager
|
||||
self.peers = []
|
||||
self.next_manage_call = None
|
||||
|
||||
def run_manage_loop(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self._manage_peers()
|
||||
self.next_manage_call = reactor.callLater(60, self.run_manage_loop)
|
||||
|
||||
def stop(self):
|
||||
if self.next_manage_call is not None and self.next_manage_call.active():
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
|
||||
def _manage_peers(self):
|
||||
pass
|
||||
|
||||
def find_peers_for_blob(self, blob_hash):
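# Convert the hex-encoded blob hash to its binary form, ask the DHT node for
# (host, port) contacts that have announced the blob, and wrap each contact in
# a Peer object, keeping only peers that are currently marked available.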
|
||||
bin_hash = binascii.unhexlify(blob_hash)
|
||||
|
||||
def filter_peers(peer_list):
|
||||
good_peers = []
|
||||
for host, port in peer_list:
|
||||
peer = self.peer_manager.get_peer(host, port)
|
||||
if peer.is_available() is True:
|
||||
good_peers.append(peer)
|
||||
return good_peers
|
||||
|
||||
d = self.dht_node.getPeersForBlob(bin_hash)
|
||||
d.addCallback(filter_peers)
|
||||
return d
|
||||
|
||||
def get_most_popular_hashes(self, num_to_return):
|
||||
return self.dht_node.get_most_popular_hashes(num_to_return)
|
115
lbrynet/core/client/DownloadManager.py
Normal file
|
@@ -0,0 +1,115 @@
|
|||
import logging
|
||||
from twisted.internet import defer
|
||||
from twisted.python import failure
|
||||
from zope.interface import implements
|
||||
from lbrynet import interfaces
|
||||
|
||||
|
||||
class DownloadManager(object):
|
||||
implements(interfaces.IDownloadManager)
|
||||
|
||||
def __init__(self, blob_manager, upload_allowed):
|
||||
self.blob_manager = blob_manager
|
||||
self.upload_allowed = upload_allowed
|
||||
self.blob_requester = None
|
||||
self.blob_info_finder = None
|
||||
self.progress_manager = None
|
||||
self.blob_handler = None
|
||||
self.connection_manager = None
|
||||
|
||||
self.blobs = {}
|
||||
self.blob_infos = {}
|
||||
|
||||
######### IDownloadManager #########
|
||||
|
||||
def start_downloading(self):
|
||||
d = self.blob_info_finder.get_initial_blobs()
|
||||
logging.debug("Requested the initial blobs from the info finder")
|
||||
d.addCallback(self.add_blobs_to_download)
|
||||
d.addCallback(lambda _: self.resume_downloading())
|
||||
return d
|
||||
|
||||
def resume_downloading(self):
|
||||
|
||||
def check_start(result, manager):
|
||||
if isinstance(result, failure.Failure):
|
||||
logging.error("Failed to start the %s: %s", manager, result.getErrorMessage())
|
||||
return False
|
||||
return True
|
||||
|
||||
d1 = self.progress_manager.start()
|
||||
d1.addBoth(check_start, "progress manager")
|
||||
d2 = self.connection_manager.start()
|
||||
d2.addBoth(check_start, "connection manager")
|
||||
dl = defer.DeferredList([d1, d2])
|
||||
dl.addCallback(lambda xs: False not in xs)
|
||||
return dl
|
||||
|
||||
def stop_downloading(self):
|
||||
|
||||
def check_stop(result, manager):
|
||||
if isinstance(result, failure.Failure):
|
||||
logging.error("Failed to stop the %s: %s", manager. result.getErrorMessage())
|
||||
return False
|
||||
return True
|
||||
|
||||
d1 = self.progress_manager.stop()
|
||||
d1.addBoth(check_stop, "progress manager")
|
||||
d2 = self.connection_manager.stop()
|
||||
d2.addBoth(check_stop, "connection manager")
|
||||
dl = defer.DeferredList([d1, d2])
|
||||
dl.addCallback(lambda xs: False not in xs)
|
||||
return dl
|
||||
|
||||
def add_blobs_to_download(self, blob_infos):
|
||||
|
||||
logging.debug("Adding %s to blobs", str(blob_infos))
|
||||
|
||||
def add_blob_to_list(blob, blob_num):
|
||||
self.blobs[blob_num] = blob
|
||||
logging.info("Added blob (hash: %s, number %s) to the list", str(blob.blob_hash), str(blob_num))
|
||||
|
||||
def error_during_add(err):
|
||||
logging.warning("An error occurred adding the blob to blobs. Error:%s", err.getErrorMessage())
|
||||
return err
|
||||
|
||||
ds = []
|
||||
for blob_info in blob_infos:
|
||||
if not blob_info.blob_num in self.blobs:
|
||||
self.blob_infos[blob_info.blob_num] = blob_info
|
||||
logging.debug("Trying to get the blob associated with blob hash %s", str(blob_info.blob_hash))
|
||||
d = self.blob_manager.get_blob(blob_info.blob_hash, self.upload_allowed, blob_info.length)
|
||||
d.addCallback(add_blob_to_list, blob_info.blob_num)
|
||||
d.addErrback(error_during_add)
|
||||
ds.append(d)
|
||||
|
||||
dl = defer.DeferredList(ds)
|
||||
return dl
|
||||
|
||||
def stream_position(self):
|
||||
return self.progress_manager.stream_position()
|
||||
|
||||
def needed_blobs(self):
|
||||
return self.progress_manager.needed_blobs()
|
||||
|
||||
def final_blob_num(self):
|
||||
return self.blob_info_finder.final_blob_num()
|
||||
|
||||
def handle_blob(self, blob_num):
|
||||
return self.blob_handler.handle_blob(self.blobs[blob_num], self.blob_infos[blob_num])
|
||||
|
||||
def calculate_total_bytes(self):
|
||||
return sum([bi.length for bi in self.blob_infos.itervalues()])
|
||||
|
||||
def calculate_bytes_left_to_output(self):
|
||||
if not self.blobs:
|
||||
return self.calculate_total_bytes()
|
||||
else:
|
||||
to_be_outputted = [b for n, b in self.blobs.iteritems() if n >= self.progress_manager.last_blob_outputted]
|
||||
return sum([b.length for b in to_be_outputted if b.length is not None])
|
||||
|
||||
def calculate_bytes_left_to_download(self):
|
||||
if not self.blobs:
|
||||
return self.calculate_total_bytes()
|
||||
else:
|
||||
return sum([b.length for b in self.needed_blobs() if b.length is not None])
|
133
lbrynet/core/client/StandaloneBlobDownloader.py
Normal file
|
@@ -0,0 +1,133 @@
|
|||
import logging
|
||||
from zope.interface import implements
|
||||
from lbrynet import interfaces
|
||||
from lbrynet.core.BlobInfo import BlobInfo
|
||||
from lbrynet.core.client.BlobRequester import BlobRequester
|
||||
from lbrynet.core.client.ConnectionManager import ConnectionManager
|
||||
from lbrynet.core.client.DownloadManager import DownloadManager
|
||||
from twisted.internet import defer
|
||||
|
||||
|
||||
class SingleBlobMetadataHandler(object):
|
||||
implements(interfaces.IMetadataHandler)
|
||||
|
||||
def __init__(self, blob_hash, download_manager):
|
||||
self.blob_hash = blob_hash
|
||||
self.download_manager = download_manager
|
||||
|
||||
######## IMetadataHandler #########
|
||||
|
||||
def get_initial_blobs(self):
|
||||
logging.debug("Returning the blob info")
|
||||
return defer.succeed([BlobInfo(self.blob_hash, 0, None)])
|
||||
|
||||
def final_blob_num(self):
|
||||
return 0
|
||||
|
||||
|
||||
class SingleProgressManager(object):
|
||||
def __init__(self, finished_callback, download_manager):
|
||||
self.finished_callback = finished_callback
|
||||
self.finished = False
|
||||
self.download_manager = download_manager
|
||||
self._next_check_if_finished = None
|
||||
|
||||
def start(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
assert self._next_check_if_finished is None
|
||||
self._next_check_if_finished = reactor.callLater(0, self._check_if_finished)
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
if self._next_check_if_finished is not None:
|
||||
self._next_check_if_finished.cancel()
|
||||
self._next_check_if_finished = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def _check_if_finished(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self._next_check_if_finished = None
|
||||
if self.finished is False:
|
||||
if self.stream_position() == 1:
|
||||
self.blob_downloaded(self.download_manager.blobs[0], 0)
|
||||
else:
|
||||
self._next_check_if_finished = reactor.callLater(1, self._check_if_finished)
|
||||
|
||||
def stream_position(self):
|
||||
blobs = self.download_manager.blobs
|
||||
if blobs and blobs[0].is_validated():
|
||||
return 1
|
||||
return 0
|
||||
|
||||
def needed_blobs(self):
|
||||
blobs = self.download_manager.blobs
|
||||
assert len(blobs) == 1
|
||||
return [b for b in blobs.itervalues() if not b.is_validated()]
|
||||
|
||||
def blob_downloaded(self, blob, blob_num):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
logging.debug("The blob %s has been downloaded. Calling the finished callback", str(blob))
|
||||
if self.finished is False:
|
||||
self.finished = True
|
||||
reactor.callLater(0, self.finished_callback, blob)
|
||||
|
||||
|
||||
class DummyBlobHandler(object):
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
def handle_blob(self, blob, blob_info):
|
||||
pass
|
||||
|
||||
|
||||
class StandaloneBlobDownloader(object):
|
||||
|
||||
def __init__(self, blob_hash, blob_manager, peer_finder, rate_limiter, payment_rate_manager, wallet):
|
||||
self.blob_hash = blob_hash
|
||||
self.blob_manager = blob_manager
|
||||
self.peer_finder = peer_finder
|
||||
self.rate_limiter = rate_limiter
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.wallet = wallet
|
||||
self.download_manager = None
|
||||
self.finished_deferred = None
|
||||
|
||||
def download(self):
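# Wires up a one-blob download: a DownloadManager whose only primary request
# creator is a BlobRequester, with the wallet's info exchanger as the secondary
# creator, a single-blob metadata handler, and a progress manager that fires
# finished_deferred with the downloaded HashBlob.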
|
||||
def cancel_download(d):
|
||||
self.stop()
|
||||
|
||||
self.finished_deferred = defer.Deferred(canceller=cancel_download)
|
||||
self.download_manager = DownloadManager(self.blob_manager, True)
|
||||
self.download_manager.blob_requester = BlobRequester(self.blob_manager, self.peer_finder,
|
||||
self.payment_rate_manager, self.wallet,
|
||||
self.download_manager)
|
||||
self.download_manager.blob_info_finder = SingleBlobMetadataHandler(self.blob_hash,
|
||||
self.download_manager)
|
||||
self.download_manager.progress_manager = SingleProgressManager(self._blob_downloaded,
|
||||
self.download_manager)
|
||||
self.download_manager.blob_handler = DummyBlobHandler()
|
||||
self.download_manager.wallet_info_exchanger = self.wallet.get_info_exchanger()
|
||||
self.download_manager.connection_manager = ConnectionManager(
|
||||
self, self.rate_limiter,
|
||||
[self.download_manager.blob_requester],
|
||||
[self.download_manager.wallet_info_exchanger]
|
||||
)
|
||||
d = self.download_manager.start_downloading()
|
||||
d.addCallback(lambda _: self.finished_deferred)
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
return self.download_manager.stop_downloading()
|
||||
|
||||
def _blob_downloaded(self, blob):
|
||||
self.stop()
|
||||
self.finished_deferred.callback(blob)
|
||||
|
||||
def insufficient_funds(self):
|
||||
return self.stop()
|
141
lbrynet/core/client/StreamProgressManager.py
Normal file
|
@@ -0,0 +1,141 @@
|
|||
import logging
|
||||
from lbrynet.interfaces import IProgressManager
|
||||
from twisted.internet import defer
|
||||
from zope.interface import implements
|
||||
|
||||
|
||||
class StreamProgressManager(object):
|
||||
implements(IProgressManager)
|
||||
|
||||
def __init__(self, finished_callback, blob_manager, download_manager, delete_blob_after_finished=False):
|
||||
self.finished_callback = finished_callback
|
||||
self.blob_manager = blob_manager
|
||||
self.delete_blob_after_finished = delete_blob_after_finished
|
||||
self.download_manager = download_manager
|
||||
self.provided_blob_nums = []
|
||||
self.last_blob_outputted = -1
|
||||
self.stopped = True
|
||||
self._next_try_to_output_call = None
|
||||
self.outputting_d = None
|
||||
|
||||
######### IProgressManager #########
|
||||
|
||||
def start(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.stopped = False
|
||||
self._next_try_to_output_call = reactor.callLater(0, self._try_to_output)
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
self.stopped = True
|
||||
if self._next_try_to_output_call is not None and self._next_try_to_output_call.active():
|
||||
self._next_try_to_output_call.cancel()
|
||||
self._next_try_to_output_call = None
|
||||
return self._stop_outputting()
|
||||
|
||||
def blob_downloaded(self, blob, blob_num):
|
||||
if self.outputting_d is None:
|
||||
self._output_loop()
|
||||
|
||||
######### internal #########
|
||||
|
||||
def _finished_outputting(self):
|
||||
self.finished_callback(True)
|
||||
|
||||
def _try_to_output(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self._next_try_to_output_call = reactor.callLater(1, self._try_to_output)
|
||||
if self.outputting_d is None:
|
||||
self._output_loop()
|
||||
|
||||
def _output_loop(self):
|
||||
pass
|
||||
|
||||
def _stop_outputting(self):
|
||||
if self.outputting_d is not None:
|
||||
return self.outputting_d
|
||||
return defer.succeed(None)
|
||||
|
||||
def _finished_with_blob(self, blob_num):
|
||||
logging.debug("In _finished_with_blob, blob_num = %s", str(blob_num))
|
||||
if self.delete_blob_after_finished is True:
|
||||
logging.debug("delete_blob_after_finished is True")
|
||||
blobs = self.download_manager.blobs
|
||||
if blob_num in blobs:
|
||||
logging.debug("Telling the blob manager, %s, to delete blob %s", str(self.blob_manager),
|
||||
blobs[blob_num].blob_hash)
|
||||
self.blob_manager.delete_blobs([blobs[blob_num].blob_hash])
|
||||
else:
|
||||
logging.debug("Blob number %s was not in blobs", str(blob_num))
|
||||
else:
|
||||
logging.debug("delete_blob_after_finished is False")
|
||||
|
||||
|
||||
class FullStreamProgressManager(StreamProgressManager):
|
||||
def __init__(self, finished_callback, blob_manager, download_manager, delete_blob_after_finished=False):
|
||||
StreamProgressManager.__init__(self, finished_callback, blob_manager, download_manager,
|
||||
delete_blob_after_finished)
|
||||
self.outputting_d = None
|
||||
|
||||
######### IProgressManager #########
|
||||
|
||||
def stream_position(self):
|
||||
blobs = self.download_manager.blobs
|
||||
if not blobs:
|
||||
return 0
|
||||
else:
|
||||
for i in xrange(max(blobs.iterkeys())):
|
||||
if not i in blobs or (not blobs[i].is_validated() and not i in self.provided_blob_nums):
|
||||
return i
|
||||
return max(blobs.iterkeys()) + 1
|
||||
|
||||
def needed_blobs(self):
|
||||
blobs = self.download_manager.blobs
|
||||
return [b for n, b in blobs.iteritems() if not b.is_validated() and not n in self.provided_blob_nums]
|
||||
|
||||
######### internal #########
|
||||
|
||||
def _output_loop(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if self.stopped:
|
||||
if self.outputting_d is not None:
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
||||
return
|
||||
|
||||
if self.outputting_d is None:
|
||||
self.outputting_d = defer.Deferred()
|
||||
blobs = self.download_manager.blobs
|
||||
|
||||
def finished_outputting_blob():
|
||||
self.last_blob_outputted += 1
|
||||
final_blob_num = self.download_manager.final_blob_num()
|
||||
if final_blob_num is not None and final_blob_num == self.last_blob_outputted:
|
||||
self._finished_outputting()
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
||||
else:
|
||||
reactor.callLater(0, self._output_loop)
|
||||
|
||||
current_blob_num = self.last_blob_outputted + 1
|
||||
|
||||
if current_blob_num in blobs and blobs[current_blob_num].is_validated():
|
||||
logging.info("Outputting blob %s", str(self.last_blob_outputted + 1))
|
||||
self.provided_blob_nums.append(self.last_blob_outputted + 1)
|
||||
d = self.download_manager.handle_blob(self.last_blob_outputted + 1)
|
||||
d.addCallback(lambda _: finished_outputting_blob())
|
||||
d.addCallback(lambda _: self._finished_with_blob(current_blob_num))
|
||||
|
||||
def log_error(err):
|
||||
logging.warning("Error occurred in the output loop. Error: %s", err.getErrorMessage())
|
||||
|
||||
d.addErrback(log_error)
|
||||
else:
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
0
lbrynet/core/client/__init__.py
Normal file
18
lbrynet/core/cryptoutils.py
Normal file
|
@@ -0,0 +1,18 @@
|
|||
from Crypto.Hash import SHA384
|
||||
import seccure
|
||||
|
||||
|
||||
def get_lbry_hash_obj():
|
||||
return SHA384.new()
|
||||
|
||||
|
||||
def get_pub_key(pass_phrase):
|
||||
return str(seccure.passphrase_to_pubkey(pass_phrase, curve="brainpoolp384r1"))
|
||||
|
||||
|
||||
def sign_with_pass_phrase(m, pass_phrase):
|
||||
return seccure.sign(m, pass_phrase, curve="brainpoolp384r1")
|
||||
|
||||
|
||||
def verify_signature(m, signature, pub_key):
|
||||
return seccure.verify(m, signature, pub_key, curve="brainpoolp384r1")
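# Illustrative round-trip using the helpers above; the pass phrase and message
# are made-up example values:
#
#     pub = get_pub_key("example pass phrase")
#     sig = sign_with_pass_phrase("some message", "example pass phrase")
#     verify_signature("some message", sig, pub)  # expected to return True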
|
55
lbrynet/core/server/BlobAvailabilityHandler.py
Normal file
|
@@ -0,0 +1,55 @@
|
|||
import logging
|
||||
from twisted.internet import defer
|
||||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IQueryHandlerFactory, IQueryHandler
|
||||
|
||||
|
||||
class BlobAvailabilityHandlerFactory(object):
|
||||
implements(IQueryHandlerFactory)
|
||||
|
||||
def __init__(self, blob_manager):
|
||||
self.blob_manager = blob_manager
|
||||
|
||||
######### IQueryHandlerFactory #########
|
||||
|
||||
def build_query_handler(self):
|
||||
q_h = BlobAvailabilityHandler(self.blob_manager)
|
||||
return q_h
|
||||
|
||||
def get_primary_query_identifier(self):
|
||||
return 'requested_blobs'
|
||||
|
||||
def get_description(self):
|
||||
return "Blob Availability - blobs that are available to be uploaded"
|
||||
|
||||
|
||||
class BlobAvailabilityHandler(object):
|
||||
implements(IQueryHandler)
|
||||
|
||||
def __init__(self, blob_manager):
|
||||
self.blob_manager = blob_manager
|
||||
self.query_identifiers = ['requested_blobs']
|
||||
|
||||
######### IQueryHandler #########
|
||||
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
request_handler.register_query_handler(self, self.query_identifiers)
|
||||
|
||||
def handle_queries(self, queries):
|
||||
if self.query_identifiers[0] in queries:
|
||||
logging.debug("Received the client's list of requested blobs")
|
||||
d = self._get_available_blobs(queries[self.query_identifiers[0]])
|
||||
|
||||
def set_field(available_blobs):
|
||||
return {'available_blobs': available_blobs}
|
||||
|
||||
d.addCallback(set_field)
|
||||
return d
|
||||
return defer.succeed({})
|
||||
|
||||
######### internal #########
|
||||
|
||||
def _get_available_blobs(self, requested_blobs):
|
||||
d = self.blob_manager.completed_blobs(requested_blobs)
|
||||
|
||||
return d
|
156
lbrynet/core/server/BlobRequestHandler.py
Normal file
|
@@ -0,0 +1,156 @@
|
|||
import logging
|
||||
from twisted.internet import defer
|
||||
from twisted.protocols.basic import FileSender
|
||||
from twisted.python.failure import Failure
|
||||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IQueryHandlerFactory, IQueryHandler, IBlobSender
|
||||
|
||||
|
||||
class BlobRequestHandlerFactory(object):
|
||||
implements(IQueryHandlerFactory)
|
||||
|
||||
def __init__(self, blob_manager, wallet, payment_rate_manager):
|
||||
self.blob_manager = blob_manager
|
||||
self.wallet = wallet
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
|
||||
######### IQueryHandlerFactory #########
|
||||
|
||||
def build_query_handler(self):
|
||||
q_h = BlobRequestHandler(self.blob_manager, self.wallet, self.payment_rate_manager)
|
||||
return q_h
|
||||
|
||||
def get_primary_query_identifier(self):
|
||||
return 'requested_blob'
|
||||
|
||||
def get_description(self):
|
||||
return "Blob Uploader - uploads blobs"
|
||||
|
||||
|
||||
class BlobRequestHandler(object):
|
||||
implements(IQueryHandler, IBlobSender)
|
||||
|
||||
def __init__(self, blob_manager, wallet, payment_rate_manager):
|
||||
self.blob_manager = blob_manager
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.wallet = wallet
|
||||
self.query_identifiers = ['blob_data_payment_rate', 'requested_blob']
|
||||
self.peer = None
|
||||
self.blob_data_payment_rate = None
|
||||
self.read_handle = None
|
||||
self.currently_uploading = None
|
||||
self.file_sender = None
|
||||
self.blob_bytes_uploaded = 0
|
||||
|
||||
######### IQueryHandler #########
|
||||
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
self.peer = peer
|
||||
request_handler.register_query_handler(self, self.query_identifiers)
|
||||
request_handler.register_blob_sender(self)
|
||||
|
||||
def handle_queries(self, queries):
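# A single query dict can carry both a payment rate proposal and a blob request.
# The rate is accepted or rejected first; a blob is only opened for reading if a
# rate has already been set and the blob is validated, otherwise the response
# carries an 'error' field ("RATE_UNSET" or "BLOB_UNAVAILABLE").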
|
||||
response = {}
|
||||
if self.query_identifiers[0] in queries:
|
||||
if not self.handle_blob_data_payment_rate(queries[self.query_identifiers[0]]):
|
||||
response['blob_data_payment_rate'] = "RATE_TOO_LOW"
|
||||
else:
|
||||
response['blob_data_payment_rate'] = 'RATE_ACCEPTED'
|
||||
|
||||
if self.query_identifiers[1] in queries:
|
||||
logging.debug("Received the client's request to send a blob")
|
||||
response_fields = {}
|
||||
response['incoming_blob'] = response_fields
|
||||
|
||||
if self.blob_data_payment_rate is None:
|
||||
response_fields['error'] = "RATE_UNSET"
|
||||
return defer.succeed(response)
|
||||
else:
|
||||
|
||||
d = self.blob_manager.get_blob(queries[self.query_identifiers[1]], True)
|
||||
|
||||
def open_blob_for_reading(blob):
|
||||
if blob.is_validated():
|
||||
read_handle = blob.open_for_reading()
|
||||
if read_handle is not None:
|
||||
self.currently_uploading = blob
|
||||
self.read_handle = read_handle
|
||||
logging.debug("Sending %s to client", str(blob))
|
||||
response_fields['blob_hash'] = blob.blob_hash
|
||||
response_fields['length'] = blob.length
|
||||
return response
|
||||
logging.debug("We can not send %s", str(blob))
|
||||
response_fields['error'] = "BLOB_UNAVAILABLE"
|
||||
return response
|
||||
|
||||
d.addCallback(open_blob_for_reading)
|
||||
|
||||
return d
|
||||
else:
|
||||
return defer.succeed(response)
|
||||
|
||||
######### IBlobSender #########
|
||||
|
||||
def send_blob_if_requested(self, consumer):
|
||||
if self.currently_uploading is not None:
|
||||
return self.send_file(consumer)
|
||||
return defer.succeed(True)
|
||||
|
||||
def cancel_send(self, err):
|
||||
if self.currently_uploading is not None:
|
||||
self.currently_uploading.close_read_handle(self.read_handle)
|
||||
self.read_handle = None
|
||||
self.currently_uploading = None
|
||||
return err
|
||||
|
||||
######### internal #########
|
||||
|
||||
def handle_blob_data_payment_rate(self, requested_payment_rate):
|
||||
if not self.payment_rate_manager.accept_rate_blob_data(self.peer, requested_payment_rate):
|
||||
return False
|
||||
else:
|
||||
self.blob_data_payment_rate = requested_payment_rate
|
||||
return True
|
||||
|
||||
def send_file(self, consumer):
|
||||
|
||||
def _send_file():
|
||||
inner_d = start_transfer()
|
||||
# TODO: if the transfer fails, check if it's because the connection was cut off.
|
||||
# TODO: if so, perhaps bill the client
|
||||
inner_d.addCallback(lambda _: set_expected_payment())
|
||||
inner_d.addBoth(set_not_uploading)
|
||||
return inner_d
|
||||
|
||||
def count_bytes(data):
|
||||
self.blob_bytes_uploaded += len(data)
|
||||
self.peer.update_stats('blob_bytes_uploaded', len(data))
|
||||
return data
|
||||
|
||||
def start_transfer():
|
||||
self.file_sender = FileSender()
|
||||
logging.info("Starting the file upload")
|
||||
assert self.read_handle is not None, "self.read_handle was None when trying to start the transfer"
|
||||
d = self.file_sender.beginFileTransfer(self.read_handle, consumer, count_bytes)
|
||||
return d
|
||||
|
||||
def set_expected_payment():
|
||||
logging.info("Setting expected payment")
|
||||
if self.blob_bytes_uploaded != 0 and self.blob_data_payment_rate is not None:
|
||||
self.wallet.add_expected_payment(self.peer,
|
||||
self.currently_uploading.length * 1.0 *
|
||||
self.blob_data_payment_rate / 2**20)
|
||||
self.blob_bytes_uploaded = 0
|
||||
self.peer.update_stats('blobs_uploaded', 1)
|
||||
return None
|
||||
|
||||
def set_not_uploading(reason=None):
|
||||
if self.currently_uploading is not None:
|
||||
self.currently_uploading.close_read_handle(self.read_handle)
|
||||
self.read_handle = None
|
||||
self.currently_uploading = None
|
||||
self.file_sender = None
|
||||
if reason is not None and isinstance(reason, Failure):
|
||||
logging.warning("Upload has failed. Reason: %s", reason.getErrorMessage())
|
||||
|
||||
return _send_file()
|
81
lbrynet/core/server/DHTHashAnnouncer.py
Normal file
|
@@ -0,0 +1,81 @@
|
|||
import binascii
|
||||
from twisted.internet import defer, task, reactor
|
||||
import collections
|
||||
|
||||
|
||||
class DHTHashAnnouncer(object):
|
||||
"""This class announces to the DHT that this peer has certain blobs"""
|
||||
def __init__(self, dht_node, peer_port):
|
||||
self.dht_node = dht_node
|
||||
self.peer_port = peer_port
|
||||
self.suppliers = []
|
||||
self.next_manage_call = None
|
||||
self.hash_queue = collections.deque()
|
||||
self._concurrent_announcers = 0
|
||||
|
||||
def run_manage_loop(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if self.peer_port is not None:
|
||||
self._announce_available_hashes()
|
||||
self.next_manage_call = reactor.callLater(60, self.run_manage_loop)
|
||||
|
||||
def stop(self):
|
||||
if self.next_manage_call is not None:
|
||||
self.next_manage_call.cancel()
|
||||
self.next_manage_call = None
|
||||
|
||||
def add_supplier(self, supplier):
|
||||
self.suppliers.append(supplier)
|
||||
|
||||
def immediate_announce(self, blob_hashes):
|
||||
if self.peer_port is not None:
|
||||
return self._announce_hashes(blob_hashes)
|
||||
else:
|
||||
return defer.succeed(False)
|
||||
|
||||
def _announce_available_hashes(self):
|
||||
ds = []
|
||||
for supplier in self.suppliers:
|
||||
d = supplier.hashes_to_announce()
|
||||
d.addCallback(self._announce_hashes)
|
||||
ds.append(d)
|
||||
dl = defer.DeferredList(ds)
|
||||
return dl
|
||||
|
||||
def _announce_hashes(self, hashes):
|
||||
|
||||
ds = []
|
||||
|
||||
for h in hashes:
|
||||
announce_deferred = defer.Deferred()
|
||||
ds.append(announce_deferred)
|
||||
self.hash_queue.append((h, announce_deferred))
|
||||
|
||||
def announce():
|
||||
if len(self.hash_queue):
|
||||
h, announce_deferred = self.hash_queue.popleft()
|
||||
d = self.dht_node.announceHaveBlob(binascii.unhexlify(h), self.peer_port)
|
||||
d.chainDeferred(announce_deferred)
|
||||
d.addBoth(lambda _: reactor.callLater(0, announce))
|
||||
else:
|
||||
self._concurrent_announcers -= 1
|
||||
|
||||
for i in range(self._concurrent_announcers, 5):
|
||||
# TODO: maybe make the 5 configurable
|
||||
self._concurrent_announcers += 1
|
||||
announce()
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
|
||||
class DHTHashSupplier(object):
|
||||
"""Classes derived from this class give hashes to a hash announcer"""
|
||||
def __init__(self, announcer):
|
||||
if announcer is not None:
|
||||
announcer.add_supplier(self)
|
||||
self.hash_announcer = announcer
|
||||
self.hash_reannounce_time = 60 * 60 # 1 hour
|
||||
|
||||
def hashes_to_announce(self):
|
||||
pass
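A supplier only needs to override hashes_to_announce() and return a deferred list of hex-encoded blob hashes. A minimal hypothetical supplier, assuming the hash list already exists, might look like this:

# Hypothetical supplier that always announces a fixed set of blob hashes.
from twisted.internet import defer

class StaticHashSupplier(DHTHashSupplier):
    def __init__(self, announcer, blob_hashes):
        DHTHashSupplier.__init__(self, announcer)
        self.blob_hashes = blob_hashes

    def hashes_to_announce(self):
        # hashes are hex strings; DHTHashAnnouncer unhexlifies them before announcing
        return defer.succeed(self.blob_hashes)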
91 lbrynet/core/server/ServerProtocol.py Normal file
@@ -0,0 +1,91 @@
import logging
|
||||
from twisted.internet import interfaces, error
|
||||
from twisted.internet.protocol import Protocol, ServerFactory
|
||||
from twisted.python import failure
|
||||
from zope.interface import implements
|
||||
from lbrynet.core.server.ServerRequestHandler import ServerRequestHandler
|
||||
|
||||
|
||||
class ServerProtocol(Protocol):
|
||||
"""ServerProtocol needs to:
|
||||
|
||||
1) Receive requests from its transport
|
||||
2) Pass those requests on to its request handler
|
||||
3) Tell the request handler to pause/resume producing
|
||||
4) Tell its transport to pause/resume producing
|
||||
5) Hang up when the request handler is done producing
|
||||
6) Tell the request handler to stop producing if the connection is lost
|
||||
7) Upon creation, register with the rate limiter
|
||||
8) Upon connection loss, unregister with the rate limiter
|
||||
9) Report all uploaded and downloaded bytes to the rate limiter
|
||||
10) Pause/resume production when told by the rate limiter
|
||||
"""
|
||||
|
||||
implements(interfaces.IConsumer)
|
||||
|
||||
#Protocol stuff
|
||||
|
||||
def connectionMade(self):
|
||||
logging.debug("Got a connection")
|
||||
peer_info = self.transport.getPeer()
|
||||
self.peer = self.factory.peer_manager.get_peer(peer_info.host, peer_info.port)
|
||||
self.request_handler = ServerRequestHandler(self)
|
||||
for query_handler_factory, enabled in self.factory.query_handler_factories.iteritems():
|
||||
if enabled is True:
|
||||
query_handler = query_handler_factory.build_query_handler()
|
||||
query_handler.register_with_request_handler(self.request_handler, self.peer)
|
||||
logging.debug("Setting the request handler")
|
||||
self.factory.rate_limiter.register_protocol(self)
|
||||
|
||||
def connectionLost(self, reason=failure.Failure(error.ConnectionDone())):
|
||||
if self.request_handler is not None:
|
||||
self.request_handler.stopProducing()
|
||||
self.factory.rate_limiter.unregister_protocol(self)
|
||||
if not reason.check(error.ConnectionDone):
|
||||
logging.warning("Closing a connection. Reason: %s", reason.getErrorMessage())
|
||||
|
||||
def dataReceived(self, data):
|
||||
logging.debug("Receiving %s bytes of data from the transport", str(len(data)))
|
||||
self.factory.rate_limiter.report_dl_bytes(len(data))
|
||||
if self.request_handler is not None:
|
||||
self.request_handler.data_received(data)
|
||||
|
||||
#IConsumer stuff
|
||||
|
||||
def registerProducer(self, producer, streaming):
|
||||
logging.debug("Registering the producer")
|
||||
assert streaming is True
|
||||
|
||||
def unregisterProducer(self):
|
||||
self.request_handler = None
|
||||
self.transport.loseConnection()
|
||||
|
||||
def write(self, data):
|
||||
logging.debug("Writing %s bytes of data to the transport", str(len(data)))
|
||||
self.transport.write(data)
|
||||
self.factory.rate_limiter.report_ul_bytes(len(data))
|
||||
|
||||
#Rate limiter stuff
|
||||
|
||||
def throttle_upload(self):
|
||||
if self.request_handler is not None:
|
||||
self.request_handler.pauseProducing()
|
||||
|
||||
def unthrottle_upload(self):
|
||||
if self.request_handler is not None:
|
||||
self.request_handler.resumeProducing()
|
||||
|
||||
def throttle_download(self):
|
||||
self.transport.pauseProducing()
|
||||
|
||||
def unthrottle_download(self):
|
||||
self.transport.resumeProducing()
|
||||
|
||||
|
||||
class ServerProtocolFactory(ServerFactory):
|
||||
protocol = ServerProtocol
|
||||
|
||||
def __init__(self, rate_limiter, query_handler_factories, peer_manager):
|
||||
self.rate_limiter = rate_limiter
|
||||
self.query_handler_factories = query_handler_factories
|
||||
self.peer_manager = peer_manager
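The factory is plain Twisted plumbing; wiring it to a listening port looks roughly like the sketch below. The rate limiter, query handler factories, peer manager, and port number are stand-ins for whatever the calling application already has.

# Hypothetical wiring of ServerProtocolFactory to a TCP port.
from twisted.internet import reactor

factory = ServerProtocolFactory(rate_limiter, query_handler_factories, peer_manager)
reactor.listenTCP(3333, factory)  # port number is illustrative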
171 lbrynet/core/server/ServerRequestHandler.py Normal file
@@ -0,0 +1,171 @@
import json
|
||||
import logging
|
||||
from twisted.internet import interfaces, defer
|
||||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IRequestHandler
|
||||
|
||||
|
||||
class ServerRequestHandler(object):
|
||||
"""This class handles requests from clients. It can upload blobs and return request for information about
|
||||
more blobs that are associated with streams"""
|
||||
|
||||
implements(interfaces.IPushProducer, interfaces.IConsumer, IRequestHandler)
|
||||
|
||||
def __init__(self, consumer):
|
||||
self.consumer = consumer
|
||||
self.production_paused = False
|
||||
self.request_buff = ''
|
||||
self.response_buff = ''
|
||||
self.producer = None
|
||||
self.request_received = False
|
||||
self.CHUNK_SIZE = 2**14
|
||||
self.query_handlers = {} # {IQueryHandler: [query_identifiers]}
|
||||
self.blob_sender = None
|
||||
self.consumer.registerProducer(self, True)
|
||||
|
||||
#IPushProducer stuff
|
||||
|
||||
def pauseProducing(self):
|
||||
self.production_paused = True
|
||||
|
||||
def stopProducing(self):
|
||||
if self.producer is not None:
|
||||
self.producer.stopProducing()
|
||||
self.producer = None
|
||||
self.production_paused = True
|
||||
self.consumer.unregisterProducer()
|
||||
|
||||
def resumeProducing(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.production_paused = False
|
||||
self._produce_more()
|
||||
if self.producer is not None:
|
||||
reactor.callLater(0, self.producer.resumeProducing)
|
||||
|
||||
def _produce_more(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if self.production_paused is False:
|
||||
chunk = self.response_buff[:self.CHUNK_SIZE]
|
||||
self.response_buff = self.response_buff[self.CHUNK_SIZE:]
|
||||
if chunk != '':
|
||||
logging.debug("writing %s bytes to the client", str(len(chunk)))
|
||||
self.consumer.write(chunk)
|
||||
reactor.callLater(0, self._produce_more)
|
||||
|
||||
#IConsumer stuff
|
||||
|
||||
def registerProducer(self, producer, streaming):
|
||||
#assert self.file_sender == producer
|
||||
self.producer = producer
|
||||
assert streaming is False
|
||||
producer.resumeProducing()
|
||||
|
||||
def unregisterProducer(self):
|
||||
self.producer = None
|
||||
|
||||
def write(self, data):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
self.response_buff = self.response_buff + data
|
||||
self._produce_more()
|
||||
|
||||
def get_more_data():
|
||||
if self.producer is not None:
|
||||
logging.debug("Requesting more data from the producer")
|
||||
self.producer.resumeProducing()
|
||||
|
||||
reactor.callLater(0, get_more_data)
|
||||
|
||||
#From Protocol
|
||||
|
||||
def data_received(self, data):
|
||||
logging.debug("Received data")
|
||||
logging.debug("%s", str(data))
|
||||
if self.request_received is False:
|
||||
self.request_buff = self.request_buff + data
|
||||
msg = self.try_to_parse_request(self.request_buff)
|
||||
if msg is not None:
|
||||
self.request_buff = ''
|
||||
d = self.handle_request(msg)
|
||||
if self.blob_sender is not None:
|
||||
d.addCallback(lambda _: self.blob_sender.send_blob_if_requested(self))
|
||||
d.addCallbacks(lambda _: self.finished_response(), self.request_failure_handler)
|
||||
else:
|
||||
logging.info("Request buff not a valid json message")
|
||||
logging.info("Request buff: %s", str(self.request_buff))
|
||||
else:
|
||||
logging.warning("The client sent data when we were uploading a file. This should not happen")
|
||||
|
||||
######### IRequestHandler #########
|
||||
|
||||
def register_query_handler(self, query_handler, query_identifiers):
|
||||
self.query_handlers[query_handler] = query_identifiers
|
||||
|
||||
def register_blob_sender(self, blob_sender):
|
||||
self.blob_sender = blob_sender
|
||||
|
||||
#response handling
|
||||
|
||||
def request_failure_handler(self, err):
|
||||
logging.warning("An error occurred handling a request. Error: %s", err.getErrorMessage())
|
||||
self.stopProducing()
|
||||
return err
|
||||
|
||||
def finished_response(self):
|
||||
self.request_received = False
|
||||
self._produce_more()
|
||||
|
||||
def send_response(self, msg):
|
||||
m = json.dumps(msg)
|
||||
logging.info("Sending a response of length %s", str(len(m)))
|
||||
logging.debug("Response: %s", str(m))
|
||||
self.response_buff = self.response_buff + m
|
||||
self._produce_more()
|
||||
return True
|
||||
|
||||
def handle_request(self, msg):
|
||||
logging.debug("Handling a request")
|
||||
logging.debug(str(msg))
|
||||
|
||||
def create_response_message(results):
|
||||
response = {}
|
||||
for success, result in results:
|
||||
if success is True:
|
||||
response.update(result)
|
||||
else:
|
||||
# result is a Failure
|
||||
return result
|
||||
logging.debug("Finished making the response message. Response: %s", str(response))
|
||||
return response
|
||||
|
||||
def log_errors(err):
|
||||
logging.warning("An error occurred handling a client request. Error message: %s", err.getErrorMessage())
|
||||
return err
|
||||
|
||||
def send_response(response):
|
||||
self.send_response(response)
|
||||
return True
|
||||
|
||||
ds = []
|
||||
for query_handler, query_identifiers in self.query_handlers.iteritems():
|
||||
queries = {q_i: msg[q_i] for q_i in query_identifiers if q_i in msg}
|
||||
d = query_handler.handle_queries(queries)
|
||||
d.addErrback(log_errors)
|
||||
ds.append(d)
|
||||
|
||||
dl = defer.DeferredList(ds)
|
||||
dl.addCallback(create_response_message)
|
||||
dl.addCallback(send_response)
|
||||
return dl
|
||||
|
||||
def try_to_parse_request(self, request_buff):
|
||||
try:
|
||||
msg = json.loads(request_buff)
|
||||
return msg
|
||||
except ValueError:
|
||||
return None
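Because try_to_parse_request() simply attempts json.loads() on the accumulated buffer, a request is only handled once the complete JSON document has arrived; there is no length prefix or delimiter. A small sketch of that buffering behaviour:

# Partial JSON parses to None; the complete document parses to a dict,
# at which point data_received() clears the buffer and dispatches the request.
import json

def try_parse(buff):
    try:
        return json.loads(buff)
    except ValueError:
        return None

assert try_parse('{"blob_hash": "ab') is None            # incomplete request
assert try_parse('{"blob_hash": "abcd"}') == {"blob_hash": "abcd"}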
0 lbrynet/core/server/__init__.py Normal file
28 lbrynet/core/utils.py Normal file
@@ -0,0 +1,28 @@
from lbrynet.core.cryptoutils import get_lbry_hash_obj
|
||||
import random
|
||||
|
||||
|
||||
blobhash_length = get_lbry_hash_obj().digest_size * 2 # digest_size is in bytes, and blob hashes are hex encoded
|
||||
|
||||
|
||||
def generate_id(num=None):
|
||||
h = get_lbry_hash_obj()
|
||||
if num is not None:
|
||||
h.update(str(num))
|
||||
else:
|
||||
h.update(str(random.getrandbits(512)))
|
||||
return h.digest()
|
||||
|
||||
|
||||
def is_valid_blobhash(blobhash):
|
||||
"""
|
||||
@param blobhash: string, the blobhash to check
|
||||
|
||||
@return: Whether the blobhash is the correct length and contains only valid characters (0-9, a-f)
|
||||
"""
|
||||
if len(blobhash) != blobhash_length:
|
||||
return False
|
||||
for l in blobhash:
|
||||
if l not in "0123456789abcdef":
|
||||
return False
|
||||
return True
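A quick usage sketch of the helpers above: generate_id() returns the raw digest, while is_valid_blobhash() expects the hex-encoded form (twice the digest size in characters for the sha384-based hash used on LBRYnet).

# Illustrative use of generate_id() and is_valid_blobhash().
import binascii

node_id = generate_id()                   # raw digest bytes
blob_hash = binascii.hexlify(node_id)     # hex form, as used for blob hashes
assert is_valid_blobhash(blob_hash)       # correct length, all chars in 0-9a-f
assert not is_valid_blobhash("not-a-hash")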
83 lbrynet/create_network.py Normal file
@@ -0,0 +1,83 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
|
||||
# Thanks to Paul Cannon for IP-address resolution functions (taken from aspn.activestate.com)
|
||||
|
||||
import argparse
|
||||
import os, sys, time, signal
|
||||
|
||||
amount = 0
|
||||
|
||||
|
||||
def destroyNetwork(nodes):
|
||||
print 'Destroying Kademlia network...'
|
||||
i = 0
|
||||
for node in nodes:
|
||||
i += 1
|
||||
hashAmount = i*50/amount
|
||||
hashbar = '#'*hashAmount
|
||||
output = '\r[%-50s] %d/%d' % (hashbar, i, amount)
|
||||
sys.stdout.write(output)
|
||||
time.sleep(0.15)
|
||||
os.kill(node, signal.SIGTERM)
|
||||
print
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
parser = argparse.ArgumentParser(description="Launch a network of dht nodes")
|
||||
|
||||
parser.add_argument("amount_of_nodes",
|
||||
help="The number of nodes to create",
|
||||
type=int)
|
||||
parser.add_argument("--nic_ip_address",
|
||||
help="The network interface on which these nodes will listen for connections "
|
||||
"from each other and from other nodes. If omitted, an attempt will be "
|
||||
"made to automatically determine the system's IP address, but this may "
|
||||
"result in the nodes being reachable only from this system")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
global amount
|
||||
amount = args.amount_of_nodes
|
||||
if args.nic_ip_address:
|
||||
ipAddress = args.nic_ip_address
|
||||
else:
|
||||
import socket
|
||||
ipAddress = socket.gethostbyname(socket.gethostname())
|
||||
print 'Network interface IP address omitted; using %s...' % ipAddress
|
||||
|
||||
startPort = 4000
|
||||
port = startPort+1
|
||||
nodes = []
|
||||
print 'Creating Kademlia network...'
|
||||
try:
|
||||
nodes.append(os.spawnlp(os.P_NOWAIT, 'lbrynet-launch-node', 'lbrynet-launch-node', str(startPort)))
|
||||
for i in range(amount-1):
|
||||
time.sleep(0.15)
|
||||
hashAmount = i*50/amount
|
||||
hashbar = '#'*hashAmount
|
||||
output = '\r[%-50s] %d/%d' % (hashbar, i, amount)
|
||||
sys.stdout.write(output)
|
||||
nodes.append(os.spawnlp(os.P_NOWAIT, 'lbrynet-launch-node', 'lbrynet-launch-node', str(port), ipAddress, str(startPort)))
|
||||
port += 1
|
||||
except KeyboardInterrupt:
|
||||
print '\nNetwork creation cancelled.'
|
||||
destroyNetwork(nodes)
|
||||
sys.exit(1)
|
||||
|
||||
print '\n\n---------------\nNetwork running\n---------------\n'
|
||||
try:
|
||||
while 1:
|
||||
time.sleep(1)
|
||||
except KeyboardInterrupt:
|
||||
pass
|
||||
finally:
|
||||
destroyNetwork(nodes)
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
106 lbrynet/cryptstream/CryptBlob.py Normal file
@@ -0,0 +1,106 @@
import binascii
|
||||
import logging
|
||||
from Crypto.Cipher import AES
|
||||
from lbrynet.conf import BLOB_SIZE
|
||||
from lbrynet.core.BlobInfo import BlobInfo
|
||||
|
||||
|
||||
class CryptBlobInfo(BlobInfo):
|
||||
def __init__(self, blob_hash, blob_num, length, iv):
|
||||
BlobInfo.__init__(self, blob_hash, blob_num, length)
|
||||
self.iv = iv
|
||||
|
||||
|
||||
class StreamBlobDecryptor(object):
|
||||
def __init__(self, blob, key, iv, length):
|
||||
self.blob = blob
|
||||
self.key = key
|
||||
self.iv = iv
|
||||
self.length = length
|
||||
self.buff = b''
|
||||
self.len_read = 0
|
||||
self.cipher = AES.new(self.key, AES.MODE_CBC, self.iv)
|
||||
|
||||
def decrypt(self, write_func):
|
||||
|
||||
def remove_padding(data):
|
||||
pad_len = ord(data[-1])
|
||||
data, padding = data[:-1 * pad_len], data[-1 * pad_len:]
|
||||
for c in padding:
|
||||
assert ord(c) == pad_len
|
||||
return data
|
||||
|
||||
def write_bytes():
|
||||
if self.len_read < self.length:
|
||||
num_bytes_to_decrypt = (len(self.buff) // self.cipher.block_size) * self.cipher.block_size
|
||||
data_to_decrypt, self.buff = self.buff[:num_bytes_to_decrypt], self.buff[num_bytes_to_decrypt:]
|
||||
write_func(self.cipher.decrypt(data_to_decrypt))
|
||||
|
||||
def finish_decrypt():
|
||||
assert len(self.buff) % self.cipher.block_size == 0
|
||||
data_to_decrypt, self.buff = self.buff, b''
|
||||
write_func(remove_padding(self.cipher.decrypt(data_to_decrypt)))
|
||||
|
||||
def decrypt_bytes(data):
|
||||
self.buff += data
|
||||
self.len_read += len(data)
|
||||
write_bytes()
|
||||
#write_func(remove_padding(self.cipher.decrypt(self.buff)))
|
||||
|
||||
d = self.blob.read(decrypt_bytes)
|
||||
d.addCallback(lambda _: finish_decrypt())
|
||||
return d
|
||||
|
||||
|
||||
class CryptStreamBlobMaker(object):
|
||||
"""This class encrypts data and writes it to a new blob"""
|
||||
def __init__(self, key, iv, blob_num, blob):
|
||||
self.key = key
|
||||
self.iv = iv
|
||||
self.blob_num = blob_num
|
||||
self.blob = blob
|
||||
self.cipher = AES.new(self.key, AES.MODE_CBC, self.iv)
|
||||
self.buff = b''
|
||||
self.length = 0
|
||||
|
||||
def write(self, data):
|
||||
max_bytes_to_write = BLOB_SIZE - self.length - 1
|
||||
done = False
|
||||
if max_bytes_to_write <= len(data):
|
||||
num_bytes_to_write = max_bytes_to_write
|
||||
done = True
|
||||
else:
|
||||
num_bytes_to_write = len(data)
|
||||
self.length += num_bytes_to_write
|
||||
data_to_write = data[:num_bytes_to_write]
|
||||
self.buff += data_to_write
|
||||
self._write_buffer()
|
||||
return done, num_bytes_to_write
|
||||
|
||||
def close(self):
|
||||
logging.debug("closing blob %s with plaintext len %s", str(self.blob_num), str(self.length))
|
||||
if self.length != 0:
|
||||
self._close_buffer()
|
||||
d = self.blob.close()
|
||||
d.addCallback(self._return_info)
|
||||
logging.debug("called the finished_callback from CryptStreamBlobMaker.close")
|
||||
return d
|
||||
|
||||
def _write_buffer(self):
|
||||
num_bytes_to_encrypt = (len(self.buff) // AES.block_size) * AES.block_size
|
||||
data_to_encrypt, self.buff = self.buff[:num_bytes_to_encrypt], self.buff[num_bytes_to_encrypt:]
|
||||
encrypted_data = self.cipher.encrypt(data_to_encrypt)
|
||||
self.blob.write(encrypted_data)
|
||||
|
||||
def _close_buffer(self):
|
||||
data_to_encrypt, self.buff = self.buff, b''
|
||||
assert len(data_to_encrypt) < AES.block_size
|
||||
pad_len = AES.block_size - len(data_to_encrypt)
|
||||
padded_data = data_to_encrypt + chr(pad_len) * pad_len
|
||||
self.length += pad_len
|
||||
assert len(padded_data) == AES.block_size
|
||||
encrypted_data = self.cipher.encrypt(padded_data)
|
||||
self.blob.write(encrypted_data)
|
||||
|
||||
def _return_info(self, blob_hash):
|
||||
return CryptBlobInfo(blob_hash, self.blob_num, self.length, binascii.hexlify(self.iv))
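The padding scheme always adds at least one byte and stores the pad length in every padding byte (PKCS#7 style), which is why write() reserves a byte with BLOB_SIZE - self.length - 1. A standalone sketch of the same pad/encrypt/decrypt round trip, using PyCrypto directly (key, IV, and plaintext below are arbitrary):

# Standalone sketch of the padding and AES-CBC round trip used by the classes above.
from Crypto import Random
from Crypto.Cipher import AES

key = Random.new().read(AES.block_size)
iv = Random.new().read(AES.block_size)

plaintext = b'some stream data'
pad_len = AES.block_size - len(plaintext) % AES.block_size
padded = plaintext + chr(pad_len) * pad_len

ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(padded)

decrypted = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
assert decrypted[:-ord(decrypted[-1])] == plaintext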
94 lbrynet/cryptstream/CryptStreamCreator.py Normal file
@@ -0,0 +1,94 @@
"""
|
||||
Utility for creating Crypt Streams, which are encrypted blobs and associated metadata.
|
||||
"""
|
||||
|
||||
import logging
|
||||
|
||||
from Crypto import Random
|
||||
from Crypto.Cipher import AES
|
||||
|
||||
from twisted.internet import defer
|
||||
from lbrynet.core.StreamCreator import StreamCreator
|
||||
from lbrynet.cryptstream.CryptBlob import CryptStreamBlobMaker
|
||||
|
||||
|
||||
class CryptStreamCreator(StreamCreator):
|
||||
"""Create a new stream with blobs encrypted by a symmetric cipher.
|
||||
|
||||
Each blob is encrypted with the same key, but each blob has its own initialization vector
|
||||
which is associated with the blob when the blob is associated with the stream."""
|
||||
def __init__(self, blob_manager, name=None, key=None, iv_generator=None):
|
||||
"""
|
||||
@param blob_manager: Object that stores and provides access to blobs.
|
||||
@type blob_manager: BlobManager
|
||||
|
||||
@param name: the name of the stream, which will be presented to the user
|
||||
@type name: string
|
||||
|
||||
@param key: the raw AES key which will be used to encrypt the blobs. If None, a random key will
|
||||
be generated.
|
||||
@type key: string
|
||||
|
||||
@param iv_generator: a generator which yields initialization vectors for the blobs. Will be called
|
||||
once for each blob.
|
||||
@type iv_generator: a generator function which yields strings
|
||||
|
||||
@return: None
|
||||
"""
|
||||
StreamCreator.__init__(self, name)
|
||||
self.blob_manager = blob_manager
|
||||
self.key = key
|
||||
if iv_generator is None:
|
||||
self.iv_generator = self.random_iv_generator()
|
||||
else:
|
||||
self.iv_generator = iv_generator
|
||||
|
||||
@staticmethod
|
||||
def random_iv_generator():
|
||||
while 1:
|
||||
yield Random.new().read(AES.block_size)
|
||||
|
||||
def setup(self):
|
||||
"""Create the symmetric key if it wasn't provided"""
|
||||
|
||||
if self.key is None:
|
||||
self.key = Random.new().read(AES.block_size)
|
||||
|
||||
return defer.succeed(True)
|
||||
|
||||
def _finalize(self):
|
||||
logging.debug("_finalize has been called")
|
||||
self.blob_count += 1
|
||||
iv = self.iv_generator.next()
|
||||
final_blob_creator = self.blob_manager.get_blob_creator()
|
||||
logging.debug("Created the finished_deferred")
|
||||
final_blob = self._get_blob_maker(iv, final_blob_creator)
|
||||
logging.debug("Created the final blob")
|
||||
logging.debug("Calling close on final blob")
|
||||
d = final_blob.close()
|
||||
d.addCallback(self._blob_finished)
|
||||
self.finished_deferreds.append(d)
|
||||
logging.debug("called close on final blob, returning from make_final_blob")
|
||||
return d
|
||||
|
||||
def _write(self, data):
|
||||
|
||||
def close_blob(blob):
|
||||
d = blob.close()
|
||||
d.addCallback(self._blob_finished)
|
||||
self.finished_deferreds.append(d)
|
||||
|
||||
while len(data) > 0:
|
||||
if self.current_blob is None:
|
||||
next_blob_creator = self.blob_manager.get_blob_creator()
|
||||
self.blob_count += 1
|
||||
iv = self.iv_generator.next()
|
||||
self.current_blob = self._get_blob_maker(iv, next_blob_creator)
|
||||
done, num_bytes_written = self.current_blob.write(data)
|
||||
data = data[num_bytes_written:]
|
||||
if done is True:
|
||||
close_blob(self.current_blob)
|
||||
self.current_blob = None
|
||||
|
||||
def _get_blob_maker(self, iv, blob_creator):
|
||||
return CryptStreamBlobMaker(self.key, iv, self.blob_count, blob_creator)
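The iv_generator hook lets callers supply their own initialization vectors (for example, deterministic IVs in tests, which would not be appropriate for real streams); by default random_iv_generator() is used. A hypothetical counter-based generator, assuming a blob_manager object already exists, might look like:

# Hypothetical deterministic IV generator; each IV must be exactly AES.block_size bytes.
import struct
from Crypto.Cipher import AES

def counter_iv_generator():
    n = 0
    while True:
        n += 1
        yield struct.pack('>Q', n).rjust(AES.block_size, '\x00')

creator = CryptStreamCreator(blob_manager, name="example stream",
                             iv_generator=counter_iv_generator())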
8 lbrynet/cryptstream/__init__.py Normal file
@@ -0,0 +1,8 @@
"""
|
||||
Classes and functions for dealing with Crypt Streams.
|
||||
|
||||
Crypt Streams are encrypted blobs and metadata tying those blobs together. At least some of the
|
||||
metadata is generally stored in a Stream Descriptor File, for example containing a public key
|
||||
used to bind blobs to the stream and a symmetric key used to encrypt the blobs. The list of blobs
|
||||
may or may not be present.
|
||||
"""
19 lbrynet/cryptstream/client/CryptBlobHandler.py Normal file
@@ -0,0 +1,19 @@
import binascii
|
||||
from zope.interface import implements
|
||||
from lbrynet.cryptstream.CryptBlob import StreamBlobDecryptor
|
||||
from lbrynet.interfaces import IBlobHandler
|
||||
|
||||
|
||||
class CryptBlobHandler(object):
|
||||
implements(IBlobHandler)
|
||||
|
||||
def __init__(self, key, write_func):
|
||||
self.key = key
|
||||
self.write_func = write_func
|
||||
|
||||
######## IBlobHandler #########
|
||||
|
||||
def handle_blob(self, blob, blob_info):
|
||||
blob_decryptor = StreamBlobDecryptor(blob, self.key, binascii.unhexlify(blob_info.iv), blob_info.length)
|
||||
d = blob_decryptor.decrypt(self.write_func)
|
||||
return d
213 lbrynet/cryptstream/client/CryptStreamDownloader.py Normal file
@@ -0,0 +1,213 @@
from zope.interface import implements
|
||||
from lbrynet.interfaces import IStreamDownloader
|
||||
from lbrynet.core.client.BlobRequester import BlobRequester
|
||||
from lbrynet.core.client.ConnectionManager import ConnectionManager
|
||||
from lbrynet.core.client.DownloadManager import DownloadManager
|
||||
from lbrynet.core.client.StreamProgressManager import FullStreamProgressManager
|
||||
from lbrynet.cryptstream.client.CryptBlobHandler import CryptBlobHandler
|
||||
from twisted.internet import defer
|
||||
from twisted.python.failure import Failure
|
||||
|
||||
|
||||
class StartFailedError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class AlreadyRunningError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class AlreadyStoppedError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class CurrentlyStoppingError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class CurrentlyStartingError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class CryptStreamDownloader(object):
|
||||
|
||||
implements(IStreamDownloader)
|
||||
|
||||
def __init__(self, peer_finder, rate_limiter, blob_manager,
|
||||
payment_rate_manager, wallet, upload_allowed):
|
||||
"""
|
||||
Initialize a CryptStreamDownloader
|
||||
|
||||
@param peer_finder: An object which implements the IPeerFinder interface. Used to look up peers by a hashsum.
|
||||
|
||||
@param rate_limiter: An object which implements the IRateLimiter interface
|
||||
|
||||
@param blob_manager: A BlobManager object
|
||||
|
||||
@param payment_rate_manager: A PaymentRateManager object
|
||||
|
||||
@param wallet: An object which implements the ILBRYWallet interface
|
||||
|
||||
@return:
|
||||
"""
|
||||
|
||||
self.peer_finder = peer_finder
|
||||
self.rate_limiter = rate_limiter
|
||||
self.blob_manager = blob_manager
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.wallet = wallet
|
||||
self.upload_allowed = upload_allowed
|
||||
|
||||
self.key = None
|
||||
self.stream_name = None
|
||||
|
||||
self.completed = False
|
||||
self.stopped = True
|
||||
self.stopping = False
|
||||
self.starting = False
|
||||
|
||||
self.download_manager = None
|
||||
self.finished_deferred = None
|
||||
|
||||
self.points_paid = 0.0
|
||||
|
||||
def toggle_running(self):
|
||||
if self.stopped is True:
|
||||
return self.start()
|
||||
else:
|
||||
return self.stop()
|
||||
|
||||
def start(self):
|
||||
|
||||
def set_finished_deferred():
|
||||
self.finished_deferred = defer.Deferred()
|
||||
return self.finished_deferred
|
||||
|
||||
if self.starting is True:
|
||||
raise CurrentlyStartingError()
|
||||
if self.stopping is True:
|
||||
raise CurrentlyStoppingError()
|
||||
if self.stopped is False:
|
||||
raise AlreadyRunningError()
|
||||
assert self.download_manager is None
|
||||
self.starting = True
|
||||
self.completed = False
|
||||
d = self._start()
|
||||
d.addCallback(lambda _: set_finished_deferred())
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
|
||||
def check_if_stop_succeeded(success):
|
||||
self.stopping = False
|
||||
if success is True:
|
||||
self.stopped = True
|
||||
self._remove_download_manager()
|
||||
return success
|
||||
|
||||
if self.stopped is True:
|
||||
raise AlreadyStoppedError()
|
||||
if self.stopping is True:
|
||||
raise CurrentlyStoppingError()
|
||||
assert self.download_manager is not None
|
||||
self.stopping = True
|
||||
d = self.download_manager.stop_downloading()
|
||||
self._fire_completed_deferred()
|
||||
d.addCallback(check_if_stop_succeeded)
|
||||
return d
|
||||
|
||||
def _start_failed(self):
|
||||
|
||||
def set_stopped():
|
||||
self.stopped = True
|
||||
self.stopping = False
|
||||
self.starting = False
|
||||
|
||||
if self.download_manager is not None:
|
||||
d = self.download_manager.stop_downloading()
|
||||
d.addCallback(lambda _: self._remove_download_manager())
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
d.addCallback(lambda _: set_stopped())
|
||||
d.addCallback(lambda _: Failure(StartFailedError()))
|
||||
return d
|
||||
|
||||
def _start(self):
|
||||
|
||||
def check_start_succeeded(success):
|
||||
if success:
|
||||
self.starting = False
|
||||
self.stopped = False
|
||||
self.completed = False
|
||||
return True
|
||||
else:
|
||||
return self._start_failed()
|
||||
|
||||
self.download_manager = self._get_download_manager()
|
||||
d = self.download_manager.start_downloading()
|
||||
d.addCallbacks(check_start_succeeded)
|
||||
return d
|
||||
|
||||
def _get_download_manager(self):
|
||||
download_manager = DownloadManager(self.blob_manager, self.upload_allowed)
|
||||
download_manager.blob_info_finder = self._get_metadata_handler(download_manager)
|
||||
download_manager.blob_requester = self._get_blob_requester(download_manager)
|
||||
download_manager.progress_manager = self._get_progress_manager(download_manager)
|
||||
download_manager.blob_handler = self._get_blob_handler(download_manager)
|
||||
download_manager.wallet_info_exchanger = self.wallet.get_info_exchanger()
|
||||
download_manager.connection_manager = self._get_connection_manager(download_manager)
|
||||
#return DownloadManager(self.blob_manager, self.blob_requester, self.metadata_handler,
|
||||
# self.progress_manager, self.blob_handler, self.connection_manager)
|
||||
return download_manager
|
||||
|
||||
def _remove_download_manager(self):
|
||||
self.download_manager.blob_info_finder = None
|
||||
self.download_manager.blob_requester = None
|
||||
self.download_manager.progress_manager = None
|
||||
self.download_manager.blob_handler = None
|
||||
self.download_manager.wallet_info_exchanger = None
|
||||
self.download_manager.connection_manager = None
|
||||
self.download_manager = None
|
||||
|
||||
def _get_primary_request_creators(self, download_manager):
|
||||
return [download_manager.blob_requester]
|
||||
|
||||
def _get_secondary_request_creators(self, download_manager):
|
||||
return [download_manager.wallet_info_exchanger]
|
||||
|
||||
def _get_metadata_handler(self, download_manager):
|
||||
pass
|
||||
|
||||
def _get_blob_requester(self, download_manager):
|
||||
return BlobRequester(self.blob_manager, self.peer_finder, self.payment_rate_manager, self.wallet,
|
||||
download_manager)
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return FullStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager)
|
||||
|
||||
def _get_write_func(self):
|
||||
pass
|
||||
|
||||
def _get_blob_handler(self, download_manager):
|
||||
return CryptBlobHandler(self.key, self._get_write_func())
|
||||
|
||||
def _get_connection_manager(self, download_manager):
|
||||
return ConnectionManager(self, self.rate_limiter,
|
||||
self._get_primary_request_creators(download_manager),
|
||||
self._get_secondary_request_creators(download_manager))
|
||||
|
||||
def _fire_completed_deferred(self):
|
||||
self.finished_deferred, d = None, self.finished_deferred
|
||||
if d is not None:
|
||||
d.callback(self._get_finished_deferred_callback_value())
|
||||
|
||||
def _get_finished_deferred_callback_value(self):
|
||||
return None
|
||||
|
||||
def _finished_downloading(self, finished):
|
||||
if finished is True:
|
||||
self.completed = True
|
||||
return self.stop()
|
||||
|
||||
def insufficient_funds(self):
|
||||
return self.stop()
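CryptStreamDownloader leaves _get_metadata_handler() and _get_write_func() to subclasses. A bare-bones, hypothetical subclass that writes decrypted bytes to an open file object could look like the following; SomeMetadataHandler is a stand-in for whatever IMetadataHandler implementation the stream type provides.

# Hypothetical subclass; SomeMetadataHandler stands in for a real IMetadataHandler.
class FileWritingDownloader(CryptStreamDownloader):
    def __init__(self, file_handle, *args, **kwargs):
        CryptStreamDownloader.__init__(self, *args, **kwargs)
        self.file_handle = file_handle

    def _get_metadata_handler(self, download_manager):
        return SomeMetadataHandler(download_manager)

    def _get_write_func(self):
        # decrypted plaintext from CryptBlobHandler is passed straight to the file
        return self.file_handle.write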
0 lbrynet/cryptstream/client/__init__.py Normal file
7 lbrynet/dht/AUTHORS Normal file
@@ -0,0 +1,7 @@
Francois Aucamp <faucamp@csir.co.za>
|
||||
|
||||
Thanks goes to the following people for providing patches/suggestions/tests:
|
||||
|
||||
Neil Kleynhans <ntkleynhans@csir.co.za>
|
||||
Haiyang Ma <haiyang.ma@maidsafe.net>
|
||||
Bryan McAlister <bmcalister@csir.co.za>
|
165 lbrynet/dht/COPYING Normal file
@@ -0,0 +1,165 @@
GNU LESSER GENERAL PUBLIC LICENSE
|
||||
Version 3, 29 June 2007
|
||||
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
of this license document, but changing it is not allowed.
|
||||
|
||||
|
||||
This version of the GNU Lesser General Public License incorporates
|
||||
the terms and conditions of version 3 of the GNU General Public
|
||||
License, supplemented by the additional permissions listed below.
|
||||
|
||||
0. Additional Definitions.
|
||||
|
||||
As used herein, "this License" refers to version 3 of the GNU Lesser
|
||||
General Public License, and the "GNU GPL" refers to version 3 of the GNU
|
||||
General Public License.
|
||||
|
||||
"The Library" refers to a covered work governed by this License,
|
||||
other than an Application or a Combined Work as defined below.
|
||||
|
||||
An "Application" is any work that makes use of an interface provided
|
||||
by the Library, but which is not otherwise based on the Library.
|
||||
Defining a subclass of a class defined by the Library is deemed a mode
|
||||
of using an interface provided by the Library.
|
||||
|
||||
A "Combined Work" is a work produced by combining or linking an
|
||||
Application with the Library. The particular version of the Library
|
||||
with which the Combined Work was made is also called the "Linked
|
||||
Version".
|
||||
|
||||
The "Minimal Corresponding Source" for a Combined Work means the
|
||||
Corresponding Source for the Combined Work, excluding any source code
|
||||
for portions of the Combined Work that, considered in isolation, are
|
||||
based on the Application, and not on the Linked Version.
|
||||
|
||||
The "Corresponding Application Code" for a Combined Work means the
|
||||
object code and/or source code for the Application, including any data
|
||||
and utility programs needed for reproducing the Combined Work from the
|
||||
Application, but excluding the System Libraries of the Combined Work.
|
||||
|
||||
1. Exception to Section 3 of the GNU GPL.
|
||||
|
||||
You may convey a covered work under sections 3 and 4 of this License
|
||||
without being bound by section 3 of the GNU GPL.
|
||||
|
||||
2. Conveying Modified Versions.
|
||||
|
||||
If you modify a copy of the Library, and, in your modifications, a
|
||||
facility refers to a function or data to be supplied by an Application
|
||||
that uses the facility (other than as an argument passed when the
|
||||
facility is invoked), then you may convey a copy of the modified
|
||||
version:
|
||||
|
||||
a) under this License, provided that you make a good faith effort to
|
||||
ensure that, in the event an Application does not supply the
|
||||
function or data, the facility still operates, and performs
|
||||
whatever part of its purpose remains meaningful, or
|
||||
|
||||
b) under the GNU GPL, with none of the additional permissions of
|
||||
this License applicable to that copy.
|
||||
|
||||
3. Object Code Incorporating Material from Library Header Files.
|
||||
|
||||
The object code form of an Application may incorporate material from
|
||||
a header file that is part of the Library. You may convey such object
|
||||
code under terms of your choice, provided that, if the incorporated
|
||||
material is not limited to numerical parameters, data structure
|
||||
layouts and accessors, or small macros, inline functions and templates
|
||||
(ten or fewer lines in length), you do both of the following:
|
||||
|
||||
a) Give prominent notice with each copy of the object code that the
|
||||
Library is used in it and that the Library and its use are
|
||||
covered by this License.
|
||||
|
||||
b) Accompany the object code with a copy of the GNU GPL and this license
|
||||
document.
|
||||
|
||||
4. Combined Works.
|
||||
|
||||
You may convey a Combined Work under terms of your choice that,
|
||||
taken together, effectively do not restrict modification of the
|
||||
portions of the Library contained in the Combined Work and reverse
|
||||
engineering for debugging such modifications, if you also do each of
|
||||
the following:
|
||||
|
||||
a) Give prominent notice with each copy of the Combined Work that
|
||||
the Library is used in it and that the Library and its use are
|
||||
covered by this License.
|
||||
|
||||
b) Accompany the Combined Work with a copy of the GNU GPL and this license
|
||||
document.
|
||||
|
||||
c) For a Combined Work that displays copyright notices during
|
||||
execution, include the copyright notice for the Library among
|
||||
these notices, as well as a reference directing the user to the
|
||||
copies of the GNU GPL and this license document.
|
||||
|
||||
d) Do one of the following:
|
||||
|
||||
0) Convey the Minimal Corresponding Source under the terms of this
|
||||
License, and the Corresponding Application Code in a form
|
||||
suitable for, and under terms that permit, the user to
|
||||
recombine or relink the Application with a modified version of
|
||||
the Linked Version to produce a modified Combined Work, in the
|
||||
manner specified by section 6 of the GNU GPL for conveying
|
||||
Corresponding Source.
|
||||
|
||||
1) Use a suitable shared library mechanism for linking with the
|
||||
Library. A suitable mechanism is one that (a) uses at run time
|
||||
a copy of the Library already present on the user's computer
|
||||
system, and (b) will operate properly with a modified version
|
||||
of the Library that is interface-compatible with the Linked
|
||||
Version.
|
||||
|
||||
e) Provide Installation Information, but only if you would otherwise
|
||||
be required to provide such information under section 6 of the
|
||||
GNU GPL, and only to the extent that such information is
|
||||
necessary to install and execute a modified version of the
|
||||
Combined Work produced by recombining or relinking the
|
||||
Application with a modified version of the Linked Version. (If
|
||||
you use option 4d0, the Installation Information must accompany
|
||||
the Minimal Corresponding Source and Corresponding Application
|
||||
Code. If you use option 4d1, you must provide the Installation
|
||||
Information in the manner specified by section 6 of the GNU GPL
|
||||
for conveying Corresponding Source.)
|
||||
|
||||
5. Combined Libraries.
|
||||
|
||||
You may place library facilities that are a work based on the
|
||||
Library side by side in a single library together with other library
|
||||
facilities that are not Applications and are not covered by this
|
||||
License, and convey such a combined library under terms of your
|
||||
choice, if you do both of the following:
|
||||
|
||||
a) Accompany the combined library with a copy of the same work based
|
||||
on the Library, uncombined with any other library facilities,
|
||||
conveyed under the terms of this License.
|
||||
|
||||
b) Give prominent notice with the combined library that part of it
|
||||
is a work based on the Library, and explaining where to find the
|
||||
accompanying uncombined form of the same work.
|
||||
|
||||
6. Revised Versions of the GNU Lesser General Public License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions
|
||||
of the GNU Lesser General Public License from time to time. Such new
|
||||
versions will be similar in spirit to the present version, but may
|
||||
differ in detail to address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Library as you received it specifies that a certain numbered version
|
||||
of the GNU Lesser General Public License "or any later version"
|
||||
applies to it, you have the option of following the terms and
|
||||
conditions either of that published version or of any later version
|
||||
published by the Free Software Foundation. If the Library as you
|
||||
received it does not specify a version number of the GNU Lesser
|
||||
General Public License, you may choose any version of the GNU Lesser
|
||||
General Public License ever published by the Free Software Foundation.
|
||||
|
||||
If the Library as you received it specifies that a proxy can decide
|
||||
whether future versions of the GNU Lesser General Public License shall
|
||||
apply, that proxy's public statement of acceptance of any version is
|
||||
permanent authorization for you to choose that version for the
|
||||
Library.
0 lbrynet/dht/__init__.py Normal file
52 lbrynet/dht/constants.py Normal file
@@ -0,0 +1,52 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
""" This module defines the charaterizing constants of the Kademlia network
|
||||
|
||||
C{checkRefreshInterval} and C{udpDatagramMaxSize} are implementation-specific
|
||||
constants, and do not affect general Kademlia operation.
|
||||
"""
|
||||
|
||||
######### KADEMLIA CONSTANTS ###########
|
||||
|
||||
#: Small number representing the degree of parallelism in network calls
|
||||
alpha = 3
|
||||
|
||||
#: Maximum number of contacts stored in a bucket; this should be an even number
|
||||
k = 8
|
||||
|
||||
#: Timeout for network operations (in seconds)
|
||||
rpcTimeout = 5
|
||||
|
||||
# Delay between iterations of iterative node lookups (for loose parallelism) (in seconds)
|
||||
iterativeLookupDelay = rpcTimeout / 2
|
||||
|
||||
#: If a k-bucket has not been used for this amount of time, refresh it (in seconds)
|
||||
refreshTimeout = 3600 # 1 hour
|
||||
#: The interval at which nodes replicate (republish/refresh) data they are holding
|
||||
replicateInterval = refreshTimeout
|
||||
# The time it takes for data to expire in the network; the original publisher of the data
|
||||
# will also republish the data at this time if it is still valid
|
||||
dataExpireTimeout = 86400 # 24 hours
|
||||
|
||||
tokenSecretChangeInterval = 300 # 5 minutes
|
||||
|
||||
peer_request_timeout = 10
|
||||
|
||||
######## IMPLEMENTATION-SPECIFIC CONSTANTS ###########
|
||||
|
||||
#: The interval at which the node should check whether any buckets need refreshing,
|
||||
#: or whether any data needs to be republished (in seconds)
|
||||
checkRefreshInterval = refreshTimeout/5
|
||||
|
||||
#: Max size of a single UDP datagram, in bytes. If a message is larger than this, it will
|
||||
#: be spread across several UDP packets.
|
||||
udpDatagramMaxSize = 8192 # 8 KB
|
||||
|
||||
key_bits = 384
63 lbrynet/dht/contact.py Normal file
@@ -0,0 +1,63 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
|
||||
class Contact(object):
|
||||
""" Encapsulation for remote contact
|
||||
|
||||
This class contains information on a single remote contact, and also
|
||||
provides a direct RPC API to the remote node which it represents
|
||||
"""
|
||||
def __init__(self, id, ipAddress, udpPort, networkProtocol, firstComm=0):
|
||||
self.id = id
|
||||
self.address = ipAddress
|
||||
self.port = udpPort
|
||||
self._networkProtocol = networkProtocol
|
||||
self.commTime = firstComm
|
||||
|
||||
def __eq__(self, other):
|
||||
if isinstance(other, Contact):
|
||||
return self.id == other.id
|
||||
elif isinstance(other, str):
|
||||
return self.id == other
|
||||
else:
|
||||
return False
|
||||
|
||||
def __ne__(self, other):
|
||||
if isinstance(other, Contact):
|
||||
return self.id != other.id
|
||||
elif isinstance(other, str):
|
||||
return self.id != other
|
||||
else:
|
||||
return True
|
||||
|
||||
def compact_ip(self):
|
||||
compact_ip = reduce(lambda buff, x: buff + bytearray([int(x)]), self.address.split('.'), bytearray())
|
||||
return str(compact_ip)
|
||||
|
||||
def __str__(self):
|
||||
return '<%s.%s object; IP address: %s, UDP port: %d>' % (self.__module__, self.__class__.__name__, self.address, self.port)
|
||||
|
||||
def __getattr__(self, name):
|
||||
""" This override allows the host node to call a method of the remote
|
||||
node (i.e. this contact) as if it was a local function.
|
||||
|
||||
For instance, if C{remoteNode} is an instance of C{Contact}, the
|
||||
following will result in C{remoteNode}'s C{test()} method to be
|
||||
called with argument C{123}::
|
||||
remoteNode.test(123)
|
||||
|
||||
Such a RPC method call will return a Deferred, which will callback
|
||||
when the contact responds with the result (or an error occurs).
|
||||
This happens via this contact's C{_networkProtocol} object (i.e. the
|
||||
host Node's C{_protocol} object).
|
||||
"""
|
||||
def _sendRPC(*args, **kwargs):
|
||||
return self._networkProtocol.sendRPC(self, name, args, **kwargs)
|
||||
return _sendRPC
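Because any unknown attribute is turned into an RPC by __getattr__, code elsewhere can call remote methods directly on a Contact. A sketch of that proxying; the node id, protocol object, and lookup key are placeholders, and findNode is the usual Kademlia RPC handled by the protocol layer:

# contact.findNode(key) becomes network_protocol.sendRPC(contact, 'findNode', (key,))
contact = Contact(node_id, '10.0.0.5', 4000, network_protocol)
d = contact.findNode(some_key)   # returns a Deferred that fires with the remote result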
213 lbrynet/dht/datastore.py Normal file
@@ -0,0 +1,213 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
import UserDict
|
||||
#import sqlite3
|
||||
import cPickle as pickle
|
||||
import time
|
||||
import os
|
||||
import constants
|
||||
|
||||
|
||||
|
||||
class DataStore(UserDict.DictMixin):
|
||||
""" Interface for classes implementing physical storage (for data
|
||||
published via the "STORE" RPC) for the Kademlia DHT
|
||||
|
||||
@note: This provides an interface for a dict-like object
|
||||
"""
|
||||
def keys(self):
|
||||
""" Return a list of the keys in this data store """
|
||||
|
||||
# def lastPublished(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was last published """
|
||||
|
||||
# def originalPublisherID(self, key):
|
||||
# """ Get the original publisher of the data's node ID
|
||||
#
|
||||
# @param key: The key that identifies the stored data
|
||||
# @type key: str
|
||||
#
|
||||
# @return: Return the node ID of the original publisher of the
|
||||
# C{(key, value)} pair identified by C{key}.
|
||||
# """
|
||||
|
||||
# def originalPublishTime(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was originally published """
|
||||
|
||||
# def setItem(self, key, value, lastPublished, originallyPublished, originalPublisherID):
|
||||
# """ Set the value of the (key, value) pair identified by C{key};
|
||||
# this should set the "last published" value for the (key, value)
|
||||
# pair to the current time
|
||||
# """
|
||||
|
||||
def addPeerToBlob(self, key, value, lastPublished, originallyPublished, originalPublisherID):
|
||||
pass
|
||||
|
||||
# def __getitem__(self, key):
|
||||
# """ Get the value identified by C{key} """
|
||||
|
||||
# def __setitem__(self, key, value):
|
||||
# """ Convenience wrapper to C{setItem}; this accepts a tuple in the
|
||||
# format: (value, lastPublished, originallyPublished, originalPublisherID) """
|
||||
# self.setItem(key, *value)
|
||||
|
||||
# def __delitem__(self, key):
|
||||
# """ Delete the specified key (and its value) """
|
||||
|
||||
class DictDataStore(DataStore):
|
||||
""" A datastore using an in-memory Python dictionary """
|
||||
def __init__(self):
|
||||
# Dictionary format:
|
||||
# { <key>: (<value>, <lastPublished>, <originallyPublished> <originalPublisherID>) }
|
||||
self._dict = {}
|
||||
|
||||
def keys(self):
|
||||
""" Return a list of the keys in this data store """
|
||||
return self._dict.keys()
|
||||
|
||||
# def lastPublished(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was last published """
|
||||
# return self._dict[key][1]
|
||||
|
||||
# def originalPublisherID(self, key):
|
||||
# """ Get the original publisher of the data's node ID
|
||||
#
|
||||
# @param key: The key that identifies the stored data
|
||||
# @type key: str
|
||||
#
|
||||
# @return: Return the node ID of the original publisher of the
|
||||
# C{(key, value)} pair identified by C{key}.
|
||||
# """
|
||||
# return self._dict[key][3]
|
||||
|
||||
# def originalPublishTime(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was originally published """
|
||||
# return self._dict[key][2]
|
||||
|
||||
def removeExpiredPeers(self):
|
||||
now = int(time.time())
|
||||
def notExpired(peer):
|
||||
if (now - peer[2]) > constants.dataExpireTimeout:
|
||||
return False
|
||||
return True
|
||||
for key in self._dict.keys():
|
||||
unexpired_peers = filter(notExpired, self._dict[key])
|
||||
self._dict[key] = unexpired_peers
|
||||
|
||||
def hasPeersForBlob(self, key):
|
||||
if key in self._dict and len(self._dict[key]) > 0:
|
||||
return True
|
||||
return False
|
||||
|
||||
def addPeerToBlob(self, key, value, lastPublished, originallyPublished, originalPublisherID):
|
||||
if key in self._dict:
|
||||
self._dict[key].append((value, lastPublished, originallyPublished, originalPublisherID))
|
||||
else:
|
||||
self._dict[key] = [(value, lastPublished, originallyPublished, originalPublisherID)]
|
||||
|
||||
def getPeersForBlob(self, key):
|
||||
if key in self._dict:
|
||||
return [val[0] for val in self._dict[key]]
|
||||
|
||||
# def setItem(self, key, value, lastPublished, originallyPublished, originalPublisherID):
|
||||
# """ Set the value of the (key, value) pair identified by C{key};
|
||||
# this should set the "last published" value for the (key, value)
|
||||
# pair to the current time
|
||||
# """
|
||||
# self._dict[key] = (value, lastPublished, originallyPublished, originalPublisherID)
|
||||
|
||||
# def __getitem__(self, key):
|
||||
# """ Get the value identified by C{key} """
|
||||
# return self._dict[key][0]
|
||||
|
||||
# def __delitem__(self, key):
|
||||
# """ Delete the specified key (and its value) """
|
||||
# del self._dict[key]
|
||||
|
||||
|
||||
#class SQLiteDataStore(DataStore):
|
||||
# """ Example of a SQLite database-based datastore
|
||||
# """
|
||||
# def __init__(self, dbFile=':memory:'):
|
||||
# """
|
||||
# @param dbFile: The name of the file containing the SQLite database; if
|
||||
# unspecified, an in-memory database is used.
|
||||
# @type dbFile: str
|
||||
# """
|
||||
# createDB = not os.path.exists(dbFile)
|
||||
# self._db = sqlite3.connect(dbFile)
|
||||
# self._db.isolation_level = None
|
||||
# self._db.text_factory = str
|
||||
# if createDB:
|
||||
# self._db.execute('CREATE TABLE data(key, value, lastPublished, originallyPublished, originalPublisherID)')
|
||||
# self._cursor = self._db.cursor()
|
||||
|
||||
# def keys(self):
|
||||
# """ Return a list of the keys in this data store """
|
||||
# keys = []
|
||||
# try:
|
||||
# self._cursor.execute("SELECT key FROM data")
|
||||
# for row in self._cursor:
|
||||
# keys.append(row[0].decode('hex'))
|
||||
# finally:
|
||||
# return keys
|
||||
|
||||
# def lastPublished(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was last published """
|
||||
# return int(self._dbQuery(key, 'lastPublished'))
|
||||
|
||||
# def originalPublisherID(self, key):
|
||||
# """ Get the original publisher of the data's node ID
|
||||
|
||||
# @param key: The key that identifies the stored data
|
||||
# @type key: str
|
||||
|
||||
# @return: Return the node ID of the original publisher of the
|
||||
# C{(key, value)} pair identified by C{key}.
|
||||
# """
|
||||
# return self._dbQuery(key, 'originalPublisherID')
|
||||
|
||||
# def originalPublishTime(self, key):
|
||||
# """ Get the time the C{(key, value)} pair identified by C{key}
|
||||
# was originally published """
|
||||
# return int(self._dbQuery(key, 'originallyPublished'))
|
||||
|
||||
# def setItem(self, key, value, lastPublished, originallyPublished, originalPublisherID):
|
||||
# # Encode the key so that it doesn't corrupt the database
|
||||
# encodedKey = key.encode('hex')
|
||||
# self._cursor.execute("select key from data where key=:reqKey", {'reqKey': encodedKey})
|
||||
# if self._cursor.fetchone() == None:
|
||||
# self._cursor.execute('INSERT INTO data(key, value, lastPublished, originallyPublished, originalPublisherID) VALUES (?, ?, ?, ?, ?)', (encodedKey, buffer(pickle.dumps(value, pickle.HIGHEST_PROTOCOL)), lastPublished, originallyPublished, originalPublisherID))
|
||||
# else:
|
||||
# self._cursor.execute('UPDATE data SET value=?, lastPublished=?, originallyPublished=?, originalPublisherID=? WHERE key=?', (buffer(pickle.dumps(value, pickle.HIGHEST_PROTOCOL)), lastPublished, originallyPublished, originalPublisherID, encodedKey))
|
||||
|
||||
# def _dbQuery(self, key, columnName, unpickle=False):
|
||||
# try:
|
||||
# self._cursor.execute("SELECT %s FROM data WHERE key=:reqKey" % columnName, {'reqKey': key.encode('hex')})
|
||||
# row = self._cursor.fetchone()
|
||||
# value = str(row[0])
|
||||
# except TypeError:
|
||||
# raise KeyError, key
|
||||
# else:
|
||||
# if unpickle:
|
||||
# return pickle.loads(value)
|
||||
# else:
|
||||
# return value
|
||||
|
||||
# def __getitem__(self, key):
|
||||
# return self._dbQuery(key, 'value', unpickle=True)
|
||||
|
||||
# def __delitem__(self, key):
|
||||
# self._cursor.execute("DELETE FROM data WHERE key=:reqKey", {'reqKey': key.encode('hex')})
144 lbrynet/dht/encoding.py Normal file
@@ -0,0 +1,144 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
class DecodeError(Exception):
|
||||
""" Should be raised by an C{Encoding} implementation if decode operation
|
||||
fails
|
||||
"""
|
||||
|
||||
class Encoding(object):
|
||||
""" Interface for RPC message encoders/decoders
|
||||
|
||||
All encoding implementations used with this library should inherit and
|
||||
implement this.
|
||||
"""
|
||||
def encode(self, data):
|
||||
""" Encode the specified data
|
||||
|
||||
@param data: The data to encode
|
||||
This method has to support encoding of the following
|
||||
types: C{str}, C{int} and C{long}
|
||||
Any additional data types may be supported as long as the
|
||||
implementing class's C{decode()} method can successfully
|
||||
decode them.
|
||||
|
||||
@return: The encoded data
|
||||
@rtype: str
|
||||
"""
|
||||
def decode(self, data):
|
||||
""" Decode the specified data string
|
||||
|
||||
@param data: The data (byte string) to decode.
|
||||
@type data: str
|
||||
|
||||
@return: The decoded data (in its correct type)
|
||||
"""
|
||||
|
||||
class Bencode(Encoding):
|
||||
""" Implementation of a Bencode-based algorithm (Bencode is the encoding
|
||||
algorithm used by Bittorrent).
|
||||
|
||||
@note: This algorithm differs from the "official" Bencode algorithm in
|
||||
that it can encode/decode floating point values in addition to
|
||||
integers.
|
||||
"""
|
||||
|
||||
def encode(self, data):
|
||||
""" Encoder implementation of the Bencode algorithm
|
||||
|
||||
@param data: The data to encode
|
||||
@type data: int, long, tuple, list, dict or str
|
||||
|
||||
@return: The encoded data
|
||||
@rtype: str
|
||||
"""
|
||||
if type(data) in (int, long):
|
||||
return 'i%de' % data
|
||||
elif type(data) == str:
|
||||
return '%d:%s' % (len(data), data)
|
||||
elif type(data) in (list, tuple):
|
||||
encodedListItems = ''
|
||||
for item in data:
|
||||
encodedListItems += self.encode(item)
|
||||
return 'l%se' % encodedListItems
|
||||
elif type(data) == dict:
|
||||
encodedDictItems = ''
|
||||
keys = data.keys()
|
||||
keys.sort()
|
||||
for key in keys:
|
||||
encodedDictItems += self.encode(key)
|
||||
encodedDictItems += self.encode(data[key])
|
||||
return 'd%se' % encodedDictItems
|
||||
elif type(data) == float:
|
||||
# This (float data type) is a non-standard extension to the original Bencode algorithm
|
||||
return 'f%fe' % data
|
||||
elif data == None:
|
||||
# This (None/NULL data type) is a non-standard extension to the original Bencode algorithm
|
||||
return 'n'
|
||||
else:
|
||||
print data
|
||||
raise TypeError, "Cannot bencode '%s' object" % type(data)
|
||||
|
||||
def decode(self, data):
|
||||
""" Decoder implementation of the Bencode algorithm
|
||||
|
||||
@param data: The encoded data
|
||||
@type data: str
|
||||
|
||||
@note: This is a convenience wrapper for the recursive decoding
|
||||
algorithm, C{_decodeRecursive}
|
||||
|
||||
@return: The decoded data, as a native Python type
|
||||
@rtype: int, list, dict or str
|
||||
"""
|
||||
if len(data) == 0:
|
||||
raise DecodeError, 'Cannot decode empty string'
|
||||
return self._decodeRecursive(data)[0]
|
||||
|
||||
@staticmethod
|
||||
def _decodeRecursive(data, startIndex=0):
|
||||
""" Actual implementation of the recursive Bencode algorithm
|
||||
|
||||
Do not call this; use C{decode()} instead
|
||||
"""
|
||||
if data[startIndex] == 'i':
|
||||
endPos = data[startIndex:].find('e')+startIndex
|
||||
return (int(data[startIndex+1:endPos]), endPos+1)
|
||||
elif data[startIndex] == 'l':
|
||||
startIndex += 1
|
||||
decodedList = []
|
||||
while data[startIndex] != 'e':
|
||||
listData, startIndex = Bencode._decodeRecursive(data, startIndex)
|
||||
decodedList.append(listData)
|
||||
return (decodedList, startIndex+1)
|
||||
elif data[startIndex] == 'd':
|
||||
startIndex += 1
|
||||
decodedDict = {}
|
||||
while data[startIndex] != 'e':
|
||||
key, startIndex = Bencode._decodeRecursive(data, startIndex)
|
||||
value, startIndex = Bencode._decodeRecursive(data, startIndex)
|
||||
decodedDict[key] = value
|
||||
return (decodedDict, startIndex)
|
||||
elif data[startIndex] == 'f':
|
||||
# This (float data type) is a non-standard extension to the original Bencode algorithm
|
||||
endPos = data[startIndex:].find('e')+startIndex
|
||||
return (float(data[startIndex+1:endPos]), endPos+1)
|
||||
elif data[startIndex] == 'n':
|
||||
# This (None/NULL data type) is a non-standard extension to the original Bencode algorithm
|
||||
return (None, startIndex+1)
|
||||
else:
|
||||
splitPos = data[startIndex:].find(':')+startIndex
|
||||
try:
|
||||
length = int(data[startIndex:splitPos])
|
||||
except ValueError, e:
|
||||
raise DecodeError, e
|
||||
startIndex = splitPos+1
|
||||
endPos = startIndex+length
|
||||
bytes = data[startIndex:endPos]
|
||||
return (bytes, endPos)
|
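A quick round-trip through the Bencode class above; the wire string in the comment is illustrative:

from lbrynet.dht.encoding import Bencode

encoder = Bencode()

# Dicts are encoded with sorted keys; lists/tuples are encoded recursively.
original = {'port': 3333, 'hashes': ['abc', 'def'], 'weight': 0.5}
wire = encoder.encode(original)   # e.g. 'd6:hashesl3:abc3:defe4:porti3333e6:weightf0.500000ee'
restored = encoder.decode(wire)

assert restored['port'] == 3333
assert restored['hashes'] == ['abc', 'def']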
35 lbrynet/dht/hashwatcher.py Normal file
@@ -0,0 +1,35 @@

from collections import Counter
import datetime


class HashWatcher():
    def __init__(self, ttl=600):
        self.ttl = ttl
        self.hashes = []
        self.next_tick = None

    def tick(self):

        from twisted.internet import reactor

        self._remove_old_hashes()
        self.next_tick = reactor.callLater(10, self.tick)

    def stop(self):
        if self.next_tick is not None:
            self.next_tick.cancel()
            self.next_tick = None

    def add_requested_hash(self, hashsum, from_ip):
        matching_hashes = [h for h in self.hashes if h[0] == hashsum and h[2] == from_ip]
        if len(matching_hashes) == 0:
            self.hashes.append((hashsum, datetime.datetime.now(), from_ip))

    def most_popular_hashes(self, num_to_return=10):
        hash_counter = Counter([h[0] for h in self.hashes])
        return hash_counter.most_common(num_to_return)

    def _remove_old_hashes(self):
        # Keep only requests that are younger than the configured ttl
        remove_time = datetime.datetime.now() - datetime.timedelta(seconds=self.ttl)
        self.hashes = [h for h in self.hashes if h[1] > remove_time]
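A rough sketch of driving HashWatcher from a Twisted reactor; the surrounding setup is hypothetical, only the HashWatcher calls come from the class above:

from twisted.internet import reactor
from lbrynet.dht.hashwatcher import HashWatcher

watcher = HashWatcher(ttl=600)
watcher.tick()                                    # start the 10-second pruning loop

# Record that a peer at 10.0.0.5 asked us about a blob hash
watcher.add_requested_hash("deadbeef" * 12, "10.0.0.5")

def report():
    # [(hashsum, request_count), ...] for the most requested hashes
    print watcher.most_popular_hashes(5)
    watcher.stop()
    reactor.stop()

reactor.callLater(30, report)
reactor.run()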
134 lbrynet/dht/kbucket.py Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env python
#
# This library is free software, distributed under the terms of
# the GNU Lesser General Public License Version 3, or any later version.
# See the COPYING file included in this archive
#
# The docstrings in this module contain epytext markup; API documentation
# may be created by processing this file with epydoc: http://epydoc.sf.net

import constants


class BucketFull(Exception):
    """ Raised when the bucket is full """


class KBucket(object):
    """ Description - later
    """
    def __init__(self, rangeMin, rangeMax):
        """
        @param rangeMin: The lower boundary for the range in the n-bit ID
                         space covered by this k-bucket
        @param rangeMax: The upper boundary for the range in the ID space
                         covered by this k-bucket
        """
        self.lastAccessed = 0
        self.rangeMin = rangeMin
        self.rangeMax = rangeMax
        self._contacts = list()

    def addContact(self, contact):
        """ Add contact to _contact list in the right order. This will move the
        contact to the end of the k-bucket if it is already present.

        @raise kademlia.kbucket.BucketFull: Raised when the bucket is full and
                                            the contact isn't in the bucket
                                            already

        @param contact: The contact to add
        @type contact: kademlia.contact.Contact
        """
        if contact in self._contacts:
            # Move the existing contact to the end of the list
            # - using the new contact to allow add-on data (e.g. optimization-specific stuff) to be updated as well
            self._contacts.remove(contact)
            self._contacts.append(contact)
        elif len(self._contacts) < constants.k:
            self._contacts.append(contact)
        else:
            raise BucketFull("No space in bucket to insert contact")

    def getContact(self, contactID):
        """ Get the contact with the specified node ID """
        index = self._contacts.index(contactID)
        return self._contacts[index]

    def getContacts(self, count=-1, excludeContact=None):
        """ Returns a list containing up to the first count number of contacts

        @param count: The amount of contacts to return (if 0 or less, return
                      all contacts)
        @type count: int
        @param excludeContact: A contact to exclude; if this contact is in
                               the list of returned values, it will be
                               discarded before returning. If a C{str} is
                               passed as this argument, it must be the
                               contact's ID.
        @type excludeContact: kademlia.contact.Contact or str

        @raise IndexError: If the number of requested contacts is too large

        @return: Return up to the first count number of contacts in a list.
                 If no contacts are present an empty list is returned.
        @rtype: list
        """
        # Return all contacts in bucket
        if count <= 0:
            count = len(self._contacts)

        # Get current contact number
        currentLen = len(self._contacts)

        # If count greater than k - return only k contacts
        if count > constants.k:
            count = constants.k

        # Check if count value in range and,
        # if count number of contacts are available
        if not currentLen:
            contactList = list()

        # length of list less than requested amount
        elif currentLen < count:
            contactList = self._contacts[0:currentLen]
        # enough contacts in list
        else:
            contactList = self._contacts[0:count]

        if excludeContact in contactList:
            contactList.remove(excludeContact)

        return contactList

    def removeContact(self, contact):
        """ Remove given contact from list

        @param contact: The contact to remove, or a string containing the
                        contact's node ID
        @type contact: kademlia.contact.Contact or str

        @raise ValueError: The specified contact is not in this bucket
        """
        self._contacts.remove(contact)

    def keyInRange(self, key):
        """ Tests whether the specified key (i.e. node ID) is in the range
        of the n-bit ID space covered by this k-bucket (in other words, it
        returns whether or not the specified key should be placed in this
        k-bucket)

        @param key: The key to test
        @type key: str or int

        @return: C{True} if the key is in this k-bucket's range, or C{False}
                 if not.
        @rtype: bool
        """
        if isinstance(key, str):
            key = long(key.encode('hex'), 16)
        return self.rangeMin <= key < self.rangeMax

    def __len__(self):
        return len(self._contacts)
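A minimal sketch of filling a KBucket. Contact is constructed positionally as in protocol.py (node ID, IP, UDP port, protocol), and constants.k / constants.key_bits come from the package's constants module; everything else here is illustrative:

from lbrynet.dht import constants
from lbrynet.dht.contact import Contact
from lbrynet.dht.kbucket import KBucket, BucketFull

bucket = KBucket(rangeMin=0, rangeMax=2 ** constants.key_bits)

def add_peer(node_id, ip, port):
    try:
        bucket.addContact(Contact(node_id, ip, port, None))
    except BucketFull:
        # The routing table would now either split this bucket or ping the
        # least-recently seen contact (bucket._contacts[0]) before evicting it.
        print "bucket already holds %d contacts" % constants.k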
72 lbrynet/dht/msgformat.py Normal file
@@ -0,0 +1,72 @@
#!/usr/bin/env python
#
# This library is free software, distributed under the terms of
# the GNU Lesser General Public License Version 3, or any later version.
# See the COPYING file included in this archive
#
# The docstrings in this module contain epytext markup; API documentation
# may be created by processing this file with epydoc: http://epydoc.sf.net

import msgtypes


class MessageTranslator(object):
    """ Interface for RPC message translators/formatters

    Classes inheriting from this should provide translation services between
    the classes used internally by this Kademlia implementation and the actual
    data that is transmitted between nodes.
    """
    def fromPrimitive(self, msgPrimitive):
        """ Create an RPC Message from a message's string representation

        @param msgPrimitive: The unencoded primitive representation of a message
        @type msgPrimitive: str, int, list or dict

        @return: The translated message object
        @rtype: entangled.kademlia.msgtypes.Message
        """

    def toPrimitive(self, message):
        """ Create a string representation of a message

        @param message: The message object
        @type message: msgtypes.Message

        @return: The message's primitive representation in a particular
                 messaging format
        @rtype: str, int, list or dict
        """


class DefaultFormat(MessageTranslator):
    """ The default on-the-wire message format for this library """
    typeRequest, typeResponse, typeError = range(3)
    headerType, headerMsgID, headerNodeID, headerPayload, headerArgs = range(5)

    def fromPrimitive(self, msgPrimitive):
        msgType = msgPrimitive[self.headerType]
        if msgType == self.typeRequest:
            msg = msgtypes.RequestMessage(msgPrimitive[self.headerNodeID], msgPrimitive[self.headerPayload], msgPrimitive[self.headerArgs], msgPrimitive[self.headerMsgID])
        elif msgType == self.typeResponse:
            msg = msgtypes.ResponseMessage(msgPrimitive[self.headerMsgID], msgPrimitive[self.headerNodeID], msgPrimitive[self.headerPayload])
        elif msgType == self.typeError:
            msg = msgtypes.ErrorMessage(msgPrimitive[self.headerMsgID], msgPrimitive[self.headerNodeID], msgPrimitive[self.headerPayload], msgPrimitive[self.headerArgs])
        else:
            # Unknown message, no payload
            msg = msgtypes.Message(msgPrimitive[self.headerMsgID], msgPrimitive[self.headerNodeID])
        return msg

    def toPrimitive(self, message):
        msg = {self.headerMsgID: message.id,
               self.headerNodeID: message.nodeID}
        if isinstance(message, msgtypes.RequestMessage):
            msg[self.headerType] = self.typeRequest
            msg[self.headerPayload] = message.request
            msg[self.headerArgs] = message.args
        elif isinstance(message, msgtypes.ErrorMessage):
            msg[self.headerType] = self.typeError
            msg[self.headerPayload] = message.exceptionType
            msg[self.headerArgs] = message.response
        elif isinstance(message, msgtypes.ResponseMessage):
            msg[self.headerType] = self.typeResponse
            msg[self.headerPayload] = message.response
        return msg
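A sketch of a message round-trip through DefaultFormat; 'findNode' is used as an example method name and the node/key IDs are dummy 48-byte strings:

from lbrynet.dht import msgtypes, msgformat

translator = msgformat.DefaultFormat()

request = msgtypes.RequestMessage('\x01' * 48, 'findNode', ('\x02' * 48,))
primitive = translator.toPrimitive(request)
# primitive is a plain dict keyed by the small header integers, e.g.
# {0: 0, 1: <rpc id>, 2: <node id>, 3: 'findNode', 4: (<key>,)},
# which Bencode can serialize directly.

decoded = translator.fromPrimitive(primitive)
assert decoded.request == 'findNode'
assert decoded.id == request.id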
46 lbrynet/dht/msgtypes.py Normal file
@@ -0,0 +1,46 @@
#!/usr/bin/env python
#
# This library is free software, distributed under the terms of
# the GNU Lesser General Public License Version 3, or any later version.
# See the COPYING file included in this archive
#
# The docstrings in this module contain epytext markup; API documentation
# may be created by processing this file with epydoc: http://epydoc.sf.net

import hashlib
import random


class Message(object):
    """ Base class for messages - all "unknown" messages use this class """
    def __init__(self, rpcID, nodeID):
        self.id = rpcID
        self.nodeID = nodeID


class RequestMessage(Message):
    """ Message containing an RPC request """
    def __init__(self, nodeID, method, methodArgs, rpcID=None):
        if rpcID == None:
            hash = hashlib.sha384()
            hash.update(str(random.getrandbits(255)))
            rpcID = hash.digest()
        Message.__init__(self, rpcID, nodeID)
        self.request = method
        self.args = methodArgs


class ResponseMessage(Message):
    """ Message containing the result from a successful RPC request """
    def __init__(self, rpcID, nodeID, response):
        Message.__init__(self, rpcID, nodeID)
        self.response = response


class ErrorMessage(ResponseMessage):
    """ Message containing the error from an unsuccessful RPC request """
    def __init__(self, rpcID, nodeID, exceptionType, errorMessage):
        ResponseMessage.__init__(self, rpcID, nodeID, errorMessage)
        if isinstance(exceptionType, type):
            self.exceptionType = '%s.%s' % (exceptionType.__module__, exceptionType.__name__)
        else:
            self.exceptionType = exceptionType
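A small illustration of how ErrorMessage flattens an exception type into a dotted path string (which protocol.py later tries to re-instantiate on the caller's side); the IDs are dummies:

from lbrynet.dht import msgtypes

err = msgtypes.ErrorMessage(rpcID='\x00' * 20, nodeID='\x01' * 48,
                            exceptionType=ValueError,
                            errorMessage='bad key length')
print err.exceptionType   # 'exceptions.ValueError' under Python 2
print err.response        # 'bad key length'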
1011 lbrynet/dht/node.py Normal file
File diff suppressed because it is too large
305 lbrynet/dht/protocol.py Normal file
@@ -0,0 +1,305 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
import time
|
||||
|
||||
from twisted.internet import protocol, defer
|
||||
from twisted.python import failure
|
||||
import twisted.internet.reactor
|
||||
|
||||
import constants
|
||||
import encoding
|
||||
import msgtypes
|
||||
import msgformat
|
||||
from contact import Contact
|
||||
|
||||
reactor = twisted.internet.reactor
|
||||
|
||||
class TimeoutError(Exception):
|
||||
""" Raised when a RPC times out """
|
||||
|
||||
class KademliaProtocol(protocol.DatagramProtocol):
|
||||
""" Implements all low-level network-related functions of a Kademlia node """
|
||||
msgSizeLimit = constants.udpDatagramMaxSize-26
|
||||
maxToSendDelay = 10**-3#0.05
|
||||
minToSendDelay = 10**-5#0.01
|
||||
|
||||
def __init__(self, node, msgEncoder=encoding.Bencode(), msgTranslator=msgformat.DefaultFormat()):
|
||||
self._node = node
|
||||
self._encoder = msgEncoder
|
||||
self._translator = msgTranslator
|
||||
self._sentMessages = {}
|
||||
self._partialMessages = {}
|
||||
self._partialMessagesProgress = {}
|
||||
self._next = 0
|
||||
self._callLaterList = {}
|
||||
|
||||
def sendRPC(self, contact, method, args, rawResponse=False):
|
||||
""" Sends an RPC to the specified contact
|
||||
|
||||
@param contact: The contact (remote node) to send the RPC to
|
||||
@type contact: kademlia.contacts.Contact
|
||||
@param method: The name of remote method to invoke
|
||||
@type method: str
|
||||
@param args: A list of (non-keyword) arguments to pass to the remote
|
||||
method, in the correct order
|
||||
@type args: tuple
|
||||
@param rawResponse: If this is set to C{True}, the caller of this RPC
|
||||
will receive a tuple containing the actual response
|
||||
message object and the originating address tuple as
|
||||
a result; in other words, it will not be
|
||||
interpreted by this class. Unless something special
|
||||
needs to be done with the metadata associated with
|
||||
the message, this should remain C{False}.
|
||||
@type rawResponse: bool
|
||||
|
||||
@return: This immediately returns a deferred object, which will return
|
||||
the result of the RPC call, or raise the relevant exception
|
||||
if the remote node raised one. If C{rawResponse} is set to
|
||||
C{True}, however, it will always return the actual response
|
||||
message (which may be a C{ResponseMessage} or an
|
||||
C{ErrorMessage}).
|
||||
@rtype: twisted.internet.defer.Deferred
|
||||
"""
|
||||
msg = msgtypes.RequestMessage(self._node.id, method, args)
|
||||
msgPrimitive = self._translator.toPrimitive(msg)
|
||||
encodedMsg = self._encoder.encode(msgPrimitive)
|
||||
|
||||
df = defer.Deferred()
|
||||
if rawResponse:
|
||||
df._rpcRawResponse = True
|
||||
|
||||
# Set the RPC timeout timer
|
||||
timeoutCall = reactor.callLater(constants.rpcTimeout, self._msgTimeout, msg.id) #IGNORE:E1101
|
||||
# Transmit the data
|
||||
self._send(encodedMsg, msg.id, (contact.address, contact.port))
|
||||
self._sentMessages[msg.id] = (contact.id, df, timeoutCall)
|
||||
return df
|
||||
|
||||
def datagramReceived(self, datagram, address):
|
||||
""" Handles and parses incoming RPC messages (and responses)
|
||||
|
||||
@note: This is automatically called by Twisted when the protocol
|
||||
receives a UDP datagram
|
||||
"""
|
||||
if datagram[0] == '\x00' and datagram[25] == '\x00':
|
||||
totalPackets = (ord(datagram[1]) << 8) | ord(datagram[2])
|
||||
msgID = datagram[5:25]
|
||||
seqNumber = (ord(datagram[3]) << 8) | ord(datagram[4])
|
||||
if msgID not in self._partialMessages:
|
||||
self._partialMessages[msgID] = {}
|
||||
self._partialMessages[msgID][seqNumber] = datagram[26:]
|
||||
if len(self._partialMessages[msgID]) == totalPackets:
|
||||
keys = self._partialMessages[msgID].keys()
|
||||
keys.sort()
|
||||
data = ''
|
||||
for key in keys:
|
||||
data += self._partialMessages[msgID][key]
|
||||
datagram = data
|
||||
del self._partialMessages[msgID]
|
||||
else:
|
||||
return
|
||||
try:
|
||||
msgPrimitive = self._encoder.decode(datagram)
|
||||
except encoding.DecodeError:
|
||||
# We received some rubbish here
|
||||
return
|
||||
|
||||
message = self._translator.fromPrimitive(msgPrimitive)
|
||||
remoteContact = Contact(message.nodeID, address[0], address[1], self)
|
||||
|
||||
# Refresh the remote node's details in the local node's k-buckets
|
||||
self._node.addContact(remoteContact)
|
||||
|
||||
if isinstance(message, msgtypes.RequestMessage):
|
||||
# This is an RPC method request
|
||||
self._handleRPC(remoteContact, message.id, message.request, message.args)
|
||||
elif isinstance(message, msgtypes.ResponseMessage):
|
||||
# Find the message that triggered this response
|
||||
if self._sentMessages.has_key(message.id):
|
||||
# Cancel timeout timer for this RPC
|
||||
df, timeoutCall = self._sentMessages[message.id][1:3]
|
||||
timeoutCall.cancel()
|
||||
del self._sentMessages[message.id]
|
||||
|
||||
if hasattr(df, '_rpcRawResponse'):
|
||||
# The RPC requested that the raw response message and originating address be returned; do not interpret it
|
||||
df.callback((message, address))
|
||||
elif isinstance(message, msgtypes.ErrorMessage):
|
||||
# The RPC request raised a remote exception; raise it locally
|
||||
if message.exceptionType.startswith('exceptions.'):
|
||||
exceptionClassName = message.exceptionType[11:]
|
||||
else:
|
||||
localModuleHierarchy = self.__module__.split('.')
|
||||
remoteHierarchy = message.exceptionType.split('.')
|
||||
#strip the remote hierarchy
|
||||
while remoteHierarchy[0] == localModuleHierarchy[0]:
|
||||
remoteHierarchy.pop(0)
|
||||
localModuleHierarchy.pop(0)
|
||||
exceptionClassName = '.'.join(remoteHierarchy)
|
||||
remoteException = None
|
||||
try:
|
||||
exec 'remoteException = %s("%s")' % (exceptionClassName, message.response)
|
||||
except Exception:
|
||||
# We could not recreate the exception; create a generic one
|
||||
remoteException = Exception(message.response)
|
||||
df.errback(remoteException)
|
||||
else:
|
||||
# We got a result from the RPC
|
||||
df.callback(message.response)
|
||||
else:
|
||||
# If the original message isn't found, it must have timed out
|
||||
#TODO: we should probably do something with this...
|
||||
pass
|
||||
|
||||
def _send(self, data, rpcID, address):
|
||||
""" Transmit the specified data over UDP, breaking it up into several
|
||||
packets if necessary
|
||||
|
||||
If the data is spread over multiple UDP datagrams, the packets have the
|
||||
following structure::
|
||||
|Transmission|Total number|Sequence number|  RPC ID  |Header end|
|  type ID   | of packets |of this packet |          | indicator|
|  (1 byte)  |  (2 bytes) |   (2 bytes)   |(20 bytes)| (1 byte) |
|
||||
|
||||
@note: The header used for breaking up large data segments will
|
||||
possibly be moved out of the KademliaProtocol class in the
|
||||
future, into something similar to a message translator/encoder
|
||||
class (see C{kademlia.msgformat} and C{kademlia.encoding}).
|
||||
"""
|
||||
if len(data) > self.msgSizeLimit:
|
||||
# We have to spread the data over multiple UDP datagrams, and provide sequencing information
|
||||
# 1st byte is transmission type id, bytes 2 & 3 are the total number of packets in this transmission, bytes 4 & 5 are the sequence number for this specific packet
|
||||
totalPackets = len(data) / self.msgSizeLimit
|
||||
if len(data) % self.msgSizeLimit > 0:
|
||||
totalPackets += 1
|
||||
encTotalPackets = chr(totalPackets >> 8) + chr(totalPackets & 0xff)
|
||||
seqNumber = 0
|
||||
startPos = 0
|
||||
while seqNumber < totalPackets:
|
||||
#reactor.iterate() #IGNORE:E1101
|
||||
packetData = data[startPos:startPos+self.msgSizeLimit]
|
||||
encSeqNumber = chr(seqNumber >> 8) + chr(seqNumber & 0xff)
|
||||
txData = '\x00%s%s%s\x00%s' % (encTotalPackets, encSeqNumber, rpcID, packetData)
|
||||
self._sendNext(txData, address)
|
||||
|
||||
startPos += self.msgSizeLimit
|
||||
seqNumber += 1
|
||||
else:
|
||||
self._sendNext(data, address)
|
||||
|
||||
def _sendNext(self, txData, address):
|
||||
""" Send the next UDP packet """
|
||||
ts = time.time()
|
||||
delay = 0
|
||||
if ts >= self._next:
|
||||
delay = self.minToSendDelay
|
||||
self._next = ts + self.minToSendDelay
|
||||
else:
|
||||
delay = (self._next-ts) + self.maxToSendDelay
|
||||
self._next += self.maxToSendDelay
|
||||
if self.transport:
|
||||
laterCall = reactor.callLater(delay, self.transport.write, txData, address)
|
||||
for key in self._callLaterList.keys():
|
||||
if key <= ts:
|
||||
del self._callLaterList[key]
|
||||
self._callLaterList[self._next] = laterCall
|
||||
|
||||
def _sendResponse(self, contact, rpcID, response):
|
||||
""" Send a RPC response to the specified contact
|
||||
"""
|
||||
msg = msgtypes.ResponseMessage(rpcID, self._node.id, response)
|
||||
msgPrimitive = self._translator.toPrimitive(msg)
|
||||
encodedMsg = self._encoder.encode(msgPrimitive)
|
||||
self._send(encodedMsg, rpcID, (contact.address, contact.port))
|
||||
|
||||
def _sendError(self, contact, rpcID, exceptionType, exceptionMessage):
|
||||
""" Send an RPC error message to the specified contact
|
||||
"""
|
||||
msg = msgtypes.ErrorMessage(rpcID, self._node.id, exceptionType, exceptionMessage)
|
||||
msgPrimitive = self._translator.toPrimitive(msg)
|
||||
encodedMsg = self._encoder.encode(msgPrimitive)
|
||||
self._send(encodedMsg, rpcID, (contact.address, contact.port))
|
||||
|
||||
def _handleRPC(self, senderContact, rpcID, method, args):
|
||||
""" Executes a local function in response to an RPC request """
|
||||
# Set up the deferred callchain
|
||||
def handleError(f):
|
||||
self._sendError(senderContact, rpcID, f.type, f.getErrorMessage())
|
||||
|
||||
def handleResult(result):
|
||||
self._sendResponse(senderContact, rpcID, result)
|
||||
|
||||
df = defer.Deferred()
|
||||
df.addCallback(handleResult)
|
||||
df.addErrback(handleError)
|
||||
|
||||
# Execute the RPC
|
||||
func = getattr(self._node, method, None)
|
||||
if callable(func) and hasattr(func, 'rpcmethod'):
|
||||
# Call the exposed Node method and return the result to the deferred callback chain
|
||||
try:
|
||||
##try:
|
||||
## # Try to pass the sender's node id to the function...
|
||||
result = func(*args, **{'_rpcNodeID': senderContact.id, '_rpcNodeContact': senderContact})
|
||||
##except TypeError:
|
||||
## # ...or simply call it if that fails
|
||||
## result = func(*args)
|
||||
except Exception, e:
|
||||
df.errback(failure.Failure(e))
|
||||
else:
|
||||
df.callback(result)
|
||||
else:
|
||||
# No such exposed method
|
||||
df.errback( failure.Failure( AttributeError('Invalid method: %s' % method) ) )
|
||||
|
||||
def _msgTimeout(self, messageID):
|
||||
""" Called when an RPC request message times out """
|
||||
# Find the message that timed out
|
||||
if self._sentMessages.has_key(messageID):
|
||||
remoteContactID, df = self._sentMessages[messageID][0:2]
|
||||
if self._partialMessages.has_key(messageID):
|
||||
# We are still receiving this message
|
||||
# See if any progress has been made; if not, kill the message
|
||||
if self._partialMessagesProgress.has_key(messageID):
|
||||
if len(self._partialMessagesProgress[messageID]) == len(self._partialMessages[messageID]):
|
||||
# No progress has been made
|
||||
del self._partialMessagesProgress[messageID]
|
||||
del self._partialMessages[messageID]
|
||||
df.errback(failure.Failure(TimeoutError(remoteContactID)))
|
||||
return
|
||||
# Reset the RPC timeout timer
|
||||
timeoutCall = reactor.callLater(constants.rpcTimeout, self._msgTimeout, messageID) #IGNORE:E1101
|
||||
self._sentMessages[messageID] = (remoteContactID, df, timeoutCall)
|
||||
return
|
||||
del self._sentMessages[messageID]
|
||||
# The message's destination node is now considered to be dead;
|
||||
# raise an (asynchronous) TimeoutError exception and update the host node
|
||||
self._node.removeContact(remoteContactID)
|
||||
df.errback(failure.Failure(TimeoutError(remoteContactID)))
|
||||
else:
|
||||
# This should never be reached
|
||||
print "ERROR: deferred timed out, but is not present in sent messages list!"
|
||||
|
||||
def stopProtocol(self):
|
||||
""" Called when the transport is disconnected.
|
||||
|
||||
Will only be called once, after all ports are disconnected.
|
||||
"""
|
||||
for key in self._callLaterList.keys():
|
||||
try:
|
||||
if key > time.time():
|
||||
self._callLaterList[key].cancel()
|
||||
except Exception, e:
|
||||
print e
|
||||
del self._callLaterList[key]
|
||||
#TODO: test: do we really need the reactor.iterate() call?
|
||||
reactor.iterate()
|
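A stand-alone sketch of the fragmentation header arithmetic used by _send() and datagramReceived() above: a 26-byte prefix holding a type byte, a 2-byte packet count, a 2-byte sequence number, the 20-byte RPC ID slice, and a terminator byte:

def encode_header(total_packets, seq_number, rpc_id):
    # '\x00' + 2-byte big-endian count + 2-byte big-endian sequence + rpc id + '\x00'
    return ('\x00'
            + chr(total_packets >> 8) + chr(total_packets & 0xff)
            + chr(seq_number >> 8) + chr(seq_number & 0xff)
            + rpc_id
            + '\x00')

def decode_header(datagram):
    total_packets = (ord(datagram[1]) << 8) | ord(datagram[2])
    seq_number = (ord(datagram[3]) << 8) | ord(datagram[4])
    rpc_id = datagram[5:25]
    payload = datagram[26:]
    return total_packets, seq_number, rpc_id, payload

header = encode_header(3, 0, 'R' * 20)
assert decode_header(header + 'chunk')[0:3] == (3, 0, 'R' * 20)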
422 lbrynet/dht/routingtable.py Normal file
@@ -0,0 +1,422 @@
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
# The docstrings in this module contain epytext markup; API documentation
|
||||
# may be created by processing this file with epydoc: http://epydoc.sf.net
|
||||
|
||||
import time, random
|
||||
|
||||
import constants
|
||||
import kbucket
|
||||
from protocol import TimeoutError
|
||||
|
||||
class RoutingTable(object):
|
||||
""" Interface for RPC message translators/formatters
|
||||
|
||||
Classes inheriting from this should provide a suitable routing table for
|
||||
a parent Node object (i.e. the local entity in the Kademlia network)
|
||||
"""
|
||||
def __init__(self, parentNodeID):
|
||||
"""
|
||||
@param parentNodeID: The n-bit node ID of the node to which this
|
||||
routing table belongs
|
||||
@type parentNodeID: str
|
||||
"""
|
||||
def addContact(self, contact):
|
||||
""" Add the given contact to the correct k-bucket; if it already
|
||||
exists, its status will be updated
|
||||
|
||||
@param contact: The contact to add to this node's k-buckets
|
||||
@type contact: kademlia.contact.Contact
|
||||
"""
|
||||
|
||||
def distance(self, keyOne, keyTwo):
|
||||
""" Calculate the XOR result between two string variables
|
||||
|
||||
@return: XOR result of two long variables
|
||||
@rtype: long
|
||||
"""
|
||||
valKeyOne = long(keyOne.encode('hex'), 16)
|
||||
valKeyTwo = long(keyTwo.encode('hex'), 16)
|
||||
return valKeyOne ^ valKeyTwo
|
||||
|
||||
def findCloseNodes(self, key, count, _rpcNodeID=None):
|
||||
""" Finds a number of known nodes closest to the node/value with the
|
||||
specified key.
|
||||
|
||||
@param key: the n-bit key (i.e. the node or value ID) to search for
|
||||
@type key: str
|
||||
@param count: the amount of contacts to return
|
||||
@type count: int
|
||||
@param _rpcNodeID: Used during RPC; this should be the sender's Node ID.
Whatever ID is passed in this parameter will be
excluded from the list of returned contacts.
|
||||
@type _rpcNodeID: str
|
||||
|
||||
@return: A list of node contacts (C{kademlia.contact.Contact instances})
|
||||
closest to the specified key.
|
||||
This method will return C{k} (or C{count}, if specified)
|
||||
contacts if at all possible; it will only return fewer if the
|
||||
node is returning all of the contacts that it knows of.
|
||||
@rtype: list
|
||||
"""
|
||||
def getContact(self, contactID):
|
||||
""" Returns the (known) contact with the specified node ID
|
||||
|
||||
@raise ValueError: No contact with the specified contact ID is known
|
||||
by this node
|
||||
"""
|
||||
def getRefreshList(self, startIndex=0, force=False):
|
||||
""" Finds all k-buckets that need refreshing, starting at the
|
||||
k-bucket with the specified index, and returns IDs to be searched for
|
||||
in order to refresh those k-buckets
|
||||
|
||||
@param startIndex: The index of the bucket to start refreshing at;
|
||||
this bucket and those further away from it will
|
||||
be refreshed. For example, when joining the
|
||||
network, this node will set this to the index of
|
||||
the bucket after the one containing its closest
|
||||
neighbour.
|
||||
@type startIndex: index
|
||||
@param force: If this is C{True}, all buckets (in the specified range)
|
||||
will be refreshed, regardless of the time they were last
|
||||
accessed.
|
||||
@type force: bool
|
||||
|
||||
@return: A list of node ID's that the parent node should search for
|
||||
in order to refresh the routing Table
|
||||
@rtype: list
|
||||
"""
|
||||
def removeContact(self, contactID):
|
||||
""" Remove the contact with the specified node ID from the routing
|
||||
table
|
||||
|
||||
@param contactID: The node ID of the contact to remove
|
||||
@type contactID: str
|
||||
"""
|
||||
def touchKBucket(self, key):
|
||||
""" Update the "last accessed" timestamp of the k-bucket which covers
|
||||
the range containing the specified key in the key/ID space
|
||||
|
||||
@param key: A key in the range of the target k-bucket
|
||||
@type key: str
|
||||
"""
|
||||
|
||||
|
||||
class TreeRoutingTable(RoutingTable):
|
||||
""" This class implements a routing table used by a Node class.
|
||||
|
||||
The Kademlia routing table is a binary tree whose leaves are k-buckets,
|
||||
where each k-bucket contains nodes with some common prefix of their IDs.
|
||||
This prefix is the k-bucket's position in the binary tree; it therefore
|
||||
covers some range of ID values, and together all of the k-buckets cover
|
||||
the entire n-bit ID (or key) space (with no overlap).
|
||||
|
||||
@note: In this implementation, nodes in the tree (the k-buckets) are
|
||||
added dynamically, as needed; this technique is described in the 13-page
|
||||
version of the Kademlia paper, in section 2.4. It does, however, use the
|
||||
C{PING} RPC-based k-bucket eviction algorithm described in section 2.2 of
|
||||
that paper.
|
||||
"""
|
||||
def __init__(self, parentNodeID):
|
||||
"""
|
||||
@param parentNodeID: The n-bit node ID of the node to which this
|
||||
routing table belongs
|
||||
@type parentNodeID: str
|
||||
"""
|
||||
# Create the initial (single) k-bucket covering the range of the entire n-bit ID space
|
||||
self._buckets = [kbucket.KBucket(rangeMin=0, rangeMax=2**constants.key_bits)]
|
||||
self._parentNodeID = parentNodeID
|
||||
|
||||
def addContact(self, contact):
|
||||
""" Add the given contact to the correct k-bucket; if it already
|
||||
exists, its status will be updated
|
||||
|
||||
@param contact: The contact to add to this node's k-buckets
|
||||
@type contact: kademlia.contact.Contact
|
||||
"""
|
||||
if contact.id == self._parentNodeID:
|
||||
return
|
||||
|
||||
bucketIndex = self._kbucketIndex(contact.id)
|
||||
try:
|
||||
self._buckets[bucketIndex].addContact(contact)
|
||||
except kbucket.BucketFull:
|
||||
# The bucket is full; see if it can be split (by checking if its range includes the host node's id)
|
||||
if self._buckets[bucketIndex].keyInRange(self._parentNodeID):
|
||||
self._splitBucket(bucketIndex)
|
||||
# Retry the insertion attempt
|
||||
self.addContact(contact)
|
||||
else:
|
||||
# We can't split the k-bucket
|
||||
# NOTE:
|
||||
# In section 2.4 of the 13-page version of the Kademlia paper, it is specified that
|
||||
# in this case, the new contact should simply be dropped. However, in section 2.2,
|
||||
# it states that the head contact in the k-bucket (i.e. the least-recently seen node)
|
||||
# should be pinged - if it does not reply, it should be dropped, and the new contact
|
||||
# added to the tail of the k-bucket. This implementation follows section 2.2 regarding
|
||||
# this point.
|
||||
headContact = self._buckets[bucketIndex]._contacts[0]
|
||||
|
||||
def replaceContact(failure):
|
||||
""" Callback for the deferred PING RPC to see if the head
|
||||
node in the k-bucket is still responding
|
||||
|
||||
@type failure: twisted.python.failure.Failure
|
||||
"""
|
||||
failure.trap(TimeoutError)
|
||||
print '==replacing contact=='
|
||||
# Remove the old contact...
|
||||
deadContactID = failure.getErrorMessage()
|
||||
try:
|
||||
self._buckets[bucketIndex].removeContact(deadContactID)
|
||||
except ValueError:
|
||||
# The contact has already been removed (probably due to a timeout)
|
||||
pass
|
||||
# ...and add the new one at the tail of the bucket
|
||||
self.addContact(contact)
|
||||
|
||||
# Ping the least-recently seen contact in this k-bucket
|
||||
headContact = self._buckets[bucketIndex]._contacts[0]
|
||||
df = headContact.ping()
|
||||
# If there's an error (i.e. timeout), remove the head contact, and append the new one
|
||||
df.addErrback(replaceContact)
|
||||
|
||||
def findCloseNodes(self, key, count, _rpcNodeID=None):
|
||||
""" Finds a number of known nodes closest to the node/value with the
|
||||
specified key.
|
||||
|
||||
@param key: the n-bit key (i.e. the node or value ID) to search for
|
||||
@type key: str
|
||||
@param count: the amount of contacts to return
|
||||
@type count: int
|
||||
@param _rpcNodeID: Used during RPC; this should be the sender's Node ID.
Whatever ID is passed in this parameter will be
excluded from the list of returned contacts.
|
||||
@type _rpcNodeID: str
|
||||
|
||||
@return: A list of node contacts (C{kademlia.contact.Contact instances})
|
||||
closest to the specified key.
|
||||
This method will return C{k} (or C{count}, if specified)
|
||||
contacts if at all possible; it will only return fewer if the
|
||||
node is returning all of the contacts that it knows of.
|
||||
@rtype: list
|
||||
"""
|
||||
#if key == self.id:
|
||||
# bucketIndex = 0 #TODO: maybe not allow this to continue?
|
||||
#else:
|
||||
bucketIndex = self._kbucketIndex(key)
|
||||
closestNodes = self._buckets[bucketIndex].getContacts(constants.k, _rpcNodeID)
|
||||
# This method must return k contacts (even if we have the node with the specified key as node ID),
|
||||
# unless there is less than k remote nodes in the routing table
|
||||
i = 1
|
||||
canGoLower = bucketIndex-i >= 0
|
||||
canGoHigher = bucketIndex+i < len(self._buckets)
|
||||
# Fill up the node list to k nodes, starting with the closest neighbouring nodes known
|
||||
while len(closestNodes) < constants.k and (canGoLower or canGoHigher):
|
||||
#TODO: this may need to be optimized
|
||||
if canGoLower:
|
||||
closestNodes.extend(self._buckets[bucketIndex-i].getContacts(constants.k - len(closestNodes), _rpcNodeID))
|
||||
canGoLower = bucketIndex-(i+1) >= 0
|
||||
if canGoHigher:
|
||||
closestNodes.extend(self._buckets[bucketIndex+i].getContacts(constants.k - len(closestNodes), _rpcNodeID))
|
||||
canGoHigher = bucketIndex+(i+1) < len(self._buckets)
|
||||
i += 1
|
||||
return closestNodes
|
||||
|
||||
def getContact(self, contactID):
|
||||
""" Returns the (known) contact with the specified node ID
|
||||
|
||||
@raise ValueError: No contact with the specified contact ID is known
|
||||
by this node
|
||||
"""
|
||||
bucketIndex = self._kbucketIndex(contactID)
|
||||
try:
|
||||
contact = self._buckets[bucketIndex].getContact(contactID)
|
||||
except ValueError:
|
||||
raise
|
||||
else:
|
||||
return contact
|
||||
|
||||
def getRefreshList(self, startIndex=0, force=False):
|
||||
""" Finds all k-buckets that need refreshing, starting at the
|
||||
k-bucket with the specified index, and returns IDs to be searched for
|
||||
in order to refresh those k-buckets
|
||||
|
||||
@param startIndex: The index of the bucket to start refreshing at;
|
||||
this bucket and those further away from it will
|
||||
be refreshed. For example, when joining the
|
||||
network, this node will set this to the index of
|
||||
the bucket after the one containing its closest
|
||||
neighbour.
|
||||
@type startIndex: index
|
||||
@param force: If this is C{True}, all buckets (in the specified range)
|
||||
will be refreshed, regardless of the time they were last
|
||||
accessed.
|
||||
@type force: bool
|
||||
|
||||
@return: A list of node ID's that the parent node should search for
|
||||
in order to refresh the routing Table
|
||||
@rtype: list
|
||||
"""
|
||||
bucketIndex = startIndex
|
||||
refreshIDs = []
|
||||
for bucket in self._buckets[startIndex:]:
|
||||
if force or (int(time.time()) - bucket.lastAccessed >= constants.refreshTimeout):
|
||||
searchID = self._randomIDInBucketRange(bucketIndex)
|
||||
refreshIDs.append(searchID)
|
||||
bucketIndex += 1
|
||||
return refreshIDs
|
||||
|
||||
def removeContact(self, contactID):
|
||||
""" Remove the contact with the specified node ID from the routing
|
||||
table
|
||||
|
||||
@param contactID: The node ID of the contact to remove
|
||||
@type contactID: str
|
||||
"""
|
||||
bucketIndex = self._kbucketIndex(contactID)
|
||||
try:
|
||||
self._buckets[bucketIndex].removeContact(contactID)
|
||||
except ValueError:
|
||||
#print 'removeContact(): Contact not in routing table'
|
||||
return
|
||||
|
||||
def touchKBucket(self, key):
|
||||
""" Update the "last accessed" timestamp of the k-bucket which covers
|
||||
the range containing the specified key in the key/ID space
|
||||
|
||||
@param key: A key in the range of the target k-bucket
|
||||
@type key: str
|
||||
"""
|
||||
bucketIndex = self._kbucketIndex(key)
|
||||
self._buckets[bucketIndex].lastAccessed = int(time.time())
|
||||
|
||||
def _kbucketIndex(self, key):
|
||||
""" Calculate the index of the k-bucket which is responsible for the
|
||||
specified key (or ID)
|
||||
|
||||
@param key: The key for which to find the appropriate k-bucket index
|
||||
@type key: str
|
||||
|
||||
@return: The index of the k-bucket responsible for the specified key
|
||||
@rtype: int
|
||||
"""
|
||||
valKey = long(key.encode('hex'), 16)
|
||||
i = 0
|
||||
for bucket in self._buckets:
|
||||
if bucket.keyInRange(valKey):
|
||||
return i
|
||||
else:
|
||||
i += 1
|
||||
return i
|
||||
|
||||
def _randomIDInBucketRange(self, bucketIndex):
|
||||
""" Returns a random ID in the specified k-bucket's range
|
||||
|
||||
@param bucketIndex: The index of the k-bucket to use
|
||||
@type bucketIndex: int
|
||||
"""
|
||||
idValue = random.randrange(self._buckets[bucketIndex].rangeMin, self._buckets[bucketIndex].rangeMax)
|
||||
randomID = hex(idValue)[2:]
|
||||
if randomID[-1] == 'L':
|
||||
randomID = randomID[:-1]
|
||||
if len(randomID) % 2 != 0:
|
||||
randomID = '0' + randomID
|
||||
randomID = randomID.decode('hex')
|
||||
randomID = (constants.key_bits/8 - len(randomID))*'\x00' + randomID
|
||||
return randomID
|
||||
|
||||
def _splitBucket(self, oldBucketIndex):
|
||||
""" Splits the specified k-bucket into two new buckets which together
|
||||
cover the same range in the key/ID space
|
||||
|
||||
@param oldBucketIndex: The index of k-bucket to split (in this table's
|
||||
list of k-buckets)
|
||||
@type oldBucketIndex: int
|
||||
"""
|
||||
# Resize the range of the current (old) k-bucket
|
||||
oldBucket = self._buckets[oldBucketIndex]
|
||||
splitPoint = oldBucket.rangeMax - (oldBucket.rangeMax - oldBucket.rangeMin)/2
|
||||
# Create a new k-bucket to cover the range split off from the old bucket
|
||||
newBucket = kbucket.KBucket(splitPoint, oldBucket.rangeMax)
|
||||
oldBucket.rangeMax = splitPoint
|
||||
# Now, add the new bucket into the routing table tree
|
||||
self._buckets.insert(oldBucketIndex + 1, newBucket)
|
||||
# Finally, copy all nodes that belong to the new k-bucket into it...
|
||||
for contact in oldBucket._contacts:
|
||||
if newBucket.keyInRange(contact.id):
|
||||
newBucket.addContact(contact)
|
||||
# ...and remove them from the old bucket
|
||||
for contact in newBucket._contacts:
|
||||
oldBucket.removeContact(contact)
|
||||
|
||||
class OptimizedTreeRoutingTable(TreeRoutingTable):
|
||||
""" A version of the "tree"-type routing table specified by Kademlia,
|
||||
along with contact accounting optimizations specified in section 4.1 of
|
||||
the 13-page version of the Kademlia paper.
|
||||
"""
|
||||
def __init__(self, parentNodeID):
|
||||
TreeRoutingTable.__init__(self, parentNodeID)
|
||||
# Cache containing nodes eligible to replace stale k-bucket entries
|
||||
self._replacementCache = {}
|
||||
|
||||
def addContact(self, contact):
|
||||
""" Add the given contact to the correct k-bucket; if it already
|
||||
exists, its status will be updated
|
||||
|
||||
@param contact: The contact to add to this node's k-buckets
|
||||
@type contact: kademlia.contact.Contact
|
||||
"""
|
||||
if contact.id == self._parentNodeID:
|
||||
return
|
||||
|
||||
# Initialize/reset the "successively failed RPC" counter
|
||||
contact.failedRPCs = 0
|
||||
|
||||
bucketIndex = self._kbucketIndex(contact.id)
|
||||
try:
|
||||
self._buckets[bucketIndex].addContact(contact)
|
||||
except kbucket.BucketFull:
|
||||
# The bucket is full; see if it can be split (by checking if its range includes the host node's id)
|
||||
if self._buckets[bucketIndex].keyInRange(self._parentNodeID):
|
||||
self._splitBucket(bucketIndex)
|
||||
# Retry the insertion attempt
|
||||
self.addContact(contact)
|
||||
else:
|
||||
# We can't split the k-bucket
|
||||
# NOTE: This implementation follows section 4.1 of the 13 page version
|
||||
# of the Kademlia paper (optimized contact accounting without PINGs
|
||||
#- results in much less network traffic, at the expense of some memory)
|
||||
|
||||
# Put the new contact in our replacement cache for the corresponding k-bucket (or update its position if it exists already)
|
||||
if not self._replacementCache.has_key(bucketIndex):
|
||||
self._replacementCache[bucketIndex] = []
|
||||
if contact in self._replacementCache[bucketIndex]:
|
||||
self._replacementCache[bucketIndex].remove(contact)
|
||||
#TODO: Using k to limit the size of the contact replacement cache - maybe define a separate value for this in constants.py?
|
||||
elif len(self._replacementCache) >= constants.k:
|
||||
self._replacementCache.pop(0)
|
||||
self._replacementCache[bucketIndex].append(contact)
|
||||
|
||||
def removeContact(self, contactID):
|
||||
""" Remove the contact with the specified node ID from the routing
|
||||
table
|
||||
|
||||
@param contactID: The node ID of the contact to remove
|
||||
@type contactID: str
|
||||
"""
|
||||
bucketIndex = self._kbucketIndex(contactID)
|
||||
try:
|
||||
contact = self._buckets[bucketIndex].getContact(contactID)
|
||||
except ValueError:
|
||||
#print 'removeContact(): Contact not in routing table'
|
||||
return
|
||||
contact.failedRPCs += 1
|
||||
if contact.failedRPCs >= 5:
|
||||
self._buckets[bucketIndex].removeContact(contactID)
|
||||
# Replace this stale contact with one from our replacement cache, if we have any
|
||||
if self._replacementCache.has_key(bucketIndex):
|
||||
if len(self._replacementCache[bucketIndex]) > 0:
|
||||
self._buckets[bucketIndex].addContact( self._replacementCache[bucketIndex].pop() )
|
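The distance metric behind all of this is plain XOR over the integer value of the node IDs; a minimal illustration with deliberately short 4-byte keys (real IDs are 48 bytes):

def distance(key_one, key_two):
    # Same arithmetic as RoutingTable.distance(), shown on short keys
    return long(key_one.encode('hex'), 16) ^ long(key_two.encode('hex'), 16)

a = '\x00\x00\x00\x0f'
b = '\x00\x00\x00\xf0'
print distance(a, b)   # 255: the two IDs differ only in their lowest byte
print distance(a, a)   # 0: a node is at distance zero from itself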
100 lbrynet/dht_scripts.py Normal file
@@ -0,0 +1,100 @@
from lbrynet.dht.node import Node
import binascii
from twisted.internet import reactor, task
import logging
import sys
from lbrynet.core.utils import generate_id


def print_usage():
    print "Usage:\n%s UDP_PORT KNOWN_NODE_IP KNOWN_NODE_PORT HASH" % sys.argv[0]


def join_network(udp_port, known_nodes):
    lbryid = generate_id()

    logging.info('Creating Node...')
    node = Node(udpPort=udp_port, lbryid=lbryid)

    logging.info('Joining network...')
    d = node.joinNetwork(known_nodes)

    def log_network_size():
        logging.info("Approximate number of nodes in DHT: %s", str(node.getApproximateTotalDHTNodes()))
        logging.info("Approximate number of blobs in DHT: %s", str(node.getApproximateTotalHashes()))

    d.addCallback(lambda _: log_network_size())

    d.addCallback(lambda _: node)

    return d


def get_hosts(node, h):

    def print_hosts(hosts):
        print "Hosts returned from the DHT: "
        print hosts

    logging.info("Looking up %s", h)
    d = node.getPeersForBlob(h)
    d.addCallback(print_hosts)
    return d


def announce_hash(node, h):
    d = node.announceHaveBlob(h, 34567)

    def log_results(results):
        for success, result in results:
            if success:
                logging.info("Succeeded: %s", str(result))
            else:
                logging.info("Failed: %s", str(result.getErrorMessage()))

    d.addCallback(log_results)
    return d


def get_args():
    if len(sys.argv) < 5:
        print_usage()
        sys.exit(1)
    udp_port = int(sys.argv[1])
    known_nodes = [(sys.argv[2], int(sys.argv[3]))]
    h = binascii.unhexlify(sys.argv[4])
    return udp_port, known_nodes, h


def run_dht_script(dht_func):
    log_format = "(%(asctime)s)[%(filename)s:%(lineno)s] %(funcName)s(): %(message)s"
    logging.basicConfig(level=logging.DEBUG, format=log_format)

    udp_port, known_nodes, h = get_args()

    d = task.deferLater(reactor, 0, join_network, udp_port, known_nodes)

    def run_dht_func(node):
        return dht_func(node, h)

    d.addCallback(run_dht_func)

    def log_err(err):
        logging.error("An error occurred: %s", err.getTraceback())
        return err

    def shut_down():
        logging.info("Shutting down")
        reactor.stop()

    d.addErrback(log_err)
    d.addBoth(lambda _: shut_down())
    reactor.run()


def get_hosts_for_hash_in_dht():
    run_dht_script(get_hosts)


def announce_hash_to_dht():
    run_dht_script(announce_hash)
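get_hosts_for_hash_in_dht() and announce_hash_to_dht() read their arguments from sys.argv via get_args(); a hypothetical direct invocation (all values are placeholders) would look roughly like:

# Equivalent to what get_args() expects on the command line:
#   UDP_PORT KNOWN_NODE_IP KNOWN_NODE_PORT HASH
import sys
from lbrynet import dht_scripts

sys.argv = ['dht_scripts', '4444', '127.0.0.1', '4444',
            'aa' * 48]   # 96 hex characters -> a 48-byte sha384 blob hash
dht_scripts.get_hosts_for_hash_in_dht()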
158 lbrynet/dhttest.py Normal file
@@ -0,0 +1,158 @@
#!/usr/bin/env python
|
||||
#
|
||||
# This is a basic single-node example of how to use the Entangled DHT. It creates a Node and (optionally) joins an existing DHT.
|
||||
# It then does a Kademlia store and find, and then it deletes the stored value (non-Kademlia method).
|
||||
#
|
||||
# No tuple space functionality is demonstrated by this script.
|
||||
#
|
||||
# To test it properly, start a multi-node Kademlia DHT with the "create_network.py"
|
||||
# script and point this node to that, e.g.:
|
||||
# $python create_network.py 10 127.0.0.1
|
||||
#
|
||||
# $python basic_example.py 5000 127.0.0.1 4000
|
||||
#
|
||||
# This library is free software, distributed under the terms of
|
||||
# the GNU Lesser General Public License Version 3, or any later version.
|
||||
# See the COPYING file included in this archive
|
||||
#
|
||||
|
||||
# Thanks to Paul Cannon for IP-address resolution functions (taken from aspn.activestate.com)
|
||||
|
||||
|
||||
|
||||
import os, sys, time, signal, hashlib, random
|
||||
import twisted.internet.reactor
|
||||
from lbrynet.dht.node import Node
|
||||
#from entangled.kademlia.datastore import SQLiteDataStore
|
||||
|
||||
# The Entangled DHT node; instantiated in the main() method
|
||||
node = None
|
||||
|
||||
# The key to use for this example when storing/retrieving data
|
||||
hash = hashlib.sha384()
|
||||
hash.update("key")
|
||||
KEY = hash.digest()
|
||||
# The value to store
|
||||
VALUE = random.randint(10000, 20000)
|
||||
import binascii
|
||||
lbryid = KEY
|
||||
|
||||
|
||||
def storeValue(key, value):
|
||||
""" Stores the specified value in the DHT using the specified key """
|
||||
global node
|
||||
print '\nStoring value; Key: %s, Value: %s' % (key, value)
|
||||
# Store the value in the DHT. This method returns a Twisted Deferred result, which we then add callbacks to
|
||||
deferredResult = node.announceHaveHash(key, value)
|
||||
# Add our callback; this method is called when the operation completes...
|
||||
deferredResult.addCallback(storeValueCallback)
|
||||
# ...and for error handling, add an "error callback" as well.
|
||||
# For this example script, I use a generic error handler; usually you would need something more specific
|
||||
deferredResult.addErrback(genericErrorCallback)
|
||||
|
||||
|
||||
def storeValueCallback(*args, **kwargs):
|
||||
""" Callback function that is invoked when the storeValue() operation succeeds """
|
||||
print 'Value has been stored in the DHT'
|
||||
# Now that the value has been stored, schedule that the value is read again after 2.5 seconds
|
||||
print 'Scheduling retrieval in 2.5 seconds...'
|
||||
twisted.internet.reactor.callLater(2.5, getValue)
|
||||
|
||||
|
||||
def genericErrorCallback(failure):
|
||||
""" Callback function that is invoked if an error occurs during any of the DHT operations """
|
||||
print 'An error has occurred:', failure.getErrorMessage()
|
||||
twisted.internet.reactor.callLater(0, stop)
|
||||
|
||||
def getValue():
|
||||
""" Retrieves the value of the specified key (KEY) from the DHT """
|
||||
global node, KEY
|
||||
# Get the value for the specified key (immediately returns a Twisted deferred result)
|
||||
print '\nRetrieving value from DHT for key "%s"...' % binascii.unhexlify("f7d9dc4de674eaa2c5a022eb95bc0d33ec2e75c6")
|
||||
deferredResult = node.iterativeFindValue(binascii.unhexlify("f7d9dc4de674eaa2c5a022eb95bc0d33ec2e75c6"))
|
||||
#deferredResult = node.iterativeFindValue(KEY)
|
||||
# Add a callback to this result; this will be called as soon as the operation has completed
|
||||
deferredResult.addCallback(getValueCallback)
|
||||
# As before, add the generic error callback
|
||||
deferredResult.addErrback(genericErrorCallback)
|
||||
|
||||
|
||||
def getValueCallback(result):
|
||||
""" Callback function that is invoked when the getValue() operation succeeds """
|
||||
# Check if the key was found (result is a dict of format {key: value}) or not (in which case a list of "closest" Kademlia contacts would be returned instead)
|
||||
print "Got the value"
|
||||
print result
|
||||
#if type(result) == dict:
|
||||
# for v in result[binascii.unhexlify("5292fa9c426621f02419f5050900392bdff5036c")]:
|
||||
# print "v:", v
|
||||
# print "v[6:", v[6:]
|
||||
# print "lbryid:",lbryid
|
||||
# print "lbryid == v[6:]:", lbryid == v[6:]
|
||||
# print 'Value successfully retrieved: %s' % result[KEY]
|
||||
|
||||
#else:
|
||||
# print 'Value not found'
|
||||
# Either way, schedule a "delete" operation for the key
|
||||
#print 'Scheduling removal in 2.5 seconds...'
|
||||
#twisted.internet.reactor.callLater(2.5, deleteValue)
|
||||
print 'Scheduling shutdown in 2.5 seconds...'
|
||||
twisted.internet.reactor.callLater(2.5, stop)
|
||||
|
||||
|
||||
def stop():
|
||||
""" Stops the Twisted reactor, and thus the script """
|
||||
print '\nStopping Kademlia node and terminating script...'
|
||||
twisted.internet.reactor.stop()
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
import sys, os
|
||||
if len(sys.argv) < 2:
|
||||
print 'Usage:\n%s UDP_PORT [KNOWN_NODE_IP KNOWN_NODE_PORT]' % sys.argv[0]
|
||||
print 'or:\n%s UDP_PORT [FILE_WITH_KNOWN_NODES]' % sys.argv[0]
|
||||
print '\nIf a file is specified, it should contain one IP address and UDP port\nper line, separated by a space.'
|
||||
sys.exit(1)
|
||||
try:
|
||||
int(sys.argv[1])
|
||||
except ValueError:
|
||||
print '\nUDP_PORT must be an integer value.\n'
|
||||
print 'Usage:\n%s UDP_PORT [KNOWN_NODE_IP KNOWN_NODE_PORT]' % sys.argv[0]
|
||||
print 'or:\n%s UDP_PORT [FILE_WITH_KNOWN_NODES]' % sys.argv[0]
|
||||
print '\nIf a file is specified, it should contain one IP address and UDP port\nper line, separated by a space.'
|
||||
sys.exit(1)
|
||||
|
||||
if len(sys.argv) == 4:
|
||||
knownNodes = [(sys.argv[2], int(sys.argv[3]))]
|
||||
elif len(sys.argv) == 3:
|
||||
knownNodes = []
|
||||
f = open(sys.argv[2], 'r')
|
||||
lines = f.readlines()
|
||||
f.close()
|
||||
for line in lines:
|
||||
ipAddress, udpPort = line.split()
|
||||
knownNodes.append((ipAddress, int(udpPort)))
|
||||
else:
|
||||
knownNodes = None
|
||||
print '\nNOTE: You have not specified any remote DHT node(s) to connect to'
|
||||
print 'It will thus not be aware of any existing DHT, but will still function as a self-contained DHT (until another node contacts it).'
|
||||
print 'Run this script without any arguments for info.\n'
|
||||
|
||||
# Set up SQLite-based data store (you could use an in-memory store instead, for example)
|
||||
#if os.path.isfile('/tmp/dbFile%s.db' % sys.argv[1]):
|
||||
# os.remove('/tmp/dbFile%s.db' % sys.argv[1])
|
||||
#dataStore = SQLiteDataStore(dbFile = '/tmp/dbFile%s.db' % sys.argv[1])
|
||||
# Create the Entangled node. It extends the functionality of a basic Kademlia node (but is fully backwards-compatible with a Kademlia-only network)
|
||||
# If you wish to have a pure Kademlia network, use the entangled.kademlia.node.Node class instead
|
||||
print 'Creating Node...'
|
||||
#node = EntangledNode( udpPort=int(sys.argv[1]), dataStore=dataStore )
|
||||
node = Node( udpPort=int(sys.argv[1]), lbryid=lbryid)
|
||||
|
||||
# Schedule the node to join the Kademlia/Entangled DHT
|
||||
node.joinNetwork(knownNodes)
|
||||
# Schedule the "storeValue() call to be invoked after 2.5 seconds, using KEY and VALUE as arguments
|
||||
#twisted.internet.reactor.callLater(2.5, storeValue, KEY, VALUE)
|
||||
twisted.internet.reactor.callLater(2.5, getValue)
|
||||
# Start the Twisted reactor - this fires up all networking, and allows the scheduled join operation to take place
|
||||
print 'Twisted reactor started (script will commence in 2.5 seconds)'
|
||||
twisted.internet.reactor.run()
|
||||
|
633 lbrynet/interfaces.py Normal file
@@ -0,0 +1,633 @@
"""
|
||||
Interfaces which are implemented by various classes within LBRYnet.
|
||||
"""
|
||||
from zope.interface import Interface
|
||||
|
||||
|
||||
class IPeerFinder(Interface):
|
||||
"""
|
||||
Used to find peers by sha384 hashes which they claim to be associated with.
|
||||
"""
|
||||
def find_peers_for_blob(self, blob_hash):
|
||||
"""
|
||||
Look for peers claiming to be associated with a sha384 hashsum.
|
||||
|
||||
@param blob_hash: The sha384 hashsum to use to look up peers.
|
||||
@type blob_hash: string, hex encoded
|
||||
|
||||
@return: a Deferred object which fires with a list of Peer objects
|
||||
@rtype: Deferred which fires with [Peer]
|
||||
"""
|
||||
|
||||
|
||||
class IRequestSender(Interface):
|
||||
"""
|
||||
Used to connect to a peer, send requests to it, and return the responses to those requests.
|
||||
"""
|
||||
def add_request(self, request):
|
||||
"""
|
||||
Add a request to the next message that will be sent to the peer
|
||||
|
||||
@param request: a request to be sent to the peer in the next message
|
||||
@type request: ClientRequest
|
||||
|
||||
@return: Deferred object which will callback with the response to this request, a dict
|
||||
@rtype: Deferred which fires with dict
|
||||
"""
|
||||
|
||||
def add_blob_request(self, blob_request):
|
||||
"""
|
||||
Add a request for a blob to the next message that will be sent to the peer.
|
||||
|
||||
This will cause the protocol to call blob_request.write(data) for all incoming
|
||||
data, after the response message has been parsed out, until blob_request.finished_deferred fires.
|
||||
|
||||
@param blob_request: the request for the blob
|
||||
@type blob_request: ClientBlobRequest
|
||||
|
||||
@return: Deferred object which will callback with the response to this request
|
||||
@rtype: Deferred which fires with dict
|
||||
"""
|
||||
|
||||
|
||||
class IRequestCreator(Interface):
|
||||
"""
|
||||
Send requests, via an IRequestSender, to peers.
|
||||
"""
|
||||
|
||||
def send_next_request(self, peer, protocol):
|
||||
"""
|
||||
Create a Request object for the peer and then give the protocol that request.
|
||||
|
||||
@param peer: the Peer object which the request will be sent to.
|
||||
@type peer: Peer
|
||||
|
||||
@param protocol: the protocol to pass the request to.
|
||||
@type protocol: object which implements IRequestSender
|
||||
|
||||
@return: Deferred object which will callback with True or False depending on whether a Request was sent
|
||||
@rtype: Deferred which fires with boolean
|
||||
"""
|
||||
|
||||
def get_new_peers(self):
|
||||
"""
|
||||
Get some new peers which the request creator wants to send requests to.
|
||||
|
||||
@return: Deferred object which will callback with [Peer]
|
||||
@rtype: Deferred which fires with [Peer]
|
||||
"""
|
||||
|
||||
|
||||
class IMetadataHandler(Interface):
|
||||
"""
|
||||
Get metadata for the IDownloadManager.
|
||||
"""
|
||||
def get_initial_blobs(self):
|
||||
"""
|
||||
Return metadata about blobs that are known to be associated with the stream at the time that the
|
||||
stream is set up.
|
||||
|
||||
@return: Deferred object which will call back with a list of BlobInfo objects
|
||||
@rtype: Deferred which fires with [BlobInfo]
|
||||
"""
|
||||
|
||||
def final_blob_num(self):
|
||||
"""
|
||||
If the last blob in the stream is known, return its blob_num. Otherwise, return None.
|
||||
|
||||
@return: integer representing the final blob num in the stream, or None
|
||||
@rtype: integer or None
|
||||
"""
|
||||
|
||||
|
||||
class IDownloadManager(Interface):
|
||||
"""
|
||||
Manage the downloading of an associated group of blobs, referred to as a stream.
|
||||
|
||||
These objects keep track of metadata about the stream, are responsible for starting and stopping
|
||||
other components, and handle communication between other components.
|
||||
"""
|
||||
|
||||
def start_downloading(self):
|
||||
"""
|
||||
Load the initial metadata about the stream and then start the other components.
|
||||
|
||||
@return: Deferred which fires when the other components have been started.
|
||||
@rtype: Deferred which fires with boolean
|
||||
"""
|
||||
|
||||
def resume_downloading(self):
|
||||
"""
|
||||
Start the other components after they have been stopped.
|
||||
|
||||
@return: Deferred which fires when the other components have been started.
|
||||
@rtype: Deferred which fires with boolean
|
||||
"""
|
||||
|
||||
def pause_downloading(self):
|
||||
"""
|
||||
Stop the other components.
|
||||
|
||||
@return: Deferred which fires when the other components have been stopped.
|
||||
@rtype: Deferred which fires with boolean
|
||||
"""
|
||||
|
||||
def add_blobs_to_download(self, blobs):
|
||||
"""
|
||||
Add blobs to the list of blobs that should be downloaded
|
||||
|
||||
@param blobs: list of BlobInfos that are associated with the stream being downloaded
|
||||
@type blobs: [BlobInfo]
|
||||
|
||||
@return: DeferredList which fires with the result of adding each previously unknown BlobInfo
|
||||
to the list of known BlobInfos.
|
||||
@rtype: DeferredList which fires with [(boolean, Failure/None)]
|
||||
"""
|
||||
|
||||
def stream_position(self):
|
||||
"""
|
||||
Returns the blob_num of the next blob needed in the stream.
|
||||
|
||||
If the stream already has all of the blobs it needs, then this will return the blob_num
|
||||
of the last blob in the stream plus 1.
|
||||
|
||||
@return: the blob_num of the next blob needed, or the last blob_num + 1.
|
||||
@rtype: integer
|
||||
"""
|
||||
|
||||
def needed_blobs(self):
|
||||
"""
|
||||
Returns a list of BlobInfos representing all of the blobs that the stream still needs to download.
|
||||
|
||||
@return: the list of BlobInfos representing blobs that the stream still needs to download.
|
||||
@rtype: [BlobInfo]
|
||||
"""
|
||||
|
||||
def final_blob_num(self):
|
||||
"""
|
||||
If the last blob in the stream is known, return its blob_num. If not, return None.
|
||||
|
||||
@return: The blob_num of the last blob in the stream, or None if it is unknown.
|
||||
@rtype: integer or None
|
||||
"""
|
||||
|
||||
def handle_blob(self, blob_num):
|
||||
"""
|
||||
This function is called when the next blob in the stream is ready to be handled, whatever that may mean.
|
||||
|
||||
@param blob_num: The blob_num of the blob that is ready to be handled.
|
||||
@type blob_num: integer
|
||||
|
||||
@return: A Deferred which fires when the blob has been 'handled'
|
||||
@rtype: Deferred which can fire with anything
|
||||
"""
|
||||
|
||||
|
||||
class IConnectionManager(Interface):
|
||||
"""
|
||||
Connects to peers so that IRequestCreators can send their requests.
|
||||
"""
|
||||
def get_next_request(self, peer, protocol):
|
||||
"""
|
||||
Ask all IRequestCreators belonging to this object to create a Request for peer and give it to protocol
|
||||
|
||||
@param peer: the peer which the request will be sent to.
|
||||
@type peer: Peer
|
||||
|
||||
@param protocol: the protocol which the request should be sent to by the IRequestCreator.
|
||||
@type protocol: IRequestSender
|
||||
|
||||
@return: Deferred object which will callback with True or False depending on whether the IRequestSender
|
||||
should send the request or hang up
|
||||
@rtype: Deferred which fires with boolean
|
||||
"""
|
||||
|
||||
def protocol_disconnected(self, peer, protocol):
|
||||
"""
|
||||
Inform the IConnectionManager that the protocol has been disconnected
|
||||
|
||||
@param peer: The peer which the connection was to.
|
||||
@type peer: Peer
|
||||
|
||||
@param protocol: The protocol which was disconnected.
|
||||
@type protocol: Protocol
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
class IProgressManager(Interface):
|
||||
"""
|
||||
Responsible for keeping track of the progress of the download.
|
||||
|
||||
Specifically, it is its responsibility to decide which blobs need to be downloaded and to keep track of
|
||||
the progress of the download.
|
||||
"""
|
||||
def stream_position(self):
|
||||
"""
|
||||
Returns the blob_num of the next blob needed in the stream.
|
||||
|
||||
If the stream already has all of the blobs it needs, then this will return the blob_num
|
||||
of the last blob in the stream plus 1.
|
||||
|
||||
@return: the blob_num of the next blob needed, or the last blob_num + 1.
|
||||
@rtype: integer
|
||||
"""
|
||||
|
||||
def needed_blobs(self):
|
||||
"""
|
||||
Returns a list of BlobInfos representing all of the blobs that the stream still needs to download.
|
||||
|
||||
@return: the list of BlobInfos representing blobs that the stream still needs to download.
|
||||
@rtype: [BlobInfo]
|
||||
"""
|
||||
|
||||
def blob_downloaded(self, blob, blob_info):
|
||||
"""
|
||||
Mark that a blob has been downloaded and does not need to be downloaded again
|
||||
|
||||
@param blob: the blob that has been downloaded.
|
||||
@type blob: Blob
|
||||
|
||||
@param blob_info: the metadata of the blob that has been downloaded.
|
||||
@type blob_info: BlobInfo
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
class IBlobHandler(Interface):
|
||||
"""
|
||||
Responsible for doing whatever should be done with blobs that have been downloaded.
|
||||
"""
|
||||
def blob_downloaded(self, blob, blob_info):
|
||||
"""
|
||||
Do whatever the downloader is supposed to do when a blob has been downloaded
|
||||
|
||||
@param blob: The downloaded blob
|
||||
@type blob: Blob
|
||||
|
||||
@param blob_info: The metadata of the downloaded blob
|
||||
@type blob_info: BlobInfo
|
||||
|
||||
@return: A Deferred which fires when the blob has been handled.
|
||||
@rtype: Deferred which can fire with anything
|
||||
"""
|
||||
|
||||
|
||||
class IRateLimited(Interface):
|
||||
"""
|
||||
Have the ability to be throttled (temporarily stopped).
|
||||
"""
|
||||
def throttle_upload(self):
|
||||
"""
|
||||
Stop uploading data until unthrottle_upload is called.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def throttle_download(self):
|
||||
"""
|
||||
Stop downloading data until unthrottle_download is called.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def unthrottle_upload(self):
|
||||
"""
|
||||
Resume uploading data at will until throttle_upload is called.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def unthrottle_download(self):
|
||||
"""
|
||||
Resume downloading data at will until throttle_download is called.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
class IRateLimiter(Interface):
|
||||
"""
|
||||
Can keep track of download and upload rates and can throttle objects which implement the
|
||||
IRateLimited interface.
|
||||
"""
|
||||
def report_dl_bytes(self, num_bytes):
|
||||
"""
|
||||
Inform the IRateLimiter that num_bytes have been downloaded.
|
||||
|
||||
@param num_bytes: the number of bytes that have been downloaded
|
||||
@type num_bytes: integer
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def report_ul_bytes(self, num_bytes):
|
||||
"""
|
||||
Inform the IRateLimiter that num_bytes have been uploaded.
|
||||
|
||||
@param num_bytes: the number of bytes that have been uploaded
|
||||
@type num_bytes: integer
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def register_protocol(self, protocol):
|
||||
"""
|
||||
Register an IRateLimited object with the IRateLimiter so that the IRateLimiter can throttle it
|
||||
|
||||
@param protocol: An object implementing the interface IRateLimited
|
||||
@type protocol: Object implementing IRateLimited
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def unregister_protocol(self, protocol):
|
||||
"""
|
||||
Unregister an IRateLimited object so that it won't be throttled any more.
|
||||
|
||||
@param protocol: An object implementing the interface IRateLimited, which was previously registered with this
|
||||
IRateLimiter via "register_protocol"
|
||||
@type protocol: Object implementing IRateLimited
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
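# A minimal sketch (not part of lbrynet) of how an IRateLimiter implementation might tie the
# methods above to registered IRateLimited objects; the accounting window, the 'dl_cap'
# attribute, and the 'reset_window' helper are assumptions for illustration only:
#
#     class SimpleRateLimiter(object):
#         implements(IRateLimiter)
#
#         def __init__(self, dl_cap):
#             self.dl_cap = dl_cap                 # max bytes allowed per window
#             self.dl_bytes_this_window = 0
#             self.protocols = []
#
#         def register_protocol(self, protocol):
#             self.protocols.append(protocol)
#
#         def unregister_protocol(self, protocol):
#             self.protocols.remove(protocol)
#
#         def report_dl_bytes(self, num_bytes):
#             self.dl_bytes_this_window += num_bytes
#             if self.dl_bytes_this_window > self.dl_cap:
#                 for p in self.protocols:
#                     p.throttle_download()
#
#         def report_ul_bytes(self, num_bytes):
#             pass                                 # upload throttling would mirror the above
#
#         def reset_window(self):                  # called periodically, e.g. by a LoopingCall
#             self.dl_bytes_this_window = 0
#             for p in self.protocols:
#                 p.unthrottle_download()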
class IRequestHandler(Interface):
|
||||
"""
|
||||
Pass client queries on to IQueryHandlers
|
||||
"""
|
||||
def register_query_handler(self, query_handler, query_identifiers):
|
||||
"""
|
||||
Register a query handler, which will be passed any queries that
|
||||
match any of the identifiers in query_identifiers
|
||||
|
||||
@param query_handler: the object which will handle queries matching the given query_identifiers
|
||||
@type query_handler: Object implementing IQueryHandler
|
||||
|
||||
@param query_identifiers: A list of strings representing the query identifiers
|
||||
for queries that should be passed to this handler
|
||||
@type query_identifiers: [string]
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def register_blob_sender(self, blob_sender):
|
||||
"""
|
||||
Register a blob sender which will be called after the response has
|
||||
finished to see if it wants to send a blob
|
||||
|
||||
@param blob_sender: the object which will upload the blob to the client.
|
||||
@type blob_sender: IBlobSender
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
class IBlobSender(Interface):
|
||||
"""
|
||||
Upload blobs to clients.
|
||||
"""
|
||||
def send_blob_if_requested(self, consumer):
|
||||
"""
|
||||
If a blob has been requested, write it to the consumer's 'write' function and then
|
||||
callback the returned deferred when it has all been written
|
||||
|
||||
@param consumer: the object implementing IConsumer which the file will be written to
|
||||
@type consumer: object which implements IConsumer
|
||||
|
||||
@return: Deferred which will fire when the blob sender is done, which will happen
|
||||
immediately if no blob should be sent.
|
||||
@rtype: Deferred which fires with anything
|
||||
"""
|
||||
|
||||
|
||||
class IQueryHandler(Interface):
|
||||
"""
|
||||
Respond to requests from clients.
|
||||
"""
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
"""
|
||||
Register with the request handler to receive queries
|
||||
|
||||
@param request_handler: the object implementing IRequestHandler to register with
|
||||
@type request_handler: object implementing IRequestHandler
|
||||
|
||||
@param peer: the Peer which this query handler will be answering requests from
|
||||
@type peer: Peer
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def handle_queries(self, queries):
|
||||
"""
|
||||
Return responses to queries from the client.
|
||||
|
||||
@param queries: a dict representing the query_identifiers:queries that should be handled
|
||||
@type queries: {string: dict}
|
||||
|
||||
@return: a Deferred object which will callback with a dict of query responses
|
||||
@rtype: Deferred which fires with {string: dict}
|
||||
"""
|
||||
|
||||
|
||||
class IQueryHandlerFactory(Interface):
|
||||
"""
|
||||
Construct IQueryHandlers to handle queries from each new client that connects.
|
||||
"""
|
||||
def build_query_handler(self):
|
||||
"""
|
||||
Create an object that implements the IQueryHandler interface
|
||||
|
||||
@return: object that implements IQueryHandler
|
||||
"""
|
||||
|
||||
|
||||
class IStreamDownloaderFactory(Interface):
|
||||
"""
|
||||
Construct IStreamDownloaders and provide options that will be passed to those IStreamDownloaders.
|
||||
"""
|
||||
def get_downloader_options(self, sd_validator, payment_rate_manager):
|
||||
"""
|
||||
Return the list of options that can be used to modify IStreamDownloader behavior
|
||||
|
||||
@param sd_validator: object containing stream metadata, which the options may depend on
|
||||
@type sd_validator: object which implements IStreamDescriptorValidator interface
|
||||
|
||||
@param payment_rate_manager: The payment rate manager currently in effect for the downloader
|
||||
@type payment_rate_manager: PaymentRateManager
|
||||
|
||||
@return: [(option_description, default)]
|
||||
@rtype: [(string, string)]
|
||||
"""
|
||||
|
||||
def make_downloader(self, sd_validator, options, payment_rate_manager):
|
||||
"""
|
||||
Create an object that implements the IStreamDownloader interface
|
||||
|
||||
@param sd_validator: object containing stream metadata which will be given to the IStreamDownloader
|
||||
@type sd_validator: object which implements IStreamDescriptorValidator interface
|
||||
|
||||
@param options: a list of strings that will be used by the IStreamDownloaderFactory to
|
||||
construct the IStreamDownloader. the options are in the same order as they were given
|
||||
by get_downloader_options.
|
||||
@type options: [string]
|
||||
|
||||
@param payment_rate_manager: the PaymentRateManager which the IStreamDownloader should use.
|
||||
@type payment_rate_manager: PaymentRateManager
|
||||
|
||||
@return: a Deferred which fires with the downloader object
|
||||
@rtype: Deferred which fires with IStreamDownloader
|
||||
"""
|
||||
|
||||
def get_description(self):
|
||||
"""
|
||||
Return a string detailing what this downloader does with streams
|
||||
|
||||
@return: short description of what the IStreamDownloader does.
|
||||
@rtype: string
|
||||
"""
|
||||
|
||||
|
||||
class IStreamDownloader(Interface):
|
||||
"""
|
||||
Use metadata and data from the network for some useful purpose.
|
||||
"""
|
||||
def start(self):
|
||||
"""
|
||||
start downloading the stream
|
||||
|
||||
@return: a Deferred which fires when the stream is finished downloading, or errbacks when the stream is
|
||||
cancelled.
|
||||
@rtype: Deferred which fires with anything
|
||||
"""
|
||||
|
||||
def insufficient_funds(self):
|
||||
"""
|
||||
this function informs the stream downloader that funds are too low to finish downloading.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
|
||||
class IStreamDescriptorValidator(Interface):
|
||||
"""
|
||||
Pull metadata out of Stream Descriptor Files and perform some
|
||||
validation on the metadata.
|
||||
"""
|
||||
def validate(self):
|
||||
"""
|
||||
@return: whether the stream descriptor passes validation checks
|
||||
@rtype: boolean
|
||||
"""
|
||||
|
||||
def info_to_show(self):
|
||||
"""
|
||||
@return: A list of tuples representing metadata that should be presented to the user before starting the
|
||||
download
|
||||
@rtype: [(string, string)]
|
||||
"""
|
||||
|
||||
|
||||
class ILBRYWallet(Interface):
|
||||
"""
|
||||
Send and receive payments.
|
||||
|
||||
To send a payment, a payment reservation must be obtained first. This guarantees that a payment
|
||||
isn't promised if it can't be paid. When the service in question is rendered, the payment
|
||||
reservation must be given to the ILBRYWallet along with the final price. The reservation can also
|
||||
be canceled.
|
||||
"""
|
||||
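# A minimal usage sketch (not part of the interface) of the reservation flow described above;
# 'wallet', 'peer', 'max_price', 'final_price' and 'render_the_service' are hypothetical names:
#
#     reservation = wallet.reserve_points(peer, max_price)
#     try:
#         final_price = render_the_service()            # the work actually being paid for
#         wallet.send_points(reservation, final_price)  # final_price must be <= max_price
#     except Exception:
#         wallet.cancel_point_reservation(reservation)  # give the reserved points back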
def stop(self):
|
||||
"""
|
||||
Send out any unsent payments, close any connections, and stop checking for incoming payments.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def start(self):
|
||||
"""
|
||||
Set up any connections and start checking for incoming payments
|
||||
|
||||
@return: None
|
||||
"""
|
||||
def get_info_exchanger(self):
|
||||
"""
|
||||
Get the object that will be used to find the payment addresses of peers.
|
||||
|
||||
@return: The object that will be used to find the payment addresses of peers.
|
||||
@rtype: An object implementing IRequestCreator
|
||||
"""
|
||||
|
||||
def get_wallet_info_query_handler_factory(self):
|
||||
"""
|
||||
Get the object that will be used to give our payment address to peers.
|
||||
|
||||
This must return an object implementing IQueryHandlerFactory. It will be used to
|
||||
create IQueryHandler objects that will be registered with an IRequestHandler.
|
||||
|
||||
@return: The object that will be used to give our payment address to peers.
|
||||
@rtype: An object implementing IQueryHandlerFactory
|
||||
"""
|
||||
|
||||
def reserve_points(self, peer, amount):
|
||||
"""
|
||||
Ensure a certain amount of points are available to be sent as payment, before the service is rendered
|
||||
|
||||
@param peer: The peer to which the payment will ultimately be sent
|
||||
@type peer: Peer
|
||||
|
||||
@param amount: The amount of points to reserve
|
||||
@type amount: float
|
||||
|
||||
@return: A ReservedPoints object which is given to send_points once the service has been rendered
|
||||
@rtype: ReservedPoints
|
||||
"""
|
||||
|
||||
def cancel_point_reservation(self, reserved_points):
|
||||
"""
|
||||
Return all of the points that were reserved previously for some ReservedPoints object
|
||||
|
||||
@param reserved_points: ReservedPoints previously returned by reserve_points
|
||||
@type reserved_points: ReservedPoints
|
||||
|
||||
@return: None
|
||||
"""
|
||||
|
||||
def send_points(self, reserved_points, amount):
|
||||
"""
|
||||
Schedule a payment to be sent to a peer
|
||||
|
||||
@param reserved_points: ReservedPoints object previously returned by reserve_points.
|
||||
@type reserved_points: ReservedPoints
|
||||
|
||||
@param amount: amount of points to actually send, must be less than or equal to the
|
||||
amount reserved in reserved_points
|
||||
@type amount: float
|
||||
|
||||
@return: Deferred which fires when the payment has been scheduled
|
||||
@rtype: Deferred which fires with anything
|
||||
"""
|
||||
|
||||
def get_balance(self):
|
||||
"""
|
||||
Return the balance of this wallet
|
||||
|
||||
@return: Deferred which fires with the balance of the wallet
|
||||
@rtype: Deferred which fires with float
|
||||
"""
|
||||
|
||||
def add_expected_payment(self, peer, amount):
|
||||
"""
|
||||
Increase the number of points expected to be paid by a peer
|
||||
|
||||
@param peer: the peer which is expected to pay the points
|
||||
@type peer: Peer
|
||||
|
||||
@param amount: the amount of points expected to be paid
|
||||
@type amount: float
|
||||
|
||||
@return: None
|
||||
"""
|
268
lbrynet/lbryfile/LBRYFileMetadataManager.py
Normal file
268
lbrynet/lbryfile/LBRYFileMetadataManager.py
Normal file
|
@ -0,0 +1,268 @@
|
|||
import logging
|
||||
import leveldb
|
||||
import json
|
||||
import os
|
||||
from twisted.internet import threads, defer
|
||||
from lbrynet.core.Error import DuplicateStreamHashError
|
||||
|
||||
|
||||
class DBLBRYFileMetadataManager(object):
|
||||
"""Store and provide access to LBRY file metadata using leveldb files"""
|
||||
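# Three leveldb databases are used (see _open_db and the private helpers below):
#   lbryfile_info.db:  stream_hash -> json [key, stream_name, suggested_file_name]
#   lbryfile_blob.db:  json [blob_hash, stream_hash] -> json [blob_num, iv, length]
#   lbryfile_desc.db:  sd_blob_hash -> stream_hash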
|
||||
def __init__(self, db_dir):
|
||||
self.db_dir = db_dir
|
||||
self.stream_info_db = None
|
||||
self.stream_blob_db = None
|
||||
self.stream_desc_db = None
|
||||
|
||||
def setup(self):
|
||||
return threads.deferToThread(self._open_db)
|
||||
|
||||
def stop(self):
|
||||
self.stream_info_db = None
|
||||
self.stream_blob_db = None
|
||||
self.stream_desc_db = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_all_streams(self):
|
||||
return threads.deferToThread(self._get_all_streams)
|
||||
|
||||
def save_stream(self, stream_hash, file_name, key, suggested_file_name, blobs):
|
||||
d = threads.deferToThread(self._store_stream, stream_hash, file_name, key, suggested_file_name)
|
||||
d.addCallback(lambda _: self.add_blobs_to_stream(stream_hash, blobs))
|
||||
return d
|
||||
|
||||
def get_stream_info(self, stream_hash):
|
||||
return threads.deferToThread(self._get_stream_info, stream_hash)
|
||||
|
||||
def check_if_stream_exists(self, stream_hash):
|
||||
return threads.deferToThread(self._check_if_stream_exists, stream_hash)
|
||||
|
||||
def delete_stream(self, stream_hash):
|
||||
return threads.deferToThread(self._delete_stream, stream_hash)
|
||||
|
||||
def add_blobs_to_stream(self, stream_hash, blobs):
|
||||
|
||||
def add_blobs():
|
||||
self._add_blobs_to_stream(stream_hash, blobs, ignore_duplicate_error=True)
|
||||
|
||||
return threads.deferToThread(add_blobs)
|
||||
|
||||
def get_blobs_for_stream(self, stream_hash, start_blob=None, end_blob=None, count=None, reverse=False):
|
||||
logging.info("Getting blobs for a stream. Count is %s", str(count))
|
||||
|
||||
def get_positions_of_start_and_end():
|
||||
if start_blob is not None:
|
||||
start_num = self._get_blob_num_by_hash(stream_hash, start_blob)
|
||||
else:
|
||||
start_num = None
|
||||
if end_blob is not None:
|
||||
end_num = self._get_blob_num_by_hash(stream_hash, end_blob)
|
||||
else:
|
||||
end_num = None
|
||||
return start_num, end_num
|
||||
|
||||
def get_blob_infos(nums):
|
||||
start_num, end_num = nums
|
||||
return threads.deferToThread(self._get_further_blob_infos, stream_hash, start_num, end_num,
|
||||
count, reverse)
|
||||
|
||||
d = threads.deferToThread(get_positions_of_start_and_end)
|
||||
d.addCallback(get_blob_infos)
|
||||
return d
|
||||
|
||||
def get_stream_of_blob(self, blob_hash):
|
||||
return threads.deferToThread(self._get_stream_of_blobhash, blob_hash)
|
||||
|
||||
def save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
return threads.deferToThread(self._save_sd_blob_hash_to_stream, stream_hash, sd_blob_hash)
|
||||
|
||||
def get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return threads.deferToThread(self._get_sd_blob_hashes_for_stream, stream_hash)
|
||||
|
||||
def _open_db(self):
|
||||
self.stream_info_db = leveldb.LevelDB(os.path.join(self.db_dir, "lbryfile_info.db"))
|
||||
self.stream_blob_db = leveldb.LevelDB(os.path.join(self.db_dir, "lbryfile_blob.db"))
|
||||
self.stream_desc_db = leveldb.LevelDB(os.path.join(self.db_dir, "lbryfile_desc.db"))
|
||||
|
||||
def _delete_stream(self, stream_hash):
|
||||
desc_batch = leveldb.WriteBatch()
|
||||
for sd_blob_hash, s_h in self.stream_desc_db.RangeIter():
|
||||
if stream_hash == s_h:
|
||||
desc_batch.Delete(sd_blob_hash)
|
||||
self.stream_desc_db.Write(desc_batch, sync=True)
|
||||
|
||||
blob_batch = leveldb.WriteBatch()
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if stream_hash == s_h:
|
||||
blob_batch.Delete(blob_hash_stream_hash)
|
||||
self.stream_blob_db.Write(blob_batch, sync=True)
|
||||
|
||||
stream_batch = leveldb.WriteBatch()
|
||||
for s_h, stream_info in self.stream_info_db.RangeIter():
|
||||
if stream_hash == s_h:
|
||||
stream_batch.Delete(s_h)
|
||||
self.stream_info_db.Write(stream_batch, sync=True)
|
||||
|
||||
def _store_stream(self, stream_hash, name, key, suggested_file_name):
|
||||
try:
|
||||
self.stream_info_db.Get(stream_hash)
|
||||
raise DuplicateStreamHashError("Stream hash %s already exists" % stream_hash)
|
||||
except KeyError:
|
||||
pass
|
||||
self.stream_info_db.Put(stream_hash, json.dumps((key, name, suggested_file_name)), sync=True)
|
||||
|
||||
def _get_all_streams(self):
|
||||
return [stream_hash for stream_hash, stream_info in self.stream_info_db.RangeIter()]
|
||||
|
||||
def _get_stream_info(self, stream_hash):
|
||||
return json.loads(self.stream_info_db.Get(stream_hash))[:3]
|
||||
|
||||
def _check_if_stream_exists(self, stream_hash):
|
||||
try:
|
||||
self.stream_info_db.Get(stream_hash)
|
||||
return True
|
||||
except KeyError:
|
||||
return False
|
||||
|
||||
def _get_blob_num_by_hash(self, stream_hash, blob_hash):
|
||||
blob_hash_stream_hash = json.dumps((blob_hash, stream_hash))
|
||||
return json.loads(self.stream_blob_db.Get(blob_hash_stream_hash))[0]
|
||||
|
||||
def _get_further_blob_infos(self, stream_hash, start_num, end_num, count=None, reverse=False):
|
||||
blob_infos = []
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if stream_hash == s_h:
|
||||
position, iv, length = json.loads(blob_info)
|
||||
if (start_num is None) or (position > start_num):
|
||||
if (end_num is None) or (position < end_num):
|
||||
blob_infos.append((b_h, position, iv, length))
|
||||
blob_infos.sort(key=lambda i: i[1], reverse=reverse)
|
||||
if count is not None:
|
||||
blob_infos = blob_infos[:count]
|
||||
return blob_infos
|
||||
|
||||
def _add_blobs_to_stream(self, stream_hash, blob_infos, ignore_duplicate_error=False):
|
||||
batch = leveldb.WriteBatch()
|
||||
for blob_info in blob_infos:
|
||||
blob_hash_stream_hash = json.dumps((blob_info.blob_hash, stream_hash))
|
||||
try:
|
||||
self.stream_blob_db.Get(blob_hash_stream_hash)
|
||||
if ignore_duplicate_error is False:
|
||||
raise DuplicateStreamHashError("blob is already associated with this stream")  # a plain KeyError here would be swallowed by the except clause below; TODO: add a dedicated DuplicateStreamBlobError
|
||||
continue
|
||||
except KeyError:
|
||||
pass
|
||||
batch.Put(blob_hash_stream_hash,
|
||||
json.dumps((blob_info.blob_num,
|
||||
blob_info.iv,
|
||||
blob_info.length)))
|
||||
self.stream_blob_db.Write(batch, sync=True)
|
||||
|
||||
def _get_stream_of_blobhash(self, blob_hash):
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if blob_hash == b_h:
|
||||
return s_h
|
||||
return None
|
||||
|
||||
def _save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
self.stream_desc_db.Put(sd_blob_hash, stream_hash)
|
||||
|
||||
def _get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return [sd_blob_hash for sd_blob_hash, s_h in self.stream_desc_db.RangeIter() if stream_hash == s_h]
|
||||
|
||||
|
||||
class TempLBRYFileMetadataManager(object):
|
||||
def __init__(self):
|
||||
self.streams = {}
|
||||
self.stream_blobs = {}
|
||||
self.sd_files = {}
|
||||
|
||||
def setup(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_all_streams(self):
|
||||
return defer.succeed(self.streams.keys())
|
||||
|
||||
def save_stream(self, stream_hash, file_name, key, suggested_file_name, blobs):
|
||||
self.streams[stream_hash] = {'suggested_file_name': suggested_file_name,
|
||||
'stream_name': file_name,
|
||||
'key': key}
|
||||
d = self.add_blobs_to_stream(stream_hash, blobs)
|
||||
d.addCallback(lambda _: stream_hash)
|
||||
return d
|
||||
|
||||
def get_stream_info(self, stream_hash):
|
||||
if stream_hash in self.streams:
|
||||
stream_info = self.streams[stream_hash]
|
||||
return defer.succeed([stream_info['key'], stream_info['stream_name'],
|
||||
stream_info['suggested_file_name']])
|
||||
return defer.succeed(None)
|
||||
|
||||
def delete_stream(self, stream_hash):
|
||||
if stream_hash in self.streams:
|
||||
del self.streams[stream_hash]
|
||||
for (s_h, b_h) in self.stream_blobs.keys():
|
||||
if s_h == stream_hash:
|
||||
del self.stream_blobs[(s_h, b_h)]
|
||||
return defer.succeed(True)
|
||||
|
||||
def add_blobs_to_stream(self, stream_hash, blobs):
|
||||
assert stream_hash in self.streams, "Can't add blobs to a stream that isn't known"
|
||||
for blob in blobs:
|
||||
info = {}
|
||||
info['blob_num'] = blob.blob_num
|
||||
info['length'] = blob.length
|
||||
info['iv'] = blob.iv
|
||||
self.stream_blobs[(stream_hash, blob.blob_hash)] = info
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_blobs_for_stream(self, stream_hash, start_blob=None, end_blob=None, count=None, reverse=False):
|
||||
|
||||
if start_blob is not None:
|
||||
start_num = self._get_blob_num_by_hash(stream_hash, start_blob)
|
||||
else:
|
||||
start_num = None
|
||||
if end_blob is not None:
|
||||
end_num = self._get_blob_num_by_hash(stream_hash, end_blob)
|
||||
else:
|
||||
end_num = None
|
||||
return self._get_further_blob_infos(stream_hash, start_num, end_num, count, reverse)
|
||||
|
||||
def get_stream_of_blob(self, blob_hash):
|
||||
for (s_h, b_h) in self.stream_blobs.iterkeys():
|
||||
if b_h == blob_hash:
|
||||
return defer.succeed(s_h)
|
||||
return defer.succeed(None)
|
||||
|
||||
def _get_further_blob_infos(self, stream_hash, start_num, end_num, count=None, reverse=False):
|
||||
blob_infos = []
|
||||
for (s_h, b_h), info in self.stream_blobs.iteritems():
|
||||
if stream_hash == s_h:
|
||||
position = info['blob_num']
|
||||
length = info['length']
|
||||
iv = info['iv']
|
||||
if (start_num is None) or (position > start_num):
|
||||
if (end_num is None) or (position < end_num):
|
||||
blob_infos.append((b_h, position, iv, length))
|
||||
blob_infos.sort(key=lambda i: i[1], reverse=reverse)
|
||||
if count is not None:
|
||||
blob_infos = blob_infos[:count]
|
||||
return defer.succeed(blob_infos)
|
||||
|
||||
def _get_blob_num_by_hash(self, stream_hash, blob_hash):
|
||||
if (stream_hash, blob_hash) in self.stream_blobs:
|
||||
return defer.succeed(self.stream_blobs[(stream_hash, blob_hash)]['blob_num'])
|
||||
|
||||
def save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
self.sd_files[sd_blob_hash] = stream_hash
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return defer.succeed([sd_hash for sd_hash, s_h in self.sd_files.iteritems() if stream_hash == s_h])
|
138
lbrynet/lbryfile/StreamDescriptor.py
Normal file
138
lbrynet/lbryfile/StreamDescriptor.py
Normal file
|
@ -0,0 +1,138 @@
|
|||
import binascii
|
||||
import logging
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj
|
||||
from lbrynet.cryptstream.CryptBlob import CryptBlobInfo
|
||||
from twisted.internet import defer
|
||||
from lbrynet.core.Error import DuplicateStreamHashError
|
||||
|
||||
|
||||
LBRYFileStreamType = "lbryfile"
|
||||
|
||||
|
||||
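# save_sd_info consumes, and get_sd_info produces, plain dicts shaped roughly as follows
# (field names are taken from the code below; the values shown are illustrative only, and
# 'stream_type' appears only in get_sd_info's output):
#
#     {'stream_type': 'lbryfile',
#      'stream_name': '<hex-encoded name>',
#      'key': '<hex-encoded key>',
#      'suggested_file_name': '<hex-encoded file name>',
#      'stream_hash': '<stream hash>',
#      'blobs': [{'blob_hash': '<hash>', 'blob_num': 0, 'iv': '<iv>', 'length': 2097152},
#                {'blob_num': 1, 'iv': '<iv>', 'length': 0}]}  # final blob has length 0 and no hash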
def save_sd_info(stream_info_manager, sd_info, ignore_duplicate=False):
|
||||
logging.debug("Saving info for %s", str(sd_info['stream_name']))
|
||||
hex_stream_name = sd_info['stream_name']
|
||||
key = sd_info['key']
|
||||
stream_hash = sd_info['stream_hash']
|
||||
raw_blobs = sd_info['blobs']
|
||||
suggested_file_name = sd_info['suggested_file_name']
|
||||
crypt_blobs = []
|
||||
for blob in raw_blobs:
|
||||
length = blob['length']
|
||||
if length != 0:
|
||||
blob_hash = blob['blob_hash']
|
||||
else:
|
||||
blob_hash = None
|
||||
blob_num = blob['blob_num']
|
||||
iv = blob['iv']
|
||||
crypt_blobs.append(CryptBlobInfo(blob_hash, blob_num, length, iv))
|
||||
logging.debug("Trying to save stream info for %s", str(hex_stream_name))
|
||||
d = stream_info_manager.save_stream(stream_hash, hex_stream_name, key,
|
||||
suggested_file_name, crypt_blobs)
|
||||
|
||||
def check_if_duplicate(err):
|
||||
if ignore_duplicate is True:
|
||||
err.trap(DuplicateStreamHashError)
|
||||
|
||||
d.addErrback(check_if_duplicate)
|
||||
|
||||
d.addCallback(lambda _: stream_hash)
|
||||
return d
|
||||
|
||||
|
||||
def get_sd_info(stream_info_manager, stream_hash, include_blobs):
|
||||
d = stream_info_manager.get_stream_info(stream_hash)
|
||||
|
||||
def format_info(stream_info):
|
||||
fields = {}
|
||||
fields['stream_type'] = LBRYFileStreamType
|
||||
fields['stream_name'] = stream_info[1]
|
||||
fields['key'] = stream_info[0]
|
||||
fields['suggested_file_name'] = stream_info[2]
|
||||
fields['stream_hash'] = stream_hash
|
||||
|
||||
def format_blobs(blobs):
|
||||
formatted_blobs = []
|
||||
for blob_hash, blob_num, iv, length in blobs:
|
||||
blob = {}
|
||||
if length != 0:
|
||||
blob['blob_hash'] = blob_hash
|
||||
blob['blob_num'] = blob_num
|
||||
blob['iv'] = iv
|
||||
blob['length'] = length
|
||||
formatted_blobs.append(blob)
|
||||
fields['blobs'] = formatted_blobs
|
||||
return fields
|
||||
|
||||
if include_blobs is True:
|
||||
d = stream_info_manager.get_blobs_for_stream(stream_hash)
|
||||
else:
|
||||
d = defer.succeed([])
|
||||
d.addCallback(format_blobs)
|
||||
return d
|
||||
|
||||
d.addCallback(format_info)
|
||||
return d
|
||||
|
||||
|
||||
class LBRYFileStreamDescriptorValidator(object):
|
||||
def __init__(self, raw_info):
|
||||
self.raw_info = raw_info
|
||||
|
||||
def validate(self):
|
||||
logging.debug("Trying to validate stream descriptor for %s", str(self.raw_info['stream_name']))
|
||||
try:
|
||||
hex_stream_name = self.raw_info['stream_name']
|
||||
key = self.raw_info['key']
|
||||
hex_suggested_file_name = self.raw_info['suggested_file_name']
|
||||
stream_hash = self.raw_info['stream_hash']
|
||||
blobs = self.raw_info['blobs']
|
||||
except KeyError as e:
|
||||
raise ValueError("Invalid stream descriptor. Missing '%s'" % (e.args[0]))
|
||||
for c in hex_suggested_file_name:
|
||||
if c not in '0123456789abcdef':
|
||||
raise ValueError("Invalid stream descriptor: "
|
||||
"suggested file name is not a hex-encoded string")
|
||||
h = get_lbry_hash_obj()
|
||||
h.update(hex_stream_name)
|
||||
h.update(key)
|
||||
h.update(hex_suggested_file_name)
|
||||
|
||||
def get_blob_hashsum(b):
|
||||
length = b['length']
|
||||
if length != 0:
|
||||
blob_hash = b['blob_hash']
|
||||
else:
|
||||
blob_hash = None
|
||||
blob_num = b['blob_num']
|
||||
iv = b['iv']
|
||||
blob_hashsum = get_lbry_hash_obj()
|
||||
if length != 0:
|
||||
blob_hashsum.update(blob_hash)
|
||||
blob_hashsum.update(str(blob_num))
|
||||
blob_hashsum.update(iv)
|
||||
blob_hashsum.update(str(length))
|
||||
return blob_hashsum.digest()
|
||||
|
||||
blobs_hashsum = get_lbry_hash_obj()
|
||||
for blob in blobs:
|
||||
blobs_hashsum.update(get_blob_hashsum(blob))
|
||||
if blobs[-1]['length'] != 0:
|
||||
raise ValueError("Improperly formed stream descriptor. Must end with a zero-length blob.")
|
||||
h.update(blobs_hashsum.digest())
|
||||
if h.hexdigest() != stream_hash:
|
||||
raise ValueError("Stream hash does not match stream metadata")
|
||||
return defer.succeed(True)
|
||||
|
||||
def info_to_show(self):
|
||||
info = []
|
||||
info.append(("stream_name", binascii.unhexlify(self.raw_info.get("stream_name"))))
|
||||
size_so_far = 0
|
||||
for blob_info in self.raw_info.get("blobs", []):
|
||||
size_so_far += int(blob_info['length'])
|
||||
info.append(("stream_size", str(size_so_far)))
|
||||
suggested_file_name = self.raw_info.get("suggested_file_name", None)
|
||||
if suggested_file_name is not None:
|
||||
suggested_file_name = binascii.unhexlify(suggested_file_name)
|
||||
info.append(("suggested_file_name", suggested_file_name))
|
||||
return info
|
0
lbrynet/lbryfile/__init__.py
Normal file
0
lbrynet/lbryfile/__init__.py
Normal file
284
lbrynet/lbryfile/client/LBRYFileDownloader.py
Normal file
284
lbrynet/lbryfile/client/LBRYFileDownloader.py
Normal file
|
@ -0,0 +1,284 @@
|
|||
import subprocess
|
||||
import binascii
|
||||
|
||||
from zope.interface import implements
|
||||
|
||||
from lbrynet.core.DownloadOption import DownloadOption
|
||||
from lbrynet.lbryfile.StreamDescriptor import save_sd_info
|
||||
from lbrynet.cryptstream.client.CryptStreamDownloader import CryptStreamDownloader
|
||||
from lbrynet.core.client.StreamProgressManager import FullStreamProgressManager
|
||||
from lbrynet.interfaces import IStreamDownloaderFactory
|
||||
from lbrynet.lbryfile.client.LBRYFileMetadataHandler import LBRYFileMetadataHandler
|
||||
import os
|
||||
from twisted.internet import defer, threads, reactor
|
||||
|
||||
|
||||
class LBRYFileDownloader(CryptStreamDownloader):
|
||||
"""Classes which inherit from this class download LBRY files"""
|
||||
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, payment_rate_manager, wallet, upload_allowed):
|
||||
CryptStreamDownloader.__init__(self, peer_finder, rate_limiter, blob_manager,
|
||||
payment_rate_manager, wallet, upload_allowed)
|
||||
self.stream_hash = stream_hash
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.suggested_file_name = None
|
||||
self._calculated_total_bytes = None
|
||||
|
||||
def set_stream_info(self):
|
||||
if self.key is None:
|
||||
d = self.stream_info_manager.get_stream_info(self.stream_hash)
|
||||
|
||||
def set_stream_info(stream_info):
|
||||
key, stream_name, suggested_file_name = stream_info
|
||||
self.key = binascii.unhexlify(key)
|
||||
self.stream_name = binascii.unhexlify(stream_name)
|
||||
self.suggested_file_name = binascii.unhexlify(suggested_file_name)
|
||||
|
||||
d.addCallback(set_stream_info)
|
||||
return d
|
||||
else:
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
d = self._close_output()
|
||||
d.addCallback(lambda _: CryptStreamDownloader.stop(self))
|
||||
return d
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return FullStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager)
|
||||
|
||||
def _start(self):
|
||||
d = self._setup_output()
|
||||
d.addCallback(lambda _: CryptStreamDownloader._start(self))
|
||||
return d
|
||||
|
||||
def _setup_output(self):
|
||||
pass
|
||||
|
||||
def _close_output(self):
|
||||
pass
|
||||
|
||||
def get_total_bytes(self):
|
||||
if self._calculated_total_bytes is None or self._calculated_total_bytes == 0:
|
||||
if self.download_manager is None:
|
||||
return 0
|
||||
else:
|
||||
self._calculated_total_bytes = self.download_manager.calculate_total_bytes()
|
||||
return self._calculated_total_bytes
|
||||
|
||||
def get_bytes_left_to_output(self):
|
||||
if self.download_manager is not None:
|
||||
return self.download_manager.calculate_bytes_left_to_output()
|
||||
else:
|
||||
return 0
|
||||
|
||||
def get_bytes_left_to_download(self):
|
||||
if self.download_manager is not None:
|
||||
return self.download_manager.calculate_bytes_left_to_download()
|
||||
else:
|
||||
return 0
|
||||
|
||||
def _get_metadata_handler(self, download_manager):
|
||||
return LBRYFileMetadataHandler(self.stream_hash, self.stream_info_manager, download_manager)
|
||||
|
||||
|
||||
class LBRYFileDownloaderFactory(object):
|
||||
implements(IStreamDownloaderFactory)
|
||||
|
||||
def __init__(self, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
wallet):
|
||||
self.peer_finder = peer_finder
|
||||
self.rate_limiter = rate_limiter
|
||||
self.blob_manager = blob_manager
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.wallet = wallet
|
||||
|
||||
def get_downloader_options(self, sd_validator, payment_rate_manager):
|
||||
options = [
|
||||
DownloadOption(
|
||||
[float, None],
|
||||
"rate which will be paid for data (None means use application default)",
|
||||
"data payment rate",
|
||||
None
|
||||
),
|
||||
DownloadOption(
|
||||
[bool],
|
||||
"allow reuploading data downloaded for this file",
|
||||
"allow upload",
|
||||
True
|
||||
),
|
||||
]
|
||||
return options
|
||||
|
||||
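# The options list passed to make_downloader is in the same order as the DownloadOptions
# returned above: options[0] is the data payment rate (None means use the application
# default) and options[1] is the "allow upload" flag.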
def make_downloader(self, sd_validator, options, payment_rate_manager, **kwargs):
|
||||
if options[0] is not None:
|
||||
payment_rate_manager.float(options[0])
|
||||
upload_allowed = options[1]
|
||||
|
||||
def create_downloader(stream_hash):
|
||||
downloader = self._make_downloader(stream_hash, payment_rate_manager, sd_validator.raw_info,
|
||||
upload_allowed)
|
||||
d = downloader.set_stream_info()
|
||||
d.addCallback(lambda _: downloader)
|
||||
return d
|
||||
|
||||
d = save_sd_info(self.stream_info_manager, sd_validator.raw_info)
|
||||
|
||||
d.addCallback(create_downloader)
|
||||
return d
|
||||
|
||||
def _make_downloader(self, stream_hash, payment_rate_manager, stream_info, upload_allowed):
|
||||
pass
|
||||
|
||||
|
||||
class LBRYFileSaver(LBRYFileDownloader):
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
payment_rate_manager, wallet, download_directory, upload_allowed, file_name=None):
|
||||
LBRYFileDownloader.__init__(self, stream_hash, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, payment_rate_manager, wallet, upload_allowed)
|
||||
self.download_directory = download_directory
|
||||
self.file_name = file_name
|
||||
self.file_handle = None
|
||||
|
||||
def set_stream_info(self):
|
||||
d = LBRYFileDownloader.set_stream_info(self)
|
||||
|
||||
def set_file_name():
|
||||
if self.file_name is None:
|
||||
if self.suggested_file_name:
|
||||
self.file_name = os.path.basename(self.suggested_file_name)
|
||||
else:
|
||||
self.file_name = os.path.basename(self.stream_name)
|
||||
|
||||
d.addCallback(lambda _: set_file_name())
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
d = LBRYFileDownloader.stop(self)
|
||||
d.addCallback(lambda _: self._delete_from_info_manager())
|
||||
return d
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return FullStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager,
|
||||
delete_blob_after_finished=True)
|
||||
|
||||
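# _setup_output opens the output file in the download directory; if the chosen name is
# already taken, a numeric suffix ("_1", "_2", ...) is appended to find an unused name
# (see open_file below).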
def _setup_output(self):
|
||||
def open_file():
|
||||
if self.file_handle is None:
|
||||
file_name = self.file_name
|
||||
if not file_name:
|
||||
file_name = "_"
|
||||
if os.path.exists(os.path.join(self.download_directory, file_name)):
|
||||
ext_num = 1
|
||||
while os.path.exists(os.path.join(self.download_directory,
|
||||
file_name + "_" + str(ext_num))):
|
||||
ext_num += 1
|
||||
file_name = file_name + "_" + str(ext_num)
|
||||
self.file_handle = open(os.path.join(self.download_directory, file_name), 'wb')
|
||||
return threads.deferToThread(open_file)
|
||||
|
||||
def _close_output(self):
|
||||
self.file_handle, file_handle = None, self.file_handle
|
||||
|
||||
def close_file():
|
||||
if file_handle is not None:
|
||||
name = file_handle.name
|
||||
file_handle.close()
|
||||
if self.completed is False:
|
||||
os.remove(name)
|
||||
|
||||
return threads.deferToThread(close_file)
|
||||
|
||||
def _get_write_func(self):
|
||||
def write_func(data):
|
||||
if self.stopped is False and self.file_handle is not None:
|
||||
self.file_handle.write(data)
|
||||
return write_func
|
||||
|
||||
def _delete_from_info_manager(self):
|
||||
return self.stream_info_manager.delete_stream(self.stream_hash)
|
||||
|
||||
|
||||
class LBRYFileSaverFactory(LBRYFileDownloaderFactory):
|
||||
def __init__(self, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
wallet, download_directory):
|
||||
LBRYFileDownloaderFactory.__init__(self, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, wallet)
|
||||
self.download_directory = download_directory
|
||||
|
||||
def _make_downloader(self, stream_hash, payment_rate_manager, stream_info, upload_allowed):
|
||||
return LBRYFileSaver(stream_hash, self.peer_finder, self.rate_limiter, self.blob_manager,
|
||||
self.stream_info_manager, payment_rate_manager, self.wallet,
|
||||
self.download_directory, upload_allowed)
|
||||
|
||||
def get_description(self):
|
||||
return "Save"
|
||||
|
||||
|
||||
class LBRYFileOpener(LBRYFileDownloader):
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
payment_rate_manager, wallet, upload_allowed):
|
||||
LBRYFileDownloader.__init__(self, stream_hash, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, payment_rate_manager, wallet, upload_allowed)
|
||||
self.process = None
|
||||
self.process_log = None
|
||||
|
||||
def stop(self):
|
||||
d = LBRYFileDownloader.stop(self)
|
||||
d.addCallback(lambda _: self._delete_from_info_manager())
|
||||
return d
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return FullStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager,
|
||||
delete_blob_after_finished=True)
|
||||
|
||||
def _setup_output(self):
|
||||
def start_process():
|
||||
if os.name == "nt":
|
||||
paths = [r'C:\Program Files\VideoLAN\VLC\vlc.exe',
|
||||
r'C:\Program Files (x86)\VideoLAN\VLC\vlc.exe']
|
||||
for p in paths:
|
||||
if os.path.exists(p):
|
||||
vlc_path = p
|
||||
break
|
||||
else:
|
||||
raise ValueError("You must install VLC media player to stream files")
|
||||
else:
|
||||
vlc_path = 'vlc'
|
||||
self.process_log = open("vlc.out", 'a')
|
||||
try:
|
||||
self.process = subprocess.Popen([vlc_path, '-'], stdin=subprocess.PIPE,
|
||||
stdout=self.process_log, stderr=self.process_log)
|
||||
except OSError:
|
||||
raise ValueError("VLC media player could not be opened")
|
||||
|
||||
d = threads.deferToThread(start_process)
|
||||
return d
|
||||
|
||||
def _close_output(self):
|
||||
if self.process is not None:
|
||||
self.process.stdin.close()
|
||||
self.process = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def _get_write_func(self):
|
||||
def write_func(data):
|
||||
if self.stopped is False and self.process is not None:
|
||||
try:
|
||||
self.process.stdin.write(data)
|
||||
except IOError:
|
||||
reactor.callLater(0, self.stop)
|
||||
return write_func
|
||||
|
||||
def _delete_from_info_manager(self):
|
||||
return self.stream_info_manager.delete_stream(self.stream_hash)
|
||||
|
||||
|
||||
class LBRYFileOpenerFactory(LBRYFileDownloaderFactory):
|
||||
def _make_downloader(self, stream_hash, payment_rate_manager, stream_info, upload_allowed):
|
||||
return LBRYFileOpener(stream_hash, self.peer_finder, self.rate_limiter, self.blob_manager,
|
||||
self.stream_info_manager, payment_rate_manager, self.wallet, upload_allowed)
|
||||
|
||||
def get_description(self):
|
||||
return "Stream"
|
36
lbrynet/lbryfile/client/LBRYFileMetadataHandler.py
Normal file
36
lbrynet/lbryfile/client/LBRYFileMetadataHandler.py
Normal file
|
@ -0,0 +1,36 @@
|
|||
import logging
|
||||
from zope.interface import implements
|
||||
from lbrynet.cryptstream.CryptBlob import CryptBlobInfo
|
||||
from lbrynet.interfaces import IMetadataHandler
|
||||
|
||||
|
||||
class LBRYFileMetadataHandler(object):
|
||||
implements(IMetadataHandler)
|
||||
|
||||
def __init__(self, stream_hash, stream_info_manager, download_manager):
|
||||
self.stream_hash = stream_hash
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.download_manager = download_manager
|
||||
self._final_blob_num = None
|
||||
|
||||
######### IMetadataHandler #########
|
||||
|
||||
def get_initial_blobs(self):
|
||||
d = self.stream_info_manager.get_blobs_for_stream(self.stream_hash)
|
||||
d.addCallback(self._format_initial_blobs_for_download_manager)
|
||||
return d
|
||||
|
||||
def final_blob_num(self):
|
||||
return self._final_blob_num
|
||||
|
||||
######### internal calls #########
|
||||
|
||||
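# A blob info whose hash is None marks the stream's terminating zero-length blob; it is not
# handed to the download manager, but its blob_num tells us where the stream ends.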
def _format_initial_blobs_for_download_manager(self, blob_infos):
|
||||
infos = []
|
||||
for blob_hash, blob_num, iv, length in blob_infos:
|
||||
if blob_hash is not None:
|
||||
infos.append(CryptBlobInfo(blob_hash, blob_num, length, iv))
|
||||
else:
|
||||
logging.debug("Setting _final_blob_num to %s", str(blob_num - 1))
|
||||
self._final_blob_num = blob_num - 1
|
||||
return infos
|
0
lbrynet/lbryfile/client/__init__.py
Normal file
0
lbrynet/lbryfile/client/__init__.py
Normal file
159
lbrynet/lbryfilemanager/LBRYFileCreator.py
Normal file
159
lbrynet/lbryfilemanager/LBRYFileCreator.py
Normal file
|
@ -0,0 +1,159 @@
|
|||
"""
|
||||
Utilities for turning plain files into LBRY Files.
|
||||
"""
|
||||
|
||||
import binascii
|
||||
import logging
|
||||
import os
|
||||
from lbrynet.core.StreamDescriptor import PlainStreamDescriptorWriter
|
||||
from lbrynet.cryptstream.CryptStreamCreator import CryptStreamCreator
|
||||
from lbrynet import conf
|
||||
from lbrynet.lbryfile.StreamDescriptor import get_sd_info
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj
|
||||
from twisted.protocols.basic import FileSender
|
||||
from lbrynet.lbryfilemanager.LBRYFileDownloader import ManagedLBRYFileDownloader
|
||||
|
||||
|
||||
class LBRYFileStreamCreator(CryptStreamCreator):
|
||||
"""
|
||||
A CryptStreamCreator which adds itself and its additional metadata to an LBRYFileManager
|
||||
"""
|
||||
def __init__(self, blob_manager, lbry_file_manager, name=None,
|
||||
key=None, iv_generator=None, suggested_file_name=None):
|
||||
CryptStreamCreator.__init__(self, blob_manager, name, key, iv_generator)
|
||||
self.lbry_file_manager = lbry_file_manager
|
||||
if suggested_file_name is None:
|
||||
self.suggested_file_name = name
|
||||
else:
|
||||
self.suggested_file_name = suggested_file_name
|
||||
self.stream_hash = None
|
||||
self.blob_infos = []
|
||||
|
||||
def _blob_finished(self, blob_info):
|
||||
logging.debug("length: %s", str(blob_info.length))
|
||||
self.blob_infos.append(blob_info)
|
||||
|
||||
def _save_lbry_file_info(self):
|
||||
stream_info_manager = self.lbry_file_manager.stream_info_manager
|
||||
d = stream_info_manager.save_stream(self.stream_hash, binascii.hexlify(self.name),
|
||||
binascii.hexlify(self.key),
|
||||
binascii.hexlify(self.suggested_file_name),
|
||||
self.blob_infos)
|
||||
return d
|
||||
|
||||
def setup(self):
|
||||
d = CryptStreamCreator.setup(self)
|
||||
d.addCallback(lambda _: self.stream_hash)
|
||||
|
||||
return d
|
||||
|
||||
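# The stream hash produced below is H(hex(name) + hex(key) + hex(suggested_file_name) +
# blobs_hashsum), where blobs_hashsum is H of the concatenated per-blob digests
# H(blob_hash + str(blob_num) + iv + str(length)) (blob_hash is omitted for the terminating
# zero-length blob) and H is the hash returned by get_lbry_hash_obj().
# LBRYFileStreamDescriptorValidator.validate recomputes the same value from a downloaded
# stream descriptor.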
def _get_blobs_hashsum(self):
|
||||
blobs_hashsum = get_lbry_hash_obj()
|
||||
for blob_info in sorted(self.blob_infos, key=lambda b_i: b_i.blob_num):
|
||||
length = blob_info.length
|
||||
if length != 0:
|
||||
blob_hash = blob_info.blob_hash
|
||||
else:
|
||||
blob_hash = None
|
||||
blob_num = blob_info.blob_num
|
||||
iv = blob_info.iv
|
||||
blob_hashsum = get_lbry_hash_obj()
|
||||
if length != 0:
|
||||
blob_hashsum.update(blob_hash)
|
||||
blob_hashsum.update(str(blob_num))
|
||||
blob_hashsum.update(iv)
|
||||
blob_hashsum.update(str(length))
|
||||
blobs_hashsum.update(blob_hashsum.digest())
|
||||
return blobs_hashsum.digest()
|
||||
|
||||
def _make_stream_hash(self):
|
||||
hashsum = get_lbry_hash_obj()
|
||||
hashsum.update(binascii.hexlify(self.name))
|
||||
hashsum.update(binascii.hexlify(self.key))
|
||||
hashsum.update(binascii.hexlify(self.suggested_file_name))
|
||||
hashsum.update(self._get_blobs_hashsum())
|
||||
self.stream_hash = hashsum.hexdigest()
|
||||
|
||||
def _finished(self):
|
||||
self._make_stream_hash()
|
||||
d = self._save_lbry_file_info()
|
||||
d.addCallback(lambda _: self.lbry_file_manager.change_lbry_file_status(
|
||||
self.stream_hash, ManagedLBRYFileDownloader.STATUS_FINISHED
|
||||
))
|
||||
return d
|
||||
|
||||
|
||||
def create_lbry_file(session, lbry_file_manager, file_name, file_handle, key=None,
|
||||
iv_generator=None, suggested_file_name=None):
|
||||
"""
|
||||
Turn a plain file into an LBRY File.
|
||||
|
||||
An LBRY File is a collection of encrypted blobs of data and the metadata that binds them
|
||||
together which, when decrypted and put back together according to the metadata, results
|
||||
in the original file.
|
||||
|
||||
The stream parameters that aren't specified are generated, the file is read and broken
|
||||
into chunks and encrypted, and then a stream descriptor file with the stream parameters
|
||||
and other metadata is written to disk.
|
||||
|
||||
@param session: An LBRYSession object.
|
||||
@type session: LBRYSession
|
||||
|
||||
@param lbry_file_manager: The LBRYFileManager object this LBRY File will be added to.
|
||||
@type lbry_file_manager: LBRYFileManager
|
||||
|
||||
@param file_name: The path to the plain file.
|
||||
@type file_name: string
|
||||
|
||||
@param file_handle: The file-like object to read
|
||||
@type file_handle: any file-like object which can be read by twisted.protocols.basic.FileSender
|
||||
|
||||
|
||||
@param key: the raw AES key which will be used to encrypt the blobs. If None, a random key will
|
||||
be generated.
|
||||
@type key: string
|
||||
|
||||
@param iv_generator: a generator which yields initialization vectors for the blobs. Will be called
|
||||
once for each blob.
|
||||
@type iv_generator: a generator function which yields strings
|
||||
|
||||
@param suggested_file_name: what the file should be called when the LBRY File is saved to disk.
|
||||
@type suggested_file_name: string
|
||||
|
||||
@return: a Deferred which fires with the stream_hash of the LBRY File
|
||||
@rtype: Deferred which fires with hex-encoded string
|
||||
"""
|
||||
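# A hypothetical usage sketch (assuming a running LBRYSession and LBRYFileManager have been
# set up elsewhere); the file name "example.mp4" is illustrative only:
#
#     file_handle = open("example.mp4", "rb")
#     d = create_lbry_file(session, lbry_file_manager, "example.mp4", file_handle)
#     d.addCallback(lambda stream_hash: logging.info("created stream %s", stream_hash))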
|
||||
def stop_file(creator):
|
||||
logging.debug("the file sender has triggered its deferred. stopping the stream writer")
|
||||
return creator.stop()
|
||||
|
||||
def make_stream_desc_file(stream_hash):
|
||||
logging.debug("creating the stream descriptor file")
|
||||
descriptor_writer = PlainStreamDescriptorWriter(file_name + conf.CRYPTSD_FILE_EXTENSION)
|
||||
|
||||
d = get_sd_info(lbry_file_manager.stream_info_manager, stream_hash, True)
|
||||
|
||||
d.addCallback(descriptor_writer.create_descriptor)
|
||||
|
||||
return d
|
||||
|
||||
base_file_name = os.path.basename(file_name)
|
||||
|
||||
lbry_file_creator = LBRYFileStreamCreator(session.blob_manager, lbry_file_manager, base_file_name,
|
||||
key, iv_generator, suggested_file_name)
|
||||
|
||||
def start_stream():
|
||||
file_sender = FileSender()
|
||||
d = file_sender.beginFileTransfer(file_handle, lbry_file_creator)
|
||||
d.addCallback(lambda _: stop_file(lbry_file_creator))
|
||||
d.addCallback(lambda _: make_stream_desc_file(lbry_file_creator.stream_hash))
|
||||
d.addCallback(lambda _: lbry_file_creator.stream_hash)
|
||||
return d
|
||||
|
||||
d = lbry_file_creator.setup()
|
||||
d.addCallback(lambda _: start_stream())
|
||||
return d
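Editorial note: a minimal usage sketch of create_lbry_file, assuming an LBRYSession and an LBRYFileManager that have already been set up elsewhere and that this module has been imported; the file path and helper name below are placeholders.

def publish_plain_file(session, lbry_file_manager):
    # `session` and `lbry_file_manager` are assumed to be ready; "example.txt" is a placeholder path
    file_path = "example.txt"
    file_handle = open(file_path, "rb")

    d = create_lbry_file(session, lbry_file_manager, file_path, file_handle)

    def _done(stream_hash):
        print "created LBRY File with stream hash", stream_hash
        file_handle.close()
        return stream_hash

    d.addCallback(_done)
    return d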
149  lbrynet/lbryfilemanager/LBRYFileDownloader.py  (new file)
@@ -0,0 +1,149 @@
"""
|
||||
Download LBRY Files from LBRYnet and save them to disk.
|
||||
"""
|
||||
|
||||
from lbrynet.core.DownloadOption import DownloadOption
|
||||
from zope.interface import implements
|
||||
from lbrynet.core.client.StreamProgressManager import FullStreamProgressManager
|
||||
from lbrynet.lbryfile.client.LBRYFileDownloader import LBRYFileSaver, LBRYFileDownloader
|
||||
from lbrynet.lbryfilemanager.LBRYFileStatusReport import LBRYFileStatusReport
|
||||
from lbrynet.interfaces import IStreamDownloaderFactory
|
||||
from lbrynet.lbryfile.StreamDescriptor import save_sd_info
|
||||
from twisted.internet import defer
|
||||
|
||||
|
||||
class ManagedLBRYFileDownloader(LBRYFileSaver):
|
||||
|
||||
STATUS_RUNNING = "running"
|
||||
STATUS_STOPPED = "stopped"
|
||||
STATUS_FINISHED = "finished"
|
||||
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
lbry_file_manager, payment_rate_manager, wallet, download_directory, upload_allowed,
|
||||
file_name=None):
|
||||
LBRYFileSaver.__init__(self, stream_hash, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, payment_rate_manager, wallet, download_directory,
|
||||
upload_allowed)
|
||||
self.lbry_file_manager = lbry_file_manager
|
||||
self.file_name = file_name
|
||||
self.file_handle = None
|
||||
self.saving_status = False
|
||||
|
||||
def restore(self):
|
||||
d = self.lbry_file_manager.get_lbry_file_status(self.stream_hash)
|
||||
|
||||
def restore_status(status):
|
||||
if status == ManagedLBRYFileDownloader.STATUS_RUNNING:
|
||||
return self.start()
|
||||
elif status == ManagedLBRYFileDownloader.STATUS_STOPPED:
|
||||
return defer.succeed(False)
|
||||
elif status == ManagedLBRYFileDownloader.STATUS_FINISHED:
|
||||
self.completed = True
|
||||
return defer.succeed(True)
|
||||
|
||||
d.addCallback(restore_status)
|
||||
return d
|
||||
|
||||
def stop(self, change_status=True):
|
||||
|
||||
def set_saving_status_done():
|
||||
self.saving_status = False
|
||||
|
||||
d = LBRYFileDownloader.stop(self) # LBRYFileSaver deletes metadata when it's stopped. We don't want that here.
|
||||
if change_status is True:
|
||||
self.saving_status = True
|
||||
d.addCallback(lambda _: self._save_status())
|
||||
d.addCallback(lambda _: set_saving_status_done())
|
||||
return d
|
||||
|
||||
def status(self):
|
||||
def find_completed_blobhashes(blobs):
|
||||
blobhashes = [b[0] for b in blobs if b[0] is not None]
|
||||
|
||||
def get_num_completed(completed_blobs):
|
||||
return len(completed_blobs), len(blobhashes)
|
||||
|
||||
inner_d = self.blob_manager.completed_blobs(blobhashes)
|
||||
inner_d.addCallback(get_num_completed)
|
||||
return inner_d
|
||||
|
||||
def make_full_status(progress):
|
||||
num_completed = progress[0]
|
||||
num_known = progress[1]
|
||||
if self.completed is True:
|
||||
s = "completed"
|
||||
elif self.stopped is True:
|
||||
s = "stopped"
|
||||
else:
|
||||
s = "running"
|
||||
status = LBRYFileStatusReport(self.file_name, num_completed, num_known, s)
|
||||
return status
|
||||
|
||||
d = self.stream_info_manager.get_blobs_for_stream(self.stream_hash)
|
||||
d.addCallback(find_completed_blobhashes)
|
||||
d.addCallback(make_full_status)
|
||||
return d
|
||||
|
||||
def _start(self):
|
||||
|
||||
d = LBRYFileSaver._start(self)
|
||||
|
||||
d.addCallback(lambda _: self._save_status())
|
||||
|
||||
return d
|
||||
|
||||
def _get_finished_deferred_callback_value(self):
|
||||
if self.completed is True:
|
||||
return "Download successful"
|
||||
else:
|
||||
return "Download stopped"
|
||||
|
||||
def _save_status(self):
|
||||
if self.completed is True:
|
||||
s = ManagedLBRYFileDownloader.STATUS_FINISHED
|
||||
elif self.stopped is True:
|
||||
s = ManagedLBRYFileDownloader.STATUS_STOPPED
|
||||
else:
|
||||
s = ManagedLBRYFileDownloader.STATUS_RUNNING
|
||||
return self.lbry_file_manager.change_lbry_file_status(self.stream_hash, s)
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return FullStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager)
|
||||
|
||||
|
||||
class ManagedLBRYFileDownloaderFactory(object):
|
||||
implements(IStreamDownloaderFactory)
|
||||
|
||||
def __init__(self, lbry_file_manager):
|
||||
self.lbry_file_manager = lbry_file_manager
|
||||
|
||||
def get_downloader_options(self, sd_validator, payment_rate_manager):
|
||||
options = [
|
||||
DownloadOption(
|
||||
[float, None],
|
||||
"rate which will be paid for data (None means use application default)",
|
||||
"data payment rate",
|
||||
None
|
||||
),
|
||||
DownloadOption(
|
||||
[bool],
|
||||
"allow reuploading data downloaded for this file",
|
||||
"allow upload",
|
||||
True
|
||||
),
|
||||
]
|
||||
return options
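# Editorial note (not part of the committed file): the order of this list is significant.
# make_downloader() below reads the chosen values back by position: options[0] is the
# data payment rate (a float, or None for the application default) and options[1] is the
# upload-allowed flag. A caller accepting the defaults would therefore pass [None, True].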
|
||||
|
||||
def make_downloader(self, sd_validator, options, payment_rate_manager):
|
||||
data_rate = options[0]
|
||||
upload_allowed = options[1]
|
||||
|
||||
d = save_sd_info(self.lbry_file_manager.stream_info_manager, sd_validator.raw_info)
|
||||
d.addCallback(lambda stream_hash: self.lbry_file_manager.add_lbry_file(stream_hash,
|
||||
payment_rate_manager,
|
||||
data_rate,
|
||||
upload_allowed))
|
||||
return d
|
||||
|
||||
def get_description(self):
|
||||
return "Save the file to disk"
255  lbrynet/lbryfilemanager/LBRYFileManager.py  (new file)
@@ -0,0 +1,255 @@
"""
|
||||
Keep track of which LBRY Files are downloading and store their LBRY File-specific metadata
|
||||
"""
|
||||
|
||||
import logging
|
||||
import json
|
||||
|
||||
import leveldb
|
||||
|
||||
from lbrynet.lbryfile.StreamDescriptor import LBRYFileStreamDescriptorValidator
|
||||
import os
|
||||
from lbrynet.lbryfilemanager.LBRYFileDownloader import ManagedLBRYFileDownloader
|
||||
from lbrynet.lbryfilemanager.LBRYFileDownloader import ManagedLBRYFileDownloaderFactory
|
||||
from lbrynet.lbryfile.StreamDescriptor import LBRYFileStreamType
|
||||
from lbrynet.core.PaymentRateManager import PaymentRateManager
|
||||
from twisted.internet import threads, defer, task, reactor
|
||||
from twisted.python.failure import Failure
|
||||
from lbrynet.cryptstream.client.CryptStreamDownloader import AlreadyStoppedError, CurrentlyStoppingError
|
||||
|
||||
|
||||
class LBRYFileManager(object):
|
||||
"""
|
||||
Keeps track of currently open LBRY Files, their options, and their LBRY File-specific metadata.
|
||||
"""
|
||||
SETTING = "s"
|
||||
LBRYFILE_STATUS = "t"
|
||||
LBRYFILE_OPTIONS = "o"
|
||||
|
||||
def __init__(self, session, stream_info_manager, sd_identifier):
|
||||
self.session = session
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.sd_identifier = sd_identifier
|
||||
self.lbry_files = []
|
||||
self.db = None
|
||||
self.download_directory = os.getcwd()
|
||||
|
||||
def setup(self):
|
||||
d = threads.deferToThread(self._open_db)
|
||||
d.addCallback(lambda _: self._add_to_sd_identifier())
|
||||
d.addCallback(lambda _: self._start_lbry_files())
|
||||
return d
|
||||
|
||||
def get_all_lbry_file_stream_hashes_and_options(self):
|
||||
d = threads.deferToThread(self._get_all_lbry_file_stream_hashes)
|
||||
|
||||
def get_options(stream_hashes):
|
||||
ds = []
|
||||
|
||||
def get_options_for_stream_hash(stream_hash):
|
||||
d = self.get_lbry_file_options(stream_hash)
|
||||
d.addCallback(lambda options: (stream_hash, options))
|
||||
return d
|
||||
|
||||
for stream_hash in stream_hashes:
|
||||
ds.append(get_options_for_stream_hash(stream_hash))
|
||||
dl = defer.DeferredList(ds)
|
||||
dl.addCallback(lambda results: [r[1] for r in results if r[0]])
|
||||
return dl
|
||||
|
||||
d.addCallback(get_options)
|
||||
return d
|
||||
|
||||
def get_lbry_file_status(self, stream_hash):
|
||||
return threads.deferToThread(self._get_lbry_file_status, stream_hash)
|
||||
|
||||
def save_lbry_file_options(self, stream_hash, blob_data_rate):
|
||||
return threads.deferToThread(self._save_lbry_file_options, stream_hash, blob_data_rate)
|
||||
|
||||
def get_lbry_file_options(self, stream_hash):
|
||||
return threads.deferToThread(self._get_lbry_file_options, stream_hash)
|
||||
|
||||
def delete_lbry_file_options(self, stream_hash):
|
||||
return threads.deferToThread(self._delete_lbry_file_options, stream_hash)
|
||||
|
||||
def set_lbry_file_data_payment_rate(self, stream_hash, new_rate):
|
||||
return threads.deferToThread(self._set_lbry_file_payment_rate, stream_hash, new_rate)
|
||||
|
||||
def change_lbry_file_status(self, stream_hash, status):
|
||||
logging.debug("Changing status of %s to %s", stream_hash, status)
|
||||
return threads.deferToThread(self._change_file_status, stream_hash, status)
|
||||
|
||||
def delete_lbry_file_status(self, stream_hash):
|
||||
return threads.deferToThread(self._delete_lbry_file_status, stream_hash)
|
||||
|
||||
def get_lbry_file_status_reports(self):
|
||||
ds = []
|
||||
|
||||
for lbry_file in self.lbry_files:
|
||||
ds.append(lbry_file.status())
|
||||
|
||||
dl = defer.DeferredList(ds)
|
||||
|
||||
def filter_failures(status_reports):
|
||||
return [status_report for success, status_report in status_reports if success is True]
|
||||
|
||||
dl.addCallback(filter_failures)
|
||||
return dl
|
||||
|
||||
def _add_to_sd_identifier(self):
|
||||
downloader_factory = ManagedLBRYFileDownloaderFactory(self)
|
||||
self.sd_identifier.add_stream_info_validator(LBRYFileStreamType, LBRYFileStreamDescriptorValidator)
|
||||
self.sd_identifier.add_stream_downloader_factory(LBRYFileStreamType, downloader_factory)
|
||||
|
||||
def _start_lbry_files(self):
|
||||
|
||||
def set_options_and_restore(stream_hash, options):
|
||||
payment_rate_manager = PaymentRateManager(self.session.base_payment_rate_manager)
|
||||
d = self.add_lbry_file(stream_hash, payment_rate_manager, blob_data_rate=options[0])
|
||||
d.addCallback(lambda downloader: downloader.restore())
|
||||
return d
|
||||
|
||||
def log_error(err):
|
||||
logging.error("An error occurred while starting a lbry file: %s", err.getErrorMessage())
|
||||
|
||||
def start_lbry_files(stream_hashes_and_options):
|
||||
for stream_hash, options in stream_hashes_and_options:
|
||||
d = set_options_and_restore(stream_hash, options)
|
||||
d.addErrback(log_error)
|
||||
return True
|
||||
|
||||
d = self.get_all_lbry_file_stream_hashes_and_options()
|
||||
d.addCallback(start_lbry_files)
|
||||
return d
|
||||
|
||||
def add_lbry_file(self, stream_hash, payment_rate_manager, blob_data_rate=None, upload_allowed=True):
|
||||
payment_rate_manager.min_blob_data_payment_rate = blob_data_rate
|
||||
lbry_file_downloader = ManagedLBRYFileDownloader(stream_hash, self.session.peer_finder,
|
||||
self.session.rate_limiter, self.session.blob_manager,
|
||||
self.stream_info_manager, self,
|
||||
payment_rate_manager, self.session.wallet,
|
||||
self.download_directory,
|
||||
upload_allowed)
|
||||
self.lbry_files.append(lbry_file_downloader)
|
||||
d = self.save_lbry_file_options(stream_hash, blob_data_rate)
|
||||
d.addCallback(lambda _: lbry_file_downloader.set_stream_info())
|
||||
d.addCallback(lambda _: lbry_file_downloader)
|
||||
return d
|
||||
|
||||
def delete_lbry_file(self, stream_hash):
|
||||
for l in self.lbry_files:
|
||||
if l.stream_hash == stream_hash:
|
||||
lbry_file = l
|
||||
break
|
||||
else:
|
||||
return defer.fail(Failure(ValueError("Could not find an LBRY file with the given stream hash, " +
|
||||
stream_hash)))
|
||||
|
||||
def wait_for_finished(count=2):
|
||||
if count <= 0 or lbry_file.saving_status is False:
|
||||
return True
|
||||
else:
|
||||
return task.deferLater(reactor, 1, wait_for_finished, count=count - 1)
|
||||
|
||||
def ignore_stopped(err):
|
||||
err.trap(AlreadyStoppedError, CurrentlyStoppingError)
|
||||
return wait_for_finished()
|
||||
|
||||
d = lbry_file.stop()
|
||||
d.addErrback(ignore_stopped)
|
||||
|
||||
def remove_from_list():
|
||||
self.lbry_files.remove(lbry_file)
|
||||
|
||||
d.addCallback(lambda _: remove_from_list())
|
||||
d.addCallback(lambda _: self.delete_lbry_file_options(stream_hash))
|
||||
d.addCallback(lambda _: self.delete_lbry_file_status(stream_hash))
|
||||
return d
|
||||
|
||||
def toggle_lbry_file_running(self, stream_hash):
|
||||
"""Toggle whether a stream reader is currently running"""
|
||||
for l in self.lbry_files:
|
||||
if l.stream_hash == stream_hash:
|
||||
return l.toggle_running()
|
||||
else:
|
||||
return defer.fail(Failure(ValueError("Could not find an LBRY file with the given stream hash, " +
|
||||
stream_hash)))
|
||||
|
||||
def get_stream_hash_from_name(self, lbry_file_name):
|
||||
for l in self.lbry_files:
|
||||
if l.file_name == lbry_file_name:
|
||||
return l.stream_hash
|
||||
return None
|
||||
|
||||
def stop(self):
|
||||
ds = []
|
||||
|
||||
def wait_for_finished(lbry_file, count=2):
|
||||
if count <= 0 or lbry_file.saving_status is False:
|
||||
return True
|
||||
else:
|
||||
return task.deferLater(reactor, 1, wait_for_finished, lbry_file, count=count - 1)
|
||||
|
||||
def ignore_stopped(err, lbry_file):
|
||||
err.trap(AlreadyStoppedError, CurrentlyStoppingError)
|
||||
return wait_for_finished(lbry_file)
|
||||
|
||||
for lbry_file in self.lbry_files:
|
||||
d = lbry_file.stop(change_status=False)
|
||||
d.addErrback(ignore_stopped, lbry_file)
|
||||
ds.append(d)
|
||||
dl = defer.DeferredList(ds)
|
||||
|
||||
def close_db():
|
||||
self.db = None
|
||||
|
||||
dl.addCallback(lambda _: close_db())
|
||||
return dl
|
||||
|
||||
######### database calls #########
|
||||
|
||||
def _open_db(self):
|
||||
self.db = leveldb.LevelDB(os.path.join(self.session.db_dir, "lbryfiles.db"))
|
||||
|
||||
def _save_payment_rate(self, rate_type, rate):
|
||||
if rate is not None:
|
||||
self.db.Put(json.dumps((self.SETTING, rate_type)), json.dumps(rate), sync=True)
|
||||
else:
|
||||
self.db.Delete(json.dumps((self.SETTING, rate_type)), sync=True)
|
||||
|
||||
def _save_lbry_file_options(self, stream_hash, blob_data_rate):
|
||||
self.db.Put(json.dumps((self.LBRYFILE_OPTIONS, stream_hash)), json.dumps((blob_data_rate,)),
|
||||
sync=True)
|
||||
|
||||
def _get_lbry_file_options(self, stream_hash):
|
||||
try:
|
||||
return json.loads(self.db.Get(json.dumps((self.LBRYFILE_OPTIONS, stream_hash))))
|
||||
except KeyError:
|
||||
return None, None
|
||||
|
||||
def _delete_lbry_file_options(self, stream_hash):
|
||||
self.db.Delete(json.dumps((self.LBRYFILE_OPTIONS, stream_hash)), sync=True)
|
||||
|
||||
def _set_lbry_file_payment_rate(self, stream_hash, new_rate):
|
||||
|
||||
self.db.Put(json.dumps((self.LBRYFILE_OPTIONS, stream_hash)), json.dumps((new_rate, )), sync=True)
|
||||
|
||||
def _get_all_lbry_file_stream_hashes(self):
|
||||
hashes = []
|
||||
for k, v in self.db.RangeIter():
|
||||
key_type, stream_hash = json.loads(k)
|
||||
if key_type == self.LBRYFILE_STATUS:
|
||||
hashes.append(stream_hash)
|
||||
return hashes
|
||||
|
||||
def _change_file_status(self, stream_hash, new_status):
|
||||
self.db.Put(json.dumps((self.LBRYFILE_STATUS, stream_hash)), new_status, sync=True)
|
||||
|
||||
def _get_lbry_file_status(self, stream_hash):
|
||||
try:
|
||||
return self.db.Get(json.dumps((self.LBRYFILE_STATUS, stream_hash)))
|
||||
except KeyError:
|
||||
return ManagedLBRYFileDownloader.STATUS_STOPPED
|
||||
|
||||
def _delete_lbry_file_status(self, stream_hash):
|
||||
self.db.Delete(json.dumps((self.LBRYFILE_STATUS, stream_hash)), sync=True)
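Editorial note: a rough usage sketch of LBRYFileManager; the session, stream info manager and sd_identifier are assumed to already exist and be set up, and the helper name is illustrative only.

from lbrynet.lbryfilemanager.LBRYFileManager import LBRYFileManager

def start_file_manager(session, stream_info_manager, sd_identifier):
    manager = LBRYFileManager(session, stream_info_manager, sd_identifier)
    d = manager.setup()  # opens lbryfiles.db and restores previously known LBRY Files

    def _print_reports(reports):
        for report in reports:
            print report.name, report.num_completed, "/", report.num_known, report.running_status
        return manager

    d.addCallback(lambda _: manager.get_lbry_file_status_reports())
    d.addCallback(_print_reports)
    return d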
6  lbrynet/lbryfilemanager/LBRYFileStatusReport.py  (new file)
@@ -0,0 +1,6 @@
class LBRYFileStatusReport(object):
|
||||
def __init__(self, name, num_completed, num_known, running_status):
|
||||
self.name = name
|
||||
self.num_completed = num_completed
|
||||
self.num_known = num_known
|
||||
self.running_status = running_status
7  lbrynet/lbryfilemanager/__init__.py  (new file)
@@ -0,0 +1,7 @@
"""
|
||||
Classes and functions used to create and download LBRY Files.
|
||||
|
||||
LBRY Files are Crypt Streams created from any regular file. The whole file is read
|
||||
at the time that the LBRY File is created, so all constituent blobs are known and
|
||||
included in the stream descriptor file.
|
||||
"""
117  lbrynet/lbrylive/LBRYStdinUploader.py  (new file)
@@ -0,0 +1,117 @@
import logging
|
||||
import sys
|
||||
from lbrynet.lbrylive.LiveStreamCreator import StdOutLiveStreamCreator
|
||||
from lbrynet.core.BlobManager import TempBlobManager
|
||||
from lbrynet.core.Session import LBRYSession
|
||||
from lbrynet.core.server.BlobAvailabilityHandler import BlobAvailabilityHandlerFactory
|
||||
from lbrynet.core.server.BlobRequestHandler import BlobRequestHandlerFactory
|
||||
from lbrynet.core.server.ServerProtocol import ServerProtocolFactory
|
||||
from lbrynet.lbrylive.PaymentRateManager import BaseLiveStreamPaymentRateManager
|
||||
from lbrynet.lbrylive.LiveStreamMetadataManager import DBLiveStreamMetadataManager
|
||||
from lbrynet.lbrylive.server.LiveBlobInfoQueryHandler import CryptBlobInfoQueryHandlerFactory
|
||||
from lbrynet.dht.node import Node
from lbrynet.conf import MIN_BLOB_INFO_PAYMENT_RATE
|
||||
from twisted.internet import defer, task
|
||||
|
||||
|
||||
class LBRYStdinUploader():
|
||||
"""This class reads from standard in, creates a stream, and makes it available on the network."""
|
||||
def __init__(self, peer_port, dht_node_port, known_dht_nodes):
|
||||
"""
|
||||
@param peer_port: the network port on which to listen for peers
|
||||
|
||||
@param dht_node_port: the network port on which to listen for nodes in the DHT
|
||||
|
||||
@param known_dht_nodes: a list of (ip_address, dht_port) which will be used to join the DHT network
|
||||
"""
|
||||
self.peer_port = peer_port
|
||||
self.lbry_server_port = None
|
||||
self.session = LBRYSession(blob_manager_class=TempBlobManager,
|
||||
stream_info_manager_class=DBLiveStreamMetadataManager,
|
||||
dht_node_class=Node, dht_node_port=dht_node_port,
|
||||
known_dht_nodes=known_dht_nodes, peer_port=self.peer_port,
|
||||
use_upnp=False)
|
||||
self.payment_rate_manager = BaseLiveStreamPaymentRateManager(MIN_BLOB_INFO_PAYMENT_RATE)
|
||||
|
||||
def start(self):
|
||||
"""Initialize the session and start listening on the peer port"""
|
||||
d = self.session.setup()
|
||||
d.addCallback(lambda _: self._start())
|
||||
|
||||
return d
|
||||
|
||||
def _start(self):
|
||||
self._start_server()
|
||||
return True
|
||||
|
||||
def _start_server(self):
|
||||
query_handler_factories = [
|
||||
CryptBlobInfoQueryHandlerFactory(self.stream_info_manager, self.session.wallet,
|
||||
self.payment_rate_manager),
|
||||
BlobAvailabilityHandlerFactory(self.session.blob_manager),
|
||||
BlobRequestHandlerFactory(self.session.blob_manager, self.session.wallet,
|
||||
self.payment_rate_manager),
|
||||
self.session.wallet.get_wallet_info_query_handler_factory()
|
||||
]
|
||||
|
||||
self.server_factory = ServerProtocolFactory(self.session.rate_limiter,
|
||||
query_handler_factories,
|
||||
self.session.peer_manager)
|
||||
from twisted.internet import reactor
|
||||
self.lbry_server_port = reactor.listenTCP(self.peer_port, self.server_factory)
|
||||
|
||||
def start_live_stream(self, stream_name):
|
||||
"""Create the stream and start reading from stdin
|
||||
|
||||
@param stream_name: a string, the suggested name of this stream
|
||||
"""
|
||||
stream_creator_helper = StdOutLiveStreamCreator(stream_name, self.session.blob_manager,
|
||||
self.stream_info_manager)
|
||||
d = stream_creator_helper.create_and_publish_stream_descriptor()
|
||||
|
||||
def print_sd_hash(sd_hash):
|
||||
print "Stream descriptor hash:", sd_hash
|
||||
|
||||
d.addCallback(print_sd_hash)
|
||||
d.addCallback(lambda _: stream_creator_helper.start_streaming())
|
||||
return d
|
||||
|
||||
def shut_down(self):
|
||||
"""End the session and stop listening on the server port"""
|
||||
d = self.session.shut_down()
|
||||
d.addCallback(lambda _: self._shut_down())
|
||||
return d
|
||||
|
||||
def _shut_down(self):
|
||||
if self.lbry_server_port is not None:
|
||||
d = defer.maybeDeferred(self.lbry_server_port.stopListening)
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
return d
|
||||
|
||||
|
||||
def launch_stdin_uploader():
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
logging.basicConfig(level=logging.WARNING, filename="ul.log")
|
||||
if len(sys.argv) == 4:
|
||||
uploader = LBRYStdinUploader(int(sys.argv[2]), int(sys.argv[3]), [])
|
||||
elif len(sys.argv) == 6:
|
||||
uploader = LBRYStdinUploader(int(sys.argv[2]), int(sys.argv[3]), [(sys.argv[4], int(sys.argv[5]))])
|
||||
else:
|
||||
print "Usage: lbrynet-stdin-uploader <stream_name> <peer_port> <dht_node_port>" \
|
||||
" [<dht_bootstrap_host> <dht_bootstrap port>]"
|
||||
sys.exit(1)
|
||||
|
||||
def start_stdin_uploader():
|
||||
return uploader.start_live_stream(sys.argv[1])
|
||||
|
||||
def shut_down():
|
||||
logging.debug("Telling the reactor to stop in 60 seconds")
|
||||
reactor.callLater(60, reactor.stop)
|
||||
|
||||
d = task.deferLater(reactor, 0, uploader.start)
|
||||
d.addCallback(lambda _: start_stdin_uploader())
|
||||
d.addCallback(lambda _: shut_down())
|
||||
reactor.addSystemEventTrigger('before', 'shutdown', uploader.shut_down)
|
||||
reactor.run()
96  lbrynet/lbrylive/LBRYStdoutDownloader.py  (new file)
@@ -0,0 +1,96 @@
import logging
|
||||
import sys
|
||||
|
||||
from lbrynet.lbrynet_console.plugins.LBRYLive.LBRYLiveStreamDownloader import LBRYLiveStreamDownloader
|
||||
from lbrynet.core.BlobManager import TempBlobManager
|
||||
from lbrynet.core.Session import LBRYSession
|
||||
from lbrynet.core.client.StandaloneBlobDownloader import StandaloneBlobDownloader
|
||||
from lbrynet.core.StreamDescriptor import BlobStreamDescriptorReader
|
||||
from lbrynet.lbrylive.PaymentRateManager import BaseLiveStreamPaymentRateManager
|
||||
from lbrynet.lbrylive.LiveStreamMetadataManager import DBLiveStreamMetadataManager
|
||||
from lbrynet.lbrylive.StreamDescriptor import save_sd_info
|
||||
from lbrynet.dht.node import Node
from lbrynet.conf import MIN_BLOB_INFO_PAYMENT_RATE
|
||||
from twisted.internet import task
|
||||
|
||||
|
||||
class LBRYStdoutDownloader():
|
||||
"""This class downloads a live stream from the network and outputs it to standard out."""
|
||||
def __init__(self, dht_node_port, known_dht_nodes):
|
||||
"""
|
||||
@param dht_node_port: the network port on which to listen for DHT node requests
|
||||
|
||||
@param known_dht_nodes: a list of (ip_address, dht_port) which will be used to join the DHT network
|
||||
|
||||
"""
|
||||
self.session = LBRYSession(blob_manager_class=TempBlobManager,
|
||||
stream_info_manager_class=DBLiveStreamMetadataManager,
|
||||
dht_node_class=Node, dht_node_port=dht_node_port, known_dht_nodes=known_dht_nodes,
|
||||
use_upnp=False)
|
||||
self.payment_rate_manager = BaseLiveStreamPaymentRateManager(MIN_BLOB_INFO_PAYMENT_RATE)
|
||||
|
||||
def start(self):
|
||||
"""Initialize the session"""
|
||||
d = self.session.setup()
|
||||
return d
|
||||
|
||||
def read_sd_file(self, sd_blob):
|
||||
reader = BlobStreamDescriptorReader(sd_blob)
|
||||
return save_sd_info(self.stream_info_manager, reader, ignore_duplicate=True)
|
||||
|
||||
def download_sd_file_from_hash(self, sd_hash):
|
||||
downloader = StandaloneBlobDownloader(sd_hash, self.session.blob_manager,
|
||||
self.session.peer_finder, self.session.rate_limiter,
|
||||
self.session.wallet)
|
||||
d = downloader.download()
|
||||
return d
|
||||
|
||||
def start_download(self, sd_hash):
|
||||
"""Start downloading the stream from the network and outputting it to standard out"""
|
||||
d = self.download_sd_file_from_hash(sd_hash)
|
||||
d.addCallbacks(self.read_sd_file)
|
||||
|
||||
def start_stream(stream_hash):
|
||||
consumer = LBRYLiveStreamDownloader(stream_hash, self.session.peer_finder,
self.session.rate_limiter, self.session.blob_manager,
self.stream_info_manager, self.payment_rate_manager,
self.session.wallet, False)  # upload_allowed is required by the constructor; False assumed for a stdout consumer
|
||||
return consumer.start()
|
||||
|
||||
d.addCallback(start_stream)
|
||||
return d
|
||||
|
||||
def shut_down(self):
|
||||
"""End the session"""
|
||||
d = self.session.shut_down()
|
||||
return d
|
||||
|
||||
|
||||
def launch_stdout_downloader():
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
logging.basicConfig(level=logging.WARNING, filename="dl.log")
|
||||
if len(sys.argv) == 3:
|
||||
downloader = LBRYStdoutDownloader(int(sys.argv[2]), [])
|
||||
elif len(sys.argv) == 5:
|
||||
downloader = LBRYStdoutDownloader(int(sys.argv[2]), [(sys.argv[3], int(sys.argv[4]))])
|
||||
else:
|
||||
print "Usage: lbrynet-stdout-downloader <sd_hash> <peer_port> <dht_node_port>" \
|
||||
" [<dht_bootstrap_host> <dht_bootstrap port>]"
|
||||
sys.exit(1)
|
||||
|
||||
def start_stdout_downloader():
|
||||
return downloader.start_download(sys.argv[1])
|
||||
|
||||
def print_error(err):
|
||||
logging.warning(err.getErrorMessage())
|
||||
|
||||
def shut_down():
|
||||
reactor.stop()
|
||||
|
||||
d = task.deferLater(reactor, 0, downloader.start)
|
||||
d.addCallback(lambda _: start_stdout_downloader())
|
||||
d.addErrback(print_error)
|
||||
d.addCallback(lambda _: shut_down())
|
||||
reactor.addSystemEventTrigger('before', 'shutdown', downloader.shut_down)
|
||||
reactor.run()
23  lbrynet/lbrylive/LiveBlob.py  (new file)
@@ -0,0 +1,23 @@
from lbrynet.cryptstream.CryptBlob import CryptStreamBlobMaker, CryptBlobInfo
|
||||
import binascii
|
||||
|
||||
|
||||
class LiveBlobInfo(CryptBlobInfo):
|
||||
def __init__(self, blob_hash, blob_num, length, iv, revision, signature):
|
||||
CryptBlobInfo.__init__(self, blob_hash, blob_num, length, iv)
|
||||
self.revision = revision
|
||||
self.signature = signature
|
||||
|
||||
|
||||
class LiveStreamBlobMaker(CryptStreamBlobMaker):
|
||||
def __init__(self, key, iv, blob_num, blob):
|
||||
CryptStreamBlobMaker.__init__(self, key, iv, blob_num, blob)
|
||||
# The following is a placeholder for a currently unimplemented feature.
|
||||
# In the future it may be possible for the live stream creator to overwrite a blob
|
||||
# with a newer revision. If that happens, the 0 will be incremented to the
|
||||
# actual revision count
|
||||
self.revision = 0
|
||||
|
||||
def _return_info(self, blob_hash):
|
||||
return LiveBlobInfo(blob_hash, self.blob_num, self.length, binascii.hexlify(self.iv),
|
||||
self.revision, None)
189  lbrynet/lbrylive/LiveStreamCreator.py  (new file)
@@ -0,0 +1,189 @@
from lbrynet.core.StreamDescriptor import BlobStreamDescriptorWriter
|
||||
from lbrynet.lbrylive.StreamDescriptor import get_sd_info, LiveStreamType, LBRYLiveStreamDescriptorValidator
|
||||
from lbrynet.cryptstream.CryptStreamCreator import CryptStreamCreator
|
||||
from lbrynet.lbrylive.LiveBlob import LiveStreamBlobMaker
|
||||
from lbrynet.lbrylive.PaymentRateManager import BaseLiveStreamPaymentRateManager
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj, get_pub_key, sign_with_pass_phrase
|
||||
from Crypto import Random
|
||||
import binascii
|
||||
import logging
|
||||
from lbrynet.conf import CRYPTSD_FILE_EXTENSION
|
||||
from lbrynet.conf import MIN_BLOB_INFO_PAYMENT_RATE
|
||||
from lbrynet.lbrylive.client.LiveStreamDownloader import FullLiveStreamDownloaderFactory
|
||||
from twisted.internet import interfaces, defer
|
||||
from twisted.protocols.basic import FileSender
|
||||
from zope.interface import implements
|
||||
|
||||
|
||||
class LiveStreamCreator(CryptStreamCreator):
|
||||
def __init__(self, blob_manager, stream_info_manager, name=None, key=None, iv_generator=None,
|
||||
delete_after_num=None, secret_pass_phrase=None):
|
||||
CryptStreamCreator.__init__(self, blob_manager, name, key, iv_generator)
|
||||
self.stream_hash = None
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.delete_after_num = delete_after_num
|
||||
self.secret_pass_phrase = secret_pass_phrase
|
||||
self.file_extension = CRYPTSD_FILE_EXTENSION
|
||||
self.finished_blob_hashes = {}
|
||||
|
||||
def _save_stream(self):
|
||||
d = self.stream_info_manager.save_stream(self.stream_hash, get_pub_key(self.secret_pass_phrase),
|
||||
binascii.hexlify(self.name), binascii.hexlify(self.key),
|
||||
[])
|
||||
return d
|
||||
|
||||
def _blob_finished(self, blob_info):
|
||||
logging.debug("In blob_finished")
|
||||
logging.debug("length: %s", str(blob_info.length))
|
||||
sig_hash = get_lbry_hash_obj()
|
||||
sig_hash.update(self.stream_hash)
|
||||
if blob_info.length != 0:
|
||||
sig_hash.update(blob_info.blob_hash)
|
||||
sig_hash.update(str(blob_info.blob_num))
|
||||
sig_hash.update(str(blob_info.revision))
|
||||
sig_hash.update(blob_info.iv)
|
||||
sig_hash.update(str(blob_info.length))
|
||||
signature = sign_with_pass_phrase(sig_hash.digest(), self.secret_pass_phrase)
|
||||
blob_info.signature = signature
|
||||
self.finished_blob_hashes[blob_info.blob_num] = blob_info.blob_hash
|
||||
if self.delete_after_num is not None:
|
||||
self._delete_old_blobs(blob_info.blob_num)
|
||||
d = self.stream_info_manager.add_blobs_to_stream(self.stream_hash, [blob_info])
|
||||
|
||||
def log_add_error(err):
|
||||
logging.error("An error occurred adding a blob info to the stream info manager: %s", err.getErrorMessage())
|
||||
return err
|
||||
|
||||
d.addErrback(log_add_error)
|
||||
logging.debug("returning from blob_finished")
|
||||
return d
|
||||
|
||||
def setup(self):
|
||||
"""Create the secret pass phrase if it wasn't provided, compute the stream hash,
|
||||
save the stream to the stream info manager, and return the stream hash
|
||||
"""
|
||||
if self.secret_pass_phrase is None:
|
||||
self.secret_pass_phrase = Random.new().read(512)
|
||||
|
||||
d = CryptStreamCreator.setup(self)
|
||||
|
||||
def make_stream_hash():
|
||||
hashsum = get_lbry_hash_obj()
|
||||
hashsum.update(binascii.hexlify(self.name))
|
||||
hashsum.update(get_pub_key(self.secret_pass_phrase))
|
||||
hashsum.update(binascii.hexlify(self.key))
|
||||
self.stream_hash = hashsum.hexdigest()
|
||||
return self.stream_hash
|
||||
|
||||
d.addCallback(lambda _: make_stream_hash())
|
||||
d.addCallback(lambda _: self._save_stream())
|
||||
d.addCallback(lambda _: self.stream_hash)
|
||||
return d
|
||||
|
||||
def publish_stream_descriptor(self):
|
||||
descriptor_writer = BlobStreamDescriptorWriter(self.blob_manager)
|
||||
d = get_sd_info(self.stream_info_manager, self.stream_hash, False)
|
||||
d.addCallback(descriptor_writer.create_descriptor)
|
||||
return d
|
||||
|
||||
def _delete_old_blobs(self, newest_blob_num):
|
||||
assert self.delete_after_num is not None, "_delete_old_blobs called with delete_after_num=None"
|
||||
oldest_to_keep = newest_blob_num - self.delete_after_num + 1
|
||||
nums_to_delete = [num for num in self.finished_blob_hashes.iterkeys() if num < oldest_to_keep]
|
||||
for num in nums_to_delete:
|
||||
self.blob_manager.delete_blobs([self.finished_blob_hashes[num]])
|
||||
del self.finished_blob_hashes[num]
|
||||
|
||||
def _get_blob_maker(self, iv, blob_creator):
|
||||
return LiveStreamBlobMaker(self.key, iv, self.blob_count, blob_creator)
|
||||
|
||||
|
||||
class StdOutLiveStreamCreator(LiveStreamCreator):
|
||||
def __init__(self, stream_name, blob_manager, stream_info_manager):
|
||||
LiveStreamCreator.__init__(self, blob_manager, stream_info_manager, stream_name,
|
||||
delete_after_num=20)
|
||||
|
||||
def start_streaming(self):
|
||||
stdin_producer = StdinStreamProducer(self)
|
||||
d = stdin_producer.begin_producing()
|
||||
|
||||
def stop_stream():
|
||||
d = self.stop()
|
||||
return d
|
||||
|
||||
d.addCallback(lambda _: stop_stream())
|
||||
return d
|
||||
|
||||
|
||||
class FileLiveStreamCreator(LiveStreamCreator):
|
||||
def __init__(self, blob_manager, stream_info_manager, file_name, file_handle,
|
||||
secret_pass_phrase=None, key=None, iv_generator=None, stream_name=None):
|
||||
if stream_name is None:
|
||||
stream_name = file_name
|
||||
# pass these by keyword so they line up with LiveStreamCreator.__init__'s signature
LiveStreamCreator.__init__(self, blob_manager, stream_info_manager, stream_name,
key=key, iv_generator=iv_generator, secret_pass_phrase=secret_pass_phrase)
|
||||
self.file_name = file_name
|
||||
self.file_handle = file_handle
|
||||
|
||||
def start_streaming(self):
|
||||
file_sender = FileSender()
|
||||
d = file_sender.beginFileTransfer(self.file_handle, self)
|
||||
|
||||
def stop_stream():
|
||||
d = self.stop()
|
||||
return d
|
||||
|
||||
d.addCallback(lambda _: stop_stream())
|
||||
return d
|
||||
|
||||
|
||||
class StdinStreamProducer(object):
|
||||
"""This class reads data from standard in and sends it to a stream creator"""
|
||||
|
||||
implements(interfaces.IPushProducer)
|
||||
|
||||
def __init__(self, consumer):
|
||||
self.consumer = consumer
|
||||
self.reader = None
|
||||
self.finished_deferred = None
|
||||
|
||||
def begin_producing(self):
|
||||
|
||||
self.finished_deferred = defer.Deferred()
|
||||
self.consumer.registerProducer(self, True)
|
||||
#self.reader = process.ProcessReader(reactor, self, 'read', 0)
|
||||
self.resumeProducing()
|
||||
return self.finished_deferred
|
||||
|
||||
def resumeProducing(self):
|
||||
if self.reader is not None:
|
||||
self.reader.resumeProducing()
|
||||
|
||||
def stopProducing(self):
|
||||
if self.reader is not None:
|
||||
self.reader.stopReading()
|
||||
self.consumer.unregisterProducer()
|
||||
self.finished_deferred.callback(True)
|
||||
|
||||
def pauseProducing(self):
|
||||
if self.reader is not None:
|
||||
self.reader.pauseProducing()
|
||||
|
||||
def childDataReceived(self, fd, data):
|
||||
self.consumer.write(data)
|
||||
|
||||
def childConnectionLost(self, fd, reason):
|
||||
self.stopProducing()
|
||||
|
||||
|
||||
def add_live_stream_to_sd_identifier(session, stream_info_manager, sd_identifier):
|
||||
downloader_factory = FullLiveStreamDownloaderFactory(session.peer_finder,
|
||||
session.rate_limiter,
|
||||
session.blob_manager,
|
||||
stream_info_manager,
|
||||
session.wallet,
|
||||
BaseLiveStreamPaymentRateManager(
|
||||
MIN_BLOB_INFO_PAYMENT_RATE
|
||||
))
|
||||
sd_identifier.add_stream_info_validator(LiveStreamType, LBRYLiveStreamDescriptorValidator)
|
||||
sd_identifier.add_stream_downloader_factory(LiveStreamType, downloader_factory)
328  lbrynet/lbrylive/LiveStreamMetadataManager.py  (new file)
@@ -0,0 +1,328 @@
import time
|
||||
import logging
|
||||
import leveldb
|
||||
import json
|
||||
import os
|
||||
from twisted.internet import threads, defer
|
||||
from lbrynet.core.server.DHTHashAnnouncer import DHTHashSupplier
|
||||
from lbrynet.core.Error import DuplicateStreamHashError
|
||||
|
||||
|
||||
class DBLiveStreamMetadataManager(DHTHashSupplier):
|
||||
"""This class stores all stream info in a leveldb database stored in the same directory as the blobfiles"""
|
||||
|
||||
def __init__(self, db_dir, hash_announcer):
|
||||
DHTHashSupplier.__init__(self, hash_announcer)
|
||||
self.db_dir = db_dir
|
||||
self.stream_info_db = None
|
||||
self.stream_blob_db = None
|
||||
self.stream_desc_db = None
|
||||
|
||||
def setup(self):
|
||||
return threads.deferToThread(self._open_db)
|
||||
|
||||
def stop(self):
|
||||
self.stream_info_db = None
|
||||
self.stream_blob_db = None
|
||||
self.stream_desc_db = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_all_streams(self):
|
||||
return threads.deferToThread(self._get_all_streams)
|
||||
|
||||
def save_stream(self, stream_hash, pub_key, file_name, key, blobs):
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
d = threads.deferToThread(self._store_stream, stream_hash, pub_key, file_name, key,
|
||||
next_announce_time=next_announce_time)
|
||||
|
||||
def save_blobs():
|
||||
return self.add_blobs_to_stream(stream_hash, blobs)
|
||||
|
||||
def announce_have_stream():
|
||||
if self.hash_announcer is not None:
|
||||
self.hash_announcer.immediate_announce([stream_hash])
|
||||
return stream_hash
|
||||
|
||||
d.addCallback(lambda _: save_blobs())
|
||||
d.addCallback(lambda _: announce_have_stream())
|
||||
return d
|
||||
|
||||
def get_stream_info(self, stream_hash):
|
||||
return threads.deferToThread(self._get_stream_info, stream_hash)
|
||||
|
||||
def check_if_stream_exists(self, stream_hash):
|
||||
return threads.deferToThread(self._check_if_stream_exists, stream_hash)
|
||||
|
||||
def delete_stream(self, stream_hash):
|
||||
return threads.deferToThread(self._delete_stream, stream_hash)
|
||||
|
||||
def add_blobs_to_stream(self, stream_hash, blobs):
|
||||
|
||||
def add_blobs():
|
||||
self._add_blobs_to_stream(stream_hash, blobs, ignore_duplicate_error=True)
|
||||
|
||||
return threads.deferToThread(add_blobs)
|
||||
|
||||
def get_blobs_for_stream(self, stream_hash, start_blob=None, end_blob=None, count=None, reverse=False):
|
||||
logging.info("Getting blobs for a stream. Count is %s", str(count))
|
||||
|
||||
def get_positions_of_start_and_end():
|
||||
if start_blob is not None:
|
||||
start_num = self._get_blob_num_by_hash(stream_hash, start_blob)
|
||||
else:
|
||||
start_num = None
|
||||
if end_blob is not None:
|
||||
end_num = self._get_blob_num_by_hash(stream_hash, end_blob)
|
||||
else:
|
||||
end_num = None
|
||||
return start_num, end_num
|
||||
|
||||
def get_blob_infos(nums):
|
||||
start_num, end_num = nums
|
||||
return threads.deferToThread(self._get_further_blob_infos, stream_hash, start_num, end_num,
|
||||
count, reverse)
|
||||
|
||||
d = threads.deferToThread(get_positions_of_start_and_end)
|
||||
d.addCallback(get_blob_infos)
|
||||
return d
|
||||
|
||||
def get_stream_of_blob(self, blob_hash):
|
||||
return threads.deferToThread(self._get_stream_of_blobhash, blob_hash)
|
||||
|
||||
def save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
return threads.deferToThread(self._save_sd_blob_hash_to_stream, stream_hash, sd_blob_hash)
|
||||
|
||||
def get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return threads.deferToThread(self._get_sd_blob_hashes_for_stream, stream_hash)
|
||||
|
||||
def hashes_to_announce(self):
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
return threads.deferToThread(self._get_streams_to_announce, next_announce_time)
|
||||
|
||||
######### database calls #########
|
||||
|
||||
def _open_db(self):
|
||||
self.stream_info_db = leveldb.LevelDB(os.path.join(self.db_dir, "stream_info.db"))
|
||||
self.stream_blob_db = leveldb.LevelDB(os.path.join(self.db_dir, "stream_blob.db"))
|
||||
self.stream_desc_db = leveldb.LevelDB(os.path.join(self.db_dir, "stream_desc.db"))
|
||||
|
||||
def _delete_stream(self, stream_hash):
|
||||
desc_batch = leveldb.WriteBatch()
|
||||
for sd_blob_hash, s_h in self.stream_desc_db.RangeIter():
|
||||
if stream_hash == s_h:
|
||||
desc_batch.Delete(sd_blob_hash)
|
||||
self.stream_desc_db.Write(desc_batch, sync=True)
|
||||
|
||||
blob_batch = leveldb.WriteBatch()
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if stream_hash == s_h:
|
||||
blob_batch.Delete(blob_hash_stream_hash)
|
||||
self.stream_blob_db.Write(blob_batch, sync=True)
|
||||
|
||||
stream_batch = leveldb.WriteBatch()
|
||||
for s_h, stream_info in self.stream_info_db.RangeIter():
|
||||
if stream_hash == s_h:
|
||||
stream_batch.Delete(s_h)
|
||||
self.stream_info_db.Write(stream_batch, sync=True)
|
||||
|
||||
def _store_stream(self, stream_hash, public_key, name, key, next_announce_time=None):
|
||||
try:
|
||||
self.stream_info_db.Get(stream_hash)
|
||||
raise DuplicateStreamHashError("Stream hash %s already exists" % stream_hash)
|
||||
except KeyError:
|
||||
pass
|
||||
self.stream_info_db.Put(stream_hash, json.dumps((public_key, key, name, next_announce_time)), sync=True)
|
||||
|
||||
def _get_all_streams(self):
|
||||
return [stream_hash for stream_hash, stream_info in self.stream_info_db.RangeIter()]
|
||||
|
||||
def _get_stream_info(self, stream_hash):
|
||||
return json.loads(self.stream_info_db.Get(stream_hash))[:3]
|
||||
|
||||
def _check_if_stream_exists(self, stream_hash):
|
||||
try:
|
||||
self.stream_info_db.Get(stream_hash)
|
||||
return True
|
||||
except KeyError:
|
||||
return False
|
||||
|
||||
def _get_streams_to_announce(self, next_announce_time):
|
||||
# TODO: See if the following would be better for handling announce times:
|
||||
# TODO: Have a separate db for them, and read the whole thing into memory
|
||||
# TODO: on startup, and then write changes to db when they happen
|
||||
stream_hashes = []
|
||||
batch = leveldb.WriteBatch()
|
||||
current_time = time.time()
|
||||
for stream_hash, stream_info in self.stream_info_db.RangeIter():
|
||||
public_key, key, name, announce_time = json.loads(stream_info)
|
||||
if announce_time < current_time:
|
||||
batch.Put(stream_hash, json.dumps((public_key, key, name, next_announce_time)))
|
||||
stream_hashes.append(stream_hash)
|
||||
self.stream_info_db.Write(batch, sync=True)
|
||||
return stream_hashes
|
||||
|
||||
def _get_blob_num_by_hash(self, stream_hash, blob_hash):
|
||||
blob_hash_stream_hash = json.dumps((blob_hash, stream_hash))
|
||||
return json.loads(self.stream_blob_db.Get(blob_hash_stream_hash))[0]
|
||||
|
||||
def _get_further_blob_infos(self, stream_hash, start_num, end_num, count=None, reverse=False):
|
||||
blob_infos = []
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if stream_hash == s_h:
|
||||
position, revision, iv, length, signature = json.loads(blob_info)
|
||||
if (start_num is None) or (position > start_num):
|
||||
if (end_num is None) or (position < end_num):
|
||||
blob_infos.append((b_h, position, revision, iv, length, signature))
|
||||
blob_infos.sort(key=lambda i: i[1], reverse=reverse)
|
||||
if count is not None:
|
||||
blob_infos = blob_infos[:count]
|
||||
return blob_infos
|
||||
|
||||
def _add_blobs_to_stream(self, stream_hash, blob_infos, ignore_duplicate_error=False):
|
||||
batch = leveldb.WriteBatch()
|
||||
for blob_info in blob_infos:
|
||||
blob_hash_stream_hash = json.dumps((blob_info.blob_hash, stream_hash))
|
||||
try:
|
||||
self.stream_blob_db.Get(blob_hash_stream_hash)
|
||||
if ignore_duplicate_error is False:
|
||||
raise KeyError() # TODO: change this to DuplicateStreamBlobError?
|
||||
continue
|
||||
except KeyError:
|
||||
pass
|
||||
batch.Put(blob_hash_stream_hash,
|
||||
json.dumps((blob_info.blob_num,
|
||||
blob_info.revision,
|
||||
blob_info.iv,
|
||||
blob_info.length,
|
||||
blob_info.signature)))
|
||||
self.stream_blob_db.Write(batch, sync=True)
|
||||
|
||||
def _get_stream_of_blobhash(self, blob_hash):
|
||||
for blob_hash_stream_hash, blob_info in self.stream_blob_db.RangeIter():
|
||||
b_h, s_h = json.loads(blob_hash_stream_hash)
|
||||
if blob_hash == b_h:
|
||||
return s_h
|
||||
return None
|
||||
|
||||
def _save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
self.stream_desc_db.Put(sd_blob_hash, stream_hash)
|
||||
|
||||
def _get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return [sd_blob_hash for sd_blob_hash, s_h in self.stream_desc_db.RangeIter() if stream_hash == s_h]
|
||||
|
||||
|
||||
class TempLiveStreamMetadataManager(DHTHashSupplier):
|
||||
|
||||
def __init__(self, hash_announcer):
|
||||
DHTHashSupplier.__init__(self, hash_announcer)
|
||||
self.streams = {}
|
||||
self.stream_blobs = {}
|
||||
self.stream_desc = {}
|
||||
|
||||
def setup(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_all_streams(self):
|
||||
return defer.succeed(self.streams.keys())
|
||||
|
||||
def save_stream(self, stream_hash, pub_key, file_name, key, blobs):
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
self.streams[stream_hash] = {'public_key': pub_key, 'stream_name': file_name,
|
||||
'key': key, 'next_announce_time': next_announce_time}
|
||||
d = self.add_blobs_to_stream(stream_hash, blobs)
|
||||
|
||||
def announce_have_stream():
|
||||
if self.hash_announcer is not None:
|
||||
self.hash_announcer.immediate_announce([stream_hash])
|
||||
return stream_hash
|
||||
|
||||
d.addCallback(lambda _: announce_have_stream())
|
||||
return d
|
||||
|
||||
def get_stream_info(self, stream_hash):
|
||||
if stream_hash in self.streams:
|
||||
stream_info = self.streams[stream_hash]
|
||||
return defer.succeed([stream_info['public_key'], stream_info['key'], stream_info['stream_name']])
|
||||
return defer.succeed(None)
|
||||
|
||||
def delete_stream(self, stream_hash):
|
||||
if stream_hash in self.streams:
|
||||
del self.streams[stream_hash]
|
||||
for (s_h, b_h) in self.stream_blobs.keys():
|
||||
if s_h == stream_hash:
|
||||
del self.stream_blobs[(s_h, b_h)]
|
||||
return defer.succeed(True)
|
||||
|
||||
def add_blobs_to_stream(self, stream_hash, blobs):
|
||||
assert stream_hash in self.streams, "Can't add blobs to a stream that isn't known"
|
||||
for blob in blobs:
|
||||
info = {}
|
||||
info['blob_num'] = blob.blob_num
|
||||
info['length'] = blob.length
|
||||
info['iv'] = blob.iv
|
||||
info['revision'] = blob.revision
|
||||
info['signature'] = blob.signature
|
||||
self.stream_blobs[(stream_hash, blob.blob_hash)] = info
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_blobs_for_stream(self, stream_hash, start_blob=None, end_blob=None, count=None, reverse=False):
|
||||
|
||||
if start_blob is not None:
|
||||
start_num = self._get_blob_num_by_hash(stream_hash, start_blob)
|
||||
else:
|
||||
start_num = None
|
||||
if end_blob is not None:
|
||||
end_num = self._get_blob_num_by_hash(stream_hash, end_blob)
|
||||
else:
|
||||
end_num = None
|
||||
return self._get_further_blob_infos(stream_hash, start_num, end_num, count, reverse)
|
||||
|
||||
def get_stream_of_blob(self, blob_hash):
|
||||
for (s_h, b_h) in self.stream_blobs.iterkeys():
|
||||
if b_h == blob_hash:
|
||||
return defer.succeed(s_h)
|
||||
return defer.succeed(None)
|
||||
|
||||
def _get_further_blob_infos(self, stream_hash, start_num, end_num, count=None, reverse=False):
|
||||
blob_infos = []
|
||||
for (s_h, b_h), info in self.stream_blobs.iteritems():
|
||||
if stream_hash == s_h:
|
||||
position = info['blob_num']
|
||||
length = info['length']
|
||||
iv = info['iv']
|
||||
revision = info['revision']
|
||||
signature = info['signature']
|
||||
if (start_num is None) or (position > start_num):
|
||||
if (end_num is None) or (position < end_num):
|
||||
blob_infos.append((b_h, position, revision, iv, length, signature))
|
||||
blob_infos.sort(key=lambda i: i[1], reverse=reverse)
|
||||
if count is not None:
|
||||
blob_infos = blob_infos[:count]
|
||||
return defer.succeed(blob_infos)
|
||||
|
||||
def _get_blob_num_by_hash(self, stream_hash, blob_hash):
|
||||
if (stream_hash, blob_hash) in self.stream_blobs:
|
||||
return self.stream_blobs[(stream_hash, blob_hash)]['blob_num']
|
||||
|
||||
def save_sd_blob_hash_to_stream(self, stream_hash, sd_blob_hash):
|
||||
self.stream_desc[sd_blob_hash] = stream_hash
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_sd_blob_hashes_for_stream(self, stream_hash):
|
||||
return defer.succeed([sd_hash for sd_hash, s_h in self.stream_desc.iteritems() if s_h == stream_hash])
|
||||
|
||||
def hashes_to_announce(self):
|
||||
next_announce_time = time.time() + self.hash_reannounce_time
|
||||
stream_hashes = []
|
||||
current_time = time.time()
|
||||
for stream_hash, stream_info in self.streams.iteritems():
|
||||
announce_time = stream_info['next_announce_time']
if announce_time < current_time:
self.streams[stream_hash]['next_announce_time'] = next_announce_time
|
||||
stream_hashes.append(stream_hash)
|
||||
return stream_hashes
45  lbrynet/lbrylive/PaymentRateManager.py  (new file)
@@ -0,0 +1,45 @@
class BaseLiveStreamPaymentRateManager(object):
|
||||
def __init__(self, blob_info_rate, blob_data_rate=None):
|
||||
self.min_live_blob_info_payment_rate = blob_info_rate
|
||||
self.min_blob_data_payment_rate = blob_data_rate
|
||||
|
||||
|
||||
class LiveStreamPaymentRateManager(object):
|
||||
def __init__(self, base_live_stream_payment_rate_manager, payment_rate_manager,
|
||||
blob_info_rate=None, blob_data_rate=None):
|
||||
self._base_live_stream_payment_rate_manager = base_live_stream_payment_rate_manager
|
||||
self._payment_rate_manager = payment_rate_manager
|
||||
self.min_live_blob_info_payment_rate = blob_info_rate
|
||||
self.min_blob_data_payment_rate = blob_data_rate
|
||||
self.points_paid = 0.0
|
||||
|
||||
def get_rate_live_blob_info(self, peer):
|
||||
return self.get_effective_min_live_blob_info_payment_rate()
|
||||
|
||||
def accept_rate_live_blob_info(self, peer, payment_rate):
|
||||
return payment_rate >= self.get_effective_min_live_blob_info_payment_rate()
|
||||
|
||||
def get_rate_blob_data(self, peer):
|
||||
return self.get_effective_min_blob_data_payment_rate()
|
||||
|
||||
def accept_rate_blob_data(self, peer, payment_rate):
|
||||
return payment_rate >= self.get_effective_min_blob_data_payment_rate()
|
||||
|
||||
def get_effective_min_blob_data_payment_rate(self):
|
||||
rate = self.min_blob_data_payment_rate
|
||||
if rate is None:
|
||||
rate = self._payment_rate_manager.min_blob_data_payment_rate
|
||||
if rate is None:
|
||||
rate = self._base_live_stream_payment_rate_manager.min_blob_data_payment_rate
|
||||
if rate is None:
|
||||
rate = self._payment_rate_manager.get_effective_min_blob_data_payment_rate()
|
||||
return rate
|
||||
|
||||
def get_effective_min_live_blob_info_payment_rate(self):
|
||||
rate = self.min_live_blob_info_payment_rate
|
||||
if rate is None:
|
||||
rate = self._base_live_stream_payment_rate_manager.min_live_blob_info_payment_rate
|
||||
return rate
|
||||
|
||||
def record_points_paid(self, amount):
|
||||
self.points_paid += amount
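Editorial note: a small sketch (made-up numbers, stub application manager) showing the fallback order that the two get_effective_* methods above implement.

from lbrynet.lbrylive.PaymentRateManager import BaseLiveStreamPaymentRateManager, LiveStreamPaymentRateManager

class _StubAppPaymentRateManager(object):
    # stand-in for the application's PaymentRateManager, used only for this illustration
    min_blob_data_payment_rate = None

    def get_effective_min_blob_data_payment_rate(self):
        return 0.25  # pretend application default

base = BaseLiveStreamPaymentRateManager(0.5)  # example min live blob info rate
live_prm = LiveStreamPaymentRateManager(base, _StubAppPaymentRateManager())

print live_prm.get_effective_min_blob_data_payment_rate()       # 0.25: falls through to the application default
print live_prm.get_effective_min_live_blob_info_payment_rate()  # 0.5: taken from the base live stream manager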
131  lbrynet/lbrylive/StreamDescriptor.py  (new file)
@@ -0,0 +1,131 @@
import binascii
|
||||
import logging
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj, verify_signature
|
||||
from twisted.internet import defer, threads
|
||||
from lbrynet.core.Error import DuplicateStreamHashError
|
||||
from lbrynet.lbrylive.LiveBlob import LiveBlobInfo
|
||||
from lbrynet.interfaces import IStreamDescriptorValidator
|
||||
from zope.interface import implements
|
||||
|
||||
|
||||
LiveStreamType = "lbrylive"
|
||||
|
||||
|
||||
def save_sd_info(stream_info_manager, sd_info, ignore_duplicate=False):
|
||||
logging.debug("Saving info for %s", str(sd_info['stream_name']))
|
||||
hex_stream_name = sd_info['stream_name']
|
||||
public_key = sd_info['public_key']
|
||||
key = sd_info['key']
|
||||
stream_hash = sd_info['stream_hash']
|
||||
raw_blobs = sd_info['blobs']
|
||||
crypt_blobs = []
|
||||
for blob in raw_blobs:
|
||||
length = blob['length']
|
||||
if length != 0:
|
||||
blob_hash = blob['blob_hash']
|
||||
else:
|
||||
blob_hash = None
|
||||
blob_num = blob['blob_num']
|
||||
revision = blob['revision']
|
||||
iv = blob['iv']
|
||||
signature = blob['signature']
|
||||
crypt_blobs.append(LiveBlobInfo(blob_hash, blob_num, length, iv, revision, signature))
|
||||
logging.debug("Trying to save stream info for %s", str(hex_stream_name))
|
||||
d = stream_info_manager.save_stream(stream_hash, public_key, hex_stream_name,
|
||||
key, crypt_blobs)
|
||||
|
||||
def check_if_duplicate(err):
|
||||
if ignore_duplicate is True:
|
||||
err.trap(DuplicateStreamHashError)
|
||||
|
||||
d.addErrback(check_if_duplicate)
|
||||
|
||||
d.addCallback(lambda _: stream_hash)
|
||||
return d
|
||||
|
||||
|
||||
def get_sd_info(stream_info_manager, stream_hash, include_blobs):
|
||||
d = stream_info_manager.get_stream_info(stream_hash)
|
||||
|
||||
def format_info(stream_info):
|
||||
fields = {}
|
||||
fields['stream_type'] = LiveStreamType
|
||||
fields['stream_name'] = stream_info[2]
|
||||
fields['public_key'] = stream_info[0]
|
||||
fields['key'] = stream_info[1]
|
||||
fields['stream_hash'] = stream_hash
|
||||
|
||||
def format_blobs(blobs):
|
||||
formatted_blobs = []
|
||||
for blob_hash, blob_num, revision, iv, length, signature in blobs:
|
||||
blob = {}
|
||||
if length != 0:
|
||||
blob['blob_hash'] = blob_hash
|
||||
blob['blob_num'] = blob_num
|
||||
blob['revision'] = revision
|
||||
blob['iv'] = iv
|
||||
blob['length'] = length
|
||||
blob['signature'] = signature
|
||||
formatted_blobs.append(blob)
|
||||
fields['blobs'] = formatted_blobs
|
||||
return fields
|
||||
|
||||
if include_blobs is True:
|
||||
d = stream_info_manager.get_blobs_for_stream(stream_hash)
|
||||
else:
|
||||
d = defer.succeed([])
|
||||
d.addCallback(format_blobs)
|
||||
return d
|
||||
|
||||
d.addCallback(format_info)
|
||||
return d
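Editorial note: for reference, save_sd_info above consumes (and get_sd_info produces) a dict shaped roughly as follows; every value shown is a fabricated placeholder.

example_sd_info = {
    'stream_type': 'lbrylive',
    'stream_name': '6578616d706c65',                 # hex-encoded stream name
    'public_key': '<public key bytes>',
    'key': '<hex-encoded stream key>',
    'stream_hash': '<hex digest over name, public key and key>',
    'blobs': [
        {'blob_hash': '<blob hash hex digest>', 'blob_num': 0, 'revision': 0,
         'iv': '<hex-encoded IV>', 'length': 2097152, 'signature': '<signature>'},
        # the terminating blob has length 0 and therefore no 'blob_hash' entry
        {'blob_num': 1, 'revision': 0, 'iv': '<hex-encoded IV>', 'length': 0,
         'signature': '<signature>'},
    ],
}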
|
||||
|
||||
|
||||
class LBRYLiveStreamDescriptorValidator(object):
|
||||
implements(IStreamDescriptorValidator)
|
||||
|
||||
def __init__(self, raw_info):
|
||||
self.raw_info = raw_info
|
||||
|
||||
def validate(self):
|
||||
logging.debug("Trying to validate stream descriptor for %s", str(self.raw_info['stream_name']))
|
||||
hex_stream_name = self.raw_info['stream_name']
|
||||
public_key = self.raw_info['public_key']
|
||||
key = self.raw_info['key']
|
||||
stream_hash = self.raw_info['stream_hash']
|
||||
h = get_lbry_hash_obj()
|
||||
h.update(hex_stream_name)
|
||||
h.update(public_key)
|
||||
h.update(key)
|
||||
if h.hexdigest() != stream_hash:
|
||||
raise ValueError("Stream hash does not match stream metadata")
|
||||
blobs = self.raw_info['blobs']
|
||||
|
||||
def check_blob_signatures():
|
||||
for blob in blobs:
|
||||
length = blob['length']
|
||||
if length != 0:
|
||||
blob_hash = blob['blob_hash']
|
||||
else:
|
||||
blob_hash = None
|
||||
blob_num = blob['blob_num']
|
||||
revision = blob['revision']
|
||||
iv = blob['iv']
|
||||
signature = blob['signature']
|
||||
hashsum = get_lbry_hash_obj()
|
||||
hashsum.update(stream_hash)
|
||||
if length != 0:
|
||||
hashsum.update(blob_hash)
|
||||
hashsum.update(str(blob_num))
|
||||
hashsum.update(str(revision))
|
||||
hashsum.update(iv)
|
||||
hashsum.update(str(length))
|
||||
if not verify_signature(hashsum.digest(), signature, public_key):
|
||||
raise ValueError("Invalid signature in stream descriptor")
|
||||
|
||||
return threads.deferToThread(check_blob_signatures)
|
||||
|
||||
def info_to_show(self):
|
||||
info = []
|
||||
info.append(("stream_name", binascii.unhexlify(self.raw_info.get("stream_name"))))
|
||||
return info
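Editorial note: the stream hash check in validate() mirrors the construction in LiveStreamCreator.setup(). A small sketch of that recomputation, using the repo's own hash helper; the function name is illustrative.

from lbrynet.core.cryptoutils import get_lbry_hash_obj

def compute_live_stream_hash(hex_stream_name, public_key, hex_key):
    # same field order used when the stream hash is first created and when it is validated
    hashsum = get_lbry_hash_obj()
    hashsum.update(hex_stream_name)
    hashsum.update(public_key)
    hashsum.update(hex_key)
    return hashsum.hexdigest()

# A descriptor is internally consistent when the recomputed digest matches its 'stream_hash' field:
#   compute_live_stream_hash(info['stream_name'], info['public_key'], info['key']) == info['stream_hash']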
0  lbrynet/lbrylive/__init__.py  (new file)
180  lbrynet/lbrylive/client/LiveStreamDownloader.py  (new file)
@@ -0,0 +1,180 @@
import binascii
|
||||
from lbrynet.core.DownloadOption import DownloadOption
|
||||
from lbrynet.cryptstream.client.CryptStreamDownloader import CryptStreamDownloader
|
||||
from zope.interface import implements
|
||||
from lbrynet.lbrylive.client.LiveStreamMetadataHandler import LiveStreamMetadataHandler
|
||||
from lbrynet.lbrylive.client.LiveStreamProgressManager import LiveStreamProgressManager
|
||||
import os
|
||||
from lbrynet.lbrylive.StreamDescriptor import save_sd_info
|
||||
from lbrynet.lbrylive.PaymentRateManager import LiveStreamPaymentRateManager
|
||||
from twisted.internet import defer, threads # , process
|
||||
from lbrynet.interfaces import IStreamDownloaderFactory
|
||||
|
||||
|
||||
class LiveStreamDownloader(CryptStreamDownloader):
|
||||
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
payment_rate_manager, wallet, upload_allowed):
|
||||
CryptStreamDownloader.__init__(self, peer_finder, rate_limiter, blob_manager,
|
||||
payment_rate_manager, wallet, upload_allowed)
|
||||
self.stream_hash = stream_hash
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.public_key = None
|
||||
|
||||
def set_stream_info(self):
|
||||
if self.public_key is None and self.key is None:
|
||||
|
||||
d = self.stream_info_manager.get_stream_info(self.stream_hash)
|
||||
|
||||
def set_stream_info(stream_info):
|
||||
public_key, key, stream_name = stream_info
|
||||
self.public_key = public_key
|
||||
self.key = binascii.unhexlify(key)
|
||||
self.stream_name = binascii.unhexlify(stream_name)
|
||||
|
||||
d.addCallback(set_stream_info)
|
||||
return d
|
||||
else:
|
||||
return defer.succeed(True)
|
||||
|
||||
|
||||
class LBRYLiveStreamDownloader(LiveStreamDownloader):
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
payment_rate_manager, wallet, upload_allowed):
|
||||
LiveStreamDownloader.__init__(self, stream_hash, peer_finder, rate_limiter, blob_manager,
|
||||
stream_info_manager, payment_rate_manager, wallet, upload_allowed)
|
||||
|
||||
#self.writer = process.ProcessWriter(reactor, self, 'write', 1)
|
||||
|
||||
def _get_metadata_handler(self, download_manager):
|
||||
return LiveStreamMetadataHandler(self.stream_hash, self.stream_info_manager,
|
||||
self.peer_finder, self.public_key, False,
|
||||
self.payment_rate_manager, self.wallet, download_manager, 10)
|
||||
|
||||
def _get_progress_manager(self, download_manager):
|
||||
return LiveStreamProgressManager(self._finished_downloading, self.blob_manager, download_manager,
|
||||
delete_blob_after_finished=True, download_whole=False,
|
||||
max_before_skip_ahead=10)
|
||||
|
||||
def _get_write_func(self):
|
||||
def write_func(data):
|
||||
if self.stopped is False:
|
||||
#self.writer.write(data)
|
||||
pass
|
||||
return write_func
|
||||
|
||||
|
||||
class FullLiveStreamDownloader(LiveStreamDownloader):
|
||||
def __init__(self, stream_hash, peer_finder, rate_limiter, blob_manager, stream_info_manager,
|
||||
payment_rate_manager, wallet, upload_allowed):
|
||||
LiveStreamDownloader.__init__(self, stream_hash, peer_finder, rate_limiter,
|
||||
blob_manager, stream_info_manager, payment_rate_manager,
|
||||
wallet, upload_allowed)
|
||||
self.file_handle = None
|
||||
self.file_name = None
|
||||
|
||||
def set_stream_info(self):
|
||||
d = LiveStreamDownloader.set_stream_info(self)
|
||||
|
||||
def set_file_name_if_unset():
|
||||
if not self.file_name:
|
||||
if not self.stream_name:
|
||||
self.stream_name = "_"
|
||||
self.file_name = os.path.basename(self.stream_name)
|
||||
|
||||
d.addCallback(lambda _: set_file_name_if_unset())
|
||||
return d
|
||||
|
||||
def stop(self):
|
||||
d = self._close_file()
|
||||
d.addBoth(lambda _: LiveStreamDownloader.stop(self))
|
||||
return d
|
||||
|
||||
def _start(self):
|
||||
if self.file_handle is None:
|
||||
d = self._open_file()
|
||||
else:
|
||||
d = defer.succeed(True)
|
||||
d.addCallback(lambda _: LiveStreamDownloader._start(self))
|
||||
return d
|
||||
|
||||
def _open_file(self):
|
||||
def open_file():
|
||||
self.file_handle = open(self.file_name, 'wb')
|
||||
return threads.deferToThread(open_file)
|
||||
|
||||
def _get_metadata_handler(self, download_manager):
|
||||
return LiveStreamMetadataHandler(self.stream_hash, self.stream_info_manager,
|
||||
self.peer_finder, self.public_key, True,
|
||||
self.payment_rate_manager, self.wallet, download_manager)
|
||||
|
||||
def _get_primary_request_creators(self, download_manager):
|
||||
return [download_manager.blob_requester, download_manager.blob_info_finder]
|
||||
|
||||
def _get_write_func(self):
|
||||
def write_func(data):
|
||||
if self.stopped is False:
|
||||
self.file_handle.write(data)
|
||||
return write_func
|
||||
|
||||
def _close_file(self):
|
||||
def close_file():
|
||||
if self.file_handle is not None:
|
||||
self.file_handle.close()
|
||||
self.file_handle = None
|
||||
return threads.deferToThread(close_file)
|
||||
|
||||
|
||||
class FullLiveStreamDownloaderFactory(object):
|
||||
|
||||
implements(IStreamDownloaderFactory)
|
||||
|
||||
def __init__(self, peer_finder, rate_limiter, blob_manager, stream_info_manager, wallet,
|
||||
default_payment_rate_manager):
|
||||
self.peer_finder = peer_finder
|
||||
self.rate_limiter = rate_limiter
|
||||
self.blob_manager = blob_manager
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.wallet = wallet
|
||||
self.default_payment_rate_manager = default_payment_rate_manager
|
||||
|
||||
def get_downloader_options(self, sd_validator, payment_rate_manager):
|
||||
options = [
|
||||
DownloadOption(
|
||||
[float, None],
|
||||
"rate which will be paid for data (None means use application default)",
|
||||
"data payment rate",
|
||||
None
|
||||
),
|
||||
DownloadOption(
|
||||
[float, None],
|
||||
"rate which will be paid for metadata (None means use application default)",
|
||||
"metadata payment rate",
|
||||
None
|
||||
),
|
||||
DownloadOption(
|
||||
[bool],
|
||||
"allow reuploading data downloaded for this file",
|
||||
"allow upload",
|
||||
True
|
||||
),
|
||||
]
|
||||
return options
|
||||
|
||||
def make_downloader(self, sd_validator, options, payment_rate_manager):
|
||||
# TODO: check options for payment rate manager parameters
|
||||
payment_rate_manager = LiveStreamPaymentRateManager(self.default_payment_rate_manager,
|
||||
payment_rate_manager)
|
||||
d = save_sd_info(self.stream_info_manager, sd_validator.raw_info)
|
||||
|
||||
def create_downloader(stream_hash):
|
||||
stream_downloader = FullLiveStreamDownloader(stream_hash, self.peer_finder, self.rate_limiter,
|
||||
self.blob_manager, self.stream_info_manager,
|
||||
payment_rate_manager, self.wallet, True)
|
||||
# TODO: change upload_allowed=True above to something better
|
||||
d = stream_downloader.set_stream_info()
|
||||
d.addCallback(lambda _: stream_downloader)
|
||||
return d
|
||||
|
||||
d.addCallback(create_downloader)
|
||||
return d
|
342
lbrynet/lbrylive/client/LiveStreamMetadataHandler.py
Normal file
|
@@ -0,0 +1,342 @@
|
|||
from collections import defaultdict
|
||||
import logging
|
||||
from zope.interface import implements
|
||||
from twisted.internet import defer
|
||||
from twisted.python.failure import Failure
|
||||
from lbrynet.conf import MAX_BLOB_INFOS_TO_REQUEST
|
||||
from lbrynet.core.client.ClientRequest import ClientRequest, ClientPaidRequest
|
||||
from lbrynet.lbrylive.LiveBlob import LiveBlobInfo
|
||||
from lbrynet.core.cryptoutils import get_lbry_hash_obj, verify_signature
|
||||
from lbrynet.interfaces import IRequestCreator, IMetadataHandler
|
||||
from lbrynet.core.Error import InsufficientFundsError, InvalidResponseError, RequestCanceledError
|
||||
from lbrynet.core.Error import NoResponseError
|
||||
|
||||
|
||||
class LiveStreamMetadataHandler(object):
|
||||
implements(IRequestCreator, IMetadataHandler)
|
||||
|
||||
def __init__(self, stream_hash, stream_info_manager, peer_finder, stream_pub_key, download_whole,
|
||||
payment_rate_manager, wallet, download_manager, max_before_skip_ahead=None):
|
||||
self.stream_hash = stream_hash
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.wallet = wallet
|
||||
self.peer_finder = peer_finder
|
||||
self.stream_pub_key = stream_pub_key
|
||||
self.download_whole = download_whole
|
||||
self.max_before_skip_ahead = max_before_skip_ahead
|
||||
if self.download_whole is False:
|
||||
assert self.max_before_skip_ahead is not None, \
|
||||
"If download whole is False, max_before_skip_ahead must be set"
|
||||
self.download_manager = download_manager
|
||||
self._peers = defaultdict(int) # {Peer: score}
|
||||
self._protocol_prices = {}
|
||||
self._final_blob_num = None
|
||||
self._price_disagreements = [] # [Peer]
|
||||
self._incompatible_peers = [] # [Peer]
|
||||
|
||||
######### IMetadataHandler #########
|
||||
|
||||
def get_initial_blobs(self):
|
||||
d = self.stream_info_manager.get_blobs_for_stream(self.stream_hash)
|
||||
d.addCallback(self._format_initial_blobs_for_download_manager)
|
||||
return d
|
||||
|
||||
def final_blob_num(self):
|
||||
return self._final_blob_num
|
||||
|
||||
######### IRequestCreator #########
|
||||
|
||||
def send_next_request(self, peer, protocol):
|
||||
if self._finished_discovery() is False and self._should_send_request_to(peer) is True:
|
||||
p_r = None
|
||||
if not self._price_settled(protocol):
|
||||
p_r = self._get_price_request(peer, protocol)
|
||||
d_r = self._get_discover_request(peer)
|
||||
reserved_points = self._reserve_points(peer, protocol, d_r.max_pay_units)
|
||||
if reserved_points is not None:
|
||||
d1 = protocol.add_request(d_r)
|
||||
d1.addCallback(self._handle_discover_response, peer, d_r)
|
||||
d1.addBoth(self._pay_or_cancel_payment, protocol, reserved_points)
|
||||
d1.addErrback(self._request_failed, peer)
|
||||
if p_r is not None:
|
||||
d2 = protocol.add_request(p_r)
|
||||
d2.addCallback(self._handle_price_response, peer, p_r, protocol)
|
||||
d2.addErrback(self._request_failed, peer)
|
||||
return defer.succeed(True)
|
||||
else:
|
||||
return defer.fail(InsufficientFundsError())
|
||||
return defer.succeed(False)
|
||||
|
||||
def get_new_peers(self):
|
||||
d = self._get_hash_for_peer_search()
|
||||
d.addCallback(self._find_peers_for_hash)
|
||||
return d
|
||||
|
||||
######### internal calls #########
|
||||
|
||||
def _get_hash_for_peer_search(self):
|
||||
r = None
|
||||
if self._finished_discovery() is False:
|
||||
r = self.stream_hash
|
||||
logging.debug("Info finder peer search response for stream %s: %s", str(self.stream_hash), str(r))
|
||||
return defer.succeed(r)
|
||||
|
||||
def _find_peers_for_hash(self, h):
|
||||
if h is None:
|
||||
return None
|
||||
else:
|
||||
d = self.peer_finder.find_peers_for_blob(h)
|
||||
|
||||
def choose_best_peers(peers):
|
||||
bad_peers = self._get_bad_peers()
|
||||
return [p for p in peers if not p in bad_peers]
|
||||
|
||||
d.addCallback(choose_best_peers)
|
||||
return d
|
||||
|
||||
def _format_initial_blobs_for_download_manager(self, blob_infos):
|
||||
infos = []
|
||||
for blob_hash, blob_num, revision, iv, length, signature in blob_infos:
|
||||
if blob_hash is not None:
|
||||
infos.append(LiveBlobInfo(blob_hash, blob_num, length, iv, revision, signature))
|
||||
else:
|
||||
logging.debug("Setting _final_blob_num to %s", str(blob_num - 1))
|
||||
self._final_blob_num = blob_num - 1
|
||||
return infos
|
||||
|
||||
def _should_send_request_to(self, peer):
|
||||
if self._peers[peer] < -5.0:
|
||||
return False
|
||||
if peer in self._price_disagreements:
|
||||
return False
|
||||
return True
|
||||
|
||||
def _get_bad_peers(self):
|
||||
return [p for p in self._peers.iterkeys() if not self._should_send_request_to(p)]
|
||||
|
||||
def _finished_discovery(self):
|
||||
if self._get_discovery_params() is None:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _get_discover_request(self, peer):
|
||||
discovery_params = self._get_discovery_params()
|
||||
if discovery_params:
|
||||
further_blobs_request = {}
|
||||
reference, start, end, count = discovery_params
|
||||
further_blobs_request['reference'] = reference
|
||||
if start is not None:
|
||||
further_blobs_request['start'] = start
|
||||
if end is not None:
|
||||
further_blobs_request['end'] = end
|
||||
if count is not None:
|
||||
further_blobs_request['count'] = count
|
||||
else:
|
||||
further_blobs_request['count'] = MAX_BLOB_INFOS_TO_REQUEST
|
||||
logging.debug("Requesting %s blob infos from %s", str(further_blobs_request['count']), str(peer))
|
||||
r_dict = {'further_blobs': further_blobs_request}
|
||||
response_identifier = 'further_blobs'
|
||||
request = ClientPaidRequest(r_dict, response_identifier, further_blobs_request['count'])
|
||||
return request
|
||||
return None
|
||||
|
||||
def _get_discovery_params(self):
|
||||
logging.debug("In _get_discovery_params")
|
||||
stream_position = self.download_manager.stream_position()
|
||||
blobs = self.download_manager.blobs
|
||||
if blobs:
|
||||
last_blob_num = max(blobs.iterkeys())
|
||||
else:
|
||||
last_blob_num = -1
|
||||
final_blob_num = self.final_blob_num()
|
||||
if final_blob_num is not None:
|
||||
last_blob_num = final_blob_num
|
||||
if self.download_whole is False:
|
||||
logging.debug("download_whole is False")
|
||||
if final_blob_num is not None:
|
||||
for i in xrange(stream_position, final_blob_num + 1):
|
||||
if not i in blobs:
|
||||
count = min(self.max_before_skip_ahead, (final_blob_num - i + 1))
|
||||
return self.stream_hash, None, 'end', count
|
||||
return None
|
||||
else:
|
||||
if blobs:
|
||||
for i in xrange(stream_position, last_blob_num + 1):
|
||||
if not i in blobs:
|
||||
if i == 0:
|
||||
return self.stream_hash, 'beginning', 'end', -1 * self.max_before_skip_ahead
|
||||
else:
|
||||
return self.stream_hash, blobs[i-1].blob_hash, 'end', -1 * self.max_before_skip_ahead
|
||||
return self.stream_hash, blobs[last_blob_num].blob_hash, 'end', -1 * self.max_before_skip_ahead
|
||||
else:
|
||||
return self.stream_hash, None, 'end', -1 * self.max_before_skip_ahead
|
||||
logging.debug("download_whole is True")
|
||||
beginning = None
|
||||
end = None
|
||||
for i in xrange(stream_position, last_blob_num + 1):
|
||||
if not i in blobs:
|
||||
if beginning is None:
|
||||
if i == 0:
|
||||
beginning = 'beginning'
|
||||
else:
|
||||
beginning = blobs[i-1].blob_hash
|
||||
else:
|
||||
if beginning is not None:
|
||||
end = blobs[i].blob_hash
|
||||
break
|
||||
if beginning is None:
|
||||
if final_blob_num is not None:
|
||||
logging.debug("Discovery is finished. stream_position: %s, last_blob_num + 1: %s", str(stream_position),
|
||||
str(last_blob_num + 1))
|
||||
return None
|
||||
else:
|
||||
logging.debug("Discovery is not finished. final blob num is unknown.")
|
||||
if last_blob_num != -1:
|
||||
return self.stream_hash, blobs[last_blob_num].blob_hash, None, None
|
||||
else:
|
||||
return self.stream_hash, 'beginning', None, None
|
||||
else:
|
||||
logging.info("Discovery is not finished. Not all blobs are known.")
|
||||
return self.stream_hash, beginning, end, None
|
||||
|
||||
def _price_settled(self, protocol):
|
||||
if protocol in self._protocol_prices:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _update_local_score(self, peer, amount):
|
||||
self._peers[peer] += amount
|
||||
|
||||
def _reserve_points(self, peer, protocol, max_infos):
|
||||
assert protocol in self._protocol_prices
|
||||
point_amount = 1.0 * max_infos * self._protocol_prices[protocol] / 1000.0
|
||||
return self.wallet.reserve_points(peer, point_amount)
|
||||
|
||||
def _pay_or_cancel_payment(self, arg, protocol, reserved_points):
|
||||
if isinstance(arg, Failure) or arg == 0:
|
||||
self._cancel_points(reserved_points)
|
||||
else:
|
||||
self._pay_peer(protocol, arg, reserved_points)
|
||||
return arg
|
||||
|
||||
def _pay_peer(self, protocol, num_infos, reserved_points):
|
||||
assert num_infos != 0
|
||||
assert protocol in self._protocol_prices
|
||||
point_amount = 1.0 * num_infos * self._protocol_prices[protocol] / 1000.0
|
||||
self.wallet.send_points(reserved_points, point_amount)
|
||||
self.payment_rate_manager.record_points_paid(point_amount)
|
||||
|
||||
def _cancel_points(self, reserved_points):
|
||||
return self.wallet.cancel_point_reservation(reserved_points)
|
||||
|
||||
def _get_price_request(self, peer, protocol):
|
||||
self._protocol_prices[protocol] = self.payment_rate_manager.get_rate_live_blob_info(peer)
|
||||
request_dict = {'blob_info_payment_rate': self._protocol_prices[protocol]}
|
||||
request = ClientRequest(request_dict, 'blob_info_payment_rate')
|
||||
return request
|
||||
|
||||
def _handle_price_response(self, response_dict, peer, request, protocol):
|
||||
if not request.response_identifier in response_dict:
|
||||
return InvalidResponseError("response identifier not in response")
|
||||
assert protocol in self._protocol_prices
|
||||
response = response_dict[request.response_identifier]
|
||||
if response == "RATE_ACCEPTED":
|
||||
return True
|
||||
else:
|
||||
logging.info("Rate offer has been rejected by %s", str(peer))
|
||||
del self._protocol_prices[protocol]
|
||||
self._price_disagreements.append(peer)
|
||||
return True
|
||||
|
||||
def _handle_discover_response(self, response_dict, peer, request):
|
||||
if not request.response_identifier in response_dict:
|
||||
return InvalidResponseError("response identifier not in response")
|
||||
response = response_dict[request.response_identifier]
|
||||
blob_infos = []
|
||||
if 'error' in response:
|
||||
if response['error'] == 'RATE_UNSET':
|
||||
return defer.succeed(0)
|
||||
else:
|
||||
return InvalidResponseError("Got an unknown error from the peer: %s" %
|
||||
(response['error'],))
|
||||
if not 'blob_infos' in response:
|
||||
return InvalidResponseError("Missing the required field 'blob_infos'")
|
||||
raw_blob_infos = response['blob_infos']
|
||||
logging.info("Handling %s further blobs from %s", str(len(raw_blob_infos)), str(peer))
|
||||
logging.debug("blobs: %s", str(raw_blob_infos))
|
||||
for raw_blob_info in raw_blob_infos:
|
||||
length = raw_blob_info['length']
|
||||
if length != 0:
|
||||
blob_hash = raw_blob_info['blob_hash']
|
||||
else:
|
||||
blob_hash = None
|
||||
num = raw_blob_info['blob_num']
|
||||
revision = raw_blob_info['revision']
|
||||
iv = raw_blob_info['iv']
|
||||
signature = raw_blob_info['signature']
|
||||
blob_info = LiveBlobInfo(blob_hash, num, length, iv, revision, signature)
|
||||
logging.debug("Learned about a potential blob: %s", str(blob_hash))
|
||||
if self._verify_blob(blob_info):
|
||||
if blob_hash is None:
|
||||
logging.info("Setting _final_blob_num to %s", str(num - 1))
|
||||
self._final_blob_num = num - 1
|
||||
else:
|
||||
blob_infos.append(blob_info)
|
||||
else:
|
||||
raise ValueError("Peer sent an invalid blob info")
|
||||
d = self.stream_info_manager.add_blobs_to_stream(self.stream_hash, blob_infos)
|
||||
|
||||
def add_blobs_to_download_manager():
|
||||
blob_nums = [b.blob_num for b in blob_infos]
|
||||
logging.info("Adding the following blob nums to the download manager: %s", str(blob_nums))
|
||||
self.download_manager.add_blobs_to_download(blob_infos)
|
||||
|
||||
d.addCallback(lambda _: add_blobs_to_download_manager())
|
||||
|
||||
def pay_or_penalize_peer():
|
||||
if len(blob_infos):
|
||||
self._update_local_score(peer, len(blob_infos))
|
||||
peer.update_stats('downloaded_crypt_blob_infos', len(blob_infos))
|
||||
peer.update_score(len(blob_infos))
|
||||
else:
|
||||
self._update_local_score(peer, -.0001)
|
||||
return len(blob_infos)
|
||||
|
||||
d.addCallback(lambda _: pay_or_penalize_peer())
|
||||
|
||||
return d
|
||||
|
||||
def _verify_blob(self, blob):
|
||||
logging.debug("Got an unverified blob to check:")
|
||||
logging.debug("blob_hash: %s", blob.blob_hash)
|
||||
logging.debug("blob_num: %s", str(blob.blob_num))
|
||||
logging.debug("revision: %s", str(blob.revision))
|
||||
logging.debug("iv: %s", blob.iv)
|
||||
logging.debug("length: %s", str(blob.length))
|
||||
hashsum = get_lbry_hash_obj()
|
||||
hashsum.update(self.stream_hash)
|
||||
if blob.length != 0:
|
||||
hashsum.update(blob.blob_hash)
|
||||
hashsum.update(str(blob.blob_num))
|
||||
hashsum.update(str(blob.revision))
|
||||
hashsum.update(blob.iv)
|
||||
hashsum.update(str(blob.length))
|
||||
logging.debug("hexdigest to be verified: %s", hashsum.hexdigest())
|
||||
if verify_signature(hashsum.digest(), blob.signature, self.stream_pub_key):
|
||||
logging.debug("Blob info is valid")
|
||||
return True
|
||||
else:
|
||||
logging.debug("The blob info is invalid")
|
||||
return False
|
||||
|
||||
def _request_failed(self, reason, peer):
|
||||
if reason.check(RequestCanceledError):
|
||||
return
|
||||
if reason.check(NoResponseError):
|
||||
self._incompatible_peers.append(peer)
|
||||
return
|
||||
logging.warning("Crypt stream info finder: a request failed. Reason: %s", reason.getErrorMessage())
|
||||
self._update_local_score(peer, -5.0)
|
||||
peer.update_score(-10.0)
|
||||
return reason
|
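For reference, a hypothetical example of the request assembled by _get_discover_request() above; only the keys shown are used, and the count value is illustrative (MAX_BLOB_INFOS_TO_REQUEST is substituted when no count is given):

from lbrynet.core.client.ClientRequest import ClientPaidRequest

further_blobs_request = {
    'reference': stream_hash,   # hash identifying the stream being asked about
    'start': 'beginning',       # or the hash of a known blob to continue from
    'end': 'end',
    'count': 20,                # illustrative value only
}
request = ClientPaidRequest({'further_blobs': further_blobs_request},
                            'further_blobs', further_blobs_request['count'])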
87
lbrynet/lbrylive/client/LiveStreamProgressManager.py
Normal file
|
@@ -0,0 +1,87 @@
|
|||
import logging
|
||||
from lbrynet.core.client.StreamProgressManager import StreamProgressManager
|
||||
from twisted.internet import defer
|
||||
|
||||
|
||||
class LiveStreamProgressManager(StreamProgressManager):
|
||||
def __init__(self, finished_callback, blob_manager, download_manager, delete_blob_after_finished=False,
|
||||
download_whole=True, max_before_skip_ahead=5):
|
||||
self.download_whole = download_whole
|
||||
self.max_before_skip_ahead = max_before_skip_ahead
|
||||
StreamProgressManager.__init__(self, finished_callback, blob_manager, download_manager,
|
||||
delete_blob_after_finished)
|
||||
|
||||
######### IProgressManager #########
|
||||
|
||||
def stream_position(self):
|
||||
blobs = self.download_manager.blobs
|
||||
if not blobs:
|
||||
return 0
|
||||
else:
|
||||
newest_known_blobnum = max(blobs.iterkeys())
|
||||
position = newest_known_blobnum
|
||||
oldest_relevant_blob_num = (max(0, newest_known_blobnum - self.max_before_skip_ahead + 1))
|
||||
for i in xrange(newest_known_blobnum, oldest_relevant_blob_num - 1, -1):
|
||||
if i in blobs and (not blobs[i].is_validated() and not i in self.provided_blob_nums):
|
||||
position = i
|
||||
return position
|
||||
|
||||
def needed_blobs(self):
|
||||
blobs = self.download_manager.blobs
|
||||
stream_position = self.stream_position()
|
||||
if blobs:
|
||||
newest_known_blobnum = max(blobs.iterkeys())
|
||||
else:
|
||||
newest_known_blobnum = -1
|
||||
blobs_needed = []
|
||||
for i in xrange(stream_position, newest_known_blobnum + 1):
|
||||
if i in blobs and not blobs[i].is_validated() and not i in self.provided_blob_nums:
|
||||
blobs_needed.append(blobs[i])
|
||||
return blobs_needed
|
||||
|
||||
######### internal #########
|
||||
|
||||
def _output_loop(self):
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
if self.stopped is True:
|
||||
if self.outputting_d is not None:
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
||||
return
|
||||
|
||||
blobs = self.download_manager.blobs
|
||||
logging.info("In _output_loop. last_blob_outputted: %s", str(self.last_blob_outputted))
|
||||
if blobs:
|
||||
logging.debug("Newest blob number: %s", str(max(blobs.iterkeys())))
|
||||
if self.outputting_d is None:
|
||||
self.outputting_d = defer.Deferred()
|
||||
|
||||
current_blob_num = self.last_blob_outputted + 1
|
||||
|
||||
def finished_outputting_blob():
|
||||
self.last_blob_outputted += 1
|
||||
final_blob_num = self.download_manager.final_blob_num()
|
||||
if final_blob_num is not None and final_blob_num == self.last_blob_outputted:
|
||||
self._finished_outputting()
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
||||
else:
|
||||
reactor.callLater(0, self._output_loop)
|
||||
|
||||
if current_blob_num in blobs and blobs[current_blob_num].is_validated():
|
||||
logging.info("Outputting blob %s", str(current_blob_num))
|
||||
self.provided_blob_nums.append(current_blob_num)
|
||||
d = self.download_manager.handle_blob(current_blob_num)
|
||||
d.addCallback(lambda _: finished_outputting_blob())
|
||||
d.addCallback(lambda _: self._finished_with_blob(current_blob_num))
|
||||
elif blobs and max(blobs.iterkeys()) > self.last_blob_outputted + self.max_before_skip_ahead - 1:
|
||||
self.last_blob_outputted += 1
|
||||
logging.info("Skipping blob number %s due to knowing about blob number %s",
|
||||
str(self.last_blob_outputted), str(max(blobs.iterkeys())))
|
||||
self._finished_with_blob(current_blob_num)
|
||||
reactor.callLater(0, self._output_loop)
|
||||
else:
|
||||
self.outputting_d.callback(True)
|
||||
self.outputting_d = None
|
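Worked example of the skip-ahead rule in _output_loop() above: with max_before_skip_ahead = 5, if last_blob_outputted is 6, blob 7 has not yet been validated, and blob 12 is already known, then max(blobs) = 12 > 6 + 5 - 1 = 10, so blob 7 is skipped, last_blob_outputted advances to 7, and the loop is rescheduled to try blob 8.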
0
lbrynet/lbrylive/client/__init__.py
Normal file
180
lbrynet/lbrylive/server/LiveBlobInfoQueryHandler.py
Normal file
|
@@ -0,0 +1,180 @@
|
|||
import logging
|
||||
from twisted.internet import defer
|
||||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IQueryHandlerFactory, IQueryHandler
|
||||
|
||||
|
||||
class CryptBlobInfoQueryHandlerFactory(object):
|
||||
implements(IQueryHandlerFactory)
|
||||
|
||||
def __init__(self, stream_info_manager, wallet, payment_rate_manager):
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.wallet = wallet
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
|
||||
######### IQueryHandlerFactory #########
|
||||
|
||||
def build_query_handler(self):
|
||||
q_h = CryptBlobInfoQueryHandler(self.stream_info_manager, self.wallet, self.payment_rate_manager)
|
||||
return q_h
|
||||
|
||||
def get_primary_query_identifier(self):
|
||||
return 'further_blobs'
|
||||
|
||||
def get_description(self):
|
||||
return ("Stream Blob Information - blob hashes that are associated with streams,"
|
||||
" and the blobs' associated metadata")
|
||||
|
||||
|
||||
class CryptBlobInfoQueryHandler(object):
|
||||
implements(IQueryHandler)
|
||||
|
||||
def __init__(self, stream_info_manager, wallet, payment_rate_manager):
|
||||
self.stream_info_manager = stream_info_manager
|
||||
self.wallet = wallet
|
||||
self.payment_rate_manager = payment_rate_manager
|
||||
self.query_identifiers = ['blob_info_payment_rate', 'further_blobs']
|
||||
self.blob_info_payment_rate = None
|
||||
self.peer = None
|
||||
|
||||
######### IQueryHandler #########
|
||||
|
||||
def register_with_request_handler(self, request_handler, peer):
|
||||
self.peer = peer
|
||||
request_handler.register_query_handler(self, self.query_identifiers)
|
||||
|
||||
def handle_queries(self, queries):
|
||||
response = {}
|
||||
|
||||
if self.query_identifiers[0] in queries:
|
||||
if not self.handle_blob_info_payment_rate(queries[self.query_identifiers[0]]):
|
||||
return defer.succeed({'blob_info_payment_rate': 'RATE_TOO_LOW'})
|
||||
else:
|
||||
response['blob_info_payment_rate'] = "RATE_ACCEPTED"
|
||||
|
||||
if self.query_identifiers[1] in queries:
|
||||
further_blobs_request = queries[self.query_identifiers[1]]
|
||||
logging.debug("Received the client's request for additional blob information")
|
||||
|
||||
if self.blob_info_payment_rate is None:
|
||||
response['further_blobs'] = {'error': 'RATE_UNSET'}
|
||||
return defer.succeed(response)
|
||||
|
||||
def count_and_charge(blob_infos):
|
||||
if len(blob_infos) != 0:
|
||||
logging.info("Responding with %s infos", str(len(blob_infos)))
|
||||
expected_payment = 1.0 * len(blob_infos) * self.blob_info_payment_rate / 1000.0
|
||||
self.wallet.add_expected_payment(self.peer, expected_payment)
|
||||
self.peer.update_stats('uploaded_crypt_blob_infos', len(blob_infos))
|
||||
return blob_infos
|
||||
|
||||
def set_field(further_blobs):
|
||||
response['further_blobs'] = {'blob_infos': further_blobs}
|
||||
return response
|
||||
|
||||
def get_further_blobs(stream_hash):
|
||||
if stream_hash is None:
|
||||
response['further_blobs'] = {'error': 'REFERENCE_HASH_UNKNOWN'}
|
||||
return defer.succeed(response)
|
||||
start = further_blobs_request.get("start")
|
||||
end = further_blobs_request.get("end")
|
||||
count = further_blobs_request.get("count")
|
||||
if count is not None:
|
||||
try:
|
||||
count = int(count)
|
||||
except ValueError:
|
||||
response['further_blobs'] = {'error': 'COUNT_NON_INTEGER'}
|
||||
return defer.succeed(response)
|
||||
|
||||
if len([x for x in [start, end, count] if x is not None]) < 2:
|
||||
response['further_blobs'] = {'error': 'TOO_FEW_PARAMETERS'}
|
||||
return defer.succeed(response)
|
||||
|
||||
inner_d = self.get_further_blobs(stream_hash, start, end, count)
|
||||
|
||||
inner_d.addCallback(count_and_charge)
|
||||
inner_d.addCallback(self.format_blob_infos)
|
||||
inner_d.addCallback(set_field)
|
||||
return inner_d
|
||||
|
||||
if 'reference' in further_blobs_request:
|
||||
d = self.get_stream_hash_from_reference(further_blobs_request['reference'])
|
||||
d.addCallback(get_further_blobs)
|
||||
return d
|
||||
else:
|
||||
response['further_blobs'] = {'error': 'NO_REFERENCE_SENT'}
|
||||
return defer.succeed(response)
|
||||
else:
|
||||
return defer.succeed({})
|
||||
|
||||
######### internal #########
|
||||
|
||||
def handle_blob_info_payment_rate(self, requested_payment_rate):
|
||||
if not self.payment_rate_manager.accept_rate_live_blob_info(self.peer, requested_payment_rate):
|
||||
return False
|
||||
else:
|
||||
self.blob_info_payment_rate = requested_payment_rate
|
||||
return True
|
||||
|
||||
def format_blob_infos(self, blobs):
|
||||
blob_infos = []
|
||||
for blob_hash, blob_num, revision, iv, length, signature in blobs:
|
||||
blob_info = {}
|
||||
if length != 0:
|
||||
blob_info['blob_hash'] = blob_hash
|
||||
blob_info['blob_num'] = blob_num
|
||||
blob_info['revision'] = revision
|
||||
blob_info['iv'] = iv
|
||||
blob_info['length'] = length
|
||||
blob_info['signature'] = signature
|
||||
blob_infos.append(blob_info)
|
||||
return blob_infos
|
||||
|
||||
def get_stream_hash_from_reference(self, reference):
|
||||
d = self.stream_info_manager.check_if_stream_exists(reference)
|
||||
|
||||
def check_if_stream_found(result):
|
||||
if result is True:
|
||||
return reference
|
||||
else:
|
||||
return self.stream_info_manager.get_stream_of_blob(reference)
|
||||
|
||||
d.addCallback(check_if_stream_found)
|
||||
return d
|
||||
|
||||
def get_further_blobs(self, stream_hash, start, end, count):
|
||||
ds = []
|
||||
if start is not None and start != "beginning":
|
||||
ds.append(self.stream_info_manager.get_stream_of_blob(start))
|
||||
if end is not None and end != 'end':
|
||||
ds.append(self.stream_info_manager.get_stream_of_blob(end))
|
||||
dl = defer.DeferredList(ds, fireOnOneErrback=True)
|
||||
|
||||
def ensure_streams_match(results):
|
||||
for success, stream_of_blob in results:
|
||||
if stream_of_blob != stream_hash:
|
||||
raise ValueError("Blob does not match stream")
|
||||
return True
|
||||
|
||||
def get_blob_infos():
|
||||
reverse = False
|
||||
count_to_use = count
|
||||
if start is None:
|
||||
reverse = True
|
||||
elif end is not None and count_to_use is not None and count_to_use < 0:
|
||||
reverse = True
|
||||
if count_to_use is not None and count_to_use < 0:
|
||||
count_to_use *= -1
|
||||
if start == "beginning" or start is None:
|
||||
s = None
|
||||
else:
|
||||
s = start
|
||||
if end == "end" or end is None:
|
||||
e = None
|
||||
else:
|
||||
e = end
|
||||
return self.stream_info_manager.get_blobs_for_stream(stream_hash, s, e, count_to_use, reverse)
|
||||
|
||||
dl.addCallback(ensure_streams_match)
|
||||
dl.addCallback(lambda _: get_blob_infos())
|
||||
return dl
|
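A hedged illustration of the response shapes produced by handle_queries() above (field names are taken from the code; the blob info values are placeholders):

{'blob_info_payment_rate': 'RATE_ACCEPTED'}        # rate negotiation succeeded
{'further_blobs': {'error': 'RATE_UNSET'}}         # infos requested before a rate was agreed
{'further_blobs': {'blob_infos': [                 # successful discovery response
    {'blob_hash': '...', 'blob_num': 0, 'revision': 0,
     'iv': '...', 'length': 123, 'signature': '...'},   # blob_hash omitted when length == 0
]}}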
0
lbrynet/lbrylive/server/__init__.py
Normal file
60
lbrynet/lbrynet_console/ConsoleControl.py
Normal file
|
@@ -0,0 +1,60 @@
|
|||
from twisted.protocols import basic
|
||||
from twisted.internet import defer
|
||||
|
||||
|
||||
class ConsoleControl(basic.LineReceiver):
|
||||
from os import linesep as delimiter
|
||||
|
||||
def __init__(self, control_handlers):
|
||||
self.control_handlers = {}
|
||||
self.categories = {}
|
||||
categories = set([category for category, handler in control_handlers])
|
||||
prompt_number = 0
|
||||
for category in categories:
|
||||
self.categories[prompt_number] = category
|
||||
for handler in [handler for cat, handler in control_handlers if cat == category]:
|
||||
self.control_handlers[prompt_number] = handler
|
||||
prompt_number += 1
|
||||
self.current_handler = None
|
||||
|
||||
def connectionMade(self):
|
||||
self.show_prompt()
|
||||
|
||||
def lineReceived(self, line):
|
||||
|
||||
def show_response(response):
|
||||
if response is not None:
|
||||
self.sendLine(response)
|
||||
|
||||
def show_error(err):
|
||||
self.sendLine(err.getTraceback())
|
||||
|
||||
if self.current_handler is None:
|
||||
try:
|
||||
num = int(line)
|
||||
except ValueError:
|
||||
num = None
|
||||
if num in self.control_handlers:
|
||||
self.current_handler = self.control_handlers[num].get_handler()
|
||||
line = None
|
||||
if self.current_handler is not None:
|
||||
try:
|
||||
r = self.current_handler.handle_line(line)
|
||||
done, ds = r[0], [d for d in r[1:] if d is not None]
|
||||
except Exception as e:
|
||||
done = True
|
||||
ds = [defer.fail(e)]
|
||||
if done is True:
|
||||
self.current_handler = None
|
||||
map(lambda d: d.addCallbacks(show_response, show_error), ds)
|
||||
if self.current_handler is None:
|
||||
self.show_prompt()
|
||||
|
||||
def show_prompt(self):
|
||||
self.sendLine("Options:")
|
||||
for num, handler in self.control_handlers.iteritems():
|
||||
if num in self.categories:
|
||||
self.sendLine("")
|
||||
self.sendLine(self.categories[num])
|
||||
self.sendLine("")
|
||||
self.sendLine("[" + str(num) + "] " + handler.get_prompt_description())
|
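A minimal sketch (hypothetical, not from this commit) of the handler contract ConsoleControl relies on above: a factory exposes get_prompt_description() and get_handler(), and handle_line() returns a tuple whose first element says whether the handler is finished, followed by any deferreds whose results should be printed:

from twisted.internet import defer

class EchoHandler(object):
    def handle_line(self, line):
        if line is None:
            # first call, made right after the handler is selected from the prompt
            return False, defer.succeed("Type a line and it will be echoed back")
        return True, defer.succeed("You typed: " + line)

class EchoHandlerFactory(object):
    def get_prompt_description(self):
        return "Echo a line of text"

    def get_handler(self):
        return EchoHandler()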
1236
lbrynet/lbrynet_console/ControlHandlers.py
Normal file
File diff suppressed because it is too large
407
lbrynet/lbrynet_console/LBRYConsole.py
Normal file
|
@@ -0,0 +1,407 @@
|
|||
import logging
|
||||
from lbrynet.core.Session import LBRYSession
|
||||
import os.path
|
||||
import argparse
|
||||
from yapsy.PluginManager import PluginManager
|
||||
from twisted.internet import defer, threads, stdio, task
|
||||
from lbrynet.lbrynet_console.ConsoleControl import ConsoleControl
|
||||
from lbrynet.lbrynet_console.LBRYSettings import LBRYSettings
|
||||
from lbrynet.lbryfilemanager.LBRYFileManager import LBRYFileManager
|
||||
from lbrynet.conf import MIN_BLOB_DATA_PAYMENT_RATE # , MIN_BLOB_INFO_PAYMENT_RATE
|
||||
from lbrynet.core.utils import generate_id
|
||||
from lbrynet.core.StreamDescriptor import StreamDescriptorIdentifier
|
||||
from lbrynet.core.PaymentRateManager import PaymentRateManager
|
||||
from lbrynet.core.server.BlobAvailabilityHandler import BlobAvailabilityHandlerFactory
|
||||
from lbrynet.core.server.BlobRequestHandler import BlobRequestHandlerFactory
|
||||
from lbrynet.core.server.ServerProtocol import ServerProtocolFactory
|
||||
from lbrynet.core.PTCWallet import PTCWallet
|
||||
from lbrynet.lbryfile.client.LBRYFileDownloader import LBRYFileOpenerFactory
|
||||
from lbrynet.lbryfile.StreamDescriptor import LBRYFileStreamType
|
||||
from lbrynet.lbryfile.LBRYFileMetadataManager import DBLBRYFileMetadataManager, TempLBRYFileMetadataManager
|
||||
#from lbrynet.lbrylive.PaymentRateManager import LiveStreamPaymentRateManager
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ApplicationStatusFactory, GetWalletBalancesFactory, ShutDownFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import LBRYFileStatusFactory, DeleteLBRYFileChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ToggleLBRYFileRunningChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ModifyApplicationDefaultsFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import CreateLBRYFileFactory, PublishStreamDescriptorChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ShowPublishedSDHashesChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import CreatePlainStreamDescriptorChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ShowLBRYFileStreamHashChooserFactory, AddStreamFromHashFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import AddStreamFromSDFactory, AddStreamFromLBRYcrdNameFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ClaimNameFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ShowServerStatusFactory, ModifyServerSettingsFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import ModifyLBRYFileOptionsChooserFactory
|
||||
from lbrynet.lbrynet_console.ControlHandlers import PeerStatsAndSettingsChooserFactory
|
||||
from lbrynet.core.LBRYcrdWallet import LBRYcrdWallet
|
||||
|
||||
|
||||
class LBRYConsole():
|
||||
"""A class which can upload and download file streams to and from the network"""
|
||||
def __init__(self, peer_port, dht_node_port, known_dht_nodes, control_class, wallet_type, lbrycrd_rpc_port,
|
||||
use_upnp, conf_dir, data_dir):
|
||||
"""
|
||||
@param peer_port: the network port on which to listen for peers
|
||||
|
||||
@param dht_node_port: the network port on which to listen for dht node requests
|
||||
|
||||
@param known_dht_nodes: a list of (ip_address, dht_port) which will be used to join the DHT network
|
||||
"""
|
||||
self.peer_port = peer_port
|
||||
self.dht_node_port = dht_node_port
|
||||
self.known_dht_nodes = known_dht_nodes
|
||||
self.wallet_type = wallet_type
|
||||
self.wallet_rpc_port = lbrycrd_rpc_port
|
||||
self.use_upnp = use_upnp
|
||||
self.lbry_server_port = None
|
||||
self.control_class = control_class
|
||||
self.session = None
|
||||
self.lbry_file_metadata_manager = None
|
||||
self.lbry_file_manager = None
|
||||
self.conf_dir = conf_dir
|
||||
self.data_dir = data_dir
|
||||
self.plugin_manager = PluginManager()
|
||||
self.plugin_manager.setPluginPlaces([
|
||||
os.path.join(self.conf_dir, "plugins"),
|
||||
os.path.join(os.path.dirname(__file__), "plugins"),
|
||||
])
|
||||
self.control_handlers = []
|
||||
self.query_handlers = {}
|
||||
|
||||
self.settings = LBRYSettings(self.conf_dir)
|
||||
self.blob_request_payment_rate_manager = None
|
||||
self.lbryid = None
|
||||
self.sd_identifier = StreamDescriptorIdentifier()
|
||||
|
||||
def start(self):
|
||||
"""Initialize the session and restore everything to its saved state"""
|
||||
d = threads.deferToThread(self._create_directory)
|
||||
d.addCallback(lambda _: self._get_settings())
|
||||
d.addCallback(lambda _: self._get_session())
|
||||
d.addCallback(lambda _: self._setup_lbry_file_manager())
|
||||
d.addCallback(lambda _: self._setup_lbry_file_opener())
|
||||
d.addCallback(lambda _: self._setup_control_handlers())
|
||||
d.addCallback(lambda _: self._setup_query_handlers())
|
||||
d.addCallback(lambda _: self._load_plugins())
|
||||
d.addCallback(lambda _: self._setup_server())
|
||||
d.addCallback(lambda _: self._start_controller())
|
||||
return d
|
||||
|
||||
def shut_down(self):
|
||||
"""Stop the session, all currently running streams, and stop the server"""
|
||||
d = self.session.shut_down()
|
||||
d.addCallback(lambda _: self._shut_down())
|
||||
return d
|
||||
|
||||
def add_control_handlers(self, control_handlers):
|
||||
for control_handler in control_handlers:
|
||||
self.control_handlers.append(control_handler)
|
||||
|
||||
def add_query_handlers(self, query_handlers):
|
||||
|
||||
def _set_query_handlers(statuses):
|
||||
from future_builtins import zip
|
||||
for handler, (success, status) in zip(query_handlers, statuses):
|
||||
if success is True:
|
||||
self.query_handlers[handler] = status
|
||||
|
||||
ds = []
|
||||
for handler in query_handlers:
|
||||
ds.append(self.settings.get_query_handler_status(handler.get_primary_query_identifier()))
|
||||
dl = defer.DeferredList(ds)
|
||||
dl.addCallback(_set_query_handlers)
|
||||
return dl
|
||||
|
||||
def _create_directory(self):
|
||||
if not os.path.exists(self.conf_dir):
|
||||
os.makedirs(self.conf_dir)
|
||||
logging.debug("Created the configuration directory: %s", str(self.conf_dir))
|
||||
if not os.path.exists(self.data_dir):
|
||||
os.makedirs(self.data_dir)
|
||||
logging.debug("Created the data directory: %s", str(self.data_dir))
|
||||
|
||||
def _get_settings(self):
|
||||
d = self.settings.start()
|
||||
d.addCallback(lambda _: self.settings.get_lbryid())
|
||||
d.addCallback(self.set_lbryid)
|
||||
return d
|
||||
|
||||
def set_lbryid(self, lbryid):
|
||||
if lbryid is None:
|
||||
return self._make_lbryid()
|
||||
else:
|
||||
self.lbryid = lbryid
|
||||
|
||||
def _make_lbryid(self):
|
||||
self.lbryid = generate_id()
|
||||
d = self.settings.save_lbryid(self.lbryid)
|
||||
return d
|
||||
|
||||
def _get_session(self):
|
||||
d = self.settings.get_default_data_payment_rate()
|
||||
|
||||
def create_session(default_data_payment_rate):
|
||||
if default_data_payment_rate is None:
|
||||
default_data_payment_rate = MIN_BLOB_DATA_PAYMENT_RATE
|
||||
if self.wallet_type == "lbrycrd":
|
||||
wallet = LBRYcrdWallet("rpcuser", "rpcpassword", "127.0.0.1", self.wallet_rpc_port)
|
||||
else:
|
||||
wallet = PTCWallet(self.conf_dir)
|
||||
self.session = LBRYSession(default_data_payment_rate, db_dir=self.conf_dir, lbryid=self.lbryid,
|
||||
blob_dir=self.data_dir, dht_node_port=self.dht_node_port,
|
||||
known_dht_nodes=self.known_dht_nodes, peer_port=self.peer_port,
|
||||
use_upnp=self.use_upnp, wallet=wallet)
|
||||
|
||||
d.addCallback(create_session)
|
||||
|
||||
d.addCallback(lambda _: self.session.setup())
|
||||
|
||||
return d
|
||||
|
||||
def _setup_lbry_file_manager(self):
|
||||
self.lbry_file_metadata_manager = DBLBRYFileMetadataManager(self.conf_dir)
|
||||
d = self.lbry_file_metadata_manager.setup()
|
||||
|
||||
def set_lbry_file_manager():
|
||||
self.lbry_file_manager = LBRYFileManager(self.session, self.lbry_file_metadata_manager, self.sd_identifier)
|
||||
return self.lbry_file_manager.setup()
|
||||
|
||||
d.addCallback(lambda _: set_lbry_file_manager())
|
||||
|
||||
return d
|
||||
|
||||
def _setup_lbry_file_opener(self):
|
||||
stream_info_manager = TempLBRYFileMetadataManager()
|
||||
downloader_factory = LBRYFileOpenerFactory(self.session.peer_finder, self.session.rate_limiter,
|
||||
self.session.blob_manager, stream_info_manager,
|
||||
self.session.wallet)
|
||||
self.sd_identifier.add_stream_downloader_factory(LBRYFileStreamType, downloader_factory)
|
||||
return defer.succeed(True)
|
||||
|
||||
def _setup_control_handlers(self):
|
||||
handlers = [
|
||||
('General',
|
||||
ApplicationStatusFactory(self.session.rate_limiter, self.session.dht_node)),
|
||||
('General',
|
||||
GetWalletBalancesFactory(self.session.wallet)),
|
||||
('General',
|
||||
ModifyApplicationDefaultsFactory(self)),
|
||||
('General',
|
||||
ShutDownFactory(self)),
|
||||
('General',
|
||||
PeerStatsAndSettingsChooserFactory(self.session.peer_manager)),
|
||||
('lbryfile',
|
||||
LBRYFileStatusFactory(self.lbry_file_manager)),
|
||||
('Stream Downloading',
|
||||
AddStreamFromSDFactory(self.sd_identifier, self.session.base_payment_rate_manager)),
|
||||
('lbryfile',
|
||||
DeleteLBRYFileChooserFactory(self.lbry_file_metadata_manager, self.session.blob_manager,
|
||||
self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
ToggleLBRYFileRunningChooserFactory(self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
CreateLBRYFileFactory(self.session, self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
PublishStreamDescriptorChooserFactory(self.lbry_file_metadata_manager,
|
||||
self.session.blob_manager,
|
||||
self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
ShowPublishedSDHashesChooserFactory(self.lbry_file_metadata_manager,
|
||||
self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
CreatePlainStreamDescriptorChooserFactory(self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
ShowLBRYFileStreamHashChooserFactory(self.lbry_file_manager)),
|
||||
('lbryfile',
|
||||
ModifyLBRYFileOptionsChooserFactory(self.lbry_file_manager)),
|
||||
('Stream Downloading',
|
||||
AddStreamFromHashFactory(self.sd_identifier, self.session))
|
||||
]
|
||||
self.add_control_handlers(handlers)
|
||||
if self.wallet_type == 'lbrycrd':
|
||||
lbrycrd_handlers = [
|
||||
('Stream Downloading',
|
||||
AddStreamFromLBRYcrdNameFactory(self.sd_identifier, self.session,
|
||||
self.session.wallet)),
|
||||
('General',
|
||||
ClaimNameFactory(self.session.wallet)),
|
||||
]
|
||||
self.add_control_handlers(lbrycrd_handlers)
|
||||
if self.peer_port is not None:
|
||||
server_handlers = [
|
||||
('Server',
|
||||
ShowServerStatusFactory(self)),
|
||||
('Server',
|
||||
ModifyServerSettingsFactory(self)),
|
||||
]
|
||||
self.add_control_handlers(server_handlers)
|
||||
|
||||
def _setup_query_handlers(self):
|
||||
handlers = [
|
||||
#CryptBlobInfoQueryHandlerFactory(self.lbry_file_metadata_manager, self.session.wallet,
|
||||
# self._server_payment_rate_manager),
|
||||
BlobAvailabilityHandlerFactory(self.session.blob_manager),
|
||||
#BlobRequestHandlerFactory(self.session.blob_manager, self.session.wallet,
|
||||
# self._server_payment_rate_manager),
|
||||
self.session.wallet.get_wallet_info_query_handler_factory(),
|
||||
]
|
||||
|
||||
def get_blob_request_handler_factory(rate):
|
||||
self.blob_request_payment_rate_manager = PaymentRateManager(
|
||||
self.session.base_payment_rate_manager, rate
|
||||
)
|
||||
handlers.append(BlobRequestHandlerFactory(self.session.blob_manager, self.session.wallet,
|
||||
self.blob_request_payment_rate_manager))
|
||||
|
||||
d1 = self.settings.get_server_data_payment_rate()
|
||||
d1.addCallback(get_blob_request_handler_factory)
|
||||
|
||||
dl = defer.DeferredList([d1])
|
||||
dl.addCallback(lambda _: self.add_query_handlers(handlers))
|
||||
return dl
|
||||
|
||||
def _load_plugins(self):
|
||||
d = threads.deferToThread(self.plugin_manager.collectPlugins)
|
||||
|
||||
def setup_plugins():
|
||||
ds = []
|
||||
for plugin in self.plugin_manager.getAllPlugins():
|
||||
ds.append(plugin.plugin_object.setup(self))
|
||||
return defer.DeferredList(ds)
|
||||
|
||||
d.addCallback(lambda _: setup_plugins())
|
||||
return d
|
||||
|
||||
def _setup_server(self):
|
||||
|
||||
def restore_running_status(running):
|
||||
if running is True:
|
||||
return self.start_server()
|
||||
return defer.succeed(True)
|
||||
|
||||
dl = self.settings.get_server_running_status()
|
||||
dl.addCallback(restore_running_status)
|
||||
return dl
|
||||
|
||||
def start_server(self):
|
||||
|
||||
if self.peer_port is not None:
|
||||
|
||||
server_factory = ServerProtocolFactory(self.session.rate_limiter,
|
||||
self.query_handlers,
|
||||
self.session.peer_manager)
|
||||
from twisted.internet import reactor
|
||||
self.lbry_server_port = reactor.listenTCP(self.peer_port, server_factory)
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop_server(self):
|
||||
if self.lbry_server_port is not None:
|
||||
self.lbry_server_port, p = None, self.lbry_server_port
|
||||
return defer.maybeDeferred(p.stopListening)
|
||||
else:
|
||||
return defer.succeed(True)
|
||||
|
||||
def _start_controller(self):
|
||||
self.control_class(self.control_handlers)
|
||||
return defer.succeed(True)
|
||||
|
||||
def _shut_down(self):
|
||||
self.plugin_manager = None
|
||||
d1 = self.lbry_file_metadata_manager.stop()
|
||||
d1.addCallback(lambda _: self.lbry_file_manager.stop())
|
||||
d2 = self.stop_server()
|
||||
dl = defer.DeferredList([d1, d2])
|
||||
return dl
|
||||
|
||||
|
||||
class StdIOControl():
|
||||
def __init__(self, control_handlers):
|
||||
stdio.StandardIO(ConsoleControl(control_handlers))
|
||||
|
||||
|
||||
def launch_lbry_console():
|
||||
|
||||
from twisted.internet import reactor
|
||||
|
||||
parser = argparse.ArgumentParser(description="Launch a lbrynet console")
|
||||
parser.add_argument("--no_listen_peer",
|
||||
help="Don't listen for incoming data connections.",
|
||||
action="store_true")
|
||||
parser.add_argument("--peer_port",
|
||||
help="The port on which the console will listen for incoming data connections.",
|
||||
type=int, default=3333)
|
||||
parser.add_argument("--no_listen_dht",
|
||||
help="Don't listen for incoming DHT connections.",
|
||||
action="store_true")
|
||||
parser.add_argument("--dht_node_port",
|
||||
help="The port on which the console will listen for DHT connections.",
|
||||
type=int, default=4444)
|
||||
parser.add_argument("--wallet_type",
|
||||
help="Either 'lbrycrd' or 'ptc'.",
|
||||
type=str, default="lbrycrd")
|
||||
parser.add_argument("--lbrycrd_wallet_rpc_port",
|
||||
help="The rpc port on which the LBRYcrd wallet is listening",
|
||||
type=int, default=8332)
|
||||
parser.add_argument("--no_dht_bootstrap",
|
||||
help="Don't try to connect to the DHT",
|
||||
action="store_true")
|
||||
parser.add_argument("--dht_bootstrap_host",
|
||||
help="The hostname of a known DHT node, to be used to bootstrap into the DHT. "
|
||||
"Must be used with --dht_bootstrap_port",
|
||||
type=str, default='104.236.42.182')
|
||||
parser.add_argument("--dht_bootstrap_port",
|
||||
help="The port of a known DHT node, to be used to bootstrap into the DHT. Must "
|
||||
"be used with --dht_bootstrap_host",
|
||||
type=int, default=4000)
|
||||
parser.add_argument("--use_upnp",
|
||||
help="Try to use UPnP to enable incoming connections through the firewall",
|
||||
action="store_true")
|
||||
parser.add_argument("--conf_dir",
|
||||
help=("The full path to the directory in which to store configuration "
|
||||
"options and user added plugins. Default: ~/.lbrynet"),
|
||||
type=str)
|
||||
parser.add_argument("--data_dir",
|
||||
help=("The full path to the directory in which to store data chunks "
|
||||
"downloaded from lbrynet. Default: <conf_dir>/blobfiles"),
|
||||
type=str)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.no_dht_bootstrap:
|
||||
bootstrap_nodes = []
|
||||
else:
|
||||
bootstrap_nodes = [(args.dht_bootstrap_host, args.dht_bootstrap_port)]
|
||||
|
||||
if args.no_listen_peer:
|
||||
peer_port = None
|
||||
else:
|
||||
peer_port = args.peer_port
|
||||
|
||||
if args.no_listen_dht:
|
||||
dht_node_port = None
|
||||
else:
|
||||
dht_node_port = args.dht_node_port
|
||||
|
||||
if not args.conf_dir:
|
||||
conf_dir = os.path.join(os.path.expanduser("~"), ".lbrynet")
|
||||
else:
|
||||
conf_dir = args.conf_dir
|
||||
if not os.path.exists(conf_dir):
|
||||
os.mkdir(conf_dir)
|
||||
if not args.data_dir:
|
||||
data_dir = os.path.join(conf_dir, "blobfiles")
|
||||
else:
|
||||
data_dir = args.data_dir
|
||||
if not os.path.exists(data_dir):
|
||||
os.mkdir(data_dir)
|
||||
|
||||
log_format = "(%(asctime)s)[%(filename)s:%(lineno)s] %(funcName)s(): %(message)s"
|
||||
logging.basicConfig(level=logging.DEBUG, filename=os.path.join(conf_dir, "console.log"),
|
||||
format=log_format)
|
||||
|
||||
console = LBRYConsole(peer_port, dht_node_port, bootstrap_nodes, StdIOControl, wallet_type=args.wallet_type,
|
||||
lbrycrd_rpc_port=args.lbrycrd_wallet_rpc_port, use_upnp=args.use_upnp,
|
||||
conf_dir=conf_dir, data_dir=data_dir)
|
||||
|
||||
d = task.deferLater(reactor, 0, console.start)
|
||||
reactor.addSystemEventTrigger('before', 'shutdown', console.shut_down)
|
||||
reactor.run()
|
10
lbrynet/lbrynet_console/LBRYPlugin.py
Normal file
|
@@ -0,0 +1,10 @@
|
|||
from yapsy.IPlugin import IPlugin
|
||||
|
||||
|
||||
class LBRYPlugin(IPlugin):
|
||||
|
||||
def __init__(self):
|
||||
IPlugin.__init__(self)
|
||||
|
||||
def setup(self, lbry_console):
|
||||
raise NotImplementedError
|
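A hypothetical plugin sketch (not part of this commit) showing how the setup() hook is meant to be filled in; LBRYConsole._load_plugins() calls setup(lbry_console) on every plugin yapsy discovers, so a plugin can register its own handlers here:

from twisted.internet import defer
from lbrynet.lbrynet_console.LBRYPlugin import LBRYPlugin

class ExamplePlugin(LBRYPlugin):
    def setup(self, lbry_console):
        # lbry_console.add_control_handlers() accepts (category, handler_factory) pairs,
        # the same shape used by LBRYConsole._setup_control_handlers(), e.g.:
        # lbry_console.add_control_handlers([('Example', MyHandlerFactory())])  # hypothetical factory
        return defer.succeed(True)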
116
lbrynet/lbrynet_console/LBRYSettings.py
Normal file
|
@@ -0,0 +1,116 @@
|
|||
import binascii
|
||||
import json
|
||||
import leveldb
|
||||
import logging
|
||||
import os
|
||||
from twisted.internet import threads, defer
|
||||
|
||||
|
||||
class LBRYSettings(object):
|
||||
def __init__(self, db_dir):
|
||||
self.db_dir = db_dir
|
||||
self.db = None
|
||||
|
||||
def start(self):
|
||||
return threads.deferToThread(self._open_db)
|
||||
|
||||
def stop(self):
|
||||
self.db = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def _open_db(self):
|
||||
logging.debug("Opening %s as the settings database", str(os.path.join(self.db_dir, "settings.db")))
|
||||
self.db = leveldb.LevelDB(os.path.join(self.db_dir, "settings.db"))
|
||||
|
||||
def save_lbryid(self, lbryid):
|
||||
|
||||
def save_lbryid():
|
||||
self.db.Put("lbryid", binascii.hexlify(lbryid), sync=True)
|
||||
|
||||
return threads.deferToThread(save_lbryid)
|
||||
|
||||
def get_lbryid(self):
|
||||
|
||||
def get_lbryid():
|
||||
try:
|
||||
return binascii.unhexlify(self.db.Get("lbryid"))
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
return threads.deferToThread(get_lbryid)
|
||||
|
||||
def get_server_running_status(self):
|
||||
|
||||
def get_status():
|
||||
try:
|
||||
return json.loads(self.db.Get("server_running"))
|
||||
except KeyError:
|
||||
return True
|
||||
|
||||
return threads.deferToThread(get_status)
|
||||
|
||||
def save_server_running_status(self, running):
|
||||
|
||||
def save_status():
|
||||
self.db.Put("server_running", json.dumps(running), sync=True)
|
||||
|
||||
return threads.deferToThread(save_status)
|
||||
|
||||
def get_default_data_payment_rate(self):
|
||||
return self._get_payment_rate("default_data_payment_rate")
|
||||
|
||||
def save_default_data_payment_rate(self, rate):
|
||||
return self._save_payment_rate("default_data_payment_rate", rate)
|
||||
|
||||
def get_server_data_payment_rate(self):
|
||||
return self._get_payment_rate("server_data_payment_rate")
|
||||
|
||||
def save_server_data_payment_rate(self, rate):
|
||||
return self._save_payment_rate("server_data_payment_rate", rate)
|
||||
|
||||
def get_server_crypt_info_payment_rate(self):
|
||||
return self._get_payment_rate("server_crypt_info_payment_rate")
|
||||
|
||||
def save_server_crypt_info_payment_rate(self, rate):
|
||||
return self._save_payment_rate("server_crypt_info_payment_rate", rate)
|
||||
|
||||
def _get_payment_rate(self, rate_type):
|
||||
|
||||
def get_rate():
|
||||
try:
|
||||
return json.loads(self.db.Get(rate_type))
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
return threads.deferToThread(get_rate)
|
||||
|
||||
def _save_payment_rate(self, rate_type, rate):
|
||||
|
||||
def save_rate():
|
||||
if rate is not None:
|
||||
self.db.Put(rate_type, json.dumps(rate), sync=True)
|
||||
else:
|
||||
self.db.Delete(rate_type, sync=True)
|
||||
|
||||
return threads.deferToThread(save_rate)
|
||||
|
||||
def get_query_handler_status(self, query_identifier):
|
||||
|
||||
def get_status():
|
||||
try:
|
||||
return json.loads(self.db.Get(json.dumps(('q_h', query_identifier))))
|
||||
except KeyError:
|
||||
return True
|
||||
|
||||
return threads.deferToThread(get_status)
|
||||
|
||||
def enable_query_handler(self, query_identifier):
|
||||
return self._set_query_handler_status(query_identifier, True)
|
||||
|
||||
def disable_query_handler(self, query_identifier):
|
||||
return self._set_query_handler_status(query_identifier, False)
|
||||
|
||||
def _set_query_handler_status(self, query_identifier, status):
|
||||
def set_status():
|
||||
self.db.Put(json.dumps(('q_h', query_identifier)), json.dumps(status), sync=True)
|
||||
return threads.deferToThread(set_status)
|
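A brief usage sketch of LBRYSettings, assuming conf_dir already exists and using MIN_BLOB_DATA_PAYMENT_RATE as the fallback the same way LBRYConsole._get_session() does; every accessor returns a deferred because the leveldb work is pushed to a thread:

from lbrynet.conf import MIN_BLOB_DATA_PAYMENT_RATE

settings = LBRYSettings(conf_dir)  # conf_dir is an assumed, pre-existing directory
d = settings.start()
d.addCallback(lambda _: settings.get_default_data_payment_rate())
d.addCallback(lambda rate: rate if rate is not None else MIN_BLOB_DATA_PAYMENT_RATE)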
8
lbrynet/lbrynet_console/__init__.py
Normal file
|
@@ -0,0 +1,8 @@
|
|||
"""
|
||||
lbrynet-console, a plugin-enabled console application for interacting with the LBRY network.
|
||||
|
||||
lbrynet-console can be used to download and upload LBRY Files and includes plugins for streaming
|
||||
LBRY Files to an external application and to download unknown chunks of data for the purpose of
|
||||
re-uploading them. It gives the user some control over how much will be paid for data and
|
||||
metadata, and over which types of queries from clients will be answered.
|
||||
"""
|
14
lbrynet/lbrynet_console/interfaces.py
Normal file
|
@@ -0,0 +1,14 @@
|
|||
from zope.interface import Interface
|
||||
|
||||
|
||||
class IControlHandlerFactory(Interface):
|
||||
def get_prompt_description(self):
|
||||
pass
|
||||
|
||||
def get_handler(self):
|
||||
pass
|
||||
|
||||
|
||||
class IControlHandler(Interface):
|
||||
def handle_line(self, line):
|
||||
pass
|
|
@@ -0,0 +1,15 @@
|
|||
from zope.interface import implements
|
||||
from lbrynet.interfaces import IBlobHandler
|
||||
from twisted.internet import defer
|
||||
|
||||
|
||||
class BlindBlobHandler(object):
|
||||
implements(IBlobHandler)
|
||||
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
######### IBlobHandler #########
|
||||
|
||||
def handle_blob(self, blob, blob_info):
|
||||
return defer.succeed(True)
|
|
@@ -0,0 +1,62 @@
|
|||
from twisted.internet import threads, defer
|
||||
from ValuableBlobInfo import ValuableBlobInfo
|
||||
from db_keys import BLOB_INFO_TYPE
|
||||
import json
|
||||
import leveldb
|
||||
|
||||
|
||||
class BlindInfoManager(object):
|
||||
|
||||
def __init__(self, db, peer_manager):
|
||||
self.db = db
|
||||
self.peer_manager = peer_manager
|
||||
|
||||
def setup(self):
|
||||
return defer.succeed(True)
|
||||
|
||||
def stop(self):
|
||||
self.db = None
|
||||
return defer.succeed(True)
|
||||
|
||||
def get_all_blob_infos(self):
|
||||
d = threads.deferToThread(self._get_all_blob_infos)
|
||||
|
||||
def make_blob_infos(blob_data):
|
||||
blob_infos = []
|
||||
for blob in blob_data:
|
||||
blob_hash, length, reference, peer_host, peer_port, peer_score = blob
|
||||
peer = self.peer_manager.get_peer(peer_host, peer_port)
|
||||
blob_info = ValuableBlobInfo(blob_hash, length, reference, peer, peer_score)
|
||||
blob_infos.append(blob_info)
|
||||
return blob_infos
|
||||
d.addCallback(make_blob_infos)
|
||||
return d
|
||||
|
||||
def save_blob_infos(self, blob_infos):
|
||||
blobs = []
|
||||
for blob_info in blob_infos:
|
||||
blob_hash = blob_info.blob_hash
|
||||
length = blob_info.length
|
||||
reference = blob_info.reference
|
||||
peer_host = blob_info.peer.host
|
||||
peer_port = blob_info.peer.port
|
||||
peer_score = blob_info.peer_score
|
||||
blobs.append((blob_hash, length, reference, peer_host, peer_port, peer_score))
|
||||
return threads.deferToThread(self._save_blob_infos, blobs)
|
||||
|
||||
def _get_all_blob_infos(self):
|
||||
blob_infos = []
|
||||
for key, blob_info in self.db.RangeIter():
|
||||
key_type, blob_hash = json.loads(key)
|
||||
if key_type == BLOB_INFO_TYPE:
|
||||
blob_infos.append([blob_hash] + json.loads(blob_info))
|
||||
return blob_infos
|
||||
|
||||
def _save_blob_infos(self, blobs):
|
||||
batch = leveldb.WriteBatch()
|
||||
for blob in blobs:
|
||||
try:
|
||||
self.db.Get(json.dumps((BLOB_INFO_TYPE, blob[0])))
|
||||
except KeyError:
|
||||
batch.Put(json.dumps((BLOB_INFO_TYPE, blob[0])), json.dumps(blob[1:]))
|
||||
self.db.Write(batch, sync=True)
|
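For clarity, a hedged sketch of the leveldb layout written by _save_blob_infos() above; BLOB_INFO_TYPE comes from db_keys, and the field order follows save_blob_infos():

import json

# key: json-encoded (type tag, blob hash); value: the remaining fields for that blob
key = json.dumps((BLOB_INFO_TYPE, blob_hash))
value = json.dumps([length, reference, peer_host, peer_port, peer_score])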
Some files were not shown because too many files have changed in this diff