Compare commits

...

1264 commits

Author SHA1 Message Date
Jonathan Moody eb5da9511e Revert "TEMP: Try python 3.8."
This reverts commit 8def4d5177.
2023-04-03 13:34:36 -04:00
Jonathan Moody 8722ef840e Bump python_requires >= 3.8.
Code to handle CancelledError (as subclass of Exception) was removed.
2023-04-03 13:34:36 -04:00
Jonathan Moody 6e75a1a89b TEMP: Try python 3.8. 2023-04-03 13:34:36 -04:00
Jonathan Moody ef3189de1d Work on some DeprecationWarnings: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8. 2023-04-03 13:34:36 -04:00
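For context on the DeprecationWarning this commit addresses, a minimal sketch (not the SDK's actual code) of the adjustment asyncio has required since Python 3.8: wrap coroutines in tasks before handing them to asyncio.wait().

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0.1)
    return n

async def main():
    # Deprecated since Python 3.8 (and an error in newer releases):
    # done, pending = await asyncio.wait([fetch(1), fetch(2)])

    # Preferred: pass tasks, not bare coroutine objects.
    tasks = [asyncio.create_task(fetch(n)) for n in (1, 2)]
    done, _pending = await asyncio.wait(tasks)
    print(sorted(t.result() for t in done))

asyncio.run(main())
```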
Jonathan Moody c2d2080034 Try to suppress asyncio.CancelledError in a different way in test_streaming.py. 2023-04-03 13:34:36 -04:00
Jonathan Moody d0b5a0a8fd TEMP: Add workflow_dispatch. 2023-04-03 13:34:36 -04:00
Jonathan Moody 1d0e17be21 Another place generalized to Exception or asyncio.CancelledError. 2023-04-03 13:34:36 -04:00
Jonathan Moody 4ef03bb1f4 Try separate file_manager.stop() and start() calls to better
control order of events in test.
While file_manager is stopped, we get no response to file_list().
2023-04-03 13:34:36 -04:00
Jonathan Moody 4bd4bcdc27 Try ubuntu-20.04 to resolve missing libffi.so.7 issue. 2023-04-03 13:34:36 -04:00
Jonathan Moody e5ca967fa2 Make FileManager.stop() async because SourceManager.stop() is now async. 2023-04-03 13:34:36 -04:00
Jonathan Moody eed7d02e8b Tweak aiohttp version to be compatible with hub repository. 2023-04-03 13:34:36 -04:00
Jonathan Moody 02aecad52b CancelledError derives from BaseException in Python >= 3.8. The significant functional
change here is in upload_to_reflector(). Unit tests in TestReflector were failing.
Deal with lint related to CancelledError cleanup.
2023-04-03 13:34:36 -04:00
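The behavior change this commit deals with can be shown with a small illustrative sketch (the reflector call is hypothetical, standing in for upload_to_reflector()): since Python 3.8, asyncio.CancelledError derives from BaseException, so a bare `except Exception` no longer swallows cancellation and it has to be handled (and normally re-raised) explicitly.

```python
import asyncio

async def upload(reflector):
    try:
        await reflector.send_blobs()  # hypothetical stand-in for upload_to_reflector()
    except asyncio.CancelledError:
        # Python >= 3.8: CancelledError is a BaseException, so the
        # Exception handler below no longer catches it; re-raise so
        # cancellation propagates to the caller.
        raise
    except Exception as err:
        print("upload failed:", err)
```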
Jonathan Moody 585962d930 Make stop(), stop_tasks() consistently async routines, and have stop_tasks()
wait for file_output_task completion. This fixes a problem with
test_download_stop_resume_delete.
2023-04-03 13:34:36 -04:00
Jonathan Moody ea4fba39a6 Fix Transport, DatagramTransport mockup issues. 2023-04-03 13:34:36 -04:00
Jonathan Moody 7a86406746 Fix and enable lint no-self-use & try-except-raise. 2023-04-03 13:34:36 -04:00
Jonathan Moody c8a3eb97a4 Bump pylint version. Old pylint did not find standard library stuff on 3.9.12. 2023-04-03 13:34:36 -04:00
Lex Berezhny 20213628d7 upgrade cryptography 2023-04-03 13:34:36 -04:00
Lex Berezhny 2d1649f972 pylint disable shuffle() arg check 2023-04-03 13:34:36 -04:00
Lex Berezhny 5cb04b86a0 shuffle() needs custom random, removed loop from Event()/Queue() 2023-04-03 13:34:36 -04:00
Lex Berezhny 93ab6b3be3 passing loop to asyncio functions is deprecated 2023-04-03 13:34:36 -04:00
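A sketch of the pattern these commits remove: the explicit `loop` argument to asyncio functions and primitives is deprecated since Python 3.8 (and removed in 3.10), because Event(), Queue() and friends now bind to the running loop automatically.

```python
import asyncio

async def main():
    # Old style (deprecated, later removed):
    # queue = asyncio.Queue(loop=asyncio.get_event_loop())

    # Current style: no loop argument; the running loop is picked up.
    queue = asyncio.Queue()
    await queue.put("item")
    print(await queue.get())

asyncio.run(main())
```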
Lex Berezhny b9762c3e64 update plyvel 2023-04-03 13:34:36 -04:00
Lex Berezhny 82592d00ef try building 3.9 2023-04-03 13:34:36 -04:00
Jonathan Moody c118174c1a Try shell: bash to simplify. 2023-02-02 14:16:07 -05:00
Jonathan Moody d284acd8b8 Remove "debug pip cache". 2023-02-02 14:16:07 -05:00
Jonathan Moody 235c98372d Fix syntax. 2023-02-02 14:16:07 -05:00
Jonathan Moody d2f5073ef4 Single "set pip cache dir" task with conditional inside. 2023-02-02 14:16:07 -05:00
Jonathan Moody 84e5e43117 Bump upload-artifact version too. 2023-02-02 14:16:07 -05:00
Jonathan Moody 7bd025ae54 Upgrade change-string-case. Use startsWith() to test runner.os.
Bump change-string-case-action version again.
2023-02-02 14:16:07 -05:00
Jonathan Moody 8f28ce65b0 Switch to environment vars in $GITHUB_ENV. 2023-02-02 14:16:07 -05:00
Jonathan Moody d36e305129 Functions save-state, set-output deprecated. Use new mechanism. 2023-02-02 14:16:07 -05:00
Jonathan Moody 2609dee8fb Bump checkout, setup-python, cache action versions. 2023-02-02 14:16:07 -05:00
Lex Berezhny a2da86d4b5 v0.113.0 2023-01-23 10:43:02 -05:00
Alex Grin aa16c7fee5 Update conf.py 2023-01-23 10:30:25 -05:00
Alex Grin 3266f72b82 add s1.lbry.network 2023-01-23 10:30:25 -05:00
Jack Robison 77cd2a3f8a add more non lbry.com hubs/bootstrap dht nodes 2023-01-23 10:30:25 -05:00
Alex Grin 308e586e9a add grin's domain to bootstrap hubs list 2023-01-23 10:30:25 -05:00
Philip Ahlqvist 84beddfd77 Added tracker and dht from pigg.es
2023-01-22 19:09:17 -05:00
Victor Shyba 6258651650
Merge pull request #3716 from lbryio/dht_exceptions
handle remote exceptions on routing table ping
2022-12-13 17:18:47 -03:00
Victor Shyba cc5f0b6630 handle remote exception on routing table ping 2022-12-13 16:56:58 -03:00
Jonathan Moody f64d507d39 TEMP: Pin workflows to ubuntu-20.04 to work around missing ripemd160 issue. 2022-12-12 21:47:41 -05:00
Jonathan Moody 001819d5c2 Bump Hub to include fix for supports with wrong names. 2022-11-20 20:34:30 -05:00
Jonathan Moody 8b4c046d28 Try pyinstaller==4.6 to fix MacOS build failure. 2022-11-20 20:34:30 -05:00
Jonathan Moody 2c20ad6c43 Add another zlib.error mapped to InvalidPasswordError. 2022-11-20 20:34:30 -05:00
Jonathan Moody 9e610cc54c Update test for Hub rename of method stage_put() -> stash_put(). 2022-11-20 20:34:30 -05:00
Jonathan Moody b9d25c6d01 Bump hub to latest, getting fix for TX negative caching issue and others. 2022-11-20 20:34:30 -05:00
Jonathan Moody 419b5b45f2 Allow a few initial "transaction not found" responses from Hub. 2022-11-20 20:34:30 -05:00
Jonathan Moody 516c2dd5d0 Bump hub to fix subscribe race + EADDRINUSE issue. 2022-11-20 20:34:30 -05:00
Jonathan Moody b99102f9c9 Bump max_misuse_attempts by 50% to 120000. 2022-11-20 20:34:30 -05:00
Lex Berezhny 8c6c7b655c v0.112.0 2022-10-30 21:56:17 -04:00
Lex Berezhny 48c6873fc4 channel_sign command has customizable salt 2022-10-30 21:53:53 -04:00
Victor Shyba 15dc52bd9a
Merge pull request #3695 from lbryio/3690
Fix claim fields fallback raising errors before download is saved on database
2022-10-28 11:16:52 -03:00
Victor Shyba 52d555078f initialize stored claim field for fallback earlier 2022-10-19 15:13:47 -03:00
Victor Shyba cc976bd010 test for early fallback of suggested_file_name 2022-10-19 15:13:47 -03:00
Lex Berezhny 9cc6992011 torrents needs loop 2022-10-18 17:23:56 -04:00
Lex Berezhny a1b87460c5 passing loop to asyncio functions is deprecated 2022-10-18 17:23:56 -04:00
jessopb 007e1115c4 v0.111.0 2022-10-18 11:18:26 -04:00
jessopb 20ae51b949
Merge pull request #3692 from lbryio/fix-import/export-backwards-compat
fix backwards compatibility in wallet import/export
2022-10-18 11:16:34 -04:00
zeppi 24e9c7b435 fix backwards compatibility in wallet import/export 2022-10-18 10:53:51 -04:00
zeppi b452e76e1d reverting version to 0.110.0 2022-10-18 10:27:44 -04:00
Lex Berezhny 341834c30d v0.111.0 2022-10-18 00:37:44 -04:00
Victor Shyba 12bac730bd tests: check mime type as well 2022-10-18 00:31:10 -04:00
Victor Shyba 1027337833 fallback for stream name and tests 2022-10-18 00:31:10 -04:00
Victor Shyba 97fef21f75 fallback for suggested file name and tests 2022-10-18 00:31:10 -04:00
Lex Berezhny 9dafd5f69b added claim_list filtering by reposted_claim_id and fix for claim_id of reposted claim in JSON output 2022-10-18 00:28:13 -04:00
Victor Shyba fd4f0b2049 bump first-start checkpoint 2022-10-18 00:26:10 -04:00
Lex Berezhny 734f0651a4 minor refactor 2022-10-18 00:25:41 -04:00
zeppi 94deaf55df lint 2022-10-18 00:25:41 -04:00
zeppi d957d46f96 lint 2022-10-18 00:25:41 -04:00
zeppi 0217aede3d update docs 2022-10-18 00:25:41 -04:00
zeppi e4e1600f51 Enable unencrypted wallet import and export 2022-10-18 00:25:41 -04:00
Jonathan Moody d0aad8ccaf Add zlib.error string just observed for the first time. 2022-09-29 22:18:54 -04:00
Jonathan Moody ab50cfa5c1 Add test steps to repeatedly sync_apply() using a bad password. 2022-09-29 22:18:54 -04:00
Jonathan Moody 5a26aea398 Feedback: Reuse IntegrationTestcase.generate() in generate_and_wait(). 2022-09-29 22:18:54 -04:00
Jonathan Moody bd1cebdb4c Bump hub to include TaskGroup fix. 2022-09-29 22:18:54 -04:00
Jonathan Moody ec433f069f Substitute InvalidPasswordError for zlib.error. 2022-09-29 22:18:54 -04:00
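The substitution described here follows a common pattern; a rough sketch under assumed names (`decompress_wallet_payload` and this local `InvalidPasswordError` are illustrative, not the SDK's API): a wrong password produces garbage bytes, zlib fails to decompress them, and the zlib.error is re-raised as a friendlier error.

```python
import zlib

class InvalidPasswordError(Exception):
    """Raised when wallet data cannot be decrypted with the given password."""

def decompress_wallet_payload(decrypted: bytes) -> bytes:
    try:
        # With a bad password the "decrypted" bytes are garbage, and
        # zlib.decompress() rejects them with zlib.error.
        return zlib.decompress(decrypted)
    except zlib.error as err:
        raise InvalidPasswordError("invalid password") from err
```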
Jonathan Moody cd6d3fec9c Wait for initial sync in test_wallet_syncing_status(). 2022-09-29 22:18:54 -04:00
Jonathan Moody 8c474a69de Hub error message changed to include blocked/filtered. 2022-09-29 22:18:54 -04:00
Jonathan Moody 8903056648 Bump hub version to latest. 2022-09-29 22:18:54 -04:00
Jonathan Moody 749a92d0e5 Wait on block height too in generate_and_wait(). 2022-09-29 22:18:54 -04:00
Jonathan Moody a7d7efecc7 Logging level back to INFO. 2022-09-20 10:04:23 -04:00
Jonathan Moody c88f0797a3 Log f.exception() if present instead of "Stopped". 2022-09-20 10:04:23 -04:00
Jonathan Moody 137ebd503d Test insufficient funds behavior. 2022-09-20 10:04:23 -04:00
Jonathan Moody c3f5dd780e Revise exception handling. 2022-09-20 10:04:23 -04:00
Jonathan Moody 20b1865879 Don't use retriable_call(). Add handling for InsufficientFundsError. 2022-09-20 10:04:23 -04:00
Jonathan Moody 231b982422 Wait on usage payment TX to be processed. 2022-09-20 10:04:23 -04:00
Jonathan Moody fd69401791 Catch and log exceptions coming from the pay() task.
Change test to reproduce failure.
2022-09-20 10:04:23 -04:00
Jonathan Moody 718d046833 Logging for test_single_server_payment debug. 2022-09-20 10:04:23 -04:00
Victor Shyba e10f57d1ed
Merge pull request #3642 from lbryio/libtorrent
use official libtorrent, fix tests, make it a normal dependency
2022-09-09 19:58:38 -03:00
Victor Shyba 8a033d58df fix torrent component 2022-09-09 12:21:59 -03:00
Victor Shyba c07c369a28 add libtorrent pyinstaller hook 2022-09-09 12:21:59 -03:00
Victor Shyba 5be990fc55 do not ignore libtorrent import error 2022-09-09 12:21:59 -03:00
Victor Shyba 8f26010c04 make libtorrent a normal dependency 2022-09-09 12:21:59 -03:00
Victor Shyba 3021962e3d tests: add peer directly instead of relying on torrent dht 2022-09-09 12:21:59 -03:00
Victor Shyba 84d89ce5af torrent: disable upnp, natpmp. 2022-09-09 12:21:59 -03:00
Victor Shyba 0961cad716 remove dead code 2022-09-09 12:21:59 -03:00
Jonathan Moody 5c543cb374 Wait for hub to update with all 100 new blocks
before proceeding with initial_headers_sync().
2022-09-08 22:40:09 -04:00
Jonathan Moody f78d7896a5 Revert "Add more failure message details for debugging."
This reverts commit 5e00a79751.
2022-09-08 18:13:26 -04:00
Jonathan Moody 78a28de2aa Align style of generate() with generate_and_wait(). 2022-09-08 18:13:26 -04:00
Jonathan Moody 45a255e7a2 Reuse generate() logic to wait on hub
instead of half-baked reorg() logic.
2022-09-08 18:13:26 -04:00
Jonathan Moody d2738c2e72 Add more failure message details for debugging. 2022-09-08 18:13:26 -04:00
Jonathan Moody a7c7ab7f7b Correct the terminal height we wait for in generate(). 2022-09-08 18:13:26 -04:00
Jonathan Moody 988f288715 Lint fix for _es_height checks. 2022-09-08 18:13:26 -04:00
Jonathan Moody 38e9b5b432 Wait for _es_height in addition to db_height. 2022-09-08 18:13:26 -04:00
Victor Shyba f7455600cc
Merge pull request #3625 from lbryio/dht_crawler
Add script to collect DHT metrics
2022-09-07 12:56:41 -03:00
Victor Shyba c7c2d6fe5a collect connections reachability 2022-09-07 12:03:11 -03:00
Victor Shyba c6c0228970 fix crawler startup query 2022-09-07 12:03:11 -03:00
Victor Shyba 8d9d2c76ae routing table sizes as histogram 2022-09-07 12:03:11 -03:00
Victor Shyba 0b059a5445 use a histogram for latency, remove labels 2022-09-07 12:03:11 -03:00
Victor Shyba ab67f417ee dht_crawler: wait and retry during port switch 2022-09-07 12:03:11 -03:00
Victor Shyba 0e7a1aee0a dht_crawler: clean in memory set for expired peers 2022-09-07 12:03:11 -03:00
Victor Shyba d0497cf6b5 dht_crawler: skip saving connections for now 2022-09-07 12:03:11 -03:00
Victor Shyba c38573d5de dht_crawler: gather both loops, avoid task exceptions being hidden 2022-09-07 12:03:11 -03:00
Victor Shyba f077e56cec dht_crawler:only count latency during findNode 2022-09-07 12:03:11 -03:00
Victor Shyba 5e58c2f224 fix hosting metrics, improve logging 2022-09-07 12:03:11 -03:00
Victor Shyba cc64789e96 dht_crawler: fix logging for missing ports 2022-09-07 12:03:11 -03:00
Victor Shyba b5c390ca04 docker: add volume declaration 2022-09-07 12:03:11 -03:00
Victor Shyba da2ffb000e skip peers with bad ports without raising 2022-09-07 12:03:11 -03:00
Victor Shyba df77392fe0 dht crawler:improve logging, metrics, make startup concurrent 2022-09-07 12:03:11 -03:00
Victor Shyba 9aa9ecdc0a add arg for db path 2022-09-07 12:03:11 -03:00
Victor Shyba 43b45a939b format logging 2022-09-07 12:03:11 -03:00
Victor Shyba e2922a434f add script to generate probe dataset 2022-09-07 12:03:11 -03:00
Victor Shyba 0d6125de0b add sd_hash prober 2022-09-07 12:03:11 -03:00
Victor Shyba 13af7800c2 refactor script, remove dep 2022-09-07 12:03:11 -03:00
Victor Shyba 47a5d37d7c change default metric port, add sqlalchemy to dockerfile 2022-09-07 12:03:11 -03:00
Victor Shyba 4a3a7e318d update pip and setuptools on dht dockerfile 2022-09-07 12:03:11 -03:00
Victor Shyba 85ff487af5 dht_crawler: randomize port when idle 2022-09-07 12:03:11 -03:00
Victor Shyba 62eb9d5c75 dht_crawler: only count non zero connections 2022-09-07 12:03:11 -03:00
Victor Shyba cfe5c8de8a dht_crawler: serve prometheus metrics at 7070 2022-09-07 12:03:11 -03:00
Victor Shyba 0497698c5b dht_crawler: skip ping if known node_id 2022-09-07 12:03:11 -03:00
Victor Shyba 508bdb8e94 dht_crawler: keep working set in memory, flush to db on intervals 2022-09-07 12:03:11 -03:00
Victor Shyba cd42f0d726 dht_crawler: fix node id store 2022-09-07 12:03:11 -03:00
Victor Shyba 2706b66a92 dht_crawler: dont re-bootstrap. try known reachable even when they expire 2022-09-07 12:03:11 -03:00
Victor Shyba 29c2d5715d dht_crawler: fix last_seen update 2022-09-07 12:03:11 -03:00
Victor Shyba 965389b759 dht_crawler: process older first, avoid discarding 2022-09-07 12:03:11 -03:00
Victor Shyba 174439f517 dht_crawler: cleanup, try not to reset key 2022-09-07 12:03:11 -03:00
Victor Shyba baf422fc03 dht_crawler: extract refresh_limit, bump to 1h 2022-09-07 12:03:11 -03:00
Victor Shyba 61f7fbe230 dht_crawler: avoid reads 2022-09-07 12:03:11 -03:00
Victor Shyba c6c27925b7 dht_crawler: flush/commit only when finished 2022-09-07 12:03:11 -03:00
Victor Shyba be4c62cf32 check membership instead of one update per peer 2022-09-07 12:03:11 -03:00
Victor Shyba 443a1c32fa dht_crawler: save a set of connections to avoid dupes, enable initial crawl 2022-09-07 12:03:11 -03:00
Victor Shyba 90c2a58470 dht_crawler: dont gather empty, fix crash 2022-09-07 12:03:11 -03:00
Victor Shyba adc79ec404 dht_crawler: only warn for missing key if it replied 2022-09-07 12:03:11 -03:00
Victor Shyba 137d8ca4ac dht_crawler: enable WAL 2022-09-07 12:03:11 -03:00
Victor Shyba abf4d888af dht_crawler: warn if we cannot get node id 2022-09-07 12:03:11 -03:00
Victor Shyba 6c350e57dd dht_crawler: query recently checked as stats 2022-09-07 12:03:11 -03:00
Victor Shyba fb7a93096e only count checked unreachable 2022-09-07 12:03:11 -03:00
Victor Shyba 7ea88e7b31 dht_crawler: store data 2022-09-07 12:03:11 -03:00
Victor Shyba 2361e34541 dht crawler, initial version 2022-09-07 12:03:11 -03:00
Victor Shyba be06378437 add method for getting the node_id from a known peer on peer manager 2022-09-07 12:03:11 -03:00
Victor Shyba a334a93757
Merge pull request #3631 from lbryio/bootstrap_node
Add all peers when running as a bootstrap node
2022-08-29 11:07:29 -03:00
Victor Shyba e3ee3892b2 better test name 2022-08-22 18:45:18 -03:00
Victor Shyba d61accea1a simplify bucket refresh loop 2022-08-11 21:14:56 -03:00
Victor Shyba e887453aa5 remove unused last_accessed 2022-08-11 20:39:51 -03:00
Victor Shyba c3e4f0b988 add 'is_bootstrap_node' conf 2022-08-11 20:38:42 -03:00
Victor Shyba 318728aebd add bootstrap flag to routing table 2022-08-11 20:38:42 -03:00
Victor Shyba d8c1aaebc2 routing table: mark private methods 2022-08-11 20:38:42 -03:00
Victor Shyba d7b65c15d2 return none instead of raising 2022-08-11 20:38:42 -03:00
Victor Shyba 972db80246 move add peer logic to routing table 2022-08-11 20:38:42 -03:00
Victor Shyba 0d343ecb2f simplify iterative find constructor 2022-08-11 20:38:42 -03:00
Lex Berezhny 01cd95fe46 v0.110.0 2022-08-11 10:58:16 -04:00
Lex Berezhny 6dc57fc02c revert version 2022-08-11 10:20:58 -04:00
Lex Berezhny 10df0c1fba disable Hotbit and UPBit exchange rate feeds 2022-08-11 10:19:54 -04:00
Lex Berezhny ec751e5add v0.110.0 2022-08-10 13:52:46 -04:00
Lex Berezhny 3e3974f813 lint 2022-08-08 14:55:44 -04:00
Lex Berezhny ec82486e15 removed go hub dependency 2022-08-08 14:55:44 -04:00
Lex Berezhny e16f6b07b8 revert release 2022-08-08 13:02:12 -04:00
Lex Berezhny 9a842c273b v0.110.0 2022-08-08 08:46:32 -04:00
Jonathan Moody 40f7d3ee4b Stabilize test_streaming.py by scanning the data_dir, not the parent of data_dir 2022-08-01 17:37:06 -04:00
Lex Berezhny 1dc2f0458b fix lint 2022-08-01 10:04:24 -04:00
Lex Berezhny 3924b28cc3 raise not implemented error when importing unencrypted wallet 2022-08-01 10:04:24 -04:00
Lex Berezhny 020487b6a0 account merge bug fix from upstream 2022-08-01 10:04:24 -04:00
zeppi 14037c9b2f help string edits 2022-08-01 10:04:24 -04:00
zeppi 0cb37a5c4b linting 2022-08-01 10:04:24 -04:00
zeppi fa5f3e7e55 change api for data first, password optional, return (str) 2022-08-01 10:04:24 -04:00
zeppi 30aa0724ec newline end of test file 2022-08-01 10:04:24 -04:00
zeppi 059890e4e5 wallet import export feature 2022-08-01 10:04:24 -04:00
Jonathan Moody 9654d4f003 Obtain "amount" from new_txo.amount when calling save_supports(). 2022-08-01 09:10:49 -04:00
Jonathan Moody 956b52a2c1 Refactor _old_get_temp_claim_info(), eliminating "bid" arg. Obtain the value from txo.amount. 2022-08-01 09:10:49 -04:00
Lex Berezhny 2e975c8b61 lint 2022-07-26 22:18:29 -04:00
Lex Berezhny 656e299100 migrate key addresses on changed accounts after sync apply 2022-07-26 22:18:29 -04:00
Jack Robison 352e45b6b7 update pinned hub version 2022-07-25 10:12:46 -04:00
Jack Robison a9a1076362 improve test_es_sync_utility 2022-07-25 10:12:46 -04:00
Jack Robison 6d370b0a12 dont skip test_setting_stream_fields 2022-07-25 10:12:46 -04:00
Jack Robison c9fac27b66 test resolving different streams for a channel using short urls 2022-07-25 10:12:46 -04:00
Jack Robison 59bc0b9682 update censored error 2022-07-25 10:12:46 -04:00
Lex Berezhny ba60aeeebc migrate certificates after importing new account 2022-07-18 10:36:21 -04:00
Jonathan Moody dc427ecf6c Correct collection_update, account_fund docstrings. Regenerate api.json using generate_json_api.py. 2022-07-07 21:33:43 -04:00
Victor Shyba 87b4404767
Merge pull request #3624 from moodyjon/test_fix_exch_rate1
Fixes for intermittent test failures: test_exchange_rate_manager(), test_basic_claim_search()
2022-07-01 17:29:44 -03:00
Jonathan Moody ba9ac489c3 Relax range in test_exchange_rate_manager.py. (again) 2022-06-19 19:17:48 -04:00
Jonathan Moody 7049629ad7 Relax range in test_exchange_rate_manager.py. 2022-06-19 19:17:40 -04:00
Jonathan Moody 3ae4aeea47 Search for longer prefix of sd_hash to give better chance of unique results. 2022-06-19 19:14:54 -04:00
Lex Berezhny 8becf1f69f v0.109.0 2022-06-08 12:40:35 -04:00
Victor Shyba 582f79ba1c do not consider pending blobs on disk space query 2022-06-08 12:25:38 -04:00
Lex Berezhny 3c28d869f4
Merge pull request #3620 from lbryio/repost_title_tags
reposts can have title, description and tags
2022-06-08 12:24:28 -04:00
Lex Berezhny fe61b90610 reposts can have title, description and tags 2022-06-08 10:35:22 -04:00
Lex Berezhny c04fbb2908
Merge pull request #3614 from lbryio/grins-tracker
add tracker.lbry.grin.io
2022-06-06 13:13:01 -04:00
Alex Grintsvayg 571e71b28e
add tracker.lbry.grin.io 2022-06-06 11:29:09 -04:00
Lex Berezhny 39fcfcccfb
Merge pull request #3608 from lbryio/fix_ci
upgraded SDK to use the new LBRY hub project
2022-06-06 09:01:57 -04:00
Jack Robison 2313d30996
fix reconnect test 2022-05-27 11:59:18 -04:00
Jack Robison ac7e94c6ed
pylint 2022-05-27 09:59:11 -04:00
Jack Robison a391fe9fc7
scribe -> hub 2022-05-27 09:58:13 -04:00
Jack Robison ea8adc5367
update scribe env and fix tests 2022-05-27 09:58:13 -04:00
Victor Shyba 0ea8ba72dd
Env->ServerEnv from scribe changes 2022-05-26 14:28:33 -04:00
Victor Shyba 7a8d5da0e8
Merge pull request #3613 from lbryio/fix_ci_lbcd_urls
tests: fix ci lbcd/lbcwallet urls
2022-05-26 10:21:02 -03:00
Victor Shyba da30f003e8 update lbcwallet url 2022-05-25 12:17:57 -03:00
Victor Shyba 6257948ad7 update lbcd url 2022-05-25 12:17:28 -03:00
Victor Shyba a7f606d62c change pip upgrade due windows error 2022-05-23 16:28:36 -03:00
Victor Shyba 1d95eb1549
Merge pull request #3599 from moodyjon/async-for-pr3504
Tighten up IterativeFinder async close behavior (DHT iterator continues after consumer breaks out of it)
2022-05-23 11:12:40 -03:00
Jonathan Moody e5e9873f79 Simplify by eliminating AsyncGenerator base and generator function. Remove any new places enforcing max_results. 2022-05-20 17:23:39 -04:00
Jonathan Moody 530f9c72ea Fix lint error lbry/utils.py 2022-05-20 17:23:39 -04:00
Jonathan Moody fad84c771c Support official contextlib.aclosing() when it's available. 2022-05-20 17:23:39 -04:00
Jonathan Moody fe07aac79c Define and use lbry.utils.aclosing() in lieu of official contextlib.aclosing(). 2022-05-20 17:23:39 -04:00
Jonathan Moody 91a6eae831 Fix lint issue in iterative_find.py. 2022-05-20 17:23:39 -04:00
Jonathan Moody 5852fcd287 Don't wait on running_tasks after cancel(). Sometimes a CancelledError exception is received, which is unhelpful, and complicates shutting down the generator. 2022-05-20 17:23:39 -04:00
Jonathan Moody 4767bb9dee Wrap "async for" over IterativeXXXFinder in try/finally ensuring aclose(). 2022-05-20 17:23:39 -04:00
Jonathan Moody 82d7f81f41 Correct call to _aclose() in response to TransportNotConnected. 2022-05-20 17:23:39 -04:00
Jonathan Moody b036961954 Tighten up IterativeFinder logic to respect max_records better, and wait after task cancel().
Also make IterativeFinder a proper AsyncGenerator. This gives it an officially recognized aclose() method and could help with clean finalization.
2022-05-20 17:23:39 -04:00
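The aclosing() work in the commits above boils down to guaranteeing that an async generator's aclose() runs even when the consumer breaks out of `async for`; a minimal sketch of that pattern, assuming a fallback shim roughly like what an lbry.utils.aclosing() helper would provide before contextlib.aclosing() exists (Python < 3.10):

```python
import asyncio
from contextlib import asynccontextmanager

try:
    from contextlib import aclosing  # Python >= 3.10
except ImportError:
    @asynccontextmanager
    async def aclosing(agen):
        try:
            yield agen
        finally:
            await agen.aclose()

async def find_peers():
    # Stand-in for an iterative DHT finder that yields batches of peers.
    for batch in ([1, 2], [3, 4], [5, 6]):
        yield batch

async def main():
    async with aclosing(find_peers()) as peers:
        async for batch in peers:
            if 3 in batch:
                break  # aclose() still runs, finalizing the generator cleanly

asyncio.run(main())
```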
Victor Shyba 5c708e1c6f
Merge pull request #3611 from lbryio/fix_hub_url
tests: fix hub url
2022-05-20 18:19:39 -03:00
Victor Shyba 9436600267 tests: bump exchange rate manager test 2022-05-20 17:25:02 -03:00
Victor Shyba 4ab29c4d5f tests: fix hub url 2022-05-20 16:50:09 -03:00
Alex Grin 6944c4a7c4
Update LICENSE 2022-05-17 12:16:00 -04:00
Victor Shyba 2735484fae
Merge pull request #3576 from lbryio/trackers
Add support for announcing and querying LBRY streams over BEP15 (BitTorrent Trackers)
2022-05-13 17:56:20 -03:00
Victor Shyba 03b0d5e250 tracker client: extract default timeout and concurrency. Bump concurrency to 100 2022-05-11 21:13:30 -03:00
Victor Shyba 629812337b changes from review 2022-05-11 21:13:30 -03:00
Victor Shyba e54cc8850c return KademliaPeers directly into the queue instead of exposing Announcement abstraction 2022-05-11 21:13:30 -03:00
Victor Shyba 7cba51ca7d update tests, query with port 0, filter bad ports earlier, make unit tests more reliable 2022-05-11 21:13:30 -03:00
Victor Shyba 3dc145fe68 make peer list query trackers 2022-05-11 21:13:30 -03:00
Victor Shyba 7d560df9fd use same arg name as overridden datagram_received (linting) 2022-05-11 21:13:30 -03:00
Victor Shyba b3f894e480 add integration test for tracker discovery 2022-05-11 21:13:30 -03:00
Victor Shyba 235cc5dc05 results are indexed by ip, setdefault after resolve 2022-05-11 21:13:30 -03:00
Victor Shyba c276053301 move server implementation to tracker module 2022-05-11 21:13:30 -03:00
Victor Shyba 2e85e29ef1 peer id PREFIX is a constant 2022-05-11 21:13:30 -03:00
Victor Shyba 1169a02c8b make client server updatable from conf 2022-05-11 21:13:30 -03:00
Victor Shyba a7cea4082e tracker:log DNS errors as warning instead of trace 2022-05-11 21:13:30 -03:00
Victor Shyba 7e6ea97499 make peer id according to BEP20 2022-05-11 21:13:30 -03:00
Victor Shyba 3c46cc4fdd expire connection id quicker as some trackers have it set low 2022-05-11 21:13:30 -03:00
Victor Shyba 6e5c7a1927 use cache_concurrent to avoid requesting the same connection_id multiple times 2022-05-11 21:13:30 -03:00
Victor Shyba 4e09b35012 remove unused import and dead code 2022-05-11 21:13:30 -03:00
Victor Shyba 16a2023bbd stop tasks before removing transport 2022-05-11 21:13:30 -03:00
Victor Shyba 99fc7178c1 better way to batch announce + handle different intervals for different trackers 2022-05-11 21:13:30 -03:00
Victor Shyba d4aca89a48 handle multiple results from multiple trackers 2022-05-11 21:13:30 -03:00
Victor Shyba 2918d8c7b4 tracker component is running only if the task is alive 2022-05-11 21:13:30 -03:00
Victor Shyba 407c570f8b tests: lower timeout, add test with bad and good mixed 2022-05-11 21:13:30 -03:00
Victor Shyba e299a9c159 tests: multiple trackers, simple case 2022-05-11 21:13:30 -03:00
Victor Shyba cc4a578578 tests: add support for multiple trackers 2022-05-11 21:13:30 -03:00
Victor Shyba 0e4f1eae5b reduce timeout to 10, fix lints 2022-05-11 21:13:30 -03:00
Victor Shyba eccf0e6234 fix reusing result interval from failed expired attempt 2022-05-11 21:13:30 -03:00
Victor Shyba a3da041412 fix exceptions on shutdown, stop using cancel_tasks 2022-05-11 21:13:30 -03:00
Victor Shyba 2f1617eee4 less verbose on timeouts, dont count timeouts, fix stop 2022-05-11 21:13:30 -03:00
Victor Shyba 05124d41ae only log when really announcing, stop counting cached ones 2022-05-11 21:13:30 -03:00
Victor Shyba 42fd1c962e stop tracker tasks on shutdown 2022-05-11 21:13:30 -03:00
Victor Shyba 47e432b4bb make it less verbose, only log after all events are fired 2022-05-11 21:13:30 -03:00
Victor Shyba 61c99abcf1 avoid re-adding the same hash when tracker is busy with too many files 2022-05-11 21:13:30 -03:00
Victor Shyba 28fdd62945 move concurrency control to lower layer 2022-05-11 21:13:30 -03:00
Victor Shyba 3855db6c66 pause announcer for 1 minute each round 2022-05-11 21:13:30 -03:00
Victor Shyba 30acde0afc at most 10 announces concurrently 2022-05-11 21:13:30 -03:00
Victor Shyba 2d9c5742c7 cache results, save interval on tracker 2022-05-11 21:13:30 -03:00
Victor Shyba 43e50f7f04 fix subscribe_hash 2022-05-11 21:13:30 -03:00
Victor Shyba 888e9918a6 improve timeout handling 2022-05-11 21:13:30 -03:00
Victor Shyba 9e9a64d989 evented system for tracker announcements 2022-05-11 21:13:30 -03:00
Victor Shyba 7acaecaed2 managed_stream: remove unused imports 2022-05-11 21:13:30 -03:00
Victor Shyba 24eb189b7f skip component on test cli 2022-05-11 21:13:30 -03:00
Victor Shyba 2344aca146 fix component property 2022-05-11 21:13:30 -03:00
Victor Shyba 758f9deafe fix unit tests 2022-05-11 21:13:30 -03:00
Victor Shyba 7b425eb2ac add tracker announcer component 2022-05-11 21:13:30 -03:00
Victor Shyba 30e8728f7f use tracker on download 2022-05-11 21:13:30 -03:00
Victor Shyba 3989eef84b return whole announcement so the caller knows the interval 2022-05-11 21:13:30 -03:00
Victor Shyba dc6f8c4fc4 add arg to announce stopped, removing the announcement 2022-05-11 21:13:30 -03:00
Victor Shyba 2df8a1d99d make a helper function to announce 2022-05-11 21:13:30 -03:00
Victor Shyba 4ea858fdd3 add new conf: tracker_servers 2022-05-11 21:13:30 -03:00
Victor Shyba 006391dd26 move udp server to test file, add link to BEP15 2022-05-11 21:13:29 -03:00
Victor Shyba 4a0bf8a702 add torrent udp tracker client, server and tests 2022-05-11 21:13:29 -03:00
Victor Shyba d0e715feb9
Merge pull request #3609 from lbryio/pin_scribe
CI: pin scribe, fix exchange rate manager test
2022-05-11 21:13:00 -03:00
Victor Shyba fd73412f12 test_exchange_rate_manager: bump value 2022-05-11 20:28:06 -03:00
Victor Shyba 3819552861 try usedevelop=true 2022-05-11 20:14:55 -03:00
Victor Shyba ca6fd5b7b9 fix scribe pinning 2022-05-11 20:14:44 -03:00
Lex Berezhny b8867cd18c release.py script changed to use gh auth login for authentication 2022-04-10 23:28:16 -04:00
Lex Berezhny 8209eafc6b v0.108.0 2022-04-10 23:25:15 -04:00
Lex Berezhny 858e72a555
Merge pull request #3595 from lbryio/default_feer_per_name_char
pin scribe to specific version
2022-04-08 13:49:11 -04:00
Lex Berezhny d3880fffa0 pin scribe to specific version 2022-04-08 13:48:30 -04:00
Lex Berezhny 0a51898722
Merge pull request #3593 from lbryio/default_feer_per_name_char
set the default per character fee for claims to zero
2022-04-08 13:46:54 -04:00
Lex Berezhny 63cef81015 fix scribe server version test 2022-04-08 13:22:51 -04:00
Lex Berezhny 9279865078 add sleep to transaction show test per jack suggestion 2022-04-08 12:59:25 -04:00
Lex Berezhny fba7fc7aba fix scribe server version test 2022-04-08 12:53:19 -04:00
Lex Berezhny a3d9d5bce7 fix transaction unit test 2022-04-08 11:05:45 -04:00
Lex Berezhny 23ecbc8ebe set the default per character fee for claims to zero 2022-04-08 10:58:02 -04:00
Lex Berezhny 42b2dbd92e
Merge pull request #3572 from orblivion/json-schema
Add wallet json-schema, validate in one test.
2022-04-08 10:56:58 -04:00
Lex Berezhny 37eb55375a only install jsonschema during testing 2022-04-08 10:56:18 -04:00
Lex Berezhny 94bf357817 cleanup paths 2022-04-08 10:56:18 -04:00
Daniel Krol eca69391ef Add wallet json-schema, validate in one test. 2022-04-08 10:56:18 -04:00
Lex Berezhny d0c5b32a90
Merge pull request #3575 from lbryio/spend_time_locked
added `account_deposit` command which is able to deposit time locked transaction into wallet
2022-04-08 10:52:08 -04:00
Lex Berezhny 84ef52cf4d fix redeem scripthash test 2022-04-08 10:11:11 -04:00
Lex Berezhny 8fb14bf713 remove command not available in lbcd 2022-04-08 09:59:22 -04:00
Lex Berezhny 16eb50a291 working jsonrpc_account_deposit 2022-04-08 09:57:15 -04:00
Lex Berezhny dd503fbb82 set locktime from script 2022-04-08 09:57:15 -04:00
Lex Berezhny ae79314869 wip 2022-04-08 09:57:15 -04:00
Lex Berezhny 0cbc514a8e account_deposit command added which accepts time locked TXs 2022-04-08 09:57:15 -04:00
Lex Berezhny 5777f3e15c wip 2022-04-08 09:57:15 -04:00
Lex Berezhny 8cdcd770c0
Merge pull request #3590 from lbryio/fix-address_list-pagination
fix pagination with `address_list` command
2022-04-06 09:52:41 -04:00
Lex Berezhny 2d20458bc2 re-use existing constraints cleanup function 2022-04-06 09:09:39 -04:00
zeppi 2bd2088248 bugfix 2022-04-06 09:09:39 -04:00
zeppi 5818270803 fix address_list pagination 2022-04-06 09:09:39 -04:00
Victor Shyba 79a5f0e375 lint 2022-04-05 00:35:48 -03:00
Victor Shyba c830784f65
Merge pull request #3586 from AlessandroSpallina/master
fix #3530 added error log when tcp port is already in use
2022-04-05 00:04:59 -03:00
Victor Shyba 3fc538104d v0.107.2 2022-03-31 17:19:58 -03:00
AlessandroSpallina 96490fdb15
Merge branch 'master' into master 2022-03-29 13:50:57 +02:00
Victor Shyba 5a0c225c6f v0.107.0 2022-03-28 15:56:06 -03:00
Lex Berezhny c3e524cb8b
Merge pull request #3588 from lbryio/scribe
move `lbry.wallet.server` to new project called `scribe`, switch from using `lbrycrd` to `lbcd` in integration tests
2022-03-28 00:14:54 -04:00
Jack Robison 9faf6e46ca move lbry.wallet.server to new project called scribe
switch from using lbrycrd to lbcd
2022-03-27 23:33:26 -04:00
Victor Shyba e89acac235
Merge pull request #3585 from lbryio/fix_blob_db_queries
Fixes bugs on disk space management and stream recovery
2022-03-24 21:01:14 -03:00
Victor Shyba 200761ff13 make added_on a required parameter on BlobInfo, fix callers 2022-03-24 19:51:48 -03:00
Victor Shyba cb78e95e3d add missing space on query, typo 2022-03-23 13:40:01 -03:00
AlessandroSpallina f01cf98d62 fix #3530 added error log when tcp port is already in use 2022-03-22 17:17:41 +01:00
Victor Shyba c9c2495611 if a blob file exists but is pending on db, fix on startup 2022-03-21 21:58:36 -03:00
Victor Shyba aac72fa512 fix bug where recovery doesnt update blob status 2022-03-21 21:33:33 -03:00
Victor Shyba c5e2f19dde fix bug where added_on is always 0 for downloads 2022-03-21 04:38:51 -03:00
Victor Shyba 34bd9e5cb4 exclude sd blobs from calculation and make them be picked last on removal 2022-03-21 04:26:27 -03:00
Lex Berezhny ad489ed606
Merge pull request #3581 from lbryio/deterministic_channel_keys_post_unlock
eagerly load deterministic channel keys immediately after wallet is unlocked
2022-03-14 12:36:04 -04:00
Lex Berezhny bb541901d9 fix tests 2022-03-13 21:30:38 -04:00
Lex Berezhny ca4ba19a5e fixes #3577 2022-03-13 20:42:34 -04:00
Victor Shyba f05943ff79 implement announcer as a consumer task on gather 2022-03-02 13:00:34 -03:00
Victor Shyba 7ded8a1333 make active an explicit ordered dict 2022-03-02 13:00:34 -03:00
Victor Shyba c2478d4add remove unused search rounds 2022-03-02 13:00:34 -03:00
Victor Shyba f69747bc89 timeout is now supported on dht tests 2022-03-02 13:00:34 -03:00
Victor Shyba 441cc950aa fix and enable test_blob_announcer 2022-03-02 13:00:34 -03:00
Victor Shyba a76a0ac8c4 simplify dht mock and restore clock after accelerating 2022-03-02 13:00:34 -03:00
Victor Shyba 8b1009161a better representation of kademliapeer on debug logs 2022-03-02 13:00:34 -03:00
Victor Shyba 868a620e91 add a way to wait announcements to finish so tests are reliable 2022-03-02 13:00:34 -03:00
Victor Shyba a0e34b0bc8 make timeout handler immune to asyncio time tricks 2022-03-02 13:00:34 -03:00
Victor Shyba 612dbcb2f3 allow running some extra probes for k replacements 2022-03-02 13:00:34 -03:00
Victor Shyba b3614d965d remove all references to bottoming out 2022-03-02 13:00:34 -03:00
Victor Shyba 5d7137255e no stop condition, let it exhaust 2022-03-02 13:00:34 -03:00
Victor Shyba 6ff867ef55 bottoming out is now warning and no results for peer search 2022-03-02 13:00:34 -03:00
Victor Shyba c14915df29 don't probe peers too far from the top closest 2022-03-02 13:00:34 -03:00
Victor Shyba 7d4966e2ae use a dict for the active queue 2022-03-02 13:00:34 -03:00
Victor Shyba 3876e0317d log bottom out of peer search in debug, show short key id for find value 2022-03-02 13:00:34 -03:00
Victor Shyba 0b2b10f759 bump bottom out limit of peer search so people can use 100 concurrent announcers 2022-03-02 13:00:34 -03:00
Victor Shyba 9a79b33664 wait until k peers are ready. do not double add peers 2022-03-02 13:00:34 -03:00
Victor Shyba af1a6edd15 only return good (contacted) peers 2022-03-02 13:00:34 -03:00
Victor Shyba b78929f4d5 reset closest peer on failure 2022-03-02 13:00:34 -03:00
Victor Shyba fb6e342043 add peers from shortlist regardless, but check from other nodes 2022-03-02 13:00:34 -03:00
Victor Shyba 0faa2d35da bump split index to 2 2022-03-02 13:00:34 -03:00
Victor Shyba 511e57c231 fix distance sorting and improve logging 2022-03-02 13:00:34 -03:00
Victor Shyba d762d675c4 closest peer is only ready when it was contacted and isn't known to be bad 2022-03-02 13:00:34 -03:00
Victor Shyba 3fdadee87c dont probe and ignore bad peers 2022-03-02 13:00:34 -03:00
Victor Shyba 1aa4d9d585 simplify, generalize to any size and fix tests 2022-02-28 13:06:51 -03:00
Victor Shyba 8019f4bdb3 stop after finding what to download 2022-02-28 13:06:51 -03:00
Victor Shyba ca65c1ebc5 replace duplicated code 2022-02-28 13:06:51 -03:00
Victor Shyba f0e47aae86 add get_colliding_prefix_bits, docs and tests 2022-02-28 13:06:51 -03:00
Victor Shyba dc7cd545ba extract method and avoid using hash builtin name 2022-02-28 13:06:51 -03:00
Victor Shyba 76bd59d82e extract min_prefix_colliding_bits to a constant 2022-02-28 13:06:51 -03:00
Victor Shyba 461687ffb4 check that the stored blob is at least 1 prefix byte close to peer id 2022-02-28 13:06:51 -03:00
Victor Shyba dd5b9ca81b add migrator to set head blobs should_announce=0 2022-02-20 22:33:57 -03:00
Victor Shyba 89ed04f8a7 fix test_announces 2022-02-20 22:33:57 -03:00
Victor Shyba ec0d9f06c5 do not search for the head blob 2022-02-20 22:33:57 -03:00
Victor Shyba 03b59ac6fc dont set head blob to announce on save 2022-02-20 22:33:57 -03:00
Victor Shyba 43ac3336d7 break tie by length 2022-02-20 22:24:04 -03:00
Victor Shyba d12c78db74 fix and test case for blob_clean after disabling network storage 2022-02-20 22:24:04 -03:00
Jack Robison bfaf1b0957
Merge pull request #3564 from lbryio/fix_downloader_losing_peers
fix handling re-adding lost peers during download
2022-02-16 11:55:22 -05:00
Victor Shyba bb60c385d5 put back all the peers, get rid of re_add 2022-02-08 21:41:52 -03:00
Alex Grin c96d1d9c32
Merge pull request #3537 from lbryio/repost_update 2022-02-08 12:20:20 -05:00
Alex Grintsvayg 7c7a0d4bdf
let stream_update work on non-stream claims 2022-02-08 09:28:17 -05:00
Lex Berezhny cc829a7bf4
Merge pull request #3558 from lbryio/jeffreypicard-patch-1
Update __init__.py
2022-02-04 12:36:01 -05:00
Jeffrey Picard e0ea6383e2
Update __init__.py
Update go hub binary to fix es sync test.
2022-02-04 12:17:19 -05:00
Lex Berezhny bcec5dc2ae
Merge pull request #3556 from lbryio/txo_dust_prevention
prevent creation of change which is below the dust threshold of 1000 dewies
2022-02-04 12:08:16 -05:00
Lex Berezhny cba9c16a06 fix 2022-02-04 12:07:41 -05:00
Lex Berezhny dd68fb077b prevent creation of change which is below the dust threshold of 1000 dewies 2022-02-04 12:07:41 -05:00
Jack Robison c2294e97db
Merge pull request #3552 from lbryio/bump_dht_cache
Increase DHT peer manager cache size to 16384
2022-02-04 11:59:19 -05:00
Victor Shyba c0f512ace7 bump DHT peer manager cache to 16384 2022-02-02 16:54:42 -03:00
Lex Berezhny 3305eb67c6
Merge pull request #3548 from lbryio/announce_metrics
Add optional Prometheus metrics for DHT announcements
2022-02-02 11:06:48 -05:00
Victor Shyba c9d637b4da add gauge for queue size 2022-02-02 11:56:42 -03:00
Victor Shyba ae3e8fadf5 count announcements and how many peers we were able to announce to 2022-02-02 11:56:42 -03:00
Lex Berezhny a1abd94387
Merge pull request #3542 from eug3nix/gh_3481_file_type_detection
file type detection now looks inside the file to determine the type, in addition to using the file extension
2022-01-31 10:29:47 -05:00
Eugene Dubinin 9b463a8cab adds tests for guess_media_type
removes unnecessary comments
2022-01-29 20:49:42 +02:00
Eugene Dubinin babc54a240 adjusts code style 2022-01-29 15:25:17 +02:00
Eugene Dubinin 5836a93b21 fixes KeyError on missing synonyms 2022-01-29 15:25:17 +02:00
Eugene Dubinin 557348e345 detect media_type from the file contents 2022-01-29 15:25:17 +02:00
Lex Berezhny 9adfec6b00
Merge pull request #3549 from lbryio/wallet_lock_w_deterministic_channels
wallet locking/unlocking no longer breaks deterministic channel keys
2022-01-26 11:17:55 -05:00
Lex Berezhny 3a496902f8 wallet locking/unlocking no longer breaks deterministic channel keys 2022-01-24 09:45:08 -05:00
Lex Berezhny b5ead91746
Merge pull request #3534 from lbryio/normalize_signatures
drop dependency on cryptography library in wallet module
2022-01-17 13:38:20 -05:00
Lex Berezhny 302461b446 updated based on code review 2022-01-17 11:08:28 -05:00
Lex Berezhny ac201c718e drop dependency on cryptography library in wallet module 2022-01-17 10:43:59 -05:00
Jack Robison f78e3825ca
Merge pull request #3500 from lbryio/fix_script
Add Prometheus metrics for DHT internals
2022-01-14 12:46:28 -05:00
Victor Shyba 0618053bd4 remove request_flight metric 2022-01-12 12:41:04 -03:00
Victor Shyba 8e6fa3490c disable CSV endpoints by default 2022-01-12 12:39:23 -03:00
Victor Shyba 8a1a1a4000 remove estimation endpoints as that is done over prometheus metrics now 2022-01-12 12:39:23 -03:00
Victor Shyba fd9dcbf9a8 add granular metric for stored blob prefix, for network announcements calculation 2022-01-12 12:39:23 -03:00
Victor Shyba beb8583436 change colliding bits metric to gauge 2022-01-12 12:39:23 -03:00
Victor Shyba b44e2c0b38 count bit collisions between 8 and 16 2022-01-12 12:39:23 -03:00
Victor Shyba 06e94640b5 add counter for peers with colliding bytes 2022-01-12 12:39:23 -03:00
Victor Shyba ff36bdc802 add requests in flight and error 2022-01-12 12:39:23 -03:00
Victor Shyba 46f576de46 add request received 2022-01-12 12:39:23 -03:00
Victor Shyba 7b09c34fce add request_sent and request_time metric on dht 2022-01-12 12:39:23 -03:00
Victor Shyba a22f50aa84 add storing_peers and peer_manager_keys 2022-01-12 12:39:23 -03:00
Victor Shyba 2d9130b4e0 prometheus: move blobs_stored and peers to SDK. add buckets_in_routing_table 2022-01-12 12:39:23 -03:00
Victor Shyba 470ee72462 add passive estimation to prometheus 2022-01-12 12:39:23 -03:00
Victor Shyba add147b409 fix missing async 2022-01-12 12:39:23 -03:00
Victor Shyba 371df6e6c2 keep same node id between runs 2022-01-12 12:39:23 -03:00
Victor Shyba 7ed5fe8f66 add semaphore on active estimation to avoid abuse 2022-01-12 12:39:23 -03:00
Victor Shyba a6ca7a6f38 same api across different estimation methods 2022-01-12 12:39:23 -03:00
Victor Shyba 1c857b8dd8 be explicit about ignoring params 2022-01-12 12:39:23 -03:00
Victor Shyba 87ff3f95ff better endpoint names, small docs 2022-01-12 12:39:23 -03:00
Victor Shyba 5cb4c06d0c add prefix_neighbors_count to routing table debug api 2022-01-12 12:39:23 -03:00
Jack Robison e7d9079389 improve script 2022-01-12 12:39:23 -03:00
Victor Shyba 9cdcff0e1e first attempt at crawling 2022-01-12 12:39:23 -03:00
Lex Berezhny a4dce8cf9f
Merge pull request #3535 from vertbyqb/hexdata-string
convert hexdata argument to a string before signing in `channel_sign` command
2022-01-10 09:48:41 -05:00
Lex Berezhny aaa11c02bf added integration test 2022-01-10 08:46:10 -05:00
vertbyqb d2ebbf5db6 jsonrpc_channel_sign - Convert hexdata to a string before signing
Fixes #3533
2022-01-10 08:46:10 -05:00
Jack Robison e6efc1ad4a
Merge pull request #3538 from lbryio/dht_memory
Unify and fix DHT memory caches for peer manager
2022-01-07 11:29:44 -05:00
Victor Shyba a8523996a9 extract cache values, increase peer cache to 2048 2022-01-07 12:58:52 -03:00
Victor Shyba f586de2bbe DHT bugfix: failures tracking should be bound to 2048 LRU cache size 2022-01-07 12:46:00 -03:00
Victor Shyba 7df02303b2 fix missing docopt argument 2022-01-05 17:10:31 -03:00
Victor Shyba f89c75e642 bump hub version to latest supporting sd_hash search 2022-01-05 17:10:31 -03:00
Victor Shyba d2c1961101 update hub protobuf including sd_hash field 2022-01-05 17:10:31 -03:00
Victor Shyba 2a4c5a48bf increase indexed sd_hash prefix to 4 chars 2022-01-05 17:10:31 -03:00
Victor Shyba 5f5f39a4aa enable and test prefix search for sd hash 2022-01-05 17:10:31 -03:00
Victor Shyba df54cc04af sync and search sd_hash 2022-01-05 17:10:31 -03:00
Victor Shyba 0439616480 add test 2022-01-05 17:10:31 -03:00
Victor Shyba 19fa274227 add sd hash to API 2022-01-05 17:10:31 -03:00
Lex Berezhny 8076000c27
Merge pull request #3450 from lbryio/deterministic_channel_keys
deterministic channel keys (requires wallet server re-sync)
2021-12-23 15:38:15 -05:00
Lex Berezhny c80b30f070 test another signed claim by ytsync 2021-12-22 18:29:46 -05:00
Lex Berezhny 486d5c48b0 takeover tests fix 2021-12-22 18:29:46 -05:00
Lex Berezhny 4822792ee2 create nondeterministic channel in test to replicate old test behavior 2021-12-22 18:29:46 -05:00
Lex Berezhny 569f1d42b1 fix tests 2021-12-22 18:29:46 -05:00
Lex Berezhny 23c10faff5 lint 2021-12-22 18:29:46 -05:00
Lex Berezhny 1eaa195363 reduced crypto dependency in wallet to coincurve 2021-12-22 18:29:46 -05:00
Lex Berezhny fb57cfa5d8 moved imports for lint 2021-12-22 18:29:46 -05:00
Lex Berezhny d33086c8f7 deleted extraneous test 2021-12-22 18:29:46 -05:00
Lex Berezhny d815a6f02c use ecdsa for signing/verifying instead of coincurve due to compatibility issues 2021-12-22 18:29:46 -05:00
Lex Berezhny 8216f4a873 work in progress 2021-12-22 18:29:46 -05:00
Lex Berezhny e4cc4521d9 channel key generation no longer arbitrarily bounded 2021-12-22 18:29:46 -05:00
Lex Berezhny 6bd9b3744d progress, channel keys generate deterministically now 2021-12-22 18:29:46 -05:00
Lex Berezhny f741b00768 progress on deterministic channel keys 2021-12-22 18:29:46 -05:00
Lex Berezhny 5eb95d7dd4
Merge pull request #3529 from lbryio/change_default_coin_selection_strategy
changes default coin selection strategy from standard to prefer_confirmed
2021-12-22 11:28:22 -05:00
Lex Berezhny e5268f43e7 changes default coin selection strategy from standard to prefer_confirmed 2021-12-21 10:22:09 -05:00
Victor Shyba 54d6fb9da4 do not limit DHT results by K, respect max_results 2021-12-09 14:34:55 -03:00
Victor Shyba 3d5c9cc1c2 clarify DHT debug logging on key and operation 2021-12-09 14:32:30 -03:00
Alex Grin 442326f1d8
Merge pull request #3499 from lbryio/multiple-release-time-constraints
@jeffreypicard assures me these timeouts are safe to ignore
2021-12-06 11:43:46 -05:00
Jeffrey Picard d66f46e07b Switch RangeField back to ints 2021-12-03 18:12:38 -05:00
Jeffrey Picard 757b53443d Try forcing tox reset 2021-12-03 17:42:56 -05:00
Jeffrey Picard 3436965b33 Debugging 2021-12-03 17:22:52 -05:00
Jeffrey Picard df71132957
Update es version in workflow 2021-12-03 13:03:00 -05:00
Jeffrey Picard 1b322dc404
Update protobufs, go hub shim, and claim test. 2021-12-03 13:03:00 -05:00
Jack Robison 58341f4ff1
remove unused ES fields 2021-12-03 13:03:00 -05:00
Jack Robison 0d3ca80008
support lists of constraints for all range fields 2021-12-03 13:03:00 -05:00
Lex Berezhny 63437712cd
Merge pull request #3490 from ghost/integration_test_setup_cleanup_timeouts
added timeout of async operations to integration test setup/teardown
2021-12-02 19:52:44 -05:00
Jack Robison 26d0e87f46 v0.106.0 2021-12-02 17:17:00 -05:00
Jack Robison 2cad4fa1ce update json docs 2021-12-02 14:51:52 -05:00
Jack Robison 7bb293e5d6 update claim_search doc
backward compatibility for `trending_mixed`, `trending_local`, `trending_global`, and `trending_group` args to `claim_search`
2021-12-02 14:51:52 -05:00
Lex Berezhny e4777f9314
Merge branch 'master' into integration_test_setup_cleanup_timeouts 2021-12-01 22:08:18 -05:00
Jack Robison 3508f562a7
update json docs 2021-12-01 18:47:03 -05:00
Jack Robison 1aa66c6038
update header checkpoints 2021-12-01 18:46:24 -05:00
Victor Shyba e7458edb72 test case for stream_type search on claims missing source + fix 2021-12-01 18:42:47 -05:00
Lex Berezhny 7f97013703
Merge pull request #3497 from lbryio/fee_per_name_env_var
fee per name env var
2021-12-01 11:26:00 -05:00
Lex Berezhny 9e43060d41 fee per name env var 2021-12-01 10:22:34 -05:00
FemtosecondLaser d69486fb6e returned conditional check in add_timeout() as it was making test_node.py tests unhappy 2021-11-30 01:01:35 +00:00
FemtosecondLaser d4ebfdbc3c removed conditional check in add_timeout() 2021-11-29 22:56:50 +00:00
FemtosecondLaser e00c3db71a
Merge branch 'master' into integration_test_setup_cleanup_timeouts 2021-11-29 21:50:05 +00:00
Victor Shyba 11c3ea0b87 fix typo from arg name 2021-11-24 13:05:43 -03:00
Jack Robison 7531401623
keep touched_or_deleted records 2021-11-21 13:52:03 -05:00
FemtosecondLaser e6c1dc251e changed addTimeout to add_timeout for lint compliance 2021-11-20 00:47:46 +00:00
FemtosecondLaser dca7977051 added timeout of async operations to integration test setup/teardown 2021-11-20 00:22:25 +00:00
Victor Shyba d19e07d661 add blob endpoint for listing announced blobs 2021-11-17 13:27:19 -03:00
Victor Shyba 751ff6e21f add /peers.csv to monitoring endpoint 2021-11-17 13:27:19 -03:00
Brendon J. Brewer 3f6fe995b8 Rename trending 2021-11-16 10:59:10 -05:00
Jack Robison 1e00fb369d fix missing es notification for support amount changing 2021-11-15 00:58:18 -05:00
Jack Robison 54b522383a improve tests 2021-11-15 00:58:18 -05:00
Jack Robison 90a7de3b5c improve resolve tests 2021-11-15 00:58:18 -05:00
Jack Robison 3fe1582432 fix duplicate trending notification to ES 2021-11-15 00:58:18 -05:00
Jack Robison 85eddd2100 fix effective amount for resolve/ES being off while claims/supports are unactivated 2021-11-15 00:58:18 -05:00
Jack Robison f5f8775c59 fix test_colliding_short_id 2021-11-10 13:02:28 -03:00
Jack Robison 0ca98678f7 update default tcp/blob port to be the same as the default udp/dht port (4444) 2021-11-10 13:02:28 -03:00
Victor Shyba a19060c08d log unexpected errors, rename task/loop 2021-11-09 14:27:06 -05:00
Victor Shyba fa2ad88cc4 clear cache on test assertions 2021-11-09 14:27:06 -05:00
Victor Shyba 63cbcd0956 make sure the downloader always stops gracefully 2021-11-09 14:27:06 -05:00
Victor Shyba d6d0ebf8f4 cache space stats from running components so status is instant 2021-11-09 14:27:06 -05:00
Victor Shyba 0d810d92ca add index for blob table so size summaries are faster 2021-11-09 14:27:06 -05:00
Victor Shyba 1ff914a6f4 download from stored announcements and dont reannounce 2021-11-09 14:27:06 -05:00
Victor Shyba 5959b1be72 improve disk space manager status, include more info and unify space queries 2021-11-09 14:27:06 -05:00
Victor Shyba d12a214c05 normal_blobs->stream_blobs, proactive->background 2021-11-09 14:27:06 -05:00
Victor Shyba 3a83052f2e fix free space calculation, test it and give a margin of 10mb before starting so it doesnt insist when full 2021-11-09 14:27:06 -05:00
Victor Shyba 510b44ca92 move more logic out of the downloader component 2021-11-09 14:27:06 -05:00
Victor Shyba 15edb6756d extract background downloader to its own class 2021-11-09 14:27:06 -05:00
Victor Shyba fbfd02b08b add analytics event for network disk space 2021-11-09 14:27:06 -05:00
Victor Shyba b39c26fc86 announce orphan blobs manually, as that was done when save stream 2021-11-09 14:27:06 -05:00
Victor Shyba 95b2c8d175 cleanup background downloader blobs from conf 2021-11-09 14:27:06 -05:00
Victor Shyba d52748b09f separated network seeding space metrics 2021-11-09 14:27:06 -05:00
Victor Shyba 34d18a3a9a don't save streams for network blobs and bypass disk space manager 2021-11-09 14:27:06 -05:00
Victor Shyba 3b27d6a9b5 add conf for network seeding space limit 2021-11-09 14:27:06 -05:00
Victor Shyba 703c391f99 schedule the download task instead 2021-11-09 14:27:06 -05:00
Victor Shyba 4f1dc29df1 fix unit tests from component dependency chain changes 2021-11-09 14:27:06 -05:00
Victor Shyba 13667df374 download from DHT 2021-11-09 14:27:06 -05:00
Victor Shyba 8800d6985f drop channel support, prepare to hook into DHT 2021-11-09 14:27:06 -05:00
Victor Shyba 364b8f2605 handle case where something that isn't a sd blob gets hit 2021-11-09 14:27:06 -05:00
Victor Shyba 67b9ea9deb no api yet 2021-11-09 14:27:06 -05:00
Victor Shyba b78f2336a7 download only blobs 2021-11-09 14:27:06 -05:00
Victor Shyba c7ba637c7d fix tests 2021-11-09 14:27:06 -05:00
Victor Shyba 23a5ce3df7 fix exception arguments 2021-11-09 14:27:06 -05:00
Victor Shyba 8f88e28e50 test add/remove/list subscriptions 2021-11-09 14:27:06 -05:00
Victor Shyba 9cf6139557 fix and test main api 2021-11-09 14:27:06 -05:00
Victor Shyba d556065a8b download all blobs and check that on tests 2021-11-09 14:27:06 -05:00
Victor Shyba 951716f7dc create downloader component and initial tests 2021-11-09 14:27:06 -05:00
Victor Shyba 1ddc7ddda3 with the fix we no longer need to restart the stream 2021-11-08 10:50:47 -05:00
Victor Shyba 903ed9f3dc fix tests by checking there are actual blobs being deleted 2021-11-08 10:50:47 -05:00
Victor Shyba c42b76dcb8 dont lose results on duplicates, just warn 2021-11-08 10:50:47 -05:00
Victor Shyba a73582d9ae remove tried_for_this_blob so banned peers are retried for same blob 2021-11-08 10:50:47 -05:00
Cristian Vicas 42c4fc7557 Bug [#2070] where blob_get RPC timed out.
Both stream.downloader and blob_exchange.downloader paths are adding the fixed_peers list to the DHT node.
Tested jsonrpc_blob_get daemon call.
2021-11-08 10:49:48 -05:00
Jack Robison ddbbb6f1dd
use mempool cache in transaction_get_batch 2021-10-27 20:19:08 -04:00
Lex Berezhny ff21a92330
Merge pull request #3457 from FemtosecondLaser/feature/3270-check-default-download-dir-writable
Modified ensure_directory_exists() to check if the directory is writable by the process.
2021-10-27 11:00:13 -04:00
FemtosecondLaser 07f76f7ad1 Added an integration test covering the following scenario:
On start, if download dir is non-writable - daemon terminates with a helpful message.
2021-10-26 11:17:52 +01:00
Jack Robison c90ccffd7b
Update docker-compose-wallet-server.yml 2021-10-25 14:20:39 -04:00
Jack Robison a00d5f18af
add script to setup docker volumes from snapshots 2021-10-24 16:25:34 -04:00
Jack Robison 1e391d211b
fix attempting to update trending on abandoned claims 2021-10-23 18:39:04 -04:00
FemtosecondLaser d87f9672fa Improved the readability of the tests. 2021-10-23 13:12:49 +01:00
FemtosecondLaser 2b5838aa01 Changed the tests to execute against a real file system instead of a fake one. 2021-10-23 02:52:58 +01:00
Jack Robison e10486d6ec
update docs 2021-10-22 16:51:59 -04:00
Jack Robison 1a74d6604d skip loading tx/claim caches in the elastic sync script when not needed 2021-10-22 15:10:35 -04:00
Alex Grin 6d118536b6
Merge pull request #3460 from lbryio/dht_seed_script_metrics 2021-10-22 12:44:16 -04:00
Alex Grin ca4d758db9
Merge branch 'master' into dht_seed_script_metrics 2021-10-22 11:54:19 -04:00
Victor Shyba dc18c26aa4 add optional prometheus to dht_node script 2021-10-22 03:39:46 -03:00
Jack Robison 48505c2968 update trending with help from @eggplantbren 2021-10-21 00:17:12 -04:00
Jack Robison a98ea1e66a update sync script to handle ES falling behind leveldb on shutdown 2021-10-20 23:41:11 -04:00
Jack Robison 3dec697816 logging 2021-10-20 23:41:11 -04:00
Jack Robison 88fd41e597 update docker 2021-10-20 23:41:11 -04:00
Jack Robison b05d071a1c update Env to accept parameters from cli args 2021-10-20 23:41:11 -04:00
Jack Robison a27d3b9689 set default CACHE_MB to 1024mb and the default QUERY_TIMEOUT_MS to 10s 2021-10-20 23:41:11 -04:00
Jack Robison 1facc0cd01 remove unused hub env settings 2021-10-20 23:41:11 -04:00
FemtosecondLaser 837f91d830 renamed the test class to be more specific about the sut 2021-10-21 00:31:02 +01:00
FemtosecondLaser 9c5f5aefb0 removed redundant tests
renamed a test to be more specific about the kind of the precondition
2021-10-21 00:27:31 +01:00
FemtosecondLaser 6b8d4a444b Modified ensure_directory_exists() to check if the directory is writable by the process. 2021-10-20 15:26:16 +01:00
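A hedged sketch of what a writability check like this could look like (illustrative only, not the SDK's exact implementation):

```python
import os

def ensure_directory_exists(path: str):
    if not os.path.isdir(path):
        # makedirs() itself fails if the parent directory is not writable.
        os.makedirs(path, exist_ok=True)
    # Ask the OS whether this process may write into the directory.
    if not os.access(path, os.W_OK):
        raise PermissionError(f"directory is not writable: {path}")
```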
Jack Robison 6bef09a3b1 update lbry-hub-elastic-sync to support resyncing recent blocks 2021-10-19 15:53:20 -04:00
Jack Robison e35319e5a2 add CACHE_ALL_CLAIM_TXOS hub setting 2021-10-19 15:53:20 -04:00
Jack Robison 0e548b3812 remove dead code 2021-10-19 15:53:20 -04:00
Jack Robison bfac02ccab add CACHE_ALL_TX_HASHES setting to optionally use more memory to save i/o 2021-10-19 15:53:20 -04:00
Jack Robison 7ea1a2b361 sleeps 2021-10-19 15:53:20 -04:00
Jack Robison 99df418f1d improve resolve caching 2021-10-19 15:53:20 -04:00
Jack Robison 6416d8ce9c threadpools for block processor and es sync reader 2021-10-19 15:53:20 -04:00
Jack Robison 22b43a2b01 doc strings 2021-10-19 15:53:20 -04:00
Jack Robison 05e5d24c5e improve claims_producer performance 2021-10-19 15:53:20 -04:00
Jack Robison eabcc30367 resolve lru cache 2021-10-19 15:53:20 -04:00
Jack Robison f5e0ef5223 add block_txs index 2021-10-19 15:53:20 -04:00
Jack Robison f46d9330b0 smaller caches 2021-10-19 15:53:20 -04:00
Jack Robison b62a0b4607 Update daemon.py
docstring
2021-10-15 09:40:15 -04:00
Cristian Vicas 1f044321fb Updated documentation for RPC calls: status, blob_list. 2021-10-15 09:40:15 -04:00
Jack Robison a841d49483
Merge branch 'belikor-fix-wrong-url' 2021-10-15 09:00:59 -04:00
belikor 9509acc490
file_manager: raise new InvalidStreamURLError if the URL is invalid
When using `lbrynet get URL`, if the URL is not a valid URL
the function `url.URL.parse` will raise a `ValueError` exception
which will produce a whole backtrace.

For example, this is the case if we provide a channel name
with a forward slash but without a stream name.
```
lbrynet get @Non-existing/
```

```
Traceback (most recent call last):
  File "/opt/git/lbry-sdk/lbry/file/file_manager.py", line 84, in download_from_uri
    if not URL.parse(uri).has_stream:
  File "/opt/git/lbry-sdk/lbry/schema/url.py", line 114, in parse
    raise ValueError('Invalid LBRY URL')
ValueError: Invalid LBRY URL
WARNING  lbry.extras.daemon.daemon:1110: Error downloading Non-existing/: Invalid LBRY URL
```

Now we raise a new `InvalidStreamURLError` which can be trapped in the upper functions
that use `url.URL.parse` such as `FileManager.download_from_uri`.
If we do this the traceback won't be shown.
```
WARNING  lbry.file.file_manager:252:
Failed to download Non-existing/: Invalid LBRY stream URL: '@Non-existing/'
WARNING  lbry.extras.daemon.daemon:1110:
Error downloading Non-existing/: Invalid LBRY stream URL: '@Non-existing/'
```

This handles the case when trying to download only "channel" parts
without the claim part.
```
lbrynet get @Non-existing
lbrynet get @Non-existing/
lbrynet get Non-existing/
```
2021-10-15 08:59:37 -04:00
Jack Robison 02d356ef12
Merge pull request #3443 from lbryio/fix-resolve-reposted-channel
Fix including channels for reposted claims when resolving a repost
2021-10-08 16:51:40 -04:00
Jack Robison d3516f299e
clear es attributes during initial sync 2021-10-08 16:34:48 -04:00
Jack Robison 79630767c2
fix setting references on txos in extra_txos 2021-10-08 16:34:15 -04:00
Jack Robison 084a76d075
fix reposted channel being missing from resolve result
-improve names of the resolve related methods in `LevelDB`
2021-10-07 15:09:13 -04:00
Jack Robison bc6822e397
Merge pull request #3205 from lbryio/leveldb-resolve
drop sqlite in the hub and make resolve handle reorgs
2021-10-07 02:07:48 -04:00
Jack Robison 43432a9e48
fix compactify script 2021-10-07 00:37:55 -04:00
Jack Robison d64a5bc12f
fix test 2021-10-06 23:53:17 -04:00
Jack Robison b2922d18e2
move test_transaction_commands, test_internal_transaction_api , and test_transactions into their own runner
-move test_resolve_command to its own runner
2021-10-06 23:53:17 -04:00
Jack Robison ccf03fc07b
only save undo info for blocks within reorg limit 2021-10-06 12:07:42 -04:00
Jack Robison a7c45da10c
fix channel count 2021-10-06 00:02:16 -04:00
Jack Robison e03f01e24a
try to fix test_sqlite_coin_chooser 2021-10-05 19:36:49 -04:00
Jack Robison 0939589557
move test_claim_commands and test_resolve_command into new directory 2021-10-05 17:51:43 -04:00
Jack Robison 8167af9b4a
sort touched or deleted claim hashes 2021-10-05 16:44:49 -04:00
Jack Robison 4cf76123e5
block processor db refactoring
-access db through HubDB class, don't use plyvel.DB directly
-add channel count and support amount prefixes
2021-10-05 16:44:49 -04:00
Jack Robison 01ee4b23e6
fix and add test for abandoning a controlling claim in the same block a new claim is made 2021-10-05 16:44:49 -04:00
Jack Robison b198f79214
fix test_sqlite_coin_chooser 2021-10-05 16:44:49 -04:00
Jack Robison 09db868a28
fix ES index name so it stays the same within a test case 2021-10-05 16:44:49 -04:00
Jack Robison 33e8ef75ff
fix bug with early takeover by an update 2021-10-05 16:44:49 -04:00
Jack Robison 11dcb16b14
fix test 2021-10-05 16:44:49 -04:00
Jack Robison 86f21da28b
fix activating non existent claim 2021-10-05 16:44:49 -04:00
Jack Robison 89cd6a9aa4
add tests for takeovers from amount changes in updates before/on/after activation 2021-10-05 16:44:49 -04:00
Jack Robison 18e1256037
batch address history notifications 2021-10-05 16:44:49 -04:00
Jack Robison 02cf478d91
improve leveldb caching 2021-10-05 16:44:49 -04:00
Jack Robison 6ec70192fe
refactor reload_blocking_filtering_streams 2021-10-05 16:44:49 -04:00
Jack Robison 8c75098a9a
fix filtering error upon abandon 2021-10-05 16:44:49 -04:00
Jack Robison 72500f6948
faster read_claim_txos 2021-10-05 16:44:49 -04:00
Jack Robison 37ec9ab464
remove unused executor 2021-10-05 16:44:49 -04:00
Victor Shyba 82fe2a4c8d
fix blocking and filtering 2021-10-05 16:44:49 -04:00
Jack Robison aa50e6ee66
fix test 2021-10-05 16:44:49 -04:00
Jack Robison 91a07cfaee
fix logging number of notified sessions 2021-10-05 16:44:49 -04:00
Jack Robison 709f5e9a65
fix update that initiates takeover not being delayed 2021-10-05 16:44:49 -04:00
Jack Robison b2f9ef21cc
use hub binary from https://github.com/lbryio/hub/pull/13 2021-10-05 16:44:49 -04:00
Jack Robison be6b72edcd
handle invalid release time 2021-10-05 16:44:49 -04:00
Jack Robison ece2d1e78a
name and normalized -> claim_name and normalized_name
-update generated pb files
2021-10-05 16:44:49 -04:00
Jack Robison 1ee1a5f2a1
fix es sync.py 2021-10-05 16:44:49 -04:00
Jack Robison a567326853
fix all_claims_producer 2021-10-05 16:44:49 -04:00
Jack Robison 6231861dd6
merge conflicts 2021-10-05 16:44:49 -04:00
Jack Robison 1ff7b77ee0
claim search fixes 2021-10-05 16:44:49 -04:00
Jack Robison 9365708bb2
fix release_time and creation_timestamp 2021-10-05 16:44:49 -04:00
Jack Robison d23a0a8589
delete unused code 2021-10-05 16:44:49 -04:00
Jack Robison 701b39b043
test_spec_example 2021-10-05 16:44:49 -04:00
Jack Robison 58ad1f3876
non blocking claim producer 2021-10-05 16:44:49 -04:00
Jack Robison 2138e7ea33
fix tests 2021-10-05 16:44:49 -04:00
Jack Robison 32f8c9e59f
renormalization 2021-10-05 16:44:49 -04:00
Jack Robison 57028eab39
add trending integration test 2021-10-05 16:44:49 -04:00
Jack Robison 3a16edd8a6
fix trending overflow 2021-10-05 16:44:49 -04:00
Jack Robison 165f3bb270
refactor trending 2021-10-05 16:44:49 -04:00
Jack Robison 0ba75153f3
trending fixes 2021-10-05 16:44:49 -04:00
Jack Robison db2789990f
make app backward compatible with trending_score
-update trending decay function to zero out low trending score values faster
2021-10-05 16:44:49 -04:00
Jack Robison acaf299bcb
log time to update and decay trending in elasticsearch 2021-10-05 16:44:49 -04:00
Jack Robison 1940301824
skip integrity errors for trending spikes 2021-10-05 16:44:49 -04:00
Jack Robison 34576e880d
update trending in elasticsearch
-add TrendingPrefixSpike to leveldb
-expose `TRENDING_HALF_LIFE`, `TRENDING_WHALE_HALF_LIFE` and `TRENDING_WHALE_THRESHOLD` hub settings
2021-10-05 16:44:49 -04:00
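The trending commits above describe a half-life based decay that zeroes out low scores quickly. As a rough illustration only — the default half-life and the cutoff below are assumptions, not the hub's actual constants:
```
def decay_trending_score(score: float, blocks_elapsed: int,
                         half_life: int = 400, cutoff: float = 1e-6) -> float:
    # standard half-life decay: the score halves every `half_life` blocks
    decayed = score * 0.5 ** (blocks_elapsed / half_life)
    # snap very small values to zero so stale claims drop out of trending
    return 0.0 if abs(decayed) < cutoff else decayed
```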
Brendon J. Brewer 65c0668d40
constants 2021-10-05 16:44:49 -04:00
Brendon J. Brewer 53bd2bcbfe
Put trending score into ES 2021-10-05 16:44:49 -04:00
Brendon J. Brewer 388724fccb
Mark claims as touched 2021-10-05 16:44:49 -04:00
Jack Robison 231eabb013
fix non normalized canonical urls 2021-10-05 16:44:49 -04:00
Jack Robison 54903fc2ea
handle unicode error for unnormalized names 2021-10-05 16:44:49 -04:00
Jack Robison 3a1baf0700
prefix db 2021-10-05 16:44:49 -04:00
Brendon J. Brewer 0c0e36b6f8
trending 2021-10-05 16:44:49 -04:00
Jack Robison 234c03db09
fix claims not having non-normalized names 2021-10-05 16:44:49 -04:00
Jack Robison 59db5e7889
update test 2021-10-05 16:44:49 -04:00
Jack Robison 28aa7da349
merge conflicts 2021-10-05 16:44:49 -04:00
Jack Robison c51e344b87
fix missing fields in reposts 2021-10-05 16:44:49 -04:00
Jack Robison 54461dfa75
fix merge conflicts and simplify extract_doc 2021-10-05 16:44:49 -04:00
Jack Robison 2d48e93f74
fix bulk es sync 2021-10-05 16:44:49 -04:00
Jack Robison af22646322
fix tests 2021-10-05 16:44:49 -04:00
Jack Robison 722b42a93e
fix tests 2021-10-05 16:44:49 -04:00
Jack Robison 8f9e7f77a7
handle invalid claim update 2021-10-05 16:44:49 -04:00
Jack Robison 09bb1ba494
fix keeping claim_hash_to_txo and txo_to_claim in sync 2021-10-05 16:44:49 -04:00
Victor Shyba d4137428ff
implement blocking and filtering 2021-10-05 16:44:49 -04:00
Jack Robison b4d6c4f5b7
fix _get_pending_claim_name 2021-10-05 16:44:49 -04:00
Jack Robison ffbe59ece5
fix applying expiration fork 2021-10-05 16:44:49 -04:00
Jack Robison fab9c90ccb
update iterators to use pack_partial_key 2021-10-05 16:44:49 -04:00
Jack Robison fb1a774bc4
delete lbry/wallet/server/storage.py
-expose leveldb lru cache size as `CACHE_MB` hub param
2021-10-05 16:44:49 -04:00
Jack Robison 98bc7d1e0e
remove dead code 2021-10-05 16:44:49 -04:00
Jack Robison f7622f24b2
non blocking mempool loop 2021-10-05 16:44:49 -04:00
Jack Robison f0a195a6d4
faster es sync 2021-10-05 16:44:49 -04:00
Jack Robison 180ba27d84
run advance_block in threadpool 2021-10-05 16:44:49 -04:00
Jack Robison f944671f86
use claim_to_txo cache 2021-10-05 16:44:49 -04:00
Jack Robison def2903f7d
faster _cached_get_active_amount for claims
-remove dead code
2021-10-05 16:44:49 -04:00
Jack Robison 0273a4e839
fix claim search by fee for claims without fees 2021-10-05 16:44:49 -04:00
Jack Robison f8d2f02c5d
clear claim_to_txo cache before reading 2021-10-05 16:44:49 -04:00
Jack Robison 25147d8897
handle claims that don't exist in ES sync 2021-10-05 16:44:49 -04:00
Jack Robison 0fb6f05fba
in memory claim_to_txo and txo_to_claim dictionaries 2021-10-05 16:44:49 -04:00
Jack Robison 4e4e899356
fix spend_utxo 2021-10-05 16:44:49 -04:00
Jack Robison 5a01dbf269
split flush from advance_block 2021-10-05 16:44:49 -04:00
Jack Robison 30b923b283
rename extend_ops 2021-10-05 16:44:49 -04:00
Jack Robison 73ba381d20
faster spend_utxo 2021-10-05 16:44:49 -04:00
Jack Robison 1a5912877e
faster get_future_activated 2021-10-05 16:44:49 -04:00
Jack Robison 813e506b68
threadpool 2021-10-05 16:44:49 -04:00
Jack Robison 077ca987f7
cleanup 2021-10-05 16:44:49 -04:00
Jack Robison c632a7a6a5
fix getting block hash during reorg 2021-10-05 16:44:49 -04:00
Jack Robison e33e767510
fix test 2021-10-05 16:44:49 -04:00
Jack Robison ac82617aa9
fix spends in address histories 2021-10-05 16:44:49 -04:00
Jack Robison a35dfd1fd1
faster es sync 2021-10-05 16:44:49 -04:00
Jack Robison c28aae9913
fix expiring channels 2021-10-05 16:44:49 -04:00
Jack Robison c26a99e65c
fix abandoning signed claims in the same tx as their channel
-fix canonical/short url in es
2021-10-05 16:44:49 -04:00
Jack Robison ca57dcfc2f
handle failure to generate a short id 2021-10-05 16:44:49 -04:00
Jack Robison df5662dd69
fix resolve by short id 2021-10-05 16:44:49 -04:00
Jack Robison 8927a4889e
tests 2021-10-05 16:44:49 -04:00
Jack Robison 1ac7831f3c
move MemPool into BlockProcessor 2021-10-05 16:44:49 -04:00
Jack Robison 292d272a94
combine MemPool and Notifications classes 2021-10-05 16:44:49 -04:00
Jack Robison a6ee8dc66e
fix touched hashXs notifications 2021-10-05 16:44:49 -04:00
Jack Robison 496f89f184
reorg claims in the search index 2021-10-05 16:44:49 -04:00
Jack Robison 7a56eff1ac
small fixes 2021-10-05 16:44:49 -04:00
Jack Robison 07e182aa16
rename 2021-10-05 16:44:49 -04:00
Jack Robison 7de06aa1e0
delete stale code 2021-10-05 16:44:49 -04:00
Jack Robison 3955b64405
simplify advance and reorg 2021-10-05 16:44:49 -04:00
Jack Robison 2bb55d681d
update limited_history 2021-10-05 16:44:49 -04:00
Jack Robison f94e6ac527
update lookup_utxos 2021-10-05 16:44:49 -04:00
Jack Robison b344f17b86
update RevertableOpStack 2021-10-05 16:44:49 -04:00
Jack Robison 677b8cb633
add remaining db prefixes 2021-10-05 16:44:49 -04:00
Jack Robison 6f3342e09e
update plyvel to 1.3.0
https://github.com/lbryio/lbry-sdk/pull/3205#issuecomment-877564489
2021-10-05 16:44:49 -04:00
Jack Robison a1ddd762e0
cleanup 2021-10-05 16:44:49 -04:00
Jack Robison 68474e4057
skip es sync during initial hub sync, halt the hub upon finishing initial sync 2021-10-05 16:44:49 -04:00
Jack Robison a84b9ee396
fix es sync 2021-10-05 16:44:49 -04:00
Jack Robison b9c2ee745a
fix non localhost elasticsearch 2021-10-05 16:44:49 -04:00
Jack Robison c91a47fcaa
improve channel invalidation test 2021-10-05 16:44:49 -04:00
Jack Robison 615e489d8d
fix stream_update --clear_channel flag 2021-10-05 16:44:49 -04:00
Jack Robison c68f9f6f16
fix signed claim invalidation corner cases 2021-10-05 16:44:49 -04:00
Jack Robison 229cb85a6a
extra deletes
-the channel_to_claim/claim_to_channel entries already get deleted when the claim txo is spent
2021-10-05 16:44:49 -04:00
Jack Robison e5c22fa665
fix has_no_source for reposts 2021-10-05 16:44:49 -04:00
Jack Robison 8bcfff05d7
update channel_to_claim and claim_to_channel at the same time 2021-10-05 16:44:49 -04:00
Jack Robison 6416ee8151
typing and fix error string 2021-10-05 16:44:49 -04:00
Jack Robison f8eceb48e6
update staged txo_to_claim after invalidating channel sig
-fixes abandon of claim with invalidated signature and an update in same block
2021-10-05 16:44:49 -04:00
Jack Robison 310c483bfa
missing channel_to_claim delete 2021-10-05 16:44:49 -04:00
Jack Robison a8f20361aa
fix RepostKey 2021-10-05 16:44:49 -04:00
Jack Robison 290be69d99
typing 2021-10-05 16:44:49 -04:00
Jack Robison 3b96bd7ea0
fix 2021-10-05 16:44:49 -04:00
Jack Robison dc2f22f5fa
cleanup 2021-10-05 16:44:49 -04:00
Jack Robison 821be29f41
rename effective_amount prefix 2021-10-05 16:44:49 -04:00
Jack Robison 52ff1a12ff
fix undeleted claim_to_channel record 2021-10-05 16:44:49 -04:00
Jack Robison 814699ef11
cleanup 2021-10-05 16:44:49 -04:00
Jack Robison 0c30838b25
fix mismatch in claim_to_txo<->txo_to_claim 2021-10-05 16:44:49 -04:00
Jack Robison cf66c2a1ee
rename things
-fix effective amount integrity error
2021-10-05 16:44:49 -04:00
Jack Robison 2ee419ffca
fix 2021-10-05 16:44:49 -04:00
Jack Robison bfb9d696d7
pretty print 2021-10-05 16:44:49 -04:00
Jack Robison bb2a34dd6b
fix duplicate activate 2021-10-05 16:44:49 -04:00
Jack Robison ed652c0c56
fix updating resolve by effective amount after abandoning support 2021-10-05 16:44:49 -04:00
Jack Robison 1dc961d6eb
use RevertableOpStack in _get_takeover_ops 2021-10-05 16:44:49 -04:00
Jack Robison d119fcfc98
remove debug prints 2021-10-05 16:44:49 -04:00
Jack Robison 4d3573724a
add RevertableOpStack to verify consistency of ops as they're staged 2021-10-05 16:44:49 -04:00
Jack Robison 8b37a66075
fix fee amount overflow in es 2021-10-05 16:44:49 -04:00
Jack Robison ba4f32075a
faster claim producer
-make batches of claim txos from the iterator, and sort by tx hash before fetching to maximize cache and read ahead hits
2021-10-05 16:44:49 -04:00
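The "faster claim producer" commit describes making batches of claim txos from the iterator and sorting each batch by tx hash before fetching. A hedged sketch of that idea, with made-up function and field names:
```
from itertools import islice

def batched_claim_producer(claim_txos, fetch_tx, batch_size=1000):
    it = iter(claim_txos)
    while True:
        # take fixed-size batches from the iterator...
        batch = list(islice(it, batch_size))
        if not batch:
            break
        # ...and sort each batch by tx hash so reads are roughly in key order,
        # which makes better use of the cache and OS read-ahead
        for claim_txo in sorted(batch, key=lambda txo: txo.tx_hash):
            yield claim_txo, fetch_tx(claim_txo.tx_hash)
```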
Jack Robison 218be22576
imports 2021-10-05 16:44:49 -04:00
Jack Robison 7688293716
close db in sync script 2021-10-05 16:44:49 -04:00
Jack Robison 458f8533c4
try default block size 2021-10-05 16:44:49 -04:00
Jack Robison 34502752fc
update elastic sync 2021-10-05 16:44:49 -04:00
Jack Robison d6758fd823
invalidate channel signatures upon channel abandon 2021-10-05 16:44:49 -04:00
Jack Robison 65700e790e
_prepare_claim_for_sync generators 2021-10-05 16:44:49 -04:00
Jack Robison 7c34e4bb96
logging 2021-10-05 16:44:49 -04:00
Jack Robison d0d6e3563b
use default sync=False during write_batch 2021-10-05 16:44:49 -04:00
Jack Robison a2619f8c78
genesis_bytes attribute 2021-10-05 16:44:49 -04:00
Jack Robison 42d07fd2f0
fix 2021-10-05 16:44:49 -04:00
Jack Robison 8bea10960f
disable es (revert) 2021-10-05 16:44:49 -04:00
Jack Robison 9cbb19c304
_cached_get_active_amount 2021-10-05 16:44:49 -04:00
Jack Robison 1b94dfd712
fix removing unactivated support 2021-10-05 16:44:49 -04:00
Jack Robison 9f3604d739
debug 2021-10-05 16:44:49 -04:00
Jack Robison 4a1b2be269
leveldb tuning 2021-10-05 16:44:49 -04:00
Jack Robison 962dc1b55b
debug 2021-10-05 16:44:49 -04:00
Jack Robison 07c86502f6
refactor ClaimToTXO prefix 2021-10-05 16:44:49 -04:00
Jack Robison adb188e5d0
filter abandoned claims from those considered for early activation 2021-10-05 16:44:49 -04:00
Jack Robison ce031dc6b8
only do early takeover on a larger amount (fix case where they're equal) 2021-10-05 16:44:49 -04:00
Jack Robison 18b5f03247
filter supported claim hashes for claims that don't exist from early takeover/activations 2021-10-05 16:44:49 -04:00
Jack Robison 8a555ecf1c
remove extra open functions 2021-10-05 16:44:49 -04:00
Jack Robison 1b325b9acd
fix flush id 2021-10-05 16:44:49 -04:00
Jack Robison 1bdaddb319
fix clearing pending_support caches upon abandon 2021-10-05 16:44:49 -04:00
Jack Robison 7896e177ef
fix putting spent unactivated supports in removed_active_support 2021-10-05 16:44:49 -04:00
Jack Robison ce8e659008
fix syncing claim to es where channel is in the same block 2021-10-05 16:44:49 -04:00
Jack Robison 27be5deeb2
ignore activation for headless supports 2021-10-05 16:44:49 -04:00
Jack Robison 515f270c3a
faster get_future_activated 2021-10-05 16:44:49 -04:00
Jack Robison ffff3bd334
debugging 2021-10-05 16:44:49 -04:00
Jack Robison f493f13b25
prints 2021-10-05 16:44:49 -04:00
Jack Robison e605c14b13
flush count 2021-10-05 16:44:49 -04:00
Jack Robison 338488f16d
tests 2021-10-05 16:44:49 -04:00
Jack Robison 2abc67c3e8
reposts 2021-10-05 16:44:49 -04:00
Jack Robison eb1ba143ec
fix updating the ES search index
-update search index to use ResolveResult tuples
2021-10-05 16:44:49 -04:00
Jack Robison 6f5bca0f67
bid ordered resolve, feed ES claim data from block processor 2021-10-05 16:44:49 -04:00
Jack Robison 407cd8dd4b
fix duplicate update op for early activating claim 2021-10-05 16:44:49 -04:00
Jack Robison 62a4f0fc04
fix early takeovers by not-yet activated claims 2021-10-05 16:44:49 -04:00
Jack Robison 77cde411f1
test_early_takeover_abandoned_controlling_support 2021-10-05 16:44:49 -04:00
Jack Robison 3eb9d23108
require previous_winning arg for get_takeover_name_ops 2021-10-05 16:44:49 -04:00
Jack Robison 410d4aeb21
fix takeover edge case
if a claim with a higher value than that of a claim taking over a name exists but isn't yet activated, activate it early and have it take over the name
2021-10-05 16:44:49 -04:00
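Restating the takeover rule from the commit message as code; the names and data shapes are assumptions, and the strictly-greater comparison follows the later "only do early takeover on a larger amount" fix in this log:
```
def pick_takeover_winner(candidate_hash: bytes, candidate_amount: int,
                         pending_claims: dict) -> bytes:
    # pending_claims maps claim_hash -> effective amount of claims for this
    # name that are not yet activated
    best_hash, best_amount = candidate_hash, candidate_amount
    for claim_hash, amount in pending_claims.items():
        if amount > best_amount:
            # activate the larger not-yet-activated claim early and let it
            # take over the name instead of the original candidate
            best_hash, best_amount = claim_hash, amount
    return best_hash
```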
Jack Robison 0a28d216fd
comments 2021-10-05 16:44:49 -04:00
Jack Robison b69faf6920
bid ordered resolve (WIP) 2021-10-05 16:44:49 -04:00
Jack Robison efb92ea37a
fix udp ping test 2021-10-05 16:44:49 -04:00
Jack Robison e77f9981df
DBError 2021-10-05 16:44:49 -04:00
Jack Robison d27c2cc1e9
remove unused COIN file 2021-10-05 16:44:49 -04:00
Jack Robison 586b19675e
claim takeovers 2021-10-05 16:44:49 -04:00
Jack Robison f2907536b4
move get_expiration_height and claimtrie constants to Coin class 2021-10-05 16:44:49 -04:00
Jack Robison 4aa4e35d1c
tests 2021-10-05 16:44:49 -04:00
Jack Robison 9a11ac06bf
claim activations and takeovers (WIP) 2021-10-05 16:44:49 -04:00
Jack Robison aa3b18f848
advance_blocks -> advance_block 2021-10-05 16:44:49 -04:00
Jack Robison 103bdc151f
dead code 2021-10-05 16:44:49 -04:00
Jack Robison 6d4c1cd879
LBRYBlockProcessor -> BlockProcessor
- temporarily disable claim_search
2021-10-05 16:44:49 -04:00
Jack Robison cacbe30871
rebase 2021-10-05 16:44:49 -04:00
Jack Robison bfeeacb230
tests 2021-10-05 16:44:49 -04:00
Jack Robison 04bb7b4919
add wrapper for getnamesintrie
-used for verifying db state against lbrycrd
2021-10-05 16:44:49 -04:00
Jack Robison b7df277a5c
db state struct
-remove dead code
2021-10-05 16:44:49 -04:00
Jack Robison c681041b48
claim expiration 2021-10-05 16:44:49 -04:00
Jack Robison 923834c784
get_claim_by_claim_id 2021-10-05 16:44:49 -04:00
Jack Robison 588edf98be
claims db
-move all leveldb prefixes to DB_PREFIXES enum
-add serializable RevertableOp interface for key/value puts and deletes
-resolve urls from leveldb
2021-10-05 16:44:49 -04:00
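The "claims db" commit introduces a serializable RevertableOp interface for key/value puts and deletes, which later commits stage in a RevertableOpStack. A toy sketch of the underlying idea — every operation knows its inverse so a block's writes can be undone — using invented class layouts rather than the real lbry-sdk types:
```
from dataclasses import dataclass

@dataclass(frozen=True)
class RevertablePut:
    key: bytes
    value: bytes
    def invert(self) -> "RevertableDelete":
        return RevertableDelete(self.key, self.value)

@dataclass(frozen=True)
class RevertableDelete:
    key: bytes
    value: bytes
    def invert(self) -> RevertablePut:
        return RevertablePut(self.key, self.value)

# undoing a staged batch is just applying the inverses in reverse order
def undo(ops):
    return [op.invert() for op in reversed(ops)]
```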
Jack Robison 28c603ad5f
transaction_num_mapping 2021-10-05 16:44:49 -04:00
Jack Robison 6988a47e02
disable sqlite in block processor 2021-10-05 16:44:49 -04:00
Jack Robison 2c8ceb1217
named tuples 2021-10-05 16:44:49 -04:00
Jack Robison ccac4ffa24
consolidate flush_backup 2021-10-05 16:44:49 -04:00
Jack Robison 4258cef9bd
remove lbry.wallet.server.history 2021-10-05 16:44:49 -04:00
Jack Robison 62cc6dfe76
consolidate leveldb block advance/reorg
-move methods from History to LevelDB
2021-10-05 16:44:49 -04:00
Jack Robison 9f224a971b
atomic flush_dbs 2021-10-05 16:44:49 -04:00
Jack Robison cf5dba9157
combine leveldb databases 2021-10-05 16:44:49 -04:00
Jack Robison 23035b9aa0
Merkle staticmethods 2021-10-05 16:44:49 -04:00
Lex Berezhny 84908ec8ec v0.105.0 2021-10-05 11:29:39 -04:00
Victor Shyba dade49743b fix file reflect and add test 2021-10-04 19:26:05 -03:00
Lex Berezhny f29bf35c2a
Merge pull request #3438 from lbryio/disk_space_metrics
metrics reported now include disk space consumed by blobs and what the disk usage limit, if any, is set to
2021-10-03 20:01:50 -04:00
Lex Berezhny dfa6701c43 disk space metrics 2021-10-03 19:33:18 -04:00
Victor Shyba 763ca69a73 dht: use bytes hex/fromhex instead of binascii 2021-09-30 13:26:33 -03:00
Victor Shyba 6bf3b152bf add grin to dht known list 2021-09-30 13:26:33 -03:00
Victor Shyba aa19f85996 add madiator to known dht nodes 2021-09-30 13:26:33 -03:00
Victor Shyba 156d89567e add option to set bootstrap_node 2021-09-30 13:26:33 -03:00
Victor Shyba ecc71baf61 add dockerfile for dht node 2021-09-30 13:26:33 -03:00
Victor Shyba 90c743d963 configure where to save peers 2021-09-30 13:26:33 -03:00
Victor Shyba b926293fa7 define arg types 2021-09-30 13:26:33 -03:00
Victor Shyba 71a19191f8 add dht seed node script 2021-09-30 13:26:33 -03:00
Victor Shyba 38a0f20a33 fix conflict with imported function 2021-09-30 13:24:17 -03:00
Victor Shyba c35192108c errors for empty and missing file on publish 2021-09-30 13:24:17 -03:00
Victor Shyba 245b564f13 generalize stream empty to argument empty 2021-09-30 13:24:17 -03:00
Victor Shyba 0d8d1ea4f3 empty stream name error for user input 2021-09-30 13:24:17 -03:00
Victor Shyba 27a427a363 error for missing channel private key 2021-09-30 13:24:17 -03:00
Victor Shyba 2ff028a694 error for already purchased claims 2021-09-30 13:24:17 -03:00
Lex Berezhny c211338218
Merge pull request #3434 from belikor/fix-documentation
fix typo in `file list` arguments list
2021-09-27 11:08:10 -04:00
belikor 8ac89af8bd api.json: correct the error in the generated documentation
From `"name": "blobs_in_stream<blobs_in_stream>"`
to `"name": "blobs_in_stream"`.
2021-09-23 21:01:17 -05:00
belikor bbbaf59591 daemon: fix documentation in the file_list docstring
This is necessary to produce the `docs/api.json`
(through `scripts/generate_json_api.py`)
with correct information, and to be able to parse this file later on
by other tools.
2021-09-23 21:00:31 -05:00
Lex Berezhny 169419896f v0.104.0 2021-09-22 18:39:01 -04:00
Lex Berezhny 0543dca502 re-enable coveralls 2021-09-22 18:15:13 -04:00
Lex Berezhny cc6011d57a ubuntu 16.04 is deprecated on github actions, upgrading to 18.04 2021-09-22 18:14:15 -04:00
Lex Berezhny fc4407ef7e revert release 2021-09-22 18:11:41 -04:00
Lex Berezhny 03735a125f v0.104.0 2021-09-22 14:02:52 -04:00
Lex Berezhny 5baeda9ff1
Merge pull request #3417 from lbryio/preserve_own_blobs
use database to track blob disk space use and preserve own blobs
2021-09-20 11:32:59 -04:00
Lex Berezhny 9b9794b5e0 default is_mine to true during migration 2021-09-20 09:23:42 -04:00
Lex Berezhny 0697d60a48 coveralls still down, will have to be merged with coveralls off 2021-09-20 09:01:35 -04:00
Lex Berezhny cfe6c82a31 tests 2021-09-19 21:38:09 -04:00
Lex Berezhny 3e30228d95 lint 2021-09-15 10:49:03 -04:00
Lex Berezhny 7264b53e5f during disk clean your own sd blob is now kept and file status of deleted files is set to stopped 2021-09-15 10:37:08 -04:00
Lex Berezhny 60836d8523 db migration and other fixes 2021-09-15 09:10:06 -04:00
Lex Berezhny ef89c2e47a use database to track blob disk space use and preserve own blobs 2021-09-15 09:10:06 -04:00
Lex Berezhny 2d9e3e1847 v0.103.0 2021-09-14 23:25:32 -04:00
Lex Berezhny 30136a9697 omit just node.py 2021-09-14 23:06:00 -04:00
Lex Berezhny db7ccd66d3 coverage omit fix 2021-09-14 22:38:39 -04:00
Lex Berezhny cfe6483102 omit coverage inside tox 2021-09-14 22:20:09 -04:00
Alex Grin 561566e723
Merge pull request #3421 from lbryio/vault_temp
avoid [''] on peers list
2021-09-13 16:14:43 -04:00
Victor Shyba c2dcc4c898 avoid [''] on peers list 2021-09-13 15:57:21 -03:00
Lex Berezhny d09bfdc4ff omit orchstr8 stuff since it doesn't always run the same way on every test run 2021-09-12 11:45:52 -04:00
Victor Shyba 358ef4536f add ConflictingInputValueError for claim_id+claim_ids 2021-09-10 18:57:20 -03:00
Victor Shyba 5061a35e66 remove ignored output from hub node 2021-09-10 18:57:20 -03:00
Victor Shyba cd9a1e8c9e default to legacy search for this release 2021-09-10 18:57:20 -03:00
Victor Shyba 646902e75e only duplicate blockchain CI step 2021-09-10 18:57:20 -03:00
Victor Shyba 40d26cb868 fix error msg to match Go msg 2021-09-10 18:57:20 -03:00
Victor Shyba b64aa51c0c fix stream_types being an integer 2021-09-10 18:57:20 -03:00
Victor Shyba 8206441834 run CI for old and new setups 2021-09-10 18:57:20 -03:00
Victor Shyba d713783736 ignore default values 2021-09-10 18:57:20 -03:00
Victor Shyba 57dffaa2ce update hub to beta release 2021-09-10 18:57:20 -03:00
Victor Shyba 9e81dd2360 refactor arguments fixup 2021-09-10 18:57:20 -03:00
Victor Shyba e2798969d7 claim_id is an invertible field, not a repeated 2021-09-10 18:57:20 -03:00
Victor Shyba 1c31ec66f2 simplify operator handling 2021-09-10 18:57:20 -03:00
Victor Shyba 241f9fc7b0 not_claim_id/not_claim_ids is not a search parameter 2021-09-10 18:57:20 -03:00
Victor Shyba 270192486a translate grpc errors to RPCError 2021-09-10 18:57:20 -03:00
Victor Shyba a799503c97 update fields from hub 2021-09-10 18:57:20 -03:00
Victor Shyba 9685928087 there is no first_search 2021-09-10 18:57:20 -03:00
Victor Shyba 0e4b2fad99 specify index name 2021-09-10 18:57:20 -03:00
Victor Shyba 3c4571a4e0 remove fallback 2021-09-10 18:57:20 -03:00
Jeffrey Picard 046147eb1d updates for fields 2021-09-10 18:57:20 -03:00
Jeffrey Picard 7834520e54 update code to be consistent with field renames 2021-09-10 18:57:20 -03:00
Jeffrey Picard 8e5b4d4b6f hardcode port 2021-09-10 18:57:20 -03:00
Jeffrey Picard 4544a074d9 Move the go hub settings from network to ledger config and hook reset
correctly.
2021-09-10 18:57:20 -03:00
Jeffrey Picard 9b78501392 Set default server to the networks default and use go hub by default 2021-09-10 18:57:20 -03:00
Jeffrey Picard f59ddcc88d Forgot to remove duplicate tests 2021-09-10 18:57:20 -03:00
Jeffrey Picard a4955a2b79 remove unneeded prints 2021-09-10 18:57:20 -03:00
Jeffrey Picard 92ae1a565b updates protobuf 2021-09-10 18:57:20 -03:00
Jeffrey Picard 15a56ca25e tons of small changes squashed together 2021-09-10 18:57:20 -03:00
Jeffrey Picard 9dcaa829ea update protobufs 2021-09-10 18:57:20 -03:00
Jeffrey Picard 9f65799a3d uncomment tests, add remove_duplicates param
Cleanup prints and commented out code

remove print

don't do list claims

cleanup
2021-09-10 18:57:20 -03:00
Jeffrey Picard 886587848b protobuf changes
more protobuf changes (fix imports)
2021-09-10 18:57:20 -03:00
Jeffrey Picard a97fc6dba8 cleanup and reorganizing some stuff
Fixing tests

relabel failing tests properly

run all the tests for the hub

cleanup HubNode
2021-09-10 18:57:20 -03:00
Jeffrey Picard c124e88d12 grpc client for python 2021-09-10 18:57:20 -03:00
Jeffrey Picard 17f3870296 Add tests for hub
Have the basic starting / stopping / querying. Still don't have the hub
jsonrpc stuff working right and, from the looks of it, I need to clarify
some of the logic in the claim search function itself because it's not
returning the correct number of claims anyway.

get the integration working with grpcurl

Got tests working, still need to port the rest of them

ported all of the claim search tests

still a few failing due to not having inflation working, and there's something weird
with limit_claims_per_channel that needs to be fixed.
2021-09-10 18:57:20 -03:00
Lex Berezhny 4626d42d08
Merge pull request #3414 from cristi-zz/remove_comment_api
removed `comment` API endpoints
2021-09-09 13:07:12 -04:00
Cristian Vicas e1e760055c Drop comment_* apis.
Refresh documentation.
2021-09-02 11:38:29 +03:00
Cristian Vicas 45bf6c3bf3 Drop comment_* apis.
Refactored dangling functions.
Added unit test.
2021-09-02 11:38:29 +03:00
Cristian Vicas fef0cc764d Drop comment_* apis
Removed the comment API
Removed tests for the comment API
Removed the documentation section
Removed the comment server configuration
2021-09-02 08:51:00 +03:00
Lex Berezhny 72049afcf6
Merge pull request #3410 from belikor/fix-docstring
jsonrpc_support_sum: remove the + signs from the docstring
2021-09-01 10:05:17 -04:00
belikor d26c06dbf3 jsonrpc_support_sum: remove the + signs from the docstring
These symbols came from 0a0ac3b7c9 and were probably added
accidentally to the beginning of the line by copying and pasting
some diffs.
2021-08-25 13:28:02 -05:00
Lex Berezhny 268decd655 update readme 2021-08-21 20:01:49 -04:00
Lex Berezhny 7ae246c839 always run on push, otherwise master branch does not get coverage 2021-08-21 17:10:53 -04:00
Lex Berezhny c7c454e4fb
Merge pull request #3406 from lbryio/coveralls
submit code coverage reports to coveralls
2021-08-21 17:07:22 -04:00
Lex Berezhny 8e27297a81 coveralls flag name fix 2021-08-21 16:47:17 -04:00
Lex Berezhny 2cdec72985 coveralls fix 2021-08-21 16:22:01 -04:00
Lex Berezhny 0085ac534d coverage fix 2021-08-21 15:41:06 -04:00
Lex Berezhny 7828a79a96 coverage combine corrected 2021-08-21 15:31:50 -04:00
Lex Berezhny 5576c21e67 coverage for integration tests 2021-08-21 15:26:14 -04:00
Lex Berezhny e49cfb1d2b another attempt 2021-08-21 14:44:59 -04:00
Lex Berezhny 1e541d0225 explicit coveralls service 2021-08-21 13:52:33 -04:00
Lex Berezhny 0974afd26d guess coveralls service 2021-08-21 12:06:45 -04:00
Lex Berezhny 8d93594771 coverage on win and mac 2021-08-21 10:51:37 -04:00
Lex Berezhny 1136ac70e8 fix makefile 2021-08-21 10:26:55 -04:00
Lex Berezhny dc8d5a39ea fix spaces 2021-08-21 09:42:18 -04:00
Lex Berezhny 8329e649b0 try python coveralls package instead of github action 2021-08-21 09:41:16 -04:00
Lex Berezhny 66da8b164f try AndreMiras/coveralls-python-action action 2021-08-21 09:28:20 -04:00
Lex Berezhny ea48577864 github workflow syntax fix 2021-08-21 09:15:20 -04:00
Lex Berezhny 597146b136 submit coverage to coveralls 2021-08-21 09:04:44 -04:00
Lex Berezhny 30dd0c1e11
Merge pull request #3405 from lbryio/upgrade_pylint
upgrade pylint and fix lint errors
2021-08-20 23:06:44 -04:00
Lex Berezhny 88772c4266 update setup.py 2021-08-20 22:42:12 -04:00
Lex Berezhny dc1d9e1c84 upgrade pylint and fix lint errors 2021-08-20 22:36:35 -04:00
Lex Berezhny 69ea65835d
Merge pull request #3402 from lbryio/save_files_default_false
changed default setting `save_files` to be false
2021-08-19 10:57:51 -04:00
Lex Berezhny d5bae3a8c6 manually set save_files=True in unit tests 2021-08-19 09:31:17 -04:00
Lex Berezhny f14010bd5b explicitly set save_files = True in tests 2021-08-17 16:36:48 -04:00
Lex Berezhny 87094fc83f changed default setting save_files to be false 2021-08-17 15:47:18 -04:00
Lex Berezhny 7c179cfeab missing closing squiggly bracket 2021-08-17 14:48:13 -04:00
Lex Berezhny 7582c221d1 v0.102.0 2021-08-17 14:16:17 -04:00
Lex Berezhny c109895848
Merge pull request #3399 from lbryio/better-error-logging
Less verbose error logs, only log tracebacks for errors not defined in `lbry.error`
2021-08-17 14:14:27 -04:00
Jack Robison eccedada40
add TODOs for errors raised that aren't defined in lbry.error 2021-08-17 12:31:03 -04:00
Jack Robison 25d54accf8
return api errors from wallet_add and wallet_create 2021-08-17 12:30:17 -04:00
Jack Robison d07685f0e9
only log tracebacks for api errors not defined in lbry.error 2021-08-17 11:30:58 -04:00
Jack Robison 2445c00c7e
raise WalletNotLoadedError in get_wallet_or_error instead of ValueError 2021-08-17 11:30:58 -04:00
Lex Berezhny 4c1d3ef514
Merge pull request #3398 from lbryio/clean_blobs_after_delay
clean blobs after waiting interval instead of immediately on startup
2021-08-17 10:44:34 -04:00
Lex Berezhny 4614c7d4c2 clean blobs after waiting interval instead of immediately on startup 2021-08-17 09:52:44 -04:00
Lex Berezhny bbf1ef0dc3
Merge pull request #3378 from lbryio/disk_management
ability to limit disk spaced used for blobs via `blob_storage_limit` setting (oldest blobs are deleted when disk space limit is reached)
2021-08-16 17:41:05 -04:00
Lex Berezhny 3433c9e708 return number of files deleted 2021-08-16 17:03:40 -04:00
Lex Berezhny 2cd5d75a2e return true/false if clean was performed 2021-08-16 17:02:13 -04:00
Lex Berezhny 2535b8adef fix disk space unit test 2021-08-16 14:54:17 -04:00
Lex Berezhny 4edab7bb7f fix sorting by DirEntry error 2021-08-16 14:41:16 -04:00
Lex Berezhny fd8658e317 test component unit test 2021-08-16 14:35:32 -04:00
Lex Berezhny 51d21d8c86 working disk cleanup 2021-08-16 14:15:12 -04:00
Lex Berezhny b4c3307cdf fixed tests 2021-08-13 10:32:46 -04:00
Lex Berezhny 4e8d10cb44 disk space manager and status API 2021-08-13 10:32:46 -04:00
Lex Berezhny e96875a425 workflow syntax 2021-08-13 10:32:46 -04:00
Lex Berezhny 5ab0035348 run tests on windows and mac 2021-08-13 10:32:46 -04:00
Lex Berezhny 4ddff96b1e
Merge pull request #3395 from lbryio/libtorrent_optional
make libtorrent optional and skip test which depends on it
2021-08-13 10:31:06 -04:00
Lex Berezhny a08d84c1df make libtorrent optional and skip test which depends on it 2021-08-13 10:07:06 -04:00
Victor Shyba 21c71bfac1 update sync utility 2021-08-09 18:33:47 -03:00
Victor Shyba 6baaed3581 refactor query with new fields 2021-08-09 18:33:47 -03:00
Victor Shyba 152dbfd5d1 reflect fee_currency, fee_amount and duration on repost searches 2021-08-09 18:33:47 -03:00
Victor Shyba a56d14086b reflect media_type on repost searches 2021-08-09 18:33:47 -03:00
Victor Shyba aee87693f8 reflect stream_type on repost searches 2021-08-09 18:33:47 -03:00
Alex Grin 976b4affd9
Merge pull request #3383 from lbryio/dht_log 2021-08-09 17:10:01 -04:00
Victor Shyba e222b6ad9c log that an invalid query happened 2021-08-09 15:07:44 -03:00
Victor Shyba 19b17374e8 throttle instead of disconnecting 2021-08-09 15:07:44 -03:00
Victor Shyba 43989122bb add error type and message to error readme and update code 2021-08-09 15:07:44 -03:00
Victor Shyba 72712d6047 raise and disconnect if too many parameters are used on search 2021-08-09 15:07:44 -03:00
Victor Shyba 0b52d2cc15 log invalid port as a warning instead of an exception 2021-08-03 15:29:52 -03:00
Lex Berezhny 8304102136 move <!channel> out of markdown converter 2021-07-27 11:54:08 -04:00
Lex Berezhny 3381aefcfa notify channel of slack message 2021-07-27 11:39:35 -04:00
Lex Berezhny 279a365cb1 v0.101.1 2021-07-27 11:12:06 -04:00
Lex Berezhny 2c9e00da56 revert version 2021-07-27 11:10:16 -04:00
Lex Berezhny f7cae69704 switch to using a custom GITHUB_TOKEN for doing releases 2021-07-27 11:07:25 -04:00
Lex Berezhny b7d58bcdbc v0.101.1 2021-07-26 17:01:25 -04:00
Lex Berezhny 13a856b843 revert version 2021-07-26 17:00:32 -04:00
Lex Berezhny 8da38985c3 debugging for release 2021-07-26 16:58:11 -04:00
Lex Berezhny 60cf6c6b97 v0.101.1 2021-07-26 16:02:28 -04:00
Lex Berezhny 35c2b34564
Merge pull request #3372 from lbryio/release_process_fixup
github release process fixes
2021-07-26 15:58:10 -04:00
Lex Berezhny ef2e048efc fixes for release process 2021-07-26 15:57:45 -04:00
Lex Berezhny 6b3261aa33
Merge pull request #3373 from lbryio/fix_typo
fix typo in kwargs key
2021-07-26 15:29:29 -04:00
Victor Shyba 1849c02cb6 fix typo in kwargs key 2021-07-26 16:02:48 -03:00
Lex Berezhny 1ec74a89e2
Merge pull request #3367 from belikor/fix-search-claim-id
fix error when using `--claim_id` with `lbrynet claim search`
2021-07-23 10:08:57 -04:00
Victor Shyba c591792de9 has_source is a special case 2021-07-22 16:25:55 -03:00
Victor Shyba 3108543ae5 3 missing fields 2021-07-22 16:25:55 -03:00
Victor Shyba 1eb221c743 translate reposted, signature_valid and normalized 2021-07-22 16:25:55 -03:00
Alex Grin bebf6bc2e7 Update constants.py 2021-07-22 16:25:55 -03:00
Alex Grin 9e91cc2138 Update constants.py 2021-07-22 16:25:55 -03:00
Victor Shyba c5b939cfb7 fix tests 2021-07-22 16:25:55 -03:00
Victor Shyba 5bd411ca27 filtering hash->id 2021-07-22 16:25:55 -03:00
Victor Shyba a533cda6f0 ES: all _hash to _id 2021-07-22 16:25:55 -03:00
Lex Berezhny fe4b07b8ae v0.101.0 2021-07-21 12:35:16 -04:00
Lex Berezhny f9f2ccd904 revert version 2021-07-21 12:28:41 -04:00
Lex Berezhny d9e87d7c32 publish after uploading release artifacts 2021-07-21 12:27:04 -04:00
Lex Berezhny a0092c0770 remove docker steps from github action build 2021-07-21 12:14:49 -04:00
Lex Berezhny 3100131125 checkout code in release job 2021-07-21 11:33:41 -04:00
Lex Berezhny 988880cf83 update set_build script to work on github 2021-07-21 11:32:37 -04:00
Lex Berezhny c3fb9672c4 re-enable skipping failing DHT unit test 2021-07-21 11:25:44 -04:00
Lex Berezhny 0a2d94e425 updated set_build to use GITHUB_ env vars 2021-07-21 09:19:58 -04:00
Lex Berezhny 8d9073cd31 v0.101.0 2021-07-20 22:52:44 -04:00
Lex Berezhny d075961ffa removed .gitlab-ci.yml 2021-07-20 22:52:05 -04:00
Lex Berezhny 7a72409b61 fix dht node test 2021-07-20 22:43:57 -04:00
Lex Berezhny 34fc530fba cleanup github actions to be able to drop gitlab 2021-07-20 22:43:57 -04:00
Jack Robison f257ff2f97
Merge pull request #3369 from lbryio/fix-hanging-tx-notification
fix stuck transaction notification due to race in mempool when advancing a block
2021-07-20 18:54:22 -04:00
Jack Robison 7ad5822c5b
fix test 2021-07-20 16:03:34 -04:00
Jack Robison 9a8f9f0a94
fix stuck notification due to mempool/notification race 2021-07-20 15:14:10 -04:00
belikor 6421cecafb daemon: fix --claim_id with lbrynet claim search
For some reason, when using `claim_search`
with `--claim_id`, the arguments dictionary will also
contain `claim_ids` with an empty list, even if we didn't specify it.
```
lbrynet claim search --claim_id=8945573bcfcb7f8276187dfbb93545eac4ebf71a
```

Using both `claim_id` and `claim_ids` will raise a `ValueError`
exception so the daemon won't return a valid result
even if the claim ID is in fact valid.

So if `claim_id` exists, we need to discard `claim_ids`
if it is empty, before proceeding with the rest of the code.

On the other hand, if `claim_ids` is used, and `claim_id` is absent,
there will be no problem as `claim_id` won't be added to the dictionary.
```
lbrynet claim search --claim_ids=8945573bcfcb7f8276187dfbb93545eac4ebf71a
```
2021-07-19 22:24:43 -05:00
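The fix this commit describes boils down to dropping the spurious empty `claim_ids` list when `--claim_id` was given. A minimal sketch — the dictionary handling is illustrative, not the daemon's exact code:
```
def fixup_claim_search_kwargs(kwargs: dict) -> dict:
    # discard an empty claim_ids list that argument parsing adds by default,
    # so passing --claim_id alone no longer triggers the ValueError
    if kwargs.get('claim_id') and not kwargs.get('claim_ids'):
        kwargs.pop('claim_ids', None)
    return kwargs

args = {'claim_id': '8945573bcfcb7f8276187dfbb93545eac4ebf71a', 'claim_ids': []}
assert 'claim_ids' not in fixup_claim_search_kwargs(args)
```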
Alex Grin be544d6d89
Merge pull request #3358 from belikor/improve-install-md 2021-07-19 14:20:44 -04:00
Alex Grin 3c89ecafdd
Merge branch 'master' into improve-install-md 2021-07-19 14:20:39 -04:00
Alex Grin 35ec4eec52
Update INSTALL.md 2021-07-19 14:20:15 -04:00
Alex Grin e47f737a2f
Update INSTALL.md 2021-07-19 14:15:21 -04:00
Alex Grin ac671a065b
Merge pull request #3356 from lbryio/propagate_external_ip_change 2021-07-19 14:12:26 -04:00
Alex Grin 74116cc550
Merge branch 'master' into propagate_external_ip_change 2021-07-19 14:12:19 -04:00
Alex Grin 406070a5c3
Merge pull request #3354 from belikor/note-download-blob-peer 2021-07-19 14:10:13 -04:00
Victor Shyba 0ccafd5b53 make get_or_create_usable_address respect the generator lock 2021-07-19 14:09:52 -04:00
Alex Grin 940f517aa3
Merge branch 'master' into note-download-blob-peer 2021-07-19 14:09:51 -04:00
Alex Grin 216e5f65ad
Merge pull request #3363 from lbryio/troubleshoot_p2p_script
add script with web endpoints that can troubleshoot p2p/dht
2021-07-19 14:04:28 -04:00
Victor Shyba a74685d66d add script to troubleshoot p2p/dht 2021-07-19 15:01:37 -03:00
belikor b7791d2845 exchange_rate_manager: raise exception if 'error' is in json_response
If the error is not handled, the running daemon will continuously
print the following error message:
```
Traceback (most recent call last):
  File "lbry/extras/daemon/exchange_rate_manager.py", line 77, in get_rate
  File "lbry/extras/daemon/exchange_rate_manager.py", line 189, in get_rate_from_response
KeyError: 0
```

This started happening when the UPBit exchange decided to delist
the LBC coin.

Normally `json_response` should be a dictionary, not a list,
so `json_response[0]` causes an error.

By checking for the `'error'` key, we can raise the proper exception.

Once this is done, the message will be a warning, not a traceback.
```
WARNING  lbry.extras.daemon.exchange_rate_manager:92:
Failed to get exchange rate from UPbit: result not found
```
2021-07-19 13:41:49 -04:00
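A sketch of the guard this commit describes, assuming the feed answers with an object carrying an `'error'` key when the pair is delisted; the exception class below is a stand-in, not necessarily the one defined in lbry.error:
```
class CurrencyConversionError(Exception):
    pass

def get_rate_from_response(json_response):
    # a delisted pair comes back as {"error": ...} rather than the usual
    # list-like payload, so indexing it with [0] raises KeyError: 0
    if 'error' in json_response:
        raise CurrencyConversionError('result not found')
    return json_response[0]
```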
Victor Shyba d151a82d78 add libtool and automake to the dockerfiles so they can build coincurve 2021-07-15 17:11:19 -03:00
belikor 8ce61fbd52 INSTALL.md: break the big blocks of code, and remove the space
Remove the first space in the block of code as it is not necessary.

This
 ```
 $ python --version
 ```

Becomes this
```
$ python --version
```

Also break the big block of code into individual blocks.
2021-07-11 19:44:54 -05:00
belikor 90c24aade3 INSTALL.md: using Python 3.8 does not work, issue #2769
Because of issue #2769 at the moment the `lbrynet` daemon
will only work correctly with Python 3.7.

The `deadsnakes` personal package archive (PPA) provides
Python 3.7 for Ubuntu distributions that no longer have it
in their official repositories like 18.04 and 20.04.

If Python 3.8+ is used, the daemon will start but the RPC server
may not accept messages, returning the following:
```
Could not connect to daemon. Are you sure it's running?
```
2021-07-11 19:44:54 -05:00
belikor 6b3f787fee INSTALL.md: add more information on the virtual environments
Leave with `deactivate`.

Enter the environment again with
```
source lbry-venv/bin/activate
```

When developing, we can start the server interactively.
```
python lbry/extras/cli.py start
```

Parameters can be passed in the same way.
```
python lbry/extras/cli.py wallet balance
```

If a Python debugger (`pdb` or `ipdb`) is installed we can also start
it in this way, set up break points, and step through the code.
```
ipdb lbry/extras/cli.py
```
2021-07-11 19:44:25 -05:00
belikor 4ebe4ce1b7 scripts: note to further investigate in download_blob_from_peer
Currently `lbrynet blob get <hash>` does not work to download
single blobs which are not already present in the system.
The function locks up and never returns.
It only works for blobs that are in the `blobfiles` directory
already.

This bug is reported in lbryio/lbry-sdk, issue #2070.

Maybe this script can be investigated, and certain parts
can be added to `lbry.extras.daemon.daemon.jsonrpc_blob_get`
in order to solve the previous issue, and finally download
single blobs from the network (peers or reflector servers).
2021-07-09 11:53:35 -05:00
belikor 8c79740ee8 script/test_claim_search: fix the import of ClientSession
This is nothing special, it just allows the module
to run without throwing an error on the import.

From
```
from lbry.wallet.client.basenetwork import ClientSession
```

To
```
from lbry.wallet.network import ClientSession
```
2021-07-09 10:52:41 -04:00
belikor 59d027ca02 script/find_max_server: fix the import of ClientSession
This is nothing special, it just allows the module
to run without throwing an error on the import.

From
```
from lbry.wallet.client.basenetwork import ClientSession
```

To
```
from lbry.wallet.network import ClientSession
```
2021-07-09 10:52:41 -04:00
Ofek Lev 37a7345a90 Upgrade coincurve dependency 2021-07-09 10:51:03 -04:00
Victor Shyba c519d4651b loop.time is not usable on advance time, use wall time 2021-07-08 03:55:21 -03:00
Victor Shyba 9b3b609e40 re-enable test_losing_connection 2021-07-08 03:46:48 -03:00
Victor Shyba 6254f53716 propagate external ip changes from upnp component to dht node protocol 2021-07-08 03:46:05 -03:00
Jack Robison f05dc46432
Merge pull request #3342 from lbryio/bug_flush_counter
[resync required] Avoid flush counter overflows on long running hubs by increasing it to 32 bits
2021-07-07 23:45:47 -04:00
Victor Shyba 3de0982a4a limit request error logging to 16k 2021-07-07 18:39:38 -03:00
Victor Shyba c2184fb3bf run migration on history db open 2021-07-07 18:39:38 -03:00
Victor Shyba 919c09fcb0 add migration 2021-07-07 18:39:38 -03:00
Victor Shyba 1d9dbd40ec increase flush counter to 32 bits 2021-07-07 18:39:38 -03:00
belikor 0cd953a6f3 script/checktrie: fix the import to SQLDB
This is nothing special, it just allows the module
to run without throwing an error.

From
```
from lbry.wallet.server.db import SQLDB
```

To
```
from lbry.wallet.server.db.writer import SQLDB
```
2021-07-07 11:02:28 -03:00
Alex Grin 4db2b72351
Merge pull request #3347 from kodxana/master 2021-07-02 11:16:40 -04:00
kodxana dd54fcbdbd
Create README.md 2021-07-01 18:21:20 +02:00
kodxana 3123cf7ac6
Added docker-compose 2021-07-01 18:17:36 +02:00
Victor Shyba 6b579dd4ce add dockerfiles for web sdk 2021-06-30 18:03:00 -03:00
Alex Grin 16dfaa3e27
Merge pull request #3343 from lbryio/example_es
add small example script showing how to read and update values to ES as we currently use it
2021-06-30 11:30:06 -04:00
Victor Shyba d7842b9f84 small script showing how to read/update values to ES as we currently use it 2021-06-25 12:41:05 -03:00
Alex Grin 115034fccb
Merge pull request #3232 from lbryio/timeout 2021-06-25 11:05:25 -04:00
Victor Shyba 309e957a85 add concurrent_hub_requests conf 2021-06-24 21:21:19 -03:00
Victor Shyba d7007e402e move request semaphore to session and apply to all requests 2021-06-24 21:02:41 -03:00
Victor Shyba 91323a21cf add hub_timeout and propagate it to network code 2021-06-24 21:02:41 -03:00
Lex Berezhny fea893d76c v0.100.0 2021-06-22 13:33:03 -04:00
Lex Berezhny 761bc6ba4c revert release and fix test 2021-06-22 13:32:41 -04:00
Lex Berezhny 75172feb4e v0.100.0 2021-06-22 12:53:23 -04:00
Lex Berezhny 3285fb1608 revert release 2021-06-22 12:52:48 -04:00
Lex Berezhny 03a4c6910d v0.100.0 2021-06-22 12:51:36 -04:00
Lex Berezhny 485b958599 revert release 2021-06-22 12:50:11 -04:00
Lex Berezhny da47ba2f67 v0.100.0 2021-06-22 11:11:02 -04:00
Lex Berezhny c39195488a bug fix 2021-06-22 11:07:58 -04:00
Lex Berezhny 227fb0ae9b network integration test fix 2021-06-22 11:07:58 -04:00
Lex Berezhny b12ff5b503 test fixes 2021-06-22 11:07:58 -04:00
Lex Berezhny 0946c72b88 lint 2021-06-22 11:07:58 -04:00
Lex Berezhny 7d49b046d4 added support to config for determining if value is set and implemented hub selection logic 2021-06-22 11:07:58 -04:00
Lex Berezhny 5f0426c840 country jurisdiction added to hub UDP protocol 2021-06-22 11:07:58 -04:00
Lex Berezhny 73e239cc5f client side hub discovery pub/sub and hub metadata stored, removed old peers implementation 2021-06-22 11:07:58 -04:00
Lex Berezhny ad670f721a working client peer hub 2021-06-22 11:07:58 -04:00
Lex Berezhny 028a4a70cf wallet server federation, client portion 2021-06-22 11:07:58 -04:00
Lex Berezhny 77d7960347 increase lbc exchange rate threshold 2021-06-18 11:26:30 -04:00
Lex Berezhny 39821146bd increase lbc threshold in exchange rate integration tests even more 2021-06-17 10:23:33 -04:00
Lex Berezhny 7d505a41ac drop sqlite indexes from test 2021-06-15 18:22:42 -04:00
Lex Berezhny e457b2f0d6 fix trending to use built-in sqlite instead of apsw 2021-06-15 18:22:42 -04:00
Lex Berezhny c9cf7fd4d4 drop apsw in wallet.server.db.elasticsearch.sync 2021-06-15 18:22:42 -04:00
Lex Berezhny b0371dd33d update test reader to use plain sqlite 2021-06-15 18:22:42 -04:00
Lex Berezhny 25e16c3565 dropping apsw 2021-06-15 18:22:42 -04:00
Lex Berezhny 7b39527863 update exchange rate threshold in integration tests due to significant drop in LBC price 2021-06-15 15:58:59 -04:00
Alex Grin d861b08866
Merge pull request #3323 from lbryio/dht_leak 2021-06-07 16:15:59 -04:00
Victor Shyba fb438dc108 remove the unregister call 2021-06-04 12:47:16 -03:00
Victor Shyba 4e6b4f179b add items() to LRUCache 2021-06-04 12:20:44 -03:00
Victor Shyba 00d038c8f3 add default parameter to pop on LRUCache 2021-06-04 12:15:47 -03:00
Victor Shyba a9f6a68952 use LRU caches for DHT metrics 2021-06-04 11:54:37 -03:00
Alex Grin b9142bbc5a
Merge pull request #3319 from lbryio/support_value_type
drop `value_type` for supports
2021-06-04 08:39:40 -04:00
Victor Shyba 6c812f663e drop value_type for support 2021-06-02 18:01:54 -03:00
Alex Grin a93ec9783a
Update README.md 2021-06-02 14:10:19 -04:00
Lex Berezhny 2d184d77b6 v0.99.0 2021-06-02 12:07:37 -04:00
Victor Shyba bce299ccc7 fix docopt typo 2021-06-02 12:05:36 -04:00
Victor Shyba 235cebd14a fix test value 2021-06-02 12:05:36 -04:00
Victor Shyba a638aa9d53 add and test support for support_create anonymous --comment 2021-06-02 12:05:36 -04:00
Victor Shyba 67cce0ef7e test+implement --comment for support_create 2021-06-02 12:05:36 -04:00
Victor Shyba 82f4267bf6 add comment property/setter to the signable support class 2021-06-02 12:05:36 -04:00
Victor Shyba 45a9ca29c4 update generated support protobuf with field 2021-06-02 12:05:36 -04:00
Victor Shyba 7f4e813277 document schema update process 2021-06-02 12:05:36 -04:00
Lex Berezhny 3805ff4a0c fix purchase test 2021-06-02 11:34:21 -04:00
Lex Berezhny 464cfd475e properly format scripthash address on output 2021-06-02 11:34:21 -04:00
Lex Berezhny fe469ae57f create appropriate script for scripthash address 2021-06-02 11:34:21 -04:00
Lex Berezhny 550ef9a1c4 allows script addresses (beginning with r) to be used 2021-06-02 11:34:21 -04:00
Alex Grin 935adfb51a
Merge pull request #3301 from lbryio/no_repeat_claim_id
add `--remove_duplicates` to the search api
2021-05-28 11:01:15 -04:00
Victor Shyba 3974df4a62 fix interaction between two modes 2021-05-27 20:14:12 -03:00
Victor Shyba 4870974161 update json api 2021-05-27 20:14:12 -03:00
Victor Shyba 8c4b0037f5 API: add --remove_duplicates to claim_search 2021-05-27 20:14:12 -03:00
Victor Shyba 2c6f763ef2 test picking oldest when originals don't match 2021-05-27 20:14:12 -03:00
Victor Shyba ca28de02d8 test and implementation for remove_duplicates on post-search filtering 2021-05-27 20:14:12 -03:00
Victor Shyba bfc15ea029 handle limit being 0 and skip reordering if 0/none 2021-05-27 20:14:12 -03:00
Victor Shyba 6e8b8a5920 always call search_ahead 2021-05-27 20:14:12 -03:00
Jack Robison 099f3b6a62
Merge pull request #3308 from lbryio/reflect_more
Don't set stream as reflected until reflector says it doesn't need any blob
2021-05-27 19:13:23 -04:00
Victor Shyba 142d182bc1 if progress was made, retry without a delay 2021-05-27 18:24:58 -03:00
Victor Shyba 1437871d88 fix reflector client: only set completed when server says so 2021-05-27 18:24:58 -03:00
Victor Shyba 352bf69409 improve test 2021-05-27 18:24:58 -03:00
Victor Shyba 9bdf3d23e1 test bug 3296, failing 2021-05-27 18:24:58 -03:00
Victor Shyba be8ecfa707 sort keys so helper scripts can send blobs using send_request 2021-05-27 18:24:58 -03:00
Lex Berezhny 51da0d0259 v0.98.0 2021-05-26 09:23:19 -04:00
Alex Grin f55b78a994
Merge pull request #3306 from lbryio/fix-collectionUpdateWithReplace 2021-05-18 15:25:53 -04:00
Alex Grin e1a44c93f8
Merge branch 'master' into fix-collectionUpdateWithReplace 2021-05-18 15:25:40 -04:00
Alex Grin 07e7087a09
Merge pull request #3303 from keikari/patch-1
Minor fix suggestion for issue #3240
2021-05-18 15:23:56 -04:00
Alex Grin 2c79c7e2f6
Merge branch 'master' into patch-1 2021-05-18 15:23:08 -04:00
Victor Shyba 09f6637fe0 remove unused multiprocessing.Manager 2021-05-17 15:07:32 -03:00
Victor Shyba 3784db3308 test collections update with --replace 2021-05-15 03:27:33 -03:00
zeppi 2b950ff5dd fix bug in collection_update --replace 2021-05-15 03:27:33 -03:00
Alex Grin 09339c9cfb
Merge pull request #3305 from lbryio/fix_migrator_tool
fix hub Elasticsearch sync/migrations tool for when the db exists already
2021-05-14 11:06:07 -04:00
Victor Shyba ccadd88af5 fix cache call 2021-05-13 22:40:21 -03:00
Victor Shyba cc02a0efc2 fix es migration bug, expand test case 2021-05-13 19:00:53 -03:00
Victor Shyba 43a1385b79 test sync helper 2021-05-13 19:00:53 -03:00
Victor Shyba 5101464e3b add integration tests command on install.md 2021-05-13 19:00:36 -03:00
Victor Shyba 3d71478d38 update install.md with ES instructions 2021-05-13 19:00:36 -03:00
Victor Shyba 4989ed445e add ES to makefile 2021-05-13 19:00:36 -03:00
keikari d9413039ec
Fix suggestion for issue #3240
L135: If `getattr()` returns `None`, use `""` instead to avoid error in issue #3240
2021-05-12 18:30:38 +03:00
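The suggested one-line guard, written out as a tiny helper; the object and field names are placeholders for whatever line 135 of the patched file actually reads:
```
def text_or_empty(obj, field: str) -> str:
    # if getattr() returns None (or the attribute is missing), use "" instead
    # so downstream string handling cannot fail as in issue #3240
    return getattr(obj, field, None) or ""
```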
Jack Robison eba0c9be34
fix typo 2021-05-07 16:51:19 -04:00
Jack Robison 48c9e9f3cc
Merge pull request #3294 from lbryio/versioned-es-search-index
add versioning to ES search index and automate resync on version bumps
2021-05-07 15:38:21 -04:00
Jack Robison 81ebde88db
resync ES search index on version bumps
-bump ES search index to version 1
2021-05-07 14:36:53 -04:00
Jack Robison 79ced9d0f8
Merge pull request #3262 from lbryio/channel_repost_has_source
Fix bug for `has_source=True` hiding channel reposts
2021-05-07 14:36:04 -04:00
Victor Shyba a4058b84ce clean out unused sharding 2021-05-07 15:03:37 -03:00
Victor Shyba 7bf211a52b apply reposted_claim_type on es sync 2021-05-07 15:03:37 -03:00
Victor Shyba d5f722792f fix and test has_source for channel reposts 2021-05-07 15:03:37 -03:00
Victor Shyba 0f02906c9b fix has_source for reposted channels 2021-05-07 15:03:37 -03:00
Victor Shyba 9582e228b1 assert instead of sleep 2021-05-07 15:02:31 -03:00
Victor Shyba 45f20431f9 update tests from the removed feed 2021-05-07 15:02:31 -03:00
Victor Shyba 7554e6d7f9 remove dead code 2021-05-07 15:02:31 -03:00
Victor Shyba cb8f26f177 remove broken feed 2021-05-07 15:02:31 -03:00
Jack Robison b5dfce7861
Revert "finished switch from using hash # in URLs to colon :"
This reverts commit 888aa558
2021-05-07 11:31:28 -04:00
Jack Robison 2ca5a65544
Revert "FindShortestID updated"
This reverts commit 8f04a50c
2021-05-07 11:30:31 -04:00
Jack Robison 17deb136db
Revert "StreamCommands"
This reverts commit 2a8ccb06
2021-05-07 11:29:45 -04:00
Jack Robison 8c9710c76c
Merge pull request #3293 from lbryio/fix-block-processor-crash-invalid-fee
fix invalid claim fees breaking the block processor
2021-05-07 10:03:40 -04:00
Jack Robison 32f7ecb261
fix invalid claim fees breaking the block processor 2021-05-06 11:18:58 -04:00
Victor Shyba fb77fde710 for debug, it is always the whole page 2021-05-04 22:22:07 -03:00
Victor Shyba 3c67bb90d7 don't fail when a single one goes on maintenance, and set completion event regardless of failures 2021-05-04 22:22:07 -03:00
Victor Shyba dabb168853 don't log full exceptions on simple connection errors 2021-05-04 22:22:07 -03:00
Victor Shyba 45e5b3b219 don't log full pages 2021-05-04 22:22:07 -03:00
Alex Grin a6b7469923
Update README.md 2021-05-03 18:04:29 -04:00
Jack Robison cb5dab3033
Merge pull request #3285 from lbryio/restrict-udp-source
Restrict udp sources, add `ALLOW_LAN_UDP` hub setting
2021-04-29 10:19:32 -04:00
Jack Robison 21d0038ff2
add timestamps to hub log 2021-04-28 16:47:00 -04:00
Jack Robison c094d8f2e8
add ALLOW_LAN_UDP hub setting 2021-04-28 16:47:00 -04:00
Jack Robison c465d6a6c2
ignore udp packets with low source ports 2021-04-28 16:47:00 -04:00
Lex Berezhny 73d35bc985 v0.97.0 2021-04-28 16:23:46 -04:00
Lex Berezhny 2a8ccb065b StreamCommands 2021-04-28 16:21:01 -04:00
Lex Berezhny 8f04a50ce1 FindShortestID updated 2021-04-28 16:21:01 -04:00
Lex Berezhny 888aa5586b finished switch from using hash # in URLs to colon : 2021-04-28 16:21:01 -04:00
Lex Berezhny 99f56f5d22 v0.96.0 2021-04-28 15:26:58 -04:00
Jack Robison ad6281090d
Merge pull request #3275 from lbryio/search_caching_issues
add caching to "search ahead" code and invalidate short_url cache on every block
2021-04-28 14:14:29 -04:00
Victor Shyba f0d334d3e2 refactor from review 2021-04-28 13:28:38 -03:00
Victor Shyba 5f829b048f use separator to avoid cache key conflicts 2021-04-27 22:57:04 -03:00
Victor Shyba 1a961e66ff invalidate short_id cache on new block 2021-04-27 22:57:04 -03:00
Victor Shyba fdb0e22656 cache search_ahead 2021-04-27 22:57:04 -03:00
Jack Robison 132ee1915f
Merge pull request #3283 from lbryio/fix_multiprocessing_db
fix multiprocessing support on client db
2021-04-27 16:46:28 -04:00
Victor Shyba 44bf4f3c8f fix if statement that was always evaluating a string 2021-04-27 17:10:04 -03:00
Alex Grintsvayg 6237767d5a
Merge branch 'make-test'
* make-test:
  run tests using make
2021-04-23 15:26:26 -04:00
Alex Grintsvayg dec9d96417
run tests using make 2021-04-23 15:25:40 -04:00
Lex Berezhny b167c87267 v0.95.0 2021-04-23 14:55:38 -04:00
Lex Berezhny 2280fe8e8e default has_source to 1 2021-04-23 14:54:51 -04:00
Lex Berezhny 575d6dcd2d migration specifically for upgrading from client db v1.5 to v1.6 2021-04-23 14:54:51 -04:00
Lex Berezhny f729490c6b pending claims ordered towards top in claim_list 2021-04-23 11:00:58 -04:00
Lex Berezhny b32124cdd6 regenerate docs 2021-04-23 10:24:48 -04:00
Lex Berezhny 3d4321ee38 added --has_source/--has_no_source filters to claim_list 2021-04-23 10:24:48 -04:00
Alex Grintsvayg 85034b382e
Revert "run tests using make"
This reverts commit 77a51d1ad4.
2021-04-21 11:41:16 -04:00
Alex Grintsvayg 77a51d1ad4
run tests using make 2021-04-20 15:41:09 -04:00
Alex Grin 33e0cdc2d7
Update docker-compose-wallet-server.yml 2021-04-16 11:58:02 -04:00
Alex Grin 6519faa2fe
Update docker-compose-wallet-server.yml 2021-04-16 11:53:16 -04:00
Lex Berezhny 5e3a234cbe v0.94.1 2021-04-16 11:18:24 -04:00
Lex Berezhny e54c31d2d5 fix bug in how reserved balance is calculated 2021-04-16 11:17:51 -04:00
Alex Grin 66c0537251
Create ossar-analysis.yml 2021-04-15 15:24:26 -04:00
Alex Grin ac58516593
Create codeql-analysis.yml 2021-04-15 15:22:57 -04:00
Alex Grin c3da6322b5
Create SECURITY.md 2021-04-15 15:21:17 -04:00
Lex Berezhny 3d241500cf v0.94.0 2021-04-14 19:55:35 -04:00
Lex Berezhny ded8224f66 update docs 2021-04-14 19:52:50 -04:00
Lex Berezhny f8814881a1 ability to set sd_hash, file_name and file_hash when updating a stream claim 2021-04-14 19:52:50 -04:00
Victor Shyba cc2852cd48 new implementation for limit_claims_per_channel 2021-04-14 18:32:16 -04:00
Lex Berezhny 467637a9eb fix test 2021-04-14 11:24:58 -04:00
Lex Berezhny 3cfc292d84 lint 2021-04-14 11:24:58 -04:00
Lex Berezhny 6acf94a810 moved balance calculation to SQL 2021-04-14 11:24:58 -04:00
Jack Robison 31367fb4c4 show hostnames of spvs 2021-04-13 11:51:27 -04:00
Jack Robison 12d6074e3b fix typing 2021-04-13 11:51:27 -04:00
Lex Berezhny ff30386051 lint 2021-04-06 21:22:27 -04:00
shubhendra 601f99ac16 Remove unnecessary generator
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
shubhendra 87fe5c6101 Refactor the comparison involving not
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
shubhendra 68399ca31c Iterate dictionary directly
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
shubhendra 2a6d7fd80f Remove methods with unnecessary super delegation.
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
shubhendra 4725f510d8 Remove unnecessary use of comprehension
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
shubhendra be0ba22222 Remove unnecessary comprehension
Signed-off-by: shubhendra <withshubh@gmail.com>
2021-04-06 21:22:27 -04:00
Lex Berezhny c8781392be added unit test for Access-Control HTTP headers 2021-04-06 17:12:05 -04:00
John Leith b97164fcfb adding access control headers 2021-04-06 17:12:05 -04:00
Lex Berezhny 0dfb92281b v0.93.0 2021-03-30 20:59:47 -04:00
Victor Shyba 4fe80c40da also apply to test:json-api 2021-03-30 17:00:15 -04:00
Victor Shyba f0fac5115a update tox to pass ELASTIC_HOST 2021-03-30 17:00:15 -04:00
Victor Shyba 46dd389d0d add elasticsearch service to gitlab 2021-03-30 17:00:15 -04:00
Jack Robison 1e28e21ab5
Merge pull request #3248 from lbryio/add-es-host-setting
add ELASTIC_HOST and ELASTIC_PORT settings to hub
2021-03-30 13:09:18 -04:00
Jack Robison 7832c62c5d
add ELASTIC_HOST and ELASTIC_PORT settings to hub 2021-03-30 12:48:13 -04:00
Lex Berezhny d025ee9dbe revert release 2021-03-30 11:29:17 -04:00
Lex Berezhny a9a9cb4319 v0.93.0 2021-03-30 10:15:31 -04:00
Victor Shyba aa727cb9b1 show channels regardless of no_source 2021-03-30 09:47:08 -04:00
Victor Shyba b8c9a99f20 fix no_source for reposts 2021-03-30 09:47:08 -04:00
Lex Berezhny aff995b0d0 temporary fix for mempool sync failing during reorg 2021-03-29 16:11:03 -04:00
Jack Robison 2cc7e5dfdc
Merge pull request #3153 from lbryio/elasticsearch
hub: use Elasticsearch for `claim_search` and `resolve` calls
2021-03-24 16:44:14 -04:00
Victor Shyba 5235a150b1 add prog name to sync arg parser 2021-03-24 17:07:17 -03:00
Victor Shyba c6372ea9de hub->lbry-hub 2021-03-24 17:03:57 -03:00
Victor Shyba 7df4cc44c4 fixes from review 2021-03-24 16:30:33 -03:00
Victor Shyba d47cf40544 add reader.py for test_sqldb tests 2021-03-19 19:58:13 -03:00
Victor Shyba 7f5d88e95c remove dead/broken/unused API 2021-03-19 19:58:13 -03:00
Victor Shyba d09663c066 remove flush call 2021-03-19 19:58:13 -03:00
Victor Shyba ef97c9b69f torba-server -> hub 2021-03-19 19:58:13 -03:00
Victor Shyba d855e6c8b1 move elasticsearch things into its own module 2021-03-19 19:58:13 -03:00
Victor Shyba cd66f7eb43 if not no_totals, use default page size 2021-03-19 19:58:13 -03:00
Victor Shyba 6a35a7ba4c expand content filtering tests for no_totals 2021-03-19 19:58:13 -03:00
Victor Shyba a3e146dc68 sort on index time 2021-03-19 19:58:13 -03:00
Victor Shyba b81305a4a9 index and allow has_source 2021-03-19 19:58:13 -03:00
Victor Shyba 73884b34bc apply no_totals 2021-03-19 19:58:13 -03:00
Victor Shyba 6166a34db2 check cache item before locking 2021-03-19 19:58:13 -03:00
Victor Shyba 6fa7da4b1c less slices 2021-03-19 19:58:13 -03:00
Victor Shyba c3e426c491 fix search by channel for invalid channel 2021-03-19 19:58:13 -03:00
Victor Shyba 21e023f0db fix search by channel 2021-03-19 19:58:13 -03:00
Victor Shyba 063be001b3 cache inner parsing 2021-03-19 19:58:13 -03:00
Victor Shyba 5dff02e8bc on resolve, get all claims at once 2021-03-19 19:58:13 -03:00
Victor Shyba 60a59407d8 cache the encoded output instead 2021-03-19 19:58:13 -03:00
Victor Shyba 20a5aecfca fix lib exception to asyncio TimeoutError 2021-03-19 19:58:13 -03:00
Victor Shyba c2e7b5a67d restore some of the interrupt metrics 2021-03-19 19:58:13 -03:00
Victor Shyba 8f32303d07 apply search timeout 2021-03-19 19:58:13 -03:00
Victor Shyba 891b1e7782 track results up to 200 2021-03-19 19:58:13 -03:00
Victor Shyba f26394fd3b report deletions on docs that doesnt exist, but dont raise 2021-03-19 19:58:13 -03:00
Victor Shyba 4d83d42b4c fix equality instead of mod 2021-03-19 19:58:13 -03:00
Victor Shyba 57f1108df2 fix query being json serializable 2021-03-19 19:58:13 -03:00
Victor Shyba 2641a9abe5 make better resolve cache 2021-03-19 19:58:13 -03:00
Victor Shyba 6b193ab350 make indexing cooperative 2021-03-19 19:58:13 -03:00
Victor Shyba b1bb37511c use right key on cache 2021-03-19 19:58:13 -03:00
Victor Shyba 319187d6d6 log mempool task exceptions 2021-03-19 19:58:13 -03:00
Victor Shyba 02eb789f84 caching for resolve 2021-03-19 19:58:13 -03:00
Victor Shyba 5a9338a27f use a dict on set_reference 2021-03-19 19:58:13 -03:00
Victor Shyba eb6924277f round time to 10 minutes and fetch referenced by id 2021-03-19 19:58:13 -03:00
Victor Shyba 325419404d update dockerfile 2021-03-19 19:58:13 -03:00
Victor Shyba bd8f371fd5 bump referenced rows query limit up 2021-03-19 19:58:13 -03:00
Victor Shyba 1783ff2845 dont delete claims on reorg 2021-03-19 19:58:13 -03:00
Victor Shyba d388527ffa log indexing errors 2021-03-19 19:58:13 -03:00
Victor Shyba 19494088bd generate from queue 2021-03-19 19:58:13 -03:00
Victor Shyba 920dad524a simplify sync and use asyncio Queue instead 2021-03-19 19:58:13 -03:00
Victor Shyba ec89bcac8e improve sync script for no-downtime maintenance 2021-03-19 19:58:13 -03:00
Victor Shyba a916c1f4ad check if db file exists before sync 2021-03-19 19:58:13 -03:00
Victor Shyba a9a0ac92d7 ignore unset flag 2021-03-19 19:58:13 -03:00
Victor Shyba da8a8bd1ef filter+fts and tests for edge cases 2021-03-19 19:58:13 -03:00
Victor Shyba d9c746891d pin python3.7 2021-03-19 19:58:13 -03:00
Victor Shyba 67817005b5 check ES synced without a process and wait for ES 2021-03-19 19:58:13 -03:00
Jack Robison 24d11de5a7 torba-elastic-sync 2021-03-19 19:58:13 -03:00
Victor Shyba 9251c87323 refresh after sync 2021-03-19 19:58:13 -03:00
Victor Shyba e12fab90d1 docker compose update 2021-03-19 19:58:13 -03:00
Victor Shyba 0a194b5b01 claim_ids query 2021-03-19 19:58:13 -03:00
Victor Shyba 8d028adc53 be a writer by default 2021-03-19 19:58:13 -03:00
Victor Shyba dfca15395e claim id is also a keyword 2021-03-19 19:58:13 -03:00
Victor Shyba e21f2362fe apply reorg deletion as well 2021-03-19 19:58:13 -03:00
Victor Shyba 1ce328e8a9 cache signature inspection 2021-03-19 19:58:13 -03:00
Victor Shyba 038a5f999f cache encoded headers 2021-03-19 19:58:13 -03:00
Victor Shyba 5d3704c7ea reader mode 2021-03-19 19:58:13 -03:00
Victor Shyba 87037c06c9 remove reader code 2021-03-19 19:58:13 -03:00
Victor Shyba dd412c0f50 delete sqlite fts 2021-03-19 19:58:13 -03:00
Victor Shyba bf44befff6 backport fixes from server 2021-03-19 19:58:13 -03:00
Victor Shyba e61874bb6f only repeat search if it has blocked items 2021-03-19 19:58:13 -03:00
Victor Shyba 1e5331768f fix some of the tests 2021-03-19 19:58:13 -03:00
Victor Shyba ec9a3a4f7c do not page filtered 2021-03-19 19:58:13 -03:00
Victor Shyba e439a3a8dc advanced resolve 2021-03-19 19:58:13 -03:00
Victor Shyba 19f70d7a11 create changelog trigger 2021-03-19 19:58:13 -03:00
Victor Shyba afe7ed5b05 adjust size 2021-03-19 19:58:13 -03:00
Victor Shyba d4bf004d74 use a thread pool to sync changes 2021-03-19 19:58:13 -03:00
Victor Shyba e4d06a088b include the channel being filtered/blocked 2021-03-19 19:58:13 -03:00
Victor Shyba 0929088b12 missing refresh step 2021-03-19 19:58:13 -03:00
Victor Shyba 7b4838fc9b dont update more than 400 items a time 2021-03-19 19:58:13 -03:00
Victor Shyba 0cf9533248 narrow update by query 2021-03-19 19:58:13 -03:00
Victor Shyba 84ff0b8a9f general timeout 2021-03-19 19:58:13 -03:00
Victor Shyba d467dcfeaf increase sync queue 2021-03-19 19:58:13 -03:00
Victor Shyba 8e68ba4751 fix join, refresh before update 2021-03-19 19:58:13 -03:00
Victor Shyba 0f2a85ba9f simplify sync 2021-03-19 19:58:13 -03:00
Victor Shyba 7674a0a91e backport fixes from testing server 2021-03-19 19:58:13 -03:00
Victor Shyba 5bc1a66572 32 slices and add censor type to fields 2021-03-19 19:58:13 -03:00
Victor Shyba 9b56067213 raise request timeout for content filtering 2021-03-19 19:58:13 -03:00
Victor Shyba 9a9df2fc3c apply filtering only to whats unfiltered 2021-03-19 19:58:13 -03:00
Victor Shyba 9989d8d1d4 refresh after delete 2021-03-19 19:58:13 -03:00
Victor Shyba f9471f297e apply filter and block from ES script lang 2021-03-19 19:58:13 -03:00
Victor Shyba 146b693e4a exclude title and description 2021-03-19 19:58:13 -03:00
Victor Shyba 7295b7e329 make sync parallel 2021-03-19 19:58:13 -03:00
Victor Shyba e2441ea3e7 use prefix from ES docs 2021-03-19 19:58:13 -03:00
Victor Shyba 119e51912e fix partial id 2021-03-19 19:58:13 -03:00
Victor Shyba dd950f5b0d tag can have empty space 2021-03-19 19:58:13 -03:00
Victor Shyba 78a9bad1e1 no indexer_task 2021-03-19 19:58:13 -03:00
Victor Shyba 0c6eaf5484 fix resolve partial id 2021-03-19 19:58:13 -03:00
Victor Shyba 1010068ddb disable refresh interval. start with 3 shards 2021-03-19 19:58:13 -03:00
Victor Shyba 82eec3d8d7 use multiple clients on sync script indexing 2021-03-19 19:58:13 -03:00
Victor Shyba ee7b37d3f3 also normalize the name supplied by user 2021-03-19 19:58:13 -03:00
Victor Shyba 143d82d242 normalized, not normalized_name 2021-03-19 19:58:13 -03:00
Victor Shyba 8b91b38855 update winners in one go 2021-03-19 19:58:13 -03:00
Victor Shyba 1098f0d2a3 use normalized name instead 2021-03-19 19:58:13 -03:00
Victor Shyba ab53cec022 fix is_controlling sync 2021-03-19 19:58:13 -03:00
Victor Shyba 6f5f8e5648 add elasticsearch dep 2021-03-19 19:58:13 -03:00
Victor Shyba edfd707c22 run ES on github actions 2021-03-19 19:58:13 -03:00
Victor Shyba 1870f30af8 add sync script 2021-03-19 19:58:13 -03:00
Victor Shyba 90106f5f08 all test_claim_commands tests green 2021-03-19 19:58:13 -03:00
Victor Shyba 9924b7b438 reposts and tag inheritance 2021-03-19 19:58:13 -03:00
Victor Shyba aa37faab0a use porter analyzer with weights on full text search 2021-03-19 19:58:13 -03:00
Victor Shyba dc10f8ce72 ignore errors when deleting 2021-03-19 19:58:13 -03:00
Victor Shyba 996686c1da claim search and resolve translated to ES queries 2021-03-19 19:58:13 -03:00
Victor Shyba 488785d013 add indexer task 2021-03-19 19:58:13 -03:00
Victor Shyba 3abdc01230 index ES during sync 2021-03-19 19:58:13 -03:00
Victor Shyba 8da04a584f start waiting before generate 2021-03-19 18:01:29 -03:00
Victor Shyba 27cc61d45e limit test time to 2 minutes, then consider it a failure and log what was running 2021-03-19 18:01:29 -03:00
Lex Berezhny 7371c30064 v0.92.0 2021-03-15 13:07:30 -04:00
Lex Berezhny 140d163895 removed redundant comment 2021-03-14 10:11:42 -04:00
Victor Shyba dc33bdc1dc update api json 2021-03-14 10:11:42 -04:00
Victor Shyba 74df4fab83 change column to has_source and document both flags 2021-03-14 10:11:42 -04:00
Victor Shyba 1e5cd3d7a1 typo, fix tests 2021-03-14 10:11:42 -04:00
Victor Shyba a54e9b64aa add no_source claim_search filter 2021-03-14 10:11:42 -04:00
Victor Shyba 74660704e3 fix update 2021-03-14 10:11:42 -04:00
Victor Shyba 7439893a2a fix get for sourceless claims 2021-03-14 10:11:42 -04:00
Victor Shyba e27e49e9dc call update only once 2021-03-14 10:11:42 -04:00
Victor Shyba 34ed729c59 there is no 'sd_hash' parameter for this API 2021-03-14 10:11:42 -04:00
Victor Shyba adaeeca3fd let file_path be optional 2021-03-14 10:11:42 -04:00
Jack Robison dac75563d3 add --no_file_path param to publish, stream_create, and stream_update 2021-03-14 10:11:42 -04:00
Alex Grintsvayg cbc76adcaa only return unspent txos if is_spent flag is not used. fixes #2923 2021-03-13 06:44:20 -05:00
Lex Berezhny 69a9cb383d oops 2021-03-12 13:29:55 -05:00
Lex Berezhny 4343073c00 clients can connect to wallet server even when they are not reachable by UDP 2021-03-12 13:29:55 -05:00
Jack Robison fe60d4be88
Merge pull request #3221 from lbryio/subscribe_hash_on_call
Improve performance of address subscriptions and transaction proofs
2021-03-10 15:58:50 -05:00
Victor Shyba ae337807f5 get merkles outside thread cooperatively 2021-03-10 13:05:17 -03:00
Victor Shyba 9ae30ac08e during subscribe, hash address only when its time 2021-03-10 12:51:58 -03:00
Lex Berezhny 62fa85c0a4 fix test 2021-03-09 13:27:36 -05:00
Lex Berezhny 7bb873dad9 removed connection_status field from the status command, use wallet.connected instead to determine if SDK is connected 2021-03-09 13:27:36 -05:00
Lex Berezhny 5f6c1c14cb v0.91.0 2021-03-04 00:04:25 -05:00
Lex Berezhny d43189ad33 regenerate docs 2021-03-04 00:03:16 -05:00
Lex Berezhny fcad76fc51 lint 2021-03-04 00:03:16 -05:00
Lex Berezhny 97e6e1684e simplifying 2021-03-04 00:03:16 -05:00
zeppi 67a0d3e926 update docs 2021-03-04 00:03:16 -05:00
zeppi 183fb9f9ff provide --resolve tag for collection claim, separate from resolving its contents
bugfix and docs generation

review changes
2021-03-04 00:03:16 -05:00
Lex Berezhny 9815ddef1f fixes stalling client reconnect issue 2021-03-03 23:31:59 -05:00
213 changed files with 10474 additions and 17132 deletions


@ -1,84 +1,206 @@
name: ci name: ci
on: pull_request on: ["push", "pull_request", "workflow_dispatch"]
jobs: jobs:
lint: lint:
name: lint name: lint
runs-on: ubuntu-latest runs-on: ubuntu-20.04
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v3
- uses: actions/setup-python@v1 - uses: actions/setup-python@v4
with: with:
python-version: '3.7' python-version: '3.9'
- run: make install tools - name: extract pip cache
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- run: pip install --user --upgrade pip wheel
- run: pip install -e .[lint]
- run: make lint - run: make lint
tests-unit: tests-unit:
name: "tests / unit" name: "tests / unit"
runs-on: ubuntu-latest strategy:
matrix:
os:
- ubuntu-20.04
- macos-latest
- windows-latest
runs-on: ${{ matrix.os }}
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v3
- uses: actions/setup-python@v1 - uses: actions/setup-python@v4
with: with:
python-version: '3.7' python-version: '3.9'
- run: make install tools - name: set pip cache dir
- working-directory: lbry shell: bash
run: echo "PIP_CACHE_DIR=$(pip cache dir)" >> $GITHUB_ENV
- name: extract pip cache
uses: actions/cache@v3
with:
path: ${{ env.PIP_CACHE_DIR }}
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- id: os-name
uses: ASzc/change-string-case-action@v5
with:
string: ${{ runner.os }}
- run: python -m pip install --user --upgrade pip wheel
- if: startsWith(runner.os, 'linux')
run: pip install -e .[test]
- if: startsWith(runner.os, 'linux')
env: env:
HOME: /tmp HOME: /tmp
run: coverage run -p --source=lbry -m unittest discover -vv tests.unit run: make test-unit-coverage
- if: startsWith(runner.os, 'linux') != true
run: pip install -e .[test]
- if: startsWith(runner.os, 'linux') != true
env:
HOME: /tmp
run: coverage run --source=lbry -m unittest tests/unit/test_conf.py
- name: submit coverage report
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_FLAG_NAME: tests-unit-${{ steps.os-name.outputs.lowercase }}
COVERALLS_PARALLEL: true
run: |
pip install coveralls
coveralls --service=github
tests-integration: tests-integration:
name: "tests / integration" name: "tests / integration"
runs-on: ubuntu-latest runs-on: ubuntu-20.04
strategy: strategy:
matrix: matrix:
test: test:
- datanetwork - datanetwork
- blockchain - blockchain
- claims
- takeovers
- transactions
- other - other
steps: steps:
- uses: actions/checkout@v2 - name: Configure sysctl limits
- uses: actions/setup-python@v1 run: |
sudo swapoff -a
sudo sysctl -w vm.swappiness=1
sudo sysctl -w fs.file-max=262144
sudo sysctl -w vm.max_map_count=262144
- name: Runs Elasticsearch
uses: elastic/elastic-github-actions/elasticsearch@master
with: with:
python-version: '3.7' stack-version: 7.12.1
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.9'
- if: matrix.test == 'other' - if: matrix.test == 'other'
run: | run: |
sudo apt-get update sudo apt-get update
sudo apt-get install -y --no-install-recommends ffmpeg sudo apt-get install -y --no-install-recommends ffmpeg
- run: pip install tox-travis - name: extract pip cache
uses: actions/cache@v3
with:
path: ./.tox
key: tox-integration-${{ matrix.test }}-${{ hashFiles('setup.py') }}
restore-keys: txo-integration-${{ matrix.test }}-
- run: pip install tox coverage coveralls
- if: matrix.test == 'claims'
run: rm -rf .tox
- run: tox -e ${{ matrix.test }} - run: tox -e ${{ matrix.test }}
- name: submit coverage report
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_FLAG_NAME: tests-integration-${{ matrix.test }}
COVERALLS_PARALLEL: true
run: |
coverage combine tests
coveralls --service=github
coverage:
needs: ["tests-unit", "tests-integration"]
runs-on: ubuntu-20.04
steps:
- name: finalize coverage report submission
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
pip install coveralls
coveralls --service=github --finish
build: build:
needs: ["lint", "tests-unit", "tests-integration"] needs: ["lint", "tests-unit", "tests-integration"]
name: "build" name: "build / binary"
strategy: strategy:
matrix: matrix:
os: os:
- ubuntu-latest - ubuntu-20.04
- macos-latest - macos-latest
- windows-latest - windows-latest
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
steps: steps:
- uses: actions/checkout@v2 - uses: actions/checkout@v3
- uses: actions/setup-python@v1 - uses: actions/setup-python@v4
with: with:
python-version: '3.7' python-version: '3.9'
- name: Setup - id: os-name
run: | uses: ASzc/change-string-case-action@v5
pip install pyinstaller with:
pip install -e . string: ${{ runner.os }}
# https://stackoverflow.com/a/61693590 - name: set pip cache dir
# https://github.com/pypa/setuptools/issues/1963 shell: bash
pip install --upgrade 'setuptools<45.0.0' run: echo "PIP_CACHE_DIR=$(pip cache dir)" >> $GITHUB_ENV
- if: startsWith(matrix.os, 'windows') == false - name: extract pip cache
uses: actions/cache@v3
with:
path: ${{ env.PIP_CACHE_DIR }}
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
restore-keys: ${{ runner.os }}-pip-
- run: pip install pyinstaller==4.6
- run: pip install -e .
- if: startsWith(github.ref, 'refs/tags/v')
run: python docker/set_build.py
- if: startsWith(runner.os, 'linux') || startsWith(runner.os, 'mac')
name: Build & Run (Unix) name: Build & Run (Unix)
run: | run: |
pyinstaller --onefile --name lbrynet lbry/extras/cli.py pyinstaller --onefile --name lbrynet lbry/extras/cli.py
chmod +x dist/lbrynet
dist/lbrynet --version dist/lbrynet --version
- if: startsWith(matrix.os, 'windows') - if: startsWith(runner.os, 'windows')
name: Build & Run (Windows) name: Build & Run (Windows)
run: | run: |
pip install pywin32 pip install pywin32==301
pyinstaller --additional-hooks-dir=scripts/. --icon=icons/lbry256.ico --onefile --name lbrynet lbry/extras/cli.py pyinstaller --additional-hooks-dir=scripts/. --icon=icons/lbry256.ico --onefile --name lbrynet lbry/extras/cli.py
dist/lbrynet.exe --version dist/lbrynet.exe --version
- uses: actions/upload-artifact@v3
with:
name: lbrynet-${{ steps.os-name.outputs.lowercase }}
path: dist/
release:
name: "release"
if: startsWith(github.ref, 'refs/tags/v')
needs: ["build"]
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v1
- uses: actions/download-artifact@v2
- name: upload binaries
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_API_TOKEN }}
run: |
pip install githubrelease
chmod +x lbrynet-macos/lbrynet
chmod +x lbrynet-linux/lbrynet
zip --junk-paths lbrynet-mac.zip lbrynet-macos/lbrynet
zip --junk-paths lbrynet-linux.zip lbrynet-linux/lbrynet
zip --junk-paths lbrynet-windows.zip lbrynet-windows/lbrynet.exe
ls -lh
githubrelease release lbryio/lbry-sdk info ${GITHUB_REF#refs/tags/}
githubrelease asset lbryio/lbry-sdk upload ${GITHUB_REF#refs/tags/} \
lbrynet-mac.zip lbrynet-linux.zip lbrynet-windows.zip
githubrelease release lbryio/lbry-sdk publish ${GITHUB_REF#refs/tags/}

22
.github/workflows/release.yml vendored Normal file

@ -0,0 +1,22 @@
name: slack
on:
release:
types: [published]
jobs:
release:
name: "slack notification"
runs-on: ubuntu-20.04
steps:
- uses: LoveToKnow/slackify-markdown-action@v1.0.0
id: markdown
with:
text: "There is a new SDK release: ${{github.event.release.html_url}}\n${{ github.event.release.body }}"
- uses: slackapi/slack-github-action@v1.14.0
env:
CHANGELOG: '<!channel> ${{ steps.markdown.outputs.text }}'
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_RELEASE_BOT_WEBHOOK }}
with:
payload: '{"type": "mrkdwn", "text": ${{ toJSON(env.CHANGELOG) }} }'

2
.gitignore vendored

@@ -13,7 +13,7 @@ __pycache__
 _trial_temp/
 trending*.log
-/tests/integration/blockchain/files
+/tests/integration/claims/files
 /tests/.coverage.*
 /lbry/wallet/bin


@ -1,213 +0,0 @@
default:
image: python:3.7
#cache:
# directories:
# - $HOME/venv
# - $HOME/.cache/pip
# - $HOME/Library/Caches/pip
# - $HOME/Library/Caches/Homebrew
# - $TRAVIS_BUILD_DIR/.tox
stages:
- test
- build
- assets
- release
.tagged:
rules:
- if: '$CI_COMMIT_TAG =~ /^v[0-9\.]+$/'
when: on_success
test:lint:
stage: test
script:
- make install tools
- make lint
test:unit:
stage: test
script:
- make install tools
- HOME=/tmp coverage run -p --source=lbry -m unittest discover -vv tests.unit
test:datanetwork-integration:
stage: test
script:
- pip install tox-travis
- tox -e datanetwork --recreate
test:blockchain-integration:
stage: test
script:
- pip install tox-travis
- tox -e blockchain
test:other-integration:
stage: test
script:
- apt-get update
- apt-get install -y --no-install-recommends ffmpeg
- pip install tox-travis
- tox -e other
test:json-api:
stage: test
script:
- make install tools
- HOME=/tmp coverage run -p --source=lbry scripts/generate_json_api.py
.build:
stage: build
artifacts:
expire_in: 1 day
paths:
- lbrynet-${OS}.zip
script:
- pip install --upgrade 'setuptools<45.0.0'
- pip install pyinstaller
- pip install -e .
- python3.7 docker/set_build.py # must come after lbry is installed because it imports lbry
- pyinstaller --onefile --name lbrynet lbry/extras/cli.py
- chmod +x dist/lbrynet
- zip --junk-paths ${CI_PROJECT_DIR}/lbrynet-${OS}.zip dist/lbrynet # gitlab expects artifacts to be in $CI_PROJECT_DIR
- openssl dgst -sha256 ${CI_PROJECT_DIR}/lbrynet-${OS}.zip | egrep -o [0-9a-f]+$ # get sha256 of asset. works on mac and ubuntu
- dist/lbrynet --version
build:linux:
extends: .build
image: ubuntu:16.04
variables:
OS: linux
before_script:
- apt-get update
- apt-get install -y --no-install-recommends software-properties-common zip curl build-essential
- add-apt-repository -y ppa:deadsnakes/ppa
- apt-get update
- apt-get install -y --no-install-recommends python3.7-dev
- python3.7 <(curl -q https://bootstrap.pypa.io/get-pip.py) # make sure we get pip with python3.7
- pip install lbry-libtorrent
build:mac:
extends: .build
tags: [macos] # makes gitlab use the mac runner
variables:
OS: mac
GIT_DEPTH: 5
VENV: /tmp/gitlab-lbry-sdk-venv
before_script:
# - brew upgrade python || true
- python3 --version | grep -q '^Python 3\.7\.' # dont upgrade python on every run. just make sure we're on the right Python
# - pip3 install --user --upgrade pip virtualenv
- pip3 --version | grep -q '\(python 3\.7\)'
- virtualenv --python=python3.7 "${VENV}"
- source "${VENV}/bin/activate"
after_script:
- rm -rf "${VENV}"
build:windows:
extends: .build
tags: [windows] # makes gitlab use the windows runner
variables:
OS: windows
GIT_DEPTH: 5
before_script:
- ./docker/install_choco.ps1
- choco install -y --x64 python --version=3.7.9
- choco install -y 7zip checksum
- python --version # | findstr /B "Python 3\.7\." # dont upgrade python on every run. just make sure we're on the right Python
- pip --version # | findstr /E '\(python 3\.7\)'
- python -c "import sys;print(f'{str(64 if sys.maxsize > 2**32 else 32)} bit python\n{sys.platform}')"
- pip install virtualenv pywin32
- virtualenv venv
- venv/Scripts/activate.ps1
- pip install pip==19.3.1; $true # $true ignores errors. need this to get the correct coincurve wheel. see commit notes for details.
after_script:
- rmdir -Recurse venv
script:
- pip install --upgrade 'setuptools<45.0.0'
- pip install pyinstaller==3.5
- pip install -e .
- python docker/set_build.py # must come after lbry is installed because it imports lbry
- pyinstaller --additional-hooks-dir=scripts/. --icon=icons/lbry256.ico -F -n lbrynet lbry/extras/cli.py
- 7z a -tzip $env:CI_PROJECT_DIR/lbrynet-${OS}.zip ./dist/lbrynet.exe
- checksum --type=sha256 --file=$env:CI_PROJECT_DIR/lbrynet-${OS}.zip
- dist/lbrynet.exe --version
# s3 = upload asset to s3 (build.lbry.io)
.s3:
stage: assets
variables:
GIT_STRATEGY: none
script:
- "[ -f lbrynet-${OS}.zip ]" # check that asset exists before trying to upload
- pip install awscli
- S3_PATH="daemon/gitlab-build-${CI_PIPELINE_ID}_commit-${CI_COMMIT_SHA:0:7}$( if [ ! -z ${CI_COMMIT_TAG} ]; then echo _tag-${CI_COMMIT_TAG}; else echo _branch-${CI_COMMIT_REF_NAME}; fi )"
- AWS_ACCESS_KEY_ID=${ARTIFACTS_KEY} AWS_SECRET_ACCESS_KEY=${ARTIFACTS_SECRET} AWS_REGION=${ARTIFACTS_REGION}
aws s3 cp lbrynet-${OS}.zip s3://${ARTIFACTS_BUCKET}/${S3_PATH}/lbrynet-${OS}.zip
s3:linux:
extends: .s3
variables: {OS: linux}
needs: ["build:linux"]
s3:mac:
extends: .s3
variables: {OS: mac}
needs: ["build:mac"]
s3:windows:
extends: .s3
variables: {OS: windows}
needs: ["build:windows"]
# github = upload assets to github when there's a tagged release
.github:
extends: .tagged
stage: assets
variables:
GIT_STRATEGY: none
script:
- "[ -f lbrynet-${OS}.zip ]" # check that asset exists before trying to upload. githubrelease won't error if its missing
- pip install githubrelease
- githubrelease --no-progress --github-token ${GITHUB_CI_USER_ACCESS_TOKEN} asset lbryio/lbry-sdk upload ${CI_COMMIT_TAG} lbrynet-${OS}.zip
github:linux:
extends: .github
variables: {OS: linux}
needs: ["build:linux"]
github:mac:
extends: .github
variables: {OS: mac}
needs: ["build:mac"]
github:windows:
extends: .github
variables: {OS: windows}
needs: ["build:windows"]
publish:
extends: .tagged
stage: release
variables:
GIT_STRATEGY: none
script:
- pip install githubrelease
- githubrelease --no-progress --github-token ${GITHUB_CI_USER_ACCESS_TOKEN} release lbryio/lbry-sdk publish ${CI_COMMIT_TAG}
- >
curl -X POST -H 'Content-type: application/json' --data '{"text":"<!channel> There is a new SDK release: https://github.com/lbryio/lbry-sdk/releases/tag/'"${CI_COMMIT_TAG}"'\n'"$(curl -s "https://api.github.com/repos/lbryio/lbry-sdk/releases/tags/${CI_COMMIT_TAG}" | egrep '\w*\"body\":' | cut -d':' -f 2- | tail -c +3 | head -c -2)"'", "channel":"tech"}' "$(echo ${SLACK_WEBHOOK_URL_BASE64} | base64 -d)"


@ -9,20 +9,29 @@ Here's a video walkthrough of this setup, which is itself hosted by the LBRY net
## Prerequisites ## Prerequisites
Running `lbrynet` from source requires Python 3.7 or higher. Get the installer for your OS [here](https://www.python.org/downloads/release/python-370/). Running `lbrynet` from source requires Python 3.7. Get the installer for your OS [here](https://www.python.org/downloads/release/python-370/).
After installing python 3, you'll need to install some additional libraries depending on your operating system. After installing Python 3.7, you'll need to install some additional libraries depending on your operating system.
Because of [issue #2769](https://github.com/lbryio/lbry-sdk/issues/2769)
at the moment the `lbrynet` daemon will only work correctly with Python 3.7.
If Python 3.8+ is used, the daemon will start but the RPC server
may not accept messages, returning the following:
```
Could not connect to daemon. Are you sure it's running?
```
### macOS ### macOS
macOS users will need to install [xcode command line tools](https://developer.xamarin.com/guides/testcloud/calabash/configuring/osx/install-xcode-command-line-tools/) and [homebrew](http://brew.sh/). macOS users will need to install [xcode command line tools](https://developer.xamarin.com/guides/testcloud/calabash/configuring/osx/install-xcode-command-line-tools/) and [homebrew](http://brew.sh/).
These environment variables also need to be set: These environment variables also need to be set:
1. PYTHONUNBUFFERED=1 ```
2. EVENT_NOKQUEUE=1 PYTHONUNBUFFERED=1
EVENT_NOKQUEUE=1
```
Remaining dependencies can then be installed by running: Remaining dependencies can then be installed by running:
``` ```
brew install python protobuf brew install python protobuf
``` ```
@ -31,14 +40,17 @@ Assistance installing Python3: https://docs.python-guide.org/starting/install3/o
### Linux ### Linux
On Ubuntu (16.04 minimum, we recommend 18.04), install the following: On Ubuntu (we recommend 18.04 or 20.04), install the following:
``` ```
sudo add-apt-repository ppa:deadsnakes/ppa sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update sudo apt-get update
sudo apt-get install build-essential python3.7 python3.7-dev git python3.7-venv libssl-dev python-protobuf sudo apt-get install build-essential python3.7 python3.7-dev git python3.7-venv libssl-dev python-protobuf
``` ```
The [deadsnakes PPA](https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa) provides Python 3.7
for those Ubuntu distributions that no longer have it in their
official repositories.
On Raspbian, you will also need to install `python-pyparsing`. On Raspbian, you will also need to install `python-pyparsing`.
If you're running another Linux distro, install the equivalent of the above packages for your system. If you're running another Linux distro, install the equivalent of the above packages for your system.
@ -47,62 +59,119 @@ If you're running another Linux distro, install the equivalent of the above pack
### Linux/Mac ### Linux/Mac
To install on Linux/Mac: Clone the repository:
```bash
git clone https://github.com/lbryio/lbry-sdk.git
cd lbry-sdk
```
``` Create a Python virtual environment for lbry-sdk:
Clone the repository: ```bash
$ git clone https://github.com/lbryio/lbry-sdk.git python3.7 -m venv lbry-venv
$ cd lbry-sdk ```
Create a Python virtual environment for lbry-sdk: Activate virtual environment:
$ python3.7 -m venv lbry-venv ```bash
source lbry-venv/bin/activate
Activating lbry-sdk virtual environment: ```
$ source lbry-venv/bin/activate
Make sure you're on Python 3.7+ (as the default Python in virtual environment):
$ python --version
Install packages: Make sure you're on Python 3.7+ as default in the virtual environment:
$ make install ```bash
python --version
```
If you are on Linux and using PyCharm, generates initial configs: Install packages:
$ make idea ```bash
``` make install
```
To verify your installation, `which lbrynet` should return a path inside of the `lbry-venv` folder created by the `python3.7 -m venv lbry-venv` command. If you are on Linux and using PyCharm, generates initial configs:
```bash
make idea
```
To verify your installation, `which lbrynet` should return a path inside
of the `lbry-venv` folder.
```bash
(lbry-venv) $ which lbrynet
/opt/lbry-sdk/lbry-venv/bin/lbrynet
```
To exit the virtual environment simply use the command `deactivate`.
### Windows ### Windows
To install on Windows: Clone the repository:
```bash
git clone https://github.com/lbryio/lbry-sdk.git
cd lbry-sdk
```
``` Create a Python virtual environment for lbry-sdk:
Clone the repository: ```bash
> git clone https://github.com/lbryio/lbry-sdk.git python -m venv lbry-venv
> cd lbry-sdk ```
Create a Python virtual environment for lbry-sdk: Activate virtual environment:
> python -m venv lbry-venv ```bash
lbry-venv\Scripts\activate
```
Activating lbry-sdk virtual environment: Install packages:
> lbry-venv\Scripts\activate ```bash
pip install -e .
Install packages: ```
> pip install -e .
```
## Run the tests ## Run the tests
### Elasticsearch
To run the unit tests from the repo directory: For running integration tests, Elasticsearch is required to be available at localhost:9200/
``` The easiest way to start it is using docker with:
python -m unittest discover tests.unit ```bash
``` make elastic-docker
```
Alternative installation methods are available [at Elasticsearch website](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html).
To run the unit and integration tests from the repo directory:
```
python -m unittest discover tests.unit
python -m unittest discover tests.integration
```
## Usage ## Usage
To start the API server: To start the API server:
`lbrynet start` ```
lbrynet start
```
Whenever the code inside [lbry-sdk/lbry](./lbry)
is modified we should run `make install` to recompile the `lbrynet`
executable with the newest code.
## Development
When developing, remember to enter the environment,
and if you wish start the server interactively.
```bash
$ source lbry-venv/bin/activate
(lbry-venv) $ python lbry/extras/cli.py start
```
Parameters can be passed in the same way.
```bash
(lbry-venv) $ python lbry/extras/cli.py wallet balance
```
If a Python debugger (`pdb` or `ipdb`) is installed we can also start it
in this way, set up break points, and step through the code.
```bash
(lbry-venv) $ pip install ipdb
(lbry-venv) $ ipdb lbry/extras/cli.py
```
Happy hacking! Happy hacking!


@@ -1,6 +1,6 @@
 The MIT License (MIT)
-Copyright (c) 2015-2020 LBRY Inc
+Copyright (c) 2015-2022 LBRY Inc
 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the
 "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish,


@@ -1,24 +1,26 @@
-.PHONY: install tools lint test idea
+.PHONY: install tools lint test test-unit test-unit-coverage test-integration idea
 install:
-	pip install https://s3.amazonaws.com/files.lbry.io/python_libtorrent-1.2.4-py3-none-any.whl
-	CFLAGS="-DSQLITE_MAX_VARIABLE_NUMBER=2500000" pip install -U https://github.com/rogerbinns/apsw/releases/download/3.30.1-r1/apsw-3.30.1-r1.zip \
-		--global-option=fetch \
-		--global-option=--version --global-option=3.30.1 --global-option=--all \
-		--global-option=build --global-option=--enable --global-option=fts5
 	pip install -e .
-tools:
-	pip install mypy==0.701 pylint==2.4.4
-	pip install coverage astroid pylint
 lint:
 	pylint --rcfile=setup.cfg lbry
 	#mypy --ignore-missing-imports lbry
-test:
+test: test-unit test-integration
+test-unit:
+	python -m unittest discover tests.unit
+test-unit-coverage:
+	coverage run --source=lbry -m unittest discover -vv tests.unit
+test-integration:
 	tox
 idea:
 	mkdir -p .idea
 	cp -r scripts/idea/* .idea
+elastic-docker:
+	docker run -d -v lbryhub:/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 -e"ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.12.1
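The reworked Makefile splits the old catch-all `test` target into `test-unit`, `test-unit-coverage` and `test-integration`, and adds an `elastic-docker` helper that starts a single-node Elasticsearch 7.12.1 in Docker. A minimal local sketch of how these targets fit together, assuming Docker is available and the virtual environment from the install instructions is active:
```bash
# start Elasticsearch for the hub/integration tests (runs detached)
make elastic-docker

# quick feedback: unit tests only
make test-unit

# unit tests with coverage, the way the CI workflow invokes them
make test-unit-coverage

# everything: unit tests, then the tox-driven integration suites
make test
```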


@@ -1,10 +1,10 @@
-# <img src="https://raw.githubusercontent.com/lbryio/lbry-sdk/master/lbry.png" alt="LBRY" width="48" height="36" /> LBRY SDK [![Gitlab CI Badge](https://ci.lbry.tech/lbry/lbry-sdk/badges/master/pipeline.svg)](https://ci.lbry.tech/lbry/lbry-sdk)
+# <img src="https://raw.githubusercontent.com/lbryio/lbry-sdk/master/lbry.png" alt="LBRY" width="48" height="36" /> LBRY SDK [![build](https://github.com/lbryio/lbry-sdk/actions/workflows/main.yml/badge.svg)](https://github.com/lbryio/lbry-sdk/actions/workflows/main.yml) [![coverage](https://coveralls.io/repos/github/lbryio/lbry-sdk/badge.svg)](https://coveralls.io/github/lbryio/lbry-sdk)
 LBRY is a decentralized peer-to-peer protocol for publishing and accessing digital content. It utilizes the [LBRY blockchain](https://github.com/lbryio/lbrycrd) as a global namespace and database of digital content. Blockchain entries contain searchable content metadata, identities, rights and access rules. LBRY also provides a data network that consists of peers (seeders) uploading and downloading data from other peers, possibly in exchange for payments, as well as a distributed hash table used by peers to discover other peers.
 LBRY SDK for Python is currently the most fully featured implementation of the LBRY Network protocols and includes many useful components and tools for building decentralized applications. Primary features and components include:
-* Built on Python 3.7+ and `asyncio`.
+* Built on Python 3.7 and `asyncio`.
 * Kademlia DHT (Distributed Hash Table) implementation for finding peers to download from and announcing to peers what we have to host ([lbry.dht](https://github.com/lbryio/lbry-sdk/tree/master/lbry/dht)).
 * Blob exchange protocol for transferring encrypted blobs of content and negotiating payments ([lbry.blob_exchange](https://github.com/lbryio/lbry-sdk/tree/master/lbry/blob_exchange)).
 * Protobuf schema for encoding and decoding metadata stored on the blockchain ([lbry.schema](https://github.com/lbryio/lbry-sdk/tree/master/lbry/schema)).
@@ -41,7 +41,7 @@ This project is MIT licensed. For the full license, see [LICENSE](LICENSE).
 ## Security
-We take security seriously. Please contact security@lbry.com regarding any security issues. [Our GPG key is here](https://lbry.com/faq/gpg-key) if you need it.
+We take security seriously. Please contact security@lbry.com regarding any security issues. [Our PGP key is here](https://lbry.com/faq/pgp-key) if you need it.
 ## Contact

9
SECURITY.md Normal file

@ -0,0 +1,9 @@
# Security Policy
## Supported Versions
While we are not at v1.0 yet, only the latest release will be supported.
## Reporting a Vulnerability
See https://lbry.com/faq/security


@ -0,0 +1,43 @@
FROM debian:10-slim
ARG user=lbry
ARG projects_dir=/home/$user
ARG db_dir=/database
ARG DOCKER_TAG
ARG DOCKER_COMMIT=docker
ENV DOCKER_TAG=$DOCKER_TAG DOCKER_COMMIT=$DOCKER_COMMIT
RUN apt-get update && \
apt-get -y --no-install-recommends install \
wget \
automake libtool \
tar unzip \
build-essential \
pkg-config \
libleveldb-dev \
python3.7 \
python3-dev \
python3-pip \
python3-wheel \
python3-setuptools && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
rm -rf /var/lib/apt/lists/*
RUN groupadd -g 999 $user && useradd -m -u 999 -g $user $user
COPY . $projects_dir
RUN chown -R $user:$user $projects_dir
RUN mkdir -p $db_dir
RUN chown -R $user:$user $db_dir
USER $user
WORKDIR $projects_dir
RUN python3 -m pip install -U setuptools pip
RUN make install
RUN python3 docker/set_build.py
RUN rm ~/.cache -rf
VOLUME $db_dir
ENTRYPOINT ["python3", "scripts/dht_node.py"]
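This new Dockerfile packages `scripts/dht_node.py` so a standalone DHT node can run as a container. A hedged build-and-run sketch follows; the path `docker/Dockerfile.dht` and the image tag are illustrative assumptions, since the compare view does not show this file's name:
```bash
# file path and tag are hypothetical; point -f at wherever this Dockerfile lives
docker build -f docker/Dockerfile.dht -t lbry-dht-node .

# the ENTRYPOINT is scripts/dht_node.py, so any extra arguments go straight to that script
docker run --rm lbry-dht-node
```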


@@ -1,4 +1,4 @@
-FROM ubuntu:20.04
+FROM debian:10-slim
 ARG user=lbry
 ARG db_dir=/database
@@ -13,10 +13,14 @@ RUN apt-get update && \
       wget \
       tar unzip \
       build-essential \
-      python3 \
+      automake libtool \
+      pkg-config \
+      libleveldb-dev \
+      python3.7 \
       python3-dev \
       python3-pip \
       python3-wheel \
+      python3-cffi \
       python3-setuptools && \
     update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
     rm -rf /var/lib/apt/lists/*

45
docker/Dockerfile.web Normal file

@ -0,0 +1,45 @@
FROM debian:10-slim
ARG user=lbry
ARG downloads_dir=/database
ARG projects_dir=/home/$user
ARG DOCKER_TAG
ARG DOCKER_COMMIT=docker
ENV DOCKER_TAG=$DOCKER_TAG DOCKER_COMMIT=$DOCKER_COMMIT
RUN apt-get update && \
apt-get -y --no-install-recommends install \
wget \
automake libtool \
tar unzip \
build-essential \
pkg-config \
libleveldb-dev \
python3.7 \
python3-dev \
python3-pip \
python3-wheel \
python3-setuptools && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 && \
rm -rf /var/lib/apt/lists/*
RUN groupadd -g 999 $user && useradd -m -u 999 -g $user $user
RUN mkdir -p $downloads_dir
RUN chown -R $user:$user $downloads_dir
COPY . $projects_dir
RUN chown -R $user:$user $projects_dir
USER $user
WORKDIR $projects_dir
RUN pip install uvloop
RUN make install
RUN python3 docker/set_build.py
RUN rm ~/.cache -rf
# entry point
VOLUME $downloads_dir
COPY ./docker/webconf.yaml /webconf.yaml
ENTRYPOINT ["/home/lbry/.local/bin/lbrynet", "start", "--config=/webconf.yaml"]
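`docker/Dockerfile.web` builds a headless web-SDK image whose entry point is `lbrynet start --config=/webconf.yaml`. A sketch of building and running it from the repository root (the image name is only an example; the ports match the bundled webconf.yaml shown further down):
```bash
docker build -f docker/Dockerfile.web -t lbry-websdk .

# 5279 is the JSON-RPC API, 5280 the streaming server
docker run --rm -p 5279:5279 -p 5280:5280 lbry-websdk
```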

9
docker/README.md Normal file

@ -0,0 +1,9 @@
### How to run with docker-compose
1. Edit config file and after that fix permissions with
```
sudo chown -R 999:999 webconf.yaml
```
2. Start SDK with
```
docker-compose up -d
```


@@ -1,36 +1,49 @@
 version: "3"
 volumes:
-  lbrycrd:
   wallet_server:
+  es01:
 services:
-  lbrycrd:
-    image: lbry/lbrycrd:${LBRYCRD_TAG:-latest-release}
-    restart: always
-    ports: # accessible from host
-      - "9246:9246" # rpc port
-    expose: # internal to docker network. also this doesn't do anything. its for documentation only.
-      - "9245" # node-to-node comms port
-    volumes:
-      - "lbrycrd:/data/.lbrycrd"
-    environment:
-      - RUN_MODE=default
-      # Curently not snapshot provided
-      #- SNAPSHOT_URL=${LBRYCRD_SNAPSHOT_URL-https://lbry.com/snapshot/blockchain}
-      - RPC_ALLOW_IP=0.0.0.0/0
   wallet_server:
+    depends_on:
+      - es01
     image: lbry/wallet-server:${WALLET_SERVER_TAG:-latest-release}
-    depends_on:
-      - lbrycrd
     restart: always
+    network_mode: host
     ports:
       - "50001:50001" # rpc port
-      - "50005:50005" # websocket port
-      #- "2112:2112" # uncomment to enable prometheus
+      - "2112:2112" # uncomment to enable prometheus
     volumes:
      - "wallet_server:/database"
     environment:
-      # Curently not snapshot provided
-      # - SNAPSHOT_URL=${WALLET_SERVER_SNAPSHOT_URL-https://lbry.com/snapshot/wallet}
-      - DAEMON_URL=http://lbry:lbry@lbrycrd:9245
+      - DAEMON_URL=http://lbry:lbry@127.0.0.1:9245
+      - MAX_QUERY_WORKERS=4
+      - CACHE_MB=1024
+      - CACHE_ALL_TX_HASHES=
+      - CACHE_ALL_CLAIM_TXOS=
+      - MAX_SEND=1000000000000000000
+      - MAX_RECEIVE=1000000000000000000
+      - MAX_SESSIONS=100000
+      - HOST=0.0.0.0
+      - TCP_PORT=50001
+      - PROMETHEUS_PORT=2112
+      - FILTERING_CHANNEL_IDS=770bd7ecba84fd2f7607fb15aedd2b172c2e153f 95e5db68a3101df19763f3a5182e4b12ba393ee8
+      - BLOCKING_CHANNEL_IDS=dd687b357950f6f271999971f43c785e8067c3a9 06871aa438032244202840ec59a469b303257cad b4a2528f436eca1bf3bf3e10ff3f98c57bd6c4c6
+  es01:
+    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
+    container_name: es01
+    environment:
+      - node.name=es01
+      - discovery.type=single-node
+      - indices.query.bool.max_clause_count=8192
+      - bootstrap.memory_lock=true
+      - "ES_JAVA_OPTS=-Xms4g -Xmx4g" # no more than 32, remember to disable swap
+    ulimits:
+      memlock:
+        soft: -1
+        hard: -1
+    volumes:
+      - es01:/usr/share/elasticsearch/data
+    ports:
+      - 127.0.0.1:9200:9200
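With the lbrycrd service dropped, the compose file now assumes a blockchain daemon already reachable at 127.0.0.1:9245 on the host and pairs the wallet server with an Elasticsearch sidecar. One way to bring the stack up and confirm es01 is answering (the curl health check is just a convenience, not part of the compose file):
```bash
docker-compose -f docker-compose-wallet-server.yml up -d

# es01 publishes only on localhost; wait for yellow/green status
curl -s http://127.0.0.1:9200/_cluster/health?pretty
```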


@ -0,0 +1,9 @@
version: '3'
services:
websdk:
image: vshyba/websdk
ports:
- '5279:5279'
- '5280:5280'
volumes:
- ./webconf.yaml:/webconf.yaml


@@ -20,7 +20,7 @@ def _check_and_set(d: dict, key: str, value: str):
 def main():
     build_info = {item: build_info_mod.__dict__[item] for item in dir(build_info_mod) if not item.startswith("__")}
-    commit_hash = os.getenv('DOCKER_COMMIT', os.getenv('CI_COMMIT_SHA', os.getenv('TRAVIS_COMMIT')))
+    commit_hash = os.getenv('DOCKER_COMMIT', os.getenv('GITHUB_SHA'))
     if commit_hash is None:
         raise ValueError("Commit hash not found in env vars")
     _check_and_set(build_info, "COMMIT_HASH", commit_hash[:6])
@@ -30,8 +30,10 @@ def main():
         _check_and_set(build_info, "DOCKER_TAG", docker_tag)
         _check_and_set(build_info, "BUILD", "docker")
     else:
-        ci_tag = os.getenv('CI_COMMIT_TAG', os.getenv('TRAVIS_TAG'))
-        _check_and_set(build_info, "BUILD", "release" if re.match(r'v\d+\.\d+\.\d+$', str(ci_tag)) else "qa")
+        if re.match(r'refs/tags/v\d+\.\d+\.\d+$', str(os.getenv('GITHUB_REF'))):
+            _check_and_set(build_info, "BUILD", "release")
+        else:
+            _check_and_set(build_info, "BUILD", "qa")
     log.debug("build info: %s", ", ".join([f"{k}={v}" for k, v in build_info.items()]))
     with open(build_info_mod.__file__, 'w') as f:
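Build-type detection now keys off GitHub Actions variables instead of the GitLab/Travis ones: a `GITHUB_REF` matching `refs/tags/vX.Y.Z` is stamped as a release build, anything else as qa. A rough local illustration (the env values are examples only, and note the script rewrites the generated build-info module in place):
```bash
# tagged ref -> BUILD recorded as "release"
GITHUB_SHA=0123456789abcdef GITHUB_REF=refs/tags/v0.113.0 python docker/set_build.py

# branch ref -> BUILD falls back to "qa"
GITHUB_SHA=0123456789abcdef GITHUB_REF=refs/heads/master python docker/set_build.py
```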


@@ -6,7 +6,7 @@ set -euo pipefail
 SNAPSHOT_URL="${SNAPSHOT_URL:-}" #off by default. latest snapshot at https://lbry.com/snapshot/wallet
-if [[ -n "$SNAPSHOT_URL" ]] && [[ ! -f /database/claims.db ]]; then
+if [[ -n "$SNAPSHOT_URL" ]] && [[ ! -f /database/lbry-leveldb ]]; then
   files="$(ls)"
   echo "Downloading wallet snapshot from $SNAPSHOT_URL"
   wget --no-verbose --trust-server-names --content-disposition "$SNAPSHOT_URL"
@@ -20,4 +20,6 @@ if [[ -n "$SNAPSHOT_URL" ]] && [[ ! -f /database/claims.db ]]; then
   rm "$filename"
 fi
-/home/lbry/.local/bin/torba-server "$@"
+/home/lbry/.local/bin/lbry-hub-elastic-sync
+echo 'starting server'
+/home/lbry/.local/bin/lbry-hub "$@"

9
docker/webconf.yaml Normal file

@ -0,0 +1,9 @@
allowed_origin: "*"
max_key_fee: "0.0 USD"
save_files: false
save_blobs: false
streaming_server: "0.0.0.0:5280"
api: "0.0.0.0:5279"
data_dir: /tmp
download_dir: /tmp
wallet_dir: /tmp
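This webconf.yaml is the configuration baked into the web images above: no key fee, nothing saved to disk, and the API (5279) and streaming server (5280) bound to all interfaces. Outside of Docker the same file can be passed to the daemon directly; the absolute path here mirrors the container layout, so adjust it for a local checkout:
```bash
lbrynet start --config=/webconf.yaml
```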

File diff suppressed because one or more lines are too long


@@ -1,2 +1,2 @@
-__version__ = "0.90.1"
+__version__ = "0.113.0"
 version = tuple(map(int, __version__.split('.')))  # pylint: disable=invalid-name


@ -1,5 +1,6 @@
import os import os
import re import re
import time
import asyncio import asyncio
import binascii import binascii
import logging import logging
@ -70,21 +71,27 @@ class AbstractBlob:
'writers', 'writers',
'verified', 'verified',
'writing', 'writing',
'readers' 'readers',
'added_on',
'is_mine',
] ]
def __init__(self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None, def __init__(
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None, self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_directory: typing.Optional[str] = None): blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False,
):
self.loop = loop self.loop = loop
self.blob_hash = blob_hash self.blob_hash = blob_hash
self.length = length self.length = length
self.blob_completed_callback = blob_completed_callback self.blob_completed_callback = blob_completed_callback
self.blob_directory = blob_directory self.blob_directory = blob_directory
self.writers: typing.Dict[typing.Tuple[typing.Optional[str], typing.Optional[int]], HashBlobWriter] = {} self.writers: typing.Dict[typing.Tuple[typing.Optional[str], typing.Optional[int]], HashBlobWriter] = {}
self.verified: asyncio.Event = asyncio.Event(loop=self.loop) self.verified: asyncio.Event = asyncio.Event()
self.writing: asyncio.Event = asyncio.Event(loop=self.loop) self.writing: asyncio.Event = asyncio.Event()
self.readers: typing.List[typing.BinaryIO] = [] self.readers: typing.List[typing.BinaryIO] = []
self.added_on = added_on or time.time()
self.is_mine = is_mine
if not is_valid_blobhash(blob_hash): if not is_valid_blobhash(blob_hash):
raise InvalidBlobHashError(blob_hash) raise InvalidBlobHashError(blob_hash)
@ -180,20 +187,21 @@ class AbstractBlob:
@classmethod @classmethod
async def create_from_unencrypted( async def create_from_unencrypted(
cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes, cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes,
unencrypted: bytes, blob_num: int, unencrypted: bytes, blob_num: int, added_on: int, is_mine: bool,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], None]] = None) -> BlobInfo: blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], None]] = None,
) -> BlobInfo:
""" """
Create an encrypted BlobFile from plaintext bytes Create an encrypted BlobFile from plaintext bytes
""" """
blob_bytes, blob_hash = encrypt_blob_bytes(key, iv, unencrypted) blob_bytes, blob_hash = encrypt_blob_bytes(key, iv, unencrypted)
length = len(blob_bytes) length = len(blob_bytes)
blob = cls(loop, blob_hash, length, blob_completed_callback, blob_dir) blob = cls(loop, blob_hash, length, blob_completed_callback, blob_dir, added_on, is_mine)
writer = blob.get_blob_writer() writer = blob.get_blob_writer()
writer.write(blob_bytes) writer.write(blob_bytes)
await blob.verified.wait() await blob.verified.wait()
return BlobInfo(blob_num, length, binascii.hexlify(iv).decode(), blob_hash) return BlobInfo(blob_num, length, binascii.hexlify(iv).decode(), added_on, blob_hash, is_mine)
def save_verified_blob(self, verified_bytes: bytes): def save_verified_blob(self, verified_bytes: bytes):
if self.verified.is_set(): if self.verified.is_set():
@ -214,7 +222,7 @@ class AbstractBlob:
peer_port: typing.Optional[int] = None) -> HashBlobWriter: peer_port: typing.Optional[int] = None) -> HashBlobWriter:
if (peer_address, peer_port) in self.writers and not self.writers[(peer_address, peer_port)].closed(): if (peer_address, peer_port) in self.writers and not self.writers[(peer_address, peer_port)].closed():
raise OSError(f"attempted to download blob twice from {peer_address}:{peer_port}") raise OSError(f"attempted to download blob twice from {peer_address}:{peer_port}")
fut = asyncio.Future(loop=self.loop) fut = asyncio.Future()
writer = HashBlobWriter(self.blob_hash, self.get_length, fut) writer = HashBlobWriter(self.blob_hash, self.get_length, fut)
self.writers[(peer_address, peer_port)] = writer self.writers[(peer_address, peer_port)] = writer
@ -248,11 +256,13 @@ class BlobBuffer(AbstractBlob):
""" """
An in-memory only blob An in-memory only blob
""" """
def __init__(self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None, def __init__(
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None, self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_directory: typing.Optional[str] = None): blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False
):
self._verified_bytes: typing.Optional[BytesIO] = None self._verified_bytes: typing.Optional[BytesIO] = None
super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory) super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory, added_on, is_mine)
@contextlib.contextmanager @contextlib.contextmanager
def _reader_context(self) -> typing.ContextManager[typing.BinaryIO]: def _reader_context(self) -> typing.ContextManager[typing.BinaryIO]:
@ -289,10 +299,12 @@ class BlobFile(AbstractBlob):
""" """
A blob existing on the local file system A blob existing on the local file system
""" """
def __init__(self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None, def __init__(
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None, self, loop: asyncio.AbstractEventLoop, blob_hash: str, length: typing.Optional[int] = None,
blob_directory: typing.Optional[str] = None): blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None,
super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory) blob_directory: typing.Optional[str] = None, added_on: typing.Optional[int] = None, is_mine: bool = False
):
super().__init__(loop, blob_hash, length, blob_completed_callback, blob_directory, added_on, is_mine)
if not blob_directory or not os.path.isdir(blob_directory): if not blob_directory or not os.path.isdir(blob_directory):
raise OSError(f"invalid blob directory '{blob_directory}'") raise OSError(f"invalid blob directory '{blob_directory}'")
self.file_path = os.path.join(self.blob_directory, self.blob_hash) self.file_path = os.path.join(self.blob_directory, self.blob_hash)
@ -343,12 +355,12 @@ class BlobFile(AbstractBlob):
@classmethod @classmethod
async def create_from_unencrypted( async def create_from_unencrypted(
cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes, cls, loop: asyncio.AbstractEventLoop, blob_dir: typing.Optional[str], key: bytes, iv: bytes,
unencrypted: bytes, blob_num: int, unencrypted: bytes, blob_num: int, added_on: float, is_mine: bool,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], asyncio.Task]] = None
asyncio.Task]] = None) -> BlobInfo: ) -> BlobInfo:
if not blob_dir or not os.path.isdir(blob_dir): if not blob_dir or not os.path.isdir(blob_dir):
raise OSError(f"cannot create blob in directory: '{blob_dir}'") raise OSError(f"cannot create blob in directory: '{blob_dir}'")
return await super().create_from_unencrypted( return await super().create_from_unencrypted(
loop, blob_dir, key, iv, unencrypted, blob_num, blob_completed_callback loop, blob_dir, key, iv, unencrypted, blob_num, added_on, is_mine, blob_completed_callback
) )


@@ -7,13 +7,19 @@ class BlobInfo:
         'blob_num',
         'length',
         'iv',
+        'added_on',
+        'is_mine'
     ]
-    def __init__(self, blob_num: int, length: int, iv: str, blob_hash: typing.Optional[str] = None):
+    def __init__(
+            self, blob_num: int, length: int, iv: str, added_on,
+            blob_hash: typing.Optional[str] = None, is_mine=False):
         self.blob_hash = blob_hash
         self.blob_num = blob_num
         self.length = length
         self.iv = iv
+        self.added_on = added_on
+        self.is_mine = is_mine
     def as_dict(self) -> typing.Dict:
         d = {


@@ -36,30 +36,30 @@ class BlobManager:
             self.config.blob_lru_cache_size)
         self.connection_manager = ConnectionManager(loop)

-    def _get_blob(self, blob_hash: str, length: typing.Optional[int] = None):
+    def _get_blob(self, blob_hash: str, length: typing.Optional[int] = None, is_mine: bool = False):
         if self.config.save_blobs or (
                 is_valid_blobhash(blob_hash) and os.path.isfile(os.path.join(self.blob_dir, blob_hash))):
             return BlobFile(
-                self.loop, blob_hash, length, self.blob_completed, self.blob_dir
+                self.loop, blob_hash, length, self.blob_completed, self.blob_dir, is_mine=is_mine
             )
         return BlobBuffer(
-            self.loop, blob_hash, length, self.blob_completed, self.blob_dir
+            self.loop, blob_hash, length, self.blob_completed, self.blob_dir, is_mine=is_mine
         )

-    def get_blob(self, blob_hash, length: typing.Optional[int] = None):
+    def get_blob(self, blob_hash, length: typing.Optional[int] = None, is_mine: bool = False):
         if blob_hash in self.blobs:
             if self.config.save_blobs and isinstance(self.blobs[blob_hash], BlobBuffer):
                 buffer = self.blobs.pop(blob_hash)
                 if blob_hash in self.completed_blob_hashes:
                     self.completed_blob_hashes.remove(blob_hash)
-                self.blobs[blob_hash] = self._get_blob(blob_hash, length)
+                self.blobs[blob_hash] = self._get_blob(blob_hash, length, is_mine)
                 if buffer.is_readable():
                     with buffer.reader_context() as reader:
                         self.blobs[blob_hash].write_blob(reader.read())
             if length and self.blobs[blob_hash].length is None:
                 self.blobs[blob_hash].set_length(length)
         else:
-            self.blobs[blob_hash] = self._get_blob(blob_hash, length)
+            self.blobs[blob_hash] = self._get_blob(blob_hash, length, is_mine)
         return self.blobs[blob_hash]
     def is_blob_verified(self, blob_hash: str, length: typing.Optional[int] = None) -> bool:

@@ -83,6 +83,8 @@ class BlobManager:
         to_add = await self.storage.sync_missing_blobs(in_blobfiles_dir)
         if to_add:
             self.completed_blob_hashes.update(to_add)
+        # check blobs that aren't set as finished but were seen on disk
+        await self.ensure_completed_blobs_status(in_blobfiles_dir - to_add)
         if self.config.track_bandwidth:
             self.connection_manager.start()
         return True
@@ -105,13 +107,26 @@ class BlobManager:
         if isinstance(blob, BlobFile):
             if blob.blob_hash not in self.completed_blob_hashes:
                 self.completed_blob_hashes.add(blob.blob_hash)
-            return self.loop.create_task(self.storage.add_blobs((blob.blob_hash, blob.length), finished=True))
+            return self.loop.create_task(self.storage.add_blobs(
+                (blob.blob_hash, blob.length, blob.added_on, blob.is_mine), finished=True)
+            )
         else:
-            return self.loop.create_task(self.storage.add_blobs((blob.blob_hash, blob.length), finished=False))
+            return self.loop.create_task(self.storage.add_blobs(
+                (blob.blob_hash, blob.length, blob.added_on, blob.is_mine), finished=False)
+            )

-    def check_completed_blobs(self, blob_hashes: typing.List[str]) -> typing.List[str]:
-        """Returns of the blobhashes_to_check, which are valid"""
-        return [blob_hash for blob_hash in blob_hashes if self.is_blob_verified(blob_hash)]
+    async def ensure_completed_blobs_status(self, blob_hashes: typing.Iterable[str]):
+        """Ensures that completed blobs from a given list of blob hashes are set as 'finished' in the database."""
+        to_add = []
+        for blob_hash in blob_hashes:
+            if not self.is_blob_verified(blob_hash):
+                continue
+            blob = self.get_blob(blob_hash)
+            to_add.append((blob.blob_hash, blob.length, blob.added_on, blob.is_mine))
+            if len(to_add) > 500:
+                await self.storage.add_blobs(*to_add, finished=True)
+                to_add.clear()
+        return await self.storage.add_blobs(*to_add, finished=True)

     def delete_blob(self, blob_hash: str):
         if not is_valid_blobhash(blob_hash):

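The new ensure_completed_blobs_status() flushes pending rows to the database in batches of ~500, so a node with many thousands of on-disk blobs never builds one enormous write. A standalone sketch of the same batching pattern (the names here are illustrative, not the SDK API):

    import asyncio

    async def add_rows_in_batches(rows, write_batch, batch_size=500):
        # accumulate rows, flushing whenever the buffer exceeds batch_size,
        # then flush the remainder at the end, mirroring ensure_completed_blobs_status()
        pending = []
        for row in rows:
            pending.append(row)
            if len(pending) > batch_size:
                await write_batch(pending)
                pending.clear()
        await write_batch(pending)

    async def main():
        async def fake_write(batch):
            print(f"wrote {len(batch)} rows")
        await add_rows_in_batches(range(1234), fake_write)

    asyncio.run(main())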
View file

@@ -0,0 +1,77 @@
+import asyncio
+import logging
+
+log = logging.getLogger(__name__)
+
+
+class DiskSpaceManager:
+
+    def __init__(self, config, db, blob_manager, cleaning_interval=30 * 60, analytics=None):
+        self.config = config
+        self.db = db
+        self.blob_manager = blob_manager
+        self.cleaning_interval = cleaning_interval
+        self.running = False
+        self.task = None
+        self.analytics = analytics
+        self._used_space_bytes = None
+
+    async def get_free_space_mb(self, is_network_blob=False):
+        limit_mb = self.config.network_storage_limit if is_network_blob else self.config.blob_storage_limit
+        space_used_mb = await self.get_space_used_mb()
+        space_used_mb = space_used_mb['network_storage'] if is_network_blob else space_used_mb['content_storage']
+        return max(0, limit_mb - space_used_mb)
+
+    async def get_space_used_bytes(self):
+        self._used_space_bytes = await self.db.get_stored_blob_disk_usage()
+        return self._used_space_bytes
+
+    async def get_space_used_mb(self, cached=True):
+        cached = cached and self._used_space_bytes is not None
+        space_used_bytes = self._used_space_bytes if cached else await self.get_space_used_bytes()
+        return {key: int(value/1024.0/1024.0) for key, value in space_used_bytes.items()}
+
+    async def clean(self):
+        await self._clean(False)
+        await self._clean(True)
+
+    async def _clean(self, is_network_blob=False):
+        space_used_mb = await self.get_space_used_mb(cached=False)
+        if is_network_blob:
+            space_used_mb = space_used_mb['network_storage']
+        else:
+            space_used_mb = space_used_mb['content_storage'] + space_used_mb['private_storage']
+        storage_limit_mb = self.config.network_storage_limit if is_network_blob else self.config.blob_storage_limit
+        if self.analytics:
+            asyncio.create_task(
+                self.analytics.send_disk_space_used(space_used_mb, storage_limit_mb, is_network_blob)
+            )
+        delete = []
+        available = storage_limit_mb - space_used_mb
+        if storage_limit_mb == 0 if not is_network_blob else available >= 0:
+            return 0
+        for blob_hash, file_size, _ in await self.db.get_stored_blobs(is_mine=False, is_network_blob=is_network_blob):
+            delete.append(blob_hash)
+            available += int(file_size/1024.0/1024.0)
+            if available >= 0:
+                break
+        if delete:
+            await self.db.stop_all_files()
+            await self.blob_manager.delete_blobs(delete, delete_from_db=True)
+        self._used_space_bytes = None
+        return len(delete)
+
+    async def cleaning_loop(self):
+        while self.running:
+            await asyncio.sleep(self.cleaning_interval)
+            await self.clean()
+
+    async def start(self):
+        self.running = True
+        self.task = asyncio.create_task(self.cleaning_loop())
+        self.task.add_done_callback(lambda _: log.info("Stopping blob cleanup service."))
+
+    async def stop(self):
+        if self.running:
+            self.running = False
+            self.task.cancel()

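Roughly, the manager is driven like the other daemon components: start() launches the periodic loop, clean() can be called on demand, stop() cancels the task. A hedged usage sketch with stand-in config and db objects (the stubs only implement the calls the diff shows this class making):

    import asyncio

    class StubConfig:
        blob_storage_limit = 1024    # MB; 0 disables content cleanup
        network_storage_limit = 0    # MB; 0 disables network-seeding cleanup

    class StubDB:
        async def get_stored_blob_disk_usage(self):
            # the usage buckets the real storage layer reports, in bytes
            return {'network_storage': 0, 'content_storage': 0, 'private_storage': 0}
        async def get_stored_blobs(self, is_mine=False, is_network_blob=False):
            return []
        async def stop_all_files(self):
            pass

    async def demo():
        manager = DiskSpaceManager(StubConfig(), StubDB(), blob_manager=None, cleaning_interval=60)
        await manager.clean()  # one manual pass over content and network storage
        await manager.start()  # or run cleaning_loop() in the background
        await manager.stop()

    asyncio.run(demo())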
View file

@@ -32,7 +32,7 @@ class BlobExchangeClientProtocol(asyncio.Protocol):
         self.buf = b''

         # this is here to handle the race when the downloader is closed right as response_fut gets a result
-        self.closed = asyncio.Event(loop=self.loop)
+        self.closed = asyncio.Event()

     def data_received(self, data: bytes):
         if self.connection_manager:
@@ -111,7 +111,7 @@ class BlobExchangeClientProtocol(asyncio.Protocol):
             self.transport.write(msg)
             if self.connection_manager:
                 self.connection_manager.sent_data(f"{self.peer_address}:{self.peer_port}", len(msg))
-            response: BlobResponse = await asyncio.wait_for(self._response_fut, self.peer_timeout, loop=self.loop)
+            response: BlobResponse = await asyncio.wait_for(self._response_fut, self.peer_timeout)
             availability_response = response.get_availability_response()
             price_response = response.get_price_response()
             blob_response = response.get_blob_response()
@@ -151,7 +151,7 @@ class BlobExchangeClientProtocol(asyncio.Protocol):
                       f" timeout in {self.peer_timeout}"
                 log.debug(msg)
                 msg = f"downloaded {self.blob.blob_hash[:8]} from {self.peer_address}:{self.peer_port}"
-                await asyncio.wait_for(self.writer.finished, self.peer_timeout, loop=self.loop)
+                await asyncio.wait_for(self.writer.finished, self.peer_timeout)
                 # wait for the io to finish
                 await self.blob.verified.wait()
                 log.info("%s at %fMB/s", msg,
@@ -187,7 +187,7 @@ class BlobExchangeClientProtocol(asyncio.Protocol):
         try:
             self._blob_bytes_received = 0
             self.blob, self.writer = blob, blob.get_blob_writer(self.peer_address, self.peer_port)
-            self._response_fut = asyncio.Future(loop=self.loop)
+            self._response_fut = asyncio.Future()
             return await self._download_blob()
         except OSError:
             # i'm not sure how to fix this race condition - jack
@@ -244,7 +244,7 @@ async def request_blob(loop: asyncio.AbstractEventLoop, blob: Optional['Abstract
     try:
         if not connected_protocol:
             await asyncio.wait_for(loop.create_connection(lambda: protocol, address, tcp_port),
-                                   peer_connect_timeout, loop=loop)
+                                   peer_connect_timeout)
             connected_protocol = protocol
         if blob is None or blob.get_is_verified() or not blob.is_writeable():
             # blob is None happens when we are just opening a connection

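Most of this file's changes follow one mechanical pattern: the explicit loop argument to asyncio primitives was deprecated in Python 3.8 and removed in 3.10, and the running loop is now picked up implicitly. A minimal before/after illustration:

    import asyncio

    async def wait_with_timeout(fut: asyncio.Future, timeout: float):
        # old style (no longer accepted): asyncio.wait_for(fut, timeout, loop=loop)
        # new style: the coroutine already runs inside a loop, so none is passed
        return await asyncio.wait_for(fut, timeout)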
View file

@@ -3,6 +3,7 @@ import typing
 import logging
 from lbry.utils import cache_concurrent
 from lbry.blob_exchange.client import request_blob
+from lbry.dht.node import get_kademlia_peers_from_hosts
 if typing.TYPE_CHECKING:
     from lbry.conf import Config
     from lbry.dht.node import Node
@@ -29,7 +30,7 @@ class BlobDownloader:
         self.failures: typing.Dict['KademliaPeer', int] = {}
         self.connection_failures: typing.Set['KademliaPeer'] = set()
         self.connections: typing.Dict['KademliaPeer', 'BlobExchangeClientProtocol'] = {}
-        self.is_running = asyncio.Event(loop=self.loop)
+        self.is_running = asyncio.Event()

     def should_race_continue(self, blob: 'AbstractBlob'):
         max_probes = self.config.max_connections_per_download * (1 if self.connections else 10)
@@ -63,8 +64,8 @@ class BlobDownloader:
             self.scores[peer] = bytes_received / elapsed if bytes_received and elapsed else 1

     async def new_peer_or_finished(self):
-        active_tasks = list(self.active_connections.values()) + [asyncio.sleep(1)]
-        await asyncio.wait(active_tasks, loop=self.loop, return_when='FIRST_COMPLETED')
+        active_tasks = list(self.active_connections.values()) + [asyncio.create_task(asyncio.sleep(1))]
+        await asyncio.wait(active_tasks, return_when='FIRST_COMPLETED')

     def cleanup_active(self):
         if not self.active_connections and not self.connections:
@@ -87,7 +88,6 @@ class BlobDownloader:
         if blob.get_is_verified():
             return blob
         self.is_running.set()
-        tried_for_this_blob: typing.Set['KademliaPeer'] = set()
         try:
             while not blob.get_is_verified() and self.is_running.is_set():
                 batch: typing.Set['KademliaPeer'] = set(self.connections.keys())
@@ -97,24 +97,15 @@ class BlobDownloader:
                     "%s running, %d peers, %d ignored, %d active, %s connections", blob_hash[:6],
                     len(batch), len(self.ignored), len(self.active_connections), len(self.connections)
                 )
-                re_add: typing.Set['KademliaPeer'] = set()
                 for peer in sorted(batch, key=lambda peer: self.scores.get(peer, 0), reverse=True):
                     if peer in self.ignored:
                         continue
-                    if peer in tried_for_this_blob:
+                    if peer in self.active_connections or not self.should_race_continue(blob):
                         continue
-                    if peer in self.active_connections:
-                        if peer not in re_add:
-                            re_add.add(peer)
-                        continue
-                    if not self.should_race_continue(blob):
-                        break
                     log.debug("request %s from %s:%i", blob_hash[:8], peer.address, peer.tcp_port)
                     t = self.loop.create_task(self.request_blob_from_peer(blob, peer, connection_id))
                     self.active_connections[peer] = t
-                    tried_for_this_blob.add(peer)
-                if not re_add:
-                    self.peer_queue.put_nowait(list(batch))
+                self.peer_queue.put_nowait(list(batch))
                 await self.new_peer_or_finished()
                 self.cleanup_active()
             log.debug("downloaded %s", blob_hash[:8])
@@ -133,11 +124,14 @@ class BlobDownloader:
             protocol.close()


-async def download_blob(loop, config: 'Config', blob_manager: 'BlobManager', node: 'Node',
+async def download_blob(loop, config: 'Config', blob_manager: 'BlobManager', dht_node: 'Node',
                         blob_hash: str) -> 'AbstractBlob':
-    search_queue = asyncio.Queue(loop=loop, maxsize=config.max_connections_per_download)
+    search_queue = asyncio.Queue(maxsize=config.max_connections_per_download)
     search_queue.put_nowait(blob_hash)
-    peer_queue, accumulate_task = node.accumulate_peers(search_queue)
+    peer_queue, accumulate_task = dht_node.accumulate_peers(search_queue)
+    fixed_peers = None if not config.fixed_peers else await get_kademlia_peers_from_hosts(config.fixed_peers)
+    if fixed_peers:
+        loop.call_later(config.fixed_peer_delay, peer_queue.put_nowait, fixed_peers)
    downloader = BlobDownloader(loop, config, blob_manager, peer_queue)
    try:
        return await downloader.download_blob(blob_hash)

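The download_blob() change feeds the configured fixed peers into the same queue the DHT search fills, but only after fixed_peer_delay seconds, so DHT-discovered peers get a head start. The scheduling trick is loop.call_later() with a plain synchronous callback; a self-contained sketch (addresses are illustrative):

    import asyncio

    async def demo():
        loop = asyncio.get_running_loop()
        peer_queue: asyncio.Queue = asyncio.Queue()
        # after 2 seconds, fall back to a fixed peer list if nothing arrived sooner
        loop.call_later(2.0, peer_queue.put_nowait, [("reflector.example.com", 5567)])
        peers = await peer_queue.get()  # resolves with DHT peers or the fallback
        print(peers)

    asyncio.run(demo())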
View file

@@ -1,6 +1,7 @@
 import asyncio
 import binascii
 import logging
+import socket
 import typing
 from json.decoder import JSONDecodeError
 from lbry.blob_exchange.serialization import BlobResponse, BlobRequest, blob_response_types
@@ -24,19 +25,19 @@ class BlobServerProtocol(asyncio.Protocol):
         self.idle_timeout = idle_timeout
         self.transfer_timeout = transfer_timeout
         self.server_task: typing.Optional[asyncio.Task] = None
-        self.started_listening = asyncio.Event(loop=self.loop)
+        self.started_listening = asyncio.Event()
         self.buf = b''
         self.transport: typing.Optional[asyncio.Transport] = None
         self.lbrycrd_address = lbrycrd_address
         self.peer_address_and_port: typing.Optional[str] = None
-        self.started_transfer = asyncio.Event(loop=self.loop)
-        self.transfer_finished = asyncio.Event(loop=self.loop)
+        self.started_transfer = asyncio.Event()
+        self.transfer_finished = asyncio.Event()
         self.close_on_idle_task: typing.Optional[asyncio.Task] = None

     async def close_on_idle(self):
         while self.transport:
             try:
-                await asyncio.wait_for(self.started_transfer.wait(), self.idle_timeout, loop=self.loop)
+                await asyncio.wait_for(self.started_transfer.wait(), self.idle_timeout)
             except asyncio.TimeoutError:
                 log.debug("closing idle connection from %s", self.peer_address_and_port)
                 return self.close()
@@ -100,7 +101,7 @@ class BlobServerProtocol(asyncio.Protocol):
             log.debug("send %s to %s:%i", blob_hash, peer_address, peer_port)
             self.started_transfer.set()
             try:
-                sent = await asyncio.wait_for(blob.sendfile(self), self.transfer_timeout, loop=self.loop)
+                sent = await asyncio.wait_for(blob.sendfile(self), self.transfer_timeout)
                 if sent and sent > 0:
                     self.blob_manager.connection_manager.sent_data(self.peer_address_and_port, sent)
                     log.info("sent %s (%i bytes) to %s:%i", blob_hash, sent, peer_address, peer_port)
@@ -137,7 +138,7 @@ class BlobServerProtocol(asyncio.Protocol):
             try:
                 request = BlobRequest.deserialize(self.buf + data)
                 self.buf = remainder
-            except JSONDecodeError:
+            except (UnicodeDecodeError, JSONDecodeError):
                 log.error("request from %s is not valid json (%i bytes): %s", self.peer_address_and_port,
                           len(self.buf + data), '' if not data else binascii.hexlify(self.buf + data).decode())
                 self.close()
@@ -156,7 +157,7 @@ class BlobServer:
         self.loop = loop
         self.blob_manager = blob_manager
         self.server_task: typing.Optional[asyncio.Task] = None
-        self.started_listening = asyncio.Event(loop=self.loop)
+        self.started_listening = asyncio.Event()
         self.lbrycrd_address = lbrycrd_address
         self.idle_timeout = idle_timeout
         self.transfer_timeout = transfer_timeout
@@ -167,6 +168,13 @@ class BlobServer:
             raise Exception("already running")

         async def _start_server():
+            # checking if the port is in use
+            # thx https://stackoverflow.com/a/52872579
+            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+                if s.connect_ex(('localhost', port)) == 0:
+                    # the port is already in use!
+                    log.error("Failed to bind TCP %s:%d", interface, port)
+
             server = await self.loop.create_server(
                 lambda: self.server_protocol_class(self.loop, self.blob_manager, self.lbrycrd_address,
                                                    self.idle_timeout, self.transfer_timeout),

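The new pre-flight check in _start_server() uses connect_ex(), which returns an errno instead of raising; 0 means something already accepted the connection, i.e. the port is taken. As written it only logs the conflict, and create_server() below is still attempted, where a real bind error would surface. The probe in isolation:

    import socket

    def port_in_use(port: int, host: str = 'localhost') -> bool:
        # connect_ex returns 0 only when a process is already listening there
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            return s.connect_ex((host, port)) == 0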
View file

@@ -1,8 +1,8 @@
 import os
 import re
 import sys
-import typing
 import logging
+from typing import List, Dict, Tuple, Union, TypeVar, Generic, Optional
 from argparse import ArgumentParser
 from contextlib import contextmanager
 from appdirs import user_data_dir, user_config_dir
@@ -15,7 +15,7 @@ log = logging.getLogger(__name__)

 NOT_SET = type('NOT_SET', (object,), {})  # pylint: disable=invalid-name
-T = typing.TypeVar('T')
+T = TypeVar('T')

 CURRENCIES = {
     'BTC': {'type': 'crypto'},
@@ -24,11 +24,11 @@ CURRENCIES = {
 }


-class Setting(typing.Generic[T]):
+class Setting(Generic[T]):

-    def __init__(self, doc: str, default: typing.Optional[T] = None,
-                 previous_names: typing.Optional[typing.List[str]] = None,
-                 metavar: typing.Optional[str] = None):
+    def __init__(self, doc: str, default: Optional[T] = None,
+                 previous_names: Optional[List[str]] = None,
+                 metavar: Optional[str] = None):
         self.doc = doc
         self.default = default
         self.previous_names = previous_names or []
@@ -45,7 +45,7 @@ class Setting(typing.Generic[T]):
     def no_cli_name(self):
         return f"--no-{self.name.replace('_', '-')}"

-    def __get__(self, obj: typing.Optional['BaseConfig'], owner) -> T:
+    def __get__(self, obj: Optional['BaseConfig'], owner) -> T:
         if obj is None:
             return self
         for location in obj.search_order:
@@ -53,7 +53,7 @@ class Setting(typing.Generic[T]):
                 return location[self.name]
         return self.default

-    def __set__(self, obj: 'BaseConfig', val: typing.Union[T, NOT_SET]):
+    def __set__(self, obj: 'BaseConfig', val: Union[T, NOT_SET]):
         if val == NOT_SET:
             for location in obj.modify_order:
                 if self.name in location:
@@ -63,6 +63,18 @@ class Setting(typing.Generic[T]):
         for location in obj.modify_order:
             location[self.name] = val

+    def is_set(self, obj: 'BaseConfig') -> bool:
+        for location in obj.search_order:
+            if self.name in location:
+                return True
+        return False
+
+    def is_set_to_default(self, obj: 'BaseConfig') -> bool:
+        for location in obj.search_order:
+            if self.name in location:
+                return location[self.name] == self.default
+        return False
+
     def validate(self, value):
         raise NotImplementedError()
@@ -87,7 +99,7 @@ class String(Setting[str]):
             f"Setting '{self.name}' must be a string."

     # TODO: removes this after pylint starts to understand generics
-    def __get__(self, obj: typing.Optional['BaseConfig'], owner) -> str:  # pylint: disable=useless-super-delegation
+    def __get__(self, obj: Optional['BaseConfig'], owner) -> str:  # pylint: disable=useless-super-delegation
         return super().__get__(obj, owner)

@@ -191,7 +203,7 @@ class MaxKeyFee(Setting[dict]):
         )
         parser.add_argument(
             self.no_cli_name,
-            help=f"Disable maximum key fee check.",
+            help="Disable maximum key fee check.",
             dest=self.name,
             const=None,
             action="store_const",
@@ -200,7 +212,7 @@ class MaxKeyFee(Setting[dict]):

 class StringChoice(String):
-    def __init__(self, doc: str, valid_values: typing.List[str], default: str, *args, **kwargs):
+    def __init__(self, doc: str, valid_values: List[str], default: str, *args, **kwargs):
         super().__init__(doc, default, *args, **kwargs)
         if not valid_values:
             raise ValueError("No valid values provided")
@@ -273,6 +285,75 @@ class Strings(ListSetting):
             f"'{self.name}' must be a string."


+class KnownHubsList:
+
+    def __init__(self, config: 'Config' = None, file_name: str = 'known_hubs.yml'):
+        self.file_name = file_name
+        self.path = os.path.join(config.wallet_dir, self.file_name) if config else None
+        self.hubs: Dict[Tuple[str, int], Dict] = {}
+        if self.exists:
+            self.load()
+
+    @property
+    def exists(self):
+        return self.path and os.path.exists(self.path)
+
+    @property
+    def serialized(self) -> Dict[str, Dict]:
+        return {f"{host}:{port}": details for (host, port), details in self.hubs.items()}
+
+    def filter(self, match_none=False, **kwargs):
+        if not kwargs:
+            return self.hubs
+        result = {}
+        for hub, details in self.hubs.items():
+            for key, constraint in kwargs.items():
+                value = details.get(key)
+                if value == constraint or (match_none and value is None):
+                    result[hub] = details
+                    break
+        return result
+
+    def load(self):
+        if self.path:
+            with open(self.path, 'r') as known_hubs_file:
+                raw = known_hubs_file.read()
+                for hub, details in yaml.safe_load(raw).items():
+                    self.set(hub, details)
+
+    def save(self):
+        if self.path:
+            with open(self.path, 'w') as known_hubs_file:
+                known_hubs_file.write(yaml.safe_dump(self.serialized, default_flow_style=False))
+
+    def set(self, hub: str, details: Dict):
+        if hub and hub.count(':') == 1:
+            host, port = hub.split(':')
+            hub_parts = (host, int(port))
+            if hub_parts not in self.hubs:
+                self.hubs[hub_parts] = details
+                return hub
+
+    def add_hubs(self, hubs: List[str]):
+        added = False
+        for hub in hubs:
+            if self.set(hub, {}) is not None:
+                added = True
+        return added
+
+    def items(self):
+        return self.hubs.items()
+
+    def __bool__(self):
+        return len(self) > 0
+
+    def __len__(self):
+        return self.hubs.__len__()
+
+    def __iter__(self):
+        return iter(self.hubs)
+
 class EnvironmentAccess:
     PREFIX = 'LBRY_'

@@ -377,7 +458,7 @@ class ConfigFileAccess:
         del self.data[key]


-TBC = typing.TypeVar('TBC', bound='BaseConfig')
+TBC = TypeVar('TBC', bound='BaseConfig')


 class BaseConfig:
@@ -508,6 +589,9 @@ class CLIConfig(TranscodeConfig):

 class Config(CLIConfig):
+
+    jurisdiction = String("Limit interactions to wallet server in this jurisdiction.")
+
     # directories
     data_dir = Path("Directory path to store blobs.", metavar='DIR')
     download_dir = Path(
@@ -529,7 +613,7 @@ class Config(CLIConfig):
         "ports or have firewall rules you likely want to disable this.", True
     )
     udp_port = Integer("UDP port for communicating on the LBRY DHT", 4444, previous_names=['dht_node_port'])
-    tcp_port = Integer("TCP port to listen for incoming blob requests", 3333, previous_names=['peer_port'])
+    tcp_port = Integer("TCP port to listen for incoming blob requests", 4444, previous_names=['peer_port'])
     prometheus_port = Integer("Port to expose prometheus metrics (off by default)", 0)

     network_interface = String("Interface to use for the DHT and blob exchange", '0.0.0.0')
@@ -538,17 +622,24 @@ class Config(CLIConfig):
         "Routing table bucket index below which we always split the bucket if given a new key to add to it and "
         "the bucket is full. As this value is raised the depth of the routing table (and number of peers in it) "
         "will increase. This setting is used by seed nodes, you probably don't want to change it during normal "
-        "use.", 1
+        "use.", 2
+    )
+    is_bootstrap_node = Toggle(
+        "When running as a bootstrap node, disable all logic related to balancing the routing table, so we can "
+        "add as many peers as possible and better help first-runs.", False
     )

     # protocol timeouts
     download_timeout = Float("Cumulative timeout for a stream to begin downloading before giving up", 30.0)
     blob_download_timeout = Float("Timeout to download a blob from a peer", 30.0)
+    hub_timeout = Float("Timeout when making a hub request", 30.0)
     peer_connect_timeout = Float("Timeout to establish a TCP connection to a peer", 3.0)
     node_rpc_timeout = Float("Timeout when making a DHT request", constants.RPC_TIMEOUT)

     # blob announcement and download
     save_blobs = Toggle("Save encrypted blob files for hosting, otherwise download blobs to memory only.", True)
+    network_storage_limit = Integer("Disk space in MB to be allocated for helping the P2P network. 0 = disable", 0)
+    blob_storage_limit = Integer("Disk space in MB to be allocated for blob storage. 0 = no limit", 0)
     blob_lru_cache_size = Integer(
         "LRU cache size for decrypted downloaded blobs used to minimize re-downloading the same blobs when "
         "replying to a range request. Set to 0 to disable.", 32
@@ -565,6 +656,7 @@ class Config(CLIConfig):
         "Maximum number of peers to connect to while downloading a blob", 4,
         previous_names=['max_connections_per_stream']
     )
+    concurrent_hub_requests = Integer("Maximum number of concurrent hub requests", 32)
     fixed_peer_delay = Float(
         "Amount of seconds before adding the reflector servers as potential peers to download from in case dht"
         "peers are not found or are slow", 2.0
@@ -593,6 +685,14 @@ class Config(CLIConfig):
         ('cdn.reflector.lbry.com', 5567)
     ])

+    tracker_servers = Servers("BitTorrent-compatible (BEP15) UDP trackers for helping P2P discovery", [
+        ('tracker.lbry.com', 9252),
+        ('tracker.lbry.grin.io', 9252),
+        ('tracker.lbry.pigg.es', 9252),
+        ('tracker.lizard.technology', 9252),
+        ('s1.lbry.network', 9252),
+    ])
+
     lbryum_servers = Servers("SPV wallet servers", [
         ('spv11.lbry.com', 50001),
         ('spv12.lbry.com', 50001),
@@ -603,21 +703,27 @@ class Config(CLIConfig):
         ('spv17.lbry.com', 50001),
         ('spv18.lbry.com', 50001),
         ('spv19.lbry.com', 50001),
+        ('hub.lbry.grin.io', 50001),
+        ('hub.lizard.technology', 50001),
+        ('s1.lbry.network', 50001),
     ])
     known_dht_nodes = Servers("Known nodes for bootstrapping connection to the DHT", [
+        ('dht.lbry.grin.io', 4444),  # Grin
+        ('dht.lbry.madiator.com', 4444),  # Madiator
+        ('dht.lbry.pigg.es', 4444),  # Pigges
         ('lbrynet1.lbry.com', 4444),  # US EAST
         ('lbrynet2.lbry.com', 4444),  # US WEST
         ('lbrynet3.lbry.com', 4444),  # EU
-        ('lbrynet4.lbry.com', 4444)  # ASIA
+        ('lbrynet4.lbry.com', 4444),  # ASIA
+        ('dht.lizard.technology', 4444),  # Jack
+        ('s2.lbry.network', 4444),
     ])

-    comment_server = String("Comment server API URL", "https://comments.lbry.com/api/v2")
-
     # blockchain
     blockchain_name = String("Blockchain name - lbrycrd_main, lbrycrd_regtest, or lbrycrd_testnet", 'lbrycrd_main')

     # daemon
-    save_files = Toggle("Save downloaded files when calling `get` by default", True)
+    save_files = Toggle("Save downloaded files when calling `get` by default", False)
     components_to_skip = Strings("components which will be skipped during start-up of daemon", [])
     share_usage_data = Toggle(
         "Whether to share usage stats and diagnostic info with LBRY.", False,
@@ -636,7 +742,8 @@ class Config(CLIConfig):
     coin_selection_strategy = StringChoice(
         "Strategy to use when selecting UTXOs for a transaction",
-        STRATEGIES, "standard")
+        STRATEGIES, "prefer_confirmed"
+    )

     transaction_cache_size = Integer("Transaction cache size", 2 ** 17)
     save_resolved_claims = Toggle(
@@ -655,6 +762,7 @@ class Config(CLIConfig):
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
         self.set_default_paths()
+        self.known_hubs = KnownHubsList(self)

     def set_default_paths(self):
         if 'darwin' in sys.platform.lower():
@@ -676,7 +784,7 @@ class Config(CLIConfig):
         return os.path.join(self.data_dir, 'lbrynet.log')


-def get_windows_directories() -> typing.Tuple[str, str, str]:
+def get_windows_directories() -> Tuple[str, str, str]:
     from lbry.winpaths import get_path, FOLDERID, UserHandle, \
         PathNotFoundException  # pylint: disable=import-outside-toplevel

@@ -698,14 +806,14 @@ def get_windows_directories() -> typing.Tuple[str, str, str]:
     return data_dir, lbryum_dir, download_dir


-def get_darwin_directories() -> typing.Tuple[str, str, str]:
+def get_darwin_directories() -> Tuple[str, str, str]:
     data_dir = user_data_dir('LBRY')
     lbryum_dir = os.path.expanduser('~/.lbryum')
     download_dir = os.path.expanduser('~/Downloads')
     return data_dir, lbryum_dir, download_dir


-def get_linux_directories() -> typing.Tuple[str, str, str]:
+def get_linux_directories() -> Tuple[str, str, str]:
     try:
         with open(os.path.join(user_config_dir(), 'user-dirs.dirs'), 'r') as xdg:
             down_dir = re.search(r'XDG_DOWNLOAD_DIR=(.+)', xdg.read())

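The new KnownHubsList keys hubs by (host, port) tuples parsed from "host:port" strings and round-trips them through a YAML file in the wallet directory. A hedged usage sketch (hub addresses are illustrative; with no config the list is purely in-memory):

    hubs = KnownHubsList()
    hubs.add_hubs(["spv11.lbry.com:50001", "example.hub:50001"])
    assert ("spv11.lbry.com", 50001) in hubs
    # filter() matches stored per-hub details; match_none=True also accepts
    # hubs that have no value recorded for the constrained key yet
    for (host, port), details in hubs.filter(match_none=True, region="us").items():
        print(host, port, details)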
View file

@@ -67,7 +67,7 @@ class ConnectionManager:
         while True:
             last = time.perf_counter()
-            await asyncio.sleep(0.1, loop=self.loop)
+            await asyncio.sleep(0.1)
             self._status['incoming_bps'].clear()
             self._status['outgoing_bps'].clear()
             now = time.perf_counter()

View file

@@ -1,6 +1,9 @@
 import asyncio
 import typing
 import logging
+
+from prometheus_client import Counter, Gauge
+
 if typing.TYPE_CHECKING:
     from lbry.dht.node import Node
     from lbry.extras.daemon.storage import SQLiteStorage
@@ -9,45 +12,59 @@ log = logging.getLogger(__name__)

 class BlobAnnouncer:
+    announcements_sent_metric = Counter(
+        "announcements_sent", "Number of announcements sent and their respective status.", namespace="dht_node",
+        labelnames=("peers", "error"),
+    )
+    announcement_queue_size_metric = Gauge(
+        "announcement_queue_size", "Number of hashes waiting to be announced.", namespace="dht_node",
+        labelnames=("scope",)
+    )
+
     def __init__(self, loop: asyncio.AbstractEventLoop, node: 'Node', storage: 'SQLiteStorage'):
         self.loop = loop
         self.node = node
         self.storage = storage
         self.announce_task: asyncio.Task = None
         self.announce_queue: typing.List[str] = []
+        self._done = asyncio.Event()
+        self.announced = set()

-    async def _submit_announcement(self, blob_hash):
-        try:
-            peers = len(await self.node.announce_blob(blob_hash))
-            if peers > 4:
-                return blob_hash
-            else:
-                log.debug("failed to announce %s, could only find %d peers, retrying soon.", blob_hash[:8], peers)
-        except Exception as err:
-            if isinstance(err, asyncio.CancelledError):  # TODO: remove when updated to 3.8
-                raise err
-            log.warning("error announcing %s: %s", blob_hash[:8], str(err))
+    async def _run_consumer(self):
+        while self.announce_queue:
+            try:
+                blob_hash = self.announce_queue.pop()
+                peers = len(await self.node.announce_blob(blob_hash))
+                self.announcements_sent_metric.labels(peers=peers, error=False).inc()
+                if peers > 4:
+                    self.announced.add(blob_hash)
+                else:
+                    log.debug("failed to announce %s, could only find %d peers, retrying soon.", blob_hash[:8], peers)
+            except Exception as err:
+                self.announcements_sent_metric.labels(peers=0, error=True).inc()
+                log.warning("error announcing %s: %s", blob_hash[:8], str(err))

     async def _announce(self, batch_size: typing.Optional[int] = 10):
         while batch_size:
             if not self.node.joined.is_set():
                 await self.node.joined.wait()
-            await asyncio.sleep(60, loop=self.loop)
+            await asyncio.sleep(60)
             if not self.node.protocol.routing_table.get_peers():
                 log.warning("No peers in DHT, announce round skipped")
                 continue
             self.announce_queue.extend(await self.storage.get_blobs_to_announce())
+            self.announcement_queue_size_metric.labels(scope="global").set(len(self.announce_queue))
             log.debug("announcer task wake up, %d blobs to announce", len(self.announce_queue))
             while len(self.announce_queue) > 0:
                 log.info("%i blobs to announce", len(self.announce_queue))
-                announced = await asyncio.gather(*[
-                    self._submit_announcement(
-                        self.announce_queue.pop()) for _ in range(batch_size) if self.announce_queue
-                ], loop=self.loop)
-                announced = list(filter(None, announced))
+                await asyncio.gather(*[self._run_consumer() for _ in range(batch_size)])
+                announced = list(filter(None, self.announced))
                 if announced:
                     await self.storage.update_last_announced_blobs(announced)
                     log.info("announced %i blobs", len(announced))
+                self.announced.clear()
+            self._done.set()
+            self._done.clear()

     def start(self, batch_size: typing.Optional[int] = 10):
         assert not self.announce_task or self.announce_task.done(), "already running"
@@ -56,3 +73,6 @@ class BlobAnnouncer:
     def stop(self):
         if self.announce_task and not self.announce_task.done():
             self.announce_task.cancel()
+
+    def wait(self):
+        return self._done.wait()

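The announcer now uses a small worker-pool pattern: batch_size consumer coroutines drain one shared list concurrently, each popping the next hash as soon as it finishes the last, instead of the old fixed gather of batch_size one-shot tasks. The shape of that pattern in isolation:

    import asyncio

    async def consumer(queue: list, results: set):
        # keep pulling work until the shared queue is empty
        while queue:
            item = queue.pop()
            await asyncio.sleep(0)  # stand-in for node.announce_blob(item)
            results.add(item)

    async def main():
        queue, results = list(range(25)), set()
        await asyncio.gather(*(consumer(queue, results) for _ in range(10)))
        assert len(results) == 25

    asyncio.run(main())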
View file

@@ -20,7 +20,6 @@ MAYBE_PING_DELAY = 300  # 5 minutes
 CHECK_REFRESH_INTERVAL = REFRESH_INTERVAL / 5
 RPC_ID_LENGTH = 20
 PROTOCOL_VERSION = 1
-BOTTOM_OUT_LIMIT = 3
 MSG_SIZE_LIMIT = 1400

View file

@@ -1,9 +1,11 @@
 import logging
 import asyncio
 import typing
-import binascii
 import socket
-from lbry.utils import resolve_host
+
+from prometheus_client import Gauge
+
+from lbry.utils import aclosing, resolve_host
 from lbry.dht import constants
 from lbry.dht.peer import make_kademlia_peer
 from lbry.dht.protocol.distance import Distance
@@ -18,20 +20,32 @@ log = logging.getLogger(__name__)

 class Node:
+    storing_peers_metric = Gauge(
+        "storing_peers", "Number of peers storing blobs announced to this node", namespace="dht_node",
+        labelnames=("scope",),
+    )
+    stored_blob_with_x_bytes_colliding = Gauge(
+        "stored_blobs_x_bytes_colliding", "Number of blobs with at least X bytes colliding with this node id prefix",
+        namespace="dht_node", labelnames=("amount",)
+    )
+
     def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', node_id: bytes, udp_port: int,
                  internal_udp_port: int, peer_port: int, external_ip: str, rpc_timeout: float = constants.RPC_TIMEOUT,
-                 split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX,
+                 split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_bootstrap_node: bool = False,
                  storage: typing.Optional['SQLiteStorage'] = None):
         self.loop = loop
         self.internal_udp_port = internal_udp_port
         self.protocol = KademliaProtocol(loop, peer_manager, node_id, external_ip, udp_port, peer_port, rpc_timeout,
-                                         split_buckets_under_index)
+                                         split_buckets_under_index, is_bootstrap_node)
         self.listening_port: asyncio.DatagramTransport = None
-        self.joined = asyncio.Event(loop=self.loop)
+        self.joined = asyncio.Event()
         self._join_task: asyncio.Task = None
         self._refresh_task: asyncio.Task = None
         self._storage = storage

+    @property
+    def stored_blob_hashes(self):
+        return self.protocol.data_store.keys()
+
     async def refresh_node(self, force_once=False):
         while True:
             # remove peers with expired blob announcements from the datastore
@@ -41,17 +55,21 @@ class Node:
             # add all peers in the routing table
             total_peers.extend(self.protocol.routing_table.get_peers())
             # add all the peers who have announced blobs to us
-            total_peers.extend(self.protocol.data_store.get_storing_contacts())
+            storing_peers = self.protocol.data_store.get_storing_contacts()
+            self.storing_peers_metric.labels("global").set(len(storing_peers))
+            total_peers.extend(storing_peers)
+
+            counts = {0: 0, 1: 0, 2: 0}
+            node_id = self.protocol.node_id
+            for blob_hash in self.protocol.data_store.keys():
+                bytes_colliding = 0 if blob_hash[0] != node_id[0] else 2 if blob_hash[1] == node_id[1] else 1
+                counts[bytes_colliding] += 1
+            self.stored_blob_with_x_bytes_colliding.labels(amount=0).set(counts[0])
+            self.stored_blob_with_x_bytes_colliding.labels(amount=1).set(counts[1])
+            self.stored_blob_with_x_bytes_colliding.labels(amount=2).set(counts[2])

             # get ids falling in the midpoint of each bucket that hasn't been recently updated
             node_ids = self.protocol.routing_table.get_refresh_list(0, True)
-            # if we have 3 or fewer populated buckets get two random ids in the range of each to try and
-            # populate/split the buckets further
-            buckets_with_contacts = self.protocol.routing_table.buckets_with_contacts()
-            if buckets_with_contacts <= 3:
-                for i in range(buckets_with_contacts):
-                    node_ids.append(self.protocol.routing_table.random_id_in_bucket_range(i))
-                    node_ids.append(self.protocol.routing_table.random_id_in_bucket_range(i))

             if self.protocol.routing_table.get_peers():
                 # if we have node ids to look up, perform the iterative search until we have k results
@@ -61,7 +79,7 @@ class Node:
             else:
                 if force_once:
                     break
-                fut = asyncio.Future(loop=self.loop)
+                fut = asyncio.Future()
                 self.loop.call_later(constants.REFRESH_INTERVAL // 4, fut.set_result, None)
                 await fut
                 continue
@@ -75,12 +93,12 @@ class Node:

             if force_once:
                 break
-            fut = asyncio.Future(loop=self.loop)
+            fut = asyncio.Future()
             self.loop.call_later(constants.REFRESH_INTERVAL, fut.set_result, None)
             await fut
     async def announce_blob(self, blob_hash: str) -> typing.List[bytes]:
-        hash_value = binascii.unhexlify(blob_hash.encode())
+        hash_value = bytes.fromhex(blob_hash)
         assert len(hash_value) == constants.HASH_LENGTH
         peers = await self.peer_search(hash_value)

@@ -90,12 +108,12 @@ class Node:
         for peer in peers:
             log.debug("store to %s %s %s", peer.address, peer.udp_port, peer.tcp_port)
         stored_to_tup = await asyncio.gather(
-            *(self.protocol.store_to_peer(hash_value, peer) for peer in peers), loop=self.loop
+            *(self.protocol.store_to_peer(hash_value, peer) for peer in peers)
         )
         stored_to = [node_id for node_id, contacted in stored_to_tup if contacted]
         if stored_to:
             log.debug(
-                "Stored %s to %i of %i attempted peers", binascii.hexlify(hash_value).decode()[:8],
+                "Stored %s to %i of %i attempted peers", hash_value.hex()[:8],
                 len(stored_to), len(peers)
             )
         else:
@@ -164,39 +182,36 @@ class Node:
                         for address, udp_port in known_node_urls or []
                     ]))
                 except socket.gaierror:
-                    await asyncio.sleep(30, loop=self.loop)
+                    await asyncio.sleep(30)
                     continue

                 self.protocol.peer_manager.reset()
                 self.protocol.ping_queue.enqueue_maybe_ping(*seed_peers, delay=0.0)
                 await self.peer_search(self.protocol.node_id, shortlist=seed_peers, count=32)

-            await asyncio.sleep(1, loop=self.loop)
+            await asyncio.sleep(1)

     def start(self, interface: str, known_node_urls: typing.Optional[typing.List[typing.Tuple[str, int]]] = None):
         self._join_task = self.loop.create_task(self.join_network(interface, known_node_urls))

     def get_iterative_node_finder(self, key: bytes, shortlist: typing.Optional[typing.List['KademliaPeer']] = None,
-                                  bottom_out_limit: int = constants.BOTTOM_OUT_LIMIT,
                                   max_results: int = constants.K) -> IterativeNodeFinder:
-        return IterativeNodeFinder(self.loop, self.protocol.peer_manager, self.protocol.routing_table, self.protocol,
-                                   key, bottom_out_limit, max_results, None, shortlist)
+        shortlist = shortlist or self.protocol.routing_table.find_close_peers(key)
+        return IterativeNodeFinder(self.loop, self.protocol, key, max_results, shortlist)

     def get_iterative_value_finder(self, key: bytes, shortlist: typing.Optional[typing.List['KademliaPeer']] = None,
-                                   bottom_out_limit: int = 40,
                                    max_results: int = -1) -> IterativeValueFinder:
-        return IterativeValueFinder(self.loop, self.protocol.peer_manager, self.protocol.routing_table, self.protocol,
-                                    key, bottom_out_limit, max_results, None, shortlist)
+        shortlist = shortlist or self.protocol.routing_table.find_close_peers(key)
+        return IterativeValueFinder(self.loop, self.protocol, key, max_results, shortlist)

     async def peer_search(self, node_id: bytes, count=constants.K, max_results=constants.K * 2,
-                          bottom_out_limit=20, shortlist: typing.Optional[typing.List['KademliaPeer']] = None
+                          shortlist: typing.Optional[typing.List['KademliaPeer']] = None
                           ) -> typing.List['KademliaPeer']:
         peers = []
-        async for iteration_peers in self.get_iterative_node_finder(
-                node_id, shortlist=shortlist, bottom_out_limit=bottom_out_limit, max_results=max_results):
-            peers.extend(iteration_peers)
+        async with aclosing(self.get_iterative_node_finder(
+                node_id, shortlist=shortlist, max_results=max_results)) as node_finder:
+            async for iteration_peers in node_finder:
+                peers.extend(iteration_peers)
         distance = Distance(node_id)
         peers.sort(key=lambda peer: distance(peer.node_id))
         return peers[:count]
@@ -222,39 +237,46 @@ class Node:
             # prioritize peers who reply to a dht ping first
             # this minimizes attempting to make tcp connections that won't work later to dead or unreachable peers
-            async for results in self.get_iterative_value_finder(binascii.unhexlify(blob_hash.encode())):
+            async with aclosing(self.get_iterative_value_finder(bytes.fromhex(blob_hash))) as value_finder:
+                async for results in value_finder:
                     to_put = []
                     for peer in results:
                         if peer.address == self.protocol.external_ip and self.protocol.peer_port == peer.tcp_port:
                             continue
                         is_good = self.protocol.peer_manager.peer_is_good(peer)
                         if is_good:
                             # the peer has replied recently over UDP, it can probably be reached on the TCP port
                             to_put.append(peer)
                         elif is_good is None:
                             if not peer.udp_port:
                                 # TODO: use the same port for TCP and UDP
                                 # the udp port must be guessed
                                 # default to the ports being the same. if the TCP port appears to be <=0.48.0 default,
                                 # including on a network with several nodes, then assume the udp port is proportionately
                                 # based on a starting port of 4444
                                 udp_port_to_try = peer.tcp_port
                                 if 3400 > peer.tcp_port > 3332:
                                     udp_port_to_try = (peer.tcp_port - 3333) + 4444
                                 self.loop.create_task(put_into_result_queue_after_pong(
                                     make_kademlia_peer(peer.node_id, peer.address, udp_port_to_try, peer.tcp_port)
                                 ))
                             else:
                                 self.loop.create_task(put_into_result_queue_after_pong(peer))
                         else:
                             # the peer is known to be bad/unreachable, skip trying to connect to it over TCP
                             log.debug("skip bad peer %s:%i for %s", peer.address, peer.tcp_port, blob_hash)
                     if to_put:
                         result_queue.put_nowait(to_put)

     def accumulate_peers(self, search_queue: asyncio.Queue,
                          peer_queue: typing.Optional[asyncio.Queue] = None
                          ) -> typing.Tuple[asyncio.Queue, asyncio.Task]:
-        queue = peer_queue or asyncio.Queue(loop=self.loop)
+        queue = peer_queue or asyncio.Queue()
         return queue, self.loop.create_task(self._accumulate_peers_for_value(search_queue, queue))
+
+
+async def get_kademlia_peers_from_hosts(peer_list: typing.List[typing.Tuple[str, int]]) -> typing.List['KademliaPeer']:
+    peer_address_list = [(await resolve_host(url, port, proto='tcp'), port) for url, port in peer_list]
+    kademlia_peer_list = [make_kademlia_peer(None, address, None, tcp_port=port, allow_localhost=True)
+                          for address, port in peer_address_list]
+    return kademlia_peer_list

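peer_search() and the value-finder loop are now wrapped in aclosing(), which guarantees the async generator's cleanup runs even when iteration is abandoned early (the diff imports aclosing from lbry.utils; contextlib ships the same helper on Python 3.10+). The idiom in miniature:

    import asyncio
    from contextlib import aclosing  # stand-in for the lbry.utils equivalent

    async def numbers():
        try:
            for i in range(10):
                yield i
        finally:
            print("generator finalized")

    async def main():
        async with aclosing(numbers()) as gen:
            async for n in gen:
                if n == 3:
                    break  # aclosing() still finalizes the generator here

    asyncio.run(main())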
View file

@@ -1,18 +1,21 @@
 import typing
 import asyncio
 import logging
-from binascii import hexlify
 from dataclasses import dataclass, field
 from functools import lru_cache
-from lbry.utils import is_valid_public_ipv4 as _is_valid_public_ipv4
+
+from prometheus_client import Gauge
+
+from lbry.utils import is_valid_public_ipv4 as _is_valid_public_ipv4, LRUCache
 from lbry.dht import constants
 from lbry.dht.serialization.datagram import make_compact_address, make_compact_ip, decode_compact_address

 ALLOW_LOCALHOST = False
+CACHE_SIZE = 16384
 log = logging.getLogger(__name__)


-@lru_cache(1024)
+@lru_cache(CACHE_SIZE)
 def make_kademlia_peer(node_id: typing.Optional[bytes], address: typing.Optional[str],
                        udp_port: typing.Optional[int] = None,
                        tcp_port: typing.Optional[int] = None,
@@ -26,17 +29,26 @@ def is_valid_public_ipv4(address, allow_localhost: bool = False):

 class PeerManager:
+    peer_manager_keys_metric = Gauge(
+        "peer_manager_keys", "Number of keys tracked by PeerManager dicts (sum)", namespace="dht_node",
+        labelnames=("scope",)
+    )
+
     def __init__(self, loop: asyncio.AbstractEventLoop):
         self._loop = loop
         self._rpc_failures: typing.Dict[
             typing.Tuple[str, int], typing.Tuple[typing.Optional[float], typing.Optional[float]]
-        ] = {}
-        self._last_replied: typing.Dict[typing.Tuple[str, int], float] = {}
-        self._last_sent: typing.Dict[typing.Tuple[str, int], float] = {}
-        self._last_requested: typing.Dict[typing.Tuple[str, int], float] = {}
-        self._node_id_mapping: typing.Dict[typing.Tuple[str, int], bytes] = {}
-        self._node_id_reverse_mapping: typing.Dict[bytes, typing.Tuple[str, int]] = {}
-        self._node_tokens: typing.Dict[bytes, (float, bytes)] = {}
+        ] = LRUCache(CACHE_SIZE)
+        self._last_replied: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
+        self._last_sent: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
+        self._last_requested: typing.Dict[typing.Tuple[str, int], float] = LRUCache(CACHE_SIZE)
+        self._node_id_mapping: typing.Dict[typing.Tuple[str, int], bytes] = LRUCache(CACHE_SIZE)
+        self._node_id_reverse_mapping: typing.Dict[bytes, typing.Tuple[str, int]] = LRUCache(CACHE_SIZE)
+        self._node_tokens: typing.Dict[bytes, (float, bytes)] = LRUCache(CACHE_SIZE)
+
+    def count_cache_keys(self):
+        return len(self._rpc_failures) + len(self._last_replied) + len(self._last_sent) + len(
+            self._last_requested) + len(self._node_id_mapping) + len(self._node_id_reverse_mapping) + len(
+            self._node_tokens)

     def reset(self):
         for statistic in (self._rpc_failures, self._last_replied, self._last_sent, self._last_requested):
@@ -86,6 +98,10 @@ class PeerManager:
             self._node_id_mapping.pop(self._node_id_reverse_mapping.pop(node_id))
         self._node_id_mapping[(address, udp_port)] = node_id
         self._node_id_reverse_mapping[node_id] = (address, udp_port)
+        self.peer_manager_keys_metric.labels("global").set(self.count_cache_keys())
+
+    def get_node_id_for_endpoint(self, address, port):
+        return self._node_id_mapping.get((address, port))

     def prune(self):  # TODO: periodically call this
         now = self._loop.time()
@@ -137,9 +153,10 @@ class PeerManager:
     def peer_is_good(self, peer: 'KademliaPeer'):
         return self.contact_triple_is_good(peer.node_id, peer.address, peer.udp_port)

-    def decode_tcp_peer_from_compact_address(self, compact_address: bytes) -> 'KademliaPeer':  # pylint: disable=no-self-use
-        node_id, address, tcp_port = decode_compact_address(compact_address)
-        return make_kademlia_peer(node_id, address, udp_port=None, tcp_port=tcp_port)
+
+def decode_tcp_peer_from_compact_address(compact_address: bytes) -> 'KademliaPeer':
+    node_id, address, tcp_port = decode_compact_address(compact_address)
+    return make_kademlia_peer(node_id, address, udp_port=None, tcp_port=tcp_port)
@dataclass(unsafe_hash=True) @dataclass(unsafe_hash=True)
@ -154,11 +171,11 @@ class KademliaPeer:
def __post_init__(self): def __post_init__(self):
if self._node_id is not None: if self._node_id is not None:
if not len(self._node_id) == constants.HASH_LENGTH: if not len(self._node_id) == constants.HASH_LENGTH:
raise ValueError("invalid node_id: {}".format(hexlify(self._node_id).decode())) raise ValueError("invalid node_id: {}".format(self._node_id.hex()))
if self.udp_port is not None and not 1 <= self.udp_port <= 65535: if self.udp_port is not None and not 1024 <= self.udp_port <= 65535:
raise ValueError("invalid udp port") raise ValueError(f"invalid udp port: {self.address}:{self.udp_port}")
if self.tcp_port is not None and not 1 <= self.tcp_port <= 65535: if self.tcp_port is not None and not 1024 <= self.tcp_port <= 65535:
raise ValueError("invalid tcp port") raise ValueError(f"invalid tcp port: {self.address}:{self.tcp_port}")
if not is_valid_public_ipv4(self.address, self.allow_localhost): if not is_valid_public_ipv4(self.address, self.allow_localhost):
raise ValueError(f"invalid ip address: '{self.address}'") raise ValueError(f"invalid ip address: '{self.address}'")
@ -177,3 +194,6 @@ class KademliaPeer:
def compact_ip(self): def compact_ip(self):
return make_compact_ip(self.address) return make_compact_ip(self.address)
def __str__(self):
return f"{self.__class__.__name__}({self.node_id.hex()[:8]}@{self.address}:{self.udp_port}-{self.tcp_port})"


@ -16,6 +16,12 @@ class DictDataStore:
self._peer_manager = peer_manager self._peer_manager = peer_manager
self.completed_blobs: typing.Set[str] = set() self.completed_blobs: typing.Set[str] = set()
def keys(self):
return self._data_store.keys()
def __len__(self):
return self._data_store.__len__()
def removed_expired_peers(self): def removed_expired_peers(self):
now = self.loop.time() now = self.loop.time()
keys = list(self._data_store.keys()) keys = list(self._data_store.keys())
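The new keys()/__len__ hooks exist mostly so callers can size the store; a minimal sketch of how that pairs with the stored_blobs gauge added in protocol.py further down (data_store here is any object with __len__):
from prometheus_client import Gauge

stored_blob_metric = Gauge(
    "stored_blobs", "Number of blobs announced by other peers",
    namespace="dht_node", labelnames=("scope",),
)

def record_store_size(data_store) -> None:
    # called after a peer is added to a blob; mirrors the pattern in KademliaRPC.store()
    stored_blob_metric.labels("global").set(len(data_store))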


@ -1,18 +1,17 @@
import asyncio import asyncio
from binascii import hexlify
from itertools import chain from itertools import chain
from collections import defaultdict from collections import defaultdict, OrderedDict
from collections.abc import AsyncIterator
import typing import typing
import logging import logging
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
from lbry.dht import constants from lbry.dht import constants
from lbry.dht.error import RemoteException, TransportNotConnected from lbry.dht.error import RemoteException, TransportNotConnected
from lbry.dht.protocol.distance import Distance from lbry.dht.protocol.distance import Distance
from lbry.dht.peer import make_kademlia_peer from lbry.dht.peer import make_kademlia_peer, decode_tcp_peer_from_compact_address
from lbry.dht.serialization.datagram import PAGE_KEY from lbry.dht.serialization.datagram import PAGE_KEY
if TYPE_CHECKING: if TYPE_CHECKING:
from lbry.dht.protocol.routing_table import TreeRoutingTable
from lbry.dht.protocol.protocol import KademliaProtocol from lbry.dht.protocol.protocol import KademliaProtocol
from lbry.dht.peer import PeerManager, KademliaPeer from lbry.dht.peer import PeerManager, KademliaPeer
@ -27,6 +26,15 @@ class FindResponse:
def get_close_triples(self) -> typing.List[typing.Tuple[bytes, str, int]]: def get_close_triples(self) -> typing.List[typing.Tuple[bytes, str, int]]:
raise NotImplementedError() raise NotImplementedError()
def get_close_kademlia_peers(self, peer_info) -> typing.Generator[typing.Iterator['KademliaPeer'], None, None]:
for contact_triple in self.get_close_triples():
node_id, address, udp_port = contact_triple
try:
yield make_kademlia_peer(node_id, address, udp_port)
except ValueError:
log.warning("misbehaving peer %s:%i returned peer with reserved ip %s:%i", peer_info.address,
peer_info.udp_port, address, udp_port)
class FindNodeResponse(FindResponse): class FindNodeResponse(FindResponse):
def __init__(self, key: bytes, close_triples: typing.List[typing.Tuple[bytes, str, int]]): def __init__(self, key: bytes, close_triples: typing.List[typing.Tuple[bytes, str, int]]):
@ -57,57 +65,33 @@ class FindValueResponse(FindResponse):
return [(node_id, address.decode(), port) for node_id, address, port in self.close_triples] return [(node_id, address.decode(), port) for node_id, address, port in self.close_triples]
def get_shortlist(routing_table: 'TreeRoutingTable', key: bytes, class IterativeFinder(AsyncIterator):
shortlist: typing.Optional[typing.List['KademliaPeer']]) -> typing.List['KademliaPeer']: def __init__(self, loop: asyncio.AbstractEventLoop,
""" protocol: 'KademliaProtocol', key: bytes,
If not provided, initialize the shortlist of peers to probe to the (up to) k closest peers in the routing table max_results: typing.Optional[int] = constants.K,
:param routing_table: a TreeRoutingTable
:param key: a 48 byte hash
:param shortlist: optional manually provided shortlist, this is done during bootstrapping when there are no
peers in the routing table. During bootstrap the shortlist is set to be the seed nodes.
"""
if len(key) != constants.HASH_LENGTH:
raise ValueError("invalid key length: %i" % len(key))
return shortlist or routing_table.find_close_peers(key)
class IterativeFinder:
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager',
routing_table: 'TreeRoutingTable', protocol: 'KademliaProtocol', key: bytes,
bottom_out_limit: typing.Optional[int] = 2, max_results: typing.Optional[int] = constants.K,
exclude: typing.Optional[typing.List[typing.Tuple[str, int]]] = None,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None): shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
if len(key) != constants.HASH_LENGTH: if len(key) != constants.HASH_LENGTH:
raise ValueError("invalid key length: %i" % len(key)) raise ValueError("invalid key length: %i" % len(key))
self.loop = loop self.loop = loop
self.peer_manager = peer_manager self.peer_manager = protocol.peer_manager
self.routing_table = routing_table
self.protocol = protocol self.protocol = protocol
self.key = key self.key = key
self.bottom_out_limit = bottom_out_limit self.max_results = max(constants.K, max_results)
self.max_results = max_results
self.exclude = exclude or []
self.active: typing.Set['KademliaPeer'] = set() self.active: typing.Dict['KademliaPeer', int] = OrderedDict() # peer: distance, sorted
self.contacted: typing.Set['KademliaPeer'] = set() self.contacted: typing.Set['KademliaPeer'] = set()
self.distance = Distance(key) self.distance = Distance(key)
self.closest_peer: typing.Optional['KademliaPeer'] = None self.iteration_queue = asyncio.Queue()
self.prev_closest_peer: typing.Optional['KademliaPeer'] = None
self.iteration_queue = asyncio.Queue(loop=self.loop) self.running_probes: typing.Dict['KademliaPeer', asyncio.Task] = {}
self.running_probes: typing.Set[asyncio.Task] = set()
self.iteration_count = 0 self.iteration_count = 0
self.bottom_out_count = 0
self.running = False self.running = False
self.tasks: typing.List[asyncio.Task] = [] self.tasks: typing.List[asyncio.Task] = []
self.delayed_calls: typing.List[asyncio.Handle] = [] for peer in shortlist:
for peer in get_shortlist(routing_table, key, shortlist):
if peer.node_id: if peer.node_id:
self._add_active(peer) self._add_active(peer, force=True)
else: else:
# seed nodes # seed nodes
self._schedule_probe(peer) self._schedule_probe(peer)
@ -139,66 +123,79 @@ class IterativeFinder:
""" """
return [] return []
def _is_closer(self, peer: 'KademliaPeer') -> bool: def _add_active(self, peer, force=False):
return not self.closest_peer or self.distance.is_closer(peer.node_id, self.closest_peer.node_id) if not force and self.peer_manager.peer_is_good(peer) is False:
return
def _add_active(self, peer): if peer in self.contacted:
return
if peer not in self.active and peer.node_id and peer.node_id != self.protocol.node_id: if peer not in self.active and peer.node_id and peer.node_id != self.protocol.node_id:
self.active.add(peer) self.active[peer] = self.distance(peer.node_id)
if self._is_closer(peer): self.active = OrderedDict(sorted(self.active.items(), key=lambda item: item[1]))
self.prev_closest_peer = self.closest_peer
self.closest_peer = peer
async def _handle_probe_result(self, peer: 'KademliaPeer', response: FindResponse): async def _handle_probe_result(self, peer: 'KademliaPeer', response: FindResponse):
self._add_active(peer) self._add_active(peer)
for contact_triple in response.get_close_triples(): for new_peer in response.get_close_kademlia_peers(peer):
node_id, address, udp_port = contact_triple self._add_active(new_peer)
try:
self._add_active(make_kademlia_peer(node_id, address, udp_port))
except ValueError:
log.warning("misbehaving peer %s:%i returned peer with reserved ip %s:%i", peer.address,
peer.udp_port, address, udp_port)
self.check_result_ready(response) self.check_result_ready(response)
self._log_state(reason="check result")
def _reset_closest(self, peer):
if peer in self.active:
del self.active[peer]
async def _send_probe(self, peer: 'KademliaPeer'): async def _send_probe(self, peer: 'KademliaPeer'):
try: try:
response = await self.send_probe(peer) response = await self.send_probe(peer)
except asyncio.TimeoutError: except asyncio.TimeoutError:
self.active.discard(peer) self._reset_closest(peer)
return return
except asyncio.CancelledError:
log.debug("%s[%x] cancelled probe",
type(self).__name__, id(self))
raise
except ValueError as err: except ValueError as err:
log.warning(str(err)) log.warning(str(err))
self.active.discard(peer) self._reset_closest(peer)
return return
except TransportNotConnected: except TransportNotConnected:
return self.aclose() await self._aclose(reason="not connected")
return
except RemoteException: except RemoteException:
self._reset_closest(peer)
return return
return await self._handle_probe_result(peer, response) return await self._handle_probe_result(peer, response)
async def _search_round(self): def _search_round(self):
""" """
Send up to constants.alpha (5) probes to closest active peers Send up to constants.alpha (5) probes to closest active peers
""" """
added = 0 added = 0
to_probe = list(self.active - self.contacted) for index, peer in enumerate(self.active.keys()):
to_probe.sort(key=lambda peer: self.distance(self.key)) if index == 0:
for peer in to_probe: log.debug("%s[%x] closest to probe: %s",
if added >= constants.ALPHA: type(self).__name__, id(self),
peer.node_id.hex()[:8])
if peer in self.contacted:
continue
if len(self.running_probes) >= constants.ALPHA:
break
if index > (constants.K + len(self.running_probes)):
break break
origin_address = (peer.address, peer.udp_port) origin_address = (peer.address, peer.udp_port)
if origin_address in self.exclude:
continue
if peer.node_id == self.protocol.node_id: if peer.node_id == self.protocol.node_id:
continue continue
if origin_address == (self.protocol.external_ip, self.protocol.udp_port): if origin_address == (self.protocol.external_ip, self.protocol.udp_port):
continue continue
self._schedule_probe(peer) self._schedule_probe(peer)
added += 1 added += 1
log.debug("running %d probes", len(self.running_probes)) log.debug("%s[%x] running %d probes for key %s",
type(self).__name__, id(self),
len(self.running_probes), self.key.hex()[:8])
if not added and not self.running_probes: if not added and not self.running_probes:
log.debug("search for %s exhausted", hexlify(self.key)[:8]) log.debug("%s[%x] search for %s exhausted",
type(self).__name__, id(self),
self.key.hex()[:8])
self.search_exhausted() self.search_exhausted()
def _schedule_probe(self, peer: 'KademliaPeer'): def _schedule_probe(self, peer: 'KademliaPeer'):
@ -207,33 +204,24 @@ class IterativeFinder:
t = self.loop.create_task(self._send_probe(peer)) t = self.loop.create_task(self._send_probe(peer))
def callback(_): def callback(_):
self.running_probes.difference_update({ self.running_probes.pop(peer, None)
probe for probe in self.running_probes if probe.done() or probe == t if self.running:
}) self._search_round()
if not self.running_probes:
self.tasks.append(self.loop.create_task(self._search_task(0.0)))
t.add_done_callback(callback) t.add_done_callback(callback)
self.running_probes.add(t) self.running_probes[peer] = t
async def _search_task(self, delay: typing.Optional[float] = constants.ITERATIVE_LOOKUP_DELAY): def _log_state(self, reason="?"):
try: log.debug("%s[%x] [%s] %s: %i active nodes %i contacted %i produced %i queued",
if self.running: type(self).__name__, id(self), self.key.hex()[:8],
await self._search_round() reason, len(self.active), len(self.contacted),
if self.running: self.iteration_count, self.iteration_queue.qsize())
self.delayed_calls.append(self.loop.call_later(delay, self._search))
except (asyncio.CancelledError, StopAsyncIteration, TransportNotConnected):
if self.running:
self.loop.call_soon(self.aclose)
def _search(self):
self.tasks.append(self.loop.create_task(self._search_task()))
def __aiter__(self): def __aiter__(self):
if self.running: if self.running:
raise Exception("already running") raise Exception("already running")
self.running = True self.running = True
self._search() self.loop.call_soon(self._search_round)
return self return self
async def __anext__(self) -> typing.List['KademliaPeer']: async def __anext__(self) -> typing.List['KademliaPeer']:
@ -246,47 +234,57 @@ class IterativeFinder:
raise StopAsyncIteration raise StopAsyncIteration
self.iteration_count += 1 self.iteration_count += 1
return result return result
except (asyncio.CancelledError, StopAsyncIteration): except asyncio.CancelledError:
self.loop.call_soon(self.aclose) await self._aclose(reason="cancelled")
raise
except StopAsyncIteration:
await self._aclose(reason="no more results")
raise raise
def aclose(self): async def _aclose(self, reason="?"):
log.debug("%s[%x] [%s] shutdown because %s: %i active nodes %i contacted %i produced %i queued",
type(self).__name__, id(self), self.key.hex()[:8],
reason, len(self.active), len(self.contacted),
self.iteration_count, self.iteration_queue.qsize())
self.running = False self.running = False
self.iteration_queue.put_nowait(None) self.iteration_queue.put_nowait(None)
for task in chain(self.tasks, self.running_probes, self.delayed_calls): for task in chain(self.tasks, self.running_probes.values()):
task.cancel() task.cancel()
self.tasks.clear() self.tasks.clear()
self.running_probes.clear() self.running_probes.clear()
self.delayed_calls.clear()
async def aclose(self):
if self.running:
await self._aclose(reason="aclose")
log.debug("%s[%x] [%s] async close completed",
type(self).__name__, id(self), self.key.hex()[:8])
class IterativeNodeFinder(IterativeFinder): class IterativeNodeFinder(IterativeFinder):
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', def __init__(self, loop: asyncio.AbstractEventLoop,
routing_table: 'TreeRoutingTable', protocol: 'KademliaProtocol', key: bytes, protocol: 'KademliaProtocol', key: bytes,
bottom_out_limit: typing.Optional[int] = 2, max_results: typing.Optional[int] = constants.K, max_results: typing.Optional[int] = constants.K,
exclude: typing.Optional[typing.List[typing.Tuple[str, int]]] = None,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None): shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
super().__init__(loop, peer_manager, routing_table, protocol, key, bottom_out_limit, max_results, exclude, super().__init__(loop, protocol, key, max_results, shortlist)
shortlist)
self.yielded_peers: typing.Set['KademliaPeer'] = set() self.yielded_peers: typing.Set['KademliaPeer'] = set()
async def send_probe(self, peer: 'KademliaPeer') -> FindNodeResponse: async def send_probe(self, peer: 'KademliaPeer') -> FindNodeResponse:
log.debug("probing %s:%d %s", peer.address, peer.udp_port, hexlify(peer.node_id)[:8] if peer.node_id else '') log.debug("probe %s:%d (%s) for NODE %s",
peer.address, peer.udp_port, peer.node_id.hex()[:8] if peer.node_id else '', self.key.hex()[:8])
response = await self.protocol.get_rpc_peer(peer).find_node(self.key) response = await self.protocol.get_rpc_peer(peer).find_node(self.key)
return FindNodeResponse(self.key, response) return FindNodeResponse(self.key, response)
def search_exhausted(self): def search_exhausted(self):
self.put_result(self.active, finish=True) self.put_result(self.active.keys(), finish=True)
def put_result(self, from_iter: typing.Iterable['KademliaPeer'], finish=False): def put_result(self, from_iter: typing.Iterable['KademliaPeer'], finish=False):
not_yet_yielded = [ not_yet_yielded = [
peer for peer in from_iter peer for peer in from_iter
if peer not in self.yielded_peers if peer not in self.yielded_peers
and peer.node_id != self.protocol.node_id and peer.node_id != self.protocol.node_id
and self.peer_manager.peer_is_good(peer) is not False and self.peer_manager.peer_is_good(peer) is True # return only peers who answered
] ]
not_yet_yielded.sort(key=lambda peer: self.distance(peer.node_id)) not_yet_yielded.sort(key=lambda peer: self.distance(peer.node_id))
to_yield = not_yet_yielded[:min(constants.K, len(not_yet_yielded))] to_yield = not_yet_yielded[:max(constants.K, self.max_results)]
if to_yield: if to_yield:
self.yielded_peers.update(to_yield) self.yielded_peers.update(to_yield)
self.iteration_queue.put_nowait(to_yield) self.iteration_queue.put_nowait(to_yield)
@ -298,27 +296,15 @@ class IterativeNodeFinder(IterativeFinder):
if found: if found:
log.debug("found") log.debug("found")
return self.put_result(self.active, finish=True) return self.put_result(self.active.keys(), finish=True)
if self.prev_closest_peer and self.closest_peer and not self._is_closer(self.prev_closest_peer):
# log.info("improving, %i %i %i %i %i", len(self.shortlist), len(self.active), len(self.contacted),
# self.bottom_out_count, self.iteration_count)
self.bottom_out_count = 0
elif self.prev_closest_peer and self.closest_peer:
self.bottom_out_count += 1
log.info("bottom out %i %i %i", len(self.active), len(self.contacted), self.bottom_out_count)
if self.bottom_out_count >= self.bottom_out_limit or self.iteration_count >= self.bottom_out_limit:
log.info("limit hit")
self.put_result(self.active, True)
class IterativeValueFinder(IterativeFinder): class IterativeValueFinder(IterativeFinder):
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', def __init__(self, loop: asyncio.AbstractEventLoop,
routing_table: 'TreeRoutingTable', protocol: 'KademliaProtocol', key: bytes, protocol: 'KademliaProtocol', key: bytes,
bottom_out_limit: typing.Optional[int] = 2, max_results: typing.Optional[int] = constants.K, max_results: typing.Optional[int] = constants.K,
exclude: typing.Optional[typing.List[typing.Tuple[str, int]]] = None,
shortlist: typing.Optional[typing.List['KademliaPeer']] = None): shortlist: typing.Optional[typing.List['KademliaPeer']] = None):
super().__init__(loop, peer_manager, routing_table, protocol, key, bottom_out_limit, max_results, exclude, super().__init__(loop, protocol, key, max_results, shortlist)
shortlist)
self.blob_peers: typing.Set['KademliaPeer'] = set() self.blob_peers: typing.Set['KademliaPeer'] = set()
# this tracks the index of the most recent page we requested from each peer # this tracks the index of the most recent page we requested from each peer
self.peer_pages: typing.DefaultDict['KademliaPeer', int] = defaultdict(int) self.peer_pages: typing.DefaultDict['KademliaPeer', int] = defaultdict(int)
@ -326,6 +312,8 @@ class IterativeValueFinder(IterativeFinder):
self.discovered_peers: typing.Dict['KademliaPeer', typing.Set['KademliaPeer']] = defaultdict(set) self.discovered_peers: typing.Dict['KademliaPeer', typing.Set['KademliaPeer']] = defaultdict(set)
async def send_probe(self, peer: 'KademliaPeer') -> FindValueResponse: async def send_probe(self, peer: 'KademliaPeer') -> FindValueResponse:
log.debug("probe %s:%d (%s) for VALUE %s",
peer.address, peer.udp_port, peer.node_id.hex()[:8], self.key.hex()[:8])
page = self.peer_pages[peer] page = self.peer_pages[peer]
response = await self.protocol.get_rpc_peer(peer).find_value(self.key, page=page) response = await self.protocol.get_rpc_peer(peer).find_value(self.key, page=page)
parsed = FindValueResponse(self.key, response) parsed = FindValueResponse(self.key, response)
@ -335,7 +323,7 @@ class IterativeValueFinder(IterativeFinder):
decoded_peers = set() decoded_peers = set()
for compact_addr in parsed.found_compact_addresses: for compact_addr in parsed.found_compact_addresses:
try: try:
decoded_peers.add(self.peer_manager.decode_tcp_peer_from_compact_address(compact_addr)) decoded_peers.add(decode_tcp_peer_from_compact_address(compact_addr))
except ValueError: except ValueError:
log.warning("misbehaving peer %s:%i returned invalid peer for blob", log.warning("misbehaving peer %s:%i returned invalid peer for blob",
peer.address, peer.udp_port) peer.address, peer.udp_port)
@ -347,7 +335,6 @@ class IterativeValueFinder(IterativeFinder):
already_known + len(parsed.found_compact_addresses)) already_known + len(parsed.found_compact_addresses))
if len(self.discovered_peers[peer]) != already_known + len(parsed.found_compact_addresses): if len(self.discovered_peers[peer]) != already_known + len(parsed.found_compact_addresses):
log.warning("misbehaving peer %s:%i returned duplicate peers for blob", peer.address, peer.udp_port) log.warning("misbehaving peer %s:%i returned duplicate peers for blob", peer.address, peer.udp_port)
parsed.found_compact_addresses.clear()
elif len(parsed.found_compact_addresses) >= constants.K and self.peer_pages[peer] < parsed.pages: elif len(parsed.found_compact_addresses) >= constants.K and self.peer_pages[peer] < parsed.pages:
# the peer returned a full page and indicates it has more # the peer returned a full page and indicates it has more
self.peer_pages[peer] += 1 self.peer_pages[peer] += 1
@ -358,26 +345,15 @@ class IterativeValueFinder(IterativeFinder):
def check_result_ready(self, response: FindValueResponse): def check_result_ready(self, response: FindValueResponse):
if response.found: if response.found:
blob_peers = [self.peer_manager.decode_tcp_peer_from_compact_address(compact_addr) blob_peers = [decode_tcp_peer_from_compact_address(compact_addr)
for compact_addr in response.found_compact_addresses] for compact_addr in response.found_compact_addresses]
to_yield = [] to_yield = []
self.bottom_out_count = 0
for blob_peer in blob_peers: for blob_peer in blob_peers:
if blob_peer not in self.blob_peers: if blob_peer not in self.blob_peers:
self.blob_peers.add(blob_peer) self.blob_peers.add(blob_peer)
to_yield.append(blob_peer) to_yield.append(blob_peer)
if to_yield: if to_yield:
# log.info("found %i new peers for blob", len(to_yield))
self.iteration_queue.put_nowait(to_yield) self.iteration_queue.put_nowait(to_yield)
# if self.max_results and len(self.blob_peers) >= self.max_results:
# log.info("enough blob peers found")
# if not self.finished.is_set():
# self.finished.set()
elif self.prev_closest_peer and self.closest_peer:
self.bottom_out_count += 1
if self.bottom_out_count >= self.bottom_out_limit:
log.info("blob peer search bottomed out")
self.iteration_queue.put_nowait(None)
def get_initial_result(self) -> typing.List['KademliaPeer']: def get_initial_result(self) -> typing.List['KademliaPeer']:
if self.protocol.data_store.has_peers_for_blob(self.key): if self.protocol.data_store.has_peers_for_blob(self.key):
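With the bottom-out counters gone, IterativeFinder is consumed purely as an async iterator: batches arrive via __anext__ until the search exhausts itself, and the caller closes the finder when it has enough. A hedged usage sketch, where make_finder() stands in for constructing an IterativeNodeFinder or IterativeValueFinder:
async def collect_peers(make_finder, max_peers: int = 8):
    finder = make_finder()
    collected = []
    try:
        async for batch in finder:  # __anext__ pops batches from iteration_queue
            collected.extend(batch)
            if len(collected) >= max_peers:
                break
    finally:
        await finder.aclose()  # aclose() is a coroutine after this patch
    return collected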


@ -3,12 +3,14 @@ import socket
import functools import functools
import hashlib import hashlib
import asyncio import asyncio
import time
import typing import typing
import binascii
import random import random
from asyncio.protocols import DatagramProtocol from asyncio.protocols import DatagramProtocol
from asyncio.transports import DatagramTransport from asyncio.transports import DatagramTransport
from prometheus_client import Gauge, Counter, Histogram
from lbry.dht import constants from lbry.dht import constants
from lbry.dht.serialization.bencoding import DecodeError from lbry.dht.serialization.bencoding import DecodeError
from lbry.dht.serialization.datagram import decode_datagram, ErrorDatagram, ResponseDatagram, RequestDatagram from lbry.dht.serialization.datagram import decode_datagram, ErrorDatagram, ResponseDatagram, RequestDatagram
@ -31,6 +33,11 @@ OLD_PROTOCOL_ERRORS = {
class KademliaRPC: class KademliaRPC:
stored_blob_metric = Gauge(
"stored_blobs", "Number of blobs announced by other peers", namespace="dht_node",
labelnames=("scope",),
)
def __init__(self, protocol: 'KademliaProtocol', loop: asyncio.AbstractEventLoop, peer_port: int = 3333): def __init__(self, protocol: 'KademliaProtocol', loop: asyncio.AbstractEventLoop, peer_port: int = 3333):
self.protocol = protocol self.protocol = protocol
self.loop = loop self.loop = loop
@ -62,6 +69,7 @@ class KademliaRPC:
self.protocol.data_store.add_peer_to_blob( self.protocol.data_store.add_peer_to_blob(
rpc_contact, blob_hash rpc_contact, blob_hash
) )
self.stored_blob_metric.labels("global").set(len(self.protocol.data_store))
return b'OK' return b'OK'
def find_node(self, rpc_contact: 'KademliaPeer', key: bytes) -> typing.List[typing.Tuple[bytes, str, int]]: def find_node(self, rpc_contact: 'KademliaPeer', key: bytes) -> typing.List[typing.Tuple[bytes, str, int]]:
@ -97,7 +105,7 @@ class KademliaRPC:
if not rpc_contact.tcp_port or peer.compact_address_tcp() != rpc_contact.compact_address_tcp() if not rpc_contact.tcp_port or peer.compact_address_tcp() != rpc_contact.compact_address_tcp()
] ]
# if we don't have k storing peers to return and we have this hash locally, include our contact information # if we don't have k storing peers to return and we have this hash locally, include our contact information
if len(peers) < constants.K and binascii.hexlify(key).decode() in self.protocol.data_store.completed_blobs: if len(peers) < constants.K and key.hex() in self.protocol.data_store.completed_blobs:
peers.append(self.compact_address()) peers.append(self.compact_address())
if not peers: if not peers:
response[PAGE_KEY] = 0 response[PAGE_KEY] = 0
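The paging contract implied by this handler: each find_value response carries at most one page of compact addresses plus the total page count under PAGE_KEY, and (per the iterative finder changes above) the requester only asks for page i+1 after receiving a full page of constants.K entries. A toy illustration with an assumed K of 8:
K = 8  # constants.K in lbry.dht.constants (assumed here)

def page_of(compact_addresses: list, page: int):
    # slice one page of results and report how many pages exist in total
    pages = (len(compact_addresses) + K - 1) // K
    return compact_addresses[page * K:(page + 1) * K], pages

addresses = [f"addr-{i}" for i in range(20)]
first_page, total = page_of(addresses, 0)  # 8 entries, total == 3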
@ -210,6 +218,10 @@ class PingQueue:
def running(self): def running(self):
return self._running return self._running
@property
def busy(self):
return self._running and (any(self._running_pings) or any(self._pending_contacts))
def enqueue_maybe_ping(self, *peers: 'KademliaPeer', delay: typing.Optional[float] = None): def enqueue_maybe_ping(self, *peers: 'KademliaPeer', delay: typing.Optional[float] = None):
delay = delay if delay is not None else self._default_delay delay = delay if delay is not None else self._default_delay
now = self._loop.time() now = self._loop.time()
@ -221,7 +233,7 @@ class PingQueue:
async def ping_task(): async def ping_task():
try: try:
if self._protocol.peer_manager.peer_is_good(peer): if self._protocol.peer_manager.peer_is_good(peer):
if peer not in self._protocol.routing_table.get_peers(): if not self._protocol.routing_table.get_peer(peer.node_id):
self._protocol.add_peer(peer) self._protocol.add_peer(peer)
return return
await self._protocol.get_rpc_peer(peer).ping() await self._protocol.get_rpc_peer(peer).ping()
@ -241,7 +253,7 @@ class PingQueue:
del self._pending_contacts[peer] del self._pending_contacts[peer]
self.maybe_ping(peer) self.maybe_ping(peer)
break break
await asyncio.sleep(1, loop=self._loop) await asyncio.sleep(1)
def start(self): def start(self):
assert not self._running assert not self._running
@ -260,9 +272,33 @@ class PingQueue:
class KademliaProtocol(DatagramProtocol): class KademliaProtocol(DatagramProtocol):
request_sent_metric = Counter(
"request_sent", "Number of requests send from DHT RPC protocol", namespace="dht_node",
labelnames=("method",),
)
request_success_metric = Counter(
"request_success", "Number of successful requests", namespace="dht_node",
labelnames=("method",),
)
request_error_metric = Counter(
"request_error", "Number of errors returned from request to other peers", namespace="dht_node",
labelnames=("method",),
)
HISTOGRAM_BUCKETS = (
.005, .01, .025, .05, .075, .1, .25, .5, .75, 1.0, 2.5, 3.0, 3.5, 4.0, 4.50, 5.0, 5.50, 6.0, float('inf')
)
response_time_metric = Histogram(
"response_time", "Response times of DHT RPC requests", namespace="dht_node", buckets=HISTOGRAM_BUCKETS,
labelnames=("method",)
)
received_request_metric = Counter(
"received_request", "Number of received DHT RPC requests", namespace="dht_node",
labelnames=("method",),
)
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', node_id: bytes, external_ip: str, def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', node_id: bytes, external_ip: str,
udp_port: int, peer_port: int, rpc_timeout: float = constants.RPC_TIMEOUT, udp_port: int, peer_port: int, rpc_timeout: float = constants.RPC_TIMEOUT,
split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX): split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_boostrap_node: bool = False):
self.peer_manager = peer_manager self.peer_manager = peer_manager
self.loop = loop self.loop = loop
self.node_id = node_id self.node_id = node_id
@ -277,15 +313,16 @@ class KademliaProtocol(DatagramProtocol):
self.transport: DatagramTransport = None self.transport: DatagramTransport = None
self.old_token_secret = constants.generate_id() self.old_token_secret = constants.generate_id()
self.token_secret = constants.generate_id() self.token_secret = constants.generate_id()
self.routing_table = TreeRoutingTable(self.loop, self.peer_manager, self.node_id, split_buckets_under_index) self.routing_table = TreeRoutingTable(
self.loop, self.peer_manager, self.node_id, split_buckets_under_index, is_bootstrap_node=is_boostrap_node)
self.data_store = DictDataStore(self.loop, self.peer_manager) self.data_store = DictDataStore(self.loop, self.peer_manager)
self.ping_queue = PingQueue(self.loop, self) self.ping_queue = PingQueue(self.loop, self)
self.node_rpc = KademliaRPC(self, self.loop, self.peer_port) self.node_rpc = KademliaRPC(self, self.loop, self.peer_port)
self.rpc_timeout = rpc_timeout self.rpc_timeout = rpc_timeout
self._split_lock = asyncio.Lock(loop=self.loop) self._split_lock = asyncio.Lock()
self._to_remove: typing.Set['KademliaPeer'] = set() self._to_remove: typing.Set['KademliaPeer'] = set()
self._to_add: typing.Set['KademliaPeer'] = set() self._to_add: typing.Set['KademliaPeer'] = set()
self._wakeup_routing_task = asyncio.Event(loop=self.loop) self._wakeup_routing_task = asyncio.Event()
self.maintaing_routing_task: typing.Optional[asyncio.Task] = None self.maintaing_routing_task: typing.Optional[asyncio.Task] = None
@functools.lru_cache(128) @functools.lru_cache(128)
@ -324,72 +361,10 @@ class KademliaProtocol(DatagramProtocol):
return args, {} return args, {}
async def _add_peer(self, peer: 'KademliaPeer'): async def _add_peer(self, peer: 'KademliaPeer'):
if not peer.node_id: async def probe(some_peer: 'KademliaPeer'):
log.warning("Tried adding a peer with no node id!") rpc_peer = self.get_rpc_peer(some_peer)
return False await rpc_peer.ping()
for my_peer in self.routing_table.get_peers(): return await self.routing_table.add_peer(peer, probe)
if (my_peer.address, my_peer.udp_port) == (peer.address, peer.udp_port) and my_peer.node_id != peer.node_id:
self.routing_table.remove_peer(my_peer)
self.routing_table.join_buckets()
bucket_index = self.routing_table.kbucket_index(peer.node_id)
if self.routing_table.buckets[bucket_index].add_peer(peer):
return True
# The bucket is full; see if it can be split (by checking if its range includes the host node's node_id)
if self.routing_table.should_split(bucket_index, peer.node_id):
self.routing_table.split_bucket(bucket_index)
# Retry the insertion attempt
result = await self._add_peer(peer)
self.routing_table.join_buckets()
return result
else:
# We can't split the k-bucket
#
# The 13 page kademlia paper specifies that the least recently contacted node in the bucket
# shall be pinged. If it fails to reply it is replaced with the new contact. If the ping is successful
# the new contact is ignored and not added to the bucket (sections 2.2 and 2.4).
#
# A reasonable extension to this is BEP 0005, which extends the above:
#
# Not all nodes that we learn about are equal. Some are "good" and some are not.
# Many nodes using the DHT are able to send queries and receive responses,
# but are not able to respond to queries from other nodes. It is important that
# each node's routing table must contain only known good nodes. A good node is
# a node has responded to one of our queries within the last 15 minutes. A node
# is also good if it has ever responded to one of our queries and has sent us a
# query within the last 15 minutes. After 15 minutes of inactivity, a node becomes
# questionable. Nodes become bad when they fail to respond to multiple queries
# in a row. Nodes that we know are good are given priority over nodes with unknown status.
#
# When there are bad or questionable nodes in the bucket, the least recent is selected for
# potential replacement (BEP 0005). When all nodes in the bucket are fresh, the head (least recent)
# contact is selected as described in section 2.2 of the kademlia paper. In both cases the new contact
# is ignored if the pinged node replies.
not_good_contacts = self.routing_table.buckets[bucket_index].get_bad_or_unknown_peers()
not_recently_replied = []
for my_peer in not_good_contacts:
last_replied = self.peer_manager.get_last_replied(my_peer.address, my_peer.udp_port)
if not last_replied or last_replied + 60 < self.loop.time():
not_recently_replied.append(my_peer)
if not_recently_replied:
to_replace = not_recently_replied[0]
else:
to_replace = self.routing_table.buckets[bucket_index].peers[0]
last_replied = self.peer_manager.get_last_replied(to_replace.address, to_replace.udp_port)
if last_replied and last_replied + 60 > self.loop.time():
return False
log.debug("pinging %s:%s", to_replace.address, to_replace.udp_port)
try:
to_replace_rpc = self.get_rpc_peer(to_replace)
await to_replace_rpc.ping()
return False
except asyncio.TimeoutError:
log.debug("Replacing dead contact in bucket %i: %s:%i with %s:%i ", bucket_index,
to_replace.address, to_replace.udp_port, peer.address, peer.udp_port)
if to_replace in self.routing_table.buckets[bucket_index]:
self.routing_table.buckets[bucket_index].remove_peer(to_replace)
return await self._add_peer(peer)
def add_peer(self, peer: 'KademliaPeer'): def add_peer(self, peer: 'KademliaPeer'):
if peer.node_id == self.node_id: if peer.node_id == self.node_id:
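The bucket-eviction logic that used to live here has moved into TreeRoutingTable.add_peer (see the routing_table.py changes further down); the protocol now only supplies a probe coroutine. A minimal sketch of that contract, with protocol and new_peer as placeholders:
async def add_with_probe(protocol, new_peer) -> bool:
    async def probe(some_peer):
        # ping() raises asyncio.TimeoutError / RemoteException on failure,
        # which the routing table treats as grounds to evict a stale incumbent
        await protocol.get_rpc_peer(some_peer).ping()
    return await protocol.routing_table.add_peer(new_peer, probe)  # True if inserted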
@ -407,16 +382,15 @@ class KademliaProtocol(DatagramProtocol):
async with self._split_lock: async with self._split_lock:
peer = self._to_remove.pop() peer = self._to_remove.pop()
self.routing_table.remove_peer(peer) self.routing_table.remove_peer(peer)
self.routing_table.join_buckets()
while self._to_add: while self._to_add:
async with self._split_lock: async with self._split_lock:
await self._add_peer(self._to_add.pop()) await self._add_peer(self._to_add.pop())
await asyncio.gather(self._wakeup_routing_task.wait(), asyncio.sleep(.1, loop=self.loop), loop=self.loop) await asyncio.gather(self._wakeup_routing_task.wait(), asyncio.sleep(.1))
self._wakeup_routing_task.clear() self._wakeup_routing_task.clear()
def _handle_rpc(self, sender_contact: 'KademliaPeer', message: RequestDatagram): def _handle_rpc(self, sender_contact: 'KademliaPeer', message: RequestDatagram):
assert sender_contact.node_id != self.node_id, (binascii.hexlify(sender_contact.node_id)[:8].decode(), assert sender_contact.node_id != self.node_id, (sender_contact.node_id.hex()[:8],
binascii.hexlify(self.node_id)[:8].decode()) self.node_id.hex()[:8])
method = message.method method = message.method
if method not in [b'ping', b'store', b'findNode', b'findValue']: if method not in [b'ping', b'store', b'findNode', b'findValue']:
raise AttributeError('Invalid method: %s' % message.method.decode()) raise AttributeError('Invalid method: %s' % message.method.decode())
@ -448,11 +422,15 @@ class KademliaProtocol(DatagramProtocol):
def handle_request_datagram(self, address: typing.Tuple[str, int], request_datagram: RequestDatagram): def handle_request_datagram(self, address: typing.Tuple[str, int], request_datagram: RequestDatagram):
# This is an RPC method request # This is an RPC method request
self.received_request_metric.labels(method=request_datagram.method).inc()
self.peer_manager.report_last_requested(address[0], address[1]) self.peer_manager.report_last_requested(address[0], address[1])
try: peer = self.routing_table.get_peer(request_datagram.node_id)
peer = self.routing_table.get_peer(request_datagram.node_id) if not peer:
except IndexError: try:
peer = make_kademlia_peer(request_datagram.node_id, address[0], address[1]) peer = make_kademlia_peer(request_datagram.node_id, address[0], address[1])
except ValueError as err:
log.warning("error replying to %s: %s", address[0], str(err))
return
try: try:
self._handle_rpc(peer, request_datagram) self._handle_rpc(peer, request_datagram)
# if the contact is not known to be bad (yet) and we haven't yet queried it, send it a ping so that it # if the contact is not known to be bad (yet) and we haven't yet queried it, send it a ping so that it
@ -552,12 +530,12 @@ class KademliaProtocol(DatagramProtocol):
address[0], address[1], OLD_PROTOCOL_ERRORS[error_datagram.response] address[0], address[1], OLD_PROTOCOL_ERRORS[error_datagram.response]
) )
def datagram_received(self, datagram: bytes, address: typing.Tuple[str, int]) -> None: # pylint: disable=arguments-differ def datagram_received(self, datagram: bytes, address: typing.Tuple[str, int]) -> None: # pylint: disable=arguments-renamed
try: try:
message = decode_datagram(datagram) message = decode_datagram(datagram)
except (ValueError, TypeError, DecodeError): except (ValueError, TypeError, DecodeError):
self.peer_manager.report_failure(address[0], address[1]) self.peer_manager.report_failure(address[0], address[1])
log.warning("Couldn't decode dht datagram from %s: %s", address, binascii.hexlify(datagram).decode()) log.warning("Couldn't decode dht datagram from %s: %s", address, datagram.hex())
return return
if isinstance(message, RequestDatagram): if isinstance(message, RequestDatagram):
@ -572,14 +550,19 @@ class KademliaProtocol(DatagramProtocol):
self._send(peer, request) self._send(peer, request)
response_fut = self.sent_messages[request.rpc_id][1] response_fut = self.sent_messages[request.rpc_id][1]
try: try:
self.request_sent_metric.labels(method=request.method).inc()
start = time.perf_counter()
response = await asyncio.wait_for(response_fut, self.rpc_timeout) response = await asyncio.wait_for(response_fut, self.rpc_timeout)
self.response_time_metric.labels(method=request.method).observe(time.perf_counter() - start)
self.peer_manager.report_last_replied(peer.address, peer.udp_port) self.peer_manager.report_last_replied(peer.address, peer.udp_port)
self.request_success_metric.labels(method=request.method).inc()
return response return response
except asyncio.CancelledError: except asyncio.CancelledError:
if not response_fut.done(): if not response_fut.done():
response_fut.cancel() response_fut.cancel()
raise raise
except (asyncio.TimeoutError, RemoteException): except (asyncio.TimeoutError, RemoteException):
self.request_error_metric.labels(method=request.method).inc()
self.peer_manager.report_failure(peer.address, peer.udp_port) self.peer_manager.report_failure(peer.address, peer.udp_port)
if self.peer_manager.peer_is_good(peer) is False: if self.peer_manager.peer_is_good(peer) is False:
self.remove_peer(peer) self.remove_peer(peer)
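The metric calls threaded through this hunk follow one generic pattern: count the attempt, time the awaited response, and classify the outcome. A self-contained sketch of that pattern (send is a stand-in for the real request path; the namespace is changed so the sketch does not clash with the real collectors):
import time
import asyncio
from prometheus_client import Counter, Histogram

SENT = Counter("request_sent", "requests sent", namespace="dht_sketch", labelnames=("method",))
OK = Counter("request_success", "successful requests", namespace="dht_sketch", labelnames=("method",))
ERR = Counter("request_error", "failed requests", namespace="dht_sketch", labelnames=("method",))
LATENCY = Histogram("response_time", "RPC latency", namespace="dht_sketch", labelnames=("method",))

async def timed_rpc(method: str, send, timeout: float = 5.0):
    SENT.labels(method=method).inc()
    start = time.perf_counter()
    try:
        result = await asyncio.wait_for(send(), timeout)
    except Exception:
        ERR.labels(method=method).inc()
        raise
    LATENCY.labels(method=method).observe(time.perf_counter() - start)
    OK.labels(method=method).inc()
    return result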
@ -599,7 +582,7 @@ class KademliaProtocol(DatagramProtocol):
if len(data) > constants.MSG_SIZE_LIMIT: if len(data) > constants.MSG_SIZE_LIMIT:
log.warning("cannot send datagram larger than %i bytes (packet is %i bytes)", log.warning("cannot send datagram larger than %i bytes (packet is %i bytes)",
constants.MSG_SIZE_LIMIT, len(data)) constants.MSG_SIZE_LIMIT, len(data))
log.debug("Packet is too large to send: %s", binascii.hexlify(data[:3500]).decode()) log.debug("Packet is too large to send: %s", data[:3500].hex())
raise ValueError( raise ValueError(
f"cannot send datagram larger than {constants.MSG_SIZE_LIMIT} bytes (packet is {len(data)} bytes)" f"cannot send datagram larger than {constants.MSG_SIZE_LIMIT} bytes (packet is {len(data)} bytes)"
) )
@ -659,13 +642,13 @@ class KademliaProtocol(DatagramProtocol):
res = await self.get_rpc_peer(peer).store(hash_value) res = await self.get_rpc_peer(peer).store(hash_value)
if res != b"OK": if res != b"OK":
raise ValueError(res) raise ValueError(res)
log.debug("Stored %s to %s", binascii.hexlify(hash_value).decode()[:8], peer) log.debug("Stored %s to %s", hash_value.hex()[:8], peer)
return peer.node_id, True return peer.node_id, True
try: try:
return await __store() return await __store()
except asyncio.TimeoutError: except asyncio.TimeoutError:
log.debug("Timeout while storing blob_hash %s at %s", binascii.hexlify(hash_value).decode()[:8], peer) log.debug("Timeout while storing blob_hash %s at %s", hash_value.hex()[:8], peer)
return peer.node_id, False return peer.node_id, False
except ValueError as err: except ValueError as err:
log.error("Unexpected response: %s", err) log.error("Unexpected response: %s", err)


@ -4,7 +4,11 @@ import logging
import typing import typing
import itertools import itertools
from prometheus_client import Gauge
from lbry import utils
from lbry.dht import constants from lbry.dht import constants
from lbry.dht.error import RemoteException
from lbry.dht.protocol.distance import Distance from lbry.dht.protocol.distance import Distance
if typing.TYPE_CHECKING: if typing.TYPE_CHECKING:
from lbry.dht.peer import KademliaPeer, PeerManager from lbry.dht.peer import KademliaPeer, PeerManager
@ -13,10 +17,20 @@ log = logging.getLogger(__name__)
class KBucket: class KBucket:
""" Description - later
""" """
Kademlia K-bucket implementation.
"""
peer_in_routing_table_metric = Gauge(
"peers_in_routing_table", "Number of peers on routing table", namespace="dht_node",
labelnames=("scope",)
)
peer_with_x_bit_colliding_metric = Gauge(
"peer_x_bit_colliding", "Number of peers with at least X bits colliding with this node id",
namespace="dht_node", labelnames=("amount",)
)
def __init__(self, peer_manager: 'PeerManager', range_min: int, range_max: int, node_id: bytes): def __init__(self, peer_manager: 'PeerManager', range_min: int, range_max: int,
node_id: bytes, capacity: int = constants.K):
""" """
@param range_min: The lower boundary for the range in the n-bit ID @param range_min: The lower boundary for the range in the n-bit ID
space covered by this k-bucket space covered by this k-bucket
@ -24,12 +38,12 @@ class KBucket:
covered by this k-bucket covered by this k-bucket
""" """
self._peer_manager = peer_manager self._peer_manager = peer_manager
self.last_accessed = 0
self.range_min = range_min self.range_min = range_min
self.range_max = range_max self.range_max = range_max
self.peers: typing.List['KademliaPeer'] = [] self.peers: typing.List['KademliaPeer'] = []
self._node_id = node_id self._node_id = node_id
self._distance_to_self = Distance(node_id) self._distance_to_self = Distance(node_id)
self.capacity = capacity
def add_peer(self, peer: 'KademliaPeer') -> bool: def add_peer(self, peer: 'KademliaPeer') -> bool:
""" Add contact to _contact list in the right order. This will move the """ Add contact to _contact list in the right order. This will move the
@ -50,24 +64,25 @@ class KBucket:
self.peers.append(peer) self.peers.append(peer)
return True return True
else: else:
for i in range(len(self.peers)): for i, _ in enumerate(self.peers):
local_peer = self.peers[i] local_peer = self.peers[i]
if local_peer.node_id == peer.node_id: if local_peer.node_id == peer.node_id:
self.peers.remove(local_peer) self.peers.remove(local_peer)
self.peers.append(peer) self.peers.append(peer)
return True return True
if len(self.peers) < constants.K: if len(self.peers) < self.capacity:
self.peers.append(peer) self.peers.append(peer)
self.peer_in_routing_table_metric.labels("global").inc()
bits_colliding = utils.get_colliding_prefix_bits(peer.node_id, self._node_id)
self.peer_with_x_bit_colliding_metric.labels(amount=bits_colliding).inc()
return True return True
else: else:
return False return False
# raise BucketFull("No space in bucket to insert contact")
def get_peer(self, node_id: bytes) -> 'KademliaPeer': def get_peer(self, node_id: bytes) -> 'KademliaPeer':
for peer in self.peers: for peer in self.peers:
if peer.node_id == node_id: if peer.node_id == node_id:
return peer return peer
raise IndexError(node_id)
def get_peers(self, count=-1, exclude_contact=None, sort_distance_to=None) -> typing.List['KademliaPeer']: def get_peers(self, count=-1, exclude_contact=None, sort_distance_to=None) -> typing.List['KademliaPeer']:
""" Returns a list containing up to the first count number of contacts """ Returns a list containing up to the first count number of contacts
@ -124,6 +139,9 @@ class KBucket:
def remove_peer(self, peer: 'KademliaPeer') -> None: def remove_peer(self, peer: 'KademliaPeer') -> None:
self.peers.remove(peer) self.peers.remove(peer)
self.peer_in_routing_table_metric.labels("global").dec()
bits_colliding = utils.get_colliding_prefix_bits(peer.node_id, self._node_id)
self.peer_with_x_bit_colliding_metric.labels(amount=bits_colliding).dec()
def key_in_range(self, key: bytes) -> bool: def key_in_range(self, key: bytes) -> bool:
""" Tests whether the specified key (i.e. node ID) is in the range """ Tests whether the specified key (i.e. node ID) is in the range
@ -161,24 +179,36 @@ class TreeRoutingTable:
version of the Kademlia paper, in section 2.4. It does, however, use the version of the Kademlia paper, in section 2.4. It does, however, use the
ping RPC-based k-bucket eviction algorithm described in section 2.2 of ping RPC-based k-bucket eviction algorithm described in section 2.2 of
that paper. that paper.
BOOTSTRAP MODE: if set to True, we always add all peers. This is so a
bootstrap node does not get a bias towards its own node id and replies are
the best it can provide (joining peer knows its neighbors immediately).
Over time, this will need to be optimized so we use the disk, as holding
everything in memory won't be feasible anymore.
See: https://github.com/bittorrent/bootstrap-dht
""" """
bucket_in_routing_table_metric = Gauge(
"buckets_in_routing_table", "Number of buckets on routing table", namespace="dht_node",
labelnames=("scope",)
)
def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', parent_node_id: bytes, def __init__(self, loop: asyncio.AbstractEventLoop, peer_manager: 'PeerManager', parent_node_id: bytes,
split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX): split_buckets_under_index: int = constants.SPLIT_BUCKETS_UNDER_INDEX, is_bootstrap_node: bool = False):
self._loop = loop self._loop = loop
self._peer_manager = peer_manager self._peer_manager = peer_manager
self._parent_node_id = parent_node_id self._parent_node_id = parent_node_id
self._split_buckets_under_index = split_buckets_under_index self._split_buckets_under_index = split_buckets_under_index
self.buckets: typing.List[KBucket] = [ self.buckets: typing.List[KBucket] = [
KBucket( KBucket(
self._peer_manager, range_min=0, range_max=2 ** constants.HASH_BITS, node_id=self._parent_node_id self._peer_manager, range_min=0, range_max=2 ** constants.HASH_BITS, node_id=self._parent_node_id,
capacity=1 << 32 if is_bootstrap_node else constants.K
) )
] ]
def get_peers(self) -> typing.List['KademliaPeer']: def get_peers(self) -> typing.List['KademliaPeer']:
return list(itertools.chain.from_iterable(map(lambda bucket: bucket.peers, self.buckets))) return list(itertools.chain.from_iterable(map(lambda bucket: bucket.peers, self.buckets)))
def should_split(self, bucket_index: int, to_add: bytes) -> bool: def _should_split(self, bucket_index: int, to_add: bytes) -> bool:
# https://stackoverflow.com/questions/32129978/highly-unbalanced-kademlia-routing-table/32187456#32187456 # https://stackoverflow.com/questions/32129978/highly-unbalanced-kademlia-routing-table/32187456#32187456
if bucket_index < self._split_buckets_under_index: if bucket_index < self._split_buckets_under_index:
return True return True
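Everything in this table — bucket ranges, the closest-peer ordering in the finder, and the split decision here — rests on Kademlia's XOR metric. A minimal helper in the spirit of lbry.dht.protocol.distance.Distance (an assumption about its behaviour, not the class itself):
class XorDistance:
    def __init__(self, key: bytes):
        self.key = int.from_bytes(key, "big")

    def __call__(self, other: bytes) -> int:
        # smaller XOR value means "closer" in the 384-bit keyspace
        return self.key ^ int.from_bytes(other, "big")

d = XorDistance(b"\x00" * 47 + b"\x01")
assert d(b"\x00" * 47 + b"\x01") == 0  # identical ids are distance 0
assert d(b"\x00" * 47 + b"\x03") == 2  # ids differing in one low bit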
@ -203,39 +233,32 @@ class TreeRoutingTable:
return [] return []
def get_peer(self, contact_id: bytes) -> 'KademliaPeer': def get_peer(self, contact_id: bytes) -> 'KademliaPeer':
""" return self.buckets[self._kbucket_index(contact_id)].get_peer(contact_id)
@raise IndexError: No contact with the specified contact ID is known
by this node
"""
return self.buckets[self.kbucket_index(contact_id)].get_peer(contact_id)
def get_refresh_list(self, start_index: int = 0, force: bool = False) -> typing.List[bytes]: def get_refresh_list(self, start_index: int = 0, force: bool = False) -> typing.List[bytes]:
bucket_index = start_index
refresh_ids = [] refresh_ids = []
now = int(self._loop.time()) for offset, _ in enumerate(self.buckets[start_index:]):
for bucket in self.buckets[start_index:]: refresh_ids.append(self._midpoint_id_in_bucket_range(start_index + offset))
if force or now - bucket.last_accessed >= constants.REFRESH_INTERVAL: # if we have 3 or fewer populated buckets get two random ids in the range of each to try and
to_search = self.midpoint_id_in_bucket_range(bucket_index) # populate/split the buckets further
refresh_ids.append(to_search) buckets_with_contacts = self.buckets_with_contacts()
bucket_index += 1 if buckets_with_contacts <= 3:
for i in range(buckets_with_contacts):
refresh_ids.append(self._random_id_in_bucket_range(i))
refresh_ids.append(self._random_id_in_bucket_range(i))
return refresh_ids return refresh_ids
def remove_peer(self, peer: 'KademliaPeer') -> None: def remove_peer(self, peer: 'KademliaPeer') -> None:
if not peer.node_id: if not peer.node_id:
return return
bucket_index = self.kbucket_index(peer.node_id) bucket_index = self._kbucket_index(peer.node_id)
try: try:
self.buckets[bucket_index].remove_peer(peer) self.buckets[bucket_index].remove_peer(peer)
self._join_buckets()
except ValueError: except ValueError:
return return
def touch_kbucket(self, key: bytes) -> None: def _kbucket_index(self, key: bytes) -> int:
self.touch_kbucket_by_index(self.kbucket_index(key))
def touch_kbucket_by_index(self, bucket_index: int):
self.buckets[bucket_index].last_accessed = int(self._loop.time())
def kbucket_index(self, key: bytes) -> int:
i = 0 i = 0
for bucket in self.buckets: for bucket in self.buckets:
if bucket.key_in_range(key): if bucket.key_in_range(key):
@ -244,19 +267,19 @@ class TreeRoutingTable:
i += 1 i += 1
return i return i
def random_id_in_bucket_range(self, bucket_index: int) -> bytes: def _random_id_in_bucket_range(self, bucket_index: int) -> bytes:
random_id = int(random.randrange(self.buckets[bucket_index].range_min, self.buckets[bucket_index].range_max)) random_id = int(random.randrange(self.buckets[bucket_index].range_min, self.buckets[bucket_index].range_max))
return Distance( return Distance(
self._parent_node_id self._parent_node_id
)(random_id.to_bytes(constants.HASH_LENGTH, 'big')).to_bytes(constants.HASH_LENGTH, 'big') )(random_id.to_bytes(constants.HASH_LENGTH, 'big')).to_bytes(constants.HASH_LENGTH, 'big')
def midpoint_id_in_bucket_range(self, bucket_index: int) -> bytes: def _midpoint_id_in_bucket_range(self, bucket_index: int) -> bytes:
half = int((self.buckets[bucket_index].range_max - self.buckets[bucket_index].range_min) // 2) half = int((self.buckets[bucket_index].range_max - self.buckets[bucket_index].range_min) // 2)
return Distance(self._parent_node_id)( return Distance(self._parent_node_id)(
int(self.buckets[bucket_index].range_min + half).to_bytes(constants.HASH_LENGTH, 'big') int(self.buckets[bucket_index].range_min + half).to_bytes(constants.HASH_LENGTH, 'big')
).to_bytes(constants.HASH_LENGTH, 'big') ).to_bytes(constants.HASH_LENGTH, 'big')
def split_bucket(self, old_bucket_index: int) -> None: def _split_bucket(self, old_bucket_index: int) -> None:
""" Splits the specified k-bucket into two new buckets which together """ Splits the specified k-bucket into two new buckets which together
cover the same range in the key/ID space cover the same range in the key/ID space
@ -279,8 +302,9 @@ class TreeRoutingTable:
# ...and remove them from the old bucket # ...and remove them from the old bucket
for contact in new_bucket.peers: for contact in new_bucket.peers:
old_bucket.remove_peer(contact) old_bucket.remove_peer(contact)
self.bucket_in_routing_table_metric.labels("global").set(len(self.buckets))
def join_buckets(self): def _join_buckets(self):
if len(self.buckets) == 1: if len(self.buckets) == 1:
return return
to_pop = [i for i, bucket in enumerate(self.buckets) if len(bucket) == 0] to_pop = [i for i, bucket in enumerate(self.buckets) if len(bucket) == 0]
@ -302,14 +326,8 @@ class TreeRoutingTable:
elif can_go_higher: elif can_go_higher:
self.buckets[bucket_index_to_pop + 1].range_min = bucket.range_min self.buckets[bucket_index_to_pop + 1].range_min = bucket.range_min
self.buckets.remove(bucket) self.buckets.remove(bucket)
return self.join_buckets() self.bucket_in_routing_table_metric.labels("global").set(len(self.buckets))
return self._join_buckets()
def contact_in_routing_table(self, address_tuple: typing.Tuple[str, int]) -> bool:
for bucket in self.buckets:
for contact in bucket.get_peers(sort_distance_to=False):
if address_tuple[0] == contact.address and address_tuple[1] == contact.udp_port:
return True
return False
def buckets_with_contacts(self) -> int: def buckets_with_contacts(self) -> int:
count = 0 count = 0
@ -317,3 +335,70 @@ class TreeRoutingTable:
if len(bucket) > 0: if len(bucket) > 0:
count += 1 count += 1
return count return count
async def add_peer(self, peer: 'KademliaPeer', probe: typing.Callable[['KademliaPeer'], typing.Awaitable]):
if not peer.node_id:
log.warning("Tried adding a peer with no node id!")
return False
for my_peer in self.get_peers():
if (my_peer.address, my_peer.udp_port) == (peer.address, peer.udp_port) and my_peer.node_id != peer.node_id:
self.remove_peer(my_peer)
self._join_buckets()
bucket_index = self._kbucket_index(peer.node_id)
if self.buckets[bucket_index].add_peer(peer):
return True
# The bucket is full; see if it can be split (by checking if its range includes the host node's node_id)
if self._should_split(bucket_index, peer.node_id):
self._split_bucket(bucket_index)
# Retry the insertion attempt
result = await self.add_peer(peer, probe)
self._join_buckets()
return result
else:
# We can't split the k-bucket
#
# The 13 page kademlia paper specifies that the least recently contacted node in the bucket
# shall be pinged. If it fails to reply it is replaced with the new contact. If the ping is successful
# the new contact is ignored and not added to the bucket (sections 2.2 and 2.4).
#
# A reasonable extension to this is BEP 0005, which extends the above:
#
# Not all nodes that we learn about are equal. Some are "good" and some are not.
# Many nodes using the DHT are able to send queries and receive responses,
# but are not able to respond to queries from other nodes. It is important that
# each node's routing table must contain only known good nodes. A good node is
# a node has responded to one of our queries within the last 15 minutes. A node
# is also good if it has ever responded to one of our queries and has sent us a
# query within the last 15 minutes. After 15 minutes of inactivity, a node becomes
# questionable. Nodes become bad when they fail to respond to multiple queries
# in a row. Nodes that we know are good are given priority over nodes with unknown status.
#
# When there are bad or questionable nodes in the bucket, the least recent is selected for
# potential replacement (BEP 0005). When all nodes in the bucket are fresh, the head (least recent)
# contact is selected as described in section 2.2 of the kademlia paper. In both cases the new contact
# is ignored if the pinged node replies.
not_good_contacts = self.buckets[bucket_index].get_bad_or_unknown_peers()
not_recently_replied = []
for my_peer in not_good_contacts:
last_replied = self._peer_manager.get_last_replied(my_peer.address, my_peer.udp_port)
if not last_replied or last_replied + 60 < self._loop.time():
not_recently_replied.append(my_peer)
if not_recently_replied:
to_replace = not_recently_replied[0]
else:
to_replace = self.buckets[bucket_index].peers[0]
last_replied = self._peer_manager.get_last_replied(to_replace.address, to_replace.udp_port)
if last_replied and last_replied + 60 > self._loop.time():
return False
log.debug("pinging %s:%s", to_replace.address, to_replace.udp_port)
try:
await probe(to_replace)
return False
except (asyncio.TimeoutError, RemoteException):
log.debug("Replacing dead contact in bucket %i: %s:%i with %s:%i ", bucket_index,
to_replace.address, to_replace.udp_port, peer.address, peer.udp_port)
if to_replace in self.buckets[bucket_index]:
self.buckets[bucket_index].remove_peer(to_replace)
return await self.add_peer(peer, probe)
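The replacement policy described in the comments above can be exercised by handing add_peer a probe callback that pings the contact being considered for eviction. A minimal sketch, assuming a KademliaPeer-like object and a hypothetical ping coroutine on the protocol object (the real RPC plumbing lives elsewhere in lbry.dht):

import asyncio

async def try_add_peer(routing_table, protocol, peer):
    async def probe(candidate):
        # Ping the contact that may be evicted; asyncio.TimeoutError or
        # RemoteException tells add_peer the contact is dead and replaceable.
        await asyncio.wait_for(protocol.ping(candidate), timeout=3)  # hypothetical ping helper

    added = await routing_table.add_peer(peer, probe)
    # False means the bucket was full and the pinged contact replied,
    # so the new peer was ignored (kademlia section 2.2 / BEP 0005).
    return added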

View file

@@ -181,7 +181,7 @@ def decode_datagram(datagram: bytes) -> typing.Union[RequestDatagram, ResponseDa
def make_compact_ip(address: str) -> bytearray:
compact_ip = reduce(lambda buff, x: buff + bytearray([int(x)]), address.split('.'), bytearray())
if len(compact_ip) != 4:
- raise ValueError(f"invalid IPv4 length")
+ raise ValueError("invalid IPv4 length")
return compact_ip
@@ -190,7 +190,7 @@ def make_compact_address(node_id: bytes, address: str, port: int) -> bytearray:
if not 0 < port < 65536:
raise ValueError(f'Invalid port: {port}')
if len(node_id) != constants.HASH_BITS // 8:
- raise ValueError(f"invalid node node_id length")
+ raise ValueError("invalid node node_id length")
return compact_ip + port.to_bytes(2, 'big') + node_id
@@ -201,5 +201,5 @@ def decode_compact_address(compact_address: bytes) -> typing.Tuple[bytes, str, i
if not 0 < port < 65536:
raise ValueError(f'Invalid port: {port}')
if len(node_id) != constants.HASH_BITS // 8:
- raise ValueError(f"invalid node node_id length")
+ raise ValueError("invalid node node_id length")
return node_id, address, port
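For orientation, the compact address built and parsed above is 4 IPv4 octets, a 2-byte big-endian port, and the raw node id appended. A standalone sketch of that layout (it mirrors the validation above but does not import the lbry helpers; the 48-byte default node id length is only an assumption for the example):

def encode_compact_address(node_id: bytes, address: str, port: int) -> bytes:
    octets = [int(x) for x in address.split('.')]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError("invalid IPv4 address")
    if not 0 < port < 65536:
        raise ValueError(f"invalid port: {port}")
    return bytes(octets) + port.to_bytes(2, 'big') + node_id

def decode_compact_address_example(compact: bytes, node_id_length: int = 48):
    address = '.'.join(str(octet) for octet in compact[:4])
    port = int.from_bytes(compact[4:6], 'big')
    return compact[6:6 + node_id_length], address, port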

View file

@@ -34,6 +34,11 @@ Code | Name | Message
**11x** | InputValue(ValueError) | Invalid argument value provided to command.
111 | GenericInputValue | The value '{value}' for argument '{argument}' is not valid.
112 | InputValueIsNone | None or null is not valid value for argument '{argument}'.
113 | ConflictingInputValue | Only '{first_argument}' or '{second_argument}' is allowed, not both.
114 | InputStringIsBlank | {argument} cannot be blank.
115 | EmptyPublishedFile | Cannot publish empty file: {file_path}
116 | MissingPublishedFile | File does not exist: {file_path}
117 | InvalidStreamURL | Invalid LBRY stream URL: '{url}' -- When an URL cannot be downloaded, such as '@Channel/' or a collection
**2xx** | Configuration | Configuration errors.
201 | ConfigWrite | Cannot write configuration file '{path}'. -- When writing the default config fails on startup, such as due to permission issues.
202 | ConfigRead | Cannot find provided configuration file '{path}'. -- Can't open the config file user provided via command line args.
@@ -51,15 +56,22 @@ Code | Name | Message
405 | ChannelKeyNotFound | Channel signing key not found.
406 | ChannelKeyInvalid | Channel signing key is out of date. -- For example, channel was updated but you don't have the updated key.
407 | DataDownload | Failed to download blob. *generic*
408 | PrivateKeyNotFound | Couldn't find private key for {key} '{value}'.
410 | Resolve | Failed to resolve '{url}'.
411 | ResolveTimeout | Failed to resolve '{url}' within the timeout.
- 411 | ResolveCensored | Resolve of '{url}' was censored by channel with claim id '{claim_id(censor_hash)}'.
+ 411 | ResolveCensored | Resolve of '{url}' was censored by channel with claim id '{censor_id}'.
420 | KeyFeeAboveMaxAllowed | {message}
421 | InvalidPassword | Password is invalid.
422 | IncompatibleWalletServer | '{server}:{port}' has an incompatibly old version.
423 | TooManyClaimSearchParameters | {key} cant have more than {limit} items.
424 | AlreadyPurchased | You already have a purchase for claim_id '{claim_id_hex}'. Use --allow-duplicate-purchase flag to override.
431 | ServerPaymentInvalidAddress | Invalid address from wallet server: '{address}' - skipping payment round.
432 | ServerPaymentWalletLocked | Cannot spend funds with locked wallet, skipping payment round.
433 | ServerPaymentFeeAboveMaxAllowed | Daily server fee of {daily_fee} exceeds maximum configured of {max_fee} LBC.
434 | WalletNotLoaded | Wallet {wallet_id} is not loaded.
435 | WalletAlreadyLoaded | Wallet {wallet_path} is already loaded.
436 | WalletNotFound | Wallet not found at {wallet_path}.
437 | WalletAlreadyExists | Wallet {wallet_path} already exists, use `wallet_add` to load it.
**5xx** | Blob | **Blobs**
500 | BlobNotFound | Blob not found.
501 | BlobPermissionDenied | Permission denied to read blob.

View file

@@ -76,6 +76,45 @@ class InputValueIsNoneError(InputValueError):
super().__init__(f"None or null is not valid value for argument '{argument}'.")
class ConflictingInputValueError(InputValueError):
def __init__(self, first_argument, second_argument):
self.first_argument = first_argument
self.second_argument = second_argument
super().__init__(f"Only '{first_argument}' or '{second_argument}' is allowed, not both.")
class InputStringIsBlankError(InputValueError):
def __init__(self, argument):
self.argument = argument
super().__init__(f"{argument} cannot be blank.")
class EmptyPublishedFileError(InputValueError):
def __init__(self, file_path):
self.file_path = file_path
super().__init__(f"Cannot publish empty file: {file_path}")
class MissingPublishedFileError(InputValueError):
def __init__(self, file_path):
self.file_path = file_path
super().__init__(f"File does not exist: {file_path}")
class InvalidStreamURLError(InputValueError):
"""
When an URL cannot be downloaded, such as '@Channel/' or a collection
"""
def __init__(self, url):
self.url = url
super().__init__(f"Invalid LBRY stream URL: '{url}'")
class ConfigurationError(BaseError):
"""
Configuration errors.
@@ -199,6 +238,14 @@ class DataDownloadError(WalletError):
super().__init__("Failed to download blob. *generic*")
class PrivateKeyNotFoundError(WalletError):
def __init__(self, key, value):
self.key = key
self.value = value
super().__init__(f"Couldn't find private key for {key} '{value}'.")
class ResolveError(WalletError):
def __init__(self, url):
@@ -215,10 +262,11 @@ class ResolveTimeoutError(WalletError):
class ResolveCensoredError(WalletError):
- def __init__(self, url, censor_hash):
+ def __init__(self, url, censor_id, censor_row):
self.url = url
- self.censor_hash = censor_hash
- super().__init__(f"Resolve of '{url}' was censored by channel with claim id '{claim_id(censor_hash)}'.")
+ self.censor_id = censor_id
+ self.censor_row = censor_row
+ super().__init__(f"Resolve of '{url}' was censored by channel with claim id '{censor_id}'.")
class KeyFeeAboveMaxAllowedError(WalletError):
@@ -242,6 +290,24 @@ class IncompatibleWalletServerError(WalletError):
super().__init__(f"'{server}:{port}' has an incompatibly old version.")
class TooManyClaimSearchParametersError(WalletError):
def __init__(self, key, limit):
self.key = key
self.limit = limit
super().__init__(f"{key} cant have more than {limit} items.")
class AlreadyPurchasedError(WalletError):
"""
allow-duplicate-purchase flag to override.
"""
def __init__(self, claim_id_hex):
self.claim_id_hex = claim_id_hex
super().__init__(f"You already have a purchase for claim_id '{claim_id_hex}'. Use")
class ServerPaymentInvalidAddressError(WalletError):
def __init__(self, address):
@@ -263,6 +329,34 @@ class ServerPaymentFeeAboveMaxAllowedError(WalletError):
super().__init__(f"Daily server fee of {daily_fee} exceeds maximum configured of {max_fee} LBC.")
class WalletNotLoadedError(WalletError):
def __init__(self, wallet_id):
self.wallet_id = wallet_id
super().__init__(f"Wallet {wallet_id} is not loaded.")
class WalletAlreadyLoadedError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet {wallet_path} is already loaded.")
class WalletNotFoundError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet not found at {wallet_path}.")
class WalletAlreadyExistsError(WalletError):
def __init__(self, wallet_path):
self.wallet_path = wallet_path
super().__init__(f"Wallet {wallet_path} already exists, use `wallet_add` to load it.")
class BlobError(BaseError):
"""
**Blobs**

View file

@@ -63,7 +63,7 @@ class ErrorClass:
@staticmethod
def get_fields(args):
if len(args) > 1:
- return f''.join(f'\n{INDENT*2}self.{field} = {field}' for field in args[1:])
+ return ''.join(f'\n{INDENT*2}self.{field} = {field}' for field in args[1:])
return ''
@staticmethod

View file

@@ -101,7 +101,7 @@ class ArgumentParser(argparse.ArgumentParser):
self._optionals.title = 'Options'
if group_name is None:
self.epilog = (
- f"Run 'lbrynet COMMAND --help' for more information on a command or group."
+ "Run 'lbrynet COMMAND --help' for more information on a command or group."
)
else:
self.epilog = (
@@ -226,6 +226,9 @@ def get_argument_parser():
def ensure_directory_exists(path: str):
if not os.path.isdir(path):
pathlib.Path(path).mkdir(parents=True, exist_ok=True)
use_effective_ids = os.access in os.supports_effective_ids
if not os.access(path, os.W_OK, effective_ids=use_effective_ids):
raise PermissionError(f"The following directory is not writable: {path}")
LOG_MODULES = 'lbry', 'aioupnp'

View file

@@ -18,6 +18,7 @@ DOWNLOAD_STARTED = 'Download Started'
DOWNLOAD_ERRORED = 'Download Errored'
DOWNLOAD_FINISHED = 'Download Finished'
HEARTBEAT = 'Heartbeat'
DISK_SPACE = 'Disk Space'
CLAIM_ACTION = 'Claim Action' # publish/create/update/abandon
NEW_CHANNEL = 'New Channel'
CREDITS_SENT = 'Credits Sent'
@@ -169,6 +170,15 @@ class AnalyticsManager:
})
)
async def send_disk_space_used(self, storage_used, storage_limit, is_from_network_quota):
await self.track(
self._event(DISK_SPACE, {
'used': storage_used,
'limit': storage_limit,
'from_network_quota': is_from_network_quota
})
)
async def send_server_startup(self):
await self.track(self._event(SERVER_STARTUP))

View file

@@ -1,5 +1,5 @@
from lbry.conf import Config
from lbry.extras.cli import execute_command
from lbry.conf import Config
def daemon_rpc(conf: Config, method: str, **kwargs):

View file

@ -1,79 +0,0 @@
import logging
import time
import hashlib
import binascii
import ecdsa
from lbry import utils
from lbry.crypto.hash import sha256
from lbry.wallet.transaction import Output
log = logging.getLogger(__name__)
def get_encoded_signature(signature):
signature = signature.encode() if isinstance(signature, str) else signature
r = int(signature[:int(len(signature) / 2)], 16)
s = int(signature[int(len(signature) / 2):], 16)
return ecdsa.util.sigencode_der(r, s, len(signature) * 4)
def cid2hash(claim_id: str) -> bytes:
return binascii.unhexlify(claim_id.encode())[::-1]
def is_comment_signed_by_channel(comment: dict, channel: Output, sign_comment_id=False):
if isinstance(channel, Output):
try:
signing_field = comment['comment_id'] if sign_comment_id else comment['comment']
return verify(channel, signing_field.encode(), comment, cid2hash(comment['channel_id']))
except KeyError:
pass
return False
def verify(channel, data, signature, channel_hash=None):
pieces = [
signature['signing_ts'].encode(),
channel_hash or channel.claim_hash,
data
]
return Output.is_signature_valid(
get_encoded_signature(signature['signature']),
sha256(b''.join(pieces)),
channel.claim.channel.public_key_bytes
)
def sign_comment(comment: dict, channel: Output, sign_comment_id=False):
signing_field = comment['comment_id'] if sign_comment_id else comment['comment']
comment.update(sign(channel, signing_field.encode()))
def sign(channel, data):
timestamp = str(int(time.time()))
pieces = [timestamp.encode(), channel.claim_hash, data]
digest = sha256(b''.join(pieces))
signature = channel.private_key.sign_digest_deterministic(digest, hashfunc=hashlib.sha256)
return {
'signature': binascii.hexlify(signature).decode(),
'signing_ts': timestamp
}
def sign_reaction(reaction: dict, channel: Output):
signing_field = reaction['channel_name']
reaction.update(sign(channel, signing_field.encode()))
async def jsonrpc_post(url: str, method: str, params: dict = None, **kwargs) -> any:
params = params or {}
params.update(kwargs)
json_body = {'jsonrpc': '2.0', 'id': 1, 'method': method, 'params': params}
async with utils.aiohttp_request('POST', url, json=json_body) as response:
try:
result = await response.json()
return result['result'] if 'result' in result else result
except Exception as cte:
log.exception('Unable to decode response from server: %s', cte)
return await response.text()

View file

@@ -37,7 +37,7 @@ class Component(metaclass=ComponentType):
def running(self):
return self._running
- async def get_status(self):
+ async def get_status(self): # pylint: disable=no-self-use
return
async def start(self):

View file

@@ -42,7 +42,7 @@ class ComponentManager:
self.analytics_manager = analytics_manager
self.component_classes = {}
self.components = set()
- self.started = asyncio.Event(loop=self.loop)
+ self.started = asyncio.Event()
self.peer_manager = peer_manager or PeerManager(asyncio.get_event_loop_policy().get_event_loop())
for component_name, component_class in self.default_component_classes.items():
@@ -118,7 +118,7 @@ class ComponentManager:
component._setup() for component in stage if not component.running
]
if needing_start:
- await asyncio.wait(needing_start)
+ await asyncio.wait(map(asyncio.create_task, needing_start))
self.started.set()
async def stop(self):
@@ -131,7 +131,7 @@ class ComponentManager:
component._stop() for component in stage if component.running
]
if needing_stop:
- await asyncio.wait(needing_stop)
+ await asyncio.wait(map(asyncio.create_task, needing_stop))
def all_components_running(self, *component_names):
"""

View file

@@ -4,6 +4,7 @@ import asyncio
import logging
import binascii
import typing
import base58
from aioupnp import __version__ as aioupnp_version
@@ -15,7 +16,9 @@ from lbry.dht.node import Node
from lbry.dht.peer import is_valid_public_ipv4
from lbry.dht.blob_announcer import BlobAnnouncer
from lbry.blob.blob_manager import BlobManager
from lbry.blob.disk_space_manager import DiskSpaceManager
from lbry.blob_exchange.server import BlobServer
from lbry.stream.background_downloader import BackgroundDownloader
from lbry.stream.stream_manager import StreamManager
from lbry.file.file_manager import FileManager
from lbry.extras.daemon.component import Component
@@ -24,10 +27,8 @@ from lbry.extras.daemon.storage import SQLiteStorage
from lbry.torrent.torrent_manager import TorrentManager
from lbry.wallet import WalletManager
from lbry.wallet.usage_payment import WalletServerPayer
- try:
+ from lbry.torrent.tracker import TrackerClient
from lbry.torrent.session import TorrentSession
except ImportError:
TorrentSession = None
log = logging.getLogger(__name__)
@@ -40,9 +41,12 @@ WALLET_SERVER_PAYMENTS_COMPONENT = "wallet_server_payments"
DHT_COMPONENT = "dht"
HASH_ANNOUNCER_COMPONENT = "hash_announcer"
FILE_MANAGER_COMPONENT = "file_manager"
DISK_SPACE_COMPONENT = "disk_space"
BACKGROUND_DOWNLOADER_COMPONENT = "background_downloader"
PEER_PROTOCOL_SERVER_COMPONENT = "peer_protocol_server"
UPNP_COMPONENT = "upnp"
EXCHANGE_RATE_MANAGER_COMPONENT = "exchange_rate_manager"
TRACKER_ANNOUNCER_COMPONENT = "tracker_announcer_component"
LIBTORRENT_COMPONENT = "libtorrent_component"
@@ -59,7 +63,7 @@ class DatabaseComponent(Component):
@staticmethod
def get_current_db_revision():
- return 14
+ return 15
@property
def revision_filename(self):
@@ -138,7 +142,7 @@ class WalletComponent(Component):
'availability': session.available,
} for session in sessions
],
- 'known_servers': len(self.wallet_manager.ledger.network.config['default_servers']),
+ 'known_servers': len(self.wallet_manager.ledger.network.known_hubs),
'available_servers': 1 if is_connected else 0
}
@@ -289,6 +293,7 @@ class DHTComponent(Component):
peer_port=self.external_peer_port,
rpc_timeout=self.conf.node_rpc_timeout,
split_buckets_under_index=self.conf.split_buckets_under_index,
is_bootstrap_node=self.conf.is_bootstrap_node,
storage=storage
)
self.dht_node.start(self.conf.network_interface, self.conf.known_dht_nodes)
@@ -352,10 +357,6 @@ class FileManagerComponent(Component):
wallet = self.component_manager.get_component(WALLET_COMPONENT)
node = self.component_manager.get_component(DHT_COMPONENT) \
if self.component_manager.has_component(DHT_COMPONENT) else None
try:
torrent = self.component_manager.get_component(LIBTORRENT_COMPONENT) if TorrentSession else None
except NameError:
torrent = None
log.info('Starting the file manager')
loop = asyncio.get_event_loop()
self.file_manager = FileManager(
@@ -364,7 +365,8 @@ class FileManagerComponent(Component):
self.file_manager.source_managers['stream'] = StreamManager(
loop, self.conf, blob_manager, wallet, storage, node,
)
- if TorrentSession and LIBTORRENT_COMPONENT not in self.conf.components_to_skip:
+ if self.component_manager.has_component(LIBTORRENT_COMPONENT):
torrent = self.component_manager.get_component(LIBTORRENT_COMPONENT)
self.file_manager.source_managers['torrent'] = TorrentManager(
loop, self.conf, torrent, storage, self.component_manager.analytics_manager
)
@@ -372,7 +374,106 @@ class FileManagerComponent(Component):
log.info('Done setting up file manager')
async def stop(self):
- self.file_manager.stop()
+ await self.file_manager.stop()
class BackgroundDownloaderComponent(Component):
MIN_PREFIX_COLLIDING_BITS = 8
component_name = BACKGROUND_DOWNLOADER_COMPONENT
depends_on = [DATABASE_COMPONENT, BLOB_COMPONENT, DISK_SPACE_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.background_task: typing.Optional[asyncio.Task] = None
self.download_loop_delay_seconds = 60
self.ongoing_download: typing.Optional[asyncio.Task] = None
self.space_manager: typing.Optional[DiskSpaceManager] = None
self.blob_manager: typing.Optional[BlobManager] = None
self.background_downloader: typing.Optional[BackgroundDownloader] = None
self.dht_node: typing.Optional[Node] = None
self.space_available: typing.Optional[int] = None
@property
def is_busy(self):
return bool(self.ongoing_download and not self.ongoing_download.done())
@property
def component(self) -> 'BackgroundDownloaderComponent':
return self
async def get_status(self):
return {'running': self.background_task is not None and not self.background_task.done(),
'available_free_space_mb': self.space_available,
'ongoing_download': self.is_busy}
async def download_blobs_in_background(self):
while True:
self.space_available = await self.space_manager.get_free_space_mb(True)
if not self.is_busy and self.space_available > 10:
self._download_next_close_blob_hash()
await asyncio.sleep(self.download_loop_delay_seconds)
def _download_next_close_blob_hash(self):
node_id = self.dht_node.protocol.node_id
for blob_hash in self.dht_node.stored_blob_hashes:
if blob_hash.hex() in self.blob_manager.completed_blob_hashes:
continue
if utils.get_colliding_prefix_bits(node_id, blob_hash) >= self.MIN_PREFIX_COLLIDING_BITS:
self.ongoing_download = asyncio.create_task(self.background_downloader.download_blobs(blob_hash.hex()))
return
async def start(self):
self.space_manager: DiskSpaceManager = self.component_manager.get_component(DISK_SPACE_COMPONENT)
if not self.component_manager.has_component(DHT_COMPONENT):
return
self.dht_node = self.component_manager.get_component(DHT_COMPONENT)
self.blob_manager = self.component_manager.get_component(BLOB_COMPONENT)
storage = self.component_manager.get_component(DATABASE_COMPONENT)
self.background_downloader = BackgroundDownloader(self.conf, storage, self.blob_manager, self.dht_node)
self.background_task = asyncio.create_task(self.download_blobs_in_background())
async def stop(self):
if self.ongoing_download and not self.ongoing_download.done():
self.ongoing_download.cancel()
if self.background_task:
self.background_task.cancel()
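The close-blob heuristic above keys off utils.get_colliding_prefix_bits, i.e. how many leading bits a stored blob hash shares with the node's own id. A rough sketch of that comparison, assuming the helper simply counts shared leading bits (the real implementation in lbry.utils may differ in detail):

def colliding_prefix_bits(a: bytes, b: bytes) -> int:
    # number of leading bits the two byte strings have in common
    length = min(len(a), len(b))
    xor = int.from_bytes(a[:length], 'big') ^ int.from_bytes(b[:length], 'big')
    return length * 8 if xor == 0 else length * 8 - xor.bit_length()

# e.g. ids that differ only in their last byte share at least 16 leading bits here
assert colliding_prefix_bits(b'\x00\x00\xff', b'\x00\x00\x00') == 16

With MIN_PREFIX_COLLIDING_BITS = 8, the component only volunteers to seed blobs whose hashes fall in roughly the closest 1/256th of the key space around the node's id.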
class DiskSpaceComponent(Component):
component_name = DISK_SPACE_COMPONENT
depends_on = [DATABASE_COMPONENT, BLOB_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.disk_space_manager: typing.Optional[DiskSpaceManager] = None
@property
def component(self) -> typing.Optional[DiskSpaceManager]:
return self.disk_space_manager
async def get_status(self):
if self.disk_space_manager:
space_used = await self.disk_space_manager.get_space_used_mb(cached=True)
return {
'total_used_mb': space_used['total'],
'published_blobs_storage_used_mb': space_used['private_storage'],
'content_blobs_storage_used_mb': space_used['content_storage'],
'seed_blobs_storage_used_mb': space_used['network_storage'],
'running': self.disk_space_manager.running,
}
return {'space_used': '0', 'network_seeding_space_used': '0', 'running': False}
async def start(self):
db = self.component_manager.get_component(DATABASE_COMPONENT)
blob_manager = self.component_manager.get_component(BLOB_COMPONENT)
self.disk_space_manager = DiskSpaceManager(
self.conf, db, blob_manager,
analytics=self.component_manager.analytics_manager
)
await self.disk_space_manager.start()
async def stop(self):
await self.disk_space_manager.stop()
class TorrentComponent(Component): class TorrentComponent(Component):
@@ -394,9 +495,8 @@ class TorrentComponent(Component):
}
async def start(self):
- if TorrentSession:
- self.torrent_session = TorrentSession(asyncio.get_event_loop(), None)
- await self.torrent_session.bind() # TODO: specify host/port
+ self.torrent_session = TorrentSession(asyncio.get_event_loop(), None)
+ await self.torrent_session.bind() # TODO: specify host/port
async def stop(self):
if self.torrent_session:
@@ -451,7 +551,7 @@ class UPnPComponent(Component):
while True:
if now:
await self._maintain_redirects()
- await asyncio.sleep(360, loop=self.component_manager.loop)
+ await asyncio.sleep(360)
async def _maintain_redirects(self):
# setup the gateway if necessary
@@ -460,8 +560,6 @@ class UPnPComponent(Component):
self.upnp = await UPnP.discover(loop=self.component_manager.loop)
log.info("found upnp gateway: %s", self.upnp.gateway.manufacturer_string)
except Exception as err:
if isinstance(err, asyncio.CancelledError): # TODO: remove when updated to 3.8
raise
log.warning("upnp discovery failed: %s", err) log.warning("upnp discovery failed: %s", err)
self.upnp = None self.upnp = None
@ -481,6 +579,10 @@ class UPnPComponent(Component):
log.info("external ip changed from %s to %s", self.external_ip, external_ip) log.info("external ip changed from %s to %s", self.external_ip, external_ip)
if external_ip: if external_ip:
self.external_ip = external_ip self.external_ip = external_ip
dht_component = self.component_manager.get_component(DHT_COMPONENT)
if dht_component:
dht_node = dht_component.component
dht_node.protocol.external_ip = external_ip
# assert self.external_ip is not None # TODO: handle going/starting offline
if not self.upnp_redirects and self.upnp: # setup missing redirects
@@ -539,8 +641,10 @@ class UPnPComponent(Component):
success = False
await self._maintain_redirects()
if self.upnp:
- if not self.upnp_redirects and not all([x in self.component_manager.skip_components for x in
- (DHT_COMPONENT, PEER_PROTOCOL_SERVER_COMPONENT)]):
+ if not self.upnp_redirects and not all(
+ x in self.component_manager.skip_components
+ for x in (DHT_COMPONENT, PEER_PROTOCOL_SERVER_COMPONENT)
+ ):
log.error("failed to setup upnp")
else:
success = True
@@ -567,7 +671,7 @@ class UPnPComponent(Component):
log.info("Removing upnp redirects: %s", self.upnp_redirects)
await asyncio.wait([
self.upnp.delete_port_mapping(port, protocol) for protocol, port in self.upnp_redirects.items()
- ], loop=self.component_manager.loop)
+ ])
if self._maintain_redirects_task and not self._maintain_redirects_task.done():
self._maintain_redirects_task.cancel()
@@ -598,3 +702,49 @@ class ExchangeRateManagerComponent(Component):
async def stop(self):
self.exchange_rate_manager.stop()
class TrackerAnnouncerComponent(Component):
component_name = TRACKER_ANNOUNCER_COMPONENT
depends_on = [FILE_MANAGER_COMPONENT]
def __init__(self, component_manager):
super().__init__(component_manager)
self.file_manager = None
self.announce_task = None
self.tracker_client: typing.Optional[TrackerClient] = None
@property
def component(self):
return self.tracker_client
@property
def running(self):
return self._running and self.announce_task and not self.announce_task.done()
async def announce_forever(self):
while True:
sleep_seconds = 60.0
announce_sd_hashes = []
for file in self.file_manager.get_filtered():
if not file.downloader:
continue
announce_sd_hashes.append(bytes.fromhex(file.sd_hash))
await self.tracker_client.announce_many(*announce_sd_hashes)
await asyncio.sleep(sleep_seconds)
async def start(self):
node = self.component_manager.get_component(DHT_COMPONENT) \
if self.component_manager.has_component(DHT_COMPONENT) else None
node_id = node.protocol.node_id if node else None
self.tracker_client = TrackerClient(node_id, self.conf.tcp_port, lambda: self.conf.tracker_servers)
await self.tracker_client.start()
self.file_manager = self.component_manager.get_component(FILE_MANAGER_COMPONENT)
self.announce_task = asyncio.create_task(self.announce_forever())
async def stop(self):
self.file_manager = None
if self.announce_task and not self.announce_task.done():
self.announce_task.cancel()
self.announce_task = None
self.tracker_client.stop()

File diff suppressed because it is too large

View file

@@ -5,7 +5,7 @@ import logging
from statistics import median
from decimal import Decimal
from typing import Optional, Iterable, Type
- from aiohttp.client_exceptions import ContentTypeError
+ from aiohttp.client_exceptions import ContentTypeError, ClientConnectionError
from lbry.error import InvalidExchangeRateResponseError, CurrencyConversionError
from lbry.utils import aiohttp_request
from lbry.wallet.dewies import lbc_to_dewies
@@ -79,18 +79,21 @@ class MarketFeed:
log.debug("Saving rate update %f for %s from %s", rate, self.market, self.name)
self.rate = ExchangeRate(self.market, rate, int(time.time()))
self.last_check = time.time()
self.event.set()
return self.rate
except asyncio.CancelledError:
raise
except asyncio.TimeoutError:
log.warning("Timed out fetching exchange rate from %s.", self.name)
except json.JSONDecodeError as e:
- log.warning("Could not parse exchange rate response from %s: %s", self.name, e.doc)
+ msg = e.doc if '<html>' not in e.doc else 'unexpected content type.'
log.warning("Could not parse exchange rate response from %s: %s", self.name, msg)
log.debug(e.doc)
except InvalidExchangeRateResponseError as e:
log.warning(str(e))
except ClientConnectionError as e:
log.warning("Error trying to connect to exchange rate %s: %s", self.name, str(e))
except Exception as e:
log.exception("Exchange rate error (%s from %s):", self.market, self.name)
finally:
self.event.set()
async def keep_updated(self):
while True:
@@ -130,27 +133,6 @@ class BittrexUSDFeed(MarketFeed):
url = "https://api.bittrex.com/v3/markets/LBC-USD/ticker"
class BaseCryptonatorFeed(MarketFeed):
name = "Cryptonator"
market = None
url = None
def get_rate_from_response(self, json_response):
if 'ticker' not in json_response or 'price' not in json_response['ticker']:
raise InvalidExchangeRateResponseError(self.name, 'result not found')
return float(json_response['ticker']['price'])
class CryptonatorBTCFeed(BaseCryptonatorFeed):
market = "BTCLBC"
url = "https://api.cryptonator.com/api/ticker/btc-lbc"
class CryptonatorUSDFeed(BaseCryptonatorFeed):
market = "USDLBC"
url = "https://api.cryptonator.com/api/ticker/usd-lbc"
class BaseCoinExFeed(MarketFeed):
name = "CoinEx"
market = None
@@ -202,7 +184,7 @@ class UPbitBTCFeed(MarketFeed):
params = {"markets": "BTC-LBC"}
def get_rate_from_response(self, json_response):
- if len(json_response) != 1 or 'trade_price' not in json_response[0]:
+ if "error" in json_response or len(json_response) != 1 or 'trade_price' not in json_response[0]:
raise InvalidExchangeRateResponseError(self.name, 'result not found')
return 1.0 / float(json_response[0]['trade_price'])
@@ -210,13 +192,11 @@ class UPbitBTCFeed(MarketFeed):
FEEDS: Iterable[Type[MarketFeed]] = (
BittrexBTCFeed,
BittrexUSDFeed,
CryptonatorBTCFeed,
CryptonatorUSDFeed,
CoinExBTCFeed,
CoinExUSDFeed,
- HotbitBTCFeed,
- HotbitUSDFeed,
- UPbitBTCFeed,
+ # HotbitBTCFeed,
+ # HotbitUSDFeed,
+ # UPbitBTCFeed,
)

View file

@@ -10,7 +10,7 @@ from lbry.schema.claim import Claim
from lbry.schema.support import Support
from lbry.torrent.torrent_manager import TorrentSource
from lbry.wallet import Wallet, Ledger, Account, Transaction, Output
- from lbry.wallet.bip32 import PubKey
+ from lbry.wallet.bip32 import PublicKey
from lbry.wallet.dewies import dewies_to_lbc
from lbry.stream.managed_stream import ManagedStream
@@ -123,7 +123,7 @@ class JSONResponseEncoder(JSONEncoder):
self.ledger = ledger
self.include_protobuf = include_protobuf
- def default(self, obj): # pylint: disable=method-hidden,arguments-differ,too-many-return-statements
+ def default(self, obj): # pylint: disable=method-hidden,arguments-renamed,too-many-return-statements
if isinstance(obj, Account):
return self.encode_account(obj)
if isinstance(obj, Wallet):
@@ -138,7 +138,7 @@ class JSONResponseEncoder(JSONEncoder):
return self.encode_claim(obj)
if isinstance(obj, Support):
return obj.to_dict()
- if isinstance(obj, PubKey):
+ if isinstance(obj, PublicKey):
return obj.extended_key_string()
if isinstance(obj, datetime):
return obj.strftime("%Y%m%dT%H:%M:%S")
@@ -234,8 +234,6 @@ class JSONResponseEncoder(JSONEncoder):
output['value_type'] = txo.claim.claim_type
if txo.claim.is_channel:
output['has_signing_key'] = txo.has_private_key
elif txo.script.is_support_claim_data:
output['value_type'] = 'emoji'
if check_signature and txo.signable.is_signed:
if txo.channel is not None:
output['signing_channel'] = self.encode_output(txo.channel)
@@ -330,8 +328,8 @@ class JSONResponseEncoder(JSONEncoder):
result.update({
'streaming_url': managed_stream.stream_url,
'stream_hash': managed_stream.stream_hash,
- 'stream_name': managed_stream.descriptor.stream_name,
- 'suggested_file_name': managed_stream.descriptor.suggested_file_name,
+ 'stream_name': managed_stream.stream_name,
+ 'suggested_file_name': managed_stream.suggested_file_name,
'sd_hash': managed_stream.descriptor.sd_hash,
'mime_type': managed_stream.mime_type,
'key': managed_stream.descriptor.key,

View file

@@ -35,6 +35,10 @@ def migrate_db(conf, start, end):
from .migrate12to13 import do_migration
elif current == 13:
from .migrate13to14 import do_migration
elif current == 14:
from .migrate14to15 import do_migration
elif current == 15:
from .migrate15to16 import do_migration
else:
raise Exception(f"DB migration of version {current} to {current+1} is not available")
try: try:

View file

@@ -0,0 +1,16 @@
import os
import sqlite3
def do_migration(conf):
db_path = os.path.join(conf.data_dir, "lbrynet.sqlite")
connection = sqlite3.connect(db_path)
cursor = connection.cursor()
cursor.executescript("""
alter table blob add column added_on integer not null default 0;
alter table blob add column is_mine integer not null default 1;
""")
connection.commit()
connection.close()

View file

@@ -0,0 +1,17 @@
import os
import sqlite3
def do_migration(conf):
db_path = os.path.join(conf.data_dir, "lbrynet.sqlite")
connection = sqlite3.connect(db_path)
cursor = connection.cursor()
cursor.executescript("""
update blob set should_announce=0
where should_announce=1 and
blob.blob_hash in (select stream_blob.blob_hash from stream_blob where position=0);
""")
connection.commit()
connection.close()

View file

@@ -20,7 +20,7 @@ def do_migration(conf):
"left outer join blob b ON b.blob_hash=s.blob_hash order by s.position").fetchall()
blobs_by_stream = {}
for stream_hash, position, iv, blob_hash, blob_length in blobs:
- blobs_by_stream.setdefault(stream_hash, []).append(BlobInfo(position, blob_length or 0, iv, blob_hash))
+ blobs_by_stream.setdefault(stream_hash, []).append(BlobInfo(position, blob_length or 0, iv, 0, blob_hash))
for stream_name, stream_key, suggested_filename, sd_hash, stream_hash in streams:
sd = StreamDescriptor(None, blob_dir, stream_name, stream_key, suggested_filename,

View file

@@ -170,8 +170,8 @@ def get_all_lbry_files(transaction: sqlite3.Connection) -> typing.List[typing.Di
def store_stream(transaction: sqlite3.Connection, sd_blob: 'BlobFile', descriptor: 'StreamDescriptor'):
# add all blobs, except the last one, which is empty
transaction.executemany(
- "insert or ignore into blob values (?, ?, ?, ?, ?, ?, ?)",
- ((blob.blob_hash, blob.length, 0, 0, "pending", 0, 0)
+ "insert or ignore into blob values (?, ?, ?, ?, ?, ?, ?, ?, ?)",
+ ((blob.blob_hash, blob.length, 0, 0, "pending", 0, 0, blob.added_on, blob.is_mine)
for blob in (descriptor.blobs[:-1] if len(descriptor.blobs) > 1 else descriptor.blobs) + [sd_blob])
).fetchall()
# associate the blobs to the stream
@@ -187,8 +187,8 @@ def store_stream(transaction: sqlite3.Connection, sd_blob: 'BlobFile', descripto
).fetchall()
# ensure should_announce is set regardless if insert was ignored
transaction.execute(
- "update blob set should_announce=1 where blob_hash in (?, ?)",
- (sd_blob.blob_hash, descriptor.blobs[0].blob_hash,)
+ "update blob set should_announce=1 where blob_hash in (?)",
+ (sd_blob.blob_hash,)
).fetchall()
@@ -242,7 +242,9 @@ class SQLiteStorage(SQLiteMixin):
should_announce integer not null default 0,
status text not null,
last_announced_time integer,
- single_announce integer
+ single_announce integer,
added_on integer not null,
is_mine integer not null default 0
);
create table if not exists stream (
@@ -335,6 +337,7 @@ class SQLiteStorage(SQLiteMixin):
tcp_port integer,
unique (address, udp_port)
);
create index if not exists blob_data on blob(blob_hash, blob_length, is_mine);
""" """
def __init__(self, conf: Config, path, loop=None, time_getter: typing.Optional[typing.Callable[[], float]] = None): def __init__(self, conf: Config, path, loop=None, time_getter: typing.Optional[typing.Callable[[], float]] = None):
@ -356,19 +359,19 @@ class SQLiteStorage(SQLiteMixin):
# # # # # # # # # blob functions # # # # # # # # # # # # # # # # # # blob functions # # # # # # # # #
async def add_blobs(self, *blob_hashes_and_lengths: typing.Tuple[str, int], finished=False): async def add_blobs(self, *blob_hashes_and_lengths: typing.Tuple[str, int, int, int], finished=False):
def _add_blobs(transaction: sqlite3.Connection): def _add_blobs(transaction: sqlite3.Connection):
transaction.executemany( transaction.executemany(
"insert or ignore into blob values (?, ?, ?, ?, ?, ?, ?)", "insert or ignore into blob values (?, ?, ?, ?, ?, ?, ?, ?, ?)",
( (
(blob_hash, length, 0, 0, "pending" if not finished else "finished", 0, 0) (blob_hash, length, 0, 0, "pending" if not finished else "finished", 0, 0, added_on, is_mine)
for blob_hash, length in blob_hashes_and_lengths for blob_hash, length, added_on, is_mine in blob_hashes_and_lengths
) )
).fetchall() ).fetchall()
if finished: if finished:
transaction.executemany( transaction.executemany(
"update blob set status='finished' where blob.blob_hash=?", ( "update blob set status='finished' where blob.blob_hash=?", (
(blob_hash, ) for blob_hash, _ in blob_hashes_and_lengths (blob_hash, ) for blob_hash, _, _, _ in blob_hashes_and_lengths
) )
).fetchall() ).fetchall()
return await self.db.run(_add_blobs) return await self.db.run(_add_blobs)
@ -378,6 +381,11 @@ class SQLiteStorage(SQLiteMixin):
"select status from blob where blob_hash=?", blob_hash "select status from blob where blob_hash=?", blob_hash
) )
def set_announce(self, *blob_hashes):
return self.db.execute_fetchall(
"update blob set should_announce=1 where blob_hash in (?, ?)", blob_hashes
)
def update_last_announced_blobs(self, blob_hashes: typing.List[str]):
def _update_last_announced_blobs(transaction: sqlite3.Connection):
last_announced = self.time_getter()
@@ -435,6 +443,62 @@ class SQLiteStorage(SQLiteMixin):
def get_all_blob_hashes(self):
return self.run_and_return_list("select blob_hash from blob")
async def get_stored_blobs(self, is_mine: bool, is_network_blob=False):
is_mine = 1 if is_mine else 0
if is_network_blob:
return await self.db.execute_fetchall(
"select blob.blob_hash, blob.blob_length, blob.added_on "
"from blob left join stream_blob using (blob_hash) "
"where stream_blob.stream_hash is null and blob.is_mine=? and blob.status='finished'"
"order by blob.blob_length desc, blob.added_on asc",
(is_mine,)
)
sd_blobs = await self.db.execute_fetchall(
"select blob.blob_hash, blob.blob_length, blob.added_on "
"from blob join stream on blob.blob_hash=stream.sd_hash join file using (stream_hash) "
"where blob.is_mine=? order by blob.added_on asc",
(is_mine,)
)
content_blobs = await self.db.execute_fetchall(
"select blob.blob_hash, blob.blob_length, blob.added_on "
"from blob join stream_blob using (blob_hash) cross join stream using (stream_hash)"
"cross join file using (stream_hash)"
"where blob.is_mine=? and blob.status='finished' order by blob.added_on asc, blob.blob_length asc",
(is_mine,)
)
return content_blobs + sd_blobs
async def get_stored_blob_disk_usage(self):
total, network_size, content_size, private_size = await self.db.execute_fetchone("""
select coalesce(sum(blob_length), 0) as total,
coalesce(sum(case when
stream_blob.stream_hash is null
then blob_length else 0 end), 0) as network_storage,
coalesce(sum(case when
stream_blob.blob_hash is not null and is_mine=0
then blob_length else 0 end), 0) as content_storage,
coalesce(sum(case when
is_mine=1
then blob_length else 0 end), 0) as private_storage
from blob left join stream_blob using (blob_hash)
where blob_hash not in (select sd_hash from stream) and blob.status="finished"
""")
return {
'network_storage': network_size,
'content_storage': content_size,
'private_storage': private_size,
'total': total
}
async def update_blob_ownership(self, sd_hash, is_mine: bool):
is_mine = 1 if is_mine else 0
await self.db.execute_fetchall(
"update blob set is_mine = ? where blob_hash in ("
" select blob_hash from blob natural join stream_blob natural join stream where sd_hash = ?"
") OR blob_hash = ?", (is_mine, sd_hash, sd_hash)
)
def sync_missing_blobs(self, blob_files: typing.Set[str]) -> typing.Awaitable[typing.Set[str]]:
def _sync_blobs(transaction: sqlite3.Connection) -> typing.Set[str]:
finished_blob_hashes = tuple(
@@ -470,7 +534,8 @@ class SQLiteStorage(SQLiteMixin):
def _get_blobs_for_stream(transaction):
crypt_blob_infos = []
stream_blobs = transaction.execute(
- "select blob_hash, position, iv from stream_blob where stream_hash=? "
+ "select s.blob_hash, s.position, s.iv, b.added_on "
+ "from stream_blob s left outer join blob b on b.blob_hash=s.blob_hash where stream_hash=? "
"order by position asc", (stream_hash, )
).fetchall()
if only_completed:
@@ -490,9 +555,10 @@ class SQLiteStorage(SQLiteMixin):
for blob_hash, length in lengths:
blob_length_dict[blob_hash] = length
- for blob_hash, position, iv in stream_blobs:
+ current_time = time.time()
+ for blob_hash, position, iv, added_on in stream_blobs:
blob_length = blob_length_dict.get(blob_hash, 0)
- crypt_blob_infos.append(BlobInfo(position, blob_length, iv, blob_hash))
+ crypt_blob_infos.append(BlobInfo(position, blob_length, iv, added_on or current_time, blob_hash))
if not blob_hash:
break
return crypt_blob_infos
@@ -570,6 +636,10 @@ class SQLiteStorage(SQLiteMixin):
log.debug("update file status %s -> %s", stream_hash, new_status)
return self.db.execute_fetchall("update file set status=? where stream_hash=?", (new_status, stream_hash))
def stop_all_files(self):
log.debug("stopping all files")
return self.db.execute_fetchall("update file set status=?", ("stopped",))
async def change_file_download_dir_and_file_name(self, stream_hash: str, download_dir: typing.Optional[str],
file_name: typing.Optional[str]):
if not file_name or not download_dir:
@@ -617,7 +687,7 @@ class SQLiteStorage(SQLiteMixin):
).fetchall()
download_dir = binascii.hexlify(self.conf.download_dir.encode()).decode()
transaction.executemany(
- f"update file set download_directory=? where stream_hash=?",
+ "update file set download_directory=? where stream_hash=?",
((download_dir, stream_hash) for stream_hash in stream_hashes)
).fetchall()
await self.db.run_with_foreign_keys_disabled(_recover)
@@ -723,7 +793,7 @@ class SQLiteStorage(SQLiteMixin):
await self.db.run(_save_claims)
if update_file_callbacks:
- await asyncio.wait(update_file_callbacks)
+ await asyncio.wait(map(asyncio.create_task, update_file_callbacks))
if claim_id_to_supports:
await self.save_supports(claim_id_to_supports)
@@ -861,6 +931,6 @@ class SQLiteStorage(SQLiteMixin):
transaction.execute('delete from peer').fetchall()
transaction.executemany(
'insert into peer(node_id, address, udp_port, tcp_port) values (?, ?, ?, ?)',
- tuple([(binascii.hexlify(p.node_id), p.address, p.udp_port, p.tcp_port) for p in peers])
+ ((binascii.hexlify(p.node_id), p.address, p.udp_port, p.tcp_port) for p in peers)
).fetchall()
return await self.db.run(_save_kademlia_peers)

View file

@@ -5,6 +5,7 @@ from typing import Optional
from aiohttp.web import Request
from lbry.error import ResolveError, DownloadSDTimeoutError, InsufficientFundsError
from lbry.error import ResolveTimeoutError, DownloadDataTimeoutError, KeyFeeAboveMaxAllowedError
from lbry.error import InvalidStreamURLError
from lbry.stream.managed_stream import ManagedStream
from lbry.torrent.torrent_manager import TorrentSource
from lbry.utils import cache_concurrent
@@ -12,11 +13,12 @@ from lbry.schema.url import URL
from lbry.wallet.dewies import dewies_to_lbc
from lbry.file.source_manager import SourceManager
from lbry.file.source import ManagedDownloadSource
from lbry.extras.daemon.storage import StoredContentClaim
if typing.TYPE_CHECKING:
from lbry.conf import Config
from lbry.extras.daemon.analytics import AnalyticsManager
from lbry.extras.daemon.storage import SQLiteStorage
- from lbry.wallet import WalletManager, Output
+ from lbry.wallet import WalletManager
from lbry.extras.daemon.exchange_rate_manager import ExchangeRateManager
log = logging.getLogger(__name__)
@@ -48,10 +50,10 @@ class FileManager:
await manager.started.wait()
self.started.set()
- def stop(self):
+ async def stop(self):
for manager in self.source_managers.values():
# fixme: pop or not?
- manager.stop()
+ await manager.stop()
self.started.clear()
@cache_concurrent
@ -81,8 +83,11 @@ class FileManager:
payment = None payment = None
try: try:
# resolve the claim # resolve the claim
if not URL.parse(uri).has_stream: try:
raise ResolveError("cannot download a channel claim, specify a /path") if not URL.parse(uri).has_stream:
raise InvalidStreamURLError(uri)
except ValueError:
raise InvalidStreamURLError(uri)
try: try:
resolved_result = await asyncio.wait_for( resolved_result = await asyncio.wait_for(
self.wallet_manager.ledger.resolve( self.wallet_manager.ledger.resolve(
@ -94,8 +99,6 @@ class FileManager:
except asyncio.TimeoutError: except asyncio.TimeoutError:
raise ResolveTimeoutError(uri) raise ResolveTimeoutError(uri)
except Exception as err: except Exception as err:
if isinstance(err, asyncio.CancelledError):
raise
log.exception("Unexpected error resolving stream:") log.exception("Unexpected error resolving stream:")
raise ResolveError(f"Unexpected error resolving stream: {str(err)}") raise ResolveError(f"Unexpected error resolving stream: {str(err)}")
if 'error' in resolved_result: if 'error' in resolved_result:
@ -117,9 +120,11 @@ class FileManager:
if claim.stream.source.bt_infohash: if claim.stream.source.bt_infohash:
source_manager = self.source_managers['torrent'] source_manager = self.source_managers['torrent']
existing = source_manager.get_filtered(bt_infohash=claim.stream.source.bt_infohash) existing = source_manager.get_filtered(bt_infohash=claim.stream.source.bt_infohash)
else: elif claim.stream.source.sd_hash:
source_manager = self.source_managers['stream'] source_manager = self.source_managers['stream']
existing = source_manager.get_filtered(sd_hash=claim.stream.source.sd_hash) existing = source_manager.get_filtered(sd_hash=claim.stream.source.sd_hash)
else:
raise ResolveError(f"There is nothing to download at {uri} - Source is unknown or unset")
# resume or update an existing stream, if the stream changed: download it and delete the old one after # resume or update an existing stream, if the stream changed: download it and delete the old one after
to_replace, updated_stream = None, None to_replace, updated_stream = None, None
@ -188,21 +193,24 @@ class FileManager:
#################### ####################
# make downloader and wait for start # make downloader and wait for start
#################### ####################
# temporary with fields we know so downloader can start. Missing fields are populated later.
stored_claim = StoredContentClaim(outpoint=outpoint, claim_id=txo.claim_id, name=txo.claim_name,
amount=txo.amount, height=txo.tx_ref.height,
serialized=claim.to_bytes().hex())
if not claim.stream.source.bt_infohash: if not claim.stream.source.bt_infohash:
# fixme: this shouldnt be here # fixme: this shouldnt be here
stream = ManagedStream( stream = ManagedStream(
self.loop, self.config, source_manager.blob_manager, claim.stream.source.sd_hash, self.loop, self.config, source_manager.blob_manager, claim.stream.source.sd_hash,
download_directory, file_name, ManagedStream.STATUS_RUNNING, content_fee=payment, download_directory, file_name, ManagedStream.STATUS_RUNNING, content_fee=payment,
analytics_manager=self.analytics_manager analytics_manager=self.analytics_manager, claim=stored_claim
) )
stream.downloader.node = source_manager.node stream.downloader.node = source_manager.node
else: else:
stream = TorrentSource( stream = TorrentSource(
self.loop, self.config, self.storage, identifier=claim.stream.source.bt_infohash, self.loop, self.config, self.storage, identifier=claim.stream.source.bt_infohash,
file_name=file_name, download_directory=download_directory or self.config.download_dir, file_name=file_name, download_directory=download_directory or self.config.download_dir,
status=ManagedStream.STATUS_RUNNING, status=ManagedStream.STATUS_RUNNING, claim=stored_claim, analytics_manager=self.analytics_manager,
analytics_manager=self.analytics_manager,
torrent_session=source_manager.torrent_session torrent_session=source_manager.torrent_session
) )
log.info("starting download for %s", uri) log.info("starting download for %s", uri)
@ -234,15 +242,14 @@ class FileManager:
claim_info = await self.storage.get_content_claim_for_torrent(stream.identifier) claim_info = await self.storage.get_content_claim_for_torrent(stream.identifier)
stream.set_claim(claim_info, claim) stream.set_claim(claim_info, claim)
if save_file: if save_file:
await asyncio.wait_for(stream.save_file(), timeout - (self.loop.time() - before_download), await asyncio.wait_for(stream.save_file(), timeout - (self.loop.time() - before_download))
loop=self.loop)
return stream return stream
except asyncio.TimeoutError: except asyncio.TimeoutError:
error = DownloadDataTimeoutError(stream.sd_hash) error = DownloadDataTimeoutError(stream.sd_hash)
raise error raise error
except Exception as err: # forgive data timeout, don't delete stream except (Exception, asyncio.CancelledError) as err: # forgive data timeout, don't delete stream
expected = (DownloadSDTimeoutError, DownloadDataTimeoutError, InsufficientFundsError, expected = (DownloadSDTimeoutError, DownloadDataTimeoutError, InsufficientFundsError,
KeyFeeAboveMaxAllowedError) KeyFeeAboveMaxAllowedError, ResolveError, InvalidStreamURLError)
if isinstance(err, expected): if isinstance(err, expected):
log.warning("Failed to download %s: %s", uri, str(err)) log.warning("Failed to download %s: %s", uri, str(err))
elif isinstance(err, asyncio.CancelledError): elif isinstance(err, asyncio.CancelledError):
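
The widened `except (Exception, asyncio.CancelledError)` reflects `CancelledError` deriving from `BaseException` rather than `Exception` on Python >= 3.8: a bare `except Exception` no longer sees cancellation, so it has to be named explicitly for the cleanup path to run, and then re-raised. A compact sketch of that pattern (the names are illustrative, not the real downloader):

```python
import asyncio

async def download(uri: str) -> None:
    try:
        await asyncio.sleep(10)  # stand-in for the real download work
    except (Exception, asyncio.CancelledError) as err:
        # on 3.8+ CancelledError is a BaseException, so it must be listed
        # explicitly for this cleanup branch to run on task cancellation
        print(f"cleaning up {uri} after {err!r}")
        raise  # cancellation must always propagate

async def main() -> None:
    task = asyncio.create_task(download("lbry://example"))
    await asyncio.sleep(0)  # let the download start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("download cancelled")

asyncio.run(main())
```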


@@ -45,11 +45,12 @@ class ManagedDownloadSource:
         self.purchase_receipt = None
         self._added_on = added_on
         self.analytics_manager = analytics_manager
+        self.downloader = None

-        self.saving = asyncio.Event(loop=self.loop)
-        self.finished_writing = asyncio.Event(loop=self.loop)
-        self.started_writing = asyncio.Event(loop=self.loop)
-        self.finished_write_attempt = asyncio.Event(loop=self.loop)
+        self.saving = asyncio.Event()
+        self.finished_writing = asyncio.Event()
+        self.started_writing = asyncio.Event()
+        self.finished_write_attempt = asyncio.Event()

     # @classmethod
     # async def create(cls, loop: asyncio.AbstractEventLoop, config: 'Config', file_path: str,
@@ -66,7 +67,7 @@ class ManagedDownloadSource:
     async def save_file(self, file_name: Optional[str] = None, download_directory: Optional[str] = None):
         raise NotImplementedError()

-    def stop_tasks(self):
+    async def stop_tasks(self):
         raise NotImplementedError()

     def set_claim(self, claim_info: typing.Dict, claim: 'Claim'):
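
Dropping the `loop=` arguments is likewise dictated by upstream asyncio: the parameter was deprecated in 3.8 and removed in 3.10, and an `Event` now binds to the running loop the first time it is awaited. A reduced sketch of the constructor and the now-async `stop_tasks()` (class trimmed down to just these pieces):

```python
import asyncio

class DownloadSource:
    """Trimmed stand-in for ManagedDownloadSource."""

    def __init__(self) -> None:
        self.downloader = None
        # no loop= argument: each Event attaches to the running loop on first use
        self.saving = asyncio.Event()
        self.finished_writing = asyncio.Event()

    async def stop_tasks(self) -> None:
        # async so subclasses can await cancellation of pending writer tasks
        self.saving.clear()
        self.finished_writing.clear()

async def main() -> None:
    source = DownloadSource()
    await source.stop_tasks()

asyncio.run(main())
```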


@@ -54,16 +54,16 @@ class SourceManager:
         self.storage = storage
         self.analytics_manager = analytics_manager
         self._sources: typing.Dict[str, ManagedDownloadSource] = {}
-        self.started = asyncio.Event(loop=self.loop)
+        self.started = asyncio.Event()

     def add(self, source: ManagedDownloadSource):
         self._sources[source.identifier] = source

-    def remove(self, source: ManagedDownloadSource):
+    async def remove(self, source: ManagedDownloadSource):
         if source.identifier not in self._sources:
             return
         self._sources.pop(source.identifier)
-        source.stop_tasks()
+        await source.stop_tasks()

     async def initialize_from_database(self):
         raise NotImplementedError()
@@ -72,10 +72,10 @@ class SourceManager:
         await self.initialize_from_database()
         self.started.set()

-    def stop(self):
+    async def stop(self):
         while self._sources:
             _, source = self._sources.popitem()
-            source.stop_tasks()
+            await source.stop_tasks()
         self.started.clear()

     async def create(self, file_path: str, key: Optional[bytes] = None,
@@ -83,7 +83,7 @@ class SourceManager:
         raise NotImplementedError()

     async def delete(self, source: ManagedDownloadSource, delete_file: Optional[bool] = False):
-        self.remove(source)
+        await self.remove(source)
         if delete_file and source.output_file_exists:
             os.remove(source.full_path)

@@ -132,7 +132,7 @@ class SourceManager:
         else:
             streams = list(self._sources.values())
         if sort_by:
-            streams.sort(key=lambda s: getattr(s, sort_by))
+            streams.sort(key=lambda s: getattr(s, sort_by) or "")
         if reverse:
             streams.reverse()
         return streams
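
The `or ""` in the sort key guards against attributes that are `None` (an unset file name, for example), since Python 3 refuses to order `None` against `str`. A small demonstration of the failure it avoids:

```python
from types import SimpleNamespace

streams = [SimpleNamespace(file_name="b.bin"),
           SimpleNamespace(file_name=None),
           SimpleNamespace(file_name="a.bin")]

# without `or ""` this raises TypeError: '<' not supported between NoneType and str
streams.sort(key=lambda s: getattr(s, "file_name") or "")
print([s.file_name for s in streams])  # [None, 'a.bin', 'b.bin']
```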


@@ -69,8 +69,8 @@ class VideoFileAnalyzer:
             version = str(e)
         if code != 0 or not version.startswith("ffmpeg"):
             log.warning("Unable to run ffmpeg, but it was requested. Code: %d; Message: %s", code, version)
-            raise FileNotFoundError(f"Unable to locate or run ffmpeg or ffprobe. Please install FFmpeg "
-                                    f"and ensure that it is callable via PATH or conf.ffmpeg_path")
+            raise FileNotFoundError("Unable to locate or run ffmpeg or ffprobe. Please install FFmpeg "
+                                    "and ensure that it is callable via PATH or conf.ffmpeg_path")
         log.debug("Using %s at %s", version.splitlines()[0].split(" Copyright")[0], self._which_ffmpeg)
         return version
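
(The `f` prefixes were dropped because the strings contain no placeholders; pylint flags them as `f-string-without-interpolation`.) For context, a rough, hypothetical stand-in for this kind of version probe using only the standard library; the real analyzer runs the binary asynchronously and consults `conf.ffmpeg_path`, which this sketch does not:

```python
import shutil
import subprocess

def verify_ffmpeg() -> str:
    """Hypothetical sketch of an ffmpeg availability check."""
    path = shutil.which("ffmpeg")
    if path is None:
        raise FileNotFoundError("Unable to locate or run ffmpeg or ffprobe. Please install FFmpeg "
                                "and ensure that it is callable via PATH or conf.ffmpeg_path")
    result = subprocess.run([path, "-version"], capture_output=True, text=True, check=False)
    if result.returncode != 0 or not result.stdout.startswith("ffmpeg"):
        raise FileNotFoundError("ffmpeg was found but could not be run")
    # e.g. "ffmpeg version 6.1.1" without the trailing copyright notice
    return result.stdout.splitlines()[0].split(" Copyright")[0]
```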


@@ -2,4 +2,5 @@ build:
 	rm types/v2/* -rf
 	touch types/v2/__init__.py
 	cd types/v2/ && protoc --python_out=. -I ../../../../../types/v2/proto/ ../../../../../types/v2/proto/*.proto
+	cd types/v2/ && cp ../../../../../types/jsonschema/* ./
 	sed -e 's/^import\ \(.*\)_pb2\ /from . import\ \1_pb2\ /g' -i types/v2/*.py

lbry/schema/README.md (new file, 24 lines)

@@ -0,0 +1,24 @@
Schema
======

These files are generated from the [types repo](https://github.com/lbryio/types). If you are modifying or adding a new type, make sure the types repo is cloned in the same root folder as the SDK repo, like:
```
repos/
  - lbry-sdk/
  - types/
```
Then, [download protoc 3.2.0](https://github.com/protocolbuffers/protobuf/releases/tag/v3.2.0) and add it to your PATH. On Linux:
```bash
cd ~/.local/bin
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.2.0/protoc-3.2.0-linux-x86_64.zip
unzip protoc-3.2.0-linux-x86_64.zip bin/protoc -d..
```
Finally, running `make` should update everything in place.

### Why protoc 3.2.0?
Different or newer versions will generate larger diffs, and we need to make sure they are good. In theory, we can just update to the latest version and it will all work, but it is good practice to check blockchain data and backwards compatibility before bumping versions (if you do, please update this section!).


@@ -10,6 +10,7 @@ from google.protobuf.json_format import MessageToDict

 from lbry.crypto.base58 import Base58
 from lbry.constants import COIN
+from lbry.error import MissingPublishedFileError, EmptyPublishedFileError
 from lbry.schema.mime_types import guess_media_type
 from lbry.schema.base import Metadata, BaseMessageList
@@ -32,6 +33,17 @@ def calculate_sha384_file_hash(file_path):
     return sha384.digest()

+def country_int_to_str(country: int) -> str:
+    r = LocationMessage.Country.Name(country)
+    return r[1:] if r.startswith('R') else r
+
+
+def country_str_to_int(country: str) -> int:
+    if len(country) == 3:
+        country = 'R' + country
+    return LocationMessage.Country.Value(country)
+
+
 class Dimmensional(Metadata):

     __slots__ = ()
@@ -128,10 +140,10 @@ class Source(Metadata):
         self.name = os.path.basename(file_path)
         self.media_type, stream_type = guess_media_type(file_path)
         if not os.path.isfile(file_path):
-            raise Exception(f"File does not exist: {file_path}")
+            raise MissingPublishedFileError(file_path)
         self.size = os.path.getsize(file_path)
         if self.size == 0:
-            raise Exception(f"Cannot publish empty file: {file_path}")
+            raise EmptyPublishedFileError(file_path)
         self.file_hash_bytes = calculate_sha384_file_hash(file_path)
         return stream_type
@@ -423,14 +435,11 @@ class Language(Metadata):
     @property
     def region(self) -> str:
         if self.message.region:
-            r = LocationMessage.Country.Name(self.message.region)
-            return r[1:] if r.startswith('R') else r
+            return country_int_to_str(self.message.region)

     @region.setter
     def region(self, region: str):
-        if len(region) == 3:
-            region = 'R'+region
-        self.message.region = LocationMessage.Country.Value(region)
+        self.message.region = country_str_to_int(region)


 class LanguageList(BaseMessageList[Language]):
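
Factoring the conversions into `country_int_to_str()`/`country_str_to_int()` keeps the `R`-prefix trick in one place; the prefix apparently exists because protobuf enum names cannot begin with a digit, so three-character numeric region codes are stored under names like `R019`. A self-contained round-trip sketch with a stand-in enum table (the real values come from `LocationMessage.Country` in the types repo):

```python
# Stand-in for LocationMessage.Country; the numeric values are hypothetical.
_COUNTRY = {'US': 230, 'R019': 300}
_COUNTRY_BY_INT = {v: k for k, v in _COUNTRY.items()}

def country_str_to_int(country: str) -> int:
    if len(country) == 3:        # a numeric region code such as '019'
        country = 'R' + country  # enum identifiers cannot start with a digit
    return _COUNTRY[country]

def country_int_to_str(country: int) -> str:
    r = _COUNTRY_BY_INT[country]
    return r[1:] if r.startswith('R') else r

assert country_int_to_str(country_str_to_int('019')) == '019'
assert country_int_to_str(country_str_to_int('US')) == 'US'
```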


@@ -2,6 +2,9 @@ import logging
 from typing import List
 from binascii import hexlify, unhexlify

+from asn1crypto.keys import PublicKeyInfo
+from coincurve import PublicKey as cPublicKey
+
 from google.protobuf.json_format import MessageToDict
 from google.protobuf.message import DecodeError
 from hachoir.core.log import log as hachoir_log
@@ -303,6 +306,10 @@ class Stream(BaseClaim):
     def has_fee(self) -> bool:
         return self.message.HasField('fee')

+    @property
+    def has_source(self) -> bool:
+        return self.message.HasField('source')
+
     @property
     def source(self) -> Source:
         return Source(self.message.source)
@@ -342,7 +349,7 @@ class Channel(BaseClaim):

     @property
     def public_key(self) -> str:
-        return hexlify(self.message.public_key).decode()
+        return hexlify(self.public_key_bytes).decode()

     @public_key.setter
     def public_key(self, sd_public_key: str):
@@ -350,7 +357,11 @@ class Channel(BaseClaim):

     @property
     def public_key_bytes(self) -> bytes:
-        return self.message.public_key
+        if len(self.message.public_key) == 33:
+            return self.message.public_key
+        public_key_info = PublicKeyInfo.load(self.message.public_key)
+        public_key = cPublicKey(public_key_info.native['public_key'])
+        return public_key.format(compressed=True)

     @public_key_bytes.setter
     def public_key_bytes(self, public_key: bytes):
@@ -387,6 +398,12 @@ class Repost(BaseClaim):

     claim_type = Claim.REPOST

+    def to_dict(self):
+        claim = super().to_dict()
+        if claim.pop('claim_hash', None):
+            claim['claim_id'] = self.reference.claim_id
+        return claim
+
     @property
     def reference(self) -> ClaimReference:
         return ClaimReference(self.message)
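
`public_key_bytes` now normalizes both key encodings a channel claim may carry: newer claims store the 33-byte compressed secp256k1 point directly, while older ones stored a DER-encoded `SubjectPublicKeyInfo`, which is unwrapped with `asn1crypto` and re-compressed with `coincurve`. The accessor in isolation, as a plain function:

```python
from asn1crypto.keys import PublicKeyInfo
from coincurve import PublicKey as cPublicKey

def compressed_channel_key(stored: bytes) -> bytes:
    """Return a channel public key as a 33-byte compressed secp256k1 point."""
    if len(stored) == 33:
        return stored  # already in compressed form
    # legacy encoding: DER SubjectPublicKeyInfo wrapping the raw EC point
    public_key_info = PublicKeyInfo.load(stored)
    public_key = cPublicKey(public_key_info.native['public_key'])
    return public_key.format(compressed=True)
```

Routing the `public_key` hex property through `public_key_bytes` (rather than through `self.message.public_key`) means every consumer sees the compressed form, whichever encoding was stored.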


@@ -1,4 +1,6 @@
 import os
+import filetype
+import logging

 types_map = {
     # http://www.iana.org/assignments/media-types
@@ -166,10 +168,38 @@ types_map = {
     '.wmv': ('video/x-ms-wmv', 'video')
 }

+# maps detected extensions to the possible analogs
+# i.e. .cbz file is actually a .zip
+synonyms_map = {
+    '.zip': ['.cbz'],
+    '.rar': ['.cbr'],
+    '.ar': ['.a']
+}
+
+log = logging.getLogger(__name__)
+

 def guess_media_type(path):
     _, ext = os.path.splitext(path)
     extension = ext.strip().lower()
+    try:
+        kind = filetype.guess(path)
+        if kind:
+            real_extension = f".{kind.extension}"
+            if extension != real_extension:
+                if extension:
+                    log.warning(f"file extension does not match it's contents: {path}, identified as {real_extension}")
+                else:
+                    log.debug(f"file {path} does not have extension, identified by it's contents as {real_extension}")
+                if extension not in synonyms_map.get(real_extension, []):
+                    extension = real_extension
+    except OSError as error:
+        pass
     if extension[1:]:
         if extension in types_map:
             return types_map[extension]
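
`filetype.guess()` sniffs the file's magic bytes and returns `None` when the signature is unknown, so a wrong or missing extension can be corrected from the contents, while `synonyms_map` preserves legitimate aliases such as `.cbz`, which is structurally a zip. The decision logic, reduced to a standalone helper (names are mine, not the module's):

```python
import filetype

SYNONYMS = {'.zip': ['.cbz'], '.rar': ['.cbr'], '.ar': ['.a']}

def effective_extension(path: str, claimed: str) -> str:
    """Prefer the extension implied by the file's magic bytes."""
    kind = filetype.guess(path)  # None if the signature is not recognized
    if kind is None:
        return claimed
    real = f".{kind.extension}"
    # keep the claimed extension when it is a known synonym of the real one
    if claimed == real or claimed in SYNONYMS.get(real, []):
        return claimed
    return real
```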


@@ -1,6 +1,5 @@
 import base64
-import struct
-from typing import List
+from typing import List, Union, Optional, NamedTuple
 from binascii import hexlify
 from itertools import chain
@@ -16,54 +15,70 @@ BLOCKED = ErrorMessage.Code.Name(ErrorMessage.BLOCKED)

 def set_reference(reference, claim_hash, rows):
     if claim_hash:
         for txo in rows:
-            if claim_hash == txo['claim_hash']:
-                reference.tx_hash = txo['txo_hash'][:32]
-                reference.nout = struct.unpack('<I', txo['txo_hash'][32:])[0]
-                reference.height = txo['height']
+            if claim_hash == txo.claim_hash:
+                reference.tx_hash = txo.tx_hash
+                reference.nout = txo.position
+                reference.height = txo.height
                 return


+class ResolveResult(NamedTuple):
+    name: str
+    normalized_name: str
+    claim_hash: bytes
+    tx_num: int
+    position: int
+    tx_hash: bytes
+    height: int
+    amount: int
+    short_url: str
+    is_controlling: bool
+    canonical_url: str
+    creation_height: int
+    activation_height: int
+    expiration_height: int
+    effective_amount: int
+    support_amount: int
+    reposted: int
+    last_takeover_height: Optional[int]
+    claims_in_channel: Optional[int]
+    channel_hash: Optional[bytes]
+    reposted_claim_hash: Optional[bytes]
+    signature_valid: Optional[bool]
+
+
 class Censor:

-    __slots__ = 'streams', 'channels', 'limit_claims_per_channel', 'censored', 'claims_in_channel', 'total'
+    NOT_CENSORED = 0
+    SEARCH = 1
+    RESOLVE = 2

-    def __init__(self, streams: dict = None, channels: dict = None, limit_claims_per_channel: int = None):
-        self.streams = streams or {}
-        self.channels = channels or {}
-        self.limit_claims_per_channel = limit_claims_per_channel  # doesn't count as censored
+    __slots__ = 'censor_type', 'censored'
+
+    def __init__(self, censor_type):
+        self.censor_type = censor_type
         self.censored = {}
-        self.claims_in_channel = {}
-        self.total = 0

-    def censor(self, row) -> bool:
-        was_censored = False
-        for claim_hash, lookup in (
-                (row['claim_hash'], self.streams),
-                (row['claim_hash'], self.channels),
-                (row['channel_hash'], self.channels),
-                (row['reposted_claim_hash'], self.streams),
-                (row['reposted_claim_hash'], self.channels)):
-            censoring_channel_hash = lookup.get(claim_hash)
-            if censoring_channel_hash:
-                was_censored = True
-                self.censored.setdefault(censoring_channel_hash, 0)
-                self.censored[censoring_channel_hash] += 1
-                break
-        if was_censored:
-            self.total += 1
-        if not was_censored and self.limit_claims_per_channel is not None and row['channel_hash']:
-            self.claims_in_channel.setdefault(row['channel_hash'], 0)
-            self.claims_in_channel[row['channel_hash']] += 1
-            if self.claims_in_channel[row['channel_hash']] > self.limit_claims_per_channel:
-                return True
-        return was_censored
+    def is_censored(self, row):
+        return (row.get('censor_type') or self.NOT_CENSORED) >= self.censor_type

-    def to_message(self, outputs: OutputsMessage, extra_txo_rows):
-        outputs.blocked_total = self.total
+    def apply(self, rows):
+        return [row for row in rows if not self.censor(row)]
+
+    def censor(self, row) -> Optional[bytes]:
+        if self.is_censored(row):
+            censoring_channel_hash = bytes.fromhex(row['censoring_channel_id'])[::-1]
+            self.censored.setdefault(censoring_channel_hash, set())
+            self.censored[censoring_channel_hash].add(row['tx_hash'])
+            return censoring_channel_hash
+        return None
+
+    def to_message(self, outputs: OutputsMessage, extra_txo_rows: dict):
         for censoring_channel_hash, count in self.censored.items():
             blocked = outputs.blocked.add()
-            blocked.count = count
+            blocked.count = len(count)
             set_reference(blocked.channel, censoring_channel_hash, extra_txo_rows)
+            outputs.blocked_total += len(count)


 class Outputs:
@@ -127,10 +142,10 @@ class Outputs:
             'expiration_height': claim.expiration_height,
             'effective_amount': claim.effective_amount,
             'support_amount': claim.support_amount,
-            'trending_group': claim.trending_group,
-            'trending_mixed': claim.trending_mixed,
-            'trending_local': claim.trending_local,
-            'trending_global': claim.trending_global,
+            # 'trending_group': claim.trending_group,
+            # 'trending_mixed': claim.trending_mixed,
+            # 'trending_local': claim.trending_local,
+            # 'trending_global': claim.trending_global,
         }
         if claim.HasField('channel'):
             txo.channel = tx_map[claim.channel.tx_hash].outputs[claim.channel.nout]
@@ -174,44 +189,54 @@ class Outputs:
         page.total = total
         if blocked is not None:
             blocked.to_message(page, extra_txo_rows)
-        for row in txo_rows:
-            cls.row_to_message(row, page.txos.add(), extra_txo_rows)
         for row in extra_txo_rows:
-            cls.row_to_message(row, page.extra_txos.add(), extra_txo_rows)
+            txo_message: 'OutputsMessage' = page.extra_txos.add()
+            if not isinstance(row, Exception):
+                if row.channel_hash:
+                    set_reference(txo_message.claim.channel, row.channel_hash, extra_txo_rows)
+                if row.reposted_claim_hash:
+                    set_reference(txo_message.claim.repost, row.reposted_claim_hash, extra_txo_rows)
+            cls.encode_txo(txo_message, row)
+        for row in txo_rows:
+            # cls.row_to_message(row, page.txos.add(), extra_txo_rows)
+            txo_message: 'OutputsMessage' = page.txos.add()
+            cls.encode_txo(txo_message, row)
+            if not isinstance(row, Exception):
+                if row.channel_hash:
+                    set_reference(txo_message.claim.channel, row.channel_hash, extra_txo_rows)
+                if row.reposted_claim_hash:
+                    set_reference(txo_message.claim.repost, row.reposted_claim_hash, extra_txo_rows)
+            elif isinstance(row, ResolveCensoredError):
+                set_reference(txo_message.error.blocked.channel, row.censor_id, extra_txo_rows)
         return page.SerializeToString()

     @classmethod
-    def row_to_message(cls, txo, txo_message, extra_txo_rows):
-        if isinstance(txo, Exception):
-            txo_message.error.text = txo.args[0]
-            if isinstance(txo, ValueError):
+    def encode_txo(cls, txo_message, resolve_result: Union['ResolveResult', Exception]):
+        if isinstance(resolve_result, Exception):
+            txo_message.error.text = resolve_result.args[0]
+            if isinstance(resolve_result, ValueError):
                 txo_message.error.code = ErrorMessage.INVALID
-            elif isinstance(txo, LookupError):
+            elif isinstance(resolve_result, LookupError):
                 txo_message.error.code = ErrorMessage.NOT_FOUND
-            elif isinstance(txo, ResolveCensoredError):
+            elif isinstance(resolve_result, ResolveCensoredError):
                 txo_message.error.code = ErrorMessage.BLOCKED
-                set_reference(txo_message.error.blocked.channel, txo.censor_hash, extra_txo_rows)
             return
-        txo_message.tx_hash = txo['txo_hash'][:32]
-        txo_message.nout, = struct.unpack('<I', txo['txo_hash'][32:])
-        txo_message.height = txo['height']
-        txo_message.claim.short_url = txo['short_url']
-        txo_message.claim.reposted = txo['reposted']
-        if txo['canonical_url'] is not None:
-            txo_message.claim.canonical_url = txo['canonical_url']
-        txo_message.claim.is_controlling = bool(txo['is_controlling'])
-        if txo['last_take_over_height'] is not None:
-            txo_message.claim.take_over_height = txo['last_take_over_height']
-        txo_message.claim.creation_height = txo['creation_height']
-        txo_message.claim.activation_height = txo['activation_height']
-        txo_message.claim.expiration_height = txo['expiration_height']
-        if txo['claims_in_channel'] is not None:
-            txo_message.claim.claims_in_channel = txo['claims_in_channel']
-        txo_message.claim.effective_amount = txo['effective_amount']
-        txo_message.claim.support_amount = txo['support_amount']
-        txo_message.claim.trending_group = txo['trending_group']
-        txo_message.claim.trending_mixed = txo['trending_mixed']
-        txo_message.claim.trending_local = txo['trending_local']
-        txo_message.claim.trending_global = txo['trending_global']
-        set_reference(txo_message.claim.channel, txo['channel_hash'], extra_txo_rows)
-        set_reference(txo_message.claim.repost, txo['reposted_claim_hash'], extra_txo_rows)
+        txo_message.tx_hash = resolve_result.tx_hash
+        txo_message.nout = resolve_result.position
+        txo_message.height = resolve_result.height
+        txo_message.claim.short_url = resolve_result.short_url
+        txo_message.claim.reposted = resolve_result.reposted
+        txo_message.claim.is_controlling = resolve_result.is_controlling
+        txo_message.claim.creation_height = resolve_result.creation_height
+        txo_message.claim.activation_height = resolve_result.activation_height
+        txo_message.claim.expiration_height = resolve_result.expiration_height
+        txo_message.claim.effective_amount = resolve_result.effective_amount
+        txo_message.claim.support_amount = resolve_result.support_amount
+        if resolve_result.canonical_url is not None:
+            txo_message.claim.canonical_url = resolve_result.canonical_url
+        if resolve_result.last_takeover_height is not None:
+            txo_message.claim.take_over_height = resolve_result.last_takeover_height
+        if resolve_result.claims_in_channel is not None:
+            txo_message.claim.claims_in_channel = resolve_result.claims_in_channel


@@ -13,3 +13,11 @@ class Support(Signable):
     @emoji.setter
     def emoji(self, emoji: str):
         self.message.emoji = emoji
+
+    @property
+    def comment(self) -> str:
+        return self.message.comment
+
+    @comment.setter
+    def comment(self, comment: str):
+        self.message.comment = comment


@@ -1,13 +1,11 @@
+# -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler. DO NOT EDIT!
 # source: result.proto
+"""Generated protocol buffer code."""
-
-import sys
-_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
 from google.protobuf import descriptor as _descriptor
 from google.protobuf import message as _message
 from google.protobuf import reflection as _reflection
 from google.protobuf import symbol_database as _symbol_database
-from google.protobuf import descriptor_pb2
 # @@protoc_insertion_point(imports)

 _sym_db = _symbol_database.Default()
@@ -19,9 +17,10 @@ DESCRIPTOR = _descriptor.FileDescriptor(
   name='result.proto',
   package='pb',
   syntax='proto3',
-  serialized_pb=_b('\n\x0cresult.proto\x12\x02pb\"\x97\x01\n\x07Outputs\x12\x18\n\x04txos\x18\x01 \x03(\x0b\x32\n.pb.Output\x12\x1e\n\nextra_txos\x18\x02 \x03(\x0b\x32\n.pb.Output\x12\r\n\x05total\x18\x03 \x01(\r\x12\x0e\n\x06offset\x18\x04 \x01(\r\x12\x1c\n\x07\x62locked\x18\x05 \x03(\x0b\x32\x0b.pb.Blocked\x12\x15\n\rblocked_total\x18\x06 \x01(\r\"{\n\x06Output\x12\x0f\n\x07tx_hash\x18\x01 \x01(\x0c\x12\x0c\n\x04nout\x18\x02 \x01(\r\x12\x0e\n\x06height\x18\x03 \x01(\r\x12\x1e\n\x05\x63laim\x18\x07 \x01(\x0b\x32\r.pb.ClaimMetaH\x00\x12\x1a\n\x05\x65rror\x18\x0f \x01(\x0b\x32\t.pb.ErrorH\x00\x42\x06\n\x04meta\"\xaf\x03\n\tClaimMeta\x12\x1b\n\x07\x63hannel\x18\x01 \x01(\x0b\x32\n.pb.Output\x12\x1a\n\x06repost\x18\x02 \x01(\x0b\x32\n.pb.Output\x12\x11\n\tshort_url\x18\x03 \x01(\t\x12\x15\n\rcanonical_url\x18\x04 \x01(\t\x12\x16\n\x0eis_controlling\x18\x05 \x01(\x08\x12\x18\n\x10take_over_height\x18\x06 \x01(\r\x12\x17\n\x0f\x63reation_height\x18\x07 \x01(\r\x12\x19\n\x11\x61\x63tivation_height\x18\x08 \x01(\r\x12\x19\n\x11\x65xpiration_height\x18\t \x01(\r\x12\x19\n\x11\x63laims_in_channel\x18\n \x01(\r\x12\x10\n\x08reposted\x18\x0b \x01(\r\x12\x18\n\x10\x65\x66\x66\x65\x63tive_amount\x18\x14 \x01(\x04\x12\x16\n\x0esupport_amount\x18\x15 \x01(\x04\x12\x16\n\x0etrending_group\x18\x16 \x01(\r\x12\x16\n\x0etrending_mixed\x18\x17 \x01(\x02\x12\x16\n\x0etrending_local\x18\x18 \x01(\x02\x12\x17\n\x0ftrending_global\x18\x19 \x01(\x02\"\x94\x01\n\x05\x45rror\x12\x1c\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x0e.pb.Error.Code\x12\x0c\n\x04text\x18\x02 \x01(\t\x12\x1c\n\x07\x62locked\x18\x03 \x01(\x0b\x32\x0b.pb.Blocked\"A\n\x04\x43ode\x12\x10\n\x0cUNKNOWN_CODE\x10\x00\x12\r\n\tNOT_FOUND\x10\x01\x12\x0b\n\x07INVALID\x10\x02\x12\x0b\n\x07\x42LOCKED\x10\x03\"5\n\x07\x42locked\x12\r\n\x05\x63ount\x18\x01 \x01(\r\x12\x1b\n\x07\x63hannel\x18\x02 \x01(\x0b\x32\n.pb.Outputb\x06proto3')
+  serialized_options=b'Z$github.com/lbryio/hub/protobuf/go/pb',
+  create_key=_descriptor._internal_create_key,
+  serialized_pb=b'\n\x0cresult.proto\x12\x02pb\"\x97\x01\n\x07Outputs\x12\x18\n\x04txos\x18\x01 \x03(\x0b\x32\n.pb.Output\x12\x1e\n\nextra_txos\x18\x02 \x03(\x0b\x32\n.pb.Output\x12\r\n\x05total\x18\x03 \x01(\r\x12\x0e\n\x06offset\x18\x04 \x01(\r\x12\x1c\n\x07\x62locked\x18\x05 \x03(\x0b\x32\x0b.pb.Blocked\x12\x15\n\rblocked_total\x18\x06 \x01(\r\"{\n\x06Output\x12\x0f\n\x07tx_hash\x18\x01 \x01(\x0c\x12\x0c\n\x04nout\x18\x02 \x01(\r\x12\x0e\n\x06height\x18\x03 \x01(\r\x12\x1e\n\x05\x63laim\x18\x07 \x01(\x0b\x32\r.pb.ClaimMetaH\x00\x12\x1a\n\x05\x65rror\x18\x0f \x01(\x0b\x32\t.pb.ErrorH\x00\x42\x06\n\x04meta\"\xe6\x02\n\tClaimMeta\x12\x1b\n\x07\x63hannel\x18\x01 \x01(\x0b\x32\n.pb.Output\x12\x1a\n\x06repost\x18\x02 \x01(\x0b\x32\n.pb.Output\x12\x11\n\tshort_url\x18\x03 \x01(\t\x12\x15\n\rcanonical_url\x18\x04 \x01(\t\x12\x16\n\x0eis_controlling\x18\x05 \x01(\x08\x12\x18\n\x10take_over_height\x18\x06 \x01(\r\x12\x17\n\x0f\x63reation_height\x18\x07 \x01(\r\x12\x19\n\x11\x61\x63tivation_height\x18\x08 \x01(\r\x12\x19\n\x11\x65xpiration_height\x18\t \x01(\r\x12\x19\n\x11\x63laims_in_channel\x18\n \x01(\r\x12\x10\n\x08reposted\x18\x0b \x01(\r\x12\x18\n\x10\x65\x66\x66\x65\x63tive_amount\x18\x14 \x01(\x04\x12\x16\n\x0esupport_amount\x18\x15 \x01(\x04\x12\x16\n\x0etrending_score\x18\x16 \x01(\x01\"\x94\x01\n\x05\x45rror\x12\x1c\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x0e.pb.Error.Code\x12\x0c\n\x04text\x18\x02 \x01(\t\x12\x1c\n\x07\x62locked\x18\x03 \x01(\x0b\x32\x0b.pb.Blocked\"A\n\x04\x43ode\x12\x10\n\x0cUNKNOWN_CODE\x10\x00\x12\r\n\tNOT_FOUND\x10\x01\x12\x0b\n\x07INVALID\x10\x02\x12\x0b\n\x07\x42LOCKED\x10\x03\"5\n\x07\x42locked\x12\r\n\x05\x63ount\x18\x01 \x01(\r\x12\x1b\n\x07\x63hannel\x18\x02 \x01(\x0b\x32\n.pb.OutputB&Z$github.com/lbryio/hub/protobuf/go/pbb\x06proto3'
 )
-_sym_db.RegisterFileDescriptor(DESCRIPTOR)
@@ -30,28 +29,33 @@ _ERROR_CODE = _descriptor.EnumDescriptor(
   full_name='pb.Error.Code',
   filename=None,
   file=DESCRIPTOR,
+  create_key=_descriptor._internal_create_key,
   values=[
     _descriptor.EnumValueDescriptor(
       name='UNKNOWN_CODE', index=0, number=0,
-      options=None,
-      type=None),
+      serialized_options=None,
+      type=None,
+      create_key=_descriptor._internal_create_key),
     _descriptor.EnumValueDescriptor(
       name='NOT_FOUND', index=1, number=1,
-      options=None,
-      type=None),
+      serialized_options=None,
+      type=None,
+      create_key=_descriptor._internal_create_key),
     _descriptor.EnumValueDescriptor(
       name='INVALID', index=2, number=2,
-      options=None,
-      type=None),
+      serialized_options=None,
+      type=None,
+      create_key=_descriptor._internal_create_key),
    _descriptor.EnumValueDescriptor(
       name='BLOCKED', index=3, number=3,
-      options=None,
-      type=None),
+      serialized_options=None,
+      type=None,
+      create_key=_descriptor._internal_create_key),
   ],
   containing_type=None,
-  options=None,
-  serialized_start=817,
-  serialized_end=882,
+  serialized_options=None,
+  serialized_start=744,
+  serialized_end=809,
 )
 _sym_db.RegisterEnumDescriptor(_ERROR_CODE)
@@ -62,6 +66,7 @@ _OUTPUTS = _descriptor.Descriptor(
   filename=None,
   file=DESCRIPTOR,
   containing_type=None,
+  create_key=_descriptor._internal_create_key,
   fields=[
     _descriptor.FieldDescriptor(
       name='txos', full_name='pb.Outputs.txos', index=0,
@@ -69,49 +74,49 @@ _OUTPUTS = _descriptor.Descriptor(
       has_default_value=False, default_value=[],
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='extra_txos', full_name='pb.Outputs.extra_txos', index=1,
       number=2, type=11, cpp_type=10, label=3,
       has_default_value=False, default_value=[],
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='total', full_name='pb.Outputs.total', index=2,
       number=3, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='offset', full_name='pb.Outputs.offset', index=3,
       number=4, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='blocked', full_name='pb.Outputs.blocked', index=4,
       number=5, type=11, cpp_type=10, label=3,
       has_default_value=False, default_value=[],
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='blocked_total', full_name='pb.Outputs.blocked_total', index=5,
       number=6, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  options=None,
+  serialized_options=None,
   is_extendable=False,
   syntax='proto3',
   extension_ranges=[],
@@ -128,56 +133,59 @@ _OUTPUT = _descriptor.Descriptor(
   filename=None,
   file=DESCRIPTOR,
   containing_type=None,
+  create_key=_descriptor._internal_create_key,
   fields=[
     _descriptor.FieldDescriptor(
       name='tx_hash', full_name='pb.Output.tx_hash', index=0,
       number=1, type=12, cpp_type=9, label=1,
-      has_default_value=False, default_value=_b(""),
+      has_default_value=False, default_value=b"",
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='nout', full_name='pb.Output.nout', index=1,
       number=2, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='height', full_name='pb.Output.height', index=2,
       number=3, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='claim', full_name='pb.Output.claim', index=3,
       number=7, type=11, cpp_type=10, label=1,
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='error', full_name='pb.Output.error', index=4,
       number=15, type=11, cpp_type=10, label=1,
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  options=None,
+  serialized_options=None,
   is_extendable=False,
   syntax='proto3',
   extension_ranges=[],
   oneofs=[
     _descriptor.OneofDescriptor(
       name='meta', full_name='pb.Output.meta',
-      index=0, containing_type=None, fields=[]),
+      index=0, containing_type=None,
+      create_key=_descriptor._internal_create_key,
+      fields=[]),
   ],
   serialized_start=174,
   serialized_end=297,
@@ -190,6 +198,7 @@ _CLAIMMETA = _descriptor.Descriptor(
   filename=None,
   file=DESCRIPTOR,
   containing_type=None,
+  create_key=_descriptor._internal_create_key,
   fields=[
     _descriptor.FieldDescriptor(
       name='channel', full_name='pb.ClaimMeta.channel', index=0,
@@ -197,133 +206,112 @@ _CLAIMMETA = _descriptor.Descriptor(
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='repost', full_name='pb.ClaimMeta.repost', index=1,
       number=2, type=11, cpp_type=10, label=1,
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='short_url', full_name='pb.ClaimMeta.short_url', index=2,
       number=3, type=9, cpp_type=9, label=1,
-      has_default_value=False, default_value=_b("").decode('utf-8'),
+      has_default_value=False, default_value=b"".decode('utf-8'),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='canonical_url', full_name='pb.ClaimMeta.canonical_url', index=3,
       number=4, type=9, cpp_type=9, label=1,
-      has_default_value=False, default_value=_b("").decode('utf-8'),
+      has_default_value=False, default_value=b"".decode('utf-8'),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='is_controlling', full_name='pb.ClaimMeta.is_controlling', index=4,
       number=5, type=8, cpp_type=7, label=1,
       has_default_value=False, default_value=False,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='take_over_height', full_name='pb.ClaimMeta.take_over_height', index=5,
       number=6, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='creation_height', full_name='pb.ClaimMeta.creation_height', index=6,
       number=7, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='activation_height', full_name='pb.ClaimMeta.activation_height', index=7,
       number=8, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='expiration_height', full_name='pb.ClaimMeta.expiration_height', index=8,
       number=9, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='claims_in_channel', full_name='pb.ClaimMeta.claims_in_channel', index=9,
       number=10, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='reposted', full_name='pb.ClaimMeta.reposted', index=10,
       number=11, type=13, cpp_type=3, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='effective_amount', full_name='pb.ClaimMeta.effective_amount', index=11,
       number=20, type=4, cpp_type=4, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='support_amount', full_name='pb.ClaimMeta.support_amount', index=12,
       number=21, type=4, cpp_type=4, label=1,
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
-      name='trending_group', full_name='pb.ClaimMeta.trending_group', index=13,
-      number=22, type=13, cpp_type=3, label=1,
-      has_default_value=False, default_value=0,
-      message_type=None, enum_type=None, containing_type=None,
-      is_extension=False, extension_scope=None,
-      options=None),
-    _descriptor.FieldDescriptor(
-      name='trending_mixed', full_name='pb.ClaimMeta.trending_mixed', index=14,
-      number=23, type=2, cpp_type=6, label=1,
+      name='trending_score', full_name='pb.ClaimMeta.trending_score', index=13,
+      number=22, type=1, cpp_type=5, label=1,
       has_default_value=False, default_value=float(0),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
-    _descriptor.FieldDescriptor(
-      name='trending_local', full_name='pb.ClaimMeta.trending_local', index=15,
-      number=24, type=2, cpp_type=6, label=1,
-      has_default_value=False, default_value=float(0),
-      message_type=None, enum_type=None, containing_type=None,
-      is_extension=False, extension_scope=None,
-      options=None),
-    _descriptor.FieldDescriptor(
-      name='trending_global', full_name='pb.ClaimMeta.trending_global', index=16,
-      number=25, type=2, cpp_type=6, label=1,
-      has_default_value=False, default_value=float(0),
-      message_type=None, enum_type=None, containing_type=None,
-      is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  options=None,
+  serialized_options=None,
   is_extendable=False,
   syntax='proto3',
   extension_ranges=[],
   oneofs=[
   ],
   serialized_start=300,
-  serialized_end=731,
+  serialized_end=658,
 )
@@ -333,6 +321,7 @@ _ERROR = _descriptor.Descriptor(
   filename=None,
   file=DESCRIPTOR,
   containing_type=None,
+  create_key=_descriptor._internal_create_key,
   fields=[
     _descriptor.FieldDescriptor(
       name='code', full_name='pb.Error.code', index=0,
@@ -340,21 +329,21 @@ _ERROR = _descriptor.Descriptor(
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='text', full_name='pb.Error.text', index=1,
       number=2, type=9, cpp_type=9, label=1,
-      has_default_value=False, default_value=_b("").decode('utf-8'),
+      has_default_value=False, default_value=b"".decode('utf-8'),
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='blocked', full_name='pb.Error.blocked', index=2,
       number=3, type=11, cpp_type=10, label=1,
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
   ],
   extensions=[
   ],
@@ -362,14 +351,14 @@ _ERROR = _descriptor.Descriptor(
   enum_types=[
     _ERROR_CODE,
   ],
-  options=None,
+  serialized_options=None,
   is_extendable=False,
   syntax='proto3',
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=734,
-  serialized_end=882,
+  serialized_start=661,
+  serialized_end=809,
 )
@@ -379,6 +368,7 @@ _BLOCKED = _descriptor.Descriptor(
   filename=None,
   file=DESCRIPTOR,
   containing_type=None,
+  create_key=_descriptor._internal_create_key,
   fields=[
     _descriptor.FieldDescriptor(
       name='count', full_name='pb.Blocked.count', index=0,
@@ -386,28 +376,28 @@ _BLOCKED = _descriptor.Descriptor(
       has_default_value=False, default_value=0,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
     _descriptor.FieldDescriptor(
       name='channel', full_name='pb.Blocked.channel', index=1,
       number=2, type=11, cpp_type=10, label=1,
       has_default_value=False, default_value=None,
       message_type=None, enum_type=None, containing_type=None,
       is_extension=False, extension_scope=None,
-      options=None),
+      serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
   ],
   extensions=[
   ],
   nested_types=[],
   enum_types=[
   ],
-  options=None,
+  serialized_options=None,
   is_extendable=False,
   syntax='proto3',
   extension_ranges=[],
   oneofs=[
   ],
-  serialized_start=884,
-  serialized_end=937,
+  serialized_start=811,
+  serialized_end=864,
 )

 _OUTPUTS.fields_by_name['txos'].message_type = _OUTPUT
@@ -432,41 +422,43 @@ DESCRIPTOR.message_types_by_name['Output'] = _OUTPUT
 DESCRIPTOR.message_types_by_name['ClaimMeta'] = _CLAIMMETA
 DESCRIPTOR.message_types_by_name['Error'] = _ERROR
 DESCRIPTOR.message_types_by_name['Blocked'] = _BLOCKED
+_sym_db.RegisterFileDescriptor(DESCRIPTOR)

-Outputs = _reflection.GeneratedProtocolMessageType('Outputs', (_message.Message,), dict(
-  DESCRIPTOR = _OUTPUTS,
-  __module__ = 'result_pb2'
+Outputs = _reflection.GeneratedProtocolMessageType('Outputs', (_message.Message,), {
+  'DESCRIPTOR' : _OUTPUTS,
+  '__module__' : 'result_pb2'
   # @@protoc_insertion_point(class_scope:pb.Outputs)
-  ))
+  })
 _sym_db.RegisterMessage(Outputs)

-Output = _reflection.GeneratedProtocolMessageType('Output', (_message.Message,), dict(
-  DESCRIPTOR = _OUTPUT,
-  __module__ = 'result_pb2'
+Output = _reflection.GeneratedProtocolMessageType('Output', (_message.Message,), {
+  'DESCRIPTOR' : _OUTPUT,
+  '__module__' : 'result_pb2'
   # @@protoc_insertion_point(class_scope:pb.Output)
-  ))
+  })
 _sym_db.RegisterMessage(Output)

-ClaimMeta = _reflection.GeneratedProtocolMessageType('ClaimMeta', (_message.Message,), dict(
-  DESCRIPTOR = _CLAIMMETA,
-  __module__ = 'result_pb2'
+ClaimMeta = _reflection.GeneratedProtocolMessageType('ClaimMeta', (_message.Message,), {
+  'DESCRIPTOR' : _CLAIMMETA,
+  '__module__' : 'result_pb2'
   # @@protoc_insertion_point(class_scope:pb.ClaimMeta)
-  ))
+  })
 _sym_db.RegisterMessage(ClaimMeta)

-Error = _reflection.GeneratedProtocolMessageType('Error', (_message.Message,), dict(
-  DESCRIPTOR = _ERROR,
-  __module__ = 'result_pb2'
+Error = _reflection.GeneratedProtocolMessageType('Error', (_message.Message,), {
+  'DESCRIPTOR' : _ERROR,
+  '__module__' : 'result_pb2'
   # @@protoc_insertion_point(class_scope:pb.Error)
-  ))
+  })
 _sym_db.RegisterMessage(Error)

-Blocked = _reflection.GeneratedProtocolMessageType('Blocked', (_message.Message,), dict(
+Blocked = _reflection.GeneratedProtocolMessageType('Blocked', (_message.Message,), {
DESCRIPTOR = _BLOCKED, 'DESCRIPTOR' : _BLOCKED,
__module__ = 'result_pb2' '__module__' : 'result_pb2'
# @@protoc_insertion_point(class_scope:pb.Blocked) # @@protoc_insertion_point(class_scope:pb.Blocked)
)) })
_sym_db.RegisterMessage(Blocked) _sym_db.RegisterMessage(Blocked)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope) # @@protoc_insertion_point(module_scope)


@ -19,7 +19,7 @@ DESCRIPTOR = _descriptor.FileDescriptor(
name='support.proto', name='support.proto',
package='pb', package='pb',
syntax='proto3', syntax='proto3',
serialized_pb=_b('\n\rsupport.proto\x12\x02pb\"\x18\n\x07Support\x12\r\n\x05\x65moji\x18\x01 \x01(\tb\x06proto3') serialized_pb=_b('\n\rsupport.proto\x12\x02pb\")\n\x07Support\x12\r\n\x05\x65moji\x18\x01 \x01(\t\x12\x0f\n\x07\x63omment\x18\x02 \x01(\tb\x06proto3')
) )
_sym_db.RegisterFileDescriptor(DESCRIPTOR) _sym_db.RegisterFileDescriptor(DESCRIPTOR)
@ -40,6 +40,13 @@ _SUPPORT = _descriptor.Descriptor(
message_type=None, enum_type=None, containing_type=None, message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None, is_extension=False, extension_scope=None,
options=None), options=None),
_descriptor.FieldDescriptor(
name='comment', full_name='pb.Support.comment', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
], ],
extensions=[ extensions=[
], ],
@ -53,7 +60,7 @@ _SUPPORT = _descriptor.Descriptor(
oneofs=[ oneofs=[
], ],
serialized_start=21, serialized_start=21,
serialized_end=45, serialized_end=62,
) )
DESCRIPTOR.message_types_by_name['Support'] = _SUPPORT DESCRIPTOR.message_types_by_name['Support'] = _SUPPORT
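The regenerated support.proto descriptor above adds a second string field, comment (field number 2, type 9 = string), alongside the existing emoji field on pb.Support. A minimal usage sketch, assuming the generated module is importable as support_pb2 under lbry.schema.types.v2 (the import path is an assumption, not shown in this diff):

# Hypothetical usage of the regenerated Support message; the import path is assumed.
from lbry.schema.types.v2 import support_pb2

support = support_pb2.Support(emoji="🚀", comment="great stream")
payload = support.SerializeToString()              # wire bytes now carry field 2
decoded = support_pb2.Support.FromString(payload)
assert decoded.comment == "great stream"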


@ -0,0 +1,139 @@
{
"title": "Wallet",
"description": "An LBC wallet",
"type": "object",
"required": ["name", "version", "accounts", "preferences"],
"additionalProperties": false,
"properties": {
"name": {
"description": "Human readable name for this wallet",
"type": "string"
},
"version": {
"description": "Wallet spec version",
"type": "integer",
"$comment": "Should this be a string? We may need some sort of decimal type if we want exact decimal versions."
},
"accounts": {
"description": "Accounts associated with this wallet",
"type": "array",
"items": {
"type": "object",
"required": ["address_generator", "certificates", "encrypted", "ledger", "modified_on", "name", "private_key", "public_key", "seed"],
"additionalProperties": false,
"properties": {
"address_generator": {
"description": "Higher level manager of either singular or deterministically generated addresses",
"type": "object",
"oneOf": [
{
"required": ["name", "change", "receiving"],
"additionalProperties": false,
"properties": {
"name": {
"description": "type of address generator: a deterministic chain of addresses",
"enum": ["deterministic-chain"],
"type": "string"
},
"change": {
"$ref": "#/$defs/address_manager",
"description": "Manager for deterministically generated change address (not used for single address)"
},
"receiving": {
"$ref": "#/$defs/address_manager",
"description": "Manager for deterministically generated receiving address (not used for single address)"
}
}
}, {
"required": ["name"],
"additionalProperties": false,
"properties": {
"name": {
"description": "type of address generator: a single address",
"enum": ["single-address"],
"type": "string"
}
}
}
]
},
"certificates": {
"type": "object",
"description": "Channel keys. Mapping from public key address to pem-formatted private key.",
"additionalProperties": {"type": "string"}
},
"encrypted": {
"type": "boolean",
"description": "Whether private key and seed are encrypted with a password"
},
"ledger": {
"description": "Which network to use",
"type": "string",
"examples": [
"lbc_mainnet",
"lbc_testnet"
]
},
"modified_on": {
"description": "last modified time in Unix Time",
"type": "integer"
},
"name": {
"description": "Name for account, possibly human readable",
"type": "string"
},
"private_key": {
"description": "Private key for address if `address_generator` is a single address. Root of chain of private keys for addresses if `address_generator` is a deterministic chain of addresses. Encrypted if `encrypted` is true.",
"type": "string"
},
"public_key": {
"description": "Public key for address if `address_generator` is a single address. Root of chain of public keys for addresses if `address_generator` is a deterministic chain of addresses.",
"type": "string"
},
"seed": {
"description": "Human readable representation of `private_key`. encrypted if `encrypted` is set to `true`",
"type": "string"
}
}
}
},
"preferences": {
"description": "Timestamped application-level preferences. Values can be objects or of a primitive type.",
"$comment": "enable-sync is seen in example wallet. encrypt-on-disk is seen in example wallet. they both have a boolean `value` field. Do we want them explicitly defined here? local and shared seem to have at least a similar structure (type, value [yes, again], version), value being the free-form part. Should we define those here? Or can there be any key under preferences, and `value` be literally be anything in any form?",
"type": "object",
"additionalProperties": {
"type": "object",
"required": ["ts", "value"],
"additionalProperties": false,
"properties": {
"ts": {
"type": "number",
"description": "When the item was set, in Unix time format.",
"$comment": "Do we want a string (decimal)?"
},
"value": {
"$comment": "Sometimes this has been an object, sometimes just a boolean. I don't want to prescribe anything."
}
}
}
}
},
"$defs": {
"address_manager": {
"description": "Manager for deterministically generated addresses",
"type": "object",
"required": ["gap", "maximum_uses_per_address"],
"additionalProperties": false,
"properties": {
"gap": {
"description": "Maximum allowed consecutive generated addresses with no transactions",
"type": "integer"
},
"maximum_uses_per_address": {
"description": "Maximum number of uses for each generated address",
"type": "integer"
}
}
}
}
}
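The schema above is new in this changeset; a minimal wallet document that should validate against it looks like the sketch below. The concrete values are hypothetical, and the use of the third-party jsonschema package for validation is an illustrative assumption rather than anything this diff introduces.

# Minimal example instance for the wallet schema above (all values hypothetical).
import json
import jsonschema  # third-party validator, assumed for illustration only

wallet = {
    "name": "My Wallet",
    "version": 1,
    "accounts": [{
        "address_generator": {"name": "single-address"},
        "certificates": {},
        "encrypted": False,
        "ledger": "lbc_mainnet",
        "modified_on": 1620000000,
        "name": "Account #1",
        "private_key": "<private key or encrypted blob>",
        "public_key": "<public key>",
        "seed": "<seed words>"
    }],
    "preferences": {
        "enable-sync": {"ts": 1620000000.0, "value": True}
    }
}

with open("wallet.schema.json") as f:  # the schema shown above, saved locally
    schema = json.load(f)
jsonschema.validate(instance=wallet, schema=schema)  # raises ValidationError on mismatch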


@ -55,6 +55,14 @@ class PathSegment(NamedTuple):
def normalized(self): def normalized(self):
return normalize_name(self.name) return normalize_name(self.name)
@property
def is_shortid(self):
return self.claim_id is not None and len(self.claim_id) < 40
@property
def is_fullid(self):
return self.claim_id is not None and len(self.claim_id) == 40
def to_dict(self): def to_dict(self):
q = {'name': self.name} q = {'name': self.name}
if self.claim_id is not None: if self.claim_id is not None:
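The two new properties simply classify the attached claim id by length: a fully specified LBRY claim id is 20 bytes, i.e. 40 hex characters, and anything shorter is an abbreviated (short) id. Since PathSegment's full field list is not visible in this hunk, here is the same check as a standalone sketch rather than a PathSegment construction:

# Same length test the new is_shortid/is_fullid properties perform, on bare strings.
FULL_ID = "d5169241150022f996fa7cd6a9a1c421937276a3"  # hypothetical 40-char claim id

def is_shortid(claim_id):
    return claim_id is not None and len(claim_id) < 40

def is_fullid(claim_id):
    return claim_id is not None and len(claim_id) == 40

assert is_fullid(FULL_ID)
assert is_shortid(FULL_ID[:6])   # short ids are proper prefixes of the full id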


@ -0,0 +1,31 @@
import asyncio
import logging
from lbry.stream.downloader import StreamDownloader
log = logging.getLogger(__name__)
class BackgroundDownloader:
def __init__(self, conf, storage, blob_manager, dht_node=None):
self.storage = storage
self.blob_manager = blob_manager
self.node = dht_node
self.conf = conf
async def download_blobs(self, sd_hash):
downloader = StreamDownloader(asyncio.get_running_loop(), self.conf, self.blob_manager, sd_hash)
try:
await downloader.start(self.node, save_stream=False)
for blob_info in downloader.descriptor.blobs[:-1]:
await downloader.download_stream_blob(blob_info)
except ValueError:
return
except asyncio.CancelledError:
log.debug("Cancelled background downloader")
raise
except Exception:
log.error("Unexpected download error on background downloader")
finally:
downloader.stop()
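This new helper is self-contained; a rough driver sketch follows. The module path and the way conf, storage, blob_manager and the optional DHT node are obtained are assumptions (normally they come from an already running component manager), not something shown in this file.

# Hypothetical driver for BackgroundDownloader; component wiring is assumed.
from lbry.stream.background_downloader import BackgroundDownloader  # assumed module path

async def prefetch(sd_hashes, conf, storage, blob_manager, dht_node=None):
    downloader = BackgroundDownloader(conf, storage, blob_manager, dht_node)
    for sd_hash in sd_hashes:
        # downloads the stream's data blobs without saving a file to disk
        await downloader.download_blobs(sd_hash)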


@ -4,6 +4,7 @@ import binascii
import logging import logging
import typing import typing
import asyncio import asyncio
import time
import re import re
from collections import OrderedDict from collections import OrderedDict
from cryptography.hazmat.primitives.ciphers.algorithms import AES from cryptography.hazmat.primitives.ciphers.algorithms import AES
@ -152,15 +153,19 @@ class StreamDescriptor:
h.update(self.old_sort_json()) h.update(self.old_sort_json())
return h.hexdigest() return h.hexdigest()
async def make_sd_blob(self, blob_file_obj: typing.Optional[AbstractBlob] = None, async def make_sd_blob(
old_sort: typing.Optional[bool] = False, self, blob_file_obj: typing.Optional[AbstractBlob] = None, old_sort: typing.Optional[bool] = False,
blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], None]] = None): blob_completed_callback: typing.Optional[typing.Callable[['AbstractBlob'], None]] = None,
added_on: float = None, is_mine: bool = False
):
sd_hash = self.calculate_sd_hash() if not old_sort else self.calculate_old_sort_sd_hash() sd_hash = self.calculate_sd_hash() if not old_sort else self.calculate_old_sort_sd_hash()
if not old_sort: if not old_sort:
sd_data = self.as_json() sd_data = self.as_json()
else: else:
sd_data = self.old_sort_json() sd_data = self.old_sort_json()
sd_blob = blob_file_obj or BlobFile(self.loop, sd_hash, len(sd_data), blob_completed_callback, self.blob_dir) sd_blob = blob_file_obj or BlobFile(
self.loop, sd_hash, len(sd_data), blob_completed_callback, self.blob_dir, added_on, is_mine
)
if blob_file_obj: if blob_file_obj:
blob_file_obj.set_length(len(sd_data)) blob_file_obj.set_length(len(sd_data))
if not sd_blob.get_is_verified(): if not sd_blob.get_is_verified():
@ -183,18 +188,19 @@ class StreamDescriptor:
raise InvalidStreamDescriptorError("Does not decode as valid JSON") raise InvalidStreamDescriptorError("Does not decode as valid JSON")
if decoded['blobs'][-1]['length'] != 0: if decoded['blobs'][-1]['length'] != 0:
raise InvalidStreamDescriptorError("Does not end with a zero-length blob.") raise InvalidStreamDescriptorError("Does not end with a zero-length blob.")
if any([blob_info['length'] == 0 for blob_info in decoded['blobs'][:-1]]): if any(blob_info['length'] == 0 for blob_info in decoded['blobs'][:-1]):
raise InvalidStreamDescriptorError("Contains zero-length data blob") raise InvalidStreamDescriptorError("Contains zero-length data blob")
if 'blob_hash' in decoded['blobs'][-1]: if 'blob_hash' in decoded['blobs'][-1]:
raise InvalidStreamDescriptorError("Stream terminator blob should not have a hash") raise InvalidStreamDescriptorError("Stream terminator blob should not have a hash")
if any([i != blob_info['blob_num'] for i, blob_info in enumerate(decoded['blobs'])]): if any(i != blob_info['blob_num'] for i, blob_info in enumerate(decoded['blobs'])):
raise InvalidStreamDescriptorError("Stream contains out of order or skipped blobs") raise InvalidStreamDescriptorError("Stream contains out of order or skipped blobs")
added_on = time.time()
descriptor = cls( descriptor = cls(
loop, blob_dir, loop, blob_dir,
binascii.unhexlify(decoded['stream_name']).decode(), binascii.unhexlify(decoded['stream_name']).decode(),
decoded['key'], decoded['key'],
binascii.unhexlify(decoded['suggested_file_name']).decode(), binascii.unhexlify(decoded['suggested_file_name']).decode(),
[BlobInfo(info['blob_num'], info['length'], info['iv'], info.get('blob_hash')) [BlobInfo(info['blob_num'], info['length'], info['iv'], added_on, info.get('blob_hash'))
for info in decoded['blobs']], for info in decoded['blobs']],
decoded['stream_hash'], decoded['stream_hash'],
blob.blob_hash blob.blob_hash
@ -252,20 +258,25 @@ class StreamDescriptor:
iv_generator = iv_generator or random_iv_generator() iv_generator = iv_generator or random_iv_generator()
key = key or os.urandom(AES.block_size // 8) key = key or os.urandom(AES.block_size // 8)
blob_num = -1 blob_num = -1
added_on = time.time()
async for blob_bytes in file_reader(file_path): async for blob_bytes in file_reader(file_path):
blob_num += 1 blob_num += 1
blob_info = await BlobFile.create_from_unencrypted( blob_info = await BlobFile.create_from_unencrypted(
loop, blob_dir, key, next(iv_generator), blob_bytes, blob_num, blob_completed_callback loop, blob_dir, key, next(iv_generator), blob_bytes, blob_num, added_on, True, blob_completed_callback
) )
blobs.append(blob_info) blobs.append(blob_info)
blobs.append( blobs.append(
BlobInfo(len(blobs), 0, binascii.hexlify(next(iv_generator)).decode())) # add the stream terminator # add the stream terminator
BlobInfo(len(blobs), 0, binascii.hexlify(next(iv_generator)).decode(), added_on, None, True)
)
file_name = os.path.basename(file_path) file_name = os.path.basename(file_path)
suggested_file_name = sanitize_file_name(file_name) suggested_file_name = sanitize_file_name(file_name)
descriptor = cls( descriptor = cls(
loop, blob_dir, file_name, binascii.hexlify(key).decode(), suggested_file_name, blobs loop, blob_dir, file_name, binascii.hexlify(key).decode(), suggested_file_name, blobs
) )
sd_blob = await descriptor.make_sd_blob(old_sort=old_sort, blob_completed_callback=blob_completed_callback) sd_blob = await descriptor.make_sd_blob(
old_sort=old_sort, blob_completed_callback=blob_completed_callback, added_on=added_on, is_mine=True
)
descriptor.sd_hash = sd_blob.blob_hash descriptor.sd_hash = sd_blob.blob_hash
return descriptor return descriptor
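These descriptor changes capture one added_on = time.time() timestamp per stream and thread it, together with an is_mine flag, through every BlobInfo and into make_sd_blob(). Judging only from the call sites in this hunk, BlobInfo now appears to take (blob_num, length, iv, added_on, blob_hash=None, is_mine=False); a sketch under that assumption, with placeholder values:

# Sketch only: the BlobInfo argument order is inferred from the calls above,
# not from the BlobInfo definition itself; all values are placeholders.
import time
from lbry.blob.blob_info import BlobInfo

added_on = time.time()                                         # shared by the whole stream
data_blob = BlobInfo(0, 2097152, "1f" * 16, added_on, "ab" * 48)
terminator = BlobInfo(1, 0, "2e" * 16, added_on, None, True)   # zero-length, marked is_mine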


@ -3,11 +3,13 @@ import typing
import logging import logging
import binascii import binascii
from lbry.dht.peer import make_kademlia_peer from lbry.dht.node import get_kademlia_peers_from_hosts
from lbry.error import DownloadSDTimeoutError from lbry.error import DownloadSDTimeoutError
from lbry.utils import resolve_host, lru_cache_concurrent from lbry.utils import lru_cache_concurrent
from lbry.stream.descriptor import StreamDescriptor from lbry.stream.descriptor import StreamDescriptor
from lbry.blob_exchange.downloader import BlobDownloader from lbry.blob_exchange.downloader import BlobDownloader
from lbry.torrent.tracker import enqueue_tracker_search
if typing.TYPE_CHECKING: if typing.TYPE_CHECKING:
from lbry.conf import Config from lbry.conf import Config
from lbry.dht.node import Node from lbry.dht.node import Node
@ -25,8 +27,8 @@ class StreamDownloader:
self.config = config self.config = config
self.blob_manager = blob_manager self.blob_manager = blob_manager
self.sd_hash = sd_hash self.sd_hash = sd_hash
self.search_queue = asyncio.Queue(loop=loop) # blob hashes to feed into the iterative finder self.search_queue = asyncio.Queue() # blob hashes to feed into the iterative finder
self.peer_queue = asyncio.Queue(loop=loop) # new peers to try self.peer_queue = asyncio.Queue() # new peers to try
self.blob_downloader = BlobDownloader(self.loop, self.config, self.blob_manager, self.peer_queue) self.blob_downloader = BlobDownloader(self.loop, self.config, self.blob_manager, self.peer_queue)
self.descriptor: typing.Optional[StreamDescriptor] = descriptor self.descriptor: typing.Optional[StreamDescriptor] = descriptor
self.node: typing.Optional['Node'] = None self.node: typing.Optional['Node'] = None
@ -48,26 +50,19 @@ class StreamDownloader:
self.cached_read_blob = cached_read_blob self.cached_read_blob = cached_read_blob
async def add_fixed_peers(self): async def add_fixed_peers(self):
def _delayed_add_fixed_peers(): def _add_fixed_peers(fixed_peers):
self.peer_queue.put_nowait(fixed_peers)
self.added_fixed_peers = True self.added_fixed_peers = True
self.peer_queue.put_nowait([
make_kademlia_peer(None, address, None, tcp_port=port, allow_localhost=True)
for address, port in addresses
])
if not self.config.fixed_peers: if not self.config.fixed_peers:
return return
addresses = [
(await resolve_host(url, port, proto='tcp'), port)
for url, port in self.config.fixed_peers
]
if 'dht' in self.config.components_to_skip or not self.node or not \ if 'dht' in self.config.components_to_skip or not self.node or not \
len(self.node.protocol.routing_table.get_peers()) > 0: len(self.node.protocol.routing_table.get_peers()) > 0:
self.fixed_peers_delay = 0.0 self.fixed_peers_delay = 0.0
else: else:
self.fixed_peers_delay = self.config.fixed_peer_delay self.fixed_peers_delay = self.config.fixed_peer_delay
fixed_peers = await get_kademlia_peers_from_hosts(self.config.fixed_peers)
self.fixed_peers_handle = self.loop.call_later(self.fixed_peers_delay, _delayed_add_fixed_peers) self.fixed_peers_handle = self.loop.call_later(self.fixed_peers_delay, _add_fixed_peers, fixed_peers)
async def load_descriptor(self, connection_id: int = 0): async def load_descriptor(self, connection_id: int = 0):
# download or get the sd blob # download or get the sd blob
@ -77,7 +72,7 @@ class StreamDownloader:
now = self.loop.time() now = self.loop.time()
sd_blob = await asyncio.wait_for( sd_blob = await asyncio.wait_for(
self.blob_downloader.download_blob(self.sd_hash, connection_id), self.blob_downloader.download_blob(self.sd_hash, connection_id),
self.config.blob_download_timeout, loop=self.loop self.config.blob_download_timeout
) )
log.info("downloaded sd blob %s", self.sd_hash) log.info("downloaded sd blob %s", self.sd_hash)
self.time_to_descriptor = self.loop.time() - now self.time_to_descriptor = self.loop.time() - now
@ -90,7 +85,7 @@ class StreamDownloader:
) )
log.info("loaded stream manifest %s", self.sd_hash) log.info("loaded stream manifest %s", self.sd_hash)
async def start(self, node: typing.Optional['Node'] = None, connection_id: int = 0): async def start(self, node: typing.Optional['Node'] = None, connection_id: int = 0, save_stream=True):
# set up peer accumulation # set up peer accumulation
self.node = node or self.node # fixme: this shouldnt be set here! self.node = node or self.node # fixme: this shouldnt be set here!
if self.node: if self.node:
@ -98,6 +93,7 @@ class StreamDownloader:
self.accumulate_task.cancel() self.accumulate_task.cancel()
_, self.accumulate_task = self.node.accumulate_peers(self.search_queue, self.peer_queue) _, self.accumulate_task = self.node.accumulate_peers(self.search_queue, self.peer_queue)
await self.add_fixed_peers() await self.add_fixed_peers()
enqueue_tracker_search(bytes.fromhex(self.sd_hash), self.peer_queue)
# start searching for peers for the sd hash # start searching for peers for the sd hash
self.search_queue.put_nowait(self.sd_hash) self.search_queue.put_nowait(self.sd_hash)
log.info("searching for peers for stream %s", self.sd_hash) log.info("searching for peers for stream %s", self.sd_hash)
@ -105,11 +101,7 @@ class StreamDownloader:
if not self.descriptor: if not self.descriptor:
await self.load_descriptor(connection_id) await self.load_descriptor(connection_id)
# add the head blob to the peer search if not await self.blob_manager.storage.stream_exists(self.sd_hash) and save_stream:
self.search_queue.put_nowait(self.descriptor.blobs[0].blob_hash)
log.info("added head blob to peer search for stream %s", self.sd_hash)
if not await self.blob_manager.storage.stream_exists(self.sd_hash):
await self.blob_manager.storage.store_stream( await self.blob_manager.storage.store_stream(
self.blob_manager.get_blob(self.sd_hash, length=self.descriptor.length), self.descriptor self.blob_manager.get_blob(self.sd_hash, length=self.descriptor.length), self.descriptor
) )
@ -119,7 +111,7 @@ class StreamDownloader:
raise ValueError(f"blob {blob_info.blob_hash} is not part of stream with sd hash {self.sd_hash}") raise ValueError(f"blob {blob_info.blob_hash} is not part of stream with sd hash {self.sd_hash}")
blob = await asyncio.wait_for( blob = await asyncio.wait_for(
self.blob_downloader.download_blob(blob_info.blob_hash, blob_info.length, connection_id), self.blob_downloader.download_blob(blob_info.blob_hash, blob_info.length, connection_id),
self.config.blob_download_timeout * 10, loop=self.loop self.config.blob_download_timeout * 10
) )
return blob return blob
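Several changes in this file (and the ones below) simply drop the loop= keyword: passing an explicit event loop to asyncio.Queue(), asyncio.Event(), asyncio.wait_for(), asyncio.gather() and asyncio.sleep() was deprecated in Python 3.8 and removed in 3.10, so these objects now bind to the running loop implicitly. A minimal before/after sketch:

import asyncio

async def fetch_with_timeout(get_blob):
    # old style (deprecated on 3.8, a TypeError on 3.10+):
    #   await asyncio.wait_for(get_blob(), 30, loop=asyncio.get_event_loop())
    # current style -- the running loop is picked up automatically:
    return await asyncio.wait_for(get_blob(), 30)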


@ -16,10 +16,8 @@ from lbry.file.source import ManagedDownloadSource
if typing.TYPE_CHECKING: if typing.TYPE_CHECKING:
from lbry.conf import Config from lbry.conf import Config
from lbry.schema.claim import Claim
from lbry.blob.blob_manager import BlobManager from lbry.blob.blob_manager import BlobManager
from lbry.blob.blob_info import BlobInfo from lbry.blob.blob_info import BlobInfo
from lbry.dht.node import Node
from lbry.extras.daemon.analytics import AnalyticsManager from lbry.extras.daemon.analytics import AnalyticsManager
from lbry.wallet.transaction import Transaction from lbry.wallet.transaction import Transaction
@ -62,9 +60,9 @@ class ManagedStream(ManagedDownloadSource):
self.file_output_task: typing.Optional[asyncio.Task] = None self.file_output_task: typing.Optional[asyncio.Task] = None
self.delayed_stop_task: typing.Optional[asyncio.Task] = None self.delayed_stop_task: typing.Optional[asyncio.Task] = None
self.streaming_responses: typing.List[typing.Tuple[Request, StreamResponse]] = [] self.streaming_responses: typing.List[typing.Tuple[Request, StreamResponse]] = []
self.fully_reflected = asyncio.Event(loop=self.loop) self.fully_reflected = asyncio.Event()
self.streaming = asyncio.Event(loop=self.loop) self.streaming = asyncio.Event()
self._running = asyncio.Event(loop=self.loop) self._running = asyncio.Event()
@property @property
def sd_hash(self) -> str: def sd_hash(self) -> str:
@ -84,7 +82,19 @@ class ManagedStream(ManagedDownloadSource):
@property @property
def file_name(self) -> Optional[str]: def file_name(self) -> Optional[str]:
return self._file_name or (self.descriptor.suggested_file_name if self.descriptor else None) return self._file_name or self.suggested_file_name
@property
def suggested_file_name(self) -> Optional[str]:
first_option = ((self.descriptor and self.descriptor.suggested_file_name) or '').strip()
return sanitize_file_name(first_option or (self.stream_claim_info and self.stream_claim_info.claim and
self.stream_claim_info.claim.stream.source.name))
@property
def stream_name(self) -> Optional[str]:
first_option = ((self.descriptor and self.descriptor.stream_name) or '').strip()
return first_option or (self.stream_claim_info and self.stream_claim_info.claim and
self.stream_claim_info.claim.stream.source.name)
@property @property
def written_bytes(self) -> int: def written_bytes(self) -> int:
@ -118,7 +128,7 @@ class ManagedStream(ManagedDownloadSource):
@property @property
def mime_type(self): def mime_type(self):
return guess_media_type(os.path.basename(self.descriptor.suggested_file_name))[0] return guess_media_type(os.path.basename(self.suggested_file_name))[0]
@property @property
def download_path(self): def download_path(self):
@ -151,7 +161,7 @@ class ManagedStream(ManagedDownloadSource):
log.info("start downloader for stream (sd hash: %s)", self.sd_hash) log.info("start downloader for stream (sd hash: %s)", self.sd_hash)
self._running.set() self._running.set()
try: try:
await asyncio.wait_for(self.downloader.start(), timeout, loop=self.loop) await asyncio.wait_for(self.downloader.start(), timeout)
except asyncio.TimeoutError: except asyncio.TimeoutError:
self._running.clear() self._running.clear()
raise DownloadSDTimeoutError(self.sd_hash) raise DownloadSDTimeoutError(self.sd_hash)
@ -164,7 +174,7 @@ class ManagedStream(ManagedDownloadSource):
if not self._file_name: if not self._file_name:
self._file_name = await get_next_available_file_name( self._file_name = await get_next_available_file_name(
self.loop, self.download_directory, self.loop, self.download_directory,
self._file_name or sanitize_file_name(self.descriptor.suggested_file_name) self._file_name or sanitize_file_name(self.suggested_file_name)
) )
file_name, download_dir = self._file_name, self.download_directory file_name, download_dir = self._file_name, self.download_directory
else: else:
@ -181,7 +191,7 @@ class ManagedStream(ManagedDownloadSource):
Stop any running save/stream tasks as well as the downloader and update the status in the database Stop any running save/stream tasks as well as the downloader and update the status in the database
""" """
self.stop_tasks() await self.stop_tasks()
if (finished and self.status != self.STATUS_FINISHED) or self.status == self.STATUS_RUNNING: if (finished and self.status != self.STATUS_FINISHED) or self.status == self.STATUS_RUNNING:
await self.update_status(self.STATUS_FINISHED if finished else self.STATUS_STOPPED) await self.update_status(self.STATUS_FINISHED if finished else self.STATUS_STOPPED)
@ -254,7 +264,7 @@ class ManagedStream(ManagedDownloadSource):
self.finished_writing.clear() self.finished_writing.clear()
self.started_writing.clear() self.started_writing.clear()
try: try:
open(output_path, 'wb').close() open(output_path, 'wb').close() # pylint: disable=consider-using-with
async for blob_info, decrypted in self._aiter_read_stream(connection_id=self.SAVING_ID): async for blob_info, decrypted in self._aiter_read_stream(connection_id=self.SAVING_ID):
log.info("write blob %i/%i", blob_info.blob_num + 1, len(self.descriptor.blobs) - 1) log.info("write blob %i/%i", blob_info.blob_num + 1, len(self.descriptor.blobs) - 1)
await self.loop.run_in_executor(None, self._write_decrypted_blob, output_path, decrypted) await self.loop.run_in_executor(None, self._write_decrypted_blob, output_path, decrypted)
@ -269,7 +279,7 @@ class ManagedStream(ManagedDownloadSource):
log.info("finished saving file for lbry://%s#%s (sd hash %s...) -> %s", self.claim_name, self.claim_id, log.info("finished saving file for lbry://%s#%s (sd hash %s...) -> %s", self.claim_name, self.claim_id,
self.sd_hash[:6], self.full_path) self.sd_hash[:6], self.full_path)
await self.blob_manager.storage.set_saved_file(self.stream_hash) await self.blob_manager.storage.set_saved_file(self.stream_hash)
except Exception as err: except (Exception, asyncio.CancelledError) as err:
if os.path.isfile(output_path): if os.path.isfile(output_path):
log.warning("removing incomplete download %s for %s", output_path, self.sd_hash) log.warning("removing incomplete download %s for %s", output_path, self.sd_hash)
os.remove(output_path) os.remove(output_path)
@ -296,14 +306,14 @@ class ManagedStream(ManagedDownloadSource):
self.download_directory = download_directory or self.download_directory or self.config.download_dir self.download_directory = download_directory or self.download_directory or self.config.download_dir
if not self.download_directory: if not self.download_directory:
raise ValueError("no directory to download to") raise ValueError("no directory to download to")
if not (file_name or self._file_name or self.descriptor.suggested_file_name): if not (file_name or self._file_name or self.suggested_file_name):
raise ValueError("no file name to download to") raise ValueError("no file name to download to")
if not os.path.isdir(self.download_directory): if not os.path.isdir(self.download_directory):
log.warning("download directory '%s' does not exist, attempting to make it", self.download_directory) log.warning("download directory '%s' does not exist, attempting to make it", self.download_directory)
os.mkdir(self.download_directory) os.mkdir(self.download_directory)
self._file_name = await get_next_available_file_name( self._file_name = await get_next_available_file_name(
self.loop, self.download_directory, self.loop, self.download_directory,
file_name or self._file_name or sanitize_file_name(self.descriptor.suggested_file_name) file_name or self._file_name or sanitize_file_name(self.suggested_file_name)
) )
await self.blob_manager.storage.change_file_download_dir_and_file_name( await self.blob_manager.storage.change_file_download_dir_and_file_name(
self.stream_hash, self.download_directory, self.file_name self.stream_hash, self.download_directory, self.file_name
@ -311,15 +321,16 @@ class ManagedStream(ManagedDownloadSource):
await self.update_status(ManagedStream.STATUS_RUNNING) await self.update_status(ManagedStream.STATUS_RUNNING)
self.file_output_task = self.loop.create_task(self._save_file(self.full_path)) self.file_output_task = self.loop.create_task(self._save_file(self.full_path))
try: try:
await asyncio.wait_for(self.started_writing.wait(), self.config.download_timeout, loop=self.loop) await asyncio.wait_for(self.started_writing.wait(), self.config.download_timeout)
except asyncio.TimeoutError: except asyncio.TimeoutError:
log.warning("timeout starting to write data for lbry://%s#%s", self.claim_name, self.claim_id) log.warning("timeout starting to write data for lbry://%s#%s", self.claim_name, self.claim_id)
self.stop_tasks() await self.stop_tasks()
await self.update_status(ManagedStream.STATUS_STOPPED) await self.update_status(ManagedStream.STATUS_STOPPED)
def stop_tasks(self): async def stop_tasks(self):
if self.file_output_task and not self.file_output_task.done(): if self.file_output_task and not self.file_output_task.done():
self.file_output_task.cancel() self.file_output_task.cancel()
await asyncio.gather(self.file_output_task, return_exceptions=True)
self.file_output_task = None self.file_output_task = None
while self.streaming_responses: while self.streaming_responses:
req, response = self.streaming_responses.pop() req, response = self.streaming_responses.pop()
@ -356,7 +367,7 @@ class ManagedStream(ManagedDownloadSource):
return sent return sent
except ConnectionError: except ConnectionError:
return sent return sent
except (OSError, Exception) as err: except (OSError, Exception, asyncio.CancelledError) as err:
if isinstance(err, asyncio.CancelledError): if isinstance(err, asyncio.CancelledError):
log.warning("stopped uploading %s#%s to reflector", self.claim_name, self.claim_id) log.warning("stopped uploading %s#%s to reflector", self.claim_name, self.claim_id)
elif isinstance(err, OSError): elif isinstance(err, OSError):
@ -372,9 +383,6 @@ class ManagedStream(ManagedDownloadSource):
protocol.transport.close() protocol.transport.close()
self.uploading_to_reflector = False self.uploading_to_reflector = False
if not self.fully_reflected.is_set():
self.fully_reflected.set()
await self.blob_manager.storage.update_reflected_stream(self.sd_hash, f"{host}:{port}")
return sent return sent
async def update_content_claim(self, claim_info: Optional[typing.Dict] = None): async def update_content_claim(self, claim_info: Optional[typing.Dict] = None):
@ -394,7 +402,7 @@ class ManagedStream(ManagedDownloadSource):
self.sd_hash[:6]) self.sd_hash[:6])
await self.stop() await self.stop()
return return
await asyncio.sleep(1, loop=self.loop) await asyncio.sleep(1)
def _prepare_range_response_headers(self, get_range: str) -> typing.Tuple[typing.Dict[str, str], int, int, int]: def _prepare_range_response_headers(self, get_range: str) -> typing.Tuple[typing.Dict[str, str], int, int, int]:
if '=' in get_range: if '=' in get_range:
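stop_tasks() becomes a coroutine so that, after cancelling file_output_task, it can actually wait for the task to unwind before dropping the reference; asyncio.gather(..., return_exceptions=True) absorbs the resulting CancelledError. The same cancel-and-await pattern in isolation:

import asyncio

async def stop_task(task):
    """Cancel a task and wait until it has really finished unwinding."""
    if task and not task.done():
        task.cancel()
        # return_exceptions=True keeps the CancelledError from propagating here
        await asyncio.gather(task, return_exceptions=True)
    return None   # caller typically reassigns, e.g. self.file_output_task = None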


@ -59,7 +59,7 @@ class StreamReflectorClient(asyncio.Protocol):
return return
async def send_request(self, request_dict: typing.Dict, timeout: int = 180): async def send_request(self, request_dict: typing.Dict, timeout: int = 180):
msg = json.dumps(request_dict) msg = json.dumps(request_dict, sort_keys=True)
try: try:
self.transport.write(msg.encode()) self.transport.write(msg.encode())
self.pending_request = self.loop.create_task(asyncio.wait_for(self.response_queue.get(), timeout)) self.pending_request = self.loop.create_task(asyncio.wait_for(self.response_queue.get(), timeout))
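sort_keys=True makes the serialized request bytes independent of dict insertion order, so reflector protocol messages (and tests asserting on the exact bytes sent) stay deterministic. For example, with field names of the kind the reflector handshake uses (shown here only for illustration):

import json

a = json.dumps({"sd_blob_hash": "ab" * 48, "sd_blob_size": 1024}, sort_keys=True)
b = json.dumps({"sd_blob_size": 1024, "sd_blob_hash": "ab" * 48}, sort_keys=True)
assert a == b   # identical bytes on the wire either way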


@ -17,11 +17,11 @@ log = logging.getLogger(__name__)
class ReflectorServerProtocol(asyncio.Protocol): class ReflectorServerProtocol(asyncio.Protocol):
def __init__(self, blob_manager: 'BlobManager', response_chunk_size: int = 10000, def __init__(self, blob_manager: 'BlobManager', response_chunk_size: int = 10000,
stop_event: asyncio.Event = None, incoming_event: asyncio.Event = None, stop_event: asyncio.Event = None, incoming_event: asyncio.Event = None,
not_incoming_event: asyncio.Event = None): not_incoming_event: asyncio.Event = None, partial_event: asyncio.Event = None):
self.loop = asyncio.get_event_loop() self.loop = asyncio.get_event_loop()
self.blob_manager = blob_manager self.blob_manager = blob_manager
self.server_task: asyncio.Task = None self.server_task: asyncio.Task = None
self.started_listening = asyncio.Event(loop=self.loop) self.started_listening = asyncio.Event()
self.buf = b'' self.buf = b''
self.transport: asyncio.StreamWriter = None self.transport: asyncio.StreamWriter = None
self.writer: typing.Optional['HashBlobWriter'] = None self.writer: typing.Optional['HashBlobWriter'] = None
@ -29,11 +29,12 @@ class ReflectorServerProtocol(asyncio.Protocol):
self.descriptor: typing.Optional['StreamDescriptor'] = None self.descriptor: typing.Optional['StreamDescriptor'] = None
self.sd_blob: typing.Optional['BlobFile'] = None self.sd_blob: typing.Optional['BlobFile'] = None
self.received = [] self.received = []
self.incoming = incoming_event or asyncio.Event(loop=self.loop) self.incoming = incoming_event or asyncio.Event()
self.not_incoming = not_incoming_event or asyncio.Event(loop=self.loop) self.not_incoming = not_incoming_event or asyncio.Event()
self.stop_event = stop_event or asyncio.Event(loop=self.loop) self.stop_event = stop_event or asyncio.Event()
self.chunk_size = response_chunk_size self.chunk_size = response_chunk_size
self.wait_for_stop_task: typing.Optional[asyncio.Task] = None self.wait_for_stop_task: typing.Optional[asyncio.Task] = None
self.partial_event = partial_event
async def wait_for_stop(self): async def wait_for_stop(self):
await self.stop_event.wait() await self.stop_event.wait()
@ -93,7 +94,7 @@ class ReflectorServerProtocol(asyncio.Protocol):
self.incoming.set() self.incoming.set()
self.send_response({"send_sd_blob": True}) self.send_response({"send_sd_blob": True})
try: try:
await asyncio.wait_for(self.sd_blob.verified.wait(), 30, loop=self.loop) await asyncio.wait_for(self.sd_blob.verified.wait(), 30)
self.descriptor = await StreamDescriptor.from_stream_descriptor_blob( self.descriptor = await StreamDescriptor.from_stream_descriptor_blob(
self.loop, self.blob_manager.blob_dir, self.sd_blob self.loop, self.blob_manager.blob_dir, self.sd_blob
) )
@ -115,10 +116,14 @@ class ReflectorServerProtocol(asyncio.Protocol):
if self.writer: if self.writer:
self.writer.close_handle() self.writer.close_handle()
self.writer = None self.writer = None
self.send_response({"send_sd_blob": False, 'needed': [
blob.blob_hash for blob in self.descriptor.blobs[:-1] needs = [blob.blob_hash
if not self.blob_manager.get_blob(blob.blob_hash).get_is_verified() for blob in self.descriptor.blobs[:-1]
]}) if not self.blob_manager.get_blob(blob.blob_hash).get_is_verified()]
if needs and not self.partial_event.is_set():
needs = needs[:3]
self.partial_event.set()
self.send_response({"send_sd_blob": False, 'needed_blobs': needs})
return return
return return
elif self.descriptor: elif self.descriptor:
@ -135,7 +140,7 @@ class ReflectorServerProtocol(asyncio.Protocol):
self.incoming.set() self.incoming.set()
self.send_response({"send_blob": True}) self.send_response({"send_blob": True})
try: try:
await asyncio.wait_for(blob.verified.wait(), 30, loop=self.loop) await asyncio.wait_for(blob.verified.wait(), 30)
self.send_response({"received_blob": True}) self.send_response({"received_blob": True})
except asyncio.TimeoutError: except asyncio.TimeoutError:
self.send_response({"received_blob": False}) self.send_response({"received_blob": False})
@ -153,29 +158,29 @@ class ReflectorServerProtocol(asyncio.Protocol):
class ReflectorServer: class ReflectorServer:
def __init__(self, blob_manager: 'BlobManager', response_chunk_size: int = 10000, def __init__(self, blob_manager: 'BlobManager', response_chunk_size: int = 10000,
stop_event: asyncio.Event = None, incoming_event: asyncio.Event = None, stop_event: asyncio.Event = None, incoming_event: asyncio.Event = None,
not_incoming_event: asyncio.Event = None): not_incoming_event: asyncio.Event = None, partial_needs=False):
self.loop = asyncio.get_event_loop() self.loop = asyncio.get_event_loop()
self.blob_manager = blob_manager self.blob_manager = blob_manager
self.server_task: typing.Optional[asyncio.Task] = None self.server_task: typing.Optional[asyncio.Task] = None
self.started_listening = asyncio.Event(loop=self.loop) self.started_listening = asyncio.Event()
self.stopped_listening = asyncio.Event(loop=self.loop) self.stopped_listening = asyncio.Event()
self.incoming_event = incoming_event or asyncio.Event(loop=self.loop) self.incoming_event = incoming_event or asyncio.Event()
self.not_incoming_event = not_incoming_event or asyncio.Event(loop=self.loop) self.not_incoming_event = not_incoming_event or asyncio.Event()
self.response_chunk_size = response_chunk_size self.response_chunk_size = response_chunk_size
self.stop_event = stop_event self.stop_event = stop_event
self.partial_needs = partial_needs # for testing cases where it doesn't know what it wants
def start_server(self, port: int, interface: typing.Optional[str] = '0.0.0.0'): def start_server(self, port: int, interface: typing.Optional[str] = '0.0.0.0'):
if self.server_task is not None: if self.server_task is not None:
raise Exception("already running") raise Exception("already running")
async def _start_server(): async def _start_server():
server = await self.loop.create_server( partial_event = asyncio.Event()
lambda: ReflectorServerProtocol( if not self.partial_needs:
self.blob_manager, self.response_chunk_size, self.stop_event, self.incoming_event, partial_event.set()
self.not_incoming_event server = await self.loop.create_server(lambda: ReflectorServerProtocol(
), self.blob_manager, self.response_chunk_size, self.stop_event, self.incoming_event,
interface, port self.not_incoming_event, partial_event), interface, port)
)
self.started_listening.set() self.started_listening.set()
self.stopped_listening.clear() self.stopped_listening.clear()
log.info("Reflector server listening on TCP %s:%i", interface, port) log.info("Reflector server listening on TCP %s:%i", interface, port)


@ -54,7 +54,7 @@ class StreamManager(SourceManager):
self.re_reflect_task: Optional[asyncio.Task] = None self.re_reflect_task: Optional[asyncio.Task] = None
self.update_stream_finished_futs: typing.List[asyncio.Future] = [] self.update_stream_finished_futs: typing.List[asyncio.Future] = []
self.running_reflector_uploads: typing.Dict[str, asyncio.Task] = {} self.running_reflector_uploads: typing.Dict[str, asyncio.Task] = {}
self.started = asyncio.Event(loop=self.loop) self.started = asyncio.Event()
@property @property
def streams(self): def streams(self):
@ -70,6 +70,7 @@ class StreamManager(SourceManager):
async def recover_streams(self, file_infos: typing.List[typing.Dict]): async def recover_streams(self, file_infos: typing.List[typing.Dict]):
to_restore = [] to_restore = []
to_check = []
async def recover_stream(sd_hash: str, stream_hash: str, stream_name: str, async def recover_stream(sd_hash: str, stream_hash: str, stream_name: str,
suggested_file_name: str, key: str, suggested_file_name: str, key: str,
@ -82,6 +83,7 @@ class StreamManager(SourceManager):
if not descriptor: if not descriptor:
return return
to_restore.append((descriptor, sd_blob, content_fee)) to_restore.append((descriptor, sd_blob, content_fee))
to_check.extend([sd_blob.blob_hash] + [blob.blob_hash for blob in descriptor.blobs[:-1]])
await asyncio.gather(*[ await asyncio.gather(*[
recover_stream( recover_stream(
@ -93,6 +95,8 @@ class StreamManager(SourceManager):
if to_restore: if to_restore:
await self.storage.recover_streams(to_restore, self.config.download_dir) await self.storage.recover_streams(to_restore, self.config.download_dir)
if to_check:
await self.blob_manager.ensure_completed_blobs_status(to_check)
# if self.blob_manager._save_blobs: # if self.blob_manager._save_blobs:
# log.info("Recovered %i/%i attempted streams", len(to_restore), len(file_infos)) # log.info("Recovered %i/%i attempted streams", len(to_restore), len(file_infos))
@ -146,7 +150,7 @@ class StreamManager(SourceManager):
file_info['added_on'], file_info['fully_reflected'] file_info['added_on'], file_info['fully_reflected']
))) )))
if add_stream_tasks: if add_stream_tasks:
await asyncio.gather(*add_stream_tasks, loop=self.loop) await asyncio.gather(*add_stream_tasks)
log.info("Started stream manager with %i files", len(self._sources)) log.info("Started stream manager with %i files", len(self._sources))
if not self.node: if not self.node:
log.info("no DHT node given, resuming downloads trusting that we can contact reflector") log.info("no DHT node given, resuming downloads trusting that we can contact reflector")
@ -155,14 +159,11 @@ class StreamManager(SourceManager):
self.resume_saving_task = asyncio.ensure_future(asyncio.gather( self.resume_saving_task = asyncio.ensure_future(asyncio.gather(
*(self._sources[sd_hash].save_file(file_name, download_directory) *(self._sources[sd_hash].save_file(file_name, download_directory)
for (file_name, download_directory, sd_hash) in to_resume_saving), for (file_name, download_directory, sd_hash) in to_resume_saving),
loop=self.loop
)) ))
async def reflect_streams(self): async def reflect_streams(self):
try: try:
return await self._reflect_streams() return await self._reflect_streams()
except asyncio.CancelledError:
raise
except Exception: except Exception:
log.exception("reflector task encountered an unexpected error!") log.exception("reflector task encountered an unexpected error!")
@ -182,21 +183,21 @@ class StreamManager(SourceManager):
batch.append(self.reflect_stream(stream)) batch.append(self.reflect_stream(stream))
if len(batch) >= self.config.concurrent_reflector_uploads: if len(batch) >= self.config.concurrent_reflector_uploads:
log.debug("waiting for batch of %s reflecting streams", len(batch)) log.debug("waiting for batch of %s reflecting streams", len(batch))
await asyncio.gather(*batch, loop=self.loop) await asyncio.gather(*batch)
log.debug("done processing %s streams", len(batch)) log.debug("done processing %s streams", len(batch))
batch = [] batch = []
if batch: if batch:
log.debug("waiting for batch of %s reflecting streams", len(batch)) log.debug("waiting for batch of %s reflecting streams", len(batch))
await asyncio.gather(*batch, loop=self.loop) await asyncio.gather(*batch)
log.debug("done processing %s streams", len(batch)) log.debug("done processing %s streams", len(batch))
await asyncio.sleep(300, loop=self.loop) await asyncio.sleep(300)
async def start(self): async def start(self):
await super().start() await super().start()
self.re_reflect_task = self.loop.create_task(self.reflect_streams()) self.re_reflect_task = self.loop.create_task(self.reflect_streams())
def stop(self): async def stop(self):
super().stop() await super().stop()
if self.resume_saving_task and not self.resume_saving_task.done(): if self.resume_saving_task and not self.resume_saving_task.done():
self.resume_saving_task.cancel() self.resume_saving_task.cancel()
if self.re_reflect_task and not self.re_reflect_task.done(): if self.re_reflect_task and not self.re_reflect_task.done():
@ -215,7 +216,7 @@ class StreamManager(SourceManager):
server, port = random.choice(self.config.reflector_servers) server, port = random.choice(self.config.reflector_servers)
if stream.sd_hash in self.running_reflector_uploads: if stream.sd_hash in self.running_reflector_uploads:
return self.running_reflector_uploads[stream.sd_hash] return self.running_reflector_uploads[stream.sd_hash]
task = self.loop.create_task(stream.upload_to_reflector(server, port)) task = self.loop.create_task(self._retriable_reflect_stream(stream, server, port))
self.running_reflector_uploads[stream.sd_hash] = task self.running_reflector_uploads[stream.sd_hash] = task
task.add_done_callback( task.add_done_callback(
lambda _: None if stream.sd_hash not in self.running_reflector_uploads else lambda _: None if stream.sd_hash not in self.running_reflector_uploads else
@ -223,6 +224,14 @@ class StreamManager(SourceManager):
) )
return task return task
@staticmethod
async def _retriable_reflect_stream(stream, host, port):
sent = await stream.upload_to_reflector(host, port)
while not stream.is_fully_reflected and stream.reflector_progress > 0 and len(sent) > 0:
stream.reflector_progress = 0
sent = await stream.upload_to_reflector(host, port)
return sent
async def create(self, file_path: str, key: Optional[bytes] = None, async def create(self, file_path: str, key: Optional[bytes] = None,
iv_generator: Optional[typing.Generator[bytes, None, None]] = None) -> ManagedStream: iv_generator: Optional[typing.Generator[bytes, None, None]] = None) -> ManagedStream:
descriptor = await StreamDescriptor.create_stream( descriptor = await StreamDescriptor.create_stream(
@ -230,7 +239,7 @@ class StreamManager(SourceManager):
blob_completed_callback=self.blob_manager.blob_completed blob_completed_callback=self.blob_manager.blob_completed
) )
await self.storage.store_stream( await self.storage.store_stream(
self.blob_manager.get_blob(descriptor.sd_hash), descriptor self.blob_manager.get_blob(descriptor.sd_hash, is_mine=True), descriptor
) )
row_id = await self.storage.save_published_file( row_id = await self.storage.save_published_file(
descriptor.stream_hash, os.path.basename(file_path), os.path.dirname(file_path), 0 descriptor.stream_hash, os.path.basename(file_path), os.path.dirname(file_path), 0
@ -251,7 +260,7 @@ class StreamManager(SourceManager):
return return
if source.identifier in self.running_reflector_uploads: if source.identifier in self.running_reflector_uploads:
self.running_reflector_uploads[source.identifier].cancel() self.running_reflector_uploads[source.identifier].cancel()
source.stop_tasks() await source.stop_tasks()
if source.identifier in self.streams: if source.identifier in self.streams:
del self.streams[source.identifier] del self.streams[source.identifier]
blob_hashes = [source.identifier] + [b.blob_hash for b in source.descriptor.blobs[:-1]] blob_hashes = [source.identifier] + [b.blob_hash for b in source.descriptor.blobs[:-1]]
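_retriable_reflect_stream() keeps calling upload_to_reflector() again as long as the previous attempt sent something and made progress but the stream is still not fully reflected, so a dropped connection resumes immediately instead of waiting for the next 300-second reflect cycle. The batching in _reflect_streams() is the usual gather-in-chunks pattern; a standalone sketch:

import asyncio

async def run_in_batches(coroutines, batch_size):
    # await coroutines in groups of batch_size, like the reflect loop above
    batch = []
    for coro in coroutines:
        batch.append(coro)
        if len(batch) >= batch_size:
            await asyncio.gather(*batch)
            batch = []
    if batch:
        await asyncio.gather(*batch)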


@ -17,8 +17,10 @@ from functools import partial
from lbry.wallet import WalletManager, Wallet, Ledger, Account, Transaction from lbry.wallet import WalletManager, Wallet, Ledger, Account, Transaction
from lbry.conf import Config from lbry.conf import Config
from lbry.wallet.util import satoshis_to_coins from lbry.wallet.util import satoshis_to_coins
from lbry.wallet.dewies import lbc_to_dewies
from lbry.wallet.orchstr8 import Conductor from lbry.wallet.orchstr8 import Conductor
from lbry.wallet.orchstr8.node import BlockchainNode, WalletNode from lbry.wallet.orchstr8.node import LBCWalletNode, WalletNode
from lbry.schema.claim import Claim
from lbry.extras.daemon.daemon import Daemon, jsonrpc_dumps_pretty from lbry.extras.daemon.daemon import Daemon, jsonrpc_dumps_pretty
from lbry.extras.daemon.components import Component, WalletComponent from lbry.extras.daemon.components import Component, WalletComponent
@ -84,6 +86,7 @@ class AsyncioTestCase(unittest.TestCase):
# https://bugs.python.org/issue32972 # https://bugs.python.org/issue32972
LOOP_SLOW_CALLBACK_DURATION = 0.2 LOOP_SLOW_CALLBACK_DURATION = 0.2
TIMEOUT = 120.0
maxDiff = None maxDiff = None
@ -131,15 +134,18 @@ class AsyncioTestCase(unittest.TestCase):
with outcome.testPartExecutor(self): with outcome.testPartExecutor(self):
self.setUp() self.setUp()
self.add_timeout()
self.loop.run_until_complete(self.asyncSetUp()) self.loop.run_until_complete(self.asyncSetUp())
if outcome.success: if outcome.success:
outcome.expecting_failure = expecting_failure outcome.expecting_failure = expecting_failure
with outcome.testPartExecutor(self, isTest=True): with outcome.testPartExecutor(self, isTest=True):
maybe_coroutine = testMethod() maybe_coroutine = testMethod()
if asyncio.iscoroutine(maybe_coroutine): if asyncio.iscoroutine(maybe_coroutine):
self.add_timeout()
self.loop.run_until_complete(maybe_coroutine) self.loop.run_until_complete(maybe_coroutine)
outcome.expecting_failure = False outcome.expecting_failure = False
with outcome.testPartExecutor(self): with outcome.testPartExecutor(self):
self.add_timeout()
self.loop.run_until_complete(self.asyncTearDown()) self.loop.run_until_complete(self.asyncTearDown())
self.tearDown() self.tearDown()
@ -187,8 +193,25 @@ class AsyncioTestCase(unittest.TestCase):
with outcome.testPartExecutor(self): with outcome.testPartExecutor(self):
maybe_coroutine = function(*args, **kwargs) maybe_coroutine = function(*args, **kwargs)
if asyncio.iscoroutine(maybe_coroutine): if asyncio.iscoroutine(maybe_coroutine):
self.add_timeout()
self.loop.run_until_complete(maybe_coroutine) self.loop.run_until_complete(maybe_coroutine)
def cancel(self):
for task in asyncio.all_tasks(self.loop):
if not task.done():
task.print_stack()
task.cancel()
def add_timeout(self):
if self.TIMEOUT:
self.loop.call_later(self.TIMEOUT, self.check_timeout, time())
def check_timeout(self, started):
if time() - started >= self.TIMEOUT:
self.cancel()
else:
self.loop.call_later(self.TIMEOUT, self.check_timeout, started)
class AdvanceTimeTestCase(AsyncioTestCase): class AdvanceTimeTestCase(AsyncioTestCase):
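The new add_timeout()/check_timeout() pair is a watchdog: each run_until_complete() is preceded by a call_later() scheduled TIMEOUT seconds out, and when the callback finds the test over budget it prints the stack of, and cancels, every unfinished task so a hung test fails instead of blocking CI indefinitely. The same idea condensed into free functions:

import asyncio
from time import time

TIMEOUT = 120.0

def add_timeout(loop):
    loop.call_later(TIMEOUT, check_timeout, loop, time())

def check_timeout(loop, started):
    if time() - started >= TIMEOUT:
        for task in asyncio.all_tasks(loop):
            if not task.done():
                task.print_stack()
                task.cancel()    # unblocks run_until_complete() with CancelledError
    else:
        loop.call_later(TIMEOUT, check_timeout, loop, started)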
@ -213,7 +236,7 @@ class IntegrationTestCase(AsyncioTestCase):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
self.conductor: Optional[Conductor] = None self.conductor: Optional[Conductor] = None
self.blockchain: Optional[BlockchainNode] = None self.blockchain: Optional[LBCWalletNode] = None
self.wallet_node: Optional[WalletNode] = None self.wallet_node: Optional[WalletNode] = None
self.manager: Optional[WalletManager] = None self.manager: Optional[WalletManager] = None
self.ledger: Optional[Ledger] = None self.ledger: Optional[Ledger] = None
@ -222,13 +245,15 @@ class IntegrationTestCase(AsyncioTestCase):
async def asyncSetUp(self): async def asyncSetUp(self):
self.conductor = Conductor(seed=self.SEED) self.conductor = Conductor(seed=self.SEED)
await self.conductor.start_blockchain() await self.conductor.start_lbcd()
self.addCleanup(self.conductor.stop_blockchain) self.addCleanup(self.conductor.stop_lbcd)
await self.conductor.start_lbcwallet()
self.addCleanup(self.conductor.stop_lbcwallet)
await self.conductor.start_spv() await self.conductor.start_spv()
self.addCleanup(self.conductor.stop_spv) self.addCleanup(self.conductor.stop_spv)
await self.conductor.start_wallet() await self.conductor.start_wallet()
self.addCleanup(self.conductor.stop_wallet) self.addCleanup(self.conductor.stop_wallet)
self.blockchain = self.conductor.blockchain_node self.blockchain = self.conductor.lbcwallet_node
self.wallet_node = self.conductor.wallet_node self.wallet_node = self.conductor.wallet_node
self.manager = self.wallet_node.manager self.manager = self.wallet_node.manager
self.ledger = self.wallet_node.ledger self.ledger = self.wallet_node.ledger
@ -242,6 +267,13 @@ class IntegrationTestCase(AsyncioTestCase):
def broadcast(self, tx): def broadcast(self, tx):
return self.ledger.broadcast(tx) return self.ledger.broadcast(tx)
async def broadcast_and_confirm(self, tx, ledger=None):
ledger = ledger or self.ledger
notifications = asyncio.create_task(ledger.wait(tx))
await ledger.broadcast(tx)
await notifications
await self.generate_and_wait(1, [tx.id], ledger)
async def on_header(self, height): async def on_header(self, height):
if self.ledger.headers.height < height: if self.ledger.headers.height < height:
await self.ledger.on_header.where( await self.ledger.on_header.where(
@ -249,11 +281,29 @@ class IntegrationTestCase(AsyncioTestCase):
) )
return True return True
def on_transaction_id(self, txid, ledger=None): async def send_to_address_and_wait(self, address, amount, blocks_to_generate=0, ledger=None):
return (ledger or self.ledger).on_transaction.where( tx_watch = []
lambda e: e.tx.id == txid txid = None
done = False
watcher = (ledger or self.ledger).on_transaction.where(
lambda e: e.tx.id == txid or done or tx_watch.append(e.tx.id)
) )
txid = await self.blockchain.send_to_address(address, amount)
done = txid in tx_watch
await watcher
await self.generate_and_wait(blocks_to_generate, [txid], ledger)
return txid
async def generate_and_wait(self, blocks_to_generate, txids, ledger=None):
if blocks_to_generate > 0:
watcher = (ledger or self.ledger).on_transaction.where(
lambda e: ((e.tx.id in txids and txids.remove(e.tx.id)), len(txids) <= 0)[-1] # multi-statement lambda
)
await self.generate(blocks_to_generate)
await watcher
def on_address_update(self, address): def on_address_update(self, address):
return self.ledger.on_transaction.where( return self.ledger.on_transaction.where(
lambda e: e.address == address lambda e: e.address == address
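The lambdas in send_to_address_and_wait() and generate_and_wait() pack a side effect and a predicate into one expression: the calls to tx_watch.append() and txids.remove() do the bookkeeping (both return None, which is falsy), and a trailing [-1] picks the actual condition out of a tuple, since a lambda may hold only a single expression. The generate_and_wait predicate unrolled into a plain function:

# Unrolled equivalent of the "multi-statement lambda" used with on_transaction.where();
# e is assumed to be a transaction event exposing e.tx.id, as in the code above.
def seen_all(e, txids):
    if e.tx.id in txids:      # side effect hidden in the lambda's first tuple element
        txids.remove(e.tx.id)
    return len(txids) <= 0    # the lambda returns the tuple's last element via [-1]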
@ -264,6 +314,22 @@ class IntegrationTestCase(AsyncioTestCase):
lambda e: e.tx.id == tx.id and e.address == address lambda e: e.tx.id == tx.id and e.address == address
) )
async def generate(self, blocks):
""" Ask lbrycrd to generate some blocks and wait until ledger has them. """
prepare = self.ledger.on_header.where(self.blockchain.is_expected_block)
self.conductor.spv_node.server.synchronized.clear()
await self.blockchain.generate(blocks)
height = self.blockchain.block_expected
await prepare # no guarantee that it didn't happen already, so start waiting from before calling generate
while True:
await self.conductor.spv_node.server.synchronized.wait()
self.conductor.spv_node.server.synchronized.clear()
if self.conductor.spv_node.server.db.db_height < height:
continue
if self.conductor.spv_node.server._es_height < height:
continue
break
class FakeExchangeRateManager(ExchangeRateManager):
@@ -324,24 +390,28 @@ class CommandTestCase(IntegrationTestCase):
self.skip_libtorrent = True
async def asyncSetUp(self):
await super().asyncSetUp()
logging.getLogger('lbry.blob_exchange').setLevel(self.VERBOSITY)
logging.getLogger('lbry.daemon').setLevel(self.VERBOSITY)
logging.getLogger('lbry.stream').setLevel(self.VERBOSITY)
logging.getLogger('lbry.wallet').setLevel(self.VERBOSITY)
await super().asyncSetUp()
self.daemon = await self.add_daemon(self.wallet_node)
await self.account.ensure_address_gap()
address = (await self.account.receiving.get_addresses(limit=1, only_usable=True))[0]
sendtxid = await self.blockchain.send_to_address(address, 10)
await self.confirm_tx(sendtxid)
await self.generate(5)
await self.send_to_address_and_wait(address, 10, 6)
server_tmp_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, server_tmp_dir)
self.server_config = Config(
data_dir=server_tmp_dir,
wallet_dir=server_tmp_dir,
save_files=True,
download_dir=server_tmp_dir
)
self.server_config.transaction_cache_size = 10000
self.server_storage = SQLiteStorage(self.server_config, ':memory:')
await self.server_storage.open()
@@ -365,6 +435,7 @@ class CommandTestCase(IntegrationTestCase):
await daemon.stop()
async def add_daemon(self, wallet_node=None, seed=None):
start_wallet_node = False
if wallet_node is None:
wallet_node = WalletNode(
self.wallet_node.manager_class,
@@ -372,22 +443,24 @@ class CommandTestCase(IntegrationTestCase):
port=self.extra_wallet_node_port
)
self.extra_wallet_node_port += 1
start_wallet_node = True
self.extra_wallet_nodes.append(wallet_node)
upload_dir = os.path.join(wallet_node.data_path, 'uploads')
os.mkdir(upload_dir)
conf = Config(
# needed during instantiation to access known_hubs path
data_dir=wallet_node.data_path,
wallet_dir=wallet_node.data_path,
save_files=True,
download_dir=wallet_node.data_path
)
conf.upload_dir = upload_dir # not a real conf setting
conf.share_usage_data = False
conf.use_upnp = False
conf.reflect_streams = True
conf.blockchain_name = 'lbrycrd_regtest'
conf.lbryum_servers = [(self.conductor.spv_node.hostname, self.conductor.spv_node.port)]
conf.reflector_servers = [('127.0.0.1', 5566)]
conf.fixed_peers = [('127.0.0.1', 5567)]
conf.known_dht_nodes = []
@@ -399,7 +472,13 @@ class CommandTestCase(IntegrationTestCase):
]
if self.skip_libtorrent:
conf.components_to_skip.append(LIBTORRENT_COMPONENT)
wallet_node.manager.config = conf
if start_wallet_node:
await wallet_node.start(self.conductor.spv_node, seed=seed, config=conf)
self.extra_wallet_nodes.append(wallet_node)
else:
wallet_node.manager.config = conf
wallet_node.manager.ledger.config['known_hubs'] = conf.known_hubs
def wallet_maker(component_manager):
wallet_component = WalletComponent(component_manager)
@@ -420,9 +499,14 @@ class CommandTestCase(IntegrationTestCase):
async def confirm_tx(self, txid, ledger=None):
""" Wait for tx to be in mempool, then generate a block, wait for tx to be in a block. """
await self.on_transaction_id(txid, ledger)
await self.generate(1)
await self.on_transaction_id(txid, ledger)
# await (ledger or self.ledger).on_transaction.where(lambda e: e.tx.id == txid)
on_tx = (ledger or self.ledger).on_transaction.where(lambda e: e.tx.id == txid)
await asyncio.wait([self.generate(1), on_tx], timeout=5)
# # actually, if it's in the mempool or in the block we're fine
# await self.generate_and_wait(1, [txid], ledger=ledger)
# return txid
return txid
async def on_transaction_dict(self, tx):
@@ -437,11 +521,6 @@ class CommandTestCase(IntegrationTestCase):
addresses.add(txo['address'])
return list(addresses)
async def generate(self, blocks):
""" Ask lbrycrd to generate some blocks and wait until ledger has them. """
await self.blockchain.generate(blocks)
await self.ledger.on_header.where(self.blockchain.is_expected_block)
async def blockchain_claim_name(self, name: str, value: str, amount: str, confirm=True):
txid = await self.blockchain._cli_cmnd('claimname', name, value, amount)
if confirm:
@@ -462,12 +541,27 @@ class CommandTestCase(IntegrationTestCase):
""" Synchronous version of `out` method. """
return json.loads(jsonrpc_dumps_pretty(value, ledger=self.ledger))['result']
async def confirm_and_render(self, awaitable, confirm, return_tx=False) -> Transaction:
tx = await awaitable
if confirm:
await self.ledger.wait(tx)
await self.generate(1)
await self.ledger.wait(tx, self.blockchain.block_expected)
if not return_tx:
return self.sout(tx)
return tx
async def create_nondeterministic_channel(self, name, price, pubkey_bytes, daemon=None, blocking=False):
account = (daemon or self.daemon).wallet_manager.default_account
claim_address = await account.receiving.get_or_create_usable_address()
claim = Claim()
claim.channel.public_key_bytes = pubkey_bytes
tx = await Transaction.claim_create(
name, claim, lbc_to_dewies(price),
claim_address, [self.account], self.account
)
await tx.sign([self.account])
await (daemon or self.daemon).broadcast_or_release(tx, blocking)
return self.sout(tx)
def create_upload_file(self, data, prefix=None, suffix=None):
@@ -479,19 +573,19 @@ class CommandTestCase(IntegrationTestCase):
async def stream_create(
self, name='hovercraft', bid='1.0', file_path=None,
data=b'hi!', confirm=True, prefix=None, suffix=None, return_tx=False, **kwargs):
if file_path is None and data is not None:
file_path = self.create_upload_file(data=data, prefix=prefix, suffix=suffix)
return await self.confirm_and_render(
self.daemon.jsonrpc_stream_create(name, bid, file_path=file_path, **kwargs), confirm, return_tx
)
async def stream_update(
self, claim_id, data=None, prefix=None, suffix=None, confirm=True, return_tx=False, **kwargs):
if data is not None:
file_path = self.create_upload_file(data=data, prefix=prefix, suffix=suffix)
return await self.confirm_and_render(
self.daemon.jsonrpc_stream_update(claim_id, file_path=file_path, **kwargs), confirm, return_tx
)
return await self.confirm_and_render(
self.daemon.jsonrpc_stream_update(claim_id, **kwargs), confirm
)
@@ -567,6 +661,11 @@ class CommandTestCase(IntegrationTestCase):
self.daemon.jsonrpc_support_abandon(*args, **kwargs), confirm
)
async def account_send(self, *args, confirm=True, **kwargs):
return await self.confirm_and_render(
self.daemon.jsonrpc_account_send(*args, **kwargs), confirm
)
async def wallet_send(self, *args, confirm=True, **kwargs):
return await self.confirm_and_render(
self.daemon.jsonrpc_wallet_send(*args, **kwargs), confirm
@@ -580,12 +679,21 @@ class CommandTestCase(IntegrationTestCase):
await asyncio.wait([self.ledger.wait(tx, self.blockchain.block_expected) for tx in txs])
return self.sout(txs)
async def blob_clean(self):
return await self.out(self.daemon.jsonrpc_blob_clean())
async def status(self):
return await self.out(self.daemon.jsonrpc_status())
async def resolve(self, uri, **kwargs):
return (await self.out(self.daemon.jsonrpc_resolve(uri, **kwargs)))[uri]
async def claim_search(self, **kwargs):
return (await self.out(self.daemon.jsonrpc_claim_search(**kwargs)))['items']
async def get_claim_by_claim_id(self, claim_id):
return await self.out(self.ledger.get_claim_by_claim_id(claim_id))
async def file_list(self, *args, **kwargs):
return (await self.out(self.daemon.jsonrpc_file_list(*args, **kwargs)))['items']
@@ -610,6 +718,9 @@ class CommandTestCase(IntegrationTestCase):
async def transaction_list(self, *args, **kwargs):
return (await self.out(self.daemon.jsonrpc_transaction_list(*args, **kwargs)))['items']
async def blob_list(self, *args, **kwargs):
return (await self.out(self.daemon.jsonrpc_blob_list(*args, **kwargs)))['items']
@staticmethod
def get_claim_id(tx):
return tx['outputs'][0]['claim_id']


@@ -10,47 +10,13 @@ from typing import Optional
import libtorrent
NOTIFICATION_MASKS = [
"error",
"peer",
"port_mapping",
"storage",
"tracker",
"debug",
"status",
"progress",
"ip_block",
"dht",
"stats",
"session_log",
"torrent_log",
"peer_log",
"incoming_request",
"dht_log",
"dht_operation",
"port_mapping_log",
"picker_log",
"file_progress",
"piece_progress",
"upload",
"block_progress"
]
log = logging.getLogger(__name__)
DEFAULT_FLAGS = ( # fixme: somehow the logic here is inverted?
libtorrent.add_torrent_params_flags_t.flag_auto_managed
| libtorrent.add_torrent_params_flags_t.flag_update_subscribe
)
def get_notification_type(notification) -> str:
for i, notification_type in enumerate(NOTIFICATION_MASKS):
if (1 << i) & notification:
return notification_type
raise ValueError("unrecognized notification type")
class TorrentHandle:
def __init__(self, loop, executor, handle):
self._loop = loop
@@ -121,7 +87,7 @@ class TorrentHandle:
self._show_status()
if self.finished.is_set():
break
await asyncio.sleep(0.1)
async def pause(self):
await self._loop.run_in_executor(
@@ -156,10 +122,8 @@ class TorrentSession:
async def bind(self, interface: str = '0.0.0.0', port: int = 10889):
settings = {
'listen_interfaces': f"{interface}:{port}",
'enable_outgoing_utp': True,
'enable_incoming_utp': True,
'enable_outgoing_tcp': False,
'enable_incoming_tcp': False
'enable_natpmp': False,
'enable_upnp': False
}
self._session = await self._loop.run_in_executor(
self._executor, libtorrent.session, settings # pylint: disable=c-extension-no-member
@@ -186,7 +150,7 @@ class TorrentSession:
await self._loop.run_in_executor(
self._executor, self._pop_alerts
)
await asyncio.sleep(1)
async def pause(self):
await self._loop.run_in_executor(


@@ -36,7 +36,7 @@ class Torrent:
def __init__(self, loop, handle):
self._loop = loop
self._handle = handle
self.finished = asyncio.Event()
def _threaded_update_status(self):
status = self._handle.status()
@@ -58,7 +58,7 @@ class Torrent:
log.info("finished downloading torrent!")
await self.pause()
break
await asyncio.sleep(1)
async def pause(self):
log.info("pause torrent")


@@ -74,7 +74,7 @@ class TorrentSource(ManagedDownloadSource):
def bt_infohash(self):
return self.identifier
async def stop_tasks(self):
pass
@property
@@ -118,8 +118,8 @@ class TorrentManager(SourceManager):
async def start(self):
await super().start()
async def stop(self):
await super().stop()
log.info("finished stopping the torrent manager")
async def delete(self, source: ManagedDownloadSource, delete_file: Optional[bool] = False):

lbry/torrent/tracker.py (new file, 285 lines)

@@ -0,0 +1,285 @@
import random
import socket
import string
import struct
import asyncio
import logging
import time
import ipaddress
from collections import namedtuple
from functools import reduce
from typing import Optional
from lbry.dht.node import get_kademlia_peers_from_hosts
from lbry.utils import resolve_host, async_timed_cache, cache_concurrent
from lbry.wallet.stream import StreamController
from lbry import version
log = logging.getLogger(__name__)
CONNECTION_EXPIRES_AFTER_SECONDS = 50
PREFIX = 'LB' # todo: PR BEP20 to add ourselves
DEFAULT_TIMEOUT_SECONDS = 10.0
DEFAULT_CONCURRENCY_LIMIT = 100
# see: http://bittorrent.org/beps/bep_0015.html and http://xbtt.sourceforge.net/udp_tracker_protocol.html
ConnectRequest = namedtuple("ConnectRequest", ["connection_id", "action", "transaction_id"])
ConnectResponse = namedtuple("ConnectResponse", ["action", "transaction_id", "connection_id"])
AnnounceRequest = namedtuple("AnnounceRequest",
["connection_id", "action", "transaction_id", "info_hash", "peer_id", "downloaded", "left",
"uploaded", "event", "ip_addr", "key", "num_want", "port"])
AnnounceResponse = namedtuple("AnnounceResponse",
["action", "transaction_id", "interval", "leechers", "seeders", "peers"])
CompactIPv4Peer = namedtuple("CompactPeer", ["address", "port"])
ScrapeRequest = namedtuple("ScrapeRequest", ["connection_id", "action", "transaction_id", "infohashes"])
ScrapeResponse = namedtuple("ScrapeResponse", ["action", "transaction_id", "items"])
ScrapeResponseItem = namedtuple("ScrapeResponseItem", ["seeders", "completed", "leechers"])
ErrorResponse = namedtuple("ErrorResponse", ["action", "transaction_id", "message"])
structs = {
ConnectRequest: struct.Struct(">QII"),
ConnectResponse: struct.Struct(">IIQ"),
AnnounceRequest: struct.Struct(">QII20s20sQQQIIIiH"),
AnnounceResponse: struct.Struct(">IIIII"),
CompactIPv4Peer: struct.Struct(">IH"),
ScrapeRequest: struct.Struct(">QII"),
ScrapeResponse: struct.Struct(">II"),
ScrapeResponseItem: struct.Struct(">III"),
ErrorResponse: struct.Struct(">II")
}
def decode(cls, data, offset=0):
decoder = structs[cls]
if cls is AnnounceResponse:
return AnnounceResponse(*decoder.unpack_from(data, offset),
peers=[decode(CompactIPv4Peer, data, index) for index in range(20, len(data), 6)])
elif cls is ScrapeResponse:
return ScrapeResponse(*decoder.unpack_from(data, offset),
items=[decode(ScrapeResponseItem, data, index) for index in range(8, len(data), 12)])
elif cls is ErrorResponse:
return ErrorResponse(*decoder.unpack_from(data, offset), data[decoder.size:])
return cls(*decoder.unpack_from(data, offset))
def encode(obj):
if isinstance(obj, ScrapeRequest):
return structs[ScrapeRequest].pack(*obj[:-1]) + b''.join(obj.infohashes)
elif isinstance(obj, ErrorResponse):
return structs[ErrorResponse].pack(*obj[:-1]) + obj.message
elif isinstance(obj, AnnounceResponse):
return structs[AnnounceResponse].pack(*obj[:-1]) + b''.join([encode(peer) for peer in obj.peers])
return structs[type(obj)].pack(*obj)
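A rough sketch of how the namedtuples, structs, encode() and decode() above fit together; the values are invented for illustration and this block is not part of the new file:
# --- illustrative sketch only, not part of tracker.py ---
req = ConnectRequest(connection_id=0x41727101980, action=0, transaction_id=12345)
payload = encode(req)  # 16 bytes packed with ">QII"
reply = structs[ConnectResponse].pack(0, 12345, 999)  # action, transaction_id, connection_id, as a tracker might answer
resp = decode(ConnectResponse, reply)
assert resp.transaction_id == 12345 and resp.connection_id == 999
# --- end sketch ---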
def make_peer_id(random_part: Optional[str] = None) -> bytes:
# see https://wiki.theory.org/BitTorrentSpecification#peer_id and https://www.bittorrent.org/beps/bep_0020.html
# not to confuse with node id; peer id identifies uniquely the software, version and instance
random_part = random_part or ''.join(random.choice(string.ascii_letters) for _ in range(20))
return f"{PREFIX}-{'-'.join(map(str, version))}-{random_part}"[:20].encode()
class UDPTrackerClientProtocol(asyncio.DatagramProtocol):
def __init__(self, timeout: float = DEFAULT_TIMEOUT_SECONDS):
self.transport = None
self.data_queue = {}
self.timeout = timeout
self.semaphore = asyncio.Semaphore(DEFAULT_CONCURRENCY_LIMIT)
def connection_made(self, transport: asyncio.DatagramTransport) -> None:
self.transport = transport
async def request(self, obj, tracker_ip, tracker_port):
self.data_queue[obj.transaction_id] = asyncio.get_running_loop().create_future()
try:
async with self.semaphore:
self.transport.sendto(encode(obj), (tracker_ip, tracker_port))
return await asyncio.wait_for(self.data_queue[obj.transaction_id], self.timeout)
finally:
self.data_queue.pop(obj.transaction_id, None)
async def connect(self, tracker_ip, tracker_port):
transaction_id = random.getrandbits(32)
return decode(ConnectResponse,
await self.request(ConnectRequest(0x41727101980, 0, transaction_id), tracker_ip, tracker_port))
@cache_concurrent
@async_timed_cache(CONNECTION_EXPIRES_AFTER_SECONDS)
async def ensure_connection_id(self, peer_id, tracker_ip, tracker_port):
# peer_id is just to ensure cache coherency
return (await self.connect(tracker_ip, tracker_port)).connection_id
async def announce(self, info_hash, peer_id, port, tracker_ip, tracker_port, stopped=False):
connection_id = await self.ensure_connection_id(peer_id, tracker_ip, tracker_port)
# this should make the key deterministic but unique per info hash + peer id
key = int.from_bytes(info_hash[:4], "big") ^ int.from_bytes(peer_id[:4], "big") ^ port
transaction_id = random.getrandbits(32)
req = AnnounceRequest(
connection_id, 1, transaction_id, info_hash, peer_id, 0, 0, 0, 3 if stopped else 1, 0, key, -1, port)
return decode(AnnounceResponse, await self.request(req, tracker_ip, tracker_port))
async def scrape(self, infohashes, tracker_ip, tracker_port, connection_id=None):
connection_id = await self.ensure_connection_id(None, tracker_ip, tracker_port)
transaction_id = random.getrandbits(32)
reply = await self.request(
ScrapeRequest(connection_id, 2, transaction_id, infohashes), tracker_ip, tracker_port)
return decode(ScrapeResponse, reply), connection_id
def datagram_received(self, data: bytes, addr: (str, int)) -> None:
if len(data) < 8:
return
transaction_id = int.from_bytes(data[4:8], byteorder="big", signed=False)
if transaction_id in self.data_queue:
if not self.data_queue[transaction_id].done():
if data[3] == 3:
return self.data_queue[transaction_id].set_exception(Exception(decode(ErrorResponse, data).message))
return self.data_queue[transaction_id].set_result(data)
log.debug("unexpected packet (can be a response for a previously timed out request): %s", data.hex())
def connection_lost(self, exc: Exception = None) -> None:
self.transport = None
class TrackerClient:
event_controller = StreamController()
def __init__(self, node_id, announce_port, get_servers, timeout=10.0):
self.client = UDPTrackerClientProtocol(timeout=timeout)
self.transport = None
self.peer_id = make_peer_id(node_id.hex() if node_id else None)
self.announce_port = announce_port
self._get_servers = get_servers
self.results = {} # we can't probe the server before the interval, so we keep the result here until it expires
self.tasks = {}
async def start(self):
self.transport, _ = await asyncio.get_running_loop().create_datagram_endpoint(
lambda: self.client, local_addr=("0.0.0.0", 0))
self.event_controller.stream.listen(
lambda request: self.on_hash(request[1], request[2]) if request[0] == 'search' else None)
def stop(self):
while self.tasks:
self.tasks.popitem()[1].cancel()
if self.transport is not None:
self.transport.close()
self.client = None
self.transport = None
self.event_controller.close()
def on_hash(self, info_hash, on_announcement=None):
if info_hash not in self.tasks:
task = asyncio.create_task(self.get_peer_list(info_hash, on_announcement=on_announcement))
task.add_done_callback(lambda *_: self.tasks.pop(info_hash, None))
self.tasks[info_hash] = task
async def announce_many(self, *info_hashes, stopped=False):
await asyncio.gather(
*[self._announce_many(server, info_hashes, stopped=stopped) for server in self._get_servers()],
return_exceptions=True)
async def _announce_many(self, server, info_hashes, stopped=False):
tracker_ip = await resolve_host(*server, 'udp')
still_good_info_hashes = {
info_hash for (info_hash, (next_announcement, _)) in self.results.get(tracker_ip, {}).items()
if time.time() < next_announcement
}
results = await asyncio.gather(
*[self._probe_server(info_hash, tracker_ip, server[1], stopped=stopped)
for info_hash in info_hashes if info_hash not in still_good_info_hashes],
return_exceptions=True)
if results:
errors = sum([1 for result in results if result is None or isinstance(result, Exception)])
log.info("Tracker: finished announcing %d files to %s:%d, %d errors", len(results), *server, errors)
async def get_peer_list(self, info_hash, stopped=False, on_announcement=None, no_port=False):
found = []
probes = [self._probe_server(info_hash, *server, stopped, no_port) for server in self._get_servers()]
for done in asyncio.as_completed(probes):
result = await done
if result is not None:
await asyncio.gather(*filter(asyncio.iscoroutine, [on_announcement(result)] if on_announcement else []))
found.append(result)
return found
async def get_kademlia_peer_list(self, info_hash):
responses = await self.get_peer_list(info_hash, no_port=True)
return await announcement_to_kademlia_peers(*responses)
async def _probe_server(self, info_hash, tracker_host, tracker_port, stopped=False, no_port=False):
result = None
try:
tracker_host = await resolve_host(tracker_host, tracker_port, 'udp')
except socket.error:
log.warning("DNS failure while resolving tracker host: %s, skipping.", tracker_host)
return
self.results.setdefault(tracker_host, {})
if info_hash in self.results[tracker_host]:
next_announcement, result = self.results[tracker_host][info_hash]
if time.time() < next_announcement:
return result
try:
result = await self.client.announce(
info_hash, self.peer_id, 0 if no_port else self.announce_port, tracker_host, tracker_port, stopped)
self.results[tracker_host][info_hash] = (time.time() + result.interval, result)
except asyncio.TimeoutError: # todo: this is UDP, timeout is common, we need a better metric for failures
self.results[tracker_host][info_hash] = (time.time() + 60.0, result)
log.debug("Tracker timed out: %s:%d", tracker_host, tracker_port)
return None
log.debug("Announced: %s found %d peers for %s", tracker_host, len(result.peers), info_hash.hex()[:8])
return result
def enqueue_tracker_search(info_hash: bytes, peer_q: asyncio.Queue):
async def on_announcement(announcement: AnnounceResponse):
peers = await announcement_to_kademlia_peers(announcement)
log.info("Found %d peers from tracker for %s", len(peers), info_hash.hex()[:8])
peer_q.put_nowait(peers)
TrackerClient.event_controller.add(('search', info_hash, on_announcement))
def announcement_to_kademlia_peers(*announcements: AnnounceResponse):
peers = [
(str(ipaddress.ip_address(peer.address)), peer.port)
for announcement in announcements for peer in announcement.peers if peer.port > 1024 # no privileged or 0
]
return get_kademlia_peers_from_hosts(peers)
class UDPTrackerServerProtocol(asyncio.DatagramProtocol): # for testing. Not suitable for production
def __init__(self):
self.transport = None
self.known_conns = set()
self.peers = {}
def connection_made(self, transport: asyncio.DatagramTransport) -> None:
self.transport = transport
def add_peer(self, info_hash, ip_address: str, port: int):
self.peers.setdefault(info_hash, [])
self.peers[info_hash].append(encode_peer(ip_address, port))
def datagram_received(self, data: bytes, addr: (str, int)) -> None:
if len(data) < 16:
return
action = int.from_bytes(data[8:12], "big", signed=False)
if action == 0:
req = decode(ConnectRequest, data)
connection_id = random.getrandbits(32)
self.known_conns.add(connection_id)
return self.transport.sendto(encode(ConnectResponse(0, req.transaction_id, connection_id)), addr)
elif action == 1:
req = decode(AnnounceRequest, data)
if req.connection_id not in self.known_conns:
resp = encode(ErrorResponse(3, req.transaction_id, b'Connection ID missmatch.\x00'))
else:
compact_address = encode_peer(addr[0], req.port)
if req.event != 3:
self.add_peer(req.info_hash, addr[0], req.port)
elif compact_address in self.peers.get(req.info_hash, []):
self.peers[req.info_hash].remove(compact_address)
peers = [decode(CompactIPv4Peer, peer) for peer in self.peers[req.info_hash]]
resp = encode(AnnounceResponse(1, req.transaction_id, 1700, 0, len(peers), peers))
return self.transport.sendto(resp, addr)
def encode_peer(ip_address: str, port: int):
compact_ip = reduce(lambda buff, x: buff + bytearray([int(x)]), ip_address.split('.'), bytearray())
return compact_ip + port.to_bytes(2, "big", signed=False)
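A minimal, hypothetical way to exercise the test-only UDPTrackerServerProtocol and the compact peer encoding above (asyncio event loop and made-up addresses assumed; not part of the new file):
# --- illustrative sketch only, not part of tracker.py ---
async def run_local_test_tracker(port=8080):
    loop = asyncio.get_running_loop()
    transport, protocol = await loop.create_datagram_endpoint(
        UDPTrackerServerProtocol, local_addr=("127.0.0.1", port))
    # add_peer() stores each peer as the 6-byte compact form produced by encode_peer()
    protocol.add_peer(b"\x00" * 20, "10.0.0.1", 6881)
    assert bytes(encode_peer("10.0.0.1", 6881)) == b"\x0a\x00\x00\x01\x1a\xe1"
    return transport, protocol
# --- end sketch ---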


@@ -21,7 +21,6 @@ import pkg_resources
import certifi
import aiohttp
from prometheus_client import Counter
from prometheus_client.registry import REGISTRY
from lbry.schema.claim import Claim
@@ -105,10 +104,6 @@ def check_connection(server="lbry.com", port=80, timeout=5) -> bool:
return False
async def async_check_connection(server="lbry.com", port=80, timeout=1) -> bool:
return await asyncio.get_event_loop().run_in_executor(None, check_connection, server, port, timeout)
def random_string(length=10, chars=string.ascii_lowercase):
return ''.join([random.choice(chars) for _ in range(length)])
@@ -135,21 +130,16 @@ def get_sd_hash(stream_info):
def json_dumps_pretty(obj, **kwargs):
return json.dumps(obj, sort_keys=True, indent=2, separators=(',', ': '), **kwargs)
def cancel_task(task: typing.Optional[asyncio.Task]):
if task and not task.done():
task.cancel()
def cancel_tasks(tasks: typing.List[typing.Optional[asyncio.Task]]):
for task in tasks:
cancel_task(task)
def drain_tasks(tasks: typing.List[typing.Optional[asyncio.Task]]):
while tasks:
cancel_task(tasks.pop())
try:
# the standard contextlib.aclosing() is available in 3.10+
from contextlib import aclosing # pylint: disable=unused-import
except ImportError:
@contextlib.asynccontextmanager
async def aclosing(thing):
try:
yield thing
finally:
await thing.aclose()
def async_timed_cache(duration: int):
def wrapper(func):
@@ -160,7 +150,7 @@ def async_timed_cache(duration: int):
async def _inner(*args, **kwargs) -> typing.Any:
loop = asyncio.get_running_loop()
time_now = loop.time()
key = (args, tuple(kwargs.items()))
if key in cache and (time_now - cache[key][1] < duration):
return cache[key][0]
to_cache = await func(*args, **kwargs)
@@ -178,7 +168,7 @@ def cache_concurrent(async_fn):
@functools.wraps(async_fn)
async def wrapper(*args, **kwargs):
key = (args, tuple(kwargs.items()))
cache[key] = cache.get(key) or asyncio.create_task(async_fn(*args, **kwargs))
try:
return await cache[key]
@@ -280,12 +270,6 @@ class LRUCacheWithMetrics:
def __del__(self):
self.clear()
if self._track_metrics: # needed for tests
try:
REGISTRY.unregister(self.hits)
REGISTRY.unregister(self.misses)
except AttributeError:
pass
class LRUCache:
@@ -314,11 +298,14 @@ class LRUCache:
self.cache.popitem(last=False)
self.cache[key] = value
def items(self):
return self.cache.items()
def clear(self):
self.cache.clear()
def pop(self, key, default=None):
return self.cache.pop(key, default)
def __setitem__(self, key, value):
return self.set(key, value)
@@ -350,7 +337,7 @@ def lru_cache_concurrent(cache_size: typing.Optional[int] = None,
@functools.wraps(async_fn)
async def _inner(*args, **kwargs):
key = (args, tuple(kwargs.items()))
if key in lru_cache:
return lru_cache.get(key)
@@ -384,13 +371,15 @@ CARRIER_GRADE_NAT_SUBNET = ipaddress.ip_network('100.64.0.0/10')
IPV4_TO_6_RELAY_SUBNET = ipaddress.ip_network('192.88.99.0/24')
def is_valid_public_ipv4(address, allow_localhost: bool = False, allow_lan: bool = False):
try:
parsed_ip = ipaddress.ip_address(address)
if parsed_ip.is_loopback and allow_localhost:
return True
if allow_lan and parsed_ip.is_private:
return True
if any((parsed_ip.version != 4, parsed_ip.is_unspecified, parsed_ip.is_link_local, parsed_ip.is_loopback,
parsed_ip.is_multicast, parsed_ip.is_reserved, parsed_ip.is_private)):
return False
else:
return not any((CARRIER_GRADE_NAT_SUBNET.supernet_of(ipaddress.ip_network(f"{address}/32")),
@@ -411,7 +400,7 @@ async def fallback_get_external_ip(): # used if spv servers can't be used for i
async def _get_external_ip(default_servers) -> typing.Tuple[typing.Optional[str], typing.Optional[str]]:
# used if upnp is disabled or non-functioning
from lbry.wallet.udp import SPVStatusClientProtocol # pylint: disable=C0415
hostname_to_ip = {}
ip_to_hostnames = collections.defaultdict(list)
@@ -461,8 +450,8 @@ def is_running_from_bundle():
class LockWithMetrics(asyncio.Lock):
def __init__(self, acquire_metric, held_time_metric):
super().__init__()
self._acquire_metric = acquire_metric
self._lock_held_time_metric = held_time_metric
self._lock_acquired_time = None
@@ -480,3 +469,18 @@ class LockWithMetrics(asyncio.Lock):
return super().release()
finally:
self._lock_held_time_metric.observe(time.perf_counter() - self._lock_acquired_time)
def get_colliding_prefix_bits(first_value: bytes, second_value: bytes):
"""
Calculates the amount of colliding prefix bits between <first_value> and <second_value>.
This is given by the amount of bits that are the same until the first different one (via XOR),
starting from the most significant bit to the least significant bit.
:param first_value: first value to compare, bigger than size.
:param second_value: second value to compare, bigger than size.
:return: amount of prefix colliding bits.
"""
assert len(first_value) == len(second_value), "length should be the same"
size = len(first_value) * 8
first_value, second_value = int.from_bytes(first_value, "big"), int.from_bytes(second_value, "big")
return size - (first_value ^ second_value).bit_length()
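A quick worked example for the helper above (illustrative only, not part of the diff): 0xff00 and 0xf000 agree on their four most significant bits, while identical inputs collide on every bit.
assert get_colliding_prefix_bits(b'\xff\x00', b'\xf0\x00') == 4  # 16 - (0x0f00).bit_length() == 16 - 12
assert get_colliding_prefix_bits(b'\xaa', b'\xaa') == 8          # identical bytes share the whole prefix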


@@ -1,17 +1,23 @@
__node_daemon__ = 'lbrycrdd'
__node_cli__ = 'lbrycrd-cli'
__node_bin__ = ''
__node_url__ = (
'https://github.com/lbryio/lbrycrd/releases/download/v0.17.4.6/lbrycrd-linux-1746.zip'
)
from .wallet import Wallet, WalletStorage, TimestampedPreferences, ENCRYPT_ON_DISK
from .manager import WalletManager
from .network import Network
from .ledger import Ledger, RegTestLedger, TestNetLedger, BlockHeightEvent
from .account import Account, AddressManager, SingleKey, HierarchicalDeterministic
from .transaction import Transaction, Output, Input
from .script import OutputScript, InputScript
from .database import SQLiteMixin, Database
from .header import Headers
__lbcd__ = 'lbcd'
__lbcctl__ = 'lbcctl'
__lbcwallet__ = 'lbcwallet'
__lbcd_url__ = (
'https://github.com/lbryio/lbcd/releases/download/' +
'v0.22.100-rc.0/lbcd_0.22.100-rc.0_TARGET_PLATFORM.tar.gz'
)
__lbcwallet_url__ = (
'https://github.com/lbryio/lbcwallet/releases/download/' +
'v0.13.100-alpha.0/lbcwallet_0.13.100-alpha.0_TARGET_PLATFORM.tar.gz'
)
__spvserver__ = 'lbry.wallet.server.coin.LBCRegTest'
from lbry.wallet.wallet import Wallet, WalletStorage, TimestampedPreferences, ENCRYPT_ON_DISK
from lbry.wallet.manager import WalletManager
from lbry.wallet.network import Network
from lbry.wallet.ledger import Ledger, RegTestLedger, TestNetLedger, BlockHeightEvent
from lbry.wallet.account import Account, AddressManager, SingleKey, HierarchicalDeterministic, \
DeterministicChannelKeyManager
from lbry.wallet.transaction import Transaction, Output, Input
from lbry.wallet.script import OutputScript, InputScript
from lbry.wallet.database import SQLiteMixin, Database
from lbry.wallet.header import Headers


@@ -5,18 +5,16 @@ import logging
import typing
import asyncio
import random
from functools import partial
from hashlib import sha256
from string import hexdigits
from typing import Type, Dict, Tuple, Optional, Any, List
import ecdsa
from lbry.error import InvalidPasswordError
from lbry.crypto.crypt import aes_encrypt, aes_decrypt
from .bip32 import PrivateKey, PublicKey, KeyPath, from_extended_key_string
from .mnemonic import Mnemonic
from .constants import COIN, TXO_TYPES
from .transaction import Transaction, Input, Output
if typing.TYPE_CHECKING:
@@ -35,6 +33,49 @@ def validate_claim_id(claim_id):
raise Exception("Claim id is not hex encoded")
class DeterministicChannelKeyManager:
def __init__(self, account: 'Account'):
self.account = account
self.last_known = 0
self.cache = {}
self._private_key: Optional[PrivateKey] = None
@property
def private_key(self):
if self._private_key is None:
if self.account.private_key is not None:
self._private_key = self.account.private_key.child(KeyPath.CHANNEL)
return self._private_key
def maybe_generate_deterministic_key_for_channel(self, txo):
if self.private_key is None:
return
next_private_key = self.private_key.child(self.last_known)
public_key = next_private_key.public_key
public_key_bytes = public_key.pubkey_bytes
if txo.claim.channel.public_key_bytes == public_key_bytes:
self.cache[public_key.address] = next_private_key
self.last_known += 1
async def ensure_cache_primed(self):
if self.private_key is not None:
await self.generate_next_key()
async def generate_next_key(self) -> PrivateKey:
db = self.account.ledger.db
while True:
next_private_key = self.private_key.child(self.last_known)
public_key = next_private_key.public_key
self.cache[public_key.address] = next_private_key
if not await db.is_channel_key_used(self.account, public_key):
return next_private_key
self.last_known += 1
def get_private_key_from_pubkey_hash(self, pubkey_hash) -> PrivateKey:
return self.cache.get(pubkey_hash)
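As a hedged sketch of the derivation that generate_next_key() walks (assuming an Account instance named account whose private_key is set; not part of the diff):
# --- illustrative sketch only ---
channel_root = account.private_key.child(KeyPath.CHANNEL)  # dedicated channel branch
first_candidate = channel_root.child(0)                    # index tried first by generate_next_key()
print(first_candidate.public_key.address)                  # cache key used for lookups by pubkey hash
# --- end sketch ---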
class AddressManager:
name: str
@@ -80,7 +121,7 @@ class AddressManager:
def get_private_key(self, index: int) -> PrivateKey:
raise NotImplementedError
def get_public_key(self, index: int) -> PublicKey:
raise NotImplementedError
async def get_max_gap(self):
@@ -97,7 +138,8 @@ class AddressManager:
return [r['address'] for r in records]
async def get_or_create_usable_address(self) -> str:
async with self.address_generator_lock:
addresses = await self.get_addresses(only_usable=True, limit=10)
if addresses:
return random.choice(addresses)
addresses = await self.ensure_address_gap()
@@ -119,8 +161,8 @@ class HierarchicalDeterministic(AddressManager):
@classmethod
def from_dict(cls, account: 'Account', d: dict) -> Tuple[AddressManager, AddressManager]:
return (
cls(account, KeyPath.RECEIVE, **d.get('receiving', {'gap': 20, 'maximum_uses_per_address': 1})),
cls(account, KeyPath.CHANGE, **d.get('change', {'gap': 6, 'maximum_uses_per_address': 1}))
)
def merge(self, d: dict):
@@ -133,7 +175,7 @@ class HierarchicalDeterministic(AddressManager):
def get_private_key(self, index: int) -> PrivateKey:
return self.account.private_key.child(self.chain_number).child(index)
def get_public_key(self, index: int) -> PublicKey:
return self.account.public_key.child(self.chain_number).child(index)
async def get_max_gap(self) -> int:
@@ -193,7 +235,7 @@ class SingleKey(AddressManager):
@classmethod
def from_dict(cls, account: 'Account', d: dict) \
-> Tuple[AddressManager, AddressManager]:
same_address_manager = cls(account, account.public_key, KeyPath.RECEIVE)
return same_address_manager, same_address_manager
def to_dict_instance(self):
@@ -202,7 +244,7 @@ class SingleKey(AddressManager):
def get_private_key(self, index: int) -> PrivateKey:
return self.account.private_key
def get_public_key(self, index: int) -> PublicKey:
return self.account.public_key
async def get_max_gap(self) -> int:
@@ -224,9 +266,6 @@ class SingleKey(AddressManager):
class Account:
mnemonic_class = Mnemonic
private_key_class = PrivateKey
public_key_class = PubKey
address_generators: Dict[str, Type[AddressManager]] = {
SingleKey.name: SingleKey,
HierarchicalDeterministic.name: HierarchicalDeterministic,
@@ -234,7 +273,7 @@ class Account:
def __init__(self, ledger: 'Ledger', wallet: 'Wallet', name: str,
seed: str, private_key_string: str, encrypted: bool,
private_key: Optional[PrivateKey], public_key: PublicKey,
address_generator: dict, modified_on: float, channel_keys: dict) -> None:
self.ledger = ledger
self.wallet = wallet
@@ -245,13 +284,14 @@ class Account:
self.private_key_string = private_key_string
self.init_vectors: Dict[str, bytes] = {}
self.encrypted = encrypted
self.private_key: Optional[PrivateKey] = private_key
self.public_key: PublicKey = public_key
generator_name = address_generator.get('name', HierarchicalDeterministic.name)
self.address_generator = self.address_generators[generator_name]
self.receiving, self.change = self.address_generator.from_dict(self, address_generator)
self.address_managers = {am.chain_number: am for am in (self.receiving, self.change)}
self.channel_keys = channel_keys
self.deterministic_channel_keys = DeterministicChannelKeyManager(self)
ledger.add_account(self)
wallet.add_account(self)
@@ -266,19 +306,19 @@ class Account:
name: str = None, address_generator: dict = None):
return cls.from_dict(ledger, wallet, {
'name': name,
'seed': Mnemonic().make_seed(),
'address_generator': address_generator or {}
})
@classmethod
def get_private_key_from_seed(cls, ledger: 'Ledger', seed: str, password: str):
return PrivateKey.from_seed(
ledger, Mnemonic.mnemonic_to_seed(seed, password or 'lbryum')
)
@classmethod
def keys_from_dict(cls, ledger: 'Ledger', d: dict) \
-> Tuple[str, Optional[PrivateKey], PublicKey]:
seed = d.get('seed', '')
private_key_string = d.get('private_key', '')
private_key = None
@@ -449,7 +489,7 @@ class Account:
assert not self.encrypted, "Cannot get private key on encrypted wallet account."
return self.address_managers[chain].get_private_key(index)
def get_public_key(self, chain: int, index: int) -> PublicKey:
return self.address_managers[chain].get_public_key(index)
def get_balance(self, confirmations=0, include_claims=False, read_only=False, **constraints):
@@ -520,33 +560,30 @@ class Account:
return tx
def add_channel_private_key(self, private_key):
public_key_bytes = private_key.get_verifying_key().to_der()
channel_pubkey_hash = self.ledger.public_key_to_address(public_key_bytes)
self.channel_keys[channel_pubkey_hash] = private_key.to_pem().decode()
async def get_channel_private_key(self, public_key_bytes):
channel_pubkey_hash = self.ledger.public_key_to_address(public_key_bytes)
private_key_pem = self.channel_keys.get(channel_pubkey_hash)
if private_key_pem:
return await asyncio.get_event_loop().run_in_executor(
None, ecdsa.SigningKey.from_pem, private_key_pem, sha256
)
async def generate_channel_private_key(self):
return await self.deterministic_channel_keys.generate_next_key()
def add_channel_private_key(self, private_key: PrivateKey):
self.channel_keys[private_key.address] = private_key.to_pem().decode()
async def get_channel_private_key(self, public_key_bytes) -> PrivateKey:
channel_pubkey_hash = self.ledger.public_key_to_address(public_key_bytes)
private_key_pem = self.channel_keys.get(channel_pubkey_hash)
if private_key_pem:
return PrivateKey.from_pem(self.ledger, private_key_pem)
return self.deterministic_channel_keys.get_private_key_from_pubkey_hash(channel_pubkey_hash)
async def maybe_migrate_certificates(self):
def to_der(private_key_pem):
return ecdsa.SigningKey.from_pem(private_key_pem, hashfunc=sha256).get_verifying_key().to_der()
if not self.channel_keys:
return
channel_keys = {}
for private_key_pem in self.channel_keys.values():
if not isinstance(private_key_pem, str):
continue
if not private_key_pem.startswith("-----BEGIN"):
continue
private_key = PrivateKey.from_pem(self.ledger, private_key_pem)
channel_keys[private_key.address] = private_key_pem
if self.channel_keys != channel_keys:
self.channel_keys = channel_keys
self.wallet.save()
@@ -566,35 +603,14 @@ class Account:
if gap_changed:
self.wallet.save()
async def get_detailed_balance(self, confirmations=0, reserved_subtotals=False, read_only=False):
tips_balance, supports_balance, claims_balance = 0, 0, 0
get_total_balance = partial(self.get_balance, read_only=read_only, confirmations=confirmations,
include_claims=True)
total = await get_total_balance()
if reserved_subtotals:
claims_balance = await get_total_balance(txo_type__in=CLAIM_TYPES)
for txo in await self.get_support_summary():
if confirmations > 0 and not 0 < txo.tx_ref.height <= self.ledger.headers.height - (confirmations - 1):
continue
if txo.is_my_input:
supports_balance += txo.amount
else:
tips_balance += txo.amount
reserved = claims_balance + supports_balance + tips_balance
else:
reserved = await self.get_balance(
confirmations=confirmations, include_claims=True, txo_type__gt=0
)
return {
'total': total,
'available': total - reserved,
'reserved': reserved,
'reserved_subtotals': {
'claims': claims_balance,
'supports': supports_balance,
'tips': tips_balance
} if reserved_subtotals else None
}
async def get_detailed_balance(self, confirmations=0, read_only=False):
constraints = {}
if confirmations > 0:
height = self.ledger.headers.height - (confirmations-1)
constraints.update({'height__lte': height, 'height__gt': 0})
return await self.ledger.db.get_detailed_balance(
accounts=[self], read_only=read_only, **constraints
)
def get_transaction_history(self, read_only=False, **constraints):
return self.ledger.get_transaction_history(


@@ -1,10 +1,21 @@
from asn1crypto.keys import PrivateKeyInfo, ECPrivateKey
from coincurve import PublicKey as cPublicKey, PrivateKey as cPrivateKey
from coincurve.utils import (
pem_to_der, lib as libsecp256k1, ffi as libsecp256k1_ffi
)
from coincurve.ecdsa import CDATA_SIG_LENGTH
from lbry.crypto.hash import hmac_sha512, hash160, double_sha256
from lbry.crypto.base58 import Base58
from .util import cachedproperty
class KeyPath:
RECEIVE = 0
CHANGE = 1
CHANNEL = 2
class DerivationError(Exception):
""" Raised when an invalid derivation occurs. """
@@ -46,9 +57,11 @@ class _KeyBase:
if len(raw_serkey) != 33:
raise ValueError('raw_serkey must have length 33')
return (
ver_bytes + bytes((self.depth,))
+ self.parent_fingerprint() + self.n.to_bytes(4, 'big')
+ self.chain_code + raw_serkey
)
def identifier(self):
raise NotImplementedError
@@ -69,26 +82,30 @@ class _KeyBase:
return Base58.encode_check(self.extended_key())
class PublicKey(_KeyBase):
""" A BIP32 public key. """
def __init__(self, ledger, pubkey, chain_code, n, depth, parent=None):
super().__init__(ledger, chain_code, n, depth, parent)
if isinstance(pubkey, cPublicKey):
self.verifying_key = pubkey
else:
self.verifying_key = self._verifying_key_from_pubkey(pubkey)
@classmethod
def from_compressed(cls, public_key_bytes, ledger=None) -> 'PublicKey':
return cls(ledger, public_key_bytes, bytes((0,)*32), 0, 0)
@classmethod
def _verifying_key_from_pubkey(cls, pubkey):
""" Converts a 33-byte compressed pubkey into an coincurve.PublicKey object. """
if not isinstance(pubkey, (bytes, bytearray)):
raise TypeError('pubkey must be raw bytes')
if len(pubkey) != 33:
raise ValueError('pubkey must be 33 bytes')
if pubkey[0] not in (2, 3):
raise ValueError('invalid pubkey prefix byte')
return cPublicKey(pubkey)
@cachedproperty
def pubkey_bytes(self):
@@ -103,7 +120,7 @@ class PubKey(_KeyBase):
def ec_point(self):
return self.verifying_key.point()
def child(self, n: int) -> 'PublicKey':
""" Return the derived child extended pubkey at index N. """
if not 0 <= n < (1 << 31):
raise ValueError('invalid BIP32 public key child number')
@@ -111,7 +128,7 @@ class PubKey(_KeyBase):
msg = self.pubkey_bytes + n.to_bytes(4, 'big')
L_b, R_b = self._hmac_sha512(msg) # pylint: disable=invalid-name
derived_key = self.verifying_key.add(L_b)
return PublicKey(self.ledger, derived_key, R_b, n, self.depth + 1, self)
def identifier(self):
""" Return the key's identifier as 20 bytes. """
@@ -124,6 +141,36 @@ class PubKey(_KeyBase):
self.pubkey_bytes
)
def verify(self, signature, digest) -> bool:
""" Verify that a signature is valid for a 32 byte digest. """
if len(signature) != 64:
raise ValueError('Signature must be 64 bytes long.')
if len(digest) != 32:
raise ValueError('Digest must be 32 bytes long.')
key = self.verifying_key
raw_signature = libsecp256k1_ffi.new('secp256k1_ecdsa_signature *')
parsed = libsecp256k1.secp256k1_ecdsa_signature_parse_compact(
key.context.ctx, raw_signature, signature
)
assert parsed == 1
normalized_signature = libsecp256k1_ffi.new('secp256k1_ecdsa_signature *')
libsecp256k1.secp256k1_ecdsa_signature_normalize(
key.context.ctx, normalized_signature, raw_signature
)
verified = libsecp256k1.secp256k1_ecdsa_verify(
key.context.ctx, normalized_signature, digest, key.public_key
)
return bool(verified)
class PrivateKey(_KeyBase): class PrivateKey(_KeyBase):
"""A BIP32 private key.""" """A BIP32 private key."""
@ -132,7 +179,7 @@ class PrivateKey(_KeyBase):
def __init__(self, ledger, privkey, chain_code, n, depth, parent=None): def __init__(self, ledger, privkey, chain_code, n, depth, parent=None):
super().__init__(ledger, chain_code, n, depth, parent) super().__init__(ledger, chain_code, n, depth, parent)
if isinstance(privkey, _PrivateKey): if isinstance(privkey, cPrivateKey):
self.signing_key = privkey self.signing_key = privkey
else: else:
self.signing_key = self._signing_key_from_privkey(privkey) self.signing_key = self._signing_key_from_privkey(privkey)
@ -140,7 +187,7 @@ class PrivateKey(_KeyBase):
@classmethod @classmethod
def _signing_key_from_privkey(cls, private_key): def _signing_key_from_privkey(cls, private_key):
""" Converts a 32-byte private key into an coincurve.PrivateKey object. """ """ Converts a 32-byte private key into an coincurve.PrivateKey object. """
return _PrivateKey.from_int(PrivateKey._private_key_secret_exponent(private_key)) return cPrivateKey.from_int(PrivateKey._private_key_secret_exponent(private_key))
@classmethod @classmethod
def _private_key_secret_exponent(cls, private_key): def _private_key_secret_exponent(cls, private_key):
@ -152,24 +199,40 @@ class PrivateKey(_KeyBase):
return int.from_bytes(private_key, 'big') return int.from_bytes(private_key, 'big')
@classmethod @classmethod
def from_seed(cls, ledger, seed): def from_seed(cls, ledger, seed) -> 'PrivateKey':
# This hard-coded message string seems to be coin-independent... # This hard-coded message string seems to be coin-independent...
hmac = hmac_sha512(b'Bitcoin seed', seed) hmac = hmac_sha512(b'Bitcoin seed', seed)
privkey, chain_code = hmac[:32], hmac[32:] privkey, chain_code = hmac[:32], hmac[32:]
return cls(ledger, privkey, chain_code, 0, 0) return cls(ledger, privkey, chain_code, 0, 0)
@classmethod
def from_pem(cls, ledger, pem) -> 'PrivateKey':
der = pem_to_der(pem.encode())
try:
key_int = ECPrivateKey.load(der).native['private_key']
except ValueError:
key_int = PrivateKeyInfo.load(der).native['private_key']['private_key']
private_key = cPrivateKey.from_int(key_int)
return cls(ledger, private_key, bytes((0,)*32), 0, 0)
@classmethod
def from_bytes(cls, ledger, key_bytes) -> 'PrivateKey':
return cls(ledger, cPrivateKey(key_bytes), bytes((0,)*32), 0, 0)
@cachedproperty @cachedproperty
def private_key_bytes(self): def private_key_bytes(self):
""" Return the serialized private key (no leading zero byte). """ """ Return the serialized private key (no leading zero byte). """
return self.signing_key.secret return self.signing_key.secret
@cachedproperty @cachedproperty
def public_key(self): def public_key(self) -> PublicKey:
""" Return the corresponding extended public key. """ """ Return the corresponding extended public key. """
verifying_key = self.signing_key.public_key verifying_key = self.signing_key.public_key
parent_pubkey = self.parent.public_key if self.parent else None parent_pubkey = self.parent.public_key if self.parent else None
return PubKey(self.ledger, verifying_key, self.chain_code, self.n, self.depth, return PublicKey(
parent_pubkey) self.ledger, verifying_key, self.chain_code,
self.n, self.depth, parent_pubkey
)
def ec_point(self): def ec_point(self):
return self.public_key.ec_point() return self.public_key.ec_point()
@ -182,11 +245,12 @@ class PrivateKey(_KeyBase):
""" Return the private key encoded in Wallet Import Format. """ """ Return the private key encoded in Wallet Import Format. """
return self.ledger.private_key_to_wif(self.private_key_bytes) return self.ledger.private_key_to_wif(self.private_key_bytes)
@property
def address(self): def address(self):
""" The public key as a P2PKH address. """ """ The public key as a P2PKH address. """
return self.public_key.address return self.public_key.address
def child(self, n): def child(self, n) -> 'PrivateKey':
""" Return the derived child extended private key at index N.""" """ Return the derived child extended private key at index N."""
if not 0 <= n < (1 << 32): if not 0 <= n < (1 << 32):
raise ValueError('invalid BIP32 private key child number') raise ValueError('invalid BIP32 private key child number')
@ -205,6 +269,28 @@ class PrivateKey(_KeyBase):
""" Produce a signature for piece of data by double hashing it and signing the hash. """ """ Produce a signature for piece of data by double hashing it and signing the hash. """
return self.signing_key.sign(data, hasher=double_sha256) return self.signing_key.sign(data, hasher=double_sha256)
def sign_compact(self, digest):
""" Produce a compact signature. """
key = self.signing_key
signature = libsecp256k1_ffi.new('secp256k1_ecdsa_signature *')
signed = libsecp256k1.secp256k1_ecdsa_sign(
key.context.ctx, signature, digest, key.secret,
libsecp256k1_ffi.NULL, libsecp256k1_ffi.NULL
)
if not signed:
raise ValueError('The private key was invalid.')
serialized = libsecp256k1_ffi.new('unsigned char[%d]' % CDATA_SIG_LENGTH)
compacted = libsecp256k1.secp256k1_ecdsa_signature_serialize_compact(
key.context.ctx, serialized, signature
)
if compacted != 1:
raise ValueError('The signature could not be compacted.')
return bytes(libsecp256k1_ffi.buffer(serialized, CDATA_SIG_LENGTH))
def identifier(self): def identifier(self):
"""Return the key's identifier as 20 bytes.""" """Return the key's identifier as 20 bytes."""
return self.public_key.identifier() return self.public_key.identifier()
@ -216,9 +302,12 @@ class PrivateKey(_KeyBase):
b'\0' + self.private_key_bytes b'\0' + self.private_key_bytes
) )
def to_pem(self):
return self.signing_key.to_pem()
def _from_extended_key(ledger, ekey): def _from_extended_key(ledger, ekey):
"""Return a PubKey or PrivateKey from an extended key raw bytes.""" """Return a PublicKey or PrivateKey from an extended key raw bytes."""
if not isinstance(ekey, (bytes, bytearray)): if not isinstance(ekey, (bytes, bytearray)):
raise TypeError('extended key must be raw bytes') raise TypeError('extended key must be raw bytes')
if len(ekey) != 78: if len(ekey) != 78:
@ -230,7 +319,7 @@ def _from_extended_key(ledger, ekey):
if ekey[:4] == ledger.extended_public_key_prefix: if ekey[:4] == ledger.extended_public_key_prefix:
pubkey = ekey[45:] pubkey = ekey[45:]
key = PubKey(ledger, pubkey, chain_code, n, depth) key = PublicKey(ledger, pubkey, chain_code, n, depth)
elif ekey[:4] == ledger.extended_private_key_prefix: elif ekey[:4] == ledger.extended_private_key_prefix:
if ekey[45] != 0: if ekey[45] != 0:
raise ValueError('invalid extended private key prefix byte') raise ValueError('invalid extended private key prefix byte')
@ -248,6 +337,6 @@ def from_extended_key_string(ledger, ekey_str):
xpub6BsnM1W2Y7qLMiuhi7f7dbAwQZ5Cz5gYJCRzTNainXzQXYjFwtuQXHd xpub6BsnM1W2Y7qLMiuhi7f7dbAwQZ5Cz5gYJCRzTNainXzQXYjFwtuQXHd
3qfi3t3KJtHxshXezfjft93w4UE7BGMtKwhqEHae3ZA7d823DVrL 3qfi3t3KJtHxshXezfjft93w4UE7BGMtKwhqEHae3ZA7d823DVrL
return a PubKey or PrivateKey. return a PublicKey or PrivateKey.
""" """
return _from_extended_key(ledger, Base58.decode_check(ekey_str)) return _from_extended_key(ledger, Base58.decode_check(ekey_str))
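
As a quick orientation to the renamed API above, here is a minimal usage sketch (not part of the diff): it derives a child key, produces a compact signature, and verifies it through the corresponding PublicKey. The ledger argument is passed as None purely for illustration; real callers pass a Ledger instance.

# Hedged sketch, assuming the PrivateKey/PublicKey classes and helpers shown above.
root = PrivateKey.from_seed(None, b'example seed material')   # master extended key (ledger=None for illustration)
child = root.child(0).child(1)                                # BIP32 derivation path m/0/1
digest = double_sha256(b'payload')                            # 32-byte digest to sign
signature = child.sign_compact(digest)                        # 64-byte compact signature
assert child.public_key.verify(signature, digest)             # round-trips through PublicKey.verify()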


@@ -881,4 +881,365 @@ HASHES = {
879000: '0eb0810f4b81d1845b0a88f05449408df2e45715c9210a656f45278c5fdf7956',
880000: 'e7d613027e3b4ca38d09bbef07998b57db237c6d67f1e8ea50024d2e0d9a1a72',
881000: '21af4d355d8756b8bf0369b2d79b5c824148ae069026ba5c14f9dd6b7555e1db',
882000: 'bc26f028e547ec44fc3864925bd1493211773b5cb9a9583ba4c1909b89fe0d33',
883000: '170a624f4be04cd2fd435cfb6ba1f31b9ef5d7b084a25dfa23cd118c2752029e',
884000: '46cccb7a12b4d01d07c211b7b8db41321cd73f30069df27bcdb3bb600c0272b0',
885000: '7c27f79d5a99baf0f81f2b09eb5c1bf905976a0f872e02bd4ca9e82f0ed50cb0',
886000: '256e3e00cecc72dbbfef5cea627ecf1d43b56edd5fd1642a2bc4e97c17056f34',
887000: '658ebac7dfa62bc7a22b1a9ba4e5b425a866f7550a6b40fd07de47119fd1f7e8',
888000: '497a9d02868605b9ff6e7f15948a83a7e07606829107e63c2e091c90c7a7b4d4',
889000: '561daaa7ebc87e586d37a96ecfbc72484d7eb602824f38f484ed333e78208e9e',
890000: 'ab5a8cb625b28343f8fac858eab6576c856dab88bde8cda02b80b3edfd307d71',
891000: '2e81d9fc885ddc09222b298ac9efbb73638a5721802b9256de6505ecf122dbaa',
892000: '73be08881b8832e986c0bb9a06c70fff346edb2afaf69630e47e4a4a90c5fece',
893000: 'd39079dcaa4d8af1c26f0edf7e16df43cd857a31e0aa4c4123226793f1ab497f',
894000: '0a3b677d72c590d4b1ff7a9b4098d6b52d0dc10d64c30c2766d18e6eb02872cd',
895000: 'a3bbba831f48c5b68e494ee63015b487782c64c5c24bb29436283360c28fd1e0',
896000: '20af178a192ca43975ab6c838fe97ca42ba6c682682eddbc6481efd153ecb0a2',
897000: '8d0ee14b9fdb853a09ab2951d26b8f7cb8bc8038b09513bd330ee4b0bdcc4780',
898000: 'c97fbb70f804408b131a98f9fb4c04cdf2df1655d3e8ff2e0d58ed8537349f4e',
899000: 'eba2be80478e8dec2d66ca40b853580c5dad040351c64c177e3d8c25aff6c1b6',
900000: 'c4dc344a993558418b93b3f60aaef0030e2a4116086577fbf1e2f544bdbddae1',
901000: '36d84229afa63045875fc8fea0c55de8eb90694b3a37cceb825c87abf1fea998',
902000: '8ca4890ecfc5e3f9d767e4fcdf318a1e3e3597675bbcfe534d64e76bc4e8fbf4',
903000: '8b9f6a7514033c57668ca94fb3758cc6d1ef37ac982c2ff5a9f0f206fcd8d0a8',
904000: 'e9ae813991f35ca89af2fe1f1b6adf9e93c6b1dd6a74f003ebbe699a30b252ea',
905000: 'd426489d01d4f4c829f2eb68a67721d2c0e1c71e8c33ef9253593447e8603462',
906000: '63000bbed97451e68d64485c02c1c3d90b4156237dac315f4e012ffb538e375b',
907000: '96759653a4e514541effa7ef86d9f22a272ddde7b069149d17e9d9203a1edafb',
908000: 'eec6477d2f3b71bde76dc2380d6e06aa8aa306ca56ba1dd15a31c22ae0db501b',
909000: 'd5c2984cf130335aa29296ba5b17672d00360fe0ec73977326180014908c0b55',
910000: '7b99cb1c94144f606937903e173bd9ef63bfffd3db8110693fa4c2caa0abc21f',
911000: '95eed0d9dd9869ac6f83fa67863e77f24df69bcb90fef70918f30b2400e24ea8',
912000: '34c3c8780c54ecced50f0a6b394309d09ee6ce37cd98794699c63771d1d91144',
913000: '536052ddcd445702160288ef3f669ce56868c085315556c9f5ca081ef0c0b9e1',
914000: '1bcd1fe9632f93a0a1fe7d8a1891a4fc6ef1be40ccf887524a9095ed7aa9fa44',
915000: '139bad9fa12ec72a37b62ad8511300ebfda89330fa5d5a83861f864b6adeae67',
916000: '81d15282214ff83e2a034212eb58abeafcb5664d3734bff13b22b4c093b20fea',
917000: 'f31081031cebe450e4450ef397d91790fc0068e98e6746cd0aab86d17e4448f5',
918000: '4af8eb28616ef0e859b5471650c7f8e910cd692a6b4ff3a7171a709db2f18e4e',
919000: '78a197b5f9733e9e4dc9820e1c79bd335beb19f6b87056e48e8e21fbe27d83d6',
920000: '33d20f86d1367f07d6731e1e2cc9305252b281b1b092403133924cc1052f501d',
921000: '6926f1e31e7fe9b8f7a81efa73d5635f8f28c1db1708e4d57f6e7ead951a4beb',
922000: '811e2335798eb54696a4b11ca3a44b9d79486262119383d542491afa9ae80204',
923000: '8f47ac365bc380885db809f2818ffc7dd2076aaa0f9bf6c180df1b4358dc842e',
924000: '535e79802c10630c17fb8fddec3ba2bf85eedbc0c076f3575f8189fe887ba993',
925000: 'ca43bd24d17d75d55e72e45549384b395c62e1daf0d3f58f296e18168b918fbf',
926000: '9a03be89e0725877d42296e6c995d9c48bb5f4bbd971f5a9add191af2d1c144b',
927000: 'a14e0ef6bd1bc221dbba99031c16ddbbd76394186677c29bdf07b89fa2a6efac',
928000: 'b16931bd7392e9db26be975b072024210fb5fe6ee22fc0809d51980aa8068a98',
929000: '4da56a2e66fcd98a70039d9061ea5eb0fb6d9460b437d2191e47441182419a04',
930000: '87e820e2237a54c4ea100bdd0145598f05add92185cd3d0929aa2d5099f4d5e0',
931000: '515b22c91172157c443a47cf213014aff144181a77e276e291535ab3762bb1ae',
932000: 'e130c6a9eb416f96256d1f90256a148957daa32f56af228d2d9ce6ff27ce2011',
933000: '30c992ec7a9a320fb4db260373121efc7b5e7fc744f4b31defbe6a7608e0749e',
934000: 'ec490fa0de6b1d78a4121a5044f501bbb3bd9e448c18121cea87eb8e3cadba41',
935000: '603e4ae6a6d936c79b3f1c9f9e88305930953b9b390dac442976a6e8395fc520',
936000: '2b756fe2de4328e598ed511b8828e5c2c6b5cdda1b5e7c1c26f8e0424c81afa9',
937000: '1ae0f15f14a0d4819e34a6c18de9428a9e43e17d75383bffa9ffb18358e93b63',
938000: 'cbd7001825ec87b8c6917d6e9e7dc5c8d7767788b6ffd61a61d0c612dbe5de66',
939000: 'd770d0395aa79076044783fb37a1bb173cb95c93ff1ba82c34a72c4d8e425a03',
940000: '3341d0a0349d091d88d233cd6ea6e0ad553d52039b4d47af51b8a8e7573a7916',
941000: '16123b8758e99344ebe6670cd95826881b274c31d4da2a051052955a32bade3a',
942000: 'ac7430961e77f902918fe79a52cbf6b523e3f2804ec83d0b17908e131ea9ea68',
943000: '2ad08a6877e4687dcb7a623adeddc88403e8082efd6de28328b351282dc141e2',
944000: '81382e8c1f47fa7c03fa1726f9b09ed1cd38140fe50683896eaa1b403d7e5fe3',
945000: '152bfbb166da04dab16030af28ae65b3275819eed1d0bbfc11eba65616ebefd6',
946000: '25b3da0962f87a0d3e4aec8b16483efbcab9514893a42fd31f4cb544ddc45a1f',
947000: '2cb738ba342436628ff292797e3d36c4752d71bdc1af87fe758d469d06e36e0e',
948000: 'b3683e18570fcc8b986720514539181ec43fb5dbc20fe314c56ab6bd31ab766a',
949000: '94ced5bfba55ccffc909bf098d537e047d8d4cbb79f5e2a74146073f39804865',
950000: 'b11543cd2aedae27f6ddc3d2b431c897fdcfe59ed3c926b0777bc1e99de4d12a',
951000: '21508881a7f80fcd0b9b27bbcfba634b39c6525f5313968c4605cd55b4fec446',
952000: 'f9b3ed919c9ca20cd2927d899ee7a86c93c2dd919dafb6fdb792f2d9f1895cb0',
953000: 'cf578d8e80eec4102dc1b5321f10b36020b3b32f4b5d4664c90c412ca2ef6b42',
954000: 'ed17c919ae5c4be835966b47f667d6082c75917b95584b2d2aff0e32f5c8aa98',
955000: '948ea467fa01a20122e2146669214fdd3bb025038554609f7299ece5bca63e39',
956000: 'b50ff4c02957ed8764215d25f206f6f1fe6d0eb712a378b937ff952dd479afd2',
957000: '169922a3e51517ba6104a883d29aac03a9d20b4d448bd2773137b0d790e3db6b',
958000: '92258ac2e8b53167dc30436d93f385d432bd549711ab9790ba4e8263c5c54382',
959000: '7ca824697459eb302bcd7fba9d255fb269555abe7cf9d2dd5e54e196d751e682',
960000: '89f9ec925d23698076d84f9e852ab04fc956ac4465827303de0c3bb0b685eb32',
961000: '41cf75cd71bc12b93674c416e8b01b7410eb9e09eb8727ad93ff0b833c9966c9',
962000: '7db1f1dbff3e389713067879bfedf9513ec74bb1e128b13fc2fe23ad55fd0306',
963000: 'a35e71c611b2227adeac824d151d2f09bdbecd5765a4e62c6e74a3e4290abc66',
964000: 'dc1811130e249d2208d6f85838512b4e5482efb0bd2f619164a68a0c60d7f248',
965000: '92f5e25dd1c03102720dd0c3136b1a0769901bf89fcc0262a5e24405f349ca07',
966000: '08243d780d8ba96a940f409b87d9c6b8a95c92804173b9156ada0dad35b628dc',
967000: 'cb769a8935bb6faeb981da74f4079babbbb89476f825cc897f43e79790295260',
968000: 'ff3fc27d2998f4dc4ac1ff378afe14c7d0f43cc328deb9c978ec0e067d1dfaf9',
969000: 'e41a3452f45d5f025627d08c9c41017679e9c4804371dd1cc02f3ed49f85dbb2',
970000: 'f5eaaf7ba6b47245a4a8096a7785c7b25dc6db342ac2ccbba0c321e97ab58284',
971000: '75414062f1d4ed675dadc8f04ba10147a484aaca1ae316dc0b896a92809b3db6',
972000: '5bcf2ee00133774c7d060a1a1863dfccc20d5127ecb542470f607dec2504fe6f',
973000: '07d15b9656ecde2cd86a9d22c3de8b6505d6bab2aa5a94560b0db9119f1f6f6c',
974000: '2059e7924d7a210a88f5a65abc61152506a82edccd27416e796c81b9b8003f13',
975000: '7fcf5d8b2c0e51cfbdaa2502a9da0bdb323646899dad37dacc39af9f9e16fc5c',
976000: '02acb8cf87a0900436eccfca50371948531041d7b8b410a902205f84dd7fb88e',
977000: '2636dfd5a47016c893265473e78ecbf2000769d886f0d01ee7a91e9397210d15',
978000: 'ce92f52a35096b94bea73a7d4e113bc4564a4a589b66f1ab86f61c822cf9ee76',
979000: '21b8102f5b76be0c8e20d537ebc78ebe46bfcea6b6d2dda950ce5b48e85f72d7',
980000: 'f4df0bd63b36105705de62266d654612d9804bad7069d41344de269657e6f084',
981000: 'f006cd2718d98d774a5cd18394db7744c812fa149c8a63e76bab934aee89f571',
982000: 'da5d6609265d9153022d823b0260aa07e7511ceff7a3fd2ca7ce83cb3900a661',
983000: '3a26f3f02aa145fa8c5268fbe10dd9c3546d7dda57489ca5d4b161beb0d5a6e2',
984000: '968e8cd37a1137797d40f39f106cae62d1e252b46c7473b9434ad5f870ee88fb',
985000: '3129c3bf20deace1a9c92646a9d769da7a07f18dcd5b7a7b1e8cf5fd5390f8e1',
986000: '6ce830ca5da322ddbb97fc572ea03218913d070e5910516b33c6113b02b23c21',
987000: '7fb1a8635623847132ab766a99b792953379f782d1115b9649f5f9c5a742ca04',
988000: '5e8e6c6da7f271129c20c4dd891dcb1df4f9d690ee7cf391c6b7fbd028a0da4c',
989000: '12919e34bb9a9ac1d2a01e221eb8c511117fc4e1b3ae15355d95caf4673bdb08',
990000: '016f8b18227a0c09da55594a98638ad5b0fbb4896e2ab6163ac40b6015b2811e',
991000: 'ddf8cd6e2f4ee07530ae7567cef4fa2c2fd4a655cb20e20422e66fd49bde6489',
992000: 'dca77707c0caa3a9605f3dadf593402339c29448869907fb31f6c624e942dcbd',
993000: 'de9acc4c7c482ecac741fd6acbbc3a333afab52f3fe5eea4130c0770299a56dd',
994000: '54420631f8a801a1b8f391088f599ee22cedc06f24bf67f18272feb8fe70c682',
995000: '4b44b26e3e2495716dfd86fc42594cd4b1e4b70bdab4f0905cce4cb9556e008a',
996000: 'd6e41fd301fc5f519c343ceb39c9ff845656a4482e4e182abdcd3963fd5fde1c',
997000: 'd68b6a509d742b182ffb5a98b0e585a2320a5d3fe6977ad3e6cd06835ef2ea55',
998000: '1efcdcbadbec54ce3a93a1857253614536c34f05a0b1924f24bff194dc3392e1',
999000: '10a7713e46f47527f3819b4a9257a03f3e207d18e4917d6bcb43fdea3ba82b9a',
1000000: '1b4ddb1436df05f07807d6337b93ee1aa8b600fd6a910a8fd5313a39e0440eec',
1001000: 'cde0df1abdae26d2c2bdc111be15fb33231c5e167bb8b8f8eec667d71379fee4',
1002000: 'd7ce7a96a3ca73a4dfd6a1780e23f834f339142519ea7f45d256c113e27e4857',
1003000: 'b1a9b1c562ec62b9dd746d336b4211afc37482d0274ff692a44fa17ac9fe9a28',
1004000: '7afd6d0fb0014fbe16a31c84d3f1731736eaeef35e40bb1a1f232fb00345deae',
1005000: '4af61ce4cda5de58277f7a67cadea5d3f6ce56e54785b188e32306e00b0414df',
1006000: '08e1fb7295efd4a48cb999d899a3d481b682ddbce738fecd88a6d32cbe8234f0',
1007000: '14a367a41603dd690541daee8aa4a2882260059e3f85bd8978b7431e8f7db844',
1008000: 'e673230e62aaefad0678611f94ff35ee8a6e18eb96438bdfb4b614f54f54dba7',
1009000: 'e191af8fb71d0d91419abd19443af3d3f23ee4fe359bb8c390429cc838132bde',
1010000: 'ffdba58f184cf60838b75b7899b6633e7cfd34cf36eded572c0133d07387bc49',
1011000: '40801af3a5546cb9d53e05e21b74be09de9a421b762ca1d52d2266f5c2055ce8',
1012000: '552519acebed0e38102f5270dc60b1da7a123600b6b94169ae74462ae454693f',
1013000: '1eee96f48418929927eaa9642777bc806d326cfffaf077bc8695a7ecd438d631',
1014000: 'a471093e1de2a8db586412d7351c8d88e44ea890f46e9b43251af427a0a4a879',
1015000: '57532f5a522295cc139f008bdcb7a1e6d02e6035d5221b2687c7c216f06297a2',
1016000: 'ec46dba07addcb6e62f58456a53c513d876f1c49ae7d76d230adb8debd26027d',
1017000: '33ea8d25f342a7465ed71e4bab2b91007991e0994c61d321e3625301a1390322',
1018000: '4871c03cc95d4ce0a39bd2cebbb001b2ea1cce1b3561bb841d88f43bb9d12ffd',
1019000: 'f5248257576eb2ff4139d6374cc7ce34121cc942598cf9e04d2bd572e09189bb',
1020000: 'e7785286897c85cfb0276957bff216039eeb11bc1ebca89d0bb586022caa5750',
1021000: 'a30220f17d060634c5f6a1ddc5ea34b01c18fb5eb7e0e8267b66bf5a49525627',
1022000: '6083ea49e64ac0d4507c674237cf87d30b90b285ec63d082e626df0223eb7c9c',
1023000: '1dc5596d716bc33ee0f56fc40c1f073155a58a7692935c9e5854ef3b65b76828',
1024000: '065adfee40dc33abff07fb55339571712b959bc1830dc60b6691e36eab1508ae',
1025000: 'bb6903752d31278570e774b80a80782179c78f099e58c3dc4cba7afea7a471c4',
1026000: 'f3050f3c2f3a76f5084856b0f089383517caa3f51530fbc29335308f5f170625',
1027000: '746ed3701510d07958d11a06f22dbb839d9858373dc5a33249dd69e91bab01fd',
1028000: '43f7a96ea6a45b78c29ad4a2f8680ef184438c2bd3686172b0564e0ae6dd7ba1',
1029000: 'cbb9916099c59e14fe61d284374f4feaa3d43afec59e4698ed92143576f24b34',
1030000: '2e805fc2331e32e586ea692bc3d4e6b11e1ec3f1cab6e331b459f9f1ac9a1f1e',
1031000: '04f324f8f6d4f9901cf65f78dc91d6010ea6cf125f5ac0253b57b5f1f79e81e0',
1032000: '60ca62f52fdfd858b0ee0fdb380648bde85ca14e2a73565205ed4ee0bc861c77',
1033000: 'eb60aac23d599d3099cf98ed8fc3213f1bc06bc1c677429b303e9c81f79f1340',
1034000: 'f0328df2daf119ce673ddfa7a39a84576985f701f7a7dec3f56f58c2019ebd4d',
1035000: 'f9d3cbce3854de168d8835c96917c01be6244c8f82641e8d9398dfffec4e7107',
1036000: '7dca97e6e1d6ed70aa7805f74b768009a270e7ebe1dd951e8727d1d2f2d271f2',
1037000: '5329504126b2845b3044f423b521e77ff58d7d242f24bf87c87f4d8d4e03a947',
1038000: '5bad3ad55e3daa415f3182a1f2a099fe1767e8fae34e9bb95d47e242b8971434',
1039000: 'c29729b8ba49ac0043fe4aa6fc971f8ac3eda68ff92970957ada39a2989b2491',
1040000: 'f303aebfc9267600c081d0c021065743f93790df6f5c924a86b773788e0c45be',
1041000: 'a1cbe5059fa2275707785b77970c36d79b12c1ba93121bc9064ab9b64abacf7b',
1042000: '004b0dd4e438abc54ae832d733df32a6ba35b75e6d3e0c9c1dee5a7950507295',
1043000: '31893a3fe7bb4f6dd546c7a8de4a65990e94046aab442d18c68b6bf6acd54518',
1044000: '2c4dd479948acc42946f94050810000b0539864ad24a67a7251bff1c4971b035',
1045000: '1cea782d60df35a88b30ae205ce37e30abc7cad2b22181722be150bd92c53814',
1046000: 'ee808f0efb0f2ef93e8599d8b7f0e2e7c3cdc42353e4ea5165028b961f43d548',
1047000: '75f057e2a8cb1d46e5c943d63cc56936a6bac8b1cb89300593845a20baf39765',
1048000: '2abcd227f5314baed85e3c5b49d3888a60085c1845c955a8bf96aa3dd6394798',
1049000: '5d0ec24b9acd5ab21b42f68e1f3142b7bf83433b98f2fa9794586c8eff45893e',
1050000: '1d364b13a4c17bd67a6d1e5f77c26d02faa014d7cd152b4da70380f168b8e0ff',
1051000: 'b9a20cec21de84433be9b85817dd4803e875d9275dbc02907b29888431859bae',
1052000: '424cb56b00407d73b309b2081dd0bf89213cf024e3aafb3090506aa0ba10f835',
1053000: '6df3041a32fafd6a4e08778546d077cf591e1a2a16e77fe7a610efc2b542a9ff',
1054000: '78f8dee794f3d4366019339d7ba74ad2b543ecd25dc575620f66e1d535411971',
1055000: '43b8e9dae5addd58a7cccf62ba57ab46ffdaa2dcd113cc8ca537e9101b54c096',
1056000: '86b7f3741343f85d93410b78cc3fbf03d49b60a664e908703016aa56a206ae7e',
1057000: 'b033cf6ec622be6a99dff536a2cf73b36d3c3f8c3835ee17e0dd357403e85c41',
1058000: 'a65a6db692a8358e399a5ac3c818902fdb60595262ae05531084848febead249',
1059000: 'f6d781d2e2fdb4b7b074d1d8123875d899cdbd6be375cb4288e86f1d14a929f6',
1060000: 'cd9019bb1de4926cca16a7bef1a46786f10a3260d467cda0775f73361795abc9',
1061000: 'ed4f5dc6f475f95b40595632fafd9e7e5eef388b6cc15772204c0b0e9ee4e542',
1062000: 'c44d02a890aa66979b10d1cfa597c877f498841b4e12dd9a7bdf8d4a5fccab80',
1063000: '1c093734f5f241b36c1b9971e2759983f88f4033405a2588b4ebfd6998ac7465',
1064000: '9e354a83b71bbb9704053bfeea038a9c3d5daad080c6406c698b047c634706a6',
1065000: '563188accc4a6e311bd5046516a92a233f11f891b2304d37f151c5a6002b6958',
1066000: '333f1b4e996fac87e32dec667533715b31f1736b4342806a81d568b5c5238456',
1067000: 'df59a0b7319d5269bdf55043d91ec62bbb30829bb7054da623717a394b6ed678',
1068000: '06d8b674a205393edaf20c1d837baadc9caf0b0a675645246263cc163302241d',
1069000: 'ac065c48fad1383039d39e23c8367bad7cf9a37e07a5294cd7b04af5827b9961',
1070000: '90cd8b50f94208bc459081356474a961f6b764a1217f8fd291f5e4828081b730',
1071000: '3c0aa207ba9eea45458ab4fa26d6a027862592adb9bcce30915816e777dc6cfc',
1072000: '3d556c08f2300b67b704d3cbf46e22866e3ac164472b5930e2ada23b08475a0f',
1073000: 'a39b5c54c24efe3066aa203358b96baea405cd59aac6b0b48930e77799b4dd7d',
1074000: 'e8c8273d5a50a60e8744716c9f31496fb29eca87b4d68643f4ecd7ec4e400e23',
1075000: 'b8043ae41a1d0d7d4310c85764fcba1424733df347ffc2e8cbda1fe6ccbb5153',
1076000: '58468db1f91805e767d334824d6bffe54e0f900d1fb2a89b105086a493053b3d',
1077000: '04a78749b58465efa3a56d1735cd082c1f0f796e26486c7136950dbaf6effaa4',
1078000: 'e1dd6b58c75b01a67d4a4594dc7b4b2ee9e7d7fa7b25fd6246ce0e86eff33c75',
1079000: 'd239af017a6bb664485b14ad15e0eb703775e43018a045a8612b3697794460da',
1080000: '29ae5503f8c1249fefeb63fd967a71a70588ee0db1c97497e16366163a684341',
1081000: '05103ab27469e0859cbcd3daf42faa2bae798f522534697c7f2b34f7a050ee0f',
1082000: '4553d2cb7e90b6db11d242e287fe96822e6cd60e6388b94bf9006411f202ba03',
1083000: '97995acd178b2a142d571d5ae1c2a3deaf93a909fd91fb9c541d57f73e32dc99',
1084000: '9e3f23376af14d76ab24cd54e321dec019af73ad61067d959ff90043acc5ffcc',
1085000: '81c056b14f13cee0d6d6c8079fdd5a1a84c3a5c76cc9448612e8ef6d3531300e',
1086000: '8a0004f6809bdd075915a804e43991dfe8f22e05679d2fdaf8e373f101bac5c2',
1087000: '27c45a4c9ad24e038f2ebe40835a1c49ac7221d7185082866ee354351ba87c7a',
1088000: 'fd27e21747117b00b4ada1cba161ac49edb57cca540f86ac5ba885050f08f824',
1089000: 'bff867335767103bc3ed15ede5b9fde88016f8ede15dc5bf3e81ea40dcfc61ae',
1090000: '608f75016d1db08888dd59640f63e838c19bdfa833c0cc177ad3d2b818b0db5b',
1091000: '90750b452bd4dedaab6b57fecbfe88f71ce3d5437fad7f9ec0fdd270445c7526',
1092000: '98287b39f9f1233017dc5d932e5c77f0521ca84587eb3f39f0e7b6c297c749af',
1093000: '68a5846ed05c9bb142197849106838765f90f15c10b2cc938eef49b95eaa9d33',
1094000: '5660a1aac2fc763a417fc656c8887fc8186bf613ae1ccbb1a664fb43ce1fa1d6',
1095000: '62bad3db418b3f4cad3596881b645b72479c71deb0d39c7a4c8bd1577dc225fd',
1096000: 'e0e4b2b183591f10dd5614c289412f2fb5e320b7d3278f7c028f42f591872666',
1097000: 'a233a233fc2aa5dab9e75106d91388343ef969458ea974f1409a2ab5fc441911',
1098000: '16dfa5fa6cbd1188e562697b5f00ac206960d0851ed84adf37ae975fd5ffdd6a',
1099000: 'b8a870b7dc6d3263730c00f59d52aa6cce35dc59aa8fba715034cc2d14927260',
1100000: 'a3cd7749743da22a3846dcc2edbf1df21b938e829419389e3bc09284797c5b43',
1101000: '75b14c2a95e2a095949729b7c0b624bd725a2de98404a8e3247b60c977d0198e',
1102000: '4d3af64d37064dd5f57e25d61f248a1e21c1b1cadd7bb1404e35c9fbe06f1fd4',
1103000: 'd73c92bfed358dfcd7659228974ab75ea2fc86f2301ee47133adad8075203872',
1104000: '30cd82354f37bc0b412123867c7e1835206022a7501853bf8c0d3df02f291645',
1105000: '1d2ef984f26693dce77460cd2694e5da46e675077e91a1cea26051733b01a7ef',
1106000: '51c076c304222fe3ca308ba6968c46fef448f85be13a095cecb75b90e7954698',
1107000: '99e2221339e16acc34c9816f2ef7b866c2dd753aa3cbe484ae831959a23ece68',
1108000: '0f1227c250296bfe88eb7eb41703f99f633cfe02870816111e0cadfe778ddb19',
1109000: 'b35447f1ad76f95bc4f5886e4028d33acb3ad7b5000dd15516d3f11ce4baa990',
1110000: 'ac7baff996062bfaaaddd7d496b17e3ec1c8d34b2143095645ff22fb3888ae00',
1111000: '430bbbdcca36b2d69b6a2dd8b07c583a060a467e5f9acbc6de62462e1f7c7036',
1112000: 'e5274dea029dc44baff55c05b0555f91b74d29ffd40e3a8c4e2c5b57f9d40bef',
1113000: 'cf43863249fa42cfe108220dd40169dac702b0dd9cf5cb699cf2fc96feda8371',
1114000: 'fa1c0e551784d21c451564124d2d730e616724f3e535de3c186bcdeb47e80a8f',
1115000: '49fe6ecee35a397b83b5a704e950ad028cfb4b7e7a524021e789f4acc0fd6ffe',
1116000: '74ecded36751aa8b7901b31f0d16d75d111fc3c40b567f649c04f74ed028aa5c',
1117000: 'd9ca760a22190bdf545766b47d963c738a4edcc27f4d15ca801b35751577cfa7',
1118000: 'c28d42f871682800ac4e867608227cfb6bc4c00b618e83a8556f201a1c28813c',
1119000: 'c5fafc4e1785b0b9e84bb052e392154a5ba1aefe612998017e90772bcd554e08',
1120000: 'aa054d428bc9ccee0761da92163817163413065fe1e67ef79a056c5233ea3476',
1121000: '0df295bb944218503bd1bf66d2ece0c50fd22dae3391b80673a7ad1e4e5c3934',
1122000: 'a13abb350a26673b3933b1de307a60a6845ca594d502599548c6253e21a6d8e8',
1123000: 'a4bc6a3abf9ed1f4b14338ff0f03f83456312bc91a93fa89ae6db493050115e1',
1124000: '65869938df99adf0dda76200291ce09a54c9bcc787e4bb62cd72c367db58f4f0',
1125000: 'ea5e918233b14c3c73d488a906e3741c61bdcafe0393bd0404168fe80c950a46',
1126000: 'ce88cd35104fcec51bcee77302e03162dc694802536f5b668786b2245e61bca5',
1127000: 'ea19c0c8d205be4be87d02c5301c9ed331e7d75e25b93d1c2137c248882af515',
1128000: '006f32d63c2a3adcf4fbad0b0629c97f1beab6446a9c27fbde9472f2d066219e',
1129000: '218e5392e1ecf471c3bbc3d79c24dee30ac8db315dbeb61317318efb3f221163',
1130000: '30b9da0bd8364e9cd5551b2529341a01a3b7257a238d15b2560e2c99fdb324e8',
1131000: '8a7f382cfa023d2eba6639443e67206f8883b57d23ce7e1339234b8bb3098a82',
1132000: 'bf9af68a6fe2112d8fe311dfd52334ae2e7b0bac6675c9ebfddb1f386c212668',
1133000: '1a30951e2be633502a47c255a93ddbb9ed231d6bb4c55a807c0e910b437766b3',
1134000: 'a9bcaf3300b7915e701a8e396eb13f0c7287576323420be7aab3c3ba48020f76',
1135000: '337eed9ed072b5ad862af2d3d651f1b49fa852abc590b7e1c2dc381b496f438a',
1136000: '208761dbc29ec58302d722a05e937a3cf9e78bfb6495be395dd7b54f02e169dc',
1137000: '4e5b67ff3324b64e268049fdc3d82982b847ee359d409ade6368864c38a111e5',
1138000: '55d1d0833021a664e85eec8cc90a0985e67cc80d28841aaa8c2231ec28087ebb',
1139000: 'e750ada1ec9fa0f2f2461ed68958c7d116a699a82ec12911da5563139f8df19e',
1140000: '9cf81407b6ccc8046f0233f97484166945758f7392bb54841c912fcb34cf205c',
1141000: 'fccf32b2fae03e3b6b562483776625f9843cd68734c55659e2069cde7e383170',
1142000: 'c3608c215dd6569da6c1871c4d72a09ab1caa9663647f2a9454b5693d5d72a65',
1143000: 'bd39cb8c4e529d15bbea6baeec66afe52ca18afe32bd812f28fbb0676647cdff',
1144000: '6e42d02538565ce7e2d9bf31a304f1fd0ac122d35d17a030160575815901b0b1',
1145000: 'b9722e1de2904ce1219140fffb1f4f9f5a041f885faa634404238d103c738b4c',
1146000: 'd4de4271459966cee774f538a243d7db0689b213b296463d42e45c93194d7861',
1147000: '51fadf109f22bb85574d0fbcbd0b20992983e89aee3d415a7b1c37c44775d9a9',
1148000: '137e1fe8da31680d21a42e7421eb608a883a497314e4404625ce44b0edadde6a',
1149000: 'cb87867eb04203ce15e0763a2f4389376cea75e0a2877f55e2911c575bef07a8',
1150000: '977528ca7953a2c9c19fefaa3aab7ebdec3ac324d74a07d83764ba25d9be0689',
1151000: 'a09c51c832600ded63a19201df008075273ea248fd406886e93a2cbaa3bba46b',
1152000: '0e5367cfa0f00dd932a5bcc00dcc807fa6825161806bed588e16a57947b4b32d',
1153000: '55a9de3dcde2efb56a3c5fea7d22b98c1e180db9a4d4f4f6be7aae1f1cbd7608',
1154000: 'abc58cf71c4691ebfaef920252730cf69abbe9de88b424c03051b9b03e85d45a',
1155000: '4f074ce73c8a096620b8a32498362eb66a072eae95d561f2d53557cd513ae785',
1156000: '540a838a0f0a8834466b17dd456d35b8acae2ec8419f8bd9a704d9ea439062ac',
1157000: 'd5310ac671abdb658ea028db86c23fc729af965f91d67a37218c1412cf32a1f5',
1158000: '162d906a07e6c35e7c3ebf7069a200521605a97920f5b589d31b19bfd7766ee2',
1159000: '600bd8f5e1e62219e220f4dcb650db5812e79956f95ae8a50e83126932685ee0',
1160000: '91319398d1a805fac8582c8485e6d84e7490d6cfa6e44e2c630665b6bce0e6b8',
1161000: 'f7ad3cff6ee76e1e3df4abe70c600e4af66e1df55bf7b03aee12251d4455a1d4',
1162000: '85b9fbba669c2a4d3f85cdb5123f9538c05bd66172b7236d756703f99258454d',
1163000: '966085d767d1e5e2e8baf8eda8c11472ec5351181c418b503585284009aaea79',
1164000: '1c94e1b531215c019b12caf407296d8868481f49524b7180c7161b0363c1f789',
1165000: '803b6bf93735aeae2cf607824e2adf0d754b58da2516c2da1e485c697e472143',
1166000: '872561a82f7991633d0927d25cb659d096bbe556fe6dac7a0b6a679820733069',
1167000: '6bd7cdd605a3179b54c8af88d1638bf8133fab12cbf0a78d37cf21eddf4395a1',
1168000: '79946f5758c1817239cc642d27298bd710983551a8236e49832c6d818b097337',
1169000: 'b0994c60728e74de4aa361f37fa85e5296ce3188ae4e0b66d7b34fe86a239c9c',
1170000: 'a54188a5a64e0cf8da2406d16a0ac3983b087fc7d6231b6f8abf92cf11dc78cd',
1171000: 'ec2924d98e470cc6359821e6468df2c15d60301861d443188730342581230ef2',
1172000: 'b4ac11116aa73ce19428009a80e583e19dc9bcd380f7f7ce272a92921d5868d2',
1173000: '501d3551f762999dd5a799f3c5658fff2a7f3aff0511488272cd7693fefb8f9d',
1174000: '4660074ea48a78ae453cb14b694b2844cc0fb63ed9352ed20d11158bbb5c1f28',
1175000: '0727f6b1d9f8fe5677a9ffa0d475f53f5a419ef90b80896c22c2c95de22175de',
1176000: '150633d6a35496c24a93c9e19817e90f649c56b7e2558f99e97325bfd5df8b17',
1177000: '0849e19f22571b62dba8ff02f6b5a064a7ac36e7ed491321b3663567e8e17294',
1178000: '770dd463e7bad80f689f12934e4ae06e24378d1545dcf211fd143beaef49464e',
1179000: '059d383dcc60a49b658b674d92fc35cab07b06329c58d73818b6387cb0c06534',
1180000: 'e547cb3c636243ca9ae4cfb92c30a0f583eda84e329a5c1e5f64a26fc6fc791e',
1181000: '4521a4396ab02f73d45d7a3393ea1c602d255778d52c12079c88bfbad32aab43',
1182000: '051cfe993e4b0b34233403a9e8c397dd50e8b78a30fb07e9c260604ee9e624a9',
1183000: '44a69c99bb8b85e84ae279f2d8e5400d51cb3d5f0bcd178db49d55548cd66191',
1184000: '2a1d23c9bb3c71a533e0c9d25b03bfa7e9db8e014645f3e7fbede6d99fff0191',
1185000: 'bb90d6c6d77819163a9e909ee621d874707cdb21c91b1d9e861b204cf37d0ffa',
1186000: '4a92051b738ea0e28c64c64f1eb6f0405bc7c3427bef91ff20f4c43cf084d750',
1187000: 'f782ac330ca20fb5d8a094ee0f0f8c086a76e3f03ecc6a2c42f8fd07e52e0f41',
1188000: '94cb7b653dd3d838c186420158cf0e73db73ec28deaf67d9a2ca902caba4141a',
1189000: 'c8128e59b9ec948de890184578a113478ea63f7d57cb75c2c8d5c001a5a724c0',
1190000: '4da643bd35e5b98932ae21515a6bffb9c72f2cd8d514cd2d7eac1922af785c3f',
1191000: '0f922d86658ac3f53c5f9db360c68ab3f3253a925f23e1323820e3384214719a',
1192000: '4c3ab631cf5ba0c236f7c64af6f790fc24448319de6f75dbd28df4e2648d0b7d',
1193000: 'eda118d1fac3470a1f8f01f5c78108c8ecdcd6420be30f6d20f1d1831e7b6975',
1194000: '5723fff88abd9bb5088476fa5f4221a61c6f8a718703a92f13248ad350abeea2',
1195000: '1715846f82d011919e3446c6ce675a65fb80338bd791d4e735702c4767d9adc4',
1196000: 'b497667996aee2db61e88f442e728be15ab0b2b64cfd43198691fcf6cdafacc8',
1197000: '309a6170d837b8cb334fb888a64ed4e47e6592747e93c8e9d1bf7d608cfef87d',
1198000: '3ea918ef64a67dec20051519e6aefaeb7aca2d8583baca9ad5c5bd07073e513a',
1199000: '4ec7b7361b0243e5b2996a16e3b27acd662126b95fe542a487c7030e47ea3667',
1200000: 'b829c742686fcd642d0f9443336d7e2c4eab81667c90ce553df1350ed10b4233',
1201000: '44c022887f1e126fd281b1cae26b2017fa6415a64b105762c87643204ce165a5',
1202000: 'b11cc739eb28a14f4e47be125aa7e62d6d6f90c8f8014ee70044ed506d53d938',
1203000: '997a7c5fd7a98b39c9ca0790519924d73c3567656b605c97a6fdb7b406c3c64d',
1204000: '7d25d872e17195ee277243f7a5a39aa64d8750cec62e4777146acf61a8e76b04',
1205000: 'ce8486ae745a4645bee081ef3291d9505174bed05b0668d963b2998b7643dbb0',
1206000: '46a0bcea3c411c600dffe3e06e3d1dfbf5879a7ec4dcf3848e794cefcbf2bc0b',
1207000: '37e6297bf6e4e2bdd40401d4d7f95e3e3bdafd4a7f76b9c52865cefc6b82b20b',
1208000: 'd09e3982a9827b8cf56a5a2f4031dc6b082926c1fd57b63beaaa6cfd534eb902',
1209000: '54ae9010a9f146c83464e7ee60b30d9dbee36418561abc4e8d61bce9baa2d21d',
1210000: '5dcfd33f8e5ac21c9ba8553758b8cd8afae7961cad428530b5109c2db2ebf39f',
1211000: '91c952348bb2c3dfac0d6531a3dac770ea6dab571af257530e9c55493c96bdd9',
1212000: 'e62cc3fe044a7f5de4c04a8aed5619548f9d5c6fad9f989d3382cb96de1d780d',
1213000: '66b46ffdca8acf1dd04528dadb28b6ac4ce38807c1b84abd685d4ddb3dc59a34',
1214000: '2ce4091756ad23746bab4906f46545953cadaf61deae0d78e8a10d4eb51866b1',
1215000: '83ce3ca087799cdc4b4c5e7cfeb4a127708724a7ca76aa5f7f4ec1ed48b5fca6',
1216000: '7d07b739b7991fbd74926281bf51bba9d5721afab39598720f9ff5f7410a6721',
1217000: '76adf49491670d0e8379058eacf0228f330f3c18955dfea1ebe43bc11ee065f3',
1218000: '77f422e7301a81692dec69e5c6d35fa988a00a4d820ad0ebb1d595add36558cc',
1219000: '8ba9d944f8c468c81799294aeea8dc05ed1bb90bb26552fcd190bd88fedcddf2',
1220000: '00330367c255e0fe51b374597995c53353bc5700ad7d603cbd4197141933fe9c',
1221000: '3ba8b316b7964f31fdf628ed869a6fd023680cca6611257a31efe22e4d17e578',
1222000: '016e58d3fb6a29a3f9281789359460e776e9feb2f0db500482b6e231e1272aef',
1223000: 'fdfe767c29a3de7acd913b627d1e5fa887a1af9974f6a8a6474db822468c785c',
1224000: '92239f6207bff3689c554e92b24fe2e7be4a2203104ad8ef08b2c6bedd9aeccf',
1225000: '9a2f2dd9527b533d3d743efc55236e73e15192171bc8d0cd910918d1ab00aef7',
1226000: 'eb8269c75b8c5f66e6ea88ad70883dddcf8a75a45198ca7a46eb0ec606a791bb',
1227000: '5c82e624390cd57942dc9d64344eaa3d8991e0437e01802473053245b706290c',
1228000: '51e9a7d727f07fc01be7c03e3dd854eb666697f05bf89259baac628520d4402c',
1229000: 'c4bfdb651c9abdeda717fb9c8a4c8a6c9c0f78c13d3e6cae3f24f504d734c643',
1230000: '9f1ce781d16f2334567cbfb22fff42c14d2b9290cc2883746f435a1fb127021d',
1231000: '5c996634b377412ae0a3d8f541f3cc4a354aab72c198aa23a5cfc2678cbabf09',
1232000: '86702316a2d1730fbae01a08f36fffe5bf6d3ebb7d76b35a1617713766698b46',
1233000: 'fb16b63916c0287cb9b01d0c5aad626ced1b73c49a374c9009703aa90fd27a82',
1234000: '7c6f7904602ccd86bfb05cb8d6b5547c989c57cb2e214e93f1220fa4fe29bcb0',
1235000: '898b0f20811f52aa5a6bd0c35eff86fca3fbe3b066e423644fa77b2e269d9513',
1236000: '39128910ef624b6a8bbd390a311b5587c0991cda834eed996d814fe410cac352',
1237000: 'a0709afeedb64af4168ce8cf3dbda667a248df8e91da96acb2333686a2b89325',
1238000: 'e00075e7ba8c18cc277bfc5115ae6ff6b9678e6e99efd6e45f549ef8a3981a3d',
1239000: '3fba891600738f2d37e279209d52bbe6dc7ce005eeed62048247c96f370e7cd5',
1240000: 'def9bf1bec9325db90bb070f532972cfdd74e814c2b5e74a4d5a7c09a963a5f1',
1241000: '6a5d187e32bc189ac786959e1fe846031b97ae1ce202c22e1bdb1d2a963005fd',
1242000: 'a74d7c0b104eaf76c53a3a31ce51b75bbd8e05b5e84c31f593f505a13d83634c',
}


@@ -141,7 +141,7 @@ class CoinSelector:
              _) -> List[OutputEffectiveAmountEstimator]:
        """ Accumulate UTXOs at random until there is enough to cover the target. """
        target = self.target + self.cost_of_change
-        self.random.shuffle(txos, self.random.random)
+        self.random.shuffle(txos, random=self.random.random)  # pylint: disable=deprecated-argument
        selection = []
        amount = 0
        for coin in txos:
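
The shuffle() call above keeps CoinSelector's dedicated random source but passes it by keyword and suppresses pylint's deprecated-argument warning; the random parameter of random.shuffle() is deprecated as of Python 3.9. A small standalone sketch (placeholder items, plain random.Random instances) of the reproducibility a seeded shuffle of the candidate UTXOs provides:

import random

txos = ['utxo-a', 'utxo-b', 'utxo-c']   # placeholder coins
first, second = list(txos), list(txos)
random.Random(1234).shuffle(first)      # same seed ...
random.Random(1234).shuffle(second)     # ... same order
assert first == second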


@@ -2,6 +2,7 @@ NULL_HASH32 = b'\x00'*32
CENT = 1000000
COIN = 100*CENT
+DUST = 1000
TIMEOUT = 30.0


@@ -9,16 +9,17 @@ from dataclasses import dataclass
from contextvars import ContextVar
from typing import Tuple, List, Union, Callable, Any, Awaitable, Iterable, Dict, Optional
from datetime import date
from prometheus_client import Gauge, Counter, Histogram
from lbry.utils import LockWithMetrics
-from .bip32 import PubKey
+from .bip32 import PublicKey
from .transaction import Transaction, Output, OutputScript, TXRefImmutable, Input
from .constants import TXO_TYPES, CLAIM_TYPES
from .util import date_to_julian_day
from concurrent.futures.thread import ThreadPoolExecutor  # pylint: disable=wrong-import-order

-if platform.system() == 'Windows' or 'ANDROID_ARGUMENT' or 'KIVY_BUILD' in os.environ:
+if platform.system() == 'Windows' or ({'ANDROID_ARGUMENT', 'KIVY_BUILD'} & os.environ.keys()):
    from concurrent.futures.thread import ThreadPoolExecutor as ReaderExecutorClass  # pylint: disable=reimported
else:
    from concurrent.futures.process import ProcessPoolExecutor as ReaderExecutorClass

@@ -82,10 +83,10 @@ class AIOSQLite:
        "read_count", "Number of database reads", namespace="daemon_database"
    )
    acquire_write_lock_metric = Histogram(
-        f'write_lock_acquired', 'Time to acquire the write lock', namespace="daemon_database", buckets=HISTOGRAM_BUCKETS
+        'write_lock_acquired', 'Time to acquire the write lock', namespace="daemon_database", buckets=HISTOGRAM_BUCKETS
    )
    held_write_lock_metric = Histogram(
-        f'write_lock_held', 'Length of time the write lock is held for', namespace="daemon_database",
+        'write_lock_held', 'Length of time the write lock is held for', namespace="daemon_database",
        buckets=HISTOGRAM_BUCKETS
    )

@@ -446,6 +447,10 @@ class SQLiteMixin:
        version = await self.db.execute_fetchone("SELECT version FROM version LIMIT 1;")
        if version == (self.SCHEMA_VERSION,):
            return
+        if version == ("1.5",) and self.SCHEMA_VERSION == "1.6":
+            await self.db.execute("ALTER TABLE txo ADD COLUMN has_source bool DEFAULT 1;")
+            await self.db.execute("UPDATE version SET version = ?", (self.SCHEMA_VERSION,))
+            return
        await self.db.executescript('\n'.join(
            f"DROP TABLE {table};" for table in tables
        ) + '\n' + 'PRAGMA WAL_CHECKPOINT(FULL);' + '\n' + 'VACUUM;')
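
The added branch upgrades an existing 1.5 wallet database in place, adding the has_source column and bumping the stored version, instead of falling through to the drop-and-recreate path. Roughly, and only as an illustration on a raw sqlite3 connection rather than through AIOSQLite, it amounts to:

import sqlite3

conn = sqlite3.connect("wallet.db")   # hypothetical path, illustration only
conn.execute("ALTER TABLE txo ADD COLUMN has_source bool DEFAULT 1;")
conn.execute("UPDATE version SET version = ?", ("1.6",))
conn.commit()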
@@ -502,7 +507,7 @@ def _get_spendable_utxos(transaction: sqlite3.Connection, accounts: List, decode
                          amount_to_reserve: int, reserved_amount: int, floor: int, ceiling: int,
                          fee_per_byte: int) -> int:
    accounts_fmt = ",".join(["?"] * len(accounts))
-    txo_query = f"""
+    txo_query = """
        SELECT tx.txid, txo.txoid, tx.raw, tx.height, txo.position as nout, tx.is_verified, txo.amount FROM txo
        INNER JOIN account_address USING (address)
        LEFT JOIN txi USING (txoid)

@@ -592,7 +597,7 @@ def get_and_reserve_spendable_utxos(transaction: sqlite3.Connection, accounts: L
class Database(SQLiteMixin):

-    SCHEMA_VERSION = "1.5"
+    SCHEMA_VERSION = "1.6"

    PRAGMAS = """
        pragma journal_mode=WAL;

@@ -646,6 +651,7 @@ class Database(SQLiteMixin):
            txo_type integer not null default 0,
            claim_id text,
            claim_name text,
+            has_source bool,
            channel_id text,
            reposted_claim_id text

@@ -690,7 +696,8 @@ class Database(SQLiteMixin):
            'address': txo.get_address(self.ledger),
            'position': txo.position,
            'amount': txo.amount,
-            'script': sqlite3.Binary(txo.script.source)
+            'script': sqlite3.Binary(txo.script.source),
+            'has_source': False,
        }
        if txo.is_claim:
            if txo.can_decode_claim:

@@ -698,8 +705,11 @@ class Database(SQLiteMixin):
                row['txo_type'] = TXO_TYPES.get(claim.claim_type, TXO_TYPES['stream'])
                if claim.is_repost:
                    row['reposted_claim_id'] = claim.repost.reference.claim_id
+                    row['has_source'] = True
                if claim.is_signed:
                    row['channel_id'] = claim.signing_channel_id
+                if claim.is_stream:
+                    row['has_source'] = claim.stream.has_source
            else:
                row['txo_type'] = TXO_TYPES['stream']
        elif txo.is_support:

@@ -760,9 +770,10 @@ class Database(SQLiteMixin):
                    conn.execute(*self._insert_sql(
                        "txo", self.txo_to_row(tx, txo), ignore_duplicate=True
                    )).fetchall()
-                elif txo.script.is_pay_script_hash:
-                    # TODO: implement script hash payments
-                    log.warning('Database.save_transaction_io: pay script hash is not implemented!')
+                elif txo.script.is_pay_script_hash and is_my_input:
+                    conn.execute(*self._insert_sql(
+                        "txo", self.txo_to_row(tx, txo), ignore_duplicate=True
+                    )).fetchall()

    def save_transaction_io(self, tx: Transaction, address, txhash, history):
        return self.save_transaction_io_batch([tx], address, txhash, history)

@@ -965,7 +976,9 @@ class Database(SQLiteMixin):
            sql.append("LEFT JOIN txi ON (txi.position=0 AND txi.txid=txo.txid)")
        return await self.db.execute_fetchall(*query(' '.join(sql), **constraints), read_only=read_only)

-    async def get_txos(self, wallet=None, no_tx=False, no_channel_info=False, read_only=False, **constraints):
+    async def get_txos(
+            self, wallet=None, no_tx=False, no_channel_info=False, read_only=False, **constraints
+    ) -> List[Output]:
        include_is_spent = constraints.get('include_is_spent', False)
        include_is_my_input = constraints.get('include_is_my_input', False)
        include_is_my_output = constraints.pop('include_is_my_output', False)

@@ -1015,7 +1028,7 @@ class Database(SQLiteMixin):
        if 'order_by' not in constraints or constraints['order_by'] == 'height':
            constraints['order_by'] = [
-                "tx.height=0 DESC", "tx.height DESC", "tx.position DESC", "txo.position"
+                "tx.height in (0, -1) DESC", "tx.height DESC", "tx.position DESC", "txo.position"
            ]
        elif constraints.get('order_by', None) == 'none':
            del constraints['order_by']

@@ -1142,6 +1155,41 @@ class Database(SQLiteMixin):
        )
        return balance[0]['total'] or 0

+    async def get_detailed_balance(self, accounts, read_only=False, **constraints):
+        constraints['accounts'] = accounts
+        result = (await self.select_txos(
+            f"COALESCE(SUM(amount), 0) AS total,"
+            f"COALESCE(SUM("
+            f"  CASE WHEN"
+            f"    txo_type NOT IN ({TXO_TYPES['other']}, {TXO_TYPES['purchase']})"
+            f"  THEN amount ELSE 0 END), 0) AS reserved,"
+            f"COALESCE(SUM("
+            f"  CASE WHEN"
+            f"    txo_type IN ({','.join(map(str, CLAIM_TYPES))})"
+            f"  THEN amount ELSE 0 END), 0) AS claims,"
+            f"COALESCE(SUM(CASE WHEN txo_type = {TXO_TYPES['support']} THEN amount ELSE 0 END), 0) AS supports,"
+            f"COALESCE(SUM("
+            f"  CASE WHEN"
+            f"    txo_type = {TXO_TYPES['support']} AND"
+            f"    TXI.address IS NOT NULL AND"
+            f"    TXI.address IN (SELECT address FROM account_address WHERE account = :$account__in0)"
+            f"  THEN amount ELSE 0 END), 0) AS my_supports",
+            is_spent=False,
+            include_is_my_input=True,
+            read_only=read_only,
+            **constraints
+        ))[0]
+        return {
+            "total": result["total"],
+            "available": result["total"] - result["reserved"],
+            "reserved": result["reserved"],
+            "reserved_subtotals": {
+                "claims": result["claims"],
+                "supports": result["my_supports"],
+                "tips": result["supports"] - result["my_supports"]
+            }
+        }

    async def select_addresses(self, cols, read_only=False, **constraints):
        return await self.db.execute_fetchall(*query(
            f"SELECT {cols} FROM pubkey_address JOIN account_address USING (address)",
@@ -1156,13 +1204,14 @@ class Database(SQLiteMixin):
        addresses = await self.select_addresses(', '.join(cols), read_only=read_only, **constraints)
        if 'pubkey' in cols:
            for address in addresses:
-                address['pubkey'] = PubKey(
+                address['pubkey'] = PublicKey(
                    self.ledger, address.pop('pubkey'), address.pop('chain_code'),
                    address.pop('n'), address.pop('depth')
                )
        return addresses

    async def get_address_count(self, cols=None, read_only=False, **constraints):
+        self._clean_txo_constraints_for_aggregation(constraints)
        count = await self.select_addresses('COUNT(*) as total', read_only=read_only, **constraints)
        return count[0]['total'] or 0

@@ -1196,6 +1245,18 @@ class Database(SQLiteMixin):
    async def set_address_history(self, address, history):
        await self._set_address_history(address, history)

+    async def is_channel_key_used(self, account, key: PublicKey):
+        channels = await self.get_txos(
+            accounts=[account], txo_type=TXO_TYPES['channel'],
+            no_tx=True, no_channel_info=True
+        )
+        other_key_bytes = key.pubkey_bytes
+        for channel in channels:
+            claim = channel.can_decode_claim
+            if claim and claim.channel.public_key_bytes == other_key_bytes:
+                return True
+        return False

    @staticmethod
    def constrain_purchases(constraints):
        accounts = constraints.pop('accounts', None)
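
is_channel_key_used walks the account's channel claims and compares public key bytes, letting callers avoid reusing an already-assigned channel key. A minimal hypothetical guard (db, account and candidate_key are assumed to exist):

async def next_unused_channel_key(db, account, candidate_key):
    # Hypothetical guard around the method added above.
    if await db.is_channel_key_used(account, candidate_key):
        return None          # caller would derive and test the next key instead
    return candidate_key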


@@ -16,18 +16,18 @@ from lbry.crypto.hash import hash160, double_sha256, sha256
from lbry.crypto.base58 import Base58
from lbry.utils import LRUCacheWithMetrics

-from .tasks import TaskGroup
-from .database import Database
-from .stream import StreamController
-from .dewies import dewies_to_lbc
-from .account import Account, AddressManager, SingleKey
-from .network import Network
-from .transaction import Transaction, Output
-from .header import Headers, UnvalidatedHeaders
-from .checkpoints import HASHES
-from .constants import TXO_TYPES, CLAIM_TYPES, COIN, NULL_HASH32
-from .bip32 import PubKey, PrivateKey
-from .coinselection import CoinSelector
+from lbry.wallet.tasks import TaskGroup
+from lbry.wallet.database import Database
+from lbry.wallet.stream import StreamController
+from lbry.wallet.dewies import dewies_to_lbc
+from lbry.wallet.account import Account, AddressManager, SingleKey
+from lbry.wallet.network import Network
+from lbry.wallet.transaction import Transaction, Output
+from lbry.wallet.header import Headers, UnvalidatedHeaders
+from lbry.wallet.checkpoints import HASHES
+from lbry.wallet.constants import TXO_TYPES, CLAIM_TYPES, COIN, NULL_HASH32
+from lbry.wallet.bip32 import PublicKey, PrivateKey
+from lbry.wallet.coinselection import CoinSelector

log = logging.getLogger(__name__)

@@ -106,7 +106,7 @@ class Ledger(metaclass=LedgerRegistry):
    target_timespan = 150

    default_fee_per_byte = 50
-    default_fee_per_name_char = 200000
+    default_fee_per_name_char = 0

    checkpoints = HASHES

@@ -178,15 +178,25 @@ class Ledger(metaclass=LedgerRegistry):
        raw_address = cls.pubkey_address_prefix + h160
        return Base58.encode(bytearray(raw_address + double_sha256(raw_address)[0:4]))

+    @classmethod
+    def hash160_to_script_address(cls, h160):
+        raw_address = cls.script_address_prefix + h160
+        return Base58.encode(bytearray(raw_address + double_sha256(raw_address)[0:4]))

    @staticmethod
    def address_to_hash160(address):
        return Base58.decode(address)[1:21]

    @classmethod
-    def is_valid_address(cls, address):
+    def is_pubkey_address(cls, address):
        decoded = Base58.decode_check(address)
        return decoded[0] == cls.pubkey_address_prefix[0]

+    @classmethod
+    def is_script_address(cls, address):
+        decoded = Base58.decode_check(address)
+        return decoded[0] == cls.script_address_prefix[0]

    @classmethod
    def public_key_to_address(cls, public_key):
        return cls.hash160_to_address(hash160(public_key))

@@ -216,7 +226,7 @@ class Ledger(metaclass=LedgerRegistry):
            return account.get_private_key(address_info['chain'], address_info['pubkey'].n)
        return None

-    async def get_public_key_for_address(self, wallet, address) -> Optional[PubKey]:
+    async def get_public_key_for_address(self, wallet, address) -> Optional[PublicKey]:
        match = await self._get_account_and_address_info_for_address(wallet, address)
        if match:
            _, address_info = match

@@ -319,10 +329,10 @@ class Ledger(metaclass=LedgerRegistry):
    async def start(self):
        if not os.path.exists(self.path):
            os.mkdir(self.path)
-        await asyncio.wait([
+        await asyncio.wait(map(asyncio.create_task, [
            self.db.open(),
            self.headers.open()
-        ])
+        ]))
        fully_synced = self.on_ready.first
        asyncio.create_task(self.network.start())
        await self.network.on_connected.first
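
The asyncio.wait() changes wrap each coroutine in a task first, because handing bare coroutines to asyncio.wait() has been deprecated since Python 3.8 and is rejected by newer releases. A self-contained sketch of the same pattern, with placeholder coroutines standing in for db.open() and headers.open():

import asyncio

async def open_db():          # placeholder for self.db.open()
    await asyncio.sleep(0)

async def open_headers():     # placeholder for self.headers.open()
    await asyncio.sleep(0)

async def start():
    # wrap the coroutines in tasks before waiting on them
    await asyncio.wait(map(asyncio.create_task, [open_db(), open_headers()]))

asyncio.run(start())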
@@ -355,6 +365,10 @@ class Ledger(metaclass=LedgerRegistry):
        await self.db.close()
        await self.headers.close()

+    async def tasks_are_done(self):
+        await self._update_tasks.done.wait()
+        await self._other_tasks.done.wait()

    @property
    def local_height_including_downloaded_height(self):
        return max(self.headers.height, self._download_height)

@@ -452,14 +466,15 @@ class Ledger(metaclass=LedgerRegistry):
    async def subscribe_accounts(self):
        if self.network.is_connected and self.accounts:
            log.info("Subscribe to %i accounts", len(self.accounts))
-            await asyncio.wait([
+            await asyncio.wait(map(asyncio.create_task, [
                self.subscribe_account(a) for a in self.accounts
-            ])
+            ]))

    async def subscribe_account(self, account: Account):
        for address_manager in account.address_managers.values():
            await self.subscribe_addresses(address_manager, await address_manager.get_addresses())
        await account.ensure_address_gap()
+        await account.deterministic_channel_keys.ensure_cache_primed()

    async def unsubscribe_account(self, account: Account):
        for address in await account.get_addresses():

@@ -538,15 +553,16 @@ class Ledger(metaclass=LedgerRegistry):
            "request %i transactions, %i/%i for %s are already synced", len(to_request), len(already_synced),
            len(remote_history), address
        )
-        remote_history_txids = set(txid for txid, _ in remote_history)
+        remote_history_txids = {txid for txid, _ in remote_history}
        async for tx in self.request_synced_transactions(to_request, remote_history_txids, address):
+            self.maybe_has_channel_key(tx)
            pending_synced_history[tx_indexes[tx.id]] = f"{tx.id}:{tx.height}:"
            if len(pending_synced_history) % 100 == 0:
                log.info("Syncing address %s: %d/%d", address, len(pending_synced_history), len(to_request))
        log.info("Sync finished for address %s: %d/%d", address, len(pending_synced_history), len(to_request))

        assert len(pending_synced_history) == len(remote_history), \
-            f"{len(pending_synced_history)} vs {len(remote_history)}"
+            f"{len(pending_synced_history)} vs {len(remote_history)} for {address}"
        synced_history = ""
        for remote_i, i in zip(range(len(remote_history)), sorted(pending_synced_history.keys())):
            assert i == remote_i, f"{i} vs {remote_i}"

@@ -607,6 +623,12 @@ class Ledger(metaclass=LedgerRegistry):
            tx.is_verified = merkle_root == header['merkle_root']
        return tx

+    def maybe_has_channel_key(self, tx):
+        for txo in tx._outputs:
+            if txo.can_decode_claim and txo.claim.is_channel:
+                for account in self.accounts:
+                    account.deterministic_channel_keys.maybe_generate_deterministic_key_for_channel(txo)

    async def request_transactions(self, to_request: Tuple[Tuple[str, int], ...], cached=False):
        batches = [[]]
        remote_heights = {}

@@ -700,6 +722,15 @@ class Ledger(metaclass=LedgerRegistry):
                return account.address_managers[details['chain']]
        return None

+    async def broadcast_or_release(self, tx, blocking=False):
+        try:
+            await self.broadcast(tx)
+        except:
+            await self.release_tx(tx)
+            raise
+        if blocking:
+            await self.wait(tx, timeout=None)

    def broadcast(self, tx):
        # broadcast can't be a retriable call yet
        return self.network.broadcast(hexlify(tx.raw).decode())
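
broadcast_or_release pairs the broadcast attempt with releasing the transaction's reserved outputs on failure, then optionally waits for the transaction to show up locally. A hypothetical call site (ledger, tx and log are assumed to exist):

async def publish(ledger, tx, log):
    # Hypothetical wrapper around the method added above.
    try:
        await ledger.broadcast_or_release(tx)
    except Exception:
        log.exception("broadcast of %s failed; its reserved outputs were released", tx.id)
        raise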
@ -713,13 +744,15 @@ class Ledger(metaclass=LedgerRegistry):
self.hash160_to_address(txi.txo_ref.txo.pubkey_hash) self.hash160_to_address(txi.txo_ref.txo.pubkey_hash)
) )
for txo in tx.outputs: for txo in tx.outputs:
if txo.has_address: if txo.is_pubkey_hash:
addresses.add(self.hash160_to_address(txo.pubkey_hash)) addresses.add(self.hash160_to_address(txo.pubkey_hash))
elif txo.is_script_hash:
addresses.add(self.hash160_to_script_address(txo.script_hash))
start = int(time.perf_counter()) start = int(time.perf_counter())
while timeout and (int(time.perf_counter()) - start) <= timeout: while timeout and (int(time.perf_counter()) - start) <= timeout:
if await self._wait_round(tx, height, addresses): if await self._wait_round(tx, height, addresses):
return return
raise asyncio.TimeoutError('Timed out waiting for transaction.') raise asyncio.TimeoutError(f'Timed out waiting for transaction. {tx.id}')
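Ledger.wait() above polls local address history in rounds until the transaction shows up or the timeout elapses, then raises asyncio.TimeoutError with the transaction id. A stripped-down sketch of that deadline loop (names and the sleep interval are illustrative, not the SDK's API):

import asyncio
import time

async def wait_until(check, timeout: float, interval: float = 1.0):
    # Re-run an async predicate until it returns True or the deadline passes.
    start = time.perf_counter()
    while timeout and (time.perf_counter() - start) <= timeout:
        if await check():
            return
        await asyncio.sleep(interval)
    raise asyncio.TimeoutError('Timed out waiting for condition.')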
async def _wait_round(self, tx: Transaction, height: int, addresses: Iterable[str]): async def _wait_round(self, tx: Transaction, height: int, addresses: Iterable[str]):
records = await self.db.get_addresses(address__in=addresses) records = await self.db.get_addresses(address__in=addresses)
@ -738,7 +771,7 @@ class Ledger(metaclass=LedgerRegistry):
))[1] if record['history'] else [] ))[1] if record['history'] else []
for txid, local_height in local_history: for txid, local_height in local_history:
if txid == tx.id: if txid == tx.id:
if local_height >= height: if local_height >= height or (local_height == 0 and height > local_height):
return True return True
log.warning( log.warning(
"local history has higher height than remote for %s (%i vs %i)", txid, "local history has higher height than remote for %s (%i vs %i)", txid,
@ -758,7 +791,7 @@ class Ledger(metaclass=LedgerRegistry):
include_sent_tips=False, include_sent_tips=False,
include_received_tips=False) -> Tuple[List[Output], dict, int, int]: include_received_tips=False) -> Tuple[List[Output], dict, int, int]:
encoded_outputs = await query encoded_outputs = await query
outputs = Outputs.from_base64(encoded_outputs or b'') # TODO: why is the server returning None? outputs = Outputs.from_base64(encoded_outputs or '') # TODO: why is the server returning None?
txs: List[Transaction] = [] txs: List[Transaction] = []
if len(outputs.txs) > 0: if len(outputs.txs) > 0:
async for tx in self.request_transactions(tuple(outputs.txs), cached=True): async for tx in self.request_transactions(tuple(outputs.txs), cached=True):
@ -834,13 +867,10 @@ class Ledger(metaclass=LedgerRegistry):
txo.received_tips = tips txo.received_tips = tips
return txos, blocked, outputs.offset, outputs.total return txos, blocked, outputs.offset, outputs.total
async def resolve(self, accounts, urls, new_sdk_server=None, **kwargs): async def resolve(self, accounts, urls, **kwargs):
txos = [] txos = []
urls_copy = list(urls) urls_copy = list(urls)
if new_sdk_server: resolve = partial(self.network.retriable_call, self.network.resolve)
resolve = partial(self.network.new_resolve, new_sdk_server)
else:
resolve = partial(self.network.retriable_call, self.network.resolve)
while urls_copy: while urls_copy:
batch, urls_copy = urls_copy[:100], urls_copy[100:] batch, urls_copy = urls_copy[:100], urls_copy[100:]
txos.extend( txos.extend(
@ -865,21 +895,31 @@ class Ledger(metaclass=LedgerRegistry):
return await self.network.sum_supports(new_sdk_server, **kwargs) return await self.network.sum_supports(new_sdk_server, **kwargs)
async def claim_search( async def claim_search(
self, accounts, include_purchase_receipt=False, include_is_my_output=False, self, accounts,
new_sdk_server=None, **kwargs) -> Tuple[List[Output], dict, int, int]: include_purchase_receipt=False,
if new_sdk_server: include_is_my_output=False,
claim_search = partial(self.network.new_claim_search, new_sdk_server) **kwargs) -> Tuple[List[Output], dict, int, int]:
else:
claim_search = self.network.claim_search
return await self._inflate_outputs( return await self._inflate_outputs(
claim_search(**kwargs), accounts, self.network.claim_search(**kwargs), accounts,
include_purchase_receipt=include_purchase_receipt,
include_is_my_output=include_is_my_output
)
# async def get_claim_by_claim_id(self, accounts, claim_id, **kwargs) -> Output:
# return await self.network.get_claim_by_id(claim_id)
async def get_claim_by_claim_id(self, claim_id, accounts=None, include_purchase_receipt=False,
include_is_my_output=False):
accounts = accounts or []
# return await self.network.get_claim_by_id(claim_id)
inflated = await self._inflate_outputs(
self.network.get_claim_by_id(claim_id), accounts,
include_purchase_receipt=include_purchase_receipt, include_purchase_receipt=include_purchase_receipt,
include_is_my_output=include_is_my_output, include_is_my_output=include_is_my_output,
) )
txos = inflated[0]
async def get_claim_by_claim_id(self, accounts, claim_id, **kwargs) -> Output: if txos:
for claim in (await self.claim_search(accounts, claim_id=claim_id, **kwargs))[0]: return txos[0]
return claim
async def _report_state(self): async def _report_state(self):
try: try:
@ -898,9 +938,7 @@ class Ledger(metaclass=LedgerRegistry):
"%d change addresses (gap: %d), %d channels, %d certificates and %d claims. ", "%d change addresses (gap: %d), %d channels, %d certificates and %d claims. ",
account.id, balance, total_receiving, account.receiving.gap, total_change, account.id, balance, total_receiving, account.receiving.gap, total_change,
account.change.gap, channel_count, len(account.channel_keys), claim_count) account.change.gap, channel_count, len(account.channel_keys), claim_count)
except Exception as err: except Exception:
if isinstance(err, asyncio.CancelledError): # TODO: remove when updated to 3.8
raise
log.exception( log.exception(
'Failed to display wallet state, please file issue ' 'Failed to display wallet state, please file issue '
'for this bug along with the traceback you see below:') 'for this bug along with the traceback you see below:')
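Several hunks here drop the `isinstance(err, asyncio.CancelledError)` re-raise guard: since Python 3.8, CancelledError derives from BaseException rather than Exception, so a plain `except Exception` no longer swallows cancellation. A small self-contained demonstration of that behaviour:

import asyncio

async def worker():
    try:
        await asyncio.sleep(60)
    except Exception:
        # On Python >= 3.8 this handler does not catch CancelledError,
        # so no special-case re-raise is needed.
        pass

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.1)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("cancellation reached the caller as expected")

asyncio.run(main())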
@ -923,9 +961,7 @@ class Ledger(metaclass=LedgerRegistry):
claim_ids = [p.purchased_claim_id for p in purchases] claim_ids = [p.purchased_claim_id for p in purchases]
try: try:
resolved, _, _, _ = await self.claim_search([], claim_ids=claim_ids) resolved, _, _, _ = await self.claim_search([], claim_ids=claim_ids)
except Exception as err: except Exception:
if isinstance(err, asyncio.CancelledError): # TODO: remove when updated to 3.8
raise
log.exception("Resolve failed while looking up purchased claim ids:") log.exception("Resolve failed while looking up purchased claim ids:")
resolved = [] resolved = []
lookup = {claim.claim_id: claim for claim in resolved} lookup = {claim.claim_id: claim for claim in resolved}
@ -1005,9 +1041,7 @@ class Ledger(metaclass=LedgerRegistry):
claim_ids = collection.claim.collection.claims.ids[offset:page_size + offset] claim_ids = collection.claim.collection.claims.ids[offset:page_size + offset]
try: try:
resolve_results, _, _, _ = await self.claim_search([], claim_ids=claim_ids) resolve_results, _, _, _ = await self.claim_search([], claim_ids=claim_ids)
except Exception as err: except Exception:
if isinstance(err, asyncio.CancelledError): # TODO: remove when updated to 3.8
raise
log.exception("Resolve failed while looking up collection claim ids:") log.exception("Resolve failed while looking up collection claim ids:")
return [] return []
claims = [] claims = []
@ -1022,8 +1056,10 @@ class Ledger(metaclass=LedgerRegistry):
claims.append(None) claims.append(None)
return claims return claims
async def get_collections(self, resolve_claims=0, **constraints): async def get_collections(self, resolve_claims=0, resolve=False, **constraints):
collections = await self.db.get_collections(**constraints) collections = await self.db.get_collections(**constraints)
if resolve:
collections = await self._resolve_for_local_results(constraints.get('accounts', []), collections)
if resolve_claims > 0: if resolve_claims > 0:
for collection in collections: for collection in collections:
collection.claims = await self.resolve_collection(collection, page_size=resolve_claims) collection.claims = await self.resolve_collection(collection, page_size=resolve_claims)
@ -1058,7 +1094,7 @@ class Ledger(metaclass=LedgerRegistry):
'abandon_info': [], 'abandon_info': [],
'purchase_info': [] 'purchase_info': []
} }
is_my_inputs = all([txi.is_my_input for txi in tx.inputs]) is_my_inputs = all(txi.is_my_input for txi in tx.inputs)
if is_my_inputs: if is_my_inputs:
# fees only matter if we are the ones paying them # fees only matter if we are the ones paying them
item['value'] = dewies_to_lbc(tx.net_account_balance + tx.fee) item['value'] = dewies_to_lbc(tx.net_account_balance + tx.fee)
@ -1169,7 +1205,7 @@ class Ledger(metaclass=LedgerRegistry):
balance = self._balance_cache.get(account.id) balance = self._balance_cache.get(account.id)
if not balance: if not balance:
balance = self._balance_cache[account.id] = \ balance = self._balance_cache[account.id] = \
await account.get_detailed_balance(confirmations, reserved_subtotals=True) await account.get_detailed_balance(confirmations)
for key, value in balance.items(): for key, value in balance.items():
if key == 'reserved_subtotals': if key == 'reserved_subtotals':
for subkey, subvalue in value.items(): for subkey, subvalue in value.items():
@ -1178,6 +1214,7 @@ class Ledger(metaclass=LedgerRegistry):
result[key] += value result[key] += value
return result return result
class TestNetLedger(Ledger): class TestNetLedger(Ledger):
network_name = 'testnet' network_name = 'testnet'
pubkey_address_prefix = bytes((111,)) pubkey_address_prefix = bytes((111,))
@ -1186,6 +1223,7 @@ class TestNetLedger(Ledger):
extended_private_key_prefix = unhexlify('04358394') extended_private_key_prefix = unhexlify('04358394')
checkpoints = {} checkpoints = {}
class RegTestLedger(Ledger): class RegTestLedger(Ledger):
network_name = 'regtest' network_name = 'regtest'
headers_class = UnvalidatedHeaders headers_class = UnvalidatedHeaders


@ -3,20 +3,21 @@ import json
import typing import typing
import logging import logging
import asyncio import asyncio
from binascii import unhexlify from binascii import unhexlify
from decimal import Decimal from decimal import Decimal
from typing import List, Type, MutableSequence, MutableMapping, Optional from typing import List, Type, MutableSequence, MutableMapping, Optional
from lbry.error import KeyFeeAboveMaxAllowedError from lbry.error import KeyFeeAboveMaxAllowedError, WalletNotLoadedError
from lbry.conf import Config from lbry.conf import Config, NOT_SET
from .dewies import dewies_to_lbc from lbry.wallet.dewies import dewies_to_lbc
from .account import Account from lbry.wallet.account import Account
from .ledger import Ledger, LedgerRegistry from lbry.wallet.ledger import Ledger, LedgerRegistry
from .transaction import Transaction, Output from lbry.wallet.transaction import Transaction, Output
from .database import Database from lbry.wallet.database import Database
from .wallet import Wallet, WalletStorage, ENCRYPT_ON_DISK from lbry.wallet.wallet import Wallet, WalletStorage, ENCRYPT_ON_DISK
from .rpc.jsonrpc import CodeMessageError from lbry.wallet.rpc.jsonrpc import CodeMessageError
if typing.TYPE_CHECKING: if typing.TYPE_CHECKING:
from lbry.extras.daemon.exchange_rate_manager import ExchangeRateManager from lbry.extras.daemon.exchange_rate_manager import ExchangeRateManager
@ -95,7 +96,7 @@ class WalletManager:
for wallet in self.wallets: for wallet in self.wallets:
if wallet.id == wallet_id: if wallet.id == wallet_id:
return wallet return wallet
raise ValueError(f"Couldn't find wallet: {wallet_id}.") raise WalletNotLoadedError(wallet_id)
@staticmethod @staticmethod
def get_balance(wallet): def get_balance(wallet):
@ -182,10 +183,17 @@ class WalletManager:
ledger_config = { ledger_config = {
'auto_connect': True, 'auto_connect': True,
'explicit_servers': [],
'hub_timeout': config.hub_timeout,
'default_servers': config.lbryum_servers, 'default_servers': config.lbryum_servers,
'known_hubs': config.known_hubs,
'jurisdiction': config.jurisdiction,
'concurrent_hub_requests': config.concurrent_hub_requests,
'data_path': config.wallet_dir, 'data_path': config.wallet_dir,
'tx_cache_size': config.transaction_cache_size 'tx_cache_size': config.transaction_cache_size
} }
if 'LBRY_FEE_PER_NAME_CHAR' in os.environ:
ledger_config['fee_per_name_char'] = int(os.environ.get('LBRY_FEE_PER_NAME_CHAR'))
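The new LBRY_FEE_PER_NAME_CHAR branch lets an environment variable override the ledger's fee per name character. A tiny sketch of the same idea as a standalone helper (the default value here is only illustrative):

import os

def fee_per_name_char(default: int = 200000) -> int:
    # Prefer the environment override when present, otherwise fall back to the default.
    raw = os.environ.get('LBRY_FEE_PER_NAME_CHAR')
    return int(raw) if raw else default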
wallets_directory = os.path.join(config.wallet_dir, 'wallets') wallets_directory = os.path.join(config.wallet_dir, 'wallets')
if not os.path.exists(wallets_directory): if not os.path.exists(wallets_directory):
@ -195,6 +203,10 @@ class WalletManager:
os.path.join(wallets_directory, 'default_wallet') os.path.join(wallets_directory, 'default_wallet')
) )
if Config.lbryum_servers.is_set_to_default(config):
with config.update_config() as c:
c.lbryum_servers = NOT_SET
manager = cls.from_config({ manager = cls.from_config({
'ledgers': {ledger_id: ledger_config}, 'ledgers': {ledger_id: ledger_config},
'wallets': [ 'wallets': [
@ -225,9 +237,16 @@ class WalletManager:
async def reset(self): async def reset(self):
self.ledger.config = { self.ledger.config = {
'auto_connect': True, 'auto_connect': True,
'default_servers': self.config.lbryum_servers, 'explicit_servers': [],
'default_servers': Config.lbryum_servers.default,
'known_hubs': self.config.known_hubs,
'jurisdiction': self.config.jurisdiction,
'hub_timeout': self.config.hub_timeout,
'concurrent_hub_requests': self.config.concurrent_hub_requests,
'data_path': self.config.wallet_dir, 'data_path': self.config.wallet_dir,
} }
if Config.lbryum_servers.is_set(self.config):
self.ledger.config['explicit_servers'] = self.config.lbryum_servers
await self.ledger.stop() await self.ledger.stop()
await self.ledger.start() await self.ledger.start()
@ -298,10 +317,4 @@ class WalletManager:
) )
async def broadcast_or_release(self, tx, blocking=False): async def broadcast_or_release(self, tx, blocking=False):
try: await self.ledger.broadcast_or_release(tx, blocking=blocking)
await self.ledger.broadcast(tx)
except:
await self.ledger.release_tx(tx)
raise
if blocking:
await self.ledger.wait(tx, timeout=None)


@ -2,6 +2,7 @@ import logging
import asyncio import asyncio
import json import json
import socket import socket
import random
from time import perf_counter from time import perf_counter
from collections import defaultdict from collections import defaultdict
from typing import Dict, Optional, Tuple from typing import Dict, Optional, Tuple
@ -12,13 +13,14 @@ from lbry.utils import resolve_host
from lbry.error import IncompatibleWalletServerError from lbry.error import IncompatibleWalletServerError
from lbry.wallet.rpc import RPCSession as BaseClientSession, Connector, RPCError, ProtocolError from lbry.wallet.rpc import RPCSession as BaseClientSession, Connector, RPCError, ProtocolError
from lbry.wallet.stream import StreamController from lbry.wallet.stream import StreamController
from lbry.wallet.server.udp import SPVStatusClientProtocol, SPVPong from lbry.wallet.udp import SPVStatusClientProtocol, SPVPong
from lbry.conf import KnownHubsList
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
class ClientSession(BaseClientSession): class ClientSession(BaseClientSession):
def __init__(self, *args, network: 'Network', server, timeout=30, **kwargs): def __init__(self, *args, network: 'Network', server, timeout=30, concurrency=32, **kwargs):
self.network = network self.network = network
self.server = server self.server = server
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
@ -28,7 +30,11 @@ class ClientSession(BaseClientSession):
self.response_time: Optional[float] = None self.response_time: Optional[float] = None
self.connection_latency: Optional[float] = None self.connection_latency: Optional[float] = None
self._response_samples = 0 self._response_samples = 0
self.pending_amount = 0 self._concurrency = asyncio.Semaphore(concurrency)
@property
def concurrency(self):
return self._concurrency._value
@property @property
def available(self): def available(self):
@ -54,9 +60,9 @@ class ClientSession(BaseClientSession):
return result return result
async def send_request(self, method, args=()): async def send_request(self, method, args=()):
self.pending_amount += 1
log.debug("send %s%s to %s:%i (%i timeout)", method, tuple(args), self.server[0], self.server[1], self.timeout) log.debug("send %s%s to %s:%i (%i timeout)", method, tuple(args), self.server[0], self.server[1], self.timeout)
try: try:
await self._concurrency.acquire()
if method == 'server.version': if method == 'server.version':
return await self.send_timed_server_version_request(args, self.timeout) return await self.send_timed_server_version_request(args, self.timeout)
request = asyncio.ensure_future(super().send_request(method, args)) request = asyncio.ensure_future(super().send_request(method, args))
@ -90,7 +96,7 @@ class ClientSession(BaseClientSession):
# self.synchronous_close() # self.synchronous_close()
raise raise
finally: finally:
self.pending_amount -= 1 self._concurrency.release()
async def ensure_server_version(self, required=None, timeout=3): async def ensure_server_version(self, required=None, timeout=3):
required = required or self.network.PROTOCOL_VERSION required = required or self.network.PROTOCOL_VERSION
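The ClientSession changes above replace the pending_amount counter with an asyncio.Semaphore, so each hub session caps how many requests are in flight and releases the slot in a finally block. A minimal sketch of that throttling pattern, with illustrative names:

import asyncio

class ThrottledSession:
    def __init__(self, send_request, concurrency: int = 32):
        # `send_request` stands in for the coroutine that performs the actual RPC.
        self._send_request = send_request
        self._semaphore = asyncio.Semaphore(concurrency)

    async def request(self, method: str, args=()):
        async with self._semaphore:  # acquired here, released even if the call raises
            return await self._send_request(method, args)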
@ -111,9 +117,9 @@ class ClientSession(BaseClientSession):
) )
else: else:
await asyncio.sleep(max(0, max_idle - (now - self.last_send))) await asyncio.sleep(max(0, max_idle - (now - self.last_send)))
except Exception as err: except (Exception, asyncio.CancelledError) as err:
if isinstance(err, asyncio.CancelledError): if isinstance(err, asyncio.CancelledError):
log.warning("closing connection to %s:%i", *self.server) log.info("closing connection to %s:%i", *self.server)
else: else:
log.exception("lost connection to spv") log.exception("lost connection to spv")
finally: finally:
@ -131,7 +137,7 @@ class ClientSession(BaseClientSession):
controller.add(request.args) controller.add(request.args)
def connection_lost(self, exc): def connection_lost(self, exc):
log.warning("Connection lost: %s:%d", *self.server) log.debug("Connection lost: %s:%d", *self.server)
super().connection_lost(exc) super().connection_lost(exc)
self.response_time = None self.response_time = None
self.connection_latency = None self.connection_latency = None
@ -153,7 +159,6 @@ class Network:
# self._switch_task: Optional[asyncio.Task] = None # self._switch_task: Optional[asyncio.Task] = None
self.running = False self.running = False
self.remote_height: int = 0 self.remote_height: int = 0
self._concurrency = asyncio.Semaphore(16)
self._on_connected_controller = StreamController() self._on_connected_controller = StreamController()
self.on_connected = self._on_connected_controller.stream self.on_connected = self._on_connected_controller.stream
@ -164,9 +169,13 @@ class Network:
self._on_status_controller = StreamController(merge_repeated_events=True) self._on_status_controller = StreamController(merge_repeated_events=True)
self.on_status = self._on_status_controller.stream self.on_status = self._on_status_controller.stream
self._on_hub_controller = StreamController(merge_repeated_events=True)
self.on_hub = self._on_hub_controller.stream
self.subscription_controllers = { self.subscription_controllers = {
'blockchain.headers.subscribe': self._on_header_controller, 'blockchain.headers.subscribe': self._on_header_controller,
'blockchain.address.subscribe': self._on_status_controller, 'blockchain.address.subscribe': self._on_status_controller,
'blockchain.peers.subscribe': self._on_hub_controller,
} }
self.aiohttp_session: Optional[aiohttp.ClientSession] = None self.aiohttp_session: Optional[aiohttp.ClientSession] = None
@ -178,6 +187,16 @@ class Network:
def config(self): def config(self):
return self.ledger.config return self.ledger.config
@property
def known_hubs(self):
if 'known_hubs' not in self.config:
return KnownHubsList()
return self.config['known_hubs']
@property
def jurisdiction(self):
return self.config.get("jurisdiction")
def disconnect(self): def disconnect(self):
if self._keepalive_task and not self._keepalive_task.done(): if self._keepalive_task and not self._keepalive_task.done():
self._keepalive_task.cancel() self._keepalive_task.cancel()
@ -188,13 +207,14 @@ class Network:
self.running = True self.running = True
self.aiohttp_session = aiohttp.ClientSession() self.aiohttp_session = aiohttp.ClientSession()
self.on_header.listen(self._update_remote_height) self.on_header.listen(self._update_remote_height)
self.on_hub.listen(self._update_hubs)
self._loop_task = asyncio.create_task(self.network_loop()) self._loop_task = asyncio.create_task(self.network_loop())
self._urgent_need_reconnect.set() self._urgent_need_reconnect.set()
def loop_task_done_callback(f): def loop_task_done_callback(f):
try: try:
f.result() f.result()
except Exception: except (Exception, asyncio.CancelledError):
if self.running: if self.running:
log.exception("wallet server connection loop crashed") log.exception("wallet server connection loop crashed")
@ -215,18 +235,25 @@ class Network:
log.exception("error looking up dns for spv server %s:%i", server, port) log.exception("error looking up dns for spv server %s:%i", server, port)
# accumulate the dns results # accumulate the dns results
await asyncio.gather(*(resolve_spv(server, port) for (server, port) in self.config['default_servers'])) if self.config.get('explicit_servers', []):
hubs = self.config['explicit_servers']
elif self.known_hubs:
hubs = self.known_hubs
else:
hubs = self.config['default_servers']
await asyncio.gather(*(resolve_spv(server, port) for (server, port) in hubs))
return hostname_to_ip, ip_to_hostnames return hostname_to_ip, ip_to_hostnames
async def get_n_fastest_spvs(self, n=5, timeout=3.0) -> Dict[Tuple[str, int], SPVPong]: async def get_n_fastest_spvs(self, timeout=3.0) -> Dict[Tuple[str, int], Optional[SPVPong]]:
loop = asyncio.get_event_loop() loop = asyncio.get_event_loop()
pong_responses = asyncio.Queue() pong_responses = asyncio.Queue()
connection = SPVStatusClientProtocol(pong_responses) connection = SPVStatusClientProtocol(pong_responses)
sent_ping_timestamps = {} sent_ping_timestamps = {}
_, ip_to_hostnames = await self.resolve_spv_dns() _, ip_to_hostnames = await self.resolve_spv_dns()
log.info("%i possible spv servers to try (%i urls in config)", len(ip_to_hostnames), n = len(ip_to_hostnames)
len(self.config['default_servers'])) log.info("%i possible spv servers to try (%i urls in config)", n, len(self.config.get('explicit_servers', [])))
pongs = {} pongs = {}
known_hubs = self.known_hubs
try: try:
await loop.create_datagram_endpoint(lambda: connection, ('0.0.0.0', 0)) await loop.create_datagram_endpoint(lambda: connection, ('0.0.0.0', 0))
# could raise OSError if it can't bind # could raise OSError if it can't bind
@ -241,28 +268,39 @@ class Network:
'/'.join(ip_to_hostnames[remote]), remote[1], round(latency * 1000, 2), '/'.join(ip_to_hostnames[remote]), remote[1], round(latency * 1000, 2),
pong.available, pong.height) pong.available, pong.height)
known_hubs.hubs.setdefault((ip_to_hostnames[remote][0], remote[1]), {}).update(
{"country": pong.country_name}
)
if pong.available: if pong.available:
pongs[remote] = pong pongs[(ip_to_hostnames[remote][0], remote[1])] = pong
return pongs return pongs
except asyncio.TimeoutError: except asyncio.TimeoutError:
if pongs: if pongs:
log.info("%i/%i probed spv servers are accepting connections", len(pongs), len(ip_to_hostnames)) log.info("%i/%i probed spv servers are accepting connections", len(pongs), len(ip_to_hostnames))
return pongs
else: else:
log.warning("%i spv status probes failed, retrying later. servers tried: %s", log.warning("%i spv status probes failed, retrying later. servers tried: %s",
len(sent_ping_timestamps), len(sent_ping_timestamps),
', '.join('/'.join(hosts) + f' ({ip})' for ip, hosts in ip_to_hostnames.items())) ', '.join('/'.join(hosts) + f' ({ip})' for ip, hosts in ip_to_hostnames.items()))
return pongs random_server = random.choice(list(ip_to_hostnames.keys()))
host, port = random_server
log.warning("trying fallback to randomly selected spv: %s:%i", host, port)
known_hubs.hubs.setdefault((host, port), {})
return {(host, port): None}
finally: finally:
connection.close() connection.close()
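get_n_fastest_spvs() now returns every responsive server keyed by hostname, records the country reported in each pong, and, if nothing answered the UDP ping, falls back to a single randomly chosen known server with no pong attached; connect_to_fastest() then skips pongs outside the configured jurisdiction. A condensed sketch of that selection logic (function and parameter names are illustrative):

import random

def pick_spv_targets(pongs: dict, known_servers, jurisdiction=None):
    # `pongs` maps (host, port) -> an SPVPong-like object with a country_name attribute.
    if pongs:
        if jurisdiction is not None:
            in_region = {addr: pong for addr, pong in pongs.items()
                         if pong.country_name == jurisdiction}
            if in_region:
                return in_region
        return pongs
    # Nobody answered the ping: try one random known server anyway.
    host, port = random.choice(list(known_servers))
    return {(host, port): None}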
async def connect_to_fastest(self) -> Optional[ClientSession]: async def connect_to_fastest(self) -> Optional[ClientSession]:
fastest_spvs = await self.get_n_fastest_spvs() fastest_spvs = await self.get_n_fastest_spvs()
for (host, port) in fastest_spvs: for (host, port), pong in fastest_spvs.items():
if (pong is not None and self.jurisdiction is not None) and \
client = ClientSession(network=self, server=(host, port)) (pong.country_name != self.jurisdiction):
continue
client = ClientSession(network=self, server=(host, port), timeout=self.config.get('hub_timeout', 30),
concurrency=self.config.get('concurrent_hub_requests', 30))
try: try:
await client.create_connection() await client.create_connection()
log.warning("Connected to spv server %s:%i", host, port) log.info("Connected to spv server %s:%i", host, port)
await client.ensure_server_version() await client.ensure_server_version()
return client return client
except (asyncio.TimeoutError, ConnectionError, OSError, IncompatibleWalletServerError, RPCError): except (asyncio.TimeoutError, ConnectionError, OSError, IncompatibleWalletServerError, RPCError):
@ -274,7 +312,8 @@ class Network:
sleep_delay = 30 sleep_delay = 30
while self.running: while self.running:
await asyncio.wait( await asyncio.wait(
[asyncio.sleep(30), self._urgent_need_reconnect.wait()], return_when=asyncio.FIRST_COMPLETED map(asyncio.create_task, [asyncio.sleep(30), self._urgent_need_reconnect.wait()]),
return_when=asyncio.FIRST_COMPLETED
) )
if self._urgent_need_reconnect.is_set(): if self._urgent_need_reconnect.is_set():
sleep_delay = 30 sleep_delay = 30
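This hunk fixes the DeprecationWarning called out in the commit list: passing bare coroutines to asyncio.wait() has been deprecated since Python 3.8 and is rejected by newer releases, so each awaitable is wrapped in a Task first. A runnable illustration of the corrected call:

import asyncio

async def main():
    urgent = asyncio.Event()
    asyncio.get_running_loop().call_later(0.1, urgent.set)  # pretend a reconnect becomes urgent
    done, pending = await asyncio.wait(
        [asyncio.create_task(asyncio.sleep(30)),
         asyncio.create_task(urgent.wait())],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()

asyncio.run(main())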
@ -289,6 +328,8 @@ class Network:
log.debug("get spv server features %s:%i", *client.server) log.debug("get spv server features %s:%i", *client.server)
features = await client.send_request('server.features', []) features = await client.send_request('server.features', [])
self.client, self.server_features = client, features self.client, self.server_features = client, features
log.debug("discover other hubs %s:%i", *client.server)
await self._update_hubs(await client.send_request('server.peers.subscribe', []))
log.info("subscribe to headers %s:%i", *client.server) log.info("subscribe to headers %s:%i", *client.server)
self._update_remote_height((await self.subscribe_headers(),)) self._update_remote_height((await self.subscribe_headers(),))
self._on_connected_controller.add(True) self._on_connected_controller.add(True)
@ -296,13 +337,15 @@ class Network:
log.info("maintaining connection to spv server %s", server_str) log.info("maintaining connection to spv server %s", server_str)
self._keepalive_task = asyncio.create_task(self.client.keepalive_loop()) self._keepalive_task = asyncio.create_task(self.client.keepalive_loop())
try: try:
await asyncio.wait( if not self._urgent_need_reconnect.is_set():
[self._keepalive_task, self._urgent_need_reconnect.wait()], await asyncio.wait(
return_when=asyncio.FIRST_COMPLETED [self._keepalive_task, asyncio.create_task(self._urgent_need_reconnect.wait())],
) return_when=asyncio.FIRST_COMPLETED
)
else:
await self._keepalive_task
if self._urgent_need_reconnect.is_set(): if self._urgent_need_reconnect.is_set():
log.warning("urgent reconnect needed") log.warning("urgent reconnect needed")
self._urgent_need_reconnect.clear()
if self._keepalive_task and not self._keepalive_task.done(): if self._keepalive_task and not self._keepalive_task.done():
self._keepalive_task.cancel() self._keepalive_task.cancel()
except asyncio.CancelledError: except asyncio.CancelledError:
@ -311,7 +354,7 @@ class Network:
self._keepalive_task = None self._keepalive_task = None
self.client = None self.client = None
self.server_features = None self.server_features = None
log.warning("connection lost to %s", server_str) log.info("connection lost to %s", server_str)
log.info("network loop finished") log.info("network loop finished")
async def stop(self): async def stop(self):
@ -337,24 +380,30 @@ class Network:
raise ConnectionError("Attempting to send rpc request when connection is not available.") raise ConnectionError("Attempting to send rpc request when connection is not available.")
async def retriable_call(self, function, *args, **kwargs): async def retriable_call(self, function, *args, **kwargs):
async with self._concurrency: while self.running:
while self.running: if not self.is_connected:
if not self.is_connected: log.warning("Wallet server unavailable, waiting for it to come back and retry.")
log.warning("Wallet server unavailable, waiting for it to come back and retry.") self._urgent_need_reconnect.set()
self._urgent_need_reconnect.set() await self.on_connected.first
await self.on_connected.first try:
try: return await function(*args, **kwargs)
return await function(*args, **kwargs) except asyncio.TimeoutError:
except asyncio.TimeoutError: log.warning("Wallet server call timed out, retrying.")
log.warning("Wallet server call timed out, retrying.") except ConnectionError:
except ConnectionError: log.warning("connection error")
log.warning("connection error")
raise asyncio.CancelledError() # if we got here, we are shutting down raise asyncio.CancelledError() # if we got here, we are shutting down
def _update_remote_height(self, header_args): def _update_remote_height(self, header_args):
self.remote_height = header_args[0]["height"] self.remote_height = header_args[0]["height"]
async def _update_hubs(self, hubs):
if hubs and hubs != ['']:
try:
if self.known_hubs.add_hubs(hubs):
self.known_hubs.save()
except Exception:
log.exception("could not add hubs: %s", hubs)
def get_transaction(self, tx_hash, known_height=None): def get_transaction(self, tx_hash, known_height=None):
# use any server if its old, otherwise restrict to who gave us the history # use any server if its old, otherwise restrict to who gave us the history
restricted = known_height in (None, -1, 0) or 0 > known_height > self.remote_height - 10 restricted = known_height in (None, -1, 0) or 0 > known_height > self.remote_height - 10
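The reworked retriable_call() drops the module-wide semaphore (throttling now lives in ClientSession, see the sketch earlier) and simply loops: wait for a connection, retry on timeouts and connection errors, and raise CancelledError once the network is shutting down. A hedged restatement using only the attributes visible in the diff:

import asyncio

async def retriable_call(network, call, *args, **kwargs):
    # `network` stands in for the Network object; only the attributes used here are assumed.
    while network.running:
        if not network.is_connected:
            network._urgent_need_reconnect.set()   # nudge the connection loop
            await network.on_connected.first       # wait for a usable session
        try:
            return await call(*args, **kwargs)
        except asyncio.TimeoutError:
            pass                                   # hub too slow; try again
        except ConnectionError:
            pass                                   # session dropped; reconnect and retry
    raise asyncio.CancelledError()                 # loop exited: we are shutting down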
@ -412,8 +461,11 @@ class Network:
def get_server_features(self): def get_server_features(self):
return self.rpc('server.features', (), restricted=True) return self.rpc('server.features', (), restricted=True)
def get_claims_by_ids(self, claim_ids): # def get_claims_by_ids(self, claim_ids):
return self.rpc('blockchain.claimtrie.getclaimsbyids', claim_ids) # return self.rpc('blockchain.claimtrie.getclaimsbyids', claim_ids)
def get_claim_by_id(self, claim_id):
return self.rpc('blockchain.claimtrie.getclaimbyid', [claim_id])
def resolve(self, urls, session_override=None): def resolve(self, urls, session_override=None):
return self.rpc('blockchain.claimtrie.resolve', urls, False, session_override) return self.rpc('blockchain.claimtrie.resolve', urls, False, session_override)
@ -421,19 +473,6 @@ class Network:
def claim_search(self, session_override=None, **kwargs): def claim_search(self, session_override=None, **kwargs):
return self.rpc('blockchain.claimtrie.search', kwargs, False, session_override) return self.rpc('blockchain.claimtrie.search', kwargs, False, session_override)
async def new_resolve(self, server, urls):
message = {"method": "resolve", "params": {"urls": urls, "protobuf": True}}
async with self.aiohttp_session.post(server, json=message) as r:
result = await r.json()
return result['result']
async def new_claim_search(self, server, **kwargs):
kwargs['protobuf'] = True
message = {"method": "claim_search", "params": kwargs}
async with self.aiohttp_session.post(server, json=message) as r:
result = await r.json()
return result['result']
async def sum_supports(self, server, **kwargs): async def sum_supports(self, server, **kwargs):
message = {"method": "support_sum", "params": kwargs} message = {"method": "support_sum", "params": kwargs}
async with self.aiohttp_session.post(server, json=message) as r: async with self.aiohttp_session.post(server, json=message) as r:


@ -1,2 +1,2 @@
from .node import Conductor from lbry.wallet.orchstr8.node import Conductor
from .service import ConductorService from lbry.wallet.orchstr8.service import ConductorService


@ -5,7 +5,9 @@ import aiohttp
from lbry import wallet from lbry import wallet
from lbry.wallet.orchstr8.node import ( from lbry.wallet.orchstr8.node import (
Conductor, get_blockchain_node_from_ledger Conductor,
get_lbcd_node_from_ledger,
get_lbcwallet_node_from_ledger
) )
from lbry.wallet.orchstr8.service import ConductorService from lbry.wallet.orchstr8.service import ConductorService
@ -16,10 +18,11 @@ def get_argument_parser():
) )
subparsers = parser.add_subparsers(dest='command', help='sub-command help') subparsers = parser.add_subparsers(dest='command', help='sub-command help')
subparsers.add_parser("download", help="Download blockchain node binary.") subparsers.add_parser("download", help="Download lbcd and lbcwallet node binaries.")
start = subparsers.add_parser("start", help="Start orchstr8 service.") start = subparsers.add_parser("start", help="Start orchstr8 service.")
start.add_argument("--blockchain", help="Hostname to start blockchain node.") start.add_argument("--lbcd", help="Hostname to start lbcd node.")
start.add_argument("--lbcwallet", help="Hostname to start lbcwallet node.")
start.add_argument("--spv", help="Hostname to start SPV server.") start.add_argument("--spv", help="Hostname to start SPV server.")
start.add_argument("--wallet", help="Hostname to start wallet daemon.") start.add_argument("--wallet", help="Hostname to start wallet daemon.")
@ -47,7 +50,8 @@ def main():
if command == 'download': if command == 'download':
logging.getLogger('blockchain').setLevel(logging.INFO) logging.getLogger('blockchain').setLevel(logging.INFO)
get_blockchain_node_from_ledger(wallet).ensure() get_lbcd_node_from_ledger(wallet).ensure()
get_lbcwallet_node_from_ledger(wallet).ensure()
elif command == 'generate': elif command == 'generate':
loop.run_until_complete(run_remote_command( loop.run_until_complete(run_remote_command(
@ -57,9 +61,12 @@ def main():
elif command == 'start': elif command == 'start':
conductor = Conductor() conductor = Conductor()
if getattr(args, 'blockchain', False): if getattr(args, 'lbcd', False):
conductor.blockchain_node.hostname = args.blockchain conductor.lbcd_node.hostname = args.lbcd
loop.run_until_complete(conductor.start_blockchain()) loop.run_until_complete(conductor.start_lbcd())
if getattr(args, 'lbcwallet', False):
conductor.lbcwallet_node.hostname = args.lbcwallet
loop.run_until_complete(conductor.start_lbcwallet())
if getattr(args, 'spv', False): if getattr(args, 'spv', False):
conductor.spv_node.hostname = args.spv conductor.spv_node.hostname = args.spv
loop.run_until_complete(conductor.start_spv()) loop.run_until_complete(conductor.start_spv())


@ -1,3 +1,4 @@
# pylint: disable=import-error
import os import os
import json import json
import shutil import shutil
@ -7,31 +8,44 @@ import tarfile
import logging import logging
import tempfile import tempfile
import subprocess import subprocess
import importlib import platform
from binascii import hexlify from binascii import hexlify
from typing import Type, Optional from typing import Type, Optional
import urllib.request import urllib.request
from uuid import uuid4
import lbry import lbry
from lbry.wallet.server.server import Server
from lbry.wallet.server.env import Env
from lbry.wallet import Wallet, Ledger, RegTestLedger, WalletManager, Account, BlockHeightEvent from lbry.wallet import Wallet, Ledger, RegTestLedger, WalletManager, Account, BlockHeightEvent
from lbry.conf import KnownHubsList, Config
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
try:
def get_spvserver_from_ledger(ledger_module): from hub.herald.env import ServerEnv
spvserver_path, regtest_class_name = ledger_module.__spvserver__.rsplit('.', 1) from hub.scribe.env import BlockchainEnv
spvserver_module = importlib.import_module(spvserver_path) from hub.elastic_sync.env import ElasticEnv
return getattr(spvserver_module, regtest_class_name) from hub.herald.service import HubServerService
from hub.elastic_sync.service import ElasticSyncService
from hub.scribe.service import BlockchainProcessorService
except ImportError:
pass
def get_blockchain_node_from_ledger(ledger_module): def get_lbcd_node_from_ledger(ledger_module):
return BlockchainNode( return LBCDNode(
ledger_module.__node_url__, ledger_module.__lbcd_url__,
os.path.join(ledger_module.__node_bin__, ledger_module.__node_daemon__), ledger_module.__lbcd__,
os.path.join(ledger_module.__node_bin__, ledger_module.__node_cli__) ledger_module.__lbcctl__
)
def get_lbcwallet_node_from_ledger(ledger_module):
return LBCWalletNode(
ledger_module.__lbcwallet_url__,
ledger_module.__lbcwallet__,
ledger_module.__lbcctl__
) )
@ -39,40 +53,37 @@ class Conductor:
def __init__(self, seed=None): def __init__(self, seed=None):
self.manager_module = WalletManager self.manager_module = WalletManager
self.spv_module = get_spvserver_from_ledger(lbry.wallet) self.lbcd_node = get_lbcd_node_from_ledger(lbry.wallet)
self.lbcwallet_node = get_lbcwallet_node_from_ledger(lbry.wallet)
self.blockchain_node = get_blockchain_node_from_ledger(lbry.wallet) self.spv_node = SPVNode()
self.spv_node = SPVNode(self.spv_module)
self.wallet_node = WalletNode( self.wallet_node = WalletNode(
self.manager_module, RegTestLedger, default_seed=seed self.manager_module, RegTestLedger, default_seed=seed
) )
self.lbcd_started = False
self.blockchain_started = False self.lbcwallet_started = False
self.spv_started = False self.spv_started = False
self.wallet_started = False self.wallet_started = False
self.log = log.getChild('conductor') self.log = log.getChild('conductor')
async def start_blockchain(self): async def start_lbcd(self):
if not self.blockchain_started: if not self.lbcd_started:
asyncio.create_task(self.blockchain_node.start()) await self.lbcd_node.start()
await self.blockchain_node.running.wait() self.lbcd_started = True
await self.blockchain_node.generate(200)
self.blockchain_started = True
async def stop_blockchain(self): async def stop_lbcd(self, cleanup=True):
if self.blockchain_started: if self.lbcd_started:
await self.blockchain_node.stop(cleanup=True) await self.lbcd_node.stop(cleanup)
self.blockchain_started = False self.lbcd_started = False
async def start_spv(self): async def start_spv(self):
if not self.spv_started: if not self.spv_started:
await self.spv_node.start(self.blockchain_node) await self.spv_node.start(self.lbcwallet_node)
self.spv_started = True self.spv_started = True
async def stop_spv(self): async def stop_spv(self, cleanup=True):
if self.spv_started: if self.spv_started:
await self.spv_node.stop(cleanup=True) await self.spv_node.stop(cleanup)
self.spv_started = False self.spv_started = False
async def start_wallet(self): async def start_wallet(self):
@ -80,13 +91,30 @@ class Conductor:
await self.wallet_node.start(self.spv_node) await self.wallet_node.start(self.spv_node)
self.wallet_started = True self.wallet_started = True
async def stop_wallet(self): async def stop_wallet(self, cleanup=True):
if self.wallet_started: if self.wallet_started:
await self.wallet_node.stop(cleanup=True) await self.wallet_node.stop(cleanup)
self.wallet_started = False self.wallet_started = False
async def start_lbcwallet(self, clean=True):
if not self.lbcwallet_started:
await self.lbcwallet_node.start()
if clean:
mining_addr = await self.lbcwallet_node.get_new_address()
self.lbcwallet_node.mining_addr = mining_addr
await self.lbcwallet_node.generate(200)
# unlock the wallet for the next 1 hour
await self.lbcwallet_node.wallet_passphrase("password", 3600)
self.lbcwallet_started = True
async def stop_lbcwallet(self, cleanup=True):
if self.lbcwallet_started:
await self.lbcwallet_node.stop(cleanup)
self.lbcwallet_started = False
async def start(self): async def start(self):
await self.start_blockchain() await self.start_lbcd()
await self.start_lbcwallet()
await self.start_spv() await self.start_spv()
await self.start_wallet() await self.start_wallet()
@ -94,7 +122,8 @@ class Conductor:
all_the_stops = [ all_the_stops = [
self.stop_wallet, self.stop_wallet,
self.stop_spv, self.stop_spv,
self.stop_blockchain self.stop_lbcwallet,
self.stop_lbcd
] ]
for stop in all_the_stops: for stop in all_the_stops:
try: try:
@ -102,11 +131,18 @@ class Conductor:
except Exception as e: except Exception as e:
log.exception('Exception raised while stopping services:', exc_info=e) log.exception('Exception raised while stopping services:', exc_info=e)
async def clear_mempool(self):
await self.stop_lbcwallet(cleanup=False)
await self.stop_lbcd(cleanup=False)
await self.start_lbcd()
await self.start_lbcwallet(clean=False)
class WalletNode: class WalletNode:
def __init__(self, manager_class: Type[WalletManager], ledger_class: Type[Ledger], def __init__(self, manager_class: Type[WalletManager], ledger_class: Type[Ledger],
verbose: bool = False, port: int = 5280, default_seed: str = None) -> None: verbose: bool = False, port: int = 5280, default_seed: str = None,
data_path: str = None) -> None:
self.manager_class = manager_class self.manager_class = manager_class
self.ledger_class = ledger_class self.ledger_class = ledger_class
self.verbose = verbose self.verbose = verbose
@ -114,27 +150,34 @@ class WalletNode:
self.ledger: Optional[Ledger] = None self.ledger: Optional[Ledger] = None
self.wallet: Optional[Wallet] = None self.wallet: Optional[Wallet] = None
self.account: Optional[Account] = None self.account: Optional[Account] = None
self.data_path: Optional[str] = None self.data_path: str = data_path or tempfile.mkdtemp()
self.port = port self.port = port
self.default_seed = default_seed self.default_seed = default_seed
self.known_hubs = KnownHubsList()
async def start(self, spv_node: 'SPVNode', seed=None, connect=True): async def start(self, spv_node: 'SPVNode', seed=None, connect=True, config=None):
self.data_path = tempfile.mkdtemp()
wallets_dir = os.path.join(self.data_path, 'wallets') wallets_dir = os.path.join(self.data_path, 'wallets')
os.mkdir(wallets_dir)
wallet_file_name = os.path.join(wallets_dir, 'my_wallet.json') wallet_file_name = os.path.join(wallets_dir, 'my_wallet.json')
with open(wallet_file_name, 'w') as wallet_file: if not os.path.isdir(wallets_dir):
wallet_file.write('{"version": 1, "accounts": []}\n') os.mkdir(wallets_dir)
with open(wallet_file_name, 'w') as wallet_file:
wallet_file.write('{"version": 1, "accounts": []}\n')
self.manager = self.manager_class.from_config({ self.manager = self.manager_class.from_config({
'ledgers': { 'ledgers': {
self.ledger_class.get_id(): { self.ledger_class.get_id(): {
'api_port': self.port, 'api_port': self.port,
'default_servers': [(spv_node.hostname, spv_node.port)], 'explicit_servers': [(spv_node.hostname, spv_node.port)],
'data_path': self.data_path 'default_servers': Config.lbryum_servers.default,
'data_path': self.data_path,
'known_hubs': config.known_hubs if config else KnownHubsList(),
'hub_timeout': 30,
'concurrent_hub_requests': 32,
'fee_per_name_char': 200000
} }
}, },
'wallets': [wallet_file_name] 'wallets': [wallet_file_name]
}) })
self.manager.config = config
self.ledger = self.manager.ledgers[self.ledger_class] self.ledger = self.manager.ledgers[self.ledger_class]
self.wallet = self.manager.default_wallet self.wallet = self.manager.default_wallet
if not self.wallet: if not self.wallet:
@ -160,44 +203,83 @@ class WalletNode:
class SPVNode: class SPVNode:
def __init__(self, node_number=1):
def __init__(self, coin_class, node_number=1): self.node_number = node_number
self.coin_class = coin_class
self.controller = None self.controller = None
self.data_path = None self.data_path = None
self.server = None self.server: Optional[HubServerService] = None
self.writer: Optional[BlockchainProcessorService] = None
self.es_writer: Optional[ElasticSyncService] = None
self.hostname = 'localhost' self.hostname = 'localhost'
self.port = 50001 + node_number # avoid conflict with default daemon self.port = 50001 + node_number # avoid conflict with default daemon
self.udp_port = self.port
self.elastic_notifier_port = 19080 + node_number
self.elastic_services = f'localhost:9200/localhost:{self.elastic_notifier_port}'
self.session_timeout = 600 self.session_timeout = 600
self.rpc_port = '0' # disabled by default self.stopped = True
self.index_name = uuid4().hex
async def start(self, blockchain_node: 'BlockchainNode', extraconf=None): async def start(self, lbcwallet_node: 'LBCWalletNode', extraconf=None):
self.data_path = tempfile.mkdtemp() if not self.stopped:
conf = { log.warning("spv node is already running")
'DESCRIPTION': '', return
'PAYMENT_ADDRESS': '', self.stopped = False
'DAILY_FEE': '0', try:
'DB_DIRECTORY': self.data_path, self.data_path = tempfile.mkdtemp()
'DAEMON_URL': blockchain_node.rpc_url, conf = {
'REORG_LIMIT': '100', 'description': '',
'HOST': self.hostname, 'payment_address': '',
'TCP_PORT': str(self.port), 'daily_fee': '0',
'SESSION_TIMEOUT': str(self.session_timeout), 'db_dir': self.data_path,
'MAX_QUERY_WORKERS': '0', 'daemon_url': lbcwallet_node.rpc_url,
'INDIVIDUAL_TAG_INDEXES': '', 'reorg_limit': 100,
'RPC_PORT': self.rpc_port 'host': self.hostname,
} 'tcp_port': self.port,
if extraconf: 'udp_port': self.udp_port,
conf.update(extraconf) 'elastic_services': self.elastic_services,
# TODO: don't use os.environ 'session_timeout': self.session_timeout,
os.environ.update(conf) 'max_query_workers': 0,
self.server = Server(Env(self.coin_class)) 'es_index_prefix': self.index_name,
self.server.mempool.refresh_secs = self.server.bp.prefetcher.polling_delay = 0.5 'chain': 'regtest',
await self.server.start() 'index_address_status': False
}
if extraconf:
conf.update(extraconf)
self.writer = BlockchainProcessorService(
BlockchainEnv(db_dir=self.data_path, daemon_url=lbcwallet_node.rpc_url,
reorg_limit=100, max_query_workers=0, chain='regtest', index_address_status=False)
)
self.server = HubServerService(ServerEnv(**conf))
self.es_writer = ElasticSyncService(
ElasticEnv(
db_dir=self.data_path, reorg_limit=100, max_query_workers=0, chain='regtest',
elastic_notifier_port=self.elastic_notifier_port,
es_index_prefix=self.index_name,
filtering_channel_ids=(extraconf or {}).get('filtering_channel_ids'),
blocking_channel_ids=(extraconf or {}).get('blocking_channel_ids')
)
)
await self.writer.start()
await self.es_writer.start()
await self.server.start()
except Exception as e:
self.stopped = True
log.exception("failed to start spv node")
raise e
async def stop(self, cleanup=True): async def stop(self, cleanup=True):
if self.stopped:
log.warning("spv node is already stopped")
return
try: try:
await self.server.stop() await self.server.stop()
await self.es_writer.delete_index()
await self.es_writer.stop()
await self.writer.stop()
self.stopped = True
except Exception as e:
log.exception("failed to stop spv node")
raise e
finally: finally:
cleanup and self.cleanup() cleanup and self.cleanup()
@ -205,18 +287,19 @@ class SPVNode:
shutil.rmtree(self.data_path, ignore_errors=True) shutil.rmtree(self.data_path, ignore_errors=True)
class BlockchainProcess(asyncio.SubprocessProtocol): class LBCDProcess(asyncio.SubprocessProtocol):
IGNORE_OUTPUT = [ IGNORE_OUTPUT = [
b'keypool keep', b'keypool keep',
b'keypool reserve', b'keypool reserve',
b'keypool return', b'keypool return',
b'Block submitted',
] ]
def __init__(self): def __init__(self):
self.ready = asyncio.Event() self.ready = asyncio.Event()
self.stopped = asyncio.Event() self.stopped = asyncio.Event()
self.log = log.getChild('blockchain') self.log = log.getChild('lbcd')
def pipe_data_received(self, fd, data): def pipe_data_received(self, fd, data):
if self.log and not any(ignore in data for ignore in self.IGNORE_OUTPUT): if self.log and not any(ignore in data for ignore in self.IGNORE_OUTPUT):
@ -227,7 +310,7 @@ class BlockchainProcess(asyncio.SubprocessProtocol):
if b'Error:' in data: if b'Error:' in data:
self.ready.set() self.ready.set()
raise SystemError(data.decode()) raise SystemError(data.decode())
if b'Done loading' in data: if b'RPCS: RPC server listening on' in data:
self.ready.set() self.ready.set()
def process_exited(self): def process_exited(self):
@ -235,39 +318,57 @@ class BlockchainProcess(asyncio.SubprocessProtocol):
self.ready.set() self.ready.set()
class BlockchainNode: class WalletProcess(asyncio.SubprocessProtocol):
P2SH_SEGWIT_ADDRESS = "p2sh-segwit" IGNORE_OUTPUT = [
BECH32_ADDRESS = "bech32" ]
def __init__(self):
self.ready = asyncio.Event()
self.stopped = asyncio.Event()
self.log = log.getChild('lbcwallet')
self.transport: Optional[asyncio.transports.SubprocessTransport] = None
def pipe_data_received(self, fd, data):
if self.log and not any(ignore in data for ignore in self.IGNORE_OUTPUT):
if b'Error:' in data:
self.log.error(data.decode())
else:
self.log.info(data.decode())
if b'Error:' in data:
self.ready.set()
raise SystemError(data.decode())
if b'WLLT: Finished rescan' in data:
self.ready.set()
def process_exited(self):
self.stopped.set()
self.ready.set()
class LBCDNode:
def __init__(self, url, daemon, cli): def __init__(self, url, daemon, cli):
self.latest_release_url = url self.latest_release_url = url
self.project_dir = os.path.dirname(os.path.dirname(__file__)) self.project_dir = os.path.dirname(os.path.dirname(__file__))
self.bin_dir = os.path.join(self.project_dir, 'bin') self.bin_dir = os.path.join(self.project_dir, 'bin')
self.daemon_bin = os.path.join(self.bin_dir, daemon) self.daemon_bin = os.path.join(self.bin_dir, daemon)
self.cli_bin = os.path.join(self.bin_dir, cli) self.cli_bin = os.path.join(self.bin_dir, cli)
self.log = log.getChild('blockchain') self.log = log.getChild('lbcd')
self.data_path = None self.data_path = tempfile.mkdtemp()
self.protocol = None self.protocol = None
self.transport = None self.transport = None
self.block_expected = 0
self.hostname = 'localhost' self.hostname = 'localhost'
self.peerport = 9246 + 2 # avoid conflict with default peer port self.peerport = 29246
self.rpcport = 9245 + 2 # avoid conflict with default rpc port self.rpcport = 29245
self.rpcuser = 'rpcuser' self.rpcuser = 'rpcuser'
self.rpcpassword = 'rpcpassword' self.rpcpassword = 'rpcpassword'
self.stopped = False self.stopped = True
self.restart_ready = asyncio.Event()
self.restart_ready.set()
self.running = asyncio.Event() self.running = asyncio.Event()
@property @property
def rpc_url(self): def rpc_url(self):
return f'http://{self.rpcuser}:{self.rpcpassword}@{self.hostname}:{self.rpcport}/' return f'http://{self.rpcuser}:{self.rpcpassword}@{self.hostname}:{self.rpcport}/'
def is_expected_block(self, e: BlockHeightEvent):
return self.block_expected == e.height
@property @property
def exists(self): def exists(self):
return ( return (
@ -276,6 +377,12 @@ class BlockchainNode:
) )
def download(self): def download(self):
uname = platform.uname()
target_os = str.lower(uname.system)
target_arch = str.replace(uname.machine, 'x86_64', 'amd64')
target_platform = target_os + '_' + target_arch
self.latest_release_url = str.replace(self.latest_release_url, 'TARGET_PLATFORM', target_platform)
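Binary downloads for lbcd and lbcwallet now substitute a TARGET_PLATFORM placeholder in the release URL with values such as linux_amd64 or darwin_arm64. A small sketch of that substitution (the example URL is made up):

import platform

def release_url(template: str) -> str:
    uname = platform.uname()
    target_os = uname.system.lower()                        # 'Linux' -> 'linux'
    target_arch = uname.machine.replace('x86_64', 'amd64')  # match Go-style release names
    return template.replace('TARGET_PLATFORM', f'{target_os}_{target_arch}')

# release_url('https://example.com/lbcd_TARGET_PLATFORM.tar.gz')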
downloaded_file = os.path.join( downloaded_file = os.path.join(
self.bin_dir, self.bin_dir,
self.latest_release_url[self.latest_release_url.rfind('/')+1:] self.latest_release_url[self.latest_release_url.rfind('/')+1:]
@ -309,72 +416,206 @@ class BlockchainNode:
return self.exists or self.download() return self.exists or self.download()
async def start(self): async def start(self):
assert self.ensure() if not self.stopped:
self.data_path = tempfile.mkdtemp() return
loop = asyncio.get_event_loop() self.stopped = False
asyncio.get_child_watcher().attach_loop(loop) try:
command = [ assert self.ensure()
self.daemon_bin, loop = asyncio.get_event_loop()
f'-datadir={self.data_path}', '-printtoconsole', '-regtest', '-server', '-txindex', asyncio.get_child_watcher().attach_loop(loop)
f'-rpcuser={self.rpcuser}', f'-rpcpassword={self.rpcpassword}', f'-rpcport={self.rpcport}', command = [
f'-port={self.peerport}' self.daemon_bin,
] '--notls',
self.log.info(' '.join(command)) f'--datadir={self.data_path}',
while not self.stopped: '--regtest', f'--listen=127.0.0.1:{self.peerport}', f'--rpclisten=127.0.0.1:{self.rpcport}',
if self.running.is_set(): '--txindex', f'--rpcuser={self.rpcuser}', f'--rpcpass={self.rpcpassword}'
await asyncio.sleep(1) ]
continue self.log.info(' '.join(command))
await self.restart_ready.wait() self.transport, self.protocol = await loop.subprocess_exec(
try: LBCDProcess, *command
self.transport, self.protocol = await loop.subprocess_exec( )
BlockchainProcess, *command await self.protocol.ready.wait()
) assert not self.protocol.stopped.is_set()
await self.protocol.ready.wait() self.running.set()
assert not self.protocol.stopped.is_set() except asyncio.CancelledError:
self.running.set() self.running.clear()
except asyncio.CancelledError: self.stopped = True
self.running.clear() raise
raise except Exception as e:
except Exception as e: self.running.clear()
self.running.clear() self.stopped = True
log.exception('failed to start lbrycrdd', exc_info=e) log.exception('failed to start lbcd', exc_info=e)
raise
async def stop(self, cleanup=True): async def stop(self, cleanup=True):
if self.stopped:
return
try:
if self.transport:
self.transport.terminate()
await self.protocol.stopped.wait()
self.transport.close()
except Exception as e:
log.exception('failed to stop lbcd', exc_info=e)
raise
finally:
self.log.info("Done shutting down " + self.daemon_bin)
self.stopped = True
if cleanup:
self.cleanup()
self.running.clear()
def cleanup(self):
assert self.stopped
shutil.rmtree(self.data_path, ignore_errors=True)
class LBCWalletNode:
P2SH_SEGWIT_ADDRESS = "p2sh-segwit"
BECH32_ADDRESS = "bech32"
def __init__(self, url, lbcwallet, cli):
self.latest_release_url = url
self.project_dir = os.path.dirname(os.path.dirname(__file__))
self.bin_dir = os.path.join(self.project_dir, 'bin')
self.lbcwallet_bin = os.path.join(self.bin_dir, lbcwallet)
self.cli_bin = os.path.join(self.bin_dir, cli)
self.log = log.getChild('lbcwallet')
self.protocol = None
self.transport = None
self.hostname = 'localhost'
self.lbcd_rpcport = 29245
self.lbcwallet_rpcport = 29244
self.rpcuser = 'rpcuser'
self.rpcpassword = 'rpcpassword'
self.data_path = tempfile.mkdtemp()
self.stopped = True self.stopped = True
self.running = asyncio.Event()
self.block_expected = 0
self.mining_addr = ''
@property
def rpc_url(self):
# FIXME: somehow the hub/sdk doesn't learn the blocks through the Wallet RPC port, why?
# return f'http://{self.rpcuser}:{self.rpcpassword}@{self.hostname}:{self.lbcwallet_rpcport}/'
return f'http://{self.rpcuser}:{self.rpcpassword}@{self.hostname}:{self.lbcd_rpcport}/'
def is_expected_block(self, e: BlockHeightEvent):
return self.block_expected == e.height
@property
def exists(self):
return (
os.path.exists(self.lbcwallet_bin)
)
def download(self):
uname = platform.uname()
target_os = str.lower(uname.system)
target_arch = str.replace(uname.machine, 'x86_64', 'amd64')
target_platform = target_os + '_' + target_arch
self.latest_release_url = str.replace(self.latest_release_url, 'TARGET_PLATFORM', target_platform)
downloaded_file = os.path.join(
self.bin_dir,
self.latest_release_url[self.latest_release_url.rfind('/')+1:]
)
if not os.path.exists(self.bin_dir):
os.mkdir(self.bin_dir)
if not os.path.exists(downloaded_file):
self.log.info('Downloading: %s', self.latest_release_url)
with urllib.request.urlopen(self.latest_release_url) as response:
with open(downloaded_file, 'wb') as out_file:
shutil.copyfileobj(response, out_file)
self.log.info('Extracting: %s', downloaded_file)
if downloaded_file.endswith('.zip'):
with zipfile.ZipFile(downloaded_file) as dotzip:
dotzip.extractall(self.bin_dir)
# zipfile bug https://bugs.python.org/issue15795
os.chmod(self.lbcwallet_bin, 0o755)
elif downloaded_file.endswith('.tar.gz'):
with tarfile.open(downloaded_file) as tar:
tar.extractall(self.bin_dir)
return self.exists
def ensure(self):
return self.exists or self.download()
async def start(self):
assert self.ensure()
loop = asyncio.get_event_loop()
asyncio.get_child_watcher().attach_loop(loop)
command = [
self.lbcwallet_bin,
'--noservertls', '--noclienttls',
'--regtest',
f'--rpcconnect=127.0.0.1:{self.lbcd_rpcport}', f'--rpclisten=127.0.0.1:{self.lbcwallet_rpcport}',
'--createtemp', f'--appdata={self.data_path}',
f'--username={self.rpcuser}', f'--password={self.rpcpassword}'
]
self.log.info(' '.join(command))
try:
self.transport, self.protocol = await loop.subprocess_exec(
WalletProcess, *command
)
self.protocol.transport = self.transport
await self.protocol.ready.wait()
assert not self.protocol.stopped.is_set()
self.running.set()
self.stopped = False
except asyncio.CancelledError:
self.running.clear()
raise
except Exception as e:
self.running.clear()
log.exception('failed to start lbcwallet', exc_info=e)
def cleanup(self):
assert self.stopped
shutil.rmtree(self.data_path, ignore_errors=True)
async def stop(self, cleanup=True):
if self.stopped:
return
try:
self.transport.terminate()
await self.protocol.stopped.wait()
self.transport.close()
except Exception as e:
log.exception('failed to stop lbcwallet', exc_info=e)
raise
finally:
self.log.info("Done shutting down " + self.lbcwallet_bin)
self.stopped = True
if cleanup:
self.cleanup()
self.running.clear()
async def clear_mempool(self):
self.restart_ready.clear()
self.transport.terminate()
await self.protocol.stopped.wait()
self.transport.close()
self.running.clear()
os.remove(os.path.join(self.data_path, 'regtest', 'mempool.dat'))
self.restart_ready.set()
await self.running.wait()
def cleanup(self):
shutil.rmtree(self.data_path, ignore_errors=True)
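Taken together, start() and stop() define the node's lifecycle. A hedged sketch of how a test harness might drive it is below; the URL and binary names are placeholders, and it assumes an lbcd instance is already listening on 127.0.0.1:29245 (the --rpcconnect target shown above).

```python
# Illustrative lifecycle only; not taken from the repository's tests.
async def run_wallet_node():
    node = LBCWalletNode(
        'https://example.invalid/lbcwallet_TARGET_PLATFORM.zip',  # placeholder release URL
        'lbcwallet', 'lbcctl'                                      # assumed binary names
    )
    await node.start()                    # downloads the binary if needed, then sets running
    try:
        assert node.running.is_set()
    finally:
        await node.stop(cleanup=True)     # terminates the process and removes data_path
```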
async def _cli_cmnd(self, *args):
cmnd_args = [
- self.cli_bin, f'-datadir={self.data_path}', '-regtest',
- f'-rpcuser={self.rpcuser}', f'-rpcpassword={self.rpcpassword}', f'-rpcport={self.rpcport}'
+ self.cli_bin,
+ f'--rpcuser={self.rpcuser}', f'--rpcpass={self.rpcpassword}', '--notls', '--regtest', '--wallet'
] + list(args)
self.log.info(' '.join(cmnd_args))
loop = asyncio.get_event_loop()
asyncio.get_child_watcher().attach_loop(loop)
process = await asyncio.create_subprocess_exec(
- *cmnd_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+ *cmnd_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
- out, _ = await process.communicate()
+ out, err = await process.communicate()
result = out.decode().strip()
+ err = err.decode().strip()
+ if len(result) <= 0 and err.startswith('-'):
+ raise Exception(err)
+ if err and 'creating a default config file' not in err:
+ log.warning(err)
self.log.info(result)
if result.startswith('error code'):
raise Exception(result)
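For orientation, here is a rough sketch of the command line the rewritten _cli_cmnd assembles after the switch from lbrycrd-cli style single-dash flags to lbcctl/lbcwallet style double-dash flags. The binary path is an assumed example; the user, password, and trailing RPC arguments mirror the attributes and *args shown above.

```python
# Not taken from the diff: a minimal reconstruction of the assembled command.
cli_bin = './bin/lbcctl'                      # assumed location under self.bin_dir
rpcuser, rpcpassword = 'rpcuser', 'rpcpassword'
cmnd_args = [
    cli_bin,
    f'--rpcuser={rpcuser}', f'--rpcpass={rpcpassword}', '--notls', '--regtest', '--wallet',
    'getnewaddress', '', 'legacy',            # example *args for one wrapped RPC call
]
print(' '.join(cmnd_args))
# ./bin/lbcctl --rpcuser=rpcuser --rpcpass=rpcpassword --notls --regtest --wallet getnewaddress  legacy
```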
@@ -382,7 +623,14 @@ class BlockchainNode:
def generate(self, blocks):
self.block_expected += blocks
- return self._cli_cmnd('generate', str(blocks))
+ return self._cli_cmnd('generatetoaddress', str(blocks), self.mining_addr)
+ def generate_to_address(self, blocks, addr):
+ self.block_expected += blocks
+ return self._cli_cmnd('generatetoaddress', str(blocks), addr)
+ def wallet_passphrase(self, passphrase, timeout):
+ return self._cli_cmnd('walletpassphrase', passphrase, str(timeout))
def invalidate_block(self, blockhash):
return self._cli_cmnd('invalidateblock', blockhash)
@@ -399,11 +647,11 @@ class BlockchainNode:
def get_raw_change_address(self):
return self._cli_cmnd('getrawchangeaddress')
- def get_new_address(self, address_type):
+ def get_new_address(self, address_type='legacy'):
return self._cli_cmnd('getnewaddress', "", address_type)
async def get_balance(self):
- return float(await self._cli_cmnd('getbalance'))
+ return await self._cli_cmnd('getbalance')
def send_to_address(self, address, amount):
return self._cli_cmnd('sendtoaddress', address, str(amount))
@@ -415,7 +663,10 @@ class BlockchainNode:
return self._cli_cmnd('createrawtransaction', json.dumps(inputs), json.dumps(outputs))
async def sign_raw_transaction_with_wallet(self, tx):
- return json.loads(await self._cli_cmnd('signrawtransactionwithwallet', tx))['hex'].encode()
+ # the "withwallet" portion should only come into play if we are doing segwit.
+ # and "withwallet" doesn't exist on lbcd yet.
+ result = await self._cli_cmnd('signrawtransaction', tx)
+ return json.loads(result)['hex'].encode()
def decode_raw_transaction(self, tx):
return self._cli_cmnd('decoderawtransaction', hexlify(tx.raw).decode())


@@ -61,8 +61,10 @@ class ConductorService:
#set_logging(
# self.stack.ledger_module, logging.DEBUG, WebSocketLogHandler(self.send_message)
#)
- self.stack.blockchain_started or await self.stack.start_blockchain()
- self.send_message({'type': 'service', 'name': 'blockchain', 'port': self.stack.blockchain_node.port})
+ self.stack.lbcd_started or await self.stack.start_lbcd()
+ self.send_message({'type': 'service', 'name': 'lbcd', 'port': self.stack.lbcd_node.port})
+ self.stack.lbcwallet_started or await self.stack.start_lbcwallet()
+ self.send_message({'type': 'service', 'name': 'lbcwallet', 'port': self.stack.lbcwallet_node.port})
self.stack.spv_started or await self.stack.start_spv()
self.send_message({'type': 'service', 'name': 'spv', 'port': self.stack.spv_node.port})
self.stack.wallet_started or await self.stack.start_wallet()
@@ -74,7 +76,7 @@ class ConductorService:
async def generate(self, request):
data = await request.post()
blocks = data.get('blocks', 1)
- await self.stack.blockchain_node.generate(int(blocks))
+ await self.stack.lbcwallet_node.generate(int(blocks))
return json_response({'blocks': blocks})
async def transfer(self, request):
@@ -85,11 +87,14 @@ class ConductorService:
if not address:
raise ValueError("No address was provided.")
amount = data.get('amount', 1)
- txid = await self.stack.blockchain_node.send_to_address(address, amount)
if self.stack.wallet_started:
- await self.stack.wallet_node.ledger.on_transaction.where(
- lambda e: e.tx.id == txid and e.address == address
+ watcher = self.stack.wallet_node.ledger.on_transaction.where(
+ lambda e: e.address == address # and e.tx.id == txid -- might stall; see send_to_address_and_wait
)
+ txid = await self.stack.lbcwallet_node.send_to_address(address, amount)
+ await watcher
+ else:
+ txid = await self.stack.lbcwallet_node.send_to_address(address, amount)
return json_response({
'address': address,
'amount': amount,
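The reordering in this hunk encodes a pattern worth calling out: create the transaction watcher before broadcasting, then await it, so an event that fires quickly cannot be missed. A generic, self-contained sketch of the same "subscribe first, act second, await last" shape follows; the Broadcast class and names are illustrative, not the SDK's API.

```python
import asyncio

class Broadcast:
    """Tiny stand-in for an event stream: listeners only see events
    that arrive *after* they subscribe (like on_transaction.where)."""
    def __init__(self):
        self._listeners = []

    def where(self, predicate) -> asyncio.Future:
        fut = asyncio.get_running_loop().create_future()
        self._listeners.append((predicate, fut))
        return fut

    def fire(self, event):
        for predicate, fut in self._listeners:
            if not fut.done() and predicate(event):
                fut.set_result(event)

async def main():
    on_transaction = Broadcast()
    address = 'bFakeAddress'                          # placeholder address
    watcher = on_transaction.where(lambda e: e == address)   # 1. subscribe first
    on_transaction.fire(address)                              # 2. "send_to_address"
    await watcher                                             # 3. resolves immediately; nothing missed

asyncio.run(main())
```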
@@ -98,7 +103,7 @@ class ConductorService:
async def balance(self, _):
return json_response({
- 'balance': await self.stack.blockchain_node.get_balance()
+ 'balance': await self.stack.lbcwallet_node.get_balance()
})
async def log(self, request):
@@ -129,7 +134,7 @@ class ConductorService:
'type': 'status',
'height': self.stack.wallet_node.ledger.headers.height,
'balance': satoshis_to_coins(await self.stack.wallet_node.account.get_balance()),
- 'miner': await self.stack.blockchain_node.get_balance()
+ 'miner': await self.stack.lbcwallet_node.get_balance()
})
def send_message(self, msg):


@@ -108,9 +108,6 @@ class Response:
class CodeMessageError(Exception):
- def __init__(self, code, message):
- super().__init__(code, message)
@property
def code(self):
return self.args[0]


@@ -395,8 +395,8 @@ class RPCSession(SessionBase):
namespace=NAMESPACE, labelnames=("version",)
)
- def __init__(self, *, framer=None, loop=None, connection=None):
- super().__init__(framer=framer, loop=loop)
+ def __init__(self, *, framer=None, connection=None):
+ super().__init__(framer=framer)
self.connection = connection or self.default_connection()
self.client_version = 'unknown'
@@ -436,7 +436,8 @@ class RPCSession(SessionBase):
except CancelledError:
raise
except Exception:
- self.logger.exception(f'exception handling {request}')
+ reqstr = str(request)
+ self.logger.exception(f'exception handling {reqstr[:16_000]}')
result = RPCError(JSONRPC.INTERNAL_ERROR,
'internal server error')
if isinstance(request, Request):
@@ -495,6 +496,17 @@ class RPCSession(SessionBase):
self.abort()
return False
+ async def send_notifications(self, notifications) -> bool:
+ """Send an RPC notification over the network."""
+ message, _ = self.connection.send_batch(notifications)
+ try:
+ await self._send_message(message)
+ return True
+ except asyncio.TimeoutError:
+ self.logger.info("timeout sending address notification to %s", self.peer_address_str(for_log=True))
+ self.abort()
+ return False
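A hedged sketch of how a server-side session might call this new helper. It assumes a Notification(method, args) message type from the same JSON-RPC module and an established RPCSession subclass instance; the method name, payloads, and import path are illustrative assumptions, not taken from the diff.

```python
# from lbry.wallet.rpc.jsonrpc import Notification   # assumed import path

async def push_address_updates(session, touched):
    # Build one notification per touched address (payload shape is assumed).
    notifications = [
        Notification('blockchain.address.subscribe', (address, status))
        for address, status in touched.items()
    ]
    # send_batch() encodes the whole batch into a single framed message;
    # send_notifications() returns False (and aborts the session) on timeout.
    ok = await session.send_notifications(notifications)
    if not ok:
        log.warning('dropped %d notifications: session aborted', len(notifications))
```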
def send_batch(self, raise_errors=False):
"""Return a BatchRequest. Intended to be used like so:

Some files were not shown because too many files have changed in this diff.