-add get_read_handle to file_utils
-don't leave read handle hanging after creating lbry file
-get rid of threaded weirdness
-remove reflector functionality from Publisher
-fix updating with an existing stream
-reflect new stream in thread after broadcasting name claim
-move status message on connectionDone to client, indicate if blobs
were sent or not (and how many blobs reflector still needs, if any)
-only try uploading failed blob once after first failure, to prevent
indefinite retries
-add {'sd_blob_hash': ..., 'sd_blob_size': ...} query type with
{'send_sd_blob': True/False, 'needed_blobs': []} response
this lets the reflector client know how much of a stream reflector
already has, so as to minimize the number of subsequent requests
and prevent streams from being partially reflected (see the sketch
after this list)
-remove empty {} request
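A minimal sketch of the new handshake from the client side (the helper names and the plain-json framing here are assumptions, not the actual reflector client code):

```python
import json


def make_sd_blob_query(sd_blob_hash, sd_blob_size):
    # Client -> server: announce the stream descriptor we want to reflect.
    return json.dumps({
        'sd_blob_hash': sd_blob_hash,
        'sd_blob_size': sd_blob_size,
    })


def parse_sd_blob_response(response_json):
    # Server -> client: whether the sd blob itself is needed, plus which
    # content blobs the server is still missing. An empty 'needed_blobs'
    # list means the stream is already fully reflected.
    response = json.loads(response_json)
    return response['send_sd_blob'], response.get('needed_blobs', [])
```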
This fixes a bug where the app would fail to start if GitHub was down.
-check for new version every 30 min instead of every 12 hours
-check for connection problems every 30 seconds instead of every second
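Purely for illustration, the two new intervals expressed as Twisted LoopingCalls (the daemon method names are assumptions):

```python
from twisted.internet import task

VERSION_CHECK_INTERVAL = 30 * 60   # new version check: every 30 min
CONNECTION_CHECK_INTERVAL = 30     # connection problem check: every 30 s


def start_periodic_checks(daemon):
    # daemon.check_for_new_version / daemon.check_connection_problems are
    # hypothetical method names standing in for the real checks.
    version_checker = task.LoopingCall(daemon.check_for_new_version)
    version_checker.start(VERSION_CHECK_INTERVAL)
    connection_checker = task.LoopingCall(daemon.check_connection_problems)
    connection_checker.start(CONNECTION_CHECK_INTERVAL)
    return version_checker, connection_checker
```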
What was happening was that the wallet claimed to be caught up before it
actually was, so the wallet's local_height was still the value from
when lbry was last run, frequently 20 or 50 blocks or more
behind. _get_value_for_name uses the block at local_height as the
basis for the proof. If _get_value_for_name is called in the window
between when the wallet claims to be caught up and when it actually
is, the "Block too deep" error happens. And since the discover page
of the UI does name resolution right away, the error basically happens
any time somebody starts the app after not using it for a few hours.
This changes the startup behaviour of the wallet to
- use the `update` callback provided by lbryum
- check that local_height and network_height match before declaring
that the wallet has caught up
For reference, the error is raised here:
1b896ae75b/src/rpc/claimtrie.cpp (L688)
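A minimal sketch of the new startup check, assuming an lbryum-style network object with a register_callback hook and accessors for the local and server heights (the exact lbryum API used here is an assumption):

```python
from twisted.internet import defer


class WalletCatchUpWatcher(object):
    """Fires `caught_up` only once local_height has reached network_height."""

    def __init__(self, network):
        self.network = network
        self.caught_up = defer.Deferred()

    def start(self):
        # Hook the `update` callback provided by lbryum instead of polling;
        # the register_callback signature is assumed.
        self.network.register_callback(self._on_update, ['updated'])

    def _on_update(self, *args):
        local_height = self.network.get_local_height()
        network_height = self.network.get_server_height()
        # Don't declare the wallet caught up until the local chain has
        # actually reached the network tip, so _get_value_for_name never
        # builds its proof on a stale block.
        if network_height > 0 and local_height >= network_height:
            if not self.caught_up.called:
                self.caught_up.callback(True)
```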
Previously only the sd blob was reflected; if the server indicated it
needed that blob, the rest of the stream would follow. This allowed
many streams to be partially reflected when, for whatever reason,
the connection was broken before the full upload completed. On a
subsequent run the client would then falsely believe reflector had
the whole stream when it actually only had some portion of it.
This solution isn't ideal (I'm most of the way done with a better one),
but it can be deployed now.
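The client-side change amounts to something like this sketch (function names are illustrative, not the actual lbrynet API): instead of stopping after the sd blob, send every blob the server reports as missing.

```python
from twisted.internet import defer


@defer.inlineCallbacks
def reflect_stream(client, sd_blob, stream_blobs):
    send_sd, needed = yield client.send_sd_blob_query(sd_blob.blob_hash,
                                                      sd_blob.length)
    if send_sd:
        yield client.send_blob(sd_blob)
    # Previously the upload could stop after the sd blob; now every content
    # blob the server still needs is sent, so a stream is never left
    # partially reflected.
    for blob in stream_blobs:
        if blob.blob_hash in needed:
            yield client.send_blob(blob)
    defer.returnValue(len(needed))
```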
When creating a CryptStream, the last blob is always empty. Previously, this
blob wouldn't be deleted and would instead just stick around in the blobfiles
directory.
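The cleanup is roughly the following (how the zero-length terminator blob shows up on disk is an assumption):

```python
import os


def delete_empty_terminator_blob(blob_dir, last_blob_hash):
    # The stream terminator blob produced by CryptStream has no content,
    # so once the stream is closed it can be removed from blobfiles.
    path = os.path.join(blob_dir, last_blob_hash)
    if os.path.isfile(path) and os.path.getsize(path) == 0:
        os.remove(path)
```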
- Both methods now take an SD hash instead of a path (more logical API
and avoids potential security problems)
- Moves the core logic into functions in a new module,
lbry.core.file_utils
- Adds reveal support for Windows
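A sketch of what the file_utils module could provide, covering both the reveal support and the get_read_handle helper mentioned in the first list; the per-platform commands (especially the Windows explorer invocation) are assumptions:

```python
import contextlib
import os
import subprocess
import sys


@contextlib.contextmanager
def get_read_handle(path):
    # Context manager so callers can't leave the read handle hanging.
    handle = open(path, 'rb')
    try:
        yield handle
    finally:
        handle.close()


def reveal(path):
    # Open the containing folder with the file pre-selected.
    if sys.platform == 'darwin':
        subprocess.Popen(['open', '-R', path])
    elif sys.platform.startswith('win'):
        # Windows: explorer's /select flag highlights the file.
        subprocess.Popen(['explorer', '/select,', os.path.normpath(path)])
    else:
        # Most Linux file managers lack a standard "select file" flag,
        # so just open the containing directory.
        subprocess.Popen(['xdg-open', os.path.dirname(path)])
```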
* Move the blob verification to the actual Blob object
* Remove the check on verification time
* Remove get_blob_length from BlobManager
The verification-time check is removed because I'm not sure what checking verification time against ctime gets us, except some protection against an accidental modification of the blob.
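A rough sketch of verification living on the blob itself (the real Blob/HashBlob classes differ in detail; sha384 is assumed as the blob hash function):

```python
import hashlib
import os


class Blob(object):
    def __init__(self, blob_dir, blob_hash):
        self.blob_hash = blob_hash
        self.file_path = os.path.join(blob_dir, blob_hash)

    def verify(self):
        # Recompute the hash of the bytes on disk and compare it to the
        # expected blob hash; no ctime/verification-time bookkeeping.
        if not os.path.isfile(self.file_path):
            return False
        h = hashlib.sha384()
        with open(self.file_path, 'rb') as f:
            for chunk in iter(lambda: f.read(2 ** 16), b''):
                h.update(chunk)
        return h.hexdigest() == self.blob_hash
```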
It is possible (likely) that a manage call is in progress when
`stop` is called. When that happens, _manage will continue to
run and schedule another call, so the manager won't actually stop
and will likely cause an error once other components have been torn down.
This fix adds a deferred that gets created when a manage call starts
and is fired when it's done. At that point it's safe to start the
stopping process. It also adds a check to not schedule another manage
call if we're stopped.
This fixes https://app.asana.com/0/142330900434470/239832897034382
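A minimal Twisted sketch of the fix described above (class and attribute names are illustrative): an in-flight manage pass is tracked with a deferred, `stop` waits on it, and nothing gets rescheduled once we're stopped.

```python
from twisted.internet import defer, reactor


class Manager(object):
    def __init__(self):
        self.stopped = True
        self._next_manage_call = None          # pending reactor.callLater
        self._running_manage_deferred = None   # fires when a manage pass ends

    def start(self):
        self.stopped = False
        self._schedule_next_manage()

    def _schedule_next_manage(self):
        # Don't schedule another manage call if we're stopped.
        if not self.stopped:
            self._next_manage_call = reactor.callLater(60, self._manage)

    @defer.inlineCallbacks
    def _manage(self):
        self._running_manage_deferred = defer.Deferred()
        try:
            yield self._do_manage_work()
        finally:
            self._schedule_next_manage()
            d, self._running_manage_deferred = self._running_manage_deferred, None
            d.callback(None)

    @defer.inlineCallbacks
    def stop(self):
        self.stopped = True
        if self._next_manage_call is not None and self._next_manage_call.active():
            self._next_manage_call.cancel()
        self._next_manage_call = None
        # If a manage pass is in progress, wait for it to finish before
        # letting teardown of other components continue.
        if self._running_manage_deferred is not None:
            yield self._running_manage_deferred

    def _do_manage_work(self):
        # Placeholder for the real peer/blob management work.
        return defer.succeed(None)
```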