-move the status message on connectionDone to the client; indicate whether
blobs were sent (and how many blobs the reflector still needs, if any)
-retry uploading a failed blob only once after the first failure, to prevent
indefinite retries
-add a {'sd_blob_hash': …, 'sd_blob_size': …} query type with a
{'send_sd_blob': True/False, 'needed_blobs': []} response
this lets the reflector client know how much of a stream the reflector
already has, so as to minimize the number of subsequent requests and
prevent streams from being partially reflected (a sketch of the
exchange follows this list)
-remove the empty {} request
this fixes a bug where the app would fail to start if GitHub is down.
-check for a new version every 30 minutes instead of every 12 hours
-check for connection problems every 30 seconds instead of every second
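A rough sketch of the new sd-blob query and response, assuming JSON-encoded messages; the field names come from the entry above, while the helper functions and example values are hypothetical:

```python
import json

# Build the new sd-blob query described above. The field names come from the
# changelog entry; the hash and size values used below are placeholders.
def make_sd_blob_query(sd_blob_hash, sd_blob_size):
    return json.dumps({'sd_blob_hash': sd_blob_hash, 'sd_blob_size': sd_blob_size})

# Interpret the reflector's response: 'send_sd_blob' says whether the sd blob
# itself still needs to be uploaded, and 'needed_blobs' lists the stream blobs
# the server is missing, so the client can skip everything it already has.
def parse_sd_blob_response(raw_response):
    response = json.loads(raw_response)
    return response['send_sd_blob'], response.get('needed_blobs', [])

# Illustrative exchange (hashes shortened; real ones are full blob hashes):
request = make_sd_blob_query('a1b2c3', 1024)
send_sd, needed = parse_sd_blob_response(
    '{"send_sd_blob": false, "needed_blobs": ["d4e5f6", "a7b8c9"]}')
```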
What was happening was that the wallet claimed to be caught up before it
actually was, so the wallet's local_height was still the value from
when lbry was last run, frequently 20 to 50 or more blocks
behind. _get_value_for_name uses the block at local_height as the
basis for the proof. If _get_value_for_name is called during the
window between when the wallet claims to be caught up and when it
actually is, the "Block too deep" error happens. And since the discover
page of the UI does name resolution right away, the error basically
happens any time somebody starts the app after not using it for a few hours.
This changes the startup behaviour of the wallet to
- use the `update` callback provided by lbryum
- check that local_height and network_height match before declaring
that the wallet has caught up
For reference, the error is raised here:
1b896ae75b/src/rpc/claimtrie.cpp (L688)
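A minimal sketch of the new catch-up check, assuming hypothetical get_local_height/get_network_height accessors and that lbryum's `update` callback ends up invoking on_lbryum_update; the actual wiring inside the wallet differs:

```python
from twisted.internet import defer

class WalletCatchupGuard(object):
    """Sketch: only declare the wallet caught up once its local chain height
    matches the network height reported by lbryum."""

    def __init__(self, wallet):
        self.wallet = wallet
        self._caught_up = defer.Deferred()

    def on_lbryum_update(self):
        # Called from lbryum's `update` callback. get_local_height and
        # get_network_height are hypothetical accessors.
        local_height = self.wallet.get_local_height()
        network_height = self.wallet.get_network_height()
        if local_height > 0 and local_height == network_height:
            if not self._caught_up.called:
                self._caught_up.callback(local_height)

    def wait_for_catchup(self):
        # Name resolution (e.g. _get_value_for_name) should wait on this
        # before trusting the block at local_height as the proof basis.
        return self._caught_up
```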
Previously only the sd blob was reflected; if the server indicated it
needed that blob, then the rest of the stream would follow. This allowed
many streams to be partially reflected when, for whatever reason, the
connection was broken before the full upload completed. On a subsequent
run, the client would then falsely believe the reflector had the whole
stream when it actually had only some portion of it.
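A sketch of the full-stream flow this change moves toward, using the needed_blobs response described earlier; the client methods and blob attributes shown here are hypothetical, not the actual client API:

```python
def reflect_stream(client, sd_blob, stream_blobs):
    """Sketch of the full-stream flow. `client.query_sd_blob` and
    `client.send_blob` are hypothetical helpers."""
    send_sd, needed = client.query_sd_blob(sd_blob.blob_hash, sd_blob.length)
    if send_sd:
        client.send_blob(sd_blob)
    # Upload every stream blob the server reports as missing, even when the
    # server already has the sd blob from an earlier, interrupted upload.
    for blob in stream_blobs:
        if blob.blob_hash in needed:
            client.send_blob(blob)
```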
This solution isn't ideal (I'm most of the way done with a better one),
but it can be deployed now.