Completely remove any assumption of multi-tab uploading from the server status (we should have done this previously, but wanted to be conservative). This should make it less confusing to the user.
The real issue still remains -- the upload is somehow locked at the backend.
Also, when we override the error to present a user-friendly message, pass the original error to the log (just in case it gives extra info).
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).
Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help. So, remove one more useless retry attempt.
Removed logging since we've gathered enough data, and this hook is expected to be hit a lot, so we don't want to clog the logs.
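For reference, a minimal sketch of the resulting `tus-js-client` options, assuming the delay list ends up as roughly [5s, 10s] (the endpoint and handler names are illustrative, not the actual code):

```js
import * as tus from 'tus-js-client';

const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT, // hypothetical constant
  // Assumed final values: the 0s attempt never helped, and retries past
  // 10s don't fix the "locked" case anyway.
  retryDelays: [5000, 10000],
  onError: (err) => {
    console.error(err); // keep logging the original error for diagnostics
    showUserFriendlyError(); // hypothetical helper for the overridden message
  },
});
upload.start();
```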
## Background
Per the developer of `tus-js-client`, it is normal to occasionally encounter upload errors; the auto-retry mechanism is meant to address this.
While implementing tab-lock to prevent multiple uploads of the same file, we used 423_locked to detect this scenario. But 423_locked can also mean "the server is busy writing the chunk" (per discussion with Randy), so we accidentally disabled the auto-retry mechanism.
Meanwhile, from a prior discussion with Randy, one chunk write took 3 minutes. Our current maximum of "retry after 15s" wouldn't help in that case.
## Change
1. Given that tab-locking was improved recently and no longer relies on server error messages (we use secure storage to mark a file as locked), reverted the "skip retry on 409/423" change. This is now back to the normal recommended behavior.
2. `tus-js-client` currently does not support a variable retry delay, otherwise we could prolong the delay when the error is 423. Since we know it could take up to 3 minutes, and we don't know whether it's file-size dependent, just add another 30s retry and present a friendlier message asking the user to retry themselves after waiting a bit (see the sketch below).
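As a sketch of what the two changes amount to in `tus-js-client` terms (the endpoint name and the previous delay values are assumptions):

```js
const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT, // hypothetical
  // Extra 30s attempt at the end, since a single chunk write can take minutes:
  retryDelays: [0, 5000, 10000, 15000, 30000],
  // REVERTED: a custom handler that skipped retries on 409/423,
  // which looked something like:
  // onShouldRetry: (err, retryAttempt, options) => {
  //   const status = err.originalResponse ? err.originalResponse.getStatus() : 0;
  //   return status !== 409 && status !== 423;
  // },
});
```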
Our current chunk size is 25,000,000.
Google and S3 documentation suggest chunk sizes that are multiples of 256 KiB, and MongoDB does too. We aren't using any of those, but I guess there's no harm in doing the same. From the logs, the values "25,000,000" and "50,000,000" seem to be common.
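For the arithmetic (a sketch, not a committed constant): the nearest multiple of 256 KiB above 25,000,000 is 25,165,824 bytes, which is exactly 24 MiB:

```js
const ALIGN = 256 * 1024; // 262,144 bytes
const aligned = Math.ceil(25000000 / ALIGN) * ALIGN;
console.log(aligned); // 25165824 (= 96 * 256 KiB = 24 MiB)
```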
f0cd1592 did an additional call instead of a replacement.
Aside: the 1-hour value only takes effect in dev instances. For prod, CloudFlare seems to override it to 4 hours.
## Issue
In the original Desktop code, new strings encountered at runtime are automatically added to the local `app-strings.json` file. The feature is unavailable on Web because writing to a file would require explicit permissions.
## Change
Partially restore the functionality by saving the strings to memory and retrieving them from the console via `copy(window.new_strings)`. It's a little bit of manual work, but I think that's good, as it forces a sanity check before committing (previously, experimental/developmental strings were committed and ended up being translated in Transifex).
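A minimal sketch of the idea, assuming a lookup helper shaped roughly like ours (the names are illustrative):

```js
window.new_strings = window.new_strings || {};

function __(message) {
  const translated = appStrings[message]; // the loaded app-strings map
  if (translated === undefined) {
    window.new_strings[message] = message; // collect unknown strings in memory
  }
  return translated || message;
}

// Later, from the devtools console:
//   copy(window.new_strings)
// then paste the result into app-strings.json before committing.
```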
## Ticket
418 TUS: skip fingerprint storage
- Fingerprints for canceled uploads are not being cleared by `tus-js-client`. They are stored in `localStorage`, which has a size limit.
- We are storing the confirmed fingerprint (from the backend) in redux anyway, so we don't need that functionality (see the sketch below).
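A sketch of the change, using tus-js-client's `storeFingerprintForResuming` option (other options omitted; the endpoint name is illustrative):

```js
const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT, // hypothetical
  // Don't persist fingerprints to localStorage; the confirmed fingerprint
  // from the backend lives in redux instead.
  storeFingerprintForResuming: false,
});
```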
* Upload: fix redux key clash
## Issue
`params` is the "final" value that will be passed to the SDK, and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.
For the anonymous case, a clash will still happen since the channel part is hardcoded to `anonymous`.
## Approach
Generate a GUID in `params` and use that as the key to handle all the cases above. We couldn't use `uploadUrl` because v1 doesn't have it.
The old formula is retained so users can retry or cancel their existing uploads one last time (otherwise they would persist forever). The next upload will use the new key.
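A rough sketch of the keying (field names are illustrative):

```js
import { v4 as uuidv4 } from 'uuid';

// Assign a GUID once, when the upload entry is first created...
if (!params.guid) {
  params.guid = uuidv4();
}

// ...and key the redux entry by it. Entries under old-formula keys are
// still readable so existing uploads can be retried or canceled once more.
const uploadKey = params.guid;
```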
* Upload: add tab-locking
## Issue
- The previous code does detect uploads from multiple tabs, but it does so by handling the CONFLICT error message from the backend. In certain corner cases, this does not work well. A better way is to disallow resumption while the same file is being uploaded from another tab.
- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. The user either has to refresh or click Cancel, and clicking Cancel results in a 404 backend error. This should be avoided.
## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage` (see the sketch below).
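A minimal sketch of the mechanism, with hypothetical key names:

```js
const lockKey = (uploadKey) => `upload-lock:${uploadKey}`;

export function lockUpload(uploadKey, tabId) {
  window.localStorage.setItem(lockKey(uploadKey), tabId);
}

export function isLockedByAnotherTab(uploadKey, tabId) {
  const owner = window.localStorage.getItem(lockKey(uploadKey));
  return owner !== null && owner !== tabId;
}

// The 'storage' event fires in the *other* tabs, so each tab can react when
// an upload is locked or removed elsewhere (e.g. drop a completed item).
window.addEventListener('storage', (e) => {
  if (e.key && e.key.startsWith('upload-lock:')) {
    // dispatch a redux update for the affected upload here
  }
});
```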
## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.
* Upload: check if locked before confirming delete
## Reproduce
1. Have 2 tabs and a paused upload.
2. Open the "cancel" dialog in one of the tabs.
3. Continue the upload in the other tab.
4. Confirm the cancellation in the first tab.

The upload disappears from both tabs, but based on network traffic, the upload keeps happening. (If the upload finishes, the claim seems to get created.)
* Fix query selection
* Fix xml format
* Fix link url and author_url
* Refactor repeated components
* Refactor repeated embed iframe string
* Add support for passing referrer queries to src
* Change iframe id from lbry to odysee
* Improve replace logic understanding
* Fix URL
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
## Issue
Tom is seeing crashes on the line that tries to remove the script, saying it's not a child of that node.
## Changes
- I'm guessing the `fjs` node that gets found is sometimes not in `<head>`, but we always remove from `<head>` during cleanup. Just append to the bottom of `<head>` and remove from `<head>` (see the sketch below). I think script order doesn't matter if we are injecting at runtime?
- Fixed an effect dependency while at it (the latest PR removed the need to check for `type`).
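A simplified sketch of the resulting hook shape (our real hook takes more parameters):

```js
import { useEffect } from 'react';

function useScript(src) {
  useEffect(() => {
    const script = document.createElement('script');
    script.src = src;
    // Always append to -- and later remove from -- document.head, instead of
    // inserting relative to a found node that may live elsewhere.
    document.head.appendChild(script);
    return () => {
      document.head.removeChild(script);
    };
  }, [src]);
}
```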
* coming along well
* coming along well
* adding custom react element
* coming along well
* coming along well
* coming along well
* working pretty well
* almost done
* essentially working just could use a couple touchups
* cleanup and lint errors
* fix lint errors
* fix flow errors
* possible bugfix
* dynamically set width and height
* only run when rowdata is populated
* trying using ref
* better way to check for card population
* working implementation
* working implementation
* clean up flow and clean up script
* fix typo in comment and logs
* Route recommendation search to recsys 5% of the time + add `user_id`
## Ticket
334 send some recommended requests to recsys
## Approach
`doSearch`:
- If the search options include `related_to`, route the request to the new `searchRecommendations`, which performs the 5% check and appends `user_id` at the end (see the sketch below). This way, we don't need to alter the function signature of `doSearch`.
- Else, proceed as normal.
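Roughly (the endpoint constants and exact plumbing are assumptions):

```js
function searchRecommendations(queryString, userId) {
  const useRecsys = Math.random() < 0.05; // route 5% of requests to recsys
  let url = `${useRecsys ? RECSYS_ENDPOINT : SEARCH_ENDPOINT}?${queryString}`;
  if (useRecsys && userId) {
    url += `&user_id=${userId}`; // appended at the end
  }
  return fetch(url).then((res) => res.json());
}
```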
* Always go to alt provider
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* add gdpr support
* only run on production
* testing implementation
* just needs last touches then ready
* ready for merge
* add cookies to sidebar
* hide button when secureprivacy not available
* switch over to loading script as a react hook
* conditionally add secureprivacy script
* save gdpr status on session
* better design
* SyncFatalError: show nag instead of hard-crashing.
## Issue
When sync fails, we crash the app.
## Ticket
Maybe closes 39 "Better handle both internal and web backend interruptions / downtime"
## Approach
I'm tackling this from the standpoint that (1) sync errors are not that fatal -- we'll just lose a few recent changes, and (2) network disconnection is the most common cause.
## Changes
- If we are offline:
  - Inform the user through a nag. All other statuses are meaningless if we are offline.
- If we are online:
  - If the API status is STATUS_DOWN, show the existing crash page.
  - If there is a sync error, show a nag saying settings are now potentially unsynchronized, and add a button to retry sync.
  - If there is a chunk error, nag to reload. (A sketch of this decision tree follows.)
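A sketch of the decision tree (the return shape and message wording are illustrative):

```js
function getStatusView({ online, apiStatus, syncError, chunkError }) {
  if (!online) return { nag: 'You are offline. Check your connection.' };
  if (apiStatus === STATUS_DOWN) return { crash: true }; // existing crash page
  if (syncError) return { nag: 'Settings may be out of sync.', action: 'Retry sync' };
  if (chunkError) return { nag: 'Please reload to get the latest version.', action: 'Reload' };
  return null; // no nag, render normally
}
```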
* Attempt to detect `status=DOWN`
The previous code resolved the status to either "ok" or "not ok", which made the app unable to differentiate between the "degraded" (nag) and "down" (crash) states.
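A sketch of the three-state resolution, assuming the status endpoint exposes a general state field (the field and constant names are assumptions):

```js
function resolveApiStatus(response) {
  switch (response.general_state) { // hypothetical field name
    case 'ok':
      return STATUS_OK;
    case 'degraded':
      return STATUS_DEGRADED; // show a nag
    default:
      return STATUS_DOWN; // show the crash page
  }
}
```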