We stopped asking tus to save the upload URL back in December, so there should be no reason for it to be writing to localStorage.
Adding more logs to determine the actual cause -- localStorage being full, or not available. Neither should affect the upload, but they are the only known causes for that error message, so this should narrow down the investigation path.
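For reference, a minimal sketch of the kind of probe that could distinguish the two causes (the function and key names here are illustrative, not existing code):

```js
// Hypothetical probe: tell "localStorage unavailable" apart from "quota exceeded".
function probeLocalStorage() {
  try {
    const testKey = '__ls_probe__';
    window.localStorage.setItem(testKey, '1');
    window.localStorage.removeItem(testKey);
    return 'ok';
  } catch (err) {
    // Most browsers throw a DOMException named QuotaExceededError (legacy code 22)
    // when storage is full; anything else usually means localStorage is disabled.
    if (err && (err.name === 'QuotaExceededError' || err.code === 22)) {
      return 'full';
    }
    return 'unavailable';
  }
}
```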
## Issue
`/$/embed/尾崎豊----大阪球場ライブ%E3%80%801_6/bfd63daa9453bb1a11674ca8a7c5f5dd6b49d024?r=2ituZftpdG18f1TBADDbCaaEZ9ecYYYm` wasn't working
## Change
This probably needs to be revisited properly, but for now, grab the `requestPath` needed for resolving before escaping the characters.
Tested that `http://localhost:1337/$/search?q=%22\/%3E%3Cimg%20s+src+c=x%20on+onerror+%20=alert(1)+\%3E` would still be blocked.
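A rough sketch of the idea (the handler and helper names are illustrative, not the actual server code):

```js
// Sketch: keep an unescaped copy of the path for claim resolution, and only
// escape the copy that gets echoed back into markup.
async function handleEmbedRequest(ctx) {
  // Grab the raw request path first -- resolving needs the original characters
  // (e.g. the Japanese claim name in the embed URL above).
  const requestPath = decodeURIComponent(ctx.path);

  // Escape separately for anything rendered into HTML, so payloads like the
  // search-query example above are still neutralized.
  const escapedPath = escapeHtmlProperty(ctx.path); // hypothetical helper

  const claim = await resolveClaim(requestPath); // hypothetical helper
  return renderEmbedPage(claim, escapedPath);    // hypothetical helper
}
```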
* Rewrite __ to be usable on server side
* Add changelog entry
* Clean invisible characters from primaryModValue
* Revert "Rewrite __ to be usable on server side"
This reverts commit 53f63c01f3b56c5530955323612826c0ac5dc5d3.
* Make pass-through placeholder for __ fn until it can be adapted for node (see the sketch after this list).
* Switch messages to inline interpolation until i18n is done
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
* Adjust metadata to allow search pages to reflect the search term
* Update changelog
* Address empty query string case and pass query string to append to og:url
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
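For context, the pass-through placeholder mentioned above can be as simple as the following sketch (assuming `%token%`-style placeholders in the messages; the real helper may differ):

```js
// Sketch of a server-safe pass-through `__`: no translation lookup, just
// optional token interpolation so shared code can run under Node.
function __(message, tokens) {
  if (!tokens) return message;
  return message.replace(/%([\w-]+)%/g, (match, key) =>
    Object.prototype.hasOwnProperty.call(tokens, key) ? String(tokens[key]) : match
  );
}

module.exports = { __ };
```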
With the throughput tweaks at the backend, it seems like the number of "file is locked" errors has gone down.
The next thing to try is reducing the chunk size, in the hope that file writes will be faster and the lock won't be held long enough to cause a timeout.
Completely remove any assumptions about multi-tab uploading from the server status handling (should have done this previously, but wanted to be conservative). This should make things less confusing to the user.
The real issue still remains -- the upload is somehow locked at the backend.
Also, when we override the error to present a user-friendly message, pass the original error to the log (just in case it gives extra info).
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).
Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help, so drop one more useless retry attempt.
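As a sketch, the retry schedule change amounts to something like this (the exact delay values are assumptions based on the description above, not the actual config):

```js
// Before (assumed): retry immediately, then at 5s, 10s, 15s.
// const retryDelays = [0, 5000, 10000, 15000];

// After: drop the useless 0s attempt and the attempt past 10s, since the logs
// show anything not fixed by ~10s is most likely the "locked" case anyway.
const retryDelays = [5000, 10000];

// Passed to tus-js-client, which retries failed chunk uploads on this schedule:
// const upload = new tus.Upload(file, { endpoint, retryDelays, /* ... */ });
```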
Removed logging since we've gathered enough data, and this hook is expected to be hit a lot, so we don't want to clog the logs.
## Background
Per the developer of `tus-js-client`, it is normal to occasionally encounter upload errors; the auto-retry mechanism is meant to address this.
While implementing the tab-lock to prevent multiple uploads of the same file, 423_locked was used to detect that scenario. But 423_locked can also mean "the server is busy writing the chunk" (per discussion with Randy), so we effectively disabled the auto-retry mechanism by accident.
Meanwhile, from a prior discussion with Randy, one of the observed chunk writes took 3 minutes. Our current maximum of "retry after 15s" wouldn't help in that case.
## Change
1. Given that tab-locking was improved recently and no longer relies on the server error messages (we use secure storage to mark a file as locked), reverted the change to "skip retry on 409/423". This is back to the normal recommended behavior.
2. `tus-js-client` currently does not support varying the retry delay based on the error, otherwise we could prolong the delay when the error is 423. Since we know it could take up to 3 minutes, and we don't know whether it's file-size dependent, just add another 30s retry and show a friendlier message asking the user to retry themselves after waiting a bit (sketched below).
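A minimal sketch of what that looks like with `tus-js-client` options (the endpoint constant, helper names, and message wording are illustrative):

```js
// Sketch: let tus retry on 409/423 again (default behavior), stretch the last
// retry out to 30s, and show a friendlier message once retries are exhausted.
const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT, // assumed constant
  retryDelays: [5000, 10000, 30000],
  onError: (err) => {
    console.error(err); // keep the original error in the logs
    notifyUser(__('Upload failed. Please wait a moment and try again.')); // hypothetical helper
  },
  // ...metadata, onProgress, onSuccess, etc.
});
```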
Our current chunk size is 25,000,000.
Google and S3 documentation suggest making the chunk size a multiple of 256KiB; MongoDB does too. We aren't using any of those, but there's probably no harm in doing the same. From the logs, the values "25,000,000" and "50,000,000" seem to be common.
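For example, keeping roughly the same size but aligned to 256KiB boundaries (the target of ~25MB comes from the current value above; the rounding is just arithmetic):

```js
// 256KiB = 262,144 bytes. Round the current ~25MB chunk down to a multiple of it.
const KIB_256 = 256 * 1024; // 262144
const chunkSize = Math.floor(25000000 / KIB_256) * KIB_256; // 95 * 262144 = 24903680

// Passed to tus-js-client:
// const upload = new tus.Upload(file, { chunkSize, /* ... */ });
```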
f0cd1592 made an additional call instead of replacing the existing one.
Aside: the 1-hour value only takes effect in dev instances. For prod, Cloudflare seems to override it to 4 hours.
## Issue
In the original Desktop code, new strings encountered at runtime are automatically added to the local `app-strings.json` file. That feature is unavailable on Web because writing to a file would require explicit permissions.
## Change
Partially restore the functionality by saving the strings to memory and retrieving them from the console via `copy(window.new_strings)`. It's a bit of manual work, but I think that's a good thing, as it forces a sanity check before committing (previously, experimental/developmental strings were committed and ended up being translated in Transifex).
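Roughly, the idea looks like this (a sketch; the import path and interpolation details are assumptions, but `window.new_strings` and `app-strings.json` are the names used above):

```js
// Sketch: remember strings that aren't in app-strings.json yet, so they can be
// copied out of the devtools console with `copy(window.new_strings)`.
import appStrings from 'static/app-strings.json'; // assumed path

window.new_strings = window.new_strings || {};

export function __(message, tokens) {
  if (!(message in appStrings)) {
    // Not written to disk on Web -- just collected in memory so the developer
    // can review the new strings before committing them.
    window.new_strings[message] = message;
  }
  const translated = appStrings[message] || message;
  return tokens
    ? translated.replace(/%([\w-]+)%/g, (m, key) => (key in tokens ? String(tokens[key]) : m))
    : translated;
}
```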