Currently, homepages are still built as part of the app, so this change doesn't bring much benefit other than supporting the wrapper app.
Once the service is moved out of the app, we won't have to rebuild the app when homepages change, and the `ui.js` bundle will be smaller since there is no longer a need to code-split.
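For illustration, a minimal sketch of where this is headed, assuming a standalone homepage service (the endpoint and payload shape below are hypothetical, not the real service):

```js
// Hypothetical runtime fetch of homepage data: no rebuild or code-split chunk
// is needed when homepages change, since nothing is bundled into ui.js.
async function fetchHomepageData(language = 'en') {
  // Placeholder URL; the real service endpoint will differ.
  const response = await fetch(`https://homepages.example.com/${language}.json`);
  if (!response.ok) {
    throw new Error(`homepage fetch failed: ${response.status}`);
  }
  return response.json();
}
```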
## Ticket
910
## Changes
- Change the "message" from a generic "tus-upload" to more specific ones like "tus: failed to resume upload". These are grouped as "Events" in sentry, so we can isolate and search for them easily.
- Pass more info to Sentry (previously only available from Slack). It is still good to send to both, since some browsers block Sentry even without blocker extensions.
- Reduce verbosity of Slack's
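A rough sketch of the reporting pattern described above (the field names and message text are illustrative, not the exact ones used in the app):

```js
import * as Sentry from '@sentry/browser';

// Send a specific message so Sentry groups occurrences under a searchable
// Issue, and attach the details that previously only went to Slack.
function reportTusError(message, details) {
  Sentry.withScope((scope) => {
    scope.setExtra('details', details); // e.g. upload id, file size, server response
    Sentry.captureMessage(message); // e.g. 'tus: failed to resume upload'
  });
}
```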
## Notes
- Was unable to fix the "unknown" problem mentioned in the ticket. The API does not accept `new Error('xxx')`, even though that approach is mentioned by many in the forums. It might be due to the version of Sentry that we are using.
- To search for tus issues, go to "Issues" and query `message:tus*`. Results are collapsed per event, so click on the item of interest, then click "Events" at the upper right to see all occurrences of the same problem.
* Move into getLocalStorageSummary + always log
- Move the localStorage inspection code into `getLocalStorageSummary` to clean up the clutter.
- Always log the localStorage info to get a bigger picture of what's going on with the QuotaExceededError (a sketch of such a helper follows).
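A minimal sketch of what a `getLocalStorageSummary` helper could look like (the actual implementation may differ): it reports whether localStorage is usable and roughly how full it is.

```js
function getLocalStorageSummary() {
  try {
    // Probe availability first -- this throws when localStorage is blocked or full.
    const testKey = '__ls_probe__';
    window.localStorage.setItem(testKey, '1');
    window.localStorage.removeItem(testKey);

    // Rough usage estimate (characters, not exact bytes).
    let approxChars = 0;
    for (let i = 0; i < window.localStorage.length; i++) {
      const key = window.localStorage.key(i);
      approxChars += key.length + (window.localStorage.getItem(key) || '').length;
    }
    return { available: true, keyCount: window.localStorage.length, approxChars };
  } catch (e) {
    return { available: false, error: e ? e.message : 'unknown' };
  }
}
```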
* Remove `findPreviousUploads` - we use the URL stored in Redux.
Something I forgot to remove in the past. It also reads from localStorage, so remove it since we are trying to avoid touching localStorage (see the sketch below).
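A minimal sketch of the resulting flow, assuming tus-js-client's `uploadUrl` option (other options omitted; `uploadUrlFromRedux` stands in for whatever we keep in the Redux store):

```js
import * as tus from 'tus-js-client';

function resumeUpload(file, endpoint, uploadUrlFromRedux) {
  const upload = new tus.Upload(file, {
    endpoint,
    // Resume from the URL we already track in Redux instead of asking tus to
    // look up previous uploads in localStorage via findPreviousUploads().
    uploadUrl: uploadUrlFromRedux || null,
    onError: (err) => console.error('tus upload error', err),
  });
  upload.start();
  return upload;
}
```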
* Ensure localStorage is not used when uploading
I don't think localStorage is being written to when `storeFingerprintForResuming` is disabled, but applying the suggestion nonetheless (sketched below).
`https://github.com/tus/tus-js-client/issues/315#issuecomment-1046821112`
We stopped asking tus to save the upload URL back in December, so there should be no reason for it to be writing to localStorage.
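For reference, a minimal sketch of the option in question (other upload options omitted):

```js
import * as tus from 'tus-js-client';

function createUpload(file, endpoint) {
  return new tus.Upload(file, {
    endpoint,
    // With this disabled, tus should have no reason to write the upload
    // fingerprint/URL into localStorage at all.
    storeFingerprintForResuming: false,
  });
}
```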
Adding more logs to determine the actual cause: localStorage being full, or localStorage being unavailable. Neither should affect the upload, but they are the only known causes for that error message, so this should narrow down the investigation path.
## Issue
`/$/embed/尾崎豊----大阪球場ライブ%E3%80%801_6/bfd63daa9453bb1a11674ca8a7c5f5dd6b49d024?r=2ituZftpdG18f1TBADDbCaaEZ9ecYYYm` wasn't working
## Change
This probably needs a proper revisit, but for now, grab the `requestPath` that's needed for resolving before escaping the characters (see the sketch below).
Tested that `http://localhost:1337/$/search?q=%22\/%3E%3Cimg%20s+src+c=x%20on+onerror+%20=alert(1)+\%3E` would still be blocked.
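A minimal sketch of the ordering, with a hypothetical escaping helper (the app has its own):

```js
// Hypothetical helper; the app's actual escaping differs.
const escapeHtml = (s) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');

function buildEmbedContext(req) {
  // Grab the raw path needed for claim resolution *before* any escaping, so
  // percent-encoded/Unicode paths like the embed link above still resolve.
  const requestPath = decodeURIComponent(req.path);
  // Escaping is still applied to anything that ends up in the rendered HTML,
  // so the XSS-style query above remains blocked.
  const escapedUrl = escapeHtml(req.url);
  return { requestPath, escapedUrl };
}
```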
* Rewrite __ to be usable on server side
* Add changelog entry
* Clean invisible characters from primaryModValue
* Revert "Rewrite __ to be usable on server side"
This reverts commit 53f63c01f3b56c5530955323612826c0ac5dc5d3.
* Make a pass-through placeholder for the `__` fn until it can be adapted for node (see the sketch after this list).
* Switch messages to inline interpolation until i18n is done
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
* Adjust metadata to allow search pages to reflect the search term
* Update changelog
* Address empty query string case and pass query string to append to og:url
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
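A minimal sketch of a pass-through `__` placeholder that can run in node (token interpolation only, no translation lookup; the token format is an assumption):

```js
// Returns the message with %token% placeholders substituted, without doing
// any actual translation -- a stand-in until __ is adapted for node.
function __(message, tokens) {
  if (!tokens) return message;
  return message.replace(/%(\w+)%/g, (match, name) =>
    name in tokens ? String(tokens[name]) : match
  );
}

// Example: __('Search results for %query%', { query: 'cats' })
// -> 'Search results for cats'
```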
With the throughput tweaks at the backend, it seems like the number of "file is locked" errors has gone down.
The next thing to try is reducing the chunk size, hoping that file writes will be faster so the lock isn't held long enough to cause a timeout (sketched below).
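A minimal sketch of the setting involved (the actual values differ):

```js
import * as tus from 'tus-js-client';

function createSmallerChunkUpload(file, endpoint) {
  return new tus.Upload(file, {
    endpoint,
    // Smaller chunks mean each PATCH writes less data, so the backend's
    // per-chunk file lock is held for a shorter time.
    chunkSize: 25 * 1024 * 1024, // illustrative value only
  });
}
```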
Completely remove any assumptions about multi-tab uploading from the server status (should have done this previously, but wanted to be conservative). This should make it less confusing to the user.
The real issue still remains -- the upload is somehow locked at the backend.
Also, when we override the error to present a user-friendly message, pass the original error to the log in case it gives extra info (see the sketch below).
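A minimal sketch of that error-handling pattern (message text and logger are illustrative):

```js
function handleUploadError(originalError, showToUser, log) {
  // The user sees a friendly message...
  showToUser('Sorry, your upload could not be completed. Please try again later.');
  // ...while the original tus error still goes to the log for investigation.
  log('tus: upload failed', { originalError: originalError && originalError.message });
}
```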
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).
Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help. So, remove one more useless retry attempt (see the sketch below).
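A minimal sketch of the retry schedule change, assuming tus's `retryDelays` option (the previous values shown are an assumption):

```js
import * as tus from 'tus-js-client';

function createUploadWithTrimmedRetries(file, endpoint) {
  return new tus.Upload(file, {
    endpoint,
    // Drop the immediate retry and stop after the 10s attempt; later retries
    // don't help the "locked" case anyway.
    retryDelays: [5000, 10000], // previously something like [0, 5000, 10000, 15000]
  });
}
```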
Removed the logging since we've gathered enough data, and since this hook is expected to be hit a lot, we don't want to clog the logs.