Since `triggerBlacklist` is just a simple boolean and `<Ads>` is not doing any additional logic on top of it, it doesn't make sense to pass it as a parameter. Just don't mount the component -- it's more concise and obvious on the client side.
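Roughly, the client side gates the mount itself instead of forwarding the flag (a sketch; the `type` prop is an assumed example):

```jsx
// Before (sketch): the flag is passed down for <Ads> to ignore internally.
<Ads type="video" triggerBlacklist={triggerBlacklist} />

// After (sketch): just don't mount it.
{!triggerBlacklist && <Ads type="video" />}
```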
Currently, homepages are still built as part of the app, so this change doesn't bring much benefit other than supporting the wrapper app.
When the service is moved out of the app, we won't have to rebuild the app when homepages change, and the ui.js bundle will be smaller without the need to code-split.
## Ticket
910
## Changes
- Change the "message" from a generic "tus-upload" to more specific ones like "tus: failed to resume upload". These are grouped as "Events" in Sentry, so we can isolate and search for them easily.
- Pass more info to Sentry (previously only available from Slack). It is still good to send to both, since some browsers block Sentry even without blocker extensions.
- Reduce the verbosity of the Slack messages.
## Notes
- Was unable to fix the "unknown" problem mentioned in the ticket. The API does not accept `new Error('xxx')`, even though many in the forums say it does. It might be due to the version of Sentry that we are using.
- To search for tus issues, go to "Issues" and query `message:tus*`. Results are collapsed per event, so click on the item of interest, then click "Events" at the upper right to see all occurrences of the same problem.
* Move into getLocalStorageSummary + always log
- Move into getLocalStorageSummary to clean up the clutter.
- Always log the localStorage info to get a bigger picture of what's going on with the QuotaExceededError.
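A minimal sketch of what `getLocalStorageSummary` could report, assuming we only need availability and rough usage (the exact fields here are illustrative, not the actual implementation):

```js
function getLocalStorageSummary() {
  try {
    const keys = Object.keys(localStorage);
    // Rough size estimate: keys plus values, in UTF-16 code units.
    const approxBytes = keys.reduce(
      (n, k) => n + k.length + (localStorage.getItem(k) || '').length,
      0
    );
    return { available: true, keyCount: keys.length, approxBytes };
  } catch (e) {
    // localStorage can throw when disabled (e.g. some private-browsing modes).
    return { available: false, error: e.message };
  }
}
```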
* Remove 'findPreviousUploads' - we use the url stored in Redux.
Something I forgot to remove in the past. It also reads from localStorage, so remove it since we are trying to avoid touching localStorage.
* Ensure localStorage is not used when uploading
I don't think it's being written when `storeFingerprintForResuming` is disabled, but applying the suggestion nonetheless.
`https://github.com/tus/tus-js-client/issues/315#issuecomment-1046821112`
We stopped asking tus to save the upload URL back in December, so there should be no reason for it to be writing to localStorage.
Adding more logs to determine the actual cause -- localStorage being full, or localStorage being unavailable. Neither should affect the upload, but they are the only known causes for that error message, so this narrows down the investigation path.
## Issue
`/$/embed/尾崎豊----大阪球場ライブ%E3%80%801_6/bfd63daa9453bb1a11674ca8a7c5f5dd6b49d024?r=2ituZftpdG18f1TBADDbCaaEZ9ecYYYm` wasn't working
## Change
Probably need to revisit this properly, but for now, grab the `requestPath` that's needed for resolving before escaping the characters.
Tested that `http://localhost:1337/$/search?q=%22\/%3E%3Cimg%20s+src+c=x%20on+onerror+%20=alert(1)+\%3E` would still be blocked.
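A rough sketch of the ordering fix; the helper names here are hypothetical, not the actual server code:

```js
// Grab the path needed for resolving first, then escape for the response.
function handleEmbedRequest(rawPath) {
  const requestPath = decodeURIComponent(rawPath); // resolve with original characters
  const escapedPath = escapeHtml(rawPath);         // escape only for rendered output
  return { requestPath, escapedPath };
}

// Minimal stand-in for whatever escaping the server applies:
function escapeHtml(str) {
  return str.replace(/[&<>"']/g, (c) => `&#${c.charCodeAt(0)};`);
}
```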
* Rewrite __ to be usable on server side
* Add changelog entry
* Clean invisible characters from primaryModValue
* Revert "Rewrite __ to be usable on server side"
This reverts commit 53f63c01f3b56c5530955323612826c0ac5dc5d3.
* Make pass-through placeholder for __ fn until it can be adapted for node (see the sketch below).
* Switch messages to inline interpolation until i18n is done
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
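A minimal pass-through placeholder for `__`, assuming the usual `%token%`-style interpolation (the token syntax is an assumption here):

```js
// Pass-through: skip the string lookup, keep the interpolation,
// until the real i18n can run on node.
function __(message, params = {}) {
  return Object.keys(params).reduce(
    (str, key) => str.replace(`%${key}%`, params[key]),
    message
  );
}

// Example: __('Uploaded %title%', { title: 'demo' }) -> 'Uploaded demo'
```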
* Adjust metadata to allow search pages to reflect the search term
* Update changelog
* Address empty query string case and pass query string to append to og:url
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
With the throughput tweaks at the backend, the number of "file is locked" errors seems to have decreased.
The next thing to try is reducing the chunk size, hoping that file writes will be faster and the lock will be held too briefly to cause a timeout.
Completely remove any assumptions of multi-tab uploading from server status (should have done it previously, but wanted to be conservative). This should make it less confusing to the user.
The real issue still remains -- the upload is somehow locked at the backend.
Also, when we override the error to present a user-friendly message, pass the original error to the log (just in case it gives extra info).
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).
Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help. So, remove one more useless retry attempt.
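If I read the delays right, the tus `retryDelays` option ends up something like this (the exact previous array is an assumption):

```js
const uploadOptions = {
  // Was roughly [0, 5000, 10000, 15000]; drop the 0s attempt and the
  // attempt beyond 10s, since they never rescued the "locked" case.
  retryDelays: [5000, 10000],
};
```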
Removed logging since we've gathered enough data, and this hook is expected to be hit a lot, so we don't want to clog the logs.
## Background
Per the developer of `tus-js-client`, it is normal to occasionally encounter upload errors. The auto-retry mechanism is meant to address this.
While implementing the tab-lock to prevent multiple uploads of the same file, 423_locked was used to detect this scenario. But 423_locked can also mean "the server is busy writing the chunk" (per discussion with Randy), so we accidentally disabled the auto-retry mechanism.
Meanwhile, from a prior discussion with Randy, one chunk-write took 3 minutes. Our current maximum of "retry after 15s" wouldn't help there.
## Change
1. Given that tab-locking was improved recently and no longer relies on the server error messages (we use secure storage to mark a file as locked), reverted the change to "skip retry on 409/423". This is back to the normal recommended behavior.
2. `tus-js-client` currently does not support a variable retry delay, otherwise we could prolong the delay when the error is 423. Since we know it could take up to 3 minutes, and we don't know whether it's file-size dependent, just add another 30s retry and show a friendlier message asking the user to retry themselves after waiting a bit.
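A hedged sketch of the two changes combined; `notifyUser` is illustrative, and the base delay values are assumptions:

```js
const uploadOptions = {
  // Normal retries restored, plus one extra 30s attempt for slow chunk writes.
  retryDelays: [0, 5000, 10000, 15000, 30000],
  onError: (err) => {
    const status = err.originalResponse ? err.originalResponse.getStatus() : 0;
    if (status === 423) {
      // Friendlier than surfacing the raw 423:
      notifyUser(__('The file is still locked at the server. Please wait a bit and retry.'));
    } else {
      notifyUser(err.message);
    }
  },
};
```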
Our current chunk size is 25,000,000.
The Google and S3 documentation suggests making the chunk size a multiple of 256 KiB; MongoDB's does too. We aren't using any of those, but I guess there's no harm in doing the same. From the logs, the values "25,000,000" and "50,000,000" seem to be common.
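For reference, 25,000,000 is not itself a multiple of 256 KiB; the neighboring multiples work out like this:

```js
const KiB = 1024;
const base = 256 * KiB;   // 262,144 bytes
const current = 25000000; // current chunk size

Math.floor(current / base) * base; // 95 * 262,144 = 24,903,680
Math.ceil(current / base) * base;  // 96 * 262,144 = 25,165,824
```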
f0cd1592 made an additional call instead of replacing the existing one.
Aside: the 1-hour value only takes effect in dev instances. For prod, Cloudflare seems to override it to 4 hours.
## Issue
In the original Desktop code, new strings encountered at runtime are automatically added to the local `app-strings.json` file. The feature is unavailable on Web because writing to a file would require explicit permissions.
## Change
Partially restore the functionality by saving the strings to memory and retrieving them from the console via `copy(window.new_strings)`. It's a bit of manual work, but I think that's good, as it forces a sanity check before committing (previously, experimental/developmental strings were committed and translated in Transifex).
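A sketch of the idea; the lookup-table name is assumed:

```js
// Inside the __ string helper: remember any string we don't know yet,
// so it can be exported from the console with copy(window.new_strings).
function __(message) {
  if (!appStrings[message]) {
    window.new_strings = window.new_strings || {};
    window.new_strings[message] = message;
  }
  return appStrings[message] || message; // normal lookup continues
}
```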
## Ticket
418 TUS: skip fingerprint storage
- Fingerprints for canceled uploads are not being cleared by tus-js-client. They live in localStorage, and there is a limit on that.
- We are storing the confirmed fingerprint (from the backend) in redux anyway, so we don't need that functionality.
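In tus-js-client terms, the skip is a one-line option (a sketch; the endpoint constant is assumed):

```js
import * as tus from 'tus-js-client';

const upload = new tus.Upload(file, {
  endpoint: TUS_ENDPOINT,
  // Skip tus's localStorage fingerprint bookkeeping; redux already holds
  // the confirmed fingerprint from the backend.
  storeFingerprintForResuming: false,
});
```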
* Upload: fix redux key clash
## Issue
`params` is the "final" value that will be passed to the SDK and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.
For the anonymous case, a clash will still happen since the channel part is hardcoded to `anonymous`.
## Approach
Generate a guid in `params` and use that as the key to handle all the cases above. We couldn't use the `uploadUrl` because v1 doesn't have it.
The old formula is retained to allow users to retry or cancel their existing uploads one last time (otherwise it will persist forever). The next upload will be using the new key.
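A sketch of the keying; the uuid dependency is an assumption:

```js
import { v4 as uuidv4 } from 'uuid';

// Generate the guid once and keep it in params, so anonymous uploads
// and v1 uploads (which have no uploadUrl) get unique keys too.
function getUploadKey(params) {
  if (!params.guid) {
    params.guid = uuidv4();
  }
  return params.guid;
}
```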
* Upload: add tab-locking
## Issue
- The previous code does detect uploads from multiple tabs, but it did so by handling the CONFLICT error message from the backend. In certain corner cases, this does not work well. A better way is to not allow resumption while the same file is being uploaded from another tab.
- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. The user either has to refresh or click Cancel, and clicking Cancel results in a 404 backend error. This should be avoided.
## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage`.
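A sketch of the mechanism; the key format and handler are illustrative:

```js
const lockKey = (guid) => `upload-lock:${guid}`;

function lockUpload(guid) {
  localStorage.setItem(lockKey(guid), String(Date.now()));
}

function releaseUpload(guid) {
  localStorage.removeItem(lockKey(guid));
}

// The 'storage' event fires in *other* tabs when a key changes, which is
// what lets each tab react to locks/removals made elsewhere.
window.addEventListener('storage', (e) => {
  if (e.key && e.key.startsWith('upload-lock:')) {
    // refresh that upload's row in the UI
  }
});
```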
## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.
* Upload: check if locked before confirming delete
## Reproduce
- Have 2 tabs and a paused upload.
- Open the "cancel" dialog in one of the tabs.
- Continue the upload in the other tab.
- Confirm the cancellation in the first tab.
- The upload disappears from both tabs, but based on network traffic the upload keeps happening. (If the upload finishes, the claim seems to get created.)
* Fix query selection
* Fix xml format
* Fix link url and author_url
* Refactor repeated components
* Refactor repeated embed iframe string
* Add support for passing referrer queries to src
* Change iframe id from lbry to odysee
* Improve readability of the replace logic
* Fix URL
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
## Issue
Tom is seeing crashes on the line that tries to remove the script, saying it's not a child of that node.
## Changes
- I'm guessing the found `fjs` sometimes is not in `head`, yet we always remove from `head` during cleanup. Just append to the bottom of `head` and remove from `head`, so insertion and removal are symmetric. I think script order doesn't matter if we are injecting at runtime?
- Fixed effect dependency while at it (the latest PR removed the need to check for `type`).
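The symmetric version, as a hook sketch (the hook name and signature are illustrative):

```js
import { useEffect } from 'react';

function useScript(src) {
  useEffect(() => {
    const script = document.createElement('script');
    script.src = src;
    document.head.appendChild(script); // always append to head...
    return () => {
      document.head.removeChild(script); // ...so removing from head can't fail
    };
  }, [src]);
}
```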
* coming along well
* adding custom react element
* working pretty well
* almost done
* essentially working just could use a couple touchups
* cleanup and lint errors
* fix lint errors
* fix flow errors
* possible bugfix
* dynamically set width and height
* only run when rowdata is populated
* try using a ref
* better way to check for card population
* working implementation
* clean up flow and clean up script
* fix typo in comment and logs
* Route recommendation search to recsys 5% of the time + add `user_id`
## Ticket
334 send some recommended requests to recsys
## Approach
`doSearch`:
- If the search options include `related_to`, route that to the new `searchRecommendations` which performs the 5% check + appends `user_id` at the end. This way, we don't need to alter the function signature of `doSearch`.
- Else, proceed as normal.
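A sketch of the routing; the function bodies and constants here are assumptions built around the names in the text:

```js
const RECSYS_SAMPLE_RATE = 0.05;

function doSearch(query, options, userId) {
  return options.related_to
    ? searchRecommendations(query, options, userId) // 5% check + user_id
    : search(query, options); // unchanged path
}

function searchRecommendations(query, options, userId) {
  // 5% of related-content searches also carry user_id for recsys.
  const params =
    Math.random() < RECSYS_SAMPLE_RATE
      ? { ...options, user_id: userId }
      : options;
  return search(query, params);
}
```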
* Always go to alt provider
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* add gdpr support
* only run on production
* testing implementation
* just needs last touches then ready
* ready for merge
* add cookies to sidebar
* hide button when secureprivacy not available
* switch over to loading script as a react hook
* conditionally add secureprivacy script
* save gdpr status on session
* better design
* SyncFatalError: show nag instead of hard-crashing.
## Issue
When sync fails, we crash the app.
## Ticket
Maybe closes 39 "Better handle both internal and web backend interruptions / downtime"
## Approach
I'm tackling this from the standpoint that (1) sync errors are not that fatal -- we'll just lose a few recent changes -- and (2) network disconnection is the common cause.
## Changes
- If we are offline:
- Inform the user through a nag. All other statuses are meaningless if we are offline.
- If we are online:
- If api is STATUS_DOWN, show the existing crash page.
- If there is a sync error, show a nag saying settings are now potentially unsynchronized, and add a button to retry sync.
- If there is a chunk error, nag to reload.
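Condensed, the decision tree looks something like this (a sketch; names are illustrative, not the actual component code):

```js
function getSyncUiState({ online, apiStatus, syncError, chunkError }) {
  if (!online) return { nag: 'offline' }; // everything else is moot
  if (apiStatus === 'DOWN') return { crash: true }; // existing crash page
  if (syncError) return { nag: 'sync-failed', action: 'retry-sync' };
  if (chunkError) return { nag: 'reload' };
  return {};
}
```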
* Attempt to detect `status=DOWN`
Previous code resolves the status to either "ok" or "not", which makes the app unable to differentiate between the "degraded" (nag) and "down" (crash) states.
## Issue
The TUS client automatically removes the upload fingerprint whenever there is a 4xx error. When we try to resume later, we can't find the fingerprint and end up creating a new upload ID.
## Changes
Since we are also storing the uploadUrl ourselves, provide that to override the tus client's default behavior of starting a new session on 4xx errors.
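In tus-js-client, that's the `uploadUrl` option (a sketch; `storedUploadUrl` stands in for the value we keep in redux):

```js
const upload = new tus.Upload(file, {
  // Resume directly from our stored URL instead of creating a new
  // upload when the fingerprint has been dropped after a 4xx.
  uploadUrl: storedUploadUrl || null,
  // ...other options
});
upload.start();
```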
The stalling behavior has changed a bit, probably with the removal of CF.
The stall difference between 10MB and 50MB is not too noticeable, so picking 25MB as a start.