Ticket: 1256
For `notify`, "file is currently locked" and "no such file or directory" are indications that the previous "failed" SDK call actually worked. Tell the user to check their transactions.
This is a band-aid until odysee-api/401 is addressed.
There is anecdotal evidence that we need to wait up to 2 minutes to prevent the locking scenario.
`https://github.com/tus/tusd/pull/667#issuecomment-1079647640`
## Change
Instead of multiple retries at short intervals, do a one-time retry after a 2-minute wait. We'll do this until the fix is available in tusd v2.
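A minimal sketch of the idea (the wrapper name and shape are illustrative, not the actual implementation):

```js
// Hypothetical wrapper around the SDK notify call: a single retry after a
// 2-minute wait, instead of several short-interval retries.
const LOCK_RETRY_DELAY_MS = 2 * 60 * 1000; // anecdotally, the lock can take up to ~2 minutes to clear

async function notifyWithRetry(notify, params) {
  try {
    return await notify(params);
  } catch (err) {
    await new Promise((resolve) => setTimeout(resolve, LOCK_RETRY_DELAY_MS));
    return notify(params); // one retry only; if this also fails, surface the error
  }
}
```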
Was trying to save one state by assuming the homepage would be in a busy state and the ad-detection code would finish first. But this is not true for users with a small Following count.
* Factor out lighthouse-result processing code for FYP re-use.
The FYP results will be in the same format as LH.
* Recsys: add ability to pass in specific uuid to use
For FYP, we want to pass the UUID as a param when searching for recommendations. The search comes before the recsys entry creation, so we need to generate the UUID first when searching, and then tell recsys to use that specific ID.
* Redux: fetch and store FYP
Note that the gid cannot be used as "hash" for the uri list -- it doesn't necessarily change when the list changes, so we can't use it to optimize redux. For now, just always update/render when re-fetched.
* UI for FYP
* Mark rendered FYPs
* Pass the FYP ID down the same way as Collection ID
Not ideal, but at least it's in the same pattern as existing code for now. The whole prop-drilling problem with the claim components will be fixed together later.
* Include 'gid' and 'uuid' in recommendation search
* Allow users to mark recommendations that they dislike
* Pass auth-token to all FYP requests + remove beacon use
beacons are unreliable and often blocked
* Only show FYP for members
* FYP readme page
* small fixes
* fyp
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* various cleanups
* more touchups
* select currency to use based on location
* fix sidebar
* fixing strings and other touchups
* refactor and do proper string interpolation
* fix stripe error
* text bugfix
* Adjust help
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* Homepage-Following: insert instead of replace when ad-blocker is detected.
`window.odysee_ad_blocker_detected` was not meant to be used outside of `<Ads>`, since it wouldn't spark a GUI update. But since homepages are rendered several times, perhaps it doesn't matter and we can skip adding to redux for now.
* Handle uBlock origin
If refreshed via "Clear Cache and Hard Reload", the detection method fails, but it does perform an internal redirect, so treat that as a failure.
The solution doesn't work 100% of the time; it only significantly reduces the chances of the issue happening. So far, it can still happen on hot-reload during development.
- Reverted 15bd26399f.
- Fix by just hiding the unoccupied aniBox
## Issue
Double ads on screen
## Reproduce
- Pre re-design:
- Change "Only Language" from tne locale nag.
- Post re-design:
- Follow a bunch of channels with active livestreams. The homepage ad will show 2 ads stacked on top of each other, with the second one invisible.
Both should be equivalent, just different UI/style.
## Analysis
In both cases, it is due to `removeIfExists` failing to remove the element if the ad script hasn't run yet.
This can happen when the component is re-mounted immediately after the ad script was added. When the effect-cleanup runs, the script has not started yet or is only halfway through running.
Aside: The need to run `removeIfExists` further strengthens my hunch that the script is meant to live throughout the lifetime of the app, with it populating the given div as we navigate. But just my guess.
## Approach
Clean up before adding the script as well. This covers any missed elements from the previous cleanup.
The drawback is that this approach assumes there will only be 1 ad per page, but that's pretty much the case with the existing `removeIfExists` approach.
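In effect terms, the approach is roughly the following (a sketch only; the script URL, selector, and `removeIfExists` below are simplified placeholders for the existing helpers):

```js
import { useEffect } from 'react';

// Placeholder values -- the real selectors and script URL live in the Ads component.
const AD_SCRIPT_URL = 'https://example.com/ad-script.js';
const AD_SCRIPT_SELECTOR = `script[src="${AD_SCRIPT_URL}"]`;

// Remove a matching element if it exists (same idea as the existing removeIfExists helper).
function removeIfExists(selector) {
  const el = document.querySelector(selector);
  if (el) el.remove();
}

// Sketch of the approach: clean up leftovers *before* injecting the script,
// in addition to the usual cleanup on unmount.
function AdScriptLoader() {
  useEffect(() => {
    removeIfExists(AD_SCRIPT_SELECTOR); // catch anything the previous cleanup missed

    const script = document.createElement('script');
    script.src = AD_SCRIPT_URL;
    document.head.appendChild(script);

    return () => removeIfExists(AD_SCRIPT_SELECTOR);
  }, []);

  return null;
}
```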
- Initially, we did a bunch of forceful CSS to make the ad tile resize properly when the browser resizes. This was causing the EU ad to display incorrectly. Turns out, that CSS is no longer needed after the BEM-style rewrite.
- Changed the floating ad selector to cover both EU and non-EU version. It's probably still not the best given it relies on style rather than ID, but the ID seems dynamic.
- Removed "customAniviewStyling"
- No longer exists -- it was from the old DOM manipulation method.
- Removed "with no tileLayout it indicates sidebar ad"
- Misleading and not useful. We could easily set the sidebar ad to Tile if we wanted to (it used to be Tile; we changed it to save space). So every time we change the style, we'd need to update the comment?
- The sidebar is not the only place the List layout is used -- there are also the Channel and Category pages, which allow either layout.
Since we are temporarily disabling the floating ad, it means that the invisible close button issue in Firefox Android is also temporarily not an issue.
- Instead of 2 ways to display ads (DOM injection + React method) and having both of them clash, just do it the predictable React way.
- Augment the existing React version to support tile layout + ability to place in last visible slot.
- Consolidate styling code to scss ... DOM manipulations were making it even harder to maintain.
- Removed the need to check for ad-blockers for now. It was being executed every time an ad is displayed, and now that we are displaying ads in more places, the gains don't justify the performance cost. Also, it wasn't being done for Recommended ads anyway, so the inconsistency probably means it's not needed in the first place.
Other known issues fixed:
- double ad injection when changing language via nag.
- additional "total-blocking-time" due to ads at startup removed.
- fixed ads not appearing in mobile homepage until navigated away and back to homepage.
- enable ads in channel page.
- support for both List and Tile layout.
Since `triggerBlacklist` is just a simple boolean and `<Ads>` is not doing any additional logic on top of it, it doesn't make sense to pass it as a parameter. Just don't mount the component -- it's more concise and obvious on the client side.
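At the call site this amounts to something like the following sketch (import path, wrapper component, and class name are placeholders):

```jsx
import React from 'react';
import Ads from 'web/component/ads'; // assumed path for illustration

// Sketch: rather than passing triggerBlacklist into <Ads>, the client simply
// doesn't mount the component when the claim is blacklisted.
function RecommendedAds({ triggerBlacklist }) {
  return <div className="ads__claim-wrapper">{!triggerBlacklist && <Ads />}</div>;
}

export default RecommendedAds;
```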
Currently, homepages are still built as part of the app, so this change doesn't bring much benefit other than to support the wrapper app.
When the service is moved away from the app, we won't have to rebuild the app when homepages change, and also the ui.js bundle would be smaller without the need to code-split.
## Ticket
910
## Changes
- Change the "message" from a generic "tus-upload" to more specific ones like "tus: failed to resume upload". These are grouped as "Events" in sentry, so we can isolate and search for them easily.
- Pass more info to Sentry (previously only available from Slack). It is still good to send to both, since some browsers block Sentry even without blocker extensions.
- Reduce the verbosity of the Slack messages.
## Notes
- Was unable to change the "unknown" problem mentioned in the ticket. The API does not accept `new Error('xxx')`, even though that's being mentioned by many in the forums. It might be due to the version of Sentry that we are using.
- To search for tus issues, go to "Issues" and query `message:tus*`. Results are collapsed per event, so click on the item of interest, then click "Events" at the upper right to see all occurrences of the same problem.
* Move into getLocalStorageSummary + always log
- Move into getLocalStorageSummary to clean up the clutter.
- Always log the localStorage info to get a bigger picture of what's going on with the QuotaExceededError.
* Remove 'findPreviousUploads' - we use the url stored in Redux.
Something I forgot to remove in the past. It also reads from localStorage, so remove it since we are trying to avoid touching localStorage.
* Ensure localStorage is not used when uploading
I don't think it's being written when `storeFingerprintForResuming` is disabled, but applying the suggestion nonetheless.
`https://github.com/tus/tus-js-client/issues/315#issuecomment-1046821112`
We no longer ask tus to save the upload URL since December, so there should no reason for it to be writing to localStorage.
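For reference, a sketch of the upload options with fingerprint storage disabled (the endpoint and callbacks are placeholders; `storeFingerprintForResuming` is the tus-js-client v2 option name):

```js
import * as tus from 'tus-js-client';

function startTusUpload(file) {
  const upload = new tus.Upload(file, {
    endpoint: 'https://example.com/api/v1/uploads/', // placeholder endpoint
    chunkSize: 25000000,
    storeFingerprintForResuming: false, // don't read/write localStorage fingerprints
    onError: (err) => console.error('tus: upload failed', err),
    onSuccess: () => console.log('tus: upload finished'),
  });

  // Resumption relies on the upload URL we already keep in Redux, not on localStorage.
  upload.start();
  return upload;
}
```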
Adding more logs to determine the actual cause -- localStorage being full, or not available. Neither should affect the upload, but they are the only known causes of that error message, so this narrows down the investigation path.
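A sketch of what such a summary could capture (the helper name comes from the earlier commit; the exact fields here are assumptions):

```js
// Hypothetical shape of getLocalStorageSummary: report whether localStorage is
// usable at all, and roughly how full it is, so every QuotaExceededError log
// carries the same context.
function getLocalStorageSummary() {
  try {
    const keys = Object.keys(window.localStorage);
    const approxChars = keys.reduce(
      (sum, key) => sum + key.length + (window.localStorage.getItem(key) || '').length,
      0
    );
    return { available: true, keyCount: keys.length, approxChars };
  } catch (err) {
    // Accessing localStorage can throw when it is disabled or blocked entirely.
    return { available: false, error: err.message };
  }
}
```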
## Issue
`/$/embed/尾崎豊----大阪球場ライブ%E3%80%801_6/bfd63daa9453bb1a11674ca8a7c5f5dd6b49d024?r=2ituZftpdG18f1TBADDbCaaEZ9ecYYYm` wasn't working
## Change
Probably need to revisit this properly, but for now, grab the `requestPath` that's needed for resolving before escaping the characters.
Tested that `http://localhost:1337/$/search?q=%22\/%3E%3Cimg%20s+src+c=x%20on+onerror+%20=alert(1)+\%3E` would still be blocked.
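Conceptually, the handler now does something like the sketch below (the function and sanitizer names are placeholders for the actual server-side code):

```js
// Sketch: capture the path needed for claim resolution from the raw URL *before*
// applying the XSS-escaping step, so Unicode claim names survive intact.
function splitResolveAndEscape(rawUrl, escapeHtmlProperty) {
  const requestPath = decodeURIComponent(rawUrl.split('?')[0]); // used for resolving the claim
  const escapedUrl = escapeHtmlProperty(rawUrl); // escaping still blocks injected markup in the query
  return { requestPath, escapedUrl };
}
```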
* Rewrite __ to be usable on server side
* Add changelog entry
* Clean invisible characters from primaryModValue
* Revert "Rewrite __ to be usable on server side"
This reverts commit 53f63c01f3b56c5530955323612826c0ac5dc5d3.
* Make pass-through placeholder for __ fn until it can be adapted for node.
* Switch messages to inline interpolation until i18n is done
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
* Adjust meta data to allow search pages to reflect search term
* Update changelog
* Address empty query string case and pass query string to append to og:url
Co-authored-by: Thomas Zarebczan <tzarebczan@users.noreply.github.com>
With the throughput tweaks at the backend, it seems like the number of "file is locked" errors has gone down.
The next thing to try is to reduce the chunk size, hoping that file writes will be faster so the lock isn't held long enough to cause a timeout.
Completely remove any assumption of multi-tab uploading based on the server status (should have done it previously, but wanted to be conservative). This should make it less confusing to the user.
The real issue still remains -- the upload is somehow locked at the backend.
Also, when we override the error to present a user-friendly message to the user, pass the original error to the log (just in case it gives extra info).
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).
Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help. So, remove one more useless retry attempt.
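In tus-js-client terms, this amounts to trimming the `retryDelays` option, roughly as sketched here (the wrapper is illustrative):

```js
import * as tus from 'tus-js-client';

// Sketch: keep only the delays the logs show to be useful -- drop the immediate
// retry and stop after the 10s attempt, since later retries don't help when the
// file is "locked".
function makeUpload(file, options) {
  return new tus.Upload(file, {
    ...options,
    retryDelays: [5000, 10000],
  });
}
```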
Removed logging since we've gathered enough data, and this hook is expected to be hit a lot, so we don't want to clog the logs.
## Background
Per developer of `tus-js-client`, it is normal to occasionally encounter upload errors. The auto-retry mechanism is meant to address this.
While implementing tab-lock to prevent multiple uploads of the same file, 423_locked was used to detect this scenario. But 423_locked could also mean "the server is busy writing the chunk" (per discussion with Randy), so we had effectively disabled the auto-retry mechanism by accident.
Meanwhile, from a prior discussion with Randy, one of the observed chunk writes took 3 minutes. Our current maximum of "retry after 15s" wouldn't help there.
## Change
1. Given that tab-locking was improved recently and no longer reliant on the server error messages (we use secure storage to mark a file as locked), reverted the change to "skip retry on 409/423". This is now back to normal recommended behavior.
2. `tus-js-client` currently does not support a variable retry delay, otherwise we could prolong the delay when the error is 423. Since we know it could take up to 3 minutes, and we don't know whether it's file-size dependent, just add another 30s retry and show a friendlier message asking the user to retry themselves after waiting a bit.
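A sketch of the two pieces (the delay values other than the added 30s, and the message wording, are illustrative):

```js
// Sketch: append a 30s attempt to the retry schedule, and map a final 423 failure
// to a friendlier message asking the user to wait and retry manually.
const RETRY_DELAYS_MS = [0, 5000, 10000, 15000, 30000];

function friendlyUploadError(httpStatus) {
  if (httpStatus === 423) {
    return 'The server is still busy processing this file. Please wait a few minutes, then resume the upload.';
  }
  return 'Upload failed. Please try again.';
}
```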
Our current chunk size is 25,000,000.
Google and S3 documentation suggest that the chunk size should be a multiple of 256 KiB; MongoDB's does too. We aren't using any of those, but there's probably no harm in doing the same. From the logs, the values "25,000,000" and "50,000,000" seem to be common.
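As a sanity check: 25,000,000 is not a multiple of 256 KiB (262,144 bytes); the nearest multiples are 95 × 262,144 = 24,903,680 and 96 × 262,144 = 25,165,824, so following the suggestion would mean moving to one of those values. A tiny helper (hypothetical) to do the rounding:

```js
const BYTES_256_KIB = 256 * 1024; // 262,144

// Hypothetical helper: round a desired chunk size down to the nearest multiple of 256 KiB.
function toChunkSizeMultiple(desiredBytes) {
  return Math.floor(desiredBytes / BYTES_256_KIB) * BYTES_256_KIB;
}

toChunkSizeMultiple(25000000); // => 24903680 (95 * 262144)
```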
f0cd1592 made an additional call instead of a replacement.
Aside: the 1 hour value only has effect in dev instances. For prod, CloudFlare seems to override that to 4 hours.