Commit graph

53 commits

Author SHA1 Message Date
infinite-persistence
73e6dfd399
(patch) tus: Fix upload not found scenario (#1814)
Second attempt, this time just hiding the cancel button when the upload is done. If the user is impatient and refreshes in between this and `notify`, it will still be resumable later.

Bumped MINIMUM_VERSION to nudge users to refresh, so we get slightly more accurate logging and also prevent the issue from lingering.
2022-07-08 06:45:43 -04:00
Thomas Zarebczan
ec745c31de
Revert "tus: Fix upload not found scenario (#1808)" (#1813)
This reverts commit cdcf7e7772.
2022-07-07 15:54:14 -04:00
infinite-persistence
cdcf7e7772
tus: Fix upload not found scenario (#1808)
## Steps
When the upload reaches 100%, click Cancel (don't refresh).

## Issue
There was an old hack in b0509bc9 where we decided to wait a while before sending `notify` as the server was not responsive. Since the task was dispatched before the Cancel action, the server cleared the upload first and later received the `notify`.

## Change
Instead of trying to cancel the timer, I think the hack is no longer needed given the throughput and lock fixes. With things running back in sequential mode, the Cancel button will now just show the "upload already completed" modal.
2022-07-07 08:53:47 -04:00
infinite-persistence
640237c630
tus: don't allow 'notify' to be sent again (#1778)
## Ticket
725

## Issue
Upload a video. When `notify` is sent at the end of the tus upload, refresh immediately. The GUI allowed the user to resume the upload, but the ID is no longer present on the server.

## Approach
Until the polling API for `notify` is available, we can only assume the best and let the user know how to handle it.
- Store the "notify was sent" state.
- Show a dialog explaining the situation.

I thought of making `claim_list` calls behind the scenes to clear it automatically, but that doesn't handle the case of `notify` actually failing. The best option is to just let the user handle it for now.

Note that when `onerror` is actually received, we still retry, since a network error could be the culprit (i.e. `notify` wasn't sent).
2022-06-30 19:30:08 -04:00
infinite-persistence
70ea3f0812
tus: restore higher chunk size (10MB -> 25MB) (#1774)
It was previously reduced to 10MB (d1447083) with the assumption that the slow disk write was causing the "lock" issue.

Now that the backend has implemented a new locking mechanism, restore to a larger chunk to reduce the number of PATCH calls.

```
10MB  ->  2s/call
25MB  ->  6s/call (similar to what I see with Google Drive)
100MB -> 25s/call
```
2022-06-29 08:34:25 -04:00
infinite-persistence
0a88c6254d
Publish: restore the multiple retries (#1763)
- Previously, we tried to solve the "file locked" problem by only making one retry after a super long delay. This was from an anecdote that it's more likely to lock up if the delay was short.
  - This didn't help at all for our case, and Andrey has made some locking mechanism changes in the backend.
  - The reduced number of retries probably increased the number of "failed to upload chunk" errors (not sure), which is supposedly a normal occurrence and we're expected to keep retrying.

Restoring the retry behavior and monitoring...
2022-06-28 06:29:07 -04:00
saltrafael
0998e3d48c
Support stream updates via claim_id parameter (#1465)
* Support stream updates via claim_id parameter

* Pass claim_id on v2
2022-05-19 08:13:48 -04:00
Thomas Zarebczan
a0097dc3ce
fix timeout with remote
2022-05-05 13:04:46 -04:00
infinite-persistence
d7d8d3516e Log SDK timeout errors
Logging it so we know when to give the SDKs a kick
2022-05-05 09:06:05 -04:00
infinite-persistence
9b44b7eb91 Add a timeout on SDK calls to allow specific error messages.
## Issue 1263
Previously, we tried to inform the user that when an SDK call such as `support_create` or `publish` fails (specifically, times out), the operation could still be successful -- please check the transactions later.

However, we only covered the case of `fetch` actually getting a response that indicated a timeout, e.g. "status = 524". For our SDK case, the timeout scenario is an error that goes into the `catch` block. In the `catch` block, we can't differentiate whether it is a timeout because it only returns a generic "failed to fetch" message.

## New Approach
Since `fetch` does not support a timeout value, the usual solution is to wrap it with a `setTimeout`. This already exists in our code as `fetchWithTimeout` (yay).

By setting a timeout that is lower than the browser's default and also lower than the SDK operation (90s for most commands, 5m for `publish`), we would now have a way to detect a timeout and inform the user.

Firefox's 90s seems to be the lowest common denominator ... so 60s was chosen as the default (added some buffer).

For the case of `publish`, the call actually goes through the backend, so the xhr call is wrapped with a timeout as well.
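
For reference, a minimal sketch of the race-against-a-timer pattern described above (the `fetchWithTimeout` name and the 60s value come from this message; the exact signature in the app may differ):

```
// Illustrative sketch -- not necessarily the app's actual implementation.
// Race the fetch against a timer so an SDK call that exceeds `timeoutMs`
// rejects with a recognizable timeout error instead of a generic
// "failed to fetch" inside the catch block.
function fetchWithTimeout(timeoutMs: number, fetchPromise: Promise<Response>): Promise<Response> {
  const timer = new Promise<never>((_, reject) => {
    setTimeout(() => reject(new Error('promise timed out')), timeoutMs);
  });
  return Promise.race([fetchPromise, timer]);
}

// Usage (hypothetical endpoint): 60s is below Firefox's ~90s default and the
// SDK's own limits, so a rejection here can be reported as a timeout.
// fetchWithTimeout(60000, fetch('https://api.example.com/sdk', { method: 'POST' }));
```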
2022-05-04 08:10:17 -04:00
infinite-persistence
89feddee0d Publish: handle failed 'notify' at the server
Ticket: 1256

For `notify`, "file is currently locked" and "no such file or directory" are indications that the previous "failed" SDK call actually worked. Tell the user to check the transactions.

This is the band-aid until odysee-api/401 is addressed.
2022-04-04 07:02:23 -04:00
infinite-persistence
e358f0715d tus: retry only after 2-minute wait
There is anecdotal evidence that we need to wait up to 2 minutes to prevent the locking scenario.
`https://github.com/tus/tusd/pull/667#issuecomment-1079647640`

## Change
Instead of multiple retries at short intervals, do a one-time retry after a 2-minute wait. We'll do this until the fix is available in tusd v2.
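
As a rough sketch (assuming tus-js-client's `retryDelays` option; the file and endpoint below are placeholders), the one-time 2-minute retry amounts to:

```
import * as tus from 'tus-js-client';

declare const file: File;                                   // placeholder
const UPLOAD_ENDPOINT = 'https://example.com/v1/uploads/';  // placeholder

const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT,
  // A single retry, 2 minutes after the failure, instead of several short
  // retries -- until the lock fix is available in tusd v2.
  retryDelays: [120000],
});
upload.start();
```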
2022-03-29 09:01:10 -04:00
infinite-persistence
fdb5658df6 Reduce verbosity of tus errors now that we know the locked problem is with tusd 2022-03-18 08:52:49 -04:00
infinite-persistence
27f70d5f90
tus: try longer retry delays to maybe avoid lockups (#1012) 2022-03-02 11:03:49 -05:00
infinite-persistence
59e83f3fa8
tus: sentry improvements
## Ticket
910

## Changes
- Change the "message" from a generic "tus-upload" to more specific ones like "tus: failed to resume upload". These are grouped as "Events" in sentry, so we can isolate and search for them easily.

- Pass more info to Sentry (previously only available from Slack). It is still good to send to both, since some browsers block Sentry even without blocker extensions.

- Reduce the verbosity of the Slack logs.

## Notes
- Was unable to change the "unknown" problem mentioned in the ticket. The API does not accept `new Error('xxx')`, even though that's being mentioned by many in the forums. It might be due to the version of Sentry that we are using.

- To search for tus issues, go to "Issues" and query `message:tus*`. Results are collapsed per event, so click on the item of interest, then click "Events" at the upper right to see all occurrences of the same problem.
2022-02-27 17:40:19 +08:00
infinite-persistence
3f8dfd5b21 Stop logging localStorage sizes
The issue has been solved, plus we've collected enough data.
2022-02-26 10:34:12 -05:00
infinite-persistence
c74dbbb68a
tus: QuotaExceededError (#935)
* Move into getLocalStorageSummary + always log

- Move into getLocalStorageSummary to clean up the clutter.
- Always log the localStorage info to get a bigger picture of what's going on with the QuotaExceededError.

* Remove 'findPreviousUploads' - we use the url stored in Redux.

Something I forgot to remove in the past. It also reads from localStorage, so remove it, since we are trying to avoid touching localStorage.

* Ensure localStorage is not used when uploading

I don't think it's being written when `storeFingerprintForResuming` is disabled, but applying the suggestion nonetheless.

`https://github.com/tus/tus-js-client/issues/315#issuecomment-1046821112`
2022-02-22 10:11:22 -05:00
infinite-persistence
c1fed3f4df
Undo tus-sentry experiment since it completely broke 2022-02-22 00:43:57 +08:00
infinite-persistence
0ae015b7a5
Revert "tus: remove 'uploader' param -- seems to be breaking Sentry."
This reverts commit f14e7ad0ec.
2022-02-21 23:02:59 +08:00
infinite-persistence
f14e7ad0ec
tus: remove 'uploader' param -- seems to be breaking Sentry. 2022-02-21 22:28:03 +08:00
infinite-persistence
5b81346e59 Sentry: fix tus errors
The first parameter should be the error object, not a general label.
2022-02-21 00:12:37 -08:00
infinite-persistence
85ef16026d tus: log reason for QuotaExceededError
We no longer ask tus to save the upload URL since December, so there should be no reason for it to be writing to localStorage.

Adding more logs to determine the actual cause -- localStorage being full, or not available. Neither should affect the upload, but they are the only known causes for that error message, so try to narrow down the investigation path.
2022-02-14 09:41:46 -08:00
infinite-persistence
d68be6e9af
tus: route errors to sentry
Per discussion with Andrey
2022-01-24 12:25:54 +08:00
infinite-persistence
d14470830c
tus: reduce chunk size (25MB -> 10MB)
With the throughput tweaks at the backend, it seems like the number of "file is locked" errors has gone down.

The next thing to try is reducing the chunk size, hoping that file writes will be faster so the lock is held for a shorter time and stops causing timeouts.
2022-01-15 14:44:15 +08:00
infinite-persistence
91d0eb30b8
tus: remove multi-tab assumption + pass original err msg
Completely remove any assumptions of multi-tab uploading from server status (should have done it previously, but wanted to be conservative). This should make it less confusing to the user.

The real issue still remains -- the upload is somehow locked at the backend.

Also, when we override the error to present a user-friendly message to the user, pass the original error to the log (just in case it gives extra info).
2022-01-13 09:57:45 +08:00
infinite-persistence
238f6b2eda
tus: adjust retry delay + remove logging
From the logs, it seems like the second retry (5s) fixes the "normal" cases, so just remove the first retry (0s).

Also from the logs, if a retry doesn't work by the third attempt (10s), it's most likely the "locked" case and retrying further doesn't help. So, remove one more useless retry attempt.

Removed logging since we've gathered enough data, and this hook is expected to be hit a lot, so we don't want to clog the logs.
2022-01-13 09:43:29 +08:00
infinite-persistence
c90c5bcc2a
Route markdown to v1 (#680)
I think I just forgot to do it the first time.
2022-01-12 10:31:46 -05:00
infinite-persistence
6bd384b01a
TUS: retry on 423_locked to try address "failed to upload chunk"
## Background
Per developer of `tus-js-client`, it is normal to occasionally encounter upload errors. The auto-retry mechanism is meant to address this.

While implementing tab-lock to prevent multiple uploads of the same file, 423_locked was used to detect this scenario. But 423_locked could also mean "the server is busy writing the chunk" (per discussion with Randy), so we accidentally disabled the auto-retry mechanism.

Meanwhile, from a prior discussion with Randy, one of the chunk writes took 3 minutes. Our current maximum of "retry after 15s" wouldn't help.

## Change
1. Given that tab-locking was improved recently and no longer relies on server error messages (we use secure storage to mark a file as locked), reverted the change to "skip retry on 409/423". This is now back to the normal recommended behavior.
2. `tus-js-client` currently does not support variable retry delays, otherwise we could prolong the delay when the error is 423. Since we know it could take up to 3 minutes, and we don't know whether it's file-size dependent, just add another 30s retry (see the sketch below) and put up a friendlier message asking the user to retry themselves after waiting a bit.
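
A sketch of change (2), with placeholder values (assuming tus-js-client's `retryDelays` option):

```
import * as tus from 'tus-js-client';

declare const file: File;                                   // placeholder
const UPLOAD_ENDPOINT = 'https://example.com/v1/uploads/';  // placeholder

const upload = new tus.Upload(file, {
  endpoint: UPLOAD_ENDPOINT,
  // One extra 30s attempt appended, since a locked chunk write can take
  // minutes. No custom "skip retry on 409/423" handler anymore, so a 423
  // (server still writing the chunk) is retried per the library's defaults.
  retryDelays: [0, 5000, 10000, 15000, 30000],
});
upload.start();
```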
2022-01-10 16:46:57 +08:00
infinite-persistence
46c7c193be
Log the status code for the retry 2022-01-10 11:50:29 +08:00
infinite-persistence
47043bc965
Remove 'retryTimeout' - that's just the timer ID, nothing useful 2022-01-10 11:39:34 +08:00
infinite-persistence
01459d906a
tus: Get more information from publish errors 2022-01-06 15:39:51 +08:00
infinite-persistence
555bde87f8
tus: make chunk size multiples of 256KiB
Our current chunk size is 25,000,000.

Google and S3 documentation suggest making the chunk size a multiple of 256KiB. MongoDB too. We aren't using any of those, but I guess there's no harm doing the same. From the logs, the values "25,000,000" and "50,000,000" seem to be common.
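
For illustration, rounding the current chunk size down to a multiple of 256KiB (the exact constant the app ends up using isn't stated here):

```
const ALIGN = 256 * 1024;   // 256 KiB = 262,144 bytes
const target = 25_000_000;  // current chunk size

// Round down to the nearest multiple of 256 KiB.
const chunkSize = Math.floor(target / ALIGN) * ALIGN; // 24,903,680 (95 * 256 KiB)
```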
2022-01-06 15:39:51 +08:00
infinite-persistence
9c723b3db3
tus: skip fingerprint storage
## Ticket
418 TUS: skip fingerprint storage

- Fingerprints for canceled uploads are not being cleared by tus-js-client. They live in localStorage, and there is a limit on that.
- We are storing the confirmed fingerprint (from the backend) in redux anyway, so we don't need that functionality.
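
A minimal sketch of what this looks like with tus-js-client (the option name is the library's; the file and endpoint are placeholders, other options omitted):

```
import * as tus from 'tus-js-client';

declare const file: File; // placeholder

const upload = new tus.Upload(file, {
  endpoint: 'https://example.com/v1/uploads/', // placeholder
  // Don't write fingerprints to localStorage; the confirmed upload URL is
  // already kept in redux, so resuming doesn't need the library's storage.
  storeFingerprintForResuming: false,
});
```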
2021-12-13 15:35:21 +08:00
infinite-persistence
157b50c58e
Upload: tab sync and various fixes (#428)
* Upload: fix redux key clash

## Issue
`params` is the "final" value that will be passed to the SDK, and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.

For the anonymous case, a clash will still happen since the channel part is hardcoded to `anonymous`.

## Approach
Generate a guid in `params` and use that as the key to handle all the cases above. We couldn't use the `uploadUrl` because v1 doesn't have it.

The old formula is retained to allow users to retry or cancel their existing uploads one last time (otherwise it will persist forever). The next upload will be using the new key.

* Upload: add tab-locking

## Issue
- The previous code does detect uploads from multiple tabs, but it was done by handling the CONFLICT error message from the backend. In certain corner cases, this does not work well. A better way is to not allow resumption while the same file is being uploaded from another tab.

- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. The user either has to refresh or click Cancel. Clicking Cancel results in a 404 backend error. This should be avoided.

## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage`.
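
A minimal sketch of the localStorage signalling, assuming hypothetical key names (the app's actual keys and payload shape may differ):

```
// Hypothetical key scheme -- not necessarily the app's actual one.
const lockKey = (uploadKey: string) => `upload-lock:${uploadKey}`;

function lockUpload(uploadKey: string) {
  localStorage.setItem(lockKey(uploadKey), String(Date.now()));
}

function releaseUpload(uploadKey: string) {
  localStorage.removeItem(lockKey(uploadKey));
}

function isLockedElsewhere(uploadKey: string): boolean {
  return localStorage.getItem(lockKey(uploadKey)) !== null;
}

// The 'storage' event fires in every tab except the one that wrote the value,
// so other tabs can react to "locked" / "removed" changes immediately.
window.addEventListener('storage', (e) => {
  if (e.key && e.key.startsWith('upload-lock:')) {
    // e.g. refresh the upload list UI for the affected item
  }
});
```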

## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.

* Upload: check if locked before confirming delete

## Reproduce
1. Have 2 tabs + a paused upload.
2. Open the "cancel" dialog in one of the tabs.
3. Continue the upload in the other tab.
4. Confirm cancellation in the first tab.
5. The upload disappears from both tabs, but based on network traffic the upload keeps happening. (If the upload finishes, the claim seems to get created.)
2021-12-07 09:48:09 -05:00
infinite-persistence
eb83a834a1
TUS: handle remaining locked file error messages 2021-11-23 11:28:32 +08:00
infinite-persistence
2d3057d5cf
Detect concurrent uploads and stop it. 2021-11-22 16:12:11 +08:00
infinite-persistence
b6e9c7aabf
TUS: handle URL removal on 4xx errors
## Issue
The TUS client automatically removes the upload fingerprint whenever there is a 4xx error. When we try to resume later, we can't find the fingerprint and end up creating a new upload ID.

## Changes
Since we are also storing the uploadUrl ourselves, we provide that to override the tus client's default behavior of starting a new session on 4xx errors.
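
Roughly, that override looks like the following (tus-js-client's `uploadUrl` option; the redux wiring and endpoint are placeholders):

```
import * as tus from 'tus-js-client';

declare const file: File;                        // placeholder
declare const uploadUrlFromRedux: string | null; // placeholder for the stored URL

const upload = new tus.Upload(file, {
  endpoint: 'https://example.com/v1/uploads/',   // placeholder
  // Resume directly from the URL we stored ourselves, so a 4xx that wiped the
  // library's fingerprint doesn't force a brand-new upload session.
  uploadUrl: uploadUrlFromRedux,
});
upload.start();
```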
2021-11-22 16:12:10 +08:00
infinite-persistence
d48a7c7295
TUS: reduce chunk size from 100MB to 25MB.
The stalling behavior has changed a bit, probably with the removal of CF.

The stall difference between 10MB and 50MB is not too noticeable, so picking 25MB as a start.
2021-11-19 14:40:03 +08:00
infinite-persistence
b0509bc990
Band-aid: wait a while before sending notify
## Issue
The status = 0 is due to an unresponsive backend right after the tus upload. No root cause has been found yet.

## Change
It may or may not help, but adding a delay to account for the unresponsive stage for now.
2021-11-12 18:43:57 +08:00
infinite-persistence
d8080a9fda
Notify: log retry attempts 2021-11-12 16:30:41 +08:00
infinite-persistence
62e7fe06a5
TUS: Don't retry on 4xx
## Issue/Steps
From Randy:
- Started the upload, then opened a new tab of the same page.
- One of the tabs finished the upload and successfully published the file; the other tab received a 404 error on the PATCH and HEAD requests because the file had already been removed on the server.

## Changes
Use the default onRetry code that ignores all 4xx, except for LOCKED and CONFLICT. Had to duplicate some code from tus because I still need to inject the 'retry' progress for the GUI to update the string.
2021-11-12 14:32:41 +08:00
infinite-persistence
dfe30b6d78
TUS: fix parallel uploads of the same file
## Issue
If you make 2 claims from the same source file, the second upload thinks it's trying to resume from the first one. They should be unique uploads.

## Approach
Stash the upload url for comparison when looking up existing uploads to resume.

Stash that in `params` to minimize code changes. We'll just need to ensure it is cleared before we generate the SDK payload.
2021-11-12 14:32:40 +08:00
infinite-persistence
861aaf4cde Notify: Re-enable delay but only for initial connection problem
We want to avoid the double `notify`, and also to confirm whether the SDK is timing out.
2021-11-12 11:19:26 +08:00
infinite-persistence
9bfa1a3577
Notify: Disable retry + try to report status code 2021-11-12 09:01:36 +08:00
infinite-persistence
7ef5975ee8
Notify: auto-retry once after 10 seconds
Also:
- Show the resume button on notify errors.
- Changed the error message to differentiate it from v1's.
2021-11-11 09:55:48 +08:00
infinite-persistence
b5f1ae1291
Tus-retry: widen delay gap + add 1 more retry 2021-11-11 09:54:25 +08:00
infinite-persistence
cb6a044584
Support resume-able upload via tus (#186)
* Publish button: use spinner instead of "Publishing..."

Looks better, plus the preview could take a while sometimes.

* Refactor `doPublish`. No functional change

This is to allow `doPublish` to accept a custom payload as an input (for resuming uploads), instead of always resolving it from the redux data.

* Add doPublishResume

* Support resume-able upload via tus

## Issue
38 Handle resumable file upload

## Notes
Since we can't serialize a File object, we'll need the user to re-select the file to resume.

* Exclude "modified date" for Firefox/Android

## Issue
It appears that the modification date of the Android file changes when it is selected, so the file was deemed "different" when trying to resume the upload.

## Change
Exclude modification date for now. Let's assume a smart user.

* Move 'currentUploads' to 'publish' reducer

`publish` is currently rehydrated, so we can ride on that and don't need to store the `currentUploads` in `localStorage` for persistence. This would allow us to store Markdown Post data too, as `localStorage` has a 5MB limit per app.

We could have also made `webReducer` rehydrate, but in this repo, there is no need to split it to another reducer. It also makes more sense to be part of publish anyway (at least to me).

This change is mostly moving items between files, with the exception of
1. An additional REHYDRATE in the publish reducer to clean up the tusUploader.
2. Not clearing `currentUploads` in CLEAR_PUBLISH.

* Restore v1 code for livestream replay, etc.

v2 (tus) does not handle `remote_url`, so the app still needs v1 for that. Since we'll still have v1 code, use v1 for previews as well.
2021-11-10 13:16:16 -05:00
Andrey Beletsky
8e3aee5813 Change API server address to odysee-api 2021-07-20 10:23:00 -04:00
jessopb
989126c603
Feat publish replays on master (#5863)
* provide livestream replay publish via url
2021-04-14 00:06:11 -04:00
zeppi
713109167c publish, edit, remote_url publish 2021-03-26 18:43:09 -04:00