## Issue
- Closes #592 Force clear stuck upload
- It was possible for an upload to stay "locked", e.g. when the browser is killed.
## Change
When refreshing or opening a new tab, always clear the locks. The ongoing sessions will re-lock them immediately.
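A minimal sketch of the clear-on-load step, assuming the locks live under a hypothetical `upload-lock:` prefix in `localStorage` (the real key format may differ):

```ts
// Hypothetical key prefix; the app's actual lock keys may be named differently.
const LOCK_PREFIX = 'upload-lock:';

// Run on page load (refresh or new tab): drop every lock entry.
// Tabs with an active upload will immediately re-assert their own lock.
export function clearStaleUploadLocks(storage: Storage = window.localStorage): void {
  const staleKeys: string[] = [];
  for (let i = 0; i < storage.length; i++) {
    const key = storage.key(i);
    if (key && key.startsWith(LOCK_PREFIX)) staleKeys.push(key);
  }
  staleKeys.forEach((key) => storage.removeItem(key));
}
```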
* Update publish form when editing a livestream + switch livestream release time to radio buttons
* Fix bug where upload tools may remain visible upon switching upload type, even when no option to upload is available.
* Move publish source state up; when editing a livestream, only show the scheduling option when the source is none.
* Reset any previously set release time when switching to livestream mode.
* Update date/time components to reset any changes they made on unmount. Update the schedule date/time component to clear the release time when selecting the "anytime" option.
* Additional filtering of internal tags
* Default to replay view when editing a livestream
- Query for that tag in the upcoming section
- Improve special tag organization
- Filter out the internal tags in the UI
- Add missing types to publish state
## Repro
1. Follow a channel.
2. When `preference_set` is sent, unfollow the channel.
3. A few seconds later, the final setting reflects (1) instead of (2).
The current sync loop involves doing a final `sync/get` at the end. While not necessary for the scenario above, the code flow covers various cases, so it's still needed for now.
## Approach
Implement an abort mechanism for the sync loop.
When syncing from the `buildSharedStateMiddleware` loop, generate an ID for each sync session, and only store the latest one. Pass the ID to the completion-callback (and other places as needed), so we can check if our session is still the latest one before executing the callback.
The invalidation check could also be placed in more spots to cut off the sync process earlier, but for now it's only done in 2 critical places.
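A hedged sketch of the session-ID idea; the names (`latestSyncId`, `pushLocalChanges`, `fetchRemoteState`) are illustrative, not the actual identifiers in `buildSharedStateMiddleware`:

```ts
// Illustrative placeholders for the real sync steps.
declare function pushLocalChanges(): Promise<void>;      // e.g. preference_set
declare function fetchRemoteState(): Promise<unknown>;   // e.g. the final sync/get
declare function applyRemoteState(remote: unknown): void;

let latestSyncId = 0;

export async function runSyncSession(): Promise<void> {
  // Each sync session gets a fresh ID; only the newest one may complete.
  const sessionId = ++latestSyncId;

  await pushLocalChanges();
  if (sessionId !== latestSyncId) return; // a newer session started; abort early

  const remote = await fetchRemoteState();
  if (sessionId !== latestSyncId) return; // don't let a stale sync/get clobber newer edits

  applyRemoteState(remote);
}
```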
* Refactor userChannelFollowIntro
* Add community based channels to signup auto follow
* Add German channels
* Remove main english channels and leave only community ones for auto follow
* Upload: fix redux key clash
## Issue
`params` is the "final" value that will be passed to the SDK and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.
For the anonymous case, a clash will still happen since the channel part is hardcoded to `anonymous`.
## Approach
Generate a guid in `params` and use that as the key to handle all the cases above. We couldn't use the `uploadUrl` because v1 doesn't have it.
The old formula is retained to allow users to retry or cancel their existing uploads one last time (otherwise it will persist forever). The next upload will be using the new key.
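A minimal sketch of the new keying, assuming the `uuid` package and a simplified `currentUploads` shape (the real entries carry more fields):

```ts
import { v4 as uuid } from 'uuid';

// Hypothetical entry shape.
type UploadEntry = { params: Record<string, unknown>; progress: number };

function addUpload(
  currentUploads: Record<string, UploadEntry>,
  params: Record<string, unknown>
): Record<string, UploadEntry> {
  // Key by a generated guid instead of `${channel}/${name}`, so anonymous
  // uploads (and re-used names) no longer clash in redux.
  const guid = uuid();
  return {
    ...currentUploads,
    [guid]: { params: { ...params, guid }, progress: 0 },
  };
}
```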
* Upload: add tab-locking
## Issue
- The previous code does detect uploads from multiple tabs, but it relied on handling the CONFLICT error message from the backend. In certain corner cases, this does not work well. A better way is to disallow resumption while the same file is being uploaded from another tab.
- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. The user either has to refresh or click Cancel, and clicking Cancel results in a 404 backend error. This should be avoided.
## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage`.
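A rough sketch of the signaling, with hypothetical key names (the app's actual keys and dispatch wiring will differ):

```ts
const lockKey = (guid: string) => `upload-lock:${guid}`;
const removedKey = (guid: string) => `upload-removed:${guid}`;

// The uploading tab asserts ownership of an upload.
export function lockUpload(guid: string): void {
  window.localStorage.setItem(lockKey(guid), String(Date.now()));
}

// Other tabs consult the lock before offering "resume".
export function isUploadLocked(guid: string): boolean {
  return window.localStorage.getItem(lockKey(guid)) !== null;
}

// When an upload finishes, flag it as removed so sibling tabs drop it from
// their GUI instead of offering a Cancel that would hit a 404.
export function markUploadRemoved(guid: string): void {
  window.localStorage.setItem(removedKey(guid), '1');
}

// Every tab listens for writes made by its siblings.
window.addEventListener('storage', (event: StorageEvent) => {
  if (event.key && event.key.startsWith('upload-removed:')) {
    const guid = event.key.slice('upload-removed:'.length);
    // Here the app would dispatch an action to drop `guid` from currentUploads.
    console.debug('upload removed in another tab:', guid);
  }
});
```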
## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.
* Upload: check if locked before confirming delete
## Reproduce
1. Have 2 tabs and a paused upload.
2. Open the "cancel" dialog in one of the tabs.
3. Continue the upload in the other tab.
4. Confirm the cancellation in the first tab.

The upload disappears from both tabs, but based on network traffic the upload keeps happening. (If the upload finishes, the claim seems to get created.)
* apiCall: add option to not send the auth header
## Why
We want an option to make unauthenticated `resolve` calls where appropriate, to improve caching.
## How
All `apiCall`s are authenticated by default, but when clients add NO_AUTH to the params, `apiCall` will exclude the X_LBRY_AUTH_TOKEN. It will also strip NO_AUTH from the param object before sending it out.
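A hedged sketch of the header logic, assuming a fetch-based proxy call, that `X_LBRY_AUTH_TOKEN` resolves to the `X-Lbry-Auth-Token` header, and an illustrative `/api/proxy` endpoint:

```ts
export const NO_AUTH = 'no_auth'; // sentinel clients can add to params

async function apiCall(
  method: string,
  params: Record<string, unknown>,
  authToken: string
): Promise<Response> {
  const headers: Record<string, string> = { 'Content-Type': 'application/json' };

  if (params[NO_AUTH]) {
    // Caller opted out of auth: strip the sentinel and omit the token header.
    delete params[NO_AUTH];
  } else {
    headers['X-Lbry-Auth-Token'] = authToken;
  }

  return fetch('/api/proxy', {
    method: 'POST',
    headers,
    body: JSON.stringify({ method, params }),
  });
}
```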
* Add hook for 'resolve' and 'claim_search' to check and skip auth...
... if the params do not contain anything that requires the wallet.
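A sketch of that check, reusing the `NO_AUTH` sentinel from the sketch above; the wallet-dependent param list here is limited to the two named later in this changelog and may be incomplete:

```ts
const NO_AUTH = 'no_auth'; // same sentinel as above

// Params that only make sense with the user's wallet.
const WALLET_DEPENDENT_PARAMS = ['include_purchase_receipt', 'include_is_my_output'];

// For 'resolve' / 'claim_search': if nothing wallet-related is requested,
// tag the call with NO_AUTH so it can be served from a shared cache.
function maybeSkipAuth(params: Record<string, unknown>): Record<string, unknown> {
  const needsWallet = WALLET_DEPENDENT_PARAMS.some((key) => Boolean(params[key]));
  return needsWallet ? params : { ...params, [NO_AUTH]: true };
}
```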
* doResolveUri, doClaimSearch: let clients decide when to include_my_output
- No more hardcoding 'include_purchase_receipt' and 'include_is_my_output'
- doResolveUri: include these params when opening a file page. This was the only place that was doing that prior to this PR.
* is_my_output: use the signing_channel as alternative
## Notes
`is_my_output` is more expensive to resolve, so it is not being requested all the time.
## Change
Use the signing channel as an additional fallback, on top of `myClaimIds`.
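A hedged sketch of the fallback order; the field names follow the SDK's claim shape (`is_my_output`, `signing_channel`, `claim_id`), but the selector in the app may be structured differently:

```ts
type Claim = {
  claim_id: string;
  is_my_output?: boolean;
  signing_channel?: { claim_id: string };
};

function isClaimMine(claim: Claim, myClaimIds: Set<string>): boolean {
  if (claim.is_my_output) return true;             // cheapest when already resolved
  if (myClaimIds.has(claim.claim_id)) return true; // existing fallback
  // New fallback: a claim signed by one of my channels is mine.
  return Boolean(claim.signing_channel && myClaimIds.has(claim.signing_channel.claim_id));
}
```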
## Aside
I think using `myClaimIds` here is redundant, as it is usually populated from `is_my_output`. But leaving as is for now...
As long as the input parameters are the same, the selector will return the cached value so that we don't construct the list twice, which involves blocklist filtering.
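A small illustration of the memoization, using `reselect`'s `createSelector` with placeholder input selectors (the real selectors and blocklist live elsewhere in the app):

```ts
import { createSelector } from 'reselect';

type Claim = { claim_id: string };
declare function selectClaims(state: unknown): Claim[];
declare function selectBlockedIds(state: unknown): Set<string>;

// createSelector memoizes on its inputs: while the claims and the blocklist
// are unchanged, the filtered list is returned from cache instead of being
// rebuilt (and blocklist-filtered) on every call.
export const selectVisibleClaims = createSelector(
  [selectClaims, selectBlockedIds],
  (claims, blockedIds) => claims.filter((c) => !blockedIds.has(c.claim_id))
);
```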
* Publish button: use spinner instead of "Publishing..."
Looks better, plus the preview could take a while sometimes.
* Refactor `doPublish`. No functional change
This is to allow `doPublish` to accept a custom payload as an input (for resuming uploads), instead of always resolving it from the redux data.
* Add doPublishResume
* Support resume-able upload via tus
## Issue
#38 Handle resumable file upload
## Notes
Since we can't serialize a File object, we'll need the user to re-select the file to resume.
* Exclude "modified date" for Firefox/Android
## Issue
It appears that the modification date of the Android file changes when selected, so the file was deemed "different" when trying to resume the upload.
## Change
Exclude modification date for now. Let's assume a smart user.
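A possible shape for this, assuming tus-js-client's `fingerprint` option ((file, options) => Promise<string>); the exact key format is illustrative:

```ts
import type { UploadOptions } from 'tus-js-client';

// Custom fingerprint that deliberately leaves out file.lastModified, since
// Firefox/Android can report a new modification date after re-selecting
// the same file. Passed via the `fingerprint` field of the upload options.
export function fingerprintWithoutMtime(file: File, options: UploadOptions): Promise<string> {
  return Promise.resolve(['tus', file.name, file.type, file.size, options.endpoint].join('-'));
}
```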
* Move 'currentUploads' to 'publish' reducer
`publish` is currently rehydrated, so we can ride on that and don't need to store the `currentUploads` in `localStorage` for persistence. This would allow us to store Markdown Post data too, as `localStorage` has a 5MB limit per app.
We could also have made `webReducer` rehydrate, but there is no need to split this into another reducer in this repo. It also makes more sense as part of publish anyway (at least to me).
This change is mostly moving items between files, with the exception of
1. An additional REHYDRATE in the publish reducer to clean up the tusUploader (sketched after this list).
2. Not clearing `currentUploads` in CLEAR_PUBLISH.
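A hedged sketch of point 1, assuming redux-persist's `REHYDRATE` action and a simplified state shape (the real reducer handles many more actions and fields):

```ts
import { REHYDRATE } from 'redux-persist';

type UploadItem = { params: Record<string, unknown>; progress: number; uploader?: unknown };
type PublishState = { currentUploads: Record<string, UploadItem> };

function publishReducer(state: PublishState, action: { type: string; payload?: any }): PublishState {
  switch (action.type) {
    case REHYDRATE: {
      // The live tus uploader instance can't be serialized, so drop it from
      // each restored entry; params/progress survive and allow resuming.
      const restored = action.payload?.publish?.currentUploads || {};
      const currentUploads: Record<string, UploadItem> = {};
      Object.keys(restored).forEach((key) => {
        const { uploader, ...rest } = restored[key];
        currentUploads[key] = rest;
      });
      return { ...state, currentUploads };
    }
    default:
      return state;
  }
}
```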
* Restore v1 code for livestream replay, etc.
v2 (tus) does not handle `remote_url`, so the app still needs v1 for that. Since we'll still have v1 code, use v1 for previews as well.
* Fix error logs
* Improve LBC sticker flow/clarity
* Show inline error if custom sticker amount below min
* Sort emojis alphabetically
* Improve loading of Images
* Improve quality and display of emojis and fix CSS
* Display both USD and LBC prices
* Default to LBC tip if creator can't receive USD
* Don't clear text-field after sticker is sent
* Refactor notification component
* Handle notifications
* Don't show profile pic on sticker livestream comments
* Change Sticker icon
* Fix wording and number rounding
* Fix blurring emojis
* Disable non-functional emote buttons
Added a re-usable "yes/no" confirmation modal where the client just sets the question string and gets a callback when "OK" or "Cancel" is clicked.
It doesn't make sense to create one modal for each confirmation, especially when the modal is only used in one place.
Replaced one of the existing modals as an example.
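A rough sketch of the interface such a modal could expose (names are hypothetical, not the app's actual modal ids or action creators):

```ts
type ConfirmProps = {
  title: string;              // the question string
  subtitle?: string;
  onConfirm: () => void;      // called when "OK" is clicked
  onCancel?: () => void;      // called when "Cancel" is clicked or the modal is dismissed
};

// Illustrative modal-opening action; callers only supply their question and callbacks.
declare function doOpenModal(id: string, props: ConfirmProps): void;

doOpenModal('MODAL_CONFIRM', {
  title: 'Cancel this upload?',
  onConfirm: () => {
    /* abort the upload */
  },
});
```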
* Refactor filePrice
* Refactor Wallet Tip Components
* Add backend sticker support for comments
* Add stickers
* Refactor commentCreate
* Add Sticker Selector and sticker comment creation
* Add stickers display to comments and hyperchats
* Fix wrong checks for total Super Chats
## Issue
We previously reloaded automatically when there was a chunk error. This works fine when the failure is due to new code being pushed recently while the user was active, but if the failure was caused by something else, like network problems or the file actually missing, we end up in an infinite loop of refreshes.
## New approach
Tell the user to reload instead of automatically doing it.
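A minimal sketch of the new behavior around a dynamic-import (chunk) failure, with an illustrative `showToast` helper:

```ts
declare function showToast(message: string): void; // illustrative notification helper

// Catch a chunk-load failure and ask the user to refresh instead of calling
// window.location.reload() automatically (which can loop forever when the
// failure is a network problem or a genuinely missing file).
async function loadRoute<T>(importer: () => Promise<T>): Promise<T> {
  try {
    return await importer();
  } catch (err) {
    showToast('Something went wrong loading this page. Please refresh to try again.');
    throw err;
  }
}
```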