## Issue
A user is seeing 423 Locked errors from the server, which should not happen given the locking mechanism, especially since the user wasn't attempting concurrent uploads.
## Fix
Regardless of the root cause, fix the Resume button so that the user can at least try to resume.
* Add ability to have claim searches auto-fetch up to 3 pages.
* make total_items and total_pages optional
* use auto pagination strategy when determining live claim
* Bump page size back to 50
* Upload: fix redux key clash
## Issue
`params` is the "final" value that will be passed to the SDK and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.
For the anonymous case, a clash will still happen since the channel part is hardcoded to `anonymous`.
## Approach
Generate a guid in `params` and use that as the key to handle all the cases above. We couldn't use the `uploadUrl` because v1 doesn't have it.
The old key formula is retained so users can retry or cancel their existing uploads one last time (otherwise those entries would persist forever). The next upload will use the new key.
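A minimal sketch of the keying idea, assuming a `currentUploads` map in redux (the helper names are illustrative, not the actual implementation):

```js
// Sketch: key each upload by a generated guid stored in `params`, so keys
// never clash across channels, anonymous uploads, or repeated names.
function generateGuid() {
  // crypto.randomUUID is available in modern browsers.
  return window.crypto.randomUUID
    ? window.crypto.randomUUID()
    : `${Date.now()}-${Math.random().toString(16).slice(2)}`;
}

function registerUpload(currentUploads, params, file) {
  params.guid = generateGuid(); // stash in params so it persists with the upload
  currentUploads[params.guid] = { params, file, progress: 0 };
  return params.guid;
}
```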
* Upload: add tab-locking
## Issue
- The previous code did detect uploads from multiple tabs, but it did so by handling the CONFLICT error message from the backend. In certain corner cases, this does not work well. A better way is to not allow resumption while the same file is being uploaded from another tab.
- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. The user has to either refresh or click Cancel, and clicking Cancel results in a 404 from the backend. This should be avoided.
## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage`.
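A rough sketch of the mechanism, assuming a redux `store` in scope (the storage keys and action names are placeholders):

```js
// Writing to localStorage fires a 'storage' event in every *other* tab,
// so one tab can tell the others "this upload is locked" or "it's gone".
function broadcast(key, guid) {
  window.localStorage.setItem(key, JSON.stringify({ guid, at: Date.now() }));
}

// e.g. broadcast('upload-locked', guid) when resuming,
//      broadcast('upload-removed', guid) when finished or cancelled.

window.addEventListener('storage', (event) => {
  if (!event.newValue) return;
  const { guid } = JSON.parse(event.newValue);
  if (event.key === 'upload-locked') {
    // Disable Resume for this upload in the current tab.
    store.dispatch({ type: 'UPLOAD_LOCKED', guid });
  } else if (event.key === 'upload-removed') {
    // Drop the finished/cancelled upload from this tab's list.
    store.dispatch({ type: 'UPLOAD_REMOVED', guid });
  }
});
```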
## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.
* Upload: check if locked before confirming delete
## Reproduce
1. Have 2 tabs and a paused upload.
2. Open the "cancel" dialog in one of the tabs.
3. Continue the upload in the other tab.
4. Confirm the cancellation in the first tab.

The upload disappears from both tabs, but based on network traffic the upload keeps happening. (If the upload finishes, the claim seems to get created.)
I yanked out the parseURI part in a prior commit ... the comment misled me into thinking it was redundant. But it had another hidden function: handling abandoned claims, for which `claim` will be `null`.
* Route recommendation search to recsys 5% of the time + add `user_id`
## Ticket
334 send some recommended requests to recsys
## Approach
`doSearch`:
- If the search options include `related_to`, route the call to the new `searchRecommendations`, which performs the 5% check and appends `user_id` at the end. This way, we don't need to alter the function signature of `doSearch`. (See the sketch after this list.)
- Else, proceed as normal.
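A sketch of that routing, with `related_to` and `user_id` from the notes above; everything else (endpoint names, the injected search function) is assumed:

```js
const RECSYS_SHARE = 0.05; // 5% of recommendation searches go to recsys

function searchRecommendations(options, userId, searchFn) {
  const useRecsys = Math.random() < RECSYS_SHARE;
  const params = useRecsys ? { ...options, user_id: userId } : options;
  return searchFn(useRecsys ? 'recsys' : 'default', params);
}

function doSearch(options, userId, searchFn) {
  return options.related_to
    ? searchRecommendations(options, userId, searchFn) // recommendation path
    : searchFn('default', options); // regular search, unchanged
}
```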
* Always go to alt provider
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* Add remove_duplicates to tile/list claim_search except for Channel Page
This removes any duplicates from reposts.
* Re-activate the "Hide reposts" setting
* Category Rows: default to ['stream', 'repost'] unless specified otherwise.
* apiCall: add option to not send the auth header
## Why
We want an option to make unauthenticated `resolve` calls where appropriate, to improve caching.
## How
All `apiCall`s are authenticated by default, but when clients add NO_AUTH to the params, `apiCall` will exclude the X_LBRY_AUTH_TOKEN. It will also strip NO_AUTH from the param object before sending it out.
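A sketch of that behavior, assuming a JSON-RPC style proxy endpoint (the header and constant names follow the notes; the transport details are placeholders):

```js
const NO_AUTH = 'no_auth';

function apiCall(method, params, authToken) {
  const headers = { 'Content-Type': 'application/json' };
  const outgoing = { ...params };
  if (outgoing[NO_AUTH]) {
    delete outgoing[NO_AUTH]; // strip the marker before sending
  } else {
    headers['X-Lbry-Auth-Token'] = authToken; // authenticated by default
  }
  return fetch('/api/v1/proxy', {
    method: 'POST',
    headers,
    body: JSON.stringify({ method, params: outgoing }),
  });
}
```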
* Add hook for 'resolve' and 'claim_search' to check and skip auth...
... if the params do not contain anything that requires the wallet.
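A sketch of the check, where the wallet-dependent param list is an assumption based on the next item:

```js
// Only 'resolve' and 'claim_search' are eligible, and only when none of
// the wallet-dependent params are set.
const WALLET_PARAMS = ['include_purchase_receipt', 'include_is_my_output'];

function canSkipAuth(method, params) {
  const eligible = method === 'resolve' || method === 'claim_search';
  return eligible && !WALLET_PARAMS.some((p) => params[p]);
}
// Callers would then add NO_AUTH to the params when canSkipAuth(...) is true.
```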
* doResolveUri, doClaimSearch: let clients decide when to include_my_output
- No more hardcoding 'include_purchase_receipt' and 'include_is_my_output'
- doResolveUri: include these params when opening a file page. This was the only place that was doing that prior to this PR.
* is_my_output: use the signing_channel as alternative
## Notes
`is_my_output` is more expensive to resolve, so it is not being requested all the time.
## Change
Look at the signing channel as an additional fallback, on top of `myClaimIds`.
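A sketch of the fallback, assuming `myClaimIds` is a set of the user's claim IDs:

```js
function isClaimMine(claim, myClaimIds) {
  if (!claim) return false;
  if (myClaimIds.has(claim.claim_id)) return true;
  // Fallback: a claim signed by one of my channels is mine, even when
  // is_my_output wasn't requested during resolve.
  const channel = claim.signing_channel;
  return Boolean(channel && myClaimIds.has(channel.claim_id));
}
```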
## Aside
I think using `myClaimIds` here is redundant, as it is usually populated from `is_my_output`. But leaving it as is for now...
## Why
- No memo required (no transformation).
- `makeSelect*` is an incorrect pattern.
## Changes
- Replaced makeSelectClientSetting with selectClientSetting (pattern sketched below).
- Remove unused selectShowRepostedContent.
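To illustrate the pattern change, a sketch assuming a simple settings slice (`selectSettings` is a placeholder):

```js
import { createSelector } from 'reselect';

const selectSettings = (state) => state.settings.clientSettings;

// Before: a factory that builds a memoized selector per call site,
// even though there is no transformation worth memoizing.
const makeSelectClientSetting = (setting) =>
  createSelector(selectSettings, (settings) => settings[setting]);

// After: a plain parameterized selector; a direct lookup needs no memo.
const selectClientSetting = (state, setting) => selectSettings(state)[setting];
```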
## Issue
- Large resolve count (albeit batched) on bootup.
## Changes
- Skip the call on bootup. The same call will happen when you click the notification bell, so it's not too late to resolve at that time.
- Added `true` to `doResolveUris` to return cached results; otherwise it will keep resolving the same channels every time we enter the Notifications Page.
## Issue
One of the bottlenecks of livestream page.
The component probably needs a re-design:
- Don't perpetually mount -- only mount when activated by the user through "@". This would avoid the heavy processing entirely.
- Better way of resolving uris (too many arrays, too many loops).
- Tom also mentioned that we should not be resolving every commenter as we encounter them in a livestream. This is currently the case because the component is always mounted.
## Changes
Until the re-design occurs, attempt to cache the heavy processing. Also, trimmed down the amount of loops.
For the case of livestreams, the comments are added incrementally via websocket. The selector returns everything, which grows as a user watches the livestream.
We could even make it a bit more efficient by passing `maxCount` into `filterComments` and doing a `for` loop there (sketched below), but decided to keep things readable by not changing the `filter` usage.
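For reference, the skipped optimization would look roughly like this (names assumed):

```js
// Early-exit variant of filterComments: stops once maxCount comments pass
// the filter, which Array.prototype.filter cannot do.
function filterComments(comments, isVisible, maxCount = Infinity) {
  const visible = [];
  for (const comment of comments) {
    if (isVisible(comment)) {
      visible.push(comment);
      if (visible.length >= maxCount) break;
    }
  }
  return visible;
}
```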
- Memo not required. `resolvingUris` is very dynamic and is a short array anyways.
- Changed from using `indexOf` to `includes`, which is more concise.
## Changes
- doHandleSyncComplete: only call doGetAndPopulatePreferences when there is new data.
- But for that to work, we'll need to populate preferences at least once. We'll do that in doSignIn.
- We can also remove the "sync/prefs ready" mechanism that was mainly meant for Desktop.
Then came another problem: while trying to trigger changes between 2 tabs, `sync/get` was saying "no change" despite the local and server hashes being different. I think this is because the `sync_hash` + `sync/get` combo operates on server data, so the hashes are the same. I'm guessing this is why we ended up just running doGetAndPopulatePreferences every time prior to this PR, since the flag wasn't correct in this scenario.
- Updated `data.changed` to consider both the API result and a comparison with the local hash (sketched below).
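A sketch of the adjusted check (field names are placeholders for the actual API shape):

```js
// Trust "changed" only in combination with a local comparison: sync_hash
// and sync/get both look at server data, so the local hash can still
// differ even when the API reports "no change".
function hasNewSyncData(apiResult, localHash) {
  return Boolean(apiResult.changed) || apiResult.hash !== localHash;
}
```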
## Issue
When the muted list was being cleared from another app, the web version ended up restoring the previous muted list.
## Change
- As long as `blocked` is defined, return that since an empty array is a valid result.
- If undefined, something went wrong when calling the reducer, so retain the muted list. I believe this was the original intention of that line.
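A sketch of the guard (variable names assumed):

```js
// An empty array is a valid "cleared from another app" result; only fall
// back to the previous list when `blocked` is undefined (reducer error).
function resolveMutedList(blocked, previousMutedList) {
  return blocked !== undefined ? blocked : previousMutedList;
}
```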
* Exclude default homepage data at compile time
The youtuber IDs alone are pretty huge, and are unused in the `CUSTOM_HOMEPAGE=true` configuration.
* Remove Desktop items and other cleanup
- Moved constants out of the component.
- Remove SIMPLE_SITE check.
- Remove Desktop-only items.
* Sidebar: limit subscription and tag section
## Issue
Too slow for huge lists
## Change
Limit to 10 initially, and load everything on "Show more"
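A sketch of the limiting (the count comes from the note above; names are illustrative):

```js
const SIDEBAR_INITIAL_COUNT = 10;

// "Show more" flips `showAll`, so the full list only renders on demand.
function visibleSidebarItems(items, showAll) {
  return showAll ? items : items.slice(0, SIDEBAR_INITIAL_COUNT);
}
```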
* Fix makeSelectThumbnailForUri
- Fix memo
- Expose a function to extract the thumbnail directly from the claim if the client already has it.
## Issues with `makeSelectIsSubscribed`
- It will not return true if the uri provided is canonical, because the compared subscription uri is in permanent form. This was causing certain elements like the Heart to not appear in claim tiles.
- It is super slow for large subscription lists, not just because of the array size and it being a hot selector, but also because it looks up the claim twice (not memoized) and calls `parseURI` to determine whether it is a channel, which is unnecessary if you already have the claim.
## Changes
- Optimized the selector to only look up the claim once, and to operate on the already-obtained info (sketched below).
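A sketch of the optimized shape, assuming a claims-by-uri cache and a subscriptions array of permanent urls:

```js
function isSubscribed(state, uri) {
  const claim = state.claims.byUri[uri]; // single claim lookup
  if (!claim) return false;
  // Compare in permanent-url form so canonical inputs also match, and
  // derive the channel from the claim itself instead of calling parseURI.
  const channelUrl = claim.signing_channel
    ? claim.signing_channel.permanent_url // content claim: its channel
    : claim.permanent_url; // already a channel claim
  return state.subscriptions.some((sub) => sub.uri === channelUrl);
}
```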
## Issue
If you make 2 claims from the same source file, the second upload thinks it's trying to resume from the first one. They should be unique uploads.
## Approach
Stash the upload url for comparison when looking up existing uploads to resume.
Stash that in `params` to minimize code changes. We'll just need to ensure it is cleared before we generate the SDK payload.
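A sketch of both halves of that (names assumed): match on the stashed url when looking up resumable uploads, and strip it before building the SDK payload:

```js
// Lookup: two claims from the same source file stay distinct because the
// tus upload url, not just the file identity, must match.
function findResumableUpload(currentUploads, file, uploadUrl) {
  return Object.values(currentUploads).find(
    (u) => u.params.uploadUrl === uploadUrl && u.file.name === file.name
  );
}

// Payload: clear the stashed fields before handing params to the SDK.
function toSdkPayload(params) {
  const { uploadUrl, guid, ...payload } = params;
  return payload;
}
```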
## Why
Frequently used; top in perf profile
## Changes
Most of the time, you already have the claim object in the current context. `selectClaimIsMineForUri` will retrieve the claim again, which is wasteful, even if it is memoized (looking up the cache still takes time).
Broke apart the logic and added the alternative `selectClaimIsMine` for a faster lookup.
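A sketch of the split (the state shape is assumed):

```js
// Fast path: the caller already holds the claim object.
const selectClaimIsMine = (state, claim) =>
  Boolean(claim && state.claims.myClaimIds.has(claim.claim_id));

// Slower path, kept for callers that only have a uri: one extra lookup.
const selectClaimIsMineForUri = (state, uri) =>
  selectClaimIsMine(state, state.claims.byUri[uri]);
```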
* Publish button: use spinner instead of "Publishing..."
Looks better, plus the preview could take a while sometimes.
* Refactor `doPublish`. No functional change
This is to allow `doPublish` to accept a custom payload as an input (for resuming uploads), instead of always resolving it from the redux data.
* Add doPublishResume
* Support resume-able upload via tus
## Issue
38 Handle resumable file upload
## Notes
Since we can't serialize a File object, we'll need the user to re-select the file to resume.
* Exclude "modified date" for Firefox/Android
## Issue
It appears that the modification date of an Android file changes when it is selected, so the file was deemed "different" when trying to resume the upload.
## Change
Exclude modification date for now. Let's assume a smart user.
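tus-js-client accepts a custom `fingerprint` option; a sketch of excluding the modification date on the affected platforms (the user-agent check is a stand-in for however the platform is actually detected):

```js
// Build the resume fingerprint without lastModified on Firefox/Android,
// where the reported modification date is unstable.
function uploadFingerprint(file, options) {
  const unstableDate = /Firefox|Android/i.test(window.navigator.userAgent);
  const parts = ['tus', file.name, file.type, file.size, options.endpoint];
  if (!unstableDate) parts.push(file.lastModified);
  return Promise.resolve(parts.join('-'));
}

// Usage: new tus.Upload(file, { endpoint, fingerprint: uploadFingerprint, ... })
```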
* Move 'currentUploads' to 'publish' reducer
`publish` is currently rehydrated, so we can ride on that and don't need to store the `currentUploads` in `localStorage` for persistence. This would allow us to store Markdown Post data too, as `localStorage` has a 5MB limit per app.
We could have also made `webReducer` rehydrate, but in this repo there is no need to split it into another reducer. It also makes more sense to be part of publish anyway (at least to me).
This change is mostly moving items between files, with the exception of
1. An additional REHYDRATE in the publish reducer to clean up the tusUploader.
2. Not clearing `currentUploads` in CLEAR_PUBLISH (both sketched below).
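A sketch of both points, assuming redux-persist's REHYDRATE action and an in-memory `uploader` field holding the tus instance:

```js
const defaultState = { currentUploads: {} /* ...other publish fields */ };

function publishReducer(state = defaultState, action) {
  switch (action.type) {
    case 'persist/REHYDRATE': {
      const restored = (action.payload && action.payload.publish) || state;
      const currentUploads = {};
      for (const [key, upload] of Object.entries(restored.currentUploads || {})) {
        // The tus uploader instance can't be serialized; drop the stale one.
        currentUploads[key] = { ...upload, uploader: undefined };
      }
      return { ...restored, currentUploads };
    }
    case 'CLEAR_PUBLISH':
      // Reset the form, but keep in-flight uploads (point 2 above).
      return { ...defaultState, currentUploads: state.currentUploads };
    default:
      return state;
  }
}
```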
* Restore v1 code for livestream replay, etc.
v2 (tus) does not handle `remote_url`, so the app still needs v1 for that. Since we'll still have v1 code, use v1 for previews as well.
- memo not required.
- Start to move away from the confusing and wrongly-named 'selectCommentsByUri' (per a comment from Sean); use the existing 'selectClaimIdForUri' instead. This works because we currently never fetch comments without first visiting a claim/uri, so we'll always have fetched the required claim, which can then be queried via 'selectClaimIdForUri'.
It's technically incorrect and was sometimes causing the GUI not to update, because the reference did not change even though the array contents did. The GUI just happens to update most of the time due to other state changes.