## Issue
A huge list like http://localhost:9090/$/list/d91815c1bc8c80a1f354284a8c8e92d84d5f07a6 (193 items) often renders fewer items in the final list than it actually contains.
## Cause
The same full list of IDs was passed into `claim_search` on every call, with only the `page` incremented. However, `claim_search` does not appear to honor a stable order between calls, so some results from a previous page reappear in place of results on the next page. The total count is the same, but there are now duplicates.
## Fix
When batching, slice the ID list ourselves to bypass the problem for now.
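Roughly the idea (a minimal sketch, assuming a hypothetical `claimSearch` wrapper; names are illustrative, not the actual app code):
```
// Hypothetical wrapper around the SDK's claim_search; names are illustrative.
type Claim = { claim_id: string };

declare function claimSearch(params: { claim_ids: string[]; page_size: number }): Promise<{ items: Claim[] }>;

const PAGE_SIZE = 50;

// Slice the ID list ourselves so each request is unambiguous, instead of
// sending the full list plus an incrementing `page`.
async function fetchListClaims(ids: string[]): Promise<Claim[]> {
  const results: Claim[] = [];
  for (let i = 0; i < ids.length; i += PAGE_SIZE) {
    const batch = ids.slice(i, i + PAGE_SIZE);
    const { items } = await claimSearch({ claim_ids: batch, page_size: batch.length });
    results.push(...items);
  }
  return results;
}
```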
Commentron JSON params are usually snake_cased rather than camelCased.
Double-checked commentron code:
```
// MentionedChannel channels mentioned in comment
type MentionedChannel struct {
	ChannelName string `json:"channel_name"`
	ChannelID   string `json:"channel_id"`
}
```
The parameter should probably also be skipped instead of sending an empty array, but leaving it as-is for now since that is minor.
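For reference, the client-side payload would mirror those JSON tags (a rough illustration, not the exact app code):
```
// Keys mirror the Go struct's JSON tags: snake_case, not camelCase.
type MentionedChannel = {
  channel_name: string;
  channel_id: string;
};

// Hypothetical comment-create params; mentioned_channels follows commentron's naming.
const params: { comment: string; claim_id: string; mentioned_channels: MentionedChannel[] } = {
  comment: 'hello @example',
  claim_id: 'abcdef1234',
  mentioned_channels: [{ channel_name: '@example', channel_id: 'abcdef1234' }],
};
```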
* Refactor CommentBadge
* Refactor livestreamComment component
* Refactor and split livestreamComment CSS
* Refactor livestreamComments component
* Refactor and split livestreamComments CSS
* Remove never used spinner
* Refactor livestream Page
* Refactor page component
* Refactor livestreamLayout component
* Break apart livestreamComments into separate sibling components
- This separates concerns: LivestreamComments deals only with the comments, while LivestreamLayout can be used for its own Page as a Popout option and also for a layered approach on mobile
* Create Popout Chat Page, Add Popout Chat Menu Option
* Add Hide Chat option
* Socket improvements
* Websocket changes
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* Refactor doAbandonClaim parameters to only claim
- Gets txid and nout from the claim by default now; passing the whole claim makes more data available to verify ownership in case the txid:nout lookup fails again (see the sketch below)
- Unused on modalRemoveCard
- Edited the comment on doCollectionDelete to explain better
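A rough sketch of the new shape (types and action names are illustrative, not the actual app code):
```
// Illustrative types; the real Claim shape comes from the SDK.
type Claim = { claim_id: string; txid: string; nout: number };

declare function abandonTx(params: { txid: string; nout: number }): Promise<void>;

// Passing the whole claim lets the thunk derive txid/nout itself and keeps
// extra data around to verify ownership if the txid:nout lookup fails.
function doAbandonClaim(claim: Claim, cb?: () => void) {
  return async (dispatch: (action: { type: string; data?: unknown }) => void) => {
    const { txid, nout } = claim;
    await abandonTx({ txid, nout });
    dispatch({ type: 'ABANDON_CLAIM_SUCCEEDED', data: { claimId: claim.claim_id } });
    if (cb) cb();
  };
}
```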
* Fix doAbandonClaim failing to select my claim
* Add ordering Icons
* Refactor doCollectionEdit
- It required claims as a parameter, but only uris are used to populate the collection, so it now takes the uris directly (see the sketch below).
- There were unused and mostly unnecessary functions inside: for example, the claimIds parameter was never used, so the claimSearch call (which would in turn have generated uris) was never reached. It's simpler to just take uris as the parameter.
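Roughly what the new signature looks like (parameter names are illustrative):
```
// Illustrative sketch: the edit takes uris directly instead of claims.
type CollectionEditParams = {
  uris?: string[];                       // uris to add to or replace in the collection
  remove?: boolean;                      // whether the given uris should be removed instead
  order?: { from: number; to: number };  // reordering info
  name?: string;
};

function doCollectionEdit(collectionId: string, params: CollectionEditParams) {
  return (dispatch: (action: { type: string; data: unknown }) => void) => {
    dispatch({ type: 'COLLECTION_EDIT', data: { collectionId, ...params } });
  };
}
```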
* Add List Reordering changes
* Add toggle button for list editing
* Add toggle on content page collection sidebar
* Enable drag-n-drop to re-order list items
https://www.youtube.com/watch?v=aYZRRyukuIw
* Allow removing all unavailable claims from a List
* Fix <g> on icons
* Fix section buttons positioning
* Move preventDefault and stopPropagation to the buttons div instead of each button, so that clicking (even on a disabled button) does not open the claim
* Change dragging cursor
* Fix sizing
* Fix dragging component
* Restrict dragging to vertical axis
* Ignore shuffle state for ordering
* Fix console errors
* Mobile fixes
* Fix sidebar spacing
* Fix grey on mobile after click
- activeChannelClaim is the comment creator, not receiver.
- doSeeNotifications marks a notification as "seen", and not "send a notification".
- Removed comments that are just explaining the syntax.
## Issue
- When signing up, the "channel suggestions" page was stuck because `prefsReady` was never set as `preference_get` was never called.
- It was never called due to the optimizations to skip it when there is no difference between the local and server wallet.
## Change
- The first ever `sync/get` will result in a "no wallet" error, and there is already a `catch` to handle it. But the change in 38c13cf5 caused the `catch` to be skipped, going directly to `sync_apply` instead. Although the `catch` does the same thing (`sync_apply`), it also has an additional callback that calls `preference_get`.
- Fixed by ensuring this scenario ends up in the `catch` block like it was originally intended.
- We also did some optimization in the callback to skip the final `preference_get` if there is no difference in hash. But for the case of signing up, we do want to run it (so that `prefsReady` and other stuff gets initialized), so pass `hasNewData = true` to the callback.
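Roughly, the intended flow (function names and the callback flag here are illustrative, not the exact app code):
```
declare function syncGet(localHash: string): Promise<{ changed: boolean; data: string }>;
declare function syncApply(data?: string): Promise<void>;

async function doSync(localHash: string, onComplete: (hasNewData: boolean) => void) {
  try {
    const { changed, data } = await syncGet(localHash);
    await syncApply(changed ? data : undefined);
    onComplete(changed); // skip the final preference_get when nothing changed
  } catch (e) {
    // The first ever sync/get fails with "no wallet": apply, then force the
    // callback to fetch preferences so `prefsReady` and friends get initialized.
    await syncApply();
    onComplete(true); // hasNewData = true
  }
}
```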
## Issue
In the original Desktop code, new strings encountered at runtime are automatically added to the local `app-strings.json` file. The feature is unavailable in Web because writing to a file would require explicit permissions.
## Change
Partially restore the functionality by saving the strings to memory and retrieving them from the console via `copy(window.new_strings)`. It's a little bit of manual work, but I think that is good as it forces a sanity check before committing (previously, experimental/development strings were committed and ended up being translated in Transifex).
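Conceptually it looks something like this (a minimal sketch, assuming a lookup table loaded from app-strings.json):
```
const knownStrings: Record<string, string> = {}; // loaded from app-strings.json

function translate(message: string): string {
  if (!(message in knownStrings)) {
    // Stash unseen strings in memory; pull them later with copy(window.new_strings).
    const w = window as unknown as { new_strings?: Record<string, string> };
    w.new_strings = w.new_strings || {};
    w.new_strings[message] = message;
  }
  return knownStrings[message] || message;
}
```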
- Query for that tag in the upcoming section
- Improve special tag organization
- Filter out the internal tags in the UI
- Add missing types to publish state
## Issue
- Slow for users with many channels.
- The need to do it in Sync Loop is more applicable to Desktop.
## Approach
- Just remove it from sync loop.
- It will still be called when entering `/$/channels`, so that's an alternative to fix things when it is out of sync.
## Issue
In 38c13cf5, an additional `doGetAndPopulatePreferences` was added in `doSignIn` to ensure we have populated the preferences at least once. With that, `doHandleSyncComplete` can skip `doGetAndPopulatePreferences` if there is no change in the hash.
The addition broke the "initial sync lock", thus incorrectly allowing users to change preferences when the process is not completed.
## Change
I think the additional call is no longer needed since we now store a local hash for comparison, so `doGetAndPopulatePreferences` wouldn't be incorrectly skipped in the first ever `doHandleSyncComplete`.
## Repro
1. Follow a channel.
2. When `preference_set` is sent, unfollow the channel.
3. A few seconds later, the final setting reflects (1) instead of (2).
The current sync loop involves doing a final `sync/get` at the end. While not necessary for the scenario above, the code flow covers various cases, so it's still needed for now.
## Approach
Implement an abort mechanism to the sync-loop.
When syncing from the `buildSharedStateMiddleware` loop, generate an ID for each sync session, and only store the latest one. Pass the ID to the completion-callback (and other places as needed), so we can check if our session is still the latest one before executing the callback.
The check for invalidation can also be placed in more places to cut off the sync process earlier, but it's only done for 2 critical places for now.
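A minimal sketch of the invalidation idea (names are illustrative):
```
let latestSyncId = 0;

function startSyncSession(): number {
  latestSyncId += 1;
  return latestSyncId;
}

declare function runSync(): Promise<void>;

async function syncFromMiddleware(onComplete: () => void) {
  const sessionId = startSyncSession(); // a newer session invalidates this one
  await runSync();
  if (sessionId !== latestSyncId) {
    return; // a later sync superseded us; skip the completion callback
  }
  onComplete();
}
```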
## Ticket
Part of "#385 Defer api.odysee.com calls to their respective pages / install new"
## Change
Pull out `notification/categories` from `doNotificationList` and fetch it in Notifications Page. I don't think there are other places that need it for now.
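Something along these lines on the page side (hypothetical thunk and hook, for illustration only):
```
import { useEffect } from 'react';

declare function doNotificationCategories(): unknown; // hypothetical thunk creator

// Fetch categories only when the Notifications page mounts and nothing is cached yet.
function useNotificationCategories(dispatch: (thunk: unknown) => void, categories?: unknown[]) {
  useEffect(() => {
    if (!categories) {
      dispatch(doNotificationCategories());
    }
  }, [categories, dispatch]);
}
```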
* Add ability to have claim searches auto-fetch up to 3 pages (see the sketch below).
* make total_items and total_pages optional
* use auto pagination strategy when determining live claim
* Bump page size back to 50
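The auto-pagination roughly works like this (a sketch; names are illustrative):
```
type ClaimSearchResult = { items: unknown[]; total_pages?: number };

declare function claimSearch(params: Record<string, unknown> & { page: number }): Promise<ClaimSearchResult>;

const MAX_AUTO_PAGES = 3;

async function claimSearchAutoPaginate(params: Record<string, unknown>): Promise<unknown[]> {
  const items: unknown[] = [];
  for (let page = 1; page <= MAX_AUTO_PAGES; page++) {
    const result = await claimSearch({ ...params, page });
    items.push(...result.items);
    // total_pages is optional now; stop early when we know there are no more pages.
    if (result.items.length === 0) break;
    if (result.total_pages !== undefined && page >= result.total_pages) break;
  }
  return items;
}
```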
* Upload: fix redux key clash
## Issue
`params` is the "final" value that will be passed to the SDK and `channel` is not a valid argument (it should be `channel_name`). Also, it seems like we only pass the channel ID now and skip the channel name entirely.
For the anonymous case, a clash will still happen, since the channel part is hardcoded to `anonymous`.
## Approach
Generate a guid in `params` and use that as the key to handle all the cases above. We couldn't use the `uploadUrl` because v1 doesn't have it.
The old formula is retained to allow users to retry or cancel their existing uploads one last time (otherwise it will persist forever). The next upload will be using the new key.
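A minimal sketch of the keying change (names are illustrative):
```
// Key each upload by a guid generated into `params`, instead of deriving the
// key from channel + file name (which clashes for anonymous uploads).
type UploadParams = { guid?: string; channel_id?: string; file_path?: string };

function generateGuid(): string {
  // Simple random ID for illustration; the real app may use a proper uuid helper.
  return Math.random().toString(36).slice(2) + Date.now().toString(36);
}

function getUploadKey(params: UploadParams): string {
  if (!params.guid) {
    params.guid = generateGuid();
  }
  return params.guid;
}
```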
* Upload: add tab-locking
## Issue
- The previous code did detect uploads from multiple tabs, but it did so by handling the CONFLICT error message from the backend. In certain corner cases this does not work well. A better way is to not allow resumption while the same file is being uploaded from another tab.
- When an upload from one tab finishes, the GUI on the other tab does not remove the completed item. Users either have to refresh or click Cancel, and clicking Cancel results in a 404 backend error. This should be avoided.
## Approach
- Added tab synchronization and locking by passing the "locked" and "removed" information through `localStorage`.
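A rough sketch of the localStorage handshake (key and helper names are illustrative):
```
const LOCK_KEY = 'upload-locks';

function readLocks(): Record<string, string> {
  try {
    return JSON.parse(window.localStorage.getItem(LOCK_KEY) || '{}');
  } catch (e) {
    return {};
  }
}

// Mark an upload as locked by this tab before resuming it.
function lockUpload(uploadKey: string, tabId: string) {
  const locks = readLocks();
  locks[uploadKey] = tabId;
  window.localStorage.setItem(LOCK_KEY, JSON.stringify(locks));
}

// Another tab owns the lock, so resumption (or deletion) should be blocked here.
function isLockedByAnotherTab(uploadKey: string, tabId: string): boolean {
  const owner = readLocks()[uploadKey];
  return Boolean(owner) && owner !== tabId;
}

// Other tabs get notified of lock/remove changes via the 'storage' event.
window.addEventListener('storage', (e) => {
  if (e.key === LOCK_KEY) {
    // re-evaluate which uploads are locked or removed
  }
});
```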
## Other considered approaches
- Wallet sync -- but decided not to pollute the wallet.
- 3rd-party redux tab syncing -- but decided it's not worth adding another module for 1 usage.
* Upload: check if locked before confirming delete
## Reproduce
1. Have 2 tabs + a paused upload.
2. Open the "cancel" dialog in one of the tabs.
3. Continue the upload in the other tab.
4. Confirm the cancellation in the first tab.

Upload disappears from both tabs, but based on network traffic the upload keeps happening. (If the upload finishes, the claim seems to get created.)
* Route recommendation search to recsys 5% of the time + add `user_id`
## Ticket
334 send some recommended requests to recsys
## Approach
`doSearch`:
- If the search options include `related_to`, route that to the new `searchRecommendations` which performs the 5% check + appends `user_id` at the end. This way, we don't need to alter the function signature of `doSearch`.
- Else, proceed as normal.
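The routing decision is roughly (a sketch; function names are illustrative):
```
declare function searchApi(query: string, options: Record<string, unknown>): Promise<unknown>;
declare function searchRecommendations(query: string, options: Record<string, unknown>, userId: string): Promise<unknown>;

const RECSYS_SAMPLE_RATE = 0.05;

function routeSearch(query: string, options: Record<string, unknown>, userId: string) {
  if (options.related_to && Math.random() < RECSYS_SAMPLE_RATE) {
    // 5% of related_to searches go to recsys, with user_id appended.
    return searchRecommendations(query, options, userId);
  }
  return searchApi(query, options);
}
```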
* Always go to alt provider
Co-authored-by: Thomas Zarebczan <thomas.zarebczan@gmail.com>
* apiCall: add option to not send the auth header
## Why
Want an option to make un-authenticated `resolve` calls where appropriate, to improve caching.
## How
All `apiCall`s are authenticated by default, but when clients add NO_AUTH to the params, `apiCall` will exclude the X_LBRY_AUTH_TOKEN. It will also strip NO_AUTH from the param object before sending it out.
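Roughly (a sketch, not the actual implementation; the endpoint URL is a placeholder and the constant/header values are assumptions):
```
const NO_AUTH = 'no_auth';
const X_LBRY_AUTH_TOKEN = 'X-Lbry-Auth-Token';

function apiCall(method: string, params: Record<string, unknown>, authToken: string) {
  const headers: Record<string, string> = { 'Content-Type': 'application/json' };

  if (params[NO_AUTH]) {
    delete params[NO_AUTH]; // strip the marker before the request goes out
  } else {
    headers[X_LBRY_AUTH_TOKEN] = authToken; // authenticated by default
  }

  return fetch('https://example.com/api/proxy', {
    method: 'POST',
    headers,
    body: JSON.stringify({ jsonrpc: '2.0', method, params }),
  });
}
```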
* Add hook for 'resolve' and 'claim_search' to check and skip auth...
... if the params do not contain anything that requires the wallet.
* doResolveUri, doClaimSearch: let clients decide when to include_my_output
- No more hardcoding 'include_purchase_receipt' and 'include_is_my_output'
- doResolveUri: include these params when opening a file page. This was the only place that was doing that prior to this PR.
* is_my_output: use the signing_channel as alternative
## Notes
`is_my_output` is more expensive to resolve, so it is not being requested all the time.
## Change
Looking at the signing channel as the additional fallback, on top of `myClaimIds`.
## Aside
I think using `myClaimIds` here is redundant, as it is usually populated from `is_my_output`. But leaving as is for now...
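In selector terms, the fallback is roughly (illustrative sketch, not the actual selector):
```
type Claim = {
  claim_id: string;
  is_my_output?: boolean;
  signing_channel?: { claim_id: string };
};

function claimIsMine(claim: Claim, myClaimIds: Set<string>, myChannelIds: Set<string>): boolean {
  if (claim.is_my_output) return true;
  if (myClaimIds.has(claim.claim_id)) return true;
  // Fallback: a claim signed by one of my channels is treated as mine.
  return Boolean(claim.signing_channel && myChannelIds.has(claim.signing_channel.claim_id));
}
```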
## Why
- No memo required (no transformation).
- `makeSelect*` is an incorrect pattern.
## Changes
- Replaced makeSelectClientSetting with selectClientSetting.
- Removed unused selectShowRepostedContent.
## Issue
- Large resolve count (albeit batched) on bootup.
## Changes
- Skip the call on bootup. The same call will happen when you click the notification bell, so it's not too late to resolve at that time.
- Added `true` to `doResolveUris` so it returns cached results; otherwise it would keep resolving the same channels every time we enter the Notifications Page.
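For illustration (the flag name here is an assumption, not the actual signature):
```
declare function doResolveUris(uris: string[], returnCachedClaims?: boolean): unknown;

// Resolve only when the notification bell is clicked, and reuse cached results
// on subsequent visits to the Notifications Page.
function fetchNotificationChannels(dispatch: (thunk: unknown) => void, uris: string[]) {
  dispatch(doResolveUris(uris, true));
}
```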