## Issue
211 - CSS load-order problem
## Notes
It is unlikely that we'll need to support different brands in the future, so we're simplifying the code and reducing the number of files so that we don't have to handle the various import paths. This will probably make things easier for the css-splitting work too.
## Issue
238 Comments: author-name not highlighted when in Channel Community tab
## Changes
- Channel claims don't have a signing channel. Use `getChannelFromClaim`, which handles both content and channel claims.
* Don't update 'pendingById' if no changes.
'pendingById' isn't frequently updated, but it is used here as a proof-of-concept for how reducers should be written to avoid unnecessary updates (a sketch of the pattern follows below).
ImmutableJS apparently does all of this for us, but it has its own drawbacks, so we're using our own wrappers for now.
* Don't update 'byId' if no changes + add 'selectClaimWithId'
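A minimal sketch of the no-change guard described in these two commits, assuming a hypothetical action shape and `shallowEqual` helper; the actual reducers in the repo may differ:

```js
// Hypothetical reducer guard: only produce a new 'pendingById' reference
// when the incoming entries actually differ from what is already stored.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  return aKeys.length === bKeys.length && aKeys.every((k) => a[k] === b[k]);
}

function claimsReducer(state, action) {
  if (action.type === 'UPDATE_PENDING_CLAIMS') {
    const nextPendingById = { ...state.pendingById, ...action.data.pendingById };
    // Bail out with the same state object if nothing changed, so selectors
    // that depend on 'pendingById' keep their memoized results.
    if (shallowEqual(nextPendingById, state.pendingById)) return state;
    return { ...state, pendingById: nextPendingById };
  }
  return state;
}
```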
## Ticket
116 Claim store optimization ideas (reducing unnecessary renders)
## Changes
- Ignore fields like `confirmations` so that already-fetched claims aren't invalidated, which would cause re-rendering. The `stringify` comparison might look expensive, but the number of avoided re-renders outweighs the cost. There might be faster ways to compare, though.
- With `byId[claimId]` references now more stable, memoized selectors can use 'selectClaimWithId' to depend on a specific claim instead of 'byId', which changes on every update (see the sketch below).
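A rough sketch of how a selector might depend on a single claim rather than the whole `byId` map; names other than `selectClaimWithId` are illustrative, not the repo's exact code:

```js
import { createSelector } from 'reselect';

// Plain extraction -- returns the same reference for as long as this one
// claim's entry in 'byId' is untouched.
const selectClaimWithId = (state, claimId) => state.claims.byId[claimId];

// Hypothetical derived selector: it only recomputes when that specific claim
// reference changes, not on every unrelated 'byId' update.
const makeSelectClaimTitle = (claimId) =>
  createSelector(
    (state) => selectClaimWithId(state, claimId),
    (claim) => (claim && claim.value ? claim.value.title : undefined)
  );
```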
* Fix memo: selectMyChannelClaims, selectActiveChannelClaim
## Issue
These should never recalculate after `channel_list` has been fetched, but they do because of a poor selector dependency.
## Change
With the `byId` changes from the previous commit, we are now able to memoize these selectors correctly.
## Ticket
223 Add ability for delegated moderators to delete comments
## Changes
- Refactored doCommentAbandon's signature so we don't end up converting between "uri" and "Claim" several times and needing to look up redux when the client can already provide the exact values that we need.
- Pass the new moderator fields to the API.
- Remove the need to call 'makeSelectChannelPermUrlForClaimUri' since it's a simple field query when we already have the claim.
I tried to use event.preventDefault on the click handler, but that didn't work. So instead I'm using the CSS rule 'pointer-events: none' to disable click events on the player while it is being dragged.
https://github.com/OdyseeTeam/odysee-frontend/issues/206
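A minimal sketch of the approach, assuming hypothetical drag handlers; the real floating-player component likely toggles this from its own drag logic:

```js
// While the floating player is being dragged, suppress click events on it
// via 'pointer-events: none', then restore them when the drag ends.
function onDragStart(playerEl) {
  playerEl.style.pointerEvents = 'none';
}

function onDragEnd(playerEl) {
  playerEl.style.pointerEvents = '';
}
```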
- selectMyChannelClaims depends on `byId`, which is currently invalidated on every update, so it is effectively not memoized.
- Most of the use-cases just need the ID or the length of the array anyway, so avoid generating a Claim array (in selectMyChannelClaims) unnecessarily -- the client would only need to reduce it back down to IDs again :/
- The simpler boolean also removes the need to memoize the selector, which saves a bit of memory.
* Fix error logs
* Improve LBC sticker flow/clarity
* Show inline error if custom sticker amount below min
* Sort emojis alphabetically
* Improve loading of Images
* Improve quality and display of emojis and fix CSS
* Display both USD and LBC prices
* Default to LBC tip if creator can't receive USD
* Don't clear text-field after sticker is sent
* Refactor notification component
* Handle notifications
* Don't show profile pic on sticker livestream comments
* Change Sticker icon
* Fix wording and number rounding
* Fix blurring emojis
* Disable non-functional emote buttons
`selectMyActiveClaims` includes pending claims and depends on `byId`, which gets invalidated on each resolve, so using it as an input selector breaks memoization.
For the case of comment-filtering, we don't really care about pending or abandoned own claims (I think), so just grab the raw IDs.
## Issue
There was one instance of ModalError that wasn't wrapped in the Suspense.
## Fix
- Moved `getModal` outside to make the code cleaner. Due to the length of `getModal`, I didn't notice the early return statement.
- Fix ModalError's Suspense.
* Prevent multiple parseURI calls
## Ticket
129
## Issue
Code was shortened to use `isURIValid` during the consolidation. `isURIValid` calls `normalizeURI`, which in turn calls `parseURI` again.
`parseURI` is pretty expensive.
## Approach
- Add an optional parameter to `isURIValid` to skip the normalization (see the sketch below).
- Set the call sites that were converted during the consolidation to skip the normalization. Also covered a few other instances where it is obvious to me that normalization is not required.
- For the rest, I can't tell for sure if it's safe to remove the normalization, so the default `normalize=true` leaves things as is.
The whole `parseURI` probably needs a refactoring, or a few lighter versions for specific needs.
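A rough sketch of the optional-normalization idea, assuming `parseURI` throws on invalid input; the real helpers may differ in detail:

```js
// Hypothetical shape of the helper: callers that already hold a canonical
// URI can pass normalize = false and skip the extra parseURI call that
// normalizeURI performs internally.
function isURIValid(uri, normalize = true) {
  try {
    parseURI(normalize ? normalizeURI(uri) : uri);
    return true;
  } catch (e) {
    return false;
  }
}
```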
* Simplify isURIEqual
## Issue
`parseURI` is too expensive to be used in a loop, plus `normalizeURI` itself calls `parseURI`.
## Approach
Not sure if it covers all cases, but just try converting colons to hashes before comparing.
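A minimal sketch of the simplified comparison, assuming the URIs only differ in the claim-ID separator (`:` vs `#`); this is illustrative rather than the exact implementation:

```js
// Simplified equality check: canonical and legacy LBRY URIs differ mainly in
// whether the claim ID is separated by ':' or '#', so normalizing that one
// character avoids a full parseURI call per comparison.
function isURIEqual(uriA, uriB) {
  const normalize = (uri) => (uri || '').replace(/:/g, '#');
  return normalize(uriA) === normalize(uriB);
}
```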
## Issue
95% of `secondary.js` is unused code.
- It was meant to reduce network overhead by chunking up files needed after bootup, and also to reduce the number of `vendor-*.js` files.
- But it ended up accidentally grabbing everything, defeating the purpose of code-splitting.
Added a re-usable "yes/no" confirmation modal where the client just sets the question string and gets a callback when "OK" or "Cancel" is clicked.
It doesn't make sense to create one modal for each confirmation, especially when the modal is only used in one place.
Replaced one of the existing modals as an example.
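A hedged sketch of what client-side usage might look like; the action name, constant, and import paths here are assumptions for illustration:

```js
import * as MODALS from 'constants/modal_types'; // assumed path
import { doOpenModal } from 'redux/actions/app'; // assumed path

// Open the shared confirmation modal with a question and an onConfirm
// callback, instead of defining a dedicated modal component for this case.
function confirmRemoveComment(dispatch, commentId) {
  dispatch(
    doOpenModal(MODALS.CONFIRM, {
      title: 'Remove comment',
      subtitle: 'Are you sure you want to remove this comment?',
      onConfirm: (closeModal) => {
        dispatch(doCommentAbandon(commentId)); // action mentioned elsewhere in this changelog
        closeModal();
      },
    })
  );
}
```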
* Refactor filePrice
* Refactor Wallet Tip Components
* Add backend sticker support for comments
* Add stickers
* Refactor commentCreate
* Add Sticker Selector and sticker comment creation
* Add stickers display to comments and hyperchats
* Fix wrong checks for total Super Chats
## Issue
- Go to a channel page
- Go to Wild West
- Back
- Expand the search filter (valid here)
- Forward
## Fix
Resolve the 'expanded' setting on mount to ensure it is never true when 'hideAdvancedFilter' is set.
4a22814c broke the intention of the if-block (it essentially breaks the functionality in the Search page if we enable view counts there in the future).
It also seems completely unrelated to the PR.
* various control bar fixes
* fixes for mobile
* hide advertisement div by default
* fix duration bar
* more frontend touchups
* more styles
* fix for advertisement bar showing
* don't use ima on each re-render
## Issue
We previously reloaded automatically when there was a chunk error. This works fine when the failure is due to new code being pushed while the user was active. But if the failure was caused by something else, like network problems or the file actually being missing, we end up in an infinite loop of refreshes.
## New approach
Tell the user to reload instead of automatically doing it.
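A rough sketch of the new behaviour, with the toast helper name assumed for illustration:

```js
// Hypothetical lazy-import wrapper: on a chunk-load failure, do not call
// window.location.reload() automatically (it can loop forever); surface a
// message and let the user decide when to refresh.
function importWithReloadHint(importFn, showErrorToast) {
  return importFn().catch((err) => {
    showErrorToast('Failed to load part of the app. Please refresh the page to try again.');
    throw err;
  });
}

// Usage: importWithReloadHint(() => import('./modal'), showErrorToast)
```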
* fix type error
fix is-subscribed check
- Persist subscription data locally
- add / remove subscription during log in / out
- Use store directly in hook
Add toast error if subscription fails
Revert removal of v2
hotfix linting issue
Add custom notification handler
- fix isSupported flag
- make icon color compatible with light/dark theme
- fix icon on notifications blocked banner
wip: add push notification banner to notifications page.
- ignore failed deletions via internal API
- add ua parsing package
- add more robust meta data to token save
refactor naming + add push toggle to notification button
shift some code around
update css naming to proper BEM notation
update notifications UI
remove now unneeded util function
Update push notification system to use firebase sdk
separate service worker webpack bundling
update service worker to use firebase sdk
Add firebase config
Add firebase and remove filemanager
Stub out the basics for browser push notifications.
* fix safari
* try smaller image for badge
* add token validation with server, refactor code
* remove param
* add special icon for web notification badge
* add translations
* add missing trans for toast error
* add pushRequest method that will not prompt users who have subscribed but have since disabled notifications in the settings.
## Issue
- Each tile was checking against 4 blocklists (blacklisted, filtered, muted, commentron) on every render. Loading the front-page with Cheese alone caused 1400 calls.
- This is also part of the reason why pressing Back into the tile list takes forever.
## Fix
Since we still need to perform the checks on the app side for now, the operation is memoized through a selector.
Now, with the exception of connecting to lbry.com after re-opening the browser (i.e. establishing first connection), refreshing odysee.com is almost instantaneous.
## Issue
85 Add additional GA events
## Approach
Instead of placing analytics calls all over the GUI code (no separation of concerns), route them through a redux middleware.
## Changes
- Updated GA event and parameter naming after understanding how reporting works.
- Removed unused analytics.
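A minimal sketch of the middleware idea, with the event map and sender assumed for illustration; the real middleware will differ:

```js
// Map selected redux action types to GA events so UI components never call
// analytics directly (keeps the separation of concerns).
const ACTION_TO_GA_EVENT = {
  COMMENT_CREATE_COMPLETED: 'comment_create',
  SUPPORT_TRANSACTION_COMPLETED: 'tip_send',
};

// Assumed sender that forwards to gtag(); the real one may differ.
function sendGaEvent(name, params) {
  if (window.gtag) window.gtag('event', name, params);
}

const analyticsMiddleware = (store) => (next) => (action) => {
  const result = next(action);
  const eventName = ACTION_TO_GA_EVENT[action.type];
  if (eventName) {
    sendGaEvent(eventName, {}); // derive parameters from action.data as needed
  }
  return result;
};
```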
Only picking components that are involved in a livestream for now. Ideally, all usages of `makeSelectClaimForUri` should be replaced -- will do it incrementally.
followedTags:
- Moved the filtering to the reducer side, so that we don't do it every time. We can't rely on `createSelector` because the store will be invalidated on each `USER_STATE_POPULATE`, unfortunately.
tags:
- Memoize via re-reselect for the "ForUri" selector.
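A rough sketch of the re-reselect pattern for a per-URI selector, with the state shape assumed for illustration:

```js
import createCachedSelector from 're-reselect';

// Each uri gets its own memoized cache entry, so updating one uri's data
// doesn't invalidate the memoized result of every other component instance.
const selectTagsForUri = createCachedSelector(
  (state, uri) => uri,
  (state) => state.tags.tagsByUri,
  (uri, tagsByUri) => tagsByUri[uri] || []
)((state, uri) => uri);
```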
## Issue
`makeSelectDataForUri` always returns a new reference, so `ClaimPreview` was constantly being re-rendered. It's pretty expensive since `ClaimPreview`'s rendering checks against a huge blocklist, which is another issue on its own.
## Changes
- This commit tests the usage of `re-reselect` as the solution to the multi-instance memoization problem (https://github.com/toomuchdesign/re-reselect/blob/master/examples/1-join-selectors.md)
## Issue
When comments are refreshed, each `Comment` gets rendered 4-5 times due to reference invalidation for `othersReacts` (the data didn't actually change).
## Change
For selectors without transformation, there is no need to memoize using `createSelector` -- just access it directly. Also, don't do things like `return a[id] || {}` in a reducer, because the reference to the empty object will be different on each call.
Always return directly from the state so that the same reference is returned.
This simple change avoided the wasted resources needed for `createSelector`, and reduced the renders to just 2 (initial render, and when reactions are fetched).
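A small sketch of the difference, with the state shape and names assumed for illustration:

```js
// Problematic: a fresh object is created whenever the id is missing, so
// strict-equality checks see a "new" value on every call.
const selectOthersReactsBad = (state, id) => state.comments.othersReactsById[id] || {};

// Preferred: return straight from the state, or fall back to a module-level
// constant, so unchanged data keeps the exact same reference between calls.
const EMPTY_REACTS = {};
const selectOthersReacts = (state, id) => state.comments.othersReactsById[id] || EMPTY_REACTS;
```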
## Issue
Components render unnecessarily due to reference invalidation from the `selectMutedChannels` selector.
## Notes
`selectMutedChannels` runs and returns a new reference each time the app gains focus. `createSelector` will not help in this case, because we are indeed invalidating the data in the store in `USER_STATE_POPULATE`.
## Changes
- Don't update the state if the array is identical in content (see the reducer sketch below).
- Fixed `selectMutedChannels` to return the reference from the store, so `createSelector` is not needed.
- Also, the filtering is not needed because we've already done it in the reducer.
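A minimal sketch of the no-op update guard in the reducer, with the action and field names assumed for illustration:

```js
// Keep the previous array reference when the synced muted list has the same
// content, so selectors and components that depend on it don't re-run.
function sameContent(a, b) {
  return a.length === b.length && a.every((item, i) => item === b[i]);
}

function blockedReducer(state = { mutedChannels: [] }, action) {
  if (action.type === 'USER_STATE_POPULATE') {
    const next = (action.data.blockedChannels || []).filter(Boolean);
    return sameContent(state.mutedChannels, next) ? state : { ...state, mutedChannels: next };
  }
  return state;
}
```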
## Comments
I've done some profiling on large blocklists. The time needed for the array comparison is still an order of magnitude lower than the time needed to render all the components that got incorrectly marked by this.
The ideal solution is for the sync code to return a hash or timestamp of the array, so that we can compare that instead of the array.
- It is recommended to use "lowercase + underscore format" for events to keep things neat, since the dashboard will be mixed with Automated and Recommended events.
- GA4's event structure is no longer the same as UA's, and the recommendation is to restructure rather than trying to mimic the old pattern.
- Always check the Recommended events to see if there is an equivalent, and use the exact name. GA4 might add automated features for these events in the future, and we'll benefit from it without code changes and invalidating existing data.
- pageView: use default snippet behavior instead of manually sending
Start converting to GA4...
- Outbound clicks are automatically handled.
Reverted/restored stuff from the following lbry-desktop commits, with minimal modifications (trying to keep the diffs clean for future reference):
- lbry-desktop@5008972
- lbry-desktop@7fe88d8
## Issue
Now that we batch-resolve the comment authors before displaying the comments, the linked-comment scrolling logic no longer works well with nested replies.
## Change
Previously, I didn't want to put the logic at the lowest level (`Comment`) because it was hard for the child to know whether to scroll or not. For example, we don't want to scroll when the user changes the comment filters or presses the Refresh Comments button.
Relented and moved the logic to `Comment`, passing a flag via `window` (I know this is frowned upon by some) to indicate whether scrolling is needed.
This is probably more efficient overall as we don't need to scan the DOM, and with minimal delay as we scroll immediately after the linked-comment is mounted.
## Known issues
In markdown posts with lots of images, a layout shift due to delayed inline-image fetching can cause the scrolling to be inaccurate. This should be fixed by reserving space for markdown post images.
## Issue
60 setting.Get calls spiked since October
It was called 24 times per livestream page load.
## Notes
The effect was intended to be a one-time effect, but the dependency was changed in 2f4dedfb
* re-enable preload ads
* switch macro to aniview
* point towards test server
* improving documentation
* bugfix and turn skip back on
* only run twenty percent of the time for unauthed users
* allow for embeds
* enable show internal feature
* working prototype
* seems to work well
* bugfix
* review old aniview setup
* change to production channelid
* final touchups
This reverts commit caadd889ce, reversing
changes made to 8b2c7a2b21.
## Issue
- Infinite `resolve` loop when deleted channel is present in the comments.
- Since we only displayed comments with resolved channels, it masked away those comments. While that may or may not be regarded as a defect, I think we should do the filtering at Commentron instead of in the app if we want to filter deleted channels. I vote to show comments from deleted channels, since they might contain good conversation threads.
* Fix CSS for live chat embeds
* Fix Markdown Lists in Comments
* Disable copy link menu option on livestream comments
* Fix nested indents in Live Chat
* Fix mentions and timestamps not parsed in bullet lists
* Highlight livestream comment and menu button on hover
* Fix mention parsing
## Issue
44 tor browser crash related to recsys?
## Reproduce the exact error
Block the request for `me|new` in dev tools
## Fix
The code was trying to destructure a null object.
The existing code seems to indicate that null ID is expected (it uses null as fallback), so this change shouldn't impact recsys results (I didn't check the recsys docs to confirm).
## Issue
40 Linked comments doesn't scroll for deep replies
## Notes
We don't need an effect for this; it was also causing the parent to not pick it up for auto-scrolling.
## Issues
- The current version of the link handler doesn't seem able to control the livestream player's position.
- The "live" position is always 0:00 and everything behind it is a negative timestamp. The current timestamp parser doesn't handle negative values.
* adding functionality to detect user download speed
* calculating bandwidth speed more intelligently
* saving download speed and updating it every 30s
* all the functionality should be done; needs testing
* fix linting
* use a 1mb file for calculating bandwidth
* add optional chaining plugin to babel and get bitrate from texttrack
* allow optional chaining for flow
* ignore flow error
* disable bandwidth checking functionality
* fix flow error
* Add option to pass in url-search params.
Impetus: allow setting the linked comment ID and the discussion tab when clicking on the `ClaimPreview`.
* comment.list: fix typos and renamed variables
- Switch from 'author' to 'creator' to disambiguate between comment author and content author. For comment author, we'll use 'commenter' from now on.
- Corrected 'commenterClaimId' to 'creatorClaimId' (just a typo, no functional change).
* doCommentReset: change param from uri to claimId
This removes one lookup, as clients will always have the claim ID ready but might not have the full URI.
It was previously using the URI just to match the other APIs.
* Add doCommentListOwn -- command to fetch own comments
Since the redux slice is set up based on content or channel ID (for Channel Discussion page), re-use the channel ID for the case of "own comments". We always clear each ID when fetching page-0, so no worries of conflict when actually browsing the Channel Discussion page.
* Comment: add option to hide the actions section
* Implement own-comments page
* Use new param to remove sort-pins-first.
comment.List currently always pushes pins to the top to support pagination. This new param removes this behavior.
I think this is the best solution so far, at the expense of a slight delay in scrolling if the network call stalls.
- Added "fetching by ID" state so that we don't need to use the ugly N-retries method.
- `scrollIntoView` doesn't work if the element is already in the viewport, and the `scrollBy` adjustment doesn't take into account the y-position restoration that we perform on certain types of pages. Use `window.scrollTo` instead, taking the current scroll position into account (see the sketch below).
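A rough sketch of the scrolling approach, with the header offset treated as an assumed constant:

```js
// Compute the element's absolute page offset and scroll the window there
// directly; this works even when the element is already inside the viewport
// and regardless of any earlier scroll-position restoration.
function scrollToComment(el, headerOffset = 80) {
  const top = el.getBoundingClientRect().top + window.scrollY - headerOffset;
  window.scrollTo({ top, behavior: 'smooth' });
}
```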
## Issue
.../archives/C02FQBM00Q0/p1633044695010600
## Changes
When querying a search key, it has to be an exact match. This was broken by the insertion of `free_only` in the fetch.
Added a function to generate the options, so that all clients stay in sync.
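A small sketch of the idea: the fetch and the lookup both derive their key from one shared helper, so an option added in one place (like `free_only`) cannot silently break the other. Names and fields are illustrative:

```js
// Shared options builder used by both the action (fetch) and the selector
// (lookup), keeping the derived search key identical on both sides.
function buildSearchOptions(query, extra = {}) {
  return { query, free_only: true, ...extra };
}

const getSearchKey = (options) => JSON.stringify(options);
```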
* Add Channel Mention selection ability
* Fix mentioned user name being smaller than other text
* Improve logic for locating a mention
* Fix mentioning with enter on livestream
* Fix breaking for invalid URI query
* Handle punctuation after mention
* Fix name display and appearance
* Use canonical url
* Fix missing search
* fix issue where viewcounts were creating a new line
* conditionally add large view css
* conditionally apply class based on if view count should be shown
* last couple touchups
* clean up the css
* add scss to flow config
* add scss component to flow config
It was being recalculated repeatedly.
This memoizes it, although it still re-calculates occasionally despite none of the source arrays changing. I think it is due to the state change in Preference Sync.
Note: input selectors to `createSelector` need to be extraction-only (i.e. they must not perform transformations). I think most of our `makeSelect*` selectors violate this and break memoization.
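A small sketch of the distinction, with the state shape assumed for illustration:

```js
import { createSelector } from 'reselect';

// Bad: the input selector transforms the data, returning a new array on every
// call, so the memoized result function re-runs on every state change.
const selectFollowedTagsBad = createSelector(
  (state) => state.tags.followedTags.map((t) => t.toLowerCase()),
  (tags) => tags
);

// Good: input selectors only extract; the transformation happens in the result
// function, which re-runs only when the extracted reference changes.
const selectFollowedTags = createSelector(
  (state) => state.tags.followedTags,
  (followedTags) => followedTags.map((t) => t.toLowerCase())
);
```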
## Issue
7176
## Changes
Pitfalls of pausing render via React.memo:
- We'll miss the `doClaimSearch()` since that is sparked by a `useEffect`.
Seems like we can't avoid having a redundant copy of the previously-displayed URIs.
## Ticket
7165 homepage queries don't take into account blocked channel ids (mute does)
## Changes
resolveSearchOptions: was not grabbing redux data correctly.
* ❌ Remove old method of displaying active livestreams
Completely remove it for now to make the commit deltas clearer.
We'll replace it with the new method at the end.
* Fetch and store active-livestream info in redux
* Tiles can now query active-livestream state from redux instead of getting it from the parent.
* ⏪ ClaimTilesDiscover: revert and cleanup
## Simplify
- Simplify to just `uris` instead of having multiple arrays (`uris`, `modifiedUris`, `prevUris`)
- The `prevUris` is for CLS prevention. With this removal, the CLS issue is back, but we'll handle it differently later.
- Temporarily disable the view-count fetching. Code is left there so that I don't forget.
## Fix
- `shouldPerformSearch` was never true when `prefixUris` was present. Corrected the logic.
- Aside: prefix and pin are so similar in function. Hm ....
* ClaimTilesDiscover: factor out options
## Change
Move the `options` code outside and pass it in as a pre-calculated prop.
## Reason
To skip rendering while waiting for `claim_search`, we need to add `React.memo(areEqual)`. However, the flag that determines if we are fetching `claim_search` (fetchingClaimSearchByQuery[]) depends on the derived options as the key.
Instead of calculating `options` twice, we moved it to the props so both sides can use it.
It also makes the component a bit more readable.
The downside is that the prop-passing might not be clear.
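A rough sketch of the memoization idea, with prop names assumed for illustration:

```js
import React from 'react';

// Placeholder standing in for the real ClaimTilesDiscover component.
function ClaimTilesDiscover(props) {
  return React.createElement('div', null, (props.uris || []).length);
}

// While a claim_search for the current options is still in flight, report
// "equal" so React skips the re-render and the previous tiles stay on screen.
function areEqual(prevProps, nextProps) {
  const key = JSON.stringify(nextProps.options);
  if (nextProps.fetchingClaimSearchByQuery[key]) return true;
  return prevProps.uris === nextProps.uris && prevProps.options === nextProps.options;
}

export default React.memo(ClaimTilesDiscover, areEqual);
```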
* ClaimTilesDiscover: reduce ~17 renders at startup to just 2.
* ClaimTilesDiscover: fill with placeholder while waiting for claim_search
## Issue
Livestream claims are fetched separately, so they might already exist. While claim_search is running, the list only consists of livestreams (collapsed).
## Fix
Fill up the space with placeholders to prevent layout shift.
* Add 'useFetchViewCount' to handle fetching from lists
This effect also stashes fetched uris, so that we won't re-fetch the same uris during the same instance (e.g. during infinite scroll).
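A minimal sketch of such a hook, assuming a `doFetchViewCount` action that accepts a list of uris; the real hook's parameters may differ:

```js
import { useEffect, useRef } from 'react';
import { useDispatch } from 'react-redux';

// Fetch view counts for the given uris, remembering which ones this component
// instance already requested so infinite-scroll appends don't re-fetch the
// earlier pages.
export default function useFetchViewCount(uris = [], doFetchViewCount) {
  const dispatch = useDispatch();
  const fetched = useRef(new Set());

  useEffect(() => {
    const newUris = uris.filter((uri) => uri && !fetched.current.has(uri));
    if (newUris.length > 0) {
      newUris.forEach((uri) => fetched.current.add(uri));
      dispatch(doFetchViewCount(newUris));
    }
  }, [uris, doFetchViewCount, dispatch]);
}
```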
* ⏪ ClaimListDiscover: revert and cleanup
## Revert
- Removed the 'finalUris' stuff that was meant to "pause" visual changes when fetching. I think it'll be cleaner to use React.memo to achieve that.
## Alterations
- Added `renderUri` to make it clear which array this component will render.
- Re-do the way we fetch view counts now that 'finalUris' is gone. Not the best method, but at least correct for now.
* ClaimListDiscover: add prefixUris, similar to ClaimTilesDiscover
This will be initially used to append livestreams at the top.
* ✅ Re-enable active livestream tiles using the new method
* doFetchActiveLivestreams: add interval check
- Added a default minimum of 5 minutes between fetches. Clients can bypass this through `forceFetch` if needed.
* doFetchActiveLivestreams: add option check
We'll need to support different 'orderBy' values, so add an "options check" when determining if we just made the same fetch (see the sketch below).
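A rough sketch covering both the interval guard and the options check, with the state fields and threshold named for illustration:

```js
// Hypothetical thunk guard: skip the network call if an identical fetch
// happened less than 5 minutes ago, unless forceFetch is set.
const FETCH_INTERVAL_MS = 5 * 60 * 1000;

function doFetchActiveLivestreams(orderBy = ['release_time'], forceFetch = false) {
  return (dispatch, getState) => {
    const { lastFetchedDate, lastFetchedOptions } = getState().livestream;
    const sameOptions = JSON.stringify(lastFetchedOptions) === JSON.stringify({ orderBy });

    if (!forceFetch && sameOptions && Date.now() - lastFetchedDate < FETCH_INTERVAL_MS) {
      return; // recent enough -- reuse what's already in the store
    }
    // ... dispatch the actual claim_search fetch here ...
  };
}
```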
* WildWest: limit livestream tiles + add ability to show more
Most likely this behavior will change in the future, so we'll leave `ClaimListDiscover` untouched and handle the logic at the page level.
This solution uses 2 `ClaimListDiscover` instances -- if the reduced livestream list is visible, it handles the header; else the normal list handles the header.
* Use better tile-count on larger screens.
Used the same method as how the homepage does it.