The Chainquery sync needs to be done in batches #142
Right now, if starting from scratch, the sync will grab all claims from chainquery at once. This transfers a huge amount of data, and recent events have shown it will crash lighthouse at the volume of claims we currently have on the chain. There are a few open PRs right now that require a full resync of the claims, so this batching needs to be done first.
As part of the process, we need to do a full sync on another machine (@nikooo777) and then move DNS to that new machine, but only after this batch processing is complete.
Should lighthouse just be using its own independent chainquery instance?
That might be a good idea, but it's not what's causing the initial sync issue Mark mentioned here.
No, I think that would be a waste. We are barely using chainquery right now (as far as capacity goes). This problem is just a volume issue for lighthouse. When we grab claims we need to use pagination so node doesn't pull too much information at once and crash with an out-of-memory exception.
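For reference, a rough sketch of what that pagination could look like. This is not the actual lighthouse code; `queryChainquery` and `indexBatch` are hypothetical helpers standing in for the chainquery query layer and the Elasticsearch bulk insert:

```js
// Minimal sketch of paginated claim syncing, assuming hypothetical helpers.
const BATCH_SIZE = 5000; // grab claims in chunks so node never holds the whole chain in memory

async function syncAllClaims(queryChainquery, indexBatch) {
  let offset = 0;
  while (true) {
    // LIMIT/OFFSET pagination keeps each result set small enough to avoid OOM
    const claims = await queryChainquery(
      `SELECT * FROM claim ORDER BY id LIMIT ${BATCH_SIZE} OFFSET ${offset}`
    );
    if (claims.length === 0) break; // no more claims, sync complete
    await indexBatch(claims);       // e.g. bulk-insert this batch into the search index
    offset += claims.length;
  }
}
```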
I added a batching process to only grab 5000 claims at a time. This should make things much easier on node and allow @nikooo777 to continue to rebuild the lighthouse machine so we can merge the other PR too. Solved with 81986315cb.