implemented hard fork to include nonwinning claims in hash #209
Reference: LBRYCommunity/lbrycrd#209
No description provided.
It also uses Merkle trees for hashing children and claims post-fork, which significantly reduces the output of `getnameproof`.

Notable things to look at:
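To illustrate why a Merkle tree shrinks `getnameproof` output: proving one claim's membership needs only one sibling hash per tree level (O(log n)) instead of every claim hash (O(n)). Below is a minimal, self-contained sketch of folding per-claim hashes into a root. It is not the PR's actual construction: `HashPair` and `MerkleRoot` are hypothetical names, `std::hash` stands in for the real double-SHA256 so the sketch runs without a crypto library, and the pair-the-last-node-with-itself padding rule is an assumption borrowed from Bitcoin's classic Merkle layout.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Stand-in for the real node hash (double-SHA256 in lbrycrd); std::hash is
// used only so this sketch compiles and runs on its own.
std::string HashPair(const std::string& left, const std::string& right)
{
    return std::to_string(std::hash<std::string>{}(left + right));
}

// Fold a list of per-claim hashes into a single Merkle root. When a level
// has an odd number of nodes, the last hash is paired with itself
// (an assumption for this sketch, matching Bitcoin's classic rule).
std::string MerkleRoot(std::vector<std::string> hashes)
{
    if (hashes.empty())
        return std::string();
    while (hashes.size() > 1) {
        if (hashes.size() % 2 != 0)
            hashes.push_back(hashes.back());
        std::vector<std::string> next;
        for (size_t i = 0; i < hashes.size(); i += 2)
            next.push_back(HashPair(hashes[i], hashes[i + 1]));
        hashes = next;
    }
    return hashes[0];
}
```

A proof for one leaf then consists of the sibling hash at each level, which the verifier recombines with `HashPair` up to the root.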
@@ -1265,1 +1342,4 @@
cachedNode = cache.find("");
if (cachedNode != cache.end())
currentNode = cachedNode->second;
for (std::string::const_iterator itCur = name.begin(); itCur != name.end(); ++itCur)
`stringstream` is the slowest of the C++ string-manipulation options; better not to use it in time-expensive functions. You can use `string` + `char` concatenation instead, which is faster.
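The contrast the reviewer is drawing can be shown in two lines. The function names here are illustrative, not from the PR; the point is that `std::string(1, c)` (or `str += c`) builds the same string without constructing a stream object:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// The stringstream round-trip being criticized (one possible shape of it):
// constructing a stream just to turn a char into a string.
std::string CharToStringSlow(char c)
{
    std::stringstream ss;
    ss << c;
    return ss.str();
}

// The suggested replacement: construct the one-character string directly.
std::string CharToStringFast(char c)
{
    return std::string(1, c);
}
```

Both produce identical results; the second avoids the stream's allocation and locale machinery.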
@@ -1265,3 +1344,4 @@
currentNode = cachedNode->second;
for (std::string::const_iterator itCur = name.begin(); itCur != name.end(); ++itCur)
{
std::string sCurrentSubstring(name.begin(), itCur);
`count` searches the container just to produce a tally, and the code still has to search again to use the element; better to use `find` and compare against `end()`.

You can make the 2nd string parameter a const ref, here and 4 lines below.
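Both suggestions can be sketched together. The functions and the `std::map<std::string, int>` cache shape are hypothetical stand-ins for the trie cache; the point is that `find` hands back an iterator that can be reused, and that `const std::string&` avoids copying the key on every call:

```cpp
#include <cassert>
#include <map>
#include <string>

// The pattern being flagged: count() answers "is it there?", but any
// follow-up access must search the container a second time.
bool HasNodeWithCount(const std::map<std::string, int>& cache,
                      const std::string& key) // const ref: no copy per call
{
    return cache.count(key) > 0;
}

// The suggested replacement: find() answers the same question and also
// yields the iterator, so the caller can use the element without re-searching.
bool HasNodeWithFind(const std::map<std::string, int>& cache,
                     const std::string& key)
{
    return cache.find(key) != cache.end();
}
```

In the real code the iterator from `find` would then be dereferenced directly, eliminating the second lookup entirely.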
@@ -1265,1 +1342,4 @@
cachedNode = cache.find("");
if (cachedNode != cache.end())
currentNode = cachedNode->second;
for (std::string::const_iterator itCur = name.begin(); itCur != name.end(); ++itCur)
Sorry, the `stringstream` was there in the existing method that I copied to start with. There are quite a few of those in the file, unfortunately. I completely agree with you that it's both slow and a horribly asinine way to convert a char to a string. I've been dragging my feet a little on completely refactoring the trie searches (i.e., moving them all into a single search method that all these other methods call); I've been waiting for the upstream merge to happen before doing any major refactoring on the trie. That would eliminate all the `stringstream` calls.
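The shared search method described here might look something like the following. This is a guess at its shape, not code from the PR: `FindNodeForName` is a hypothetical name, the cache is simplified to `std::map<std::string, int>` (prefix to node id) where the real trie nodes are richer, and the inclusive-prefix walk is an assumption based on the loop shown in the diff hunks above:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical shared trie search: walk the cache by successive prefixes of
// `name`, starting from the root (the empty-string key), so that the various
// get*/prove* methods can call this instead of each duplicating the loop.
// Returns the node id for `name`, or -1 if any prefix is missing.
int FindNodeForName(const std::map<std::string, int>& cache,
                    const std::string& name)
{
    std::map<std::string, int>::const_iterator it = cache.find("");
    if (it == cache.end())
        return -1; // no root node cached
    int node = it->second;
    for (std::string::const_iterator itCur = name.begin();
         itCur != name.end(); ++itCur) {
        const std::string prefix(name.begin(), itCur + 1);
        it = cache.find(prefix);
        if (it == cache.end())
            return -1; // name not present in the trie
        node = it->second;
    }
    return node;
}
```

Centralizing the walk like this is what would let the per-char string building be fixed once rather than in every caller.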
@@ -1265,3 +1344,4 @@
currentNode = cachedNode->second;
for (std::string::const_iterator itCur = name.begin(); itCur != name.end(); ++itCur)
{
std::string sCurrentSubstring(name.begin(), itCur);
Again, both the `stringstream` and the `count` were in the previous method; I wasn't focused on those. If I do another round of changes on this story, I'll remove them, though.
@@ -1265,3 +1344,4 @@
currentNode = cachedNode->second;
for (std::string::const_iterator itCur = name.begin(); itCur != name.end(); ++itCur)
{
std::string sCurrentSubstring(name.begin(), itCur);
Yeah, I saw that, but it's a good time to fix it.
`CClaimTrieCache::insertClaimIntoTrie` needs to be updated to detect whether there were any ordering changes.

I would like to see a better name for `recursiveComputeMerkleHashBinaryTree`, or some comments/docstrings on what it's supposed to do.

We should clarify how the claims are ordered when hashed. It looks to me like the winning claim is always in the first position of the Merkle tree (that is good, because it is still important to be able to prove that a claim is the winning one). It also looks like the non-winning claims are sorted from highest effective amount to lowest. I would make sure that this is the case (are claims ever stored out of order?).
For the claim sorting, the code has been adjusted to do a full, stable sort instead of a heapify. The sort operator is complete and has been for a while (see `CClaimValue::operator<`). However, I think there is a scenario where we might end up with out-of-order claims: when loading blocks that were created before the addition of the full sort. Claims aren't re-sorted on load. We have been discussing doing this, though, as we've been considering the right way to recreate the effective amount in this same scenario (it isn't serialized at present).

We've had new requirements on this. It now needs to implement all necessary support for this: https://www.notion.so/lbry/Q-A-on-RPC-results-verification-44888d23efa3475a90eec997f9bf3103. It also needs to return the `claim_sequence` field in all claim RPC calls.
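The ordering property under discussion, with the winning claim landing at index 0 and the rest following by descending effective amount, can be sketched as a full stable sort. This is a simplified illustration, not the PR's code: `Claim` carries only the two fields the discussion touches, and the height tiebreaker in the comparator is an assumption for the sketch (the real ordering lives in `CClaimValue::operator<`):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Simplified claim: only the fields the ordering discussion touches.
struct Claim {
    long long nEffectiveAmount;
    int nHeight; // tiebreaker below is an assumption for this sketch

    bool operator<(const Claim& other) const
    {
        if (nEffectiveAmount != other.nEffectiveAmount)
            return nEffectiveAmount > other.nEffectiveAmount; // highest first
        return nHeight < other.nHeight; // earlier claim wins ties
    }
};

// A full stable sort puts the winning claim (largest effective amount) at
// index 0, with non-winning claims following in descending order; stability
// preserves the relative order of claims the comparator considers equal.
std::vector<Claim> SortClaims(std::vector<Claim> claims)
{
    std::stable_sort(claims.begin(), claims.end());
    return claims;
}
```

If claims are always hashed in this order, the winning claim's position in the Merkle tree is fixed, which is what makes a "this claim is the winner" proof possible.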
Pull request closed