wip: a few updates so far.
(the code is not cleaned up yet, especially the DB-related part)

1. Separate claim nodes from the Trie into NodeMgr (Node Manager).
   The Trie is mainly responsible for resolving the MerkleHash.
   The Node Manager, which manages all the claim nodes, implements the
   KeyValue interface:

       type KeyValue interface {
           Get(Key) error
           Set(Key, Value) error
       }

   When the Trie traverses to a Value node, it consults the KV with the
   prefix to get the value, which is the Hash of the Best Claim.

2. Versioned / snapshot-based / copy-on-write Merkle Trie.
   Every resolved trie node is saved to the TrieDB (leveldb) with its
   Hash as the Key and its content as the Value.
   The content has the following format (a serialization sketch follows
   at the end of this message):

       Char (1B) Hash (32B)    (0 to 256 entries)
       VHash (32B)             (0 or 1 entry)

   The nodes are immutable and content (hash) addressable, which gives
   the benefit of de-duplication for free.

3. The NodeManager implements Replay and can reconstruct any past state.
   After experimenting with Memento vs. Replay on the real mainnet
   dataset, I decided to go with Replay (at least for now) for a few
   reasons:

   a. Concurrency and usability.
      In the real-world scenario, the ClaimTrie is always working on the
      tip of the chain to accept Claim Scripts, update its own state, and
      generate the Hash.
      On the other hand, most client requests are interested in a past
      state with a minimal number of confirmations required.
      With Memento, the ClaimTrie has to either:

      a. Pin down the node, and likely the ClaimTrie itself as well,
         since it doesn't have the latest state (in terms of the whole
         Trie) to resolve the Hash; undo the changes, then redo them
         after serving the request.
      b. Copy the current state of the node and roll that copy back to
         serve the request in the background.

      With Replay, the ClaimTrie can simply spin off a background task
      without any pause.
      The history of the nodes is immutable and read-only, so there is
      no contention in reconstructing a node.

   b. Negligible performance difference.
      Most nodes have only a few commands to play back.
      The playback time is negligible, and is dominated by I/O if the
      node was flushed to disk.

   c. Simplicity.
      Implementing undo requires saving more state changes along the
      way, and has to pay much more attention to the bidding rules.
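   For illustration only, a minimal sketch of serializing that node
   content; the entry layout follows the description in point 2, but the
   type and field names here are hypothetical, not the actual claimtrie
   types:

       // childEntry is one (Char, Hash) pair of a resolved trie node.
       type childEntry struct {
           Char byte     // 1 byte: branch character
           Hash [32]byte // 32 bytes: hash of the child node
       }

       // marshalNodeContent appends 0 to 256 child entries followed by
       // an optional VHash (the hash of the value stored at this node).
       func marshalNodeContent(children []childEntry, vHash *[32]byte) []byte {
           buf := make([]byte, 0, len(children)*33+32)
           for _, c := range children {
               buf = append(buf, c.Char)
               buf = append(buf, c.Hash[:]...)
           }
           if vHash != nil {
               buf = append(buf, vHash[:]...)
           }
           return buf
       }
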
package nodemgr

import (
	"fmt"
	"sort"

	"github.com/lbryio/claimtrie/change"
	"github.com/lbryio/claimtrie/claim"
	"github.com/lbryio/claimtrie/trie"

	"github.com/pkg/errors"
	"github.com/syndtr/goleveldb/leveldb"
)

// NodeMgr manages all the claim nodes and serves the Trie as its
// KeyValue store.
type NodeMgr struct {
	height      claim.Height
	db          *leveldb.DB
	cache       map[string]*claim.Node
	nextUpdates todos
}

// New returns a NodeMgr backed by the given leveldb instance.
func New(db *leveldb.DB) *NodeMgr {
	nm := &NodeMgr{
		db:          db,
		cache:       map[string]*claim.Node{},
		nextUpdates: todos{},
	}
	return nm
}
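
// A minimal usage sketch (illustrative only, not part of the package):
//
//	db, _ := leveldb.OpenFile(path, nil)
//	nm := New(db)
//	nm.Load(ht)                   // reconstruct all nodes up to height ht
//	v := nm.Get([]byte("a-name")) // latest state of the name, as a trie.Value
//	nm.CatchUp(ht+1, func(key []byte) {
//		// names whose state changes at this height
//	})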

// Load loads the nodes from the database up to height ht.
func (nm *NodeMgr) Load(ht claim.Height) {
	nm.height = ht
	iter := nm.db.NewIterator(nil, nil)
	for iter.Next() {
		name := string(iter.Key())
		nm.cache[name] = nm.load(name, ht)
	}
}

// Get returns the latest node with name specified by key.
func (nm *NodeMgr) Get(key []byte) trie.Value {
	return nm.nodeAt(string(key), nm.height)
}

// Reset resets all nodes to specified height.
func (nm *NodeMgr) Reset(ht claim.Height) {
	nm.height = ht
	for name, n := range nm.cache {
		if n.Height() >= ht {
			nm.cache[name] = nm.load(name, ht)
		}
	}
}

// Size returns the number of nodes loaded into the cache.
func (nm *NodeMgr) Size() int {
	return len(nm.cache)
}
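
// load reconstructs the node for name by replaying its change list,
// loaded from the database and truncated to height ht.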
func (nm *NodeMgr) load(name string, ht claim.Height) *claim.Node {
	c := change.NewChangeList(nm.db, name).Load().Truncate(ht).Changes()
	return NewFromChanges(name, c, ht)
}

// nodeAt returns the node adjusted to specified height.
func (nm *NodeMgr) nodeAt(name string, ht claim.Height) *claim.Node {
	n, ok := nm.cache[name]
	if !ok {
		n = claim.NewNode(name)
		nm.cache[name] = n
	}

	// Cached version is too new.
	if n.Height() > nm.height || n.Height() > ht {
		n = nm.load(name, ht)
	}
	return n.AdjustTo(ht)
}

// ModifyNode applies chg to the node of the given name at the current
// height and persists the change to the node's change list.
func (nm *NodeMgr) ModifyNode(name string, chg *change.Change) error {
	ht := nm.height
	n := nm.nodeAt(name, ht)
	n.AdjustTo(ht)
	if err := execute(n, chg); err != nil {
		return errors.Wrapf(err, "claim.execute(n,chg)")
	}
	nm.cache[name] = n
	nm.nextUpdates.set(name, ht+1)
	change.NewChangeList(nm.db, name).Load().Append(chg).Save()
	return nil
}

// CatchUp advances the node manager to height ht, notifying the caller
// of every name scheduled for an update at that height and rescheduling
// names whose next update lies further ahead.
func (nm *NodeMgr) CatchUp(ht claim.Height, notifier func(key []byte)) {
	nm.height = ht
	for name := range nm.nextUpdates[ht] {
		notifier([]byte(name))
		if next := nm.nodeAt(name, ht).NextUpdate(); next > ht {
			nm.nextUpdates.set(name, next)
		}
	}
}

// Show is a convenience function for debugging and development purposes.
// The proper way to handle user requests would be a query function with
// filters specified.
func (nm *NodeMgr) Show(name string, ht claim.Height, dump bool) error {
	names := []string{}
	if len(name) != 0 {
		names = append(names, name)
	} else {
		for name := range nm.cache {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	for _, name := range names {
		n := nm.nodeAt(name, ht)
		if n.BestClaim() == nil {
			continue
		}
		fmt.Printf("[%s] %s\n", name, n)
		if dump {
			change.NewChangeList(nm.db, name).Load().Truncate(ht).Dump()
		}
	}
	return nil
}

// NewFromChanges replays chgs on a fresh node of the given name and
// adjusts it to height ht.
func NewFromChanges(name string, chgs []*change.Change, ht claim.Height) *claim.Node {
	return replay(name, chgs).AdjustTo(ht)
}
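
// replay rebuilds a node from scratch by applying each change in order,
// advancing the node to the height just before the change before
// executing it.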
func replay(name string, chgs []*change.Change) *claim.Node {
	n := claim.NewNode(name)
	for _, chg := range chgs {
		if n.Height() < chg.Height-1 {
			n.AdjustTo(chg.Height - 1)
		}
		if n.Height() == chg.Height-1 {
			if err := execute(n, chg); err != nil {
				panic(err)
			}
		}
	}
	return n
}
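
// execute applies a single change to the node, dispatching on the
// change's command (add/spend/update claim, add/spend support).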
func execute(n *claim.Node, c *change.Change) error {
	var err error
	switch c.Cmd {
	case change.AddClaim:
		err = n.AddClaim(c.OP, c.Amt, c.Value)
	case change.SpendClaim:
		err = n.SpendClaim(c.OP)
	case change.UpdateClaim:
		err = n.UpdateClaim(c.OP, c.Amt, c.ID, c.Value)
	case change.AddSupport:
		err = n.AddSupport(c.OP, c.Amt, c.ID)
	case change.SpendSupport:
		err = n.SpendSupport(c.OP)
	}
	return errors.Wrapf(err, "chg %s", c)
}
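
// todos records, per height, the set of names whose nodes need an
// update when the chain reaches that height.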
type todos map[claim.Height]map[string]bool
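
// set marks name as needing an update at height ht.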
func (t todos) set(name string, ht claim.Height) {
	if t[ht] == nil {
		t[ht] = map[string]bool{}
	}
	t[ht][name] = true
}