package claim

import (
	"crypto/sha256"
	"encoding/binary"
	"strconv"

	"github.com/btcsuite/btcd/chaincfg/chainhash"
)
wip: a few updates so far.

(The code is not cleaned up yet, especially the DB-related part.)

1. Separate the claim nodes from the Trie into the NodeMgr (Node Manager).

   The Trie is mainly responsible for resolving the MerkleHash.
   The Node Manager, which manages all the claim nodes, implements
   the KeyValue interface:

       type KeyValue interface {
           Get(Key) error
           Set(Key, Value) error
       }

   When the Trie traverses to a Value node, it consults the KV with
   the prefix to get the value, which is the Hash of the Best Claim.
   (A sketch of this interaction follows after this list.)
2. Versioned / snapshot-based / copy-on-write Merkle Trie.

   Every resolved trie node is saved to the TrieDB (leveldb) with its
   Hash as the Key and its content as the Value.

   The content has the following format:

       Char (1B) Hash (32B)    (0 to 256 entries)
       VHash (32B)             (0 or 1 entry)

   The nodes are immutable and content(hash)-addressable, which gives
   the benefit of de-duplication for free. (An encoding sketch follows
   after this list.)
3. The NodeManager implements Replay, and can reconstruct any past state.

   After experimenting with Memento vs. Replay on the real dataset from
   the mainnet, I decided to go with Replay (at least for now) for a few
   reasons (a sketch follows after this list):

   a. Concurrency and usability.

      In the real-world scenario, the ClaimTrie is always working on
      the tip of the chain to accept Claim Scripts, update its own
      state, and generate the Hash. On the other hand, most client
      requests are interested in a past state with a minimal number of
      confirmations required.

      With Memento, the ClaimTrie has to either:

      a. Pin down the node, and likely the ClaimTrie itself as well,
         since it doesn't have the latest state (in terms of the whole
         Trie) to resolve the Hash. It must undo the changes, serve
         the request, and then redo the changes.

      b. Copy the current state of the node and roll that copy back to
         serve the request in the background.

      With Replay, the ClaimTrie can simply spin off a background task
      without any pause. The history of the nodes is immutable and
      read-only, so there is no contention in reconstructing a node.

   b. Negligible performance difference.

      Most of the nodes have only a few commands to play back. The
      time to play back is negligible, and is dominated by I/O if the
      node was flushed to disk.

   c. Simplicity.

      Implementing undo requires saving more state changes during the
      process, and has to pay much more attention to the bidding rules.
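For point 1, a minimal sketch of the Node Manager side of such a
KeyValue-style store. Everything here is hypothetical (a map-backed
stand-in, concrete Key/Value types, and a Get elaborated to return the
looked-up value, since the interface above abbreviates it); it assumes
the standard errors package is imported.

    // Hypothetical sketch, not the actual NodeMgr: Key is the name
    // prefix accumulated during trie traversal; Value is the Hash of
    // the Best Claim for that name.
    type Key []byte
    type Value []byte

    type nodeMgrSketch struct {
        nodes map[string]Value
    }

    // Get returns the Hash of the Best Claim stored under the prefix.
    // (Returning the value alongside the error is an elaboration of
    // the abbreviated Get(Key) error signature above.)
    func (m *nodeMgrSketch) Get(k Key) (Value, error) {
        v, ok := m.nodes[string(k)]
        if !ok {
            return nil, errors.New("no node for prefix")
        }
        return v, nil
    }

    // Set records the Hash of the Best Claim for the prefix.
    func (m *nodeMgrSketch) Set(k Key, v Value) error {
        m.nodes[string(k)] = v
        return nil
    }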
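For point 2, a sketch of encoding that content layout (childEntry and
encodeTrieNode are hypothetical names; only the byte format itself
comes from the description above; chainhash.HashSize is 32):

    // childEntry pairs a child character with the child node's hash,
    // matching the "Char (1B) Hash (32B)" entries of the format.
    type childEntry struct {
        Char byte
        Hash chainhash.Hash
    }

    // encodeTrieNode encodes a trie node as stored in the TrieDB:
    // 0 to 256 child entries, then an optional VHash when the node
    // also carries a value.
    func encodeTrieNode(children []childEntry, vhash *chainhash.Hash) []byte {
        buf := make([]byte, 0, len(children)*(1+chainhash.HashSize)+chainhash.HashSize)
        for _, c := range children {
            buf = append(buf, c.Char)
            buf = append(buf, c.Hash[:]...)
        }
        if vhash != nil {
            buf = append(buf, vhash[:]...)
        }
        return buf
    }

Hashing this encoding yields the node's own Key in the TrieDB, which
is what makes the nodes content-addressable and de-duplicated.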
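For point 3, a sketch of the Replay idea (Cmd, nodeState, and replayAt
are hypothetical; Height is the package's height type):

    // Cmd is a hypothetical change record in a node's history, e.g.
    // an accepted claim or support at a given height.
    type Cmd struct {
        Height Height
        Apply  func(n *nodeState)
    }

    // nodeState is a hypothetical stand-in for a claim node's state
    // (best claim, supports, etc.).
    type nodeState struct{}

    // replayAt reconstructs a node's state as of height h by playing
    // back its immutable, read-only history, so it can run as a
    // background task while the ClaimTrie keeps working at the tip.
    func replayAt(history []Cmd, h Height) *nodeState {
        n := &nodeState{}
        for _, c := range history {
            if c.Height > h {
                break
            }
            c.Apply(n)
        }
        return n
    }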
// calNodeHash computes the hash of a claim node: the claim's txid, its
// output index, and the takeover height are each double-SHA256 hashed,
// and the three digests are concatenated and double-SHA256 hashed again.
func calNodeHash(op OutPoint, tookover Height) *chainhash.Hash {
	// Hash the transaction hash of the best claim's outpoint.
	txHash := chainhash.DoubleHashH(op.Hash[:])

	// Hash the output index, encoded as a decimal string.
	nOut := []byte(strconv.Itoa(int(op.Index)))
	nOutHash := chainhash.DoubleHashH(nOut)

	// Hash the takeover height, encoded as a big-endian uint64.
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, uint64(tookover))
	heightHash := chainhash.DoubleHashH(buf)

	// Concatenate the three digests and hash the result.
	h := make([]byte, 0, sha256.Size*3)
	h = append(h, txHash[:]...)
	h = append(h, nOutHash[:]...)
	h = append(h, heightHash[:]...)

	hh := chainhash.DoubleHashH(h)

	return &hh
}
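// For illustration, a hypothetical use of calNodeHash (the txid value
// is arbitrary, and OutPoint is assumed to be constructible from its
// Hash and Index fields):
//
//	txid, _ := chainhash.NewHashFromStr(
//		"1111111111111111111111111111111111111111111111111111111111111111")
//	op := OutPoint{Hash: *txid, Index: 0}
//	h := calNodeHash(op, Height(1000))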