package db

// db_get.go contains the basic access functions to the database.

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"math"

	"github.com/lbryio/herald.go/db/prefixes"
	"github.com/lbryio/herald.go/db/stack"
	"github.com/lbryio/lbcd/chaincfg/chainhash"
	"github.com/lbryio/lbcd/wire"
	"github.com/linxGnu/grocksdb"
	log "github.com/sirupsen/logrus"
)

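// The getters below share a common pattern: look up the column family handle
// for a table prefix, pack a key, read (or iterate) that column family, and
// unpack the value. A minimal usage sketch (assuming a *ReadOnlyDBColumnFamily
// has already been opened; the height and hashX variables are hypothetical):
//
//	blockHash, _ := db.GetBlockHash(height) // raw block hash bytes
//	header, _ := db.GetHeader(height)       // raw serialized block header
//	utxos, _ := db.GetUnspent(hashX)        // unspent TXOs for a script hashX
//	history, _ := db.GetHistory(hashX)      // confirmed tx history for a hashX
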
// GetExpirationHeight returns the expiration height for the given height,
// without forcing the extended expiration time (see GetExpirationHeightFull).
func GetExpirationHeight(lastUpdatedHeight uint32) uint32 {
	return GetExpirationHeightFull(lastUpdatedHeight, false)
}

// GetExpirationHeightFull returns the expiration height for the given height.
// Takes a boolean to indicate whether to use the extended or original
// expiration time.
func GetExpirationHeightFull(lastUpdatedHeight uint32, extended bool) uint32 {
	if extended {
		return lastUpdatedHeight + ExtendedClaimExpirationTime
	}
	if lastUpdatedHeight < ExtendedClaimExpirationForkHeight {
		return lastUpdatedHeight + OriginalClaimExpirationTime
	}
	return lastUpdatedHeight + ExtendedClaimExpirationTime
}

// EnsureHandle is a helper function to ensure that the db has a handle to the given column family.
func (db *ReadOnlyDBColumnFamily) EnsureHandle(prefix byte) (*grocksdb.ColumnFamilyHandle, error) {
	cfName := string(prefix)
	handle := db.Handles[cfName]
	if handle == nil {
		return nil, fmt.Errorf("%s handle not found", cfName)
	}
	return handle, nil
}

// GetBlockHash returns the block hash for the given height.
func (db *ReadOnlyDBColumnFamily) GetBlockHash(height uint32) ([]byte, error) {
	handle, err := db.EnsureHandle(prefixes.BlockHash)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewBlockHashKey(height)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		// Not found; err is nil here, so return (nil, nil) explicitly.
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	return rawValue, nil
}

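// GetBlockTXs returns the transaction hashes recorded for the block at the
// given height, or nil if no entry exists for that height.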
func (db *ReadOnlyDBColumnFamily) GetBlockTXs(height uint32) ([]*chainhash.Hash, error) {
	handle, err := db.EnsureHandle(prefixes.BlockTXs)
	if err != nil {
		return nil, err
	}

	key := prefixes.BlockTxsKey{
		Prefix: []byte{prefixes.BlockTXs},
		Height: height,
	}
	slice, err := db.DB.GetCF(db.Opts, handle, key.PackKey())
	defer slice.Free()
	if err != nil {
		return nil, err
	}
	if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.BlockTxsValueUnpack(rawValue)
	return value.TxHashes, nil
}

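// GetHeader returns the raw serialized block header stored for the given
// height, or nil if no header is stored at that height.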
func (db *ReadOnlyDBColumnFamily) GetHeader(height uint32) ([]byte, error) {
	handle, err := db.EnsureHandle(prefixes.Header)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewHeaderKey(height)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		// Not found; err is nil here, so return (nil, nil) explicitly.
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	return rawValue, nil
}

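// GetHeaders returns up to count consecutive 112-byte block headers starting
// at the given height.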
func (db *ReadOnlyDBColumnFamily) GetHeaders(height uint32, count uint32) ([][112]byte, error) {
	handle, err := db.EnsureHandle(prefixes.Header)
	if err != nil {
		return nil, err
	}

	startKeyRaw := prefixes.NewHeaderKey(height).PackKey()
	endKeyRaw := prefixes.NewHeaderKey(height + count).PackKey()
	options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.Header}).WithCfHandle(handle)
	options = options.WithIncludeKey(false).WithIncludeValue(true) //.WithIncludeStop(true)
	options = options.WithStart(startKeyRaw).WithStop(endKeyRaw)

	result := make([][112]byte, 0, count)
	for kv := range IterCF(db.DB, options) {
		h := [112]byte{}
		copy(h[:], kv.Value.(*prefixes.BlockHeaderValue).Header[:112])
		result = append(result, h)
	}

	return result, nil
}

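// GetBalance returns the confirmed and unconfirmed balances for the given
// script hashX by summing its UTXO amounts. The unconfirmed balance is
// currently always zero (see the TODO below).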
func (db *ReadOnlyDBColumnFamily) GetBalance(hashX []byte) (uint64, uint64, error) {
	handle, err := db.EnsureHandle(prefixes.UTXO)
	if err != nil {
		return 0, 0, err
	}

	startKey := prefixes.UTXOKey{
		Prefix: []byte{prefixes.UTXO},
		HashX:  hashX,
		TxNum:  0,
		Nout:   0,
	}
	endKey := prefixes.UTXOKey{
		Prefix: []byte{prefixes.UTXO},
		HashX:  hashX,
		TxNum:  math.MaxUint32,
		Nout:   math.MaxUint16,
	}

	startKeyRaw := startKey.PackKey()
	endKeyRaw := endKey.PackKey()
	// Prefix and handle
	options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.UTXO}).WithCfHandle(handle)
	// Start and stop bounds
	options = options.WithStart(startKeyRaw).WithStop(endKeyRaw).WithIncludeStop(true)
	// Don't include the key
	options = options.WithIncludeKey(false).WithIncludeValue(true)

	ch := IterCF(db.DB, options)
	var confirmed uint64 = 0
	var unconfirmed uint64 = 0 // TODO
	for kv := range ch {
		confirmed += kv.Value.(*prefixes.UTXOValue).Amount
	}

	return confirmed, unconfirmed, nil
}

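// TXOInfo describes a single unspent transaction output belonging to a hashX.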
type TXOInfo struct {
	TxHash *chainhash.Hash
	TxPos  uint16
	Height uint32
	Value  uint64
}

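// GetUnspent returns the unspent TXOs for the given script hashX. Each UTXO
// row is joined against the TxHash table to resolve its transaction hash, and
// the confirmation height is derived from the TxCounts stack.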
func (db *ReadOnlyDBColumnFamily) GetUnspent(hashX []byte) ([]TXOInfo, error) {
	startKey := &prefixes.UTXOKey{
		Prefix: []byte{prefixes.UTXO},
		HashX:  hashX,
		TxNum:  0,
		Nout:   0,
	}
	endKey := &prefixes.UTXOKey{
		Prefix: []byte{prefixes.UTXO},
		HashX:  hashX,
		TxNum:  math.MaxUint32,
		Nout:   math.MaxUint16,
	}
	selectedUTXO, err := db.selectFrom([]byte{prefixes.UTXO}, startKey, endKey)
	if err != nil {
		return nil, err
	}

	selectTxHashByTxNum := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
		historyKey := in[0].Key.(*prefixes.UTXOKey)
		out := make([]*IterOptions, 0, 100)
		startKey := &prefixes.TxHashKey{
			Prefix: []byte{prefixes.TxHash},
			TxNum:  historyKey.TxNum,
		}
		endKey := &prefixes.TxHashKey{
			Prefix: []byte{prefixes.TxHash},
			TxNum:  historyKey.TxNum,
		}
		selectedTxHash, err := db.selectFrom([]byte{prefixes.TxHash}, startKey, endKey)
		if err != nil {
			return nil, err
		}
		out = append(out, selectedTxHash...)
		return out, nil
	}

	results := make([]TXOInfo, 0, 1000)
	for kvs := range innerJoin(db.DB, iterate(db.DB, selectedUTXO), selectTxHashByTxNum) {
		if err := checkForError(kvs); err != nil {
			return results, err
		}
		utxoKey := kvs[0].Key.(*prefixes.UTXOKey)
		utxoValue := kvs[0].Value.(*prefixes.UTXOValue)
		txhashValue := kvs[1].Value.(*prefixes.TxHashValue)
		results = append(results,
			TXOInfo{
				TxHash: txhashValue.TxHash,
				TxPos:  utxoKey.Nout,
				Height: stack.BisectRight(db.TxCounts, []uint32{utxoKey.TxNum})[0],
				Value:  utxoValue.Amount,
			},
		)
	}
	return results, nil
}

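// TxInfo pairs a transaction hash with the height at which it was confirmed.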
type TxInfo struct {
	TxHash *chainhash.Hash
	Height uint32
}

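// GetHistory returns the confirmed transaction history for the given script
// hashX by walking its HashXHistory rows and resolving each tx number to a
// transaction hash.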
func (db *ReadOnlyDBColumnFamily) GetHistory(hashX []byte) ([]TxInfo, error) {
	startKey := &prefixes.HashXHistoryKey{
		Prefix: []byte{prefixes.HashXHistory},
		HashX:  hashX,
		Height: 0,
	}
	endKey := &prefixes.HashXHistoryKey{
		Prefix: []byte{prefixes.HashXHistory},
		HashX:  hashX,
		Height: math.MaxUint32,
	}
	selectedHistory, err := db.selectFrom([]byte{prefixes.HashXHistory}, startKey, endKey)
	if err != nil {
		return nil, err
	}

	selectTxHashByTxNums := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
		historyValue := in[0].Value.(*prefixes.HashXHistoryValue)
		out := make([]*IterOptions, 0, 100)
		for _, txnum := range historyValue.TxNums {
			startKey := &prefixes.TxHashKey{
				Prefix: []byte{prefixes.TxHash},
				TxNum:  txnum,
			}
			endKey := &prefixes.TxHashKey{
				Prefix: []byte{prefixes.TxHash},
				TxNum:  txnum,
			}
			selectedTxHash, err := db.selectFrom([]byte{prefixes.TxHash}, startKey, endKey)
			if err != nil {
				return nil, err
			}
			out = append(out, selectedTxHash...)
		}
		return out, nil
	}

	results := make([]TxInfo, 0, 1000)
	for kvs := range innerJoin(db.DB, iterate(db.DB, selectedHistory), selectTxHashByTxNums) {
		if err := checkForError(kvs); err != nil {
			return results, err
		}
		historyKey := kvs[0].Key.(*prefixes.HashXHistoryKey)
		txHashValue := kvs[1].Value.(*prefixes.TxHashValue)
		results = append(results, TxInfo{
			TxHash: txHashValue.TxHash,
			Height: historyKey.Height,
		})
	}
	return results, nil
}

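// GetStatus returns the status hash for the given script hashX. It prefers the
// precomputed HashXMempoolStatus and HashXStatus tables, and falls back to
// hashing the confirmed history ("txid:height:" pairs, SHA-256) when neither
// table is populated.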
func (db *ReadOnlyDBColumnFamily) GetStatus(hashX []byte) ([]byte, error) {
	// Lookup in HashXMempoolStatus first.
	status, err := db.getMempoolStatus(hashX)
	if err == nil && status != nil {
		log.Debugf("(mempool) status(%#v) -> %#v", hashX, status)
		return status, err
	}

	// No indexed mempool status. Lookup in HashXStatus second.
	handle, err := db.EnsureHandle(prefixes.HashXStatus)
	if err != nil {
		return nil, err
	}
	key := &prefixes.HashXStatusKey{
		Prefix: []byte{prefixes.HashXStatus},
		HashX:  hashX,
	}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err == nil && slice.Size() > 0 {
		rawValue := make([]byte, len(slice.Data()))
		copy(rawValue, slice.Data())
		value := prefixes.HashXStatusValue{}
		value.UnpackValue(rawValue)
		log.Debugf("status(%#v) -> %#v", hashX, value.Status)
		return value.Status, nil
	}

	// No indexed status. Fall back to enumerating HashXHistory.
	txs, err := db.GetHistory(hashX)
	if err != nil {
		return nil, err
	}

	if len(txs) == 0 {
		return []byte{}, err
	}

	hash := sha256.New()
	for _, tx := range txs {
		hash.Write([]byte(fmt.Sprintf("%s:%d:", tx.TxHash.String(), tx.Height)))
	}
	// TODO: Mempool history
	return hash.Sum(nil), err
}

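// getMempoolStatus returns the precomputed mempool status hash for the given
// hashX from the HashXMempoolStatus table, or nil if none is stored.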
func (db *ReadOnlyDBColumnFamily) getMempoolStatus(hashX []byte) ([]byte, error) {
	handle, err := db.EnsureHandle(prefixes.HashXMempoolStatus)
	if err != nil {
		return nil, err
	}

	key := &prefixes.HashXMempoolStatusKey{
		Prefix: []byte{prefixes.HashXMempoolStatus},
		HashX:  hashX,
	}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.HashXMempoolStatusValue{}
	value.UnpackValue(rawValue)
	return value.Status, nil
}

// GetStreamsAndChannelRepostedByChannelHashes returns two maps, one for
// reposted streams and one for reposted channels, keyed by the hex-encoded
// reposted claim hash and mapping to the reposting channel hash.
func (db *ReadOnlyDBColumnFamily) GetStreamsAndChannelRepostedByChannelHashes(reposterChannelHashes [][]byte) (map[string][]byte, map[string][]byte, error) {
	handle, err := db.EnsureHandle(prefixes.ChannelToClaim)
	if err != nil {
		return nil, nil, err
	}

	streams := make(map[string][]byte)
	channels := make(map[string][]byte)

	for _, reposterChannelHash := range reposterChannelHashes {
		key := prefixes.NewChannelToClaimKeyWHash(reposterChannelHash)
		rawKeyPrefix := key.PartialPack(1)
		options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
		options = options.WithIncludeKey(false).WithIncludeValue(true)
		ch := IterCF(db.DB, options)
		// for stream := range Iterate(db.DB, prefixes.ChannelToClaim, []byte{reposterChannelHash}, false) {
		for stream := range ch {
			value := stream.Value.(*prefixes.ChannelToClaimValue)
			repost, err := db.GetRepost(value.ClaimHash)
			if err != nil {
				return nil, nil, err
			}
			if repost != nil {
				txo, err := db.GetClaimTxo(repost)
				if err != nil {
					return nil, nil, err
				}
				if txo != nil {
					repostStr := hex.EncodeToString(repost)
					if normalName := txo.NormalizedName(); len(normalName) > 0 && normalName[0] == '@' {
						channels[repostStr] = reposterChannelHash
					} else {
						streams[repostStr] = reposterChannelHash
					}
				}
			}
		}
	}

	return streams, channels, nil
}

// GetClaimsInChannelCount returns the number of claims in the given channel.
func (db *ReadOnlyDBColumnFamily) GetClaimsInChannelCount(channelHash []byte) (uint32, error) {
	handle, err := db.EnsureHandle(prefixes.ChannelCount)
	if err != nil {
		return 0, err
	}

	key := prefixes.NewChannelCountKey(channelHash)
	rawKey := key.PackKey()

	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return 0, err
	} else if slice.Size() == 0 {
		return 0, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.ChannelCountValueUnpack(rawValue)

	return value.Count, nil
}

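// GetShortClaimIdUrl returns a short "name#claimIdPrefix" URL for the claim,
// choosing the shortest claim-id prefix (up to 10 hex characters) for which
// this claim is the first match in the ClaimShortID index.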
func (db *ReadOnlyDBColumnFamily) GetShortClaimIdUrl(name string, normalizedName string, claimHash []byte, rootTxNum uint32, rootPosition uint16) (string, error) {
	prefix := []byte{prefixes.ClaimShortIdPrefix}
	handle, err := db.EnsureHandle(prefixes.ClaimShortIdPrefix)
	if err != nil {
		return "", err
	}

	claimId := hex.EncodeToString(claimHash)
	claimIdLen := len(claimId)
	for prefixLen := 0; prefixLen < 10; prefixLen++ {
		var j int = prefixLen + 1
		if j > claimIdLen {
			j = claimIdLen
		}
		partialClaimId := claimId[:j]
		partialKey := prefixes.NewClaimShortIDKey(normalizedName, partialClaimId)
		log.Printf("partialKey: %#v\n", partialKey)
		keyPrefix := partialKey.PartialPack(2)
		// Prefix and handle
		options := NewIterateOptions().WithDB(db).WithPrefix(prefix).WithCfHandle(handle)
		// Start and stop bounds
		options = options.WithStart(keyPrefix).WithStop(keyPrefix)
		// Don't include the value
		options = options.WithIncludeValue(false)

		ch := IterCF(db.DB, options)
		row := <-ch
		if row == nil {
			continue
		}

		key := row.Key.(*prefixes.ClaimShortIDKey)
		if key.RootTxNum == rootTxNum && key.RootPosition == rootPosition {
			return fmt.Sprintf("%s#%s", name, key.PartialClaimId), nil
		}
	}
	return "", nil
}

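// GetRepost returns the reposted claim hash for the given claim, or nil if the
// claim is not a repost.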
func (db *ReadOnlyDBColumnFamily) GetRepost(claimHash []byte) ([]byte, error) {
	handle, err := db.EnsureHandle(prefixes.Repost)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewRepostKey(claimHash)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.RepostValueUnpack(rawValue)
	return value.RepostedClaimHash, nil
}

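// GetRepostedCount returns the number of times the given claim has been
// reposted, according to the RepostedCount table.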
func (db *ReadOnlyDBColumnFamily) GetRepostedCount(claimHash []byte) (int, error) {
	handle, err := db.EnsureHandle(prefixes.RepostedCount)
	if err != nil {
		return 0, err
	}

	key := prefixes.RepostedCountKey{Prefix: []byte{prefixes.RepostedCount}, ClaimHash: claimHash}
	rawKey := key.PackKey()
|
rocksdb (#29)
* Initial rocksdb commit
Basic reading from rocksdb works
* Try github action thing
* try local dockerfile
* asdf
* qwer
* asdf
* Try adding test db with git-lfs
* update action
* cleanup
* Don't hardcode stop on read
* Progress of reading rocksdb
* fixes and arg test
* asdf
* Fix rocksdb iterator and tests
* update script
* asdf
* Better iterator. Need to implement a lot of keys next, and tests, maybe
tests needed.
* asdf
* asdf
* asdf
* Implementation, testing, and cleanup.
Implemented more prefixes. Figured out a good test that should work for
all prefixes. Removed binary databases so we can just store human
readable csv files.
* more tests, prefixes and small refactor
* Another prefix
* EffectiveAmount
* ActiveAmount
* ActivatedClaimAndSupport
* PendingActivation
* ClaimTakeover
* ClaimExpiration
* SupportToClaim
* ClaimToSupport
* Fix bug with variable length keys
* ChannelToClaim
* ClaimToChannel
* ClaimShortID
* TXOToClaim
* ClaimToTXO
* BlockHeader
* BlockHash
* Undo
* HashXHistory
* Tx and big refactor
* rest the the keys
* Refactor and starting to add resolve
* asdf
* Refactor tests and add column families
* changes
* more work on implementing resolve
* code cleanup, function tests
* small code refactoring
* start building pieces of the test data set for full resolve.
* Export constant, add test
* another test
* TestGetTxHash
* more tests
* more tests
* More tests
* Refactor db functions into three files
* added slice backed stack, need to fix tests
* fix some issues with test suite
* some cleanup and adding arguments and db load / refresh to server command
* fix some bugs, start using logrus for leveled logging, upgrade to go 1.17, run go mod tidy
* logrus, protobuf updates, resolve grpc endpoint
* don't run integration test with unit tests
* signal handling and cleanup functions
* signal handling code files
* Unit tests for db stack
* reorganize bisect function so we lock it properly
* fix txcounts loading
* cleanup some logic around iterators and fix a bug where I was running two detect changes threads
* add some metrics
* cleanup
* blocking and filtering implemented
* add params for blocking and filtering channels and streams
* updates and fixes for integration tests
* use newer version of lbry.go when possible
* Add height endpoint and move string functions internal
* remove gitattributes, unused
* some cleanup
* more cleanup / refactor. almost ready for another review
* More cleanup
* use chainhash.Hash types from lbcd where appropriate
* update github action to go-1.17.8
* update go version needed
* trying to fix these builds
* cleanup
* trying to fix memory leak
* fix memory leak (iterator never finished so cleanup didn't run)
* changes per code review
* remove lbry.go v2
* rename sort.go search.go
* fix test
2022-04-29 17:04:01 +02:00
|
|
|
|
2022-08-26 15:24:39 +02:00
|
|
|
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
|
|
|
|
defer slice.Free()
|
|
|
|
if err != nil {
|
|
|
|
return 0, err
|
|
|
|
} else if slice.Size() == 0 {
|
|
|
|
return 0, nil
|
rocksdb (#29)
* Initial rocksdb commit
Basic reading from rocksdb works
* Try github action thing
* try local dockerfile
* asdf
* qwer
* asdf
* Try adding test db with git-lfs
* update action
* cleanup
* Don't hardcode stop on read
* Progress of reading rocksdb
* fixes and arg test
* asdf
* Fix rocksdb iterator and tests
* update script
* asdf
* Better iterator. Need to implement a lot of keys next, and tests, maybe
tests needed.
* asdf
* asdf
* asdf
* Implementation, testing, and cleanup.
Implemented more prefixes. Figured out a good test that should work for
all prefixes. Removed binary databases so we can just store human
readable csv files.
* more tests, prefixes and small refactor
* Another prefix
* EffectiveAmount
* ActiveAmount
* ActivatedClaimAndSupport
* PendingActivation
* ClaimTakeover
* ClaimExpiration
* SupportToClaim
* ClaimToSupport
* Fix bug with variable length keys
* ChannelToClaim
* ClaimToChannel
* ClaimShortID
* TXOToClaim
* ClaimToTXO
* BlockHeader
* BlockHash
* Undo
* HashXHistory
* Tx and big refactor
* rest the the keys
* Refactor and starting to add resolve
* asdf
* Refactor tests and add column families
* changes
* more work on implementing resolve
* code cleanup, function tests
* small code refactoring
* start building pieces of the test data set for full resolve.
* Export constant, add test
* another test
* TestGetTxHash
* more tests
* more tests
* More tests
* Refactor db functions into three files
* added slice backed stack, need to fix tests
* fix some issues with test suite
* some cleanup and adding arguments and db load / refresh to server command
* fix some bugs, start using logrus for leveled logging, upgrade to go 1.17, run go mod tidy
* logrus, protobuf updates, resolve grpc endpoint
* don't run integration test with unit tests
* signal handling and cleanup functions
* signal handling code files
* Unit tests for db stack
* reorganize bisect function so we lock it properly
* fix txcounts loading
* cleanup some logic around iterators and fix a bug where I was running two detect changes threads
* add some metrics
* cleanup
* blocking and filtering implemented
* add params for blocking and filtering channels and streams
* updates and fixes for integration tests
* use newer version of lbry.go when possible
* Add height endpoint and move string functions internal
* remove gitattributes, unused
* some cleanup
* more cleanup / refactor. almost ready for another review
* More cleanup
* use chainhash.Hash types from lbcd where appropriate
* update github action to go-1.17.8
* update go version needed
* trying to fix these builds
* cleanup
* trying to fix memory leak
* fix memory leak (iterator never finished so cleanup didn't run)
* changes per code review
* remove lbry.go v2
* rename sort.go search.go
* fix test
2022-04-29 17:04:01 +02:00
|
|
|
}
|
|
|
|
|
2022-08-26 15:24:39 +02:00
|
|
|
value := prefixes.RepostedCountValue{}
|
|
|
|
value.UnpackValue(slice.Data())
|
|
|
|
return int(value.RepostedCount), nil
|
rocksdb (#29)
* Initial rocksdb commit
Basic reading from rocksdb works
* Try github action thing
* try local dockerfile
* asdf
* qwer
* asdf
* Try adding test db with git-lfs
* update action
* cleanup
* Don't hardcode stop on read
* Progress of reading rocksdb
* fixes and arg test
* asdf
* Fix rocksdb iterator and tests
* update script
* asdf
* Better iterator. Need to implement a lot of keys next, and tests, maybe
tests needed.
* asdf
* asdf
* asdf
* Implementation, testing, and cleanup.
Implemented more prefixes. Figured out a good test that should work for
all prefixes. Removed binary databases so we can just store human
readable csv files.
* more tests, prefixes and small refactor
* Another prefix
* EffectiveAmount
* ActiveAmount
* ActivatedClaimAndSupport
* PendingActivation
* ClaimTakeover
* ClaimExpiration
* SupportToClaim
* ClaimToSupport
* Fix bug with variable length keys
* ChannelToClaim
* ClaimToChannel
* ClaimShortID
* TXOToClaim
* ClaimToTXO
* BlockHeader
* BlockHash
* Undo
* HashXHistory
* Tx and big refactor
* rest the the keys
* Refactor and starting to add resolve
* asdf
* Refactor tests and add column families
* changes
* more work on implementing resolve
* code cleanup, function tests
* small code refactoring
* start building pieces of the test data set for full resolve.
* Export constant, add test
* another test
* TestGetTxHash
* more tests
* more tests
* More tests
* Refactor db functions into three files
* added slice backed stack, need to fix tests
* fix some issues with test suite
* some cleanup and adding arguments and db load / refresh to server command
* fix some bugs, start using logrus for leveled logging, upgrade to go 1.17, run go mod tidy
* logrus, protobuf updates, resolve grpc endpoint
* don't run integration test with unit tests
* signal handling and cleanup functions
* signal handling code files
* Unit tests for db stack
* reorganize bisect function so we lock it properly
* fix txcounts loading
* cleanup some logic around iterators and fix a bug where I was running two detect changes threads
* add some metrics
* cleanup
* blocking and filtering implemented
* add params for blocking and filtering channels and streams
* updates and fixes for integration tests
* use newer version of lbry.go when possible
* Add height endpoint and move string functions internal
* remove gitattributes, unused
* some cleanup
* more cleanup / refactor. almost ready for another review
* More cleanup
* use chainhash.Hash types from lbcd where appropriate
* update github action to go-1.17.8
* update go version needed
* trying to fix these builds
* cleanup
* trying to fix memory leak
* fix memory leak (iterator never finished so cleanup didn't run)
* changes per code review
* remove lbry.go v2
* rename sort.go search.go
* fix test
2022-04-29 17:04:01 +02:00
|
|
|
}

func (db *ReadOnlyDBColumnFamily) GetChannelForClaim(claimHash []byte, txNum uint32, position uint16) ([]byte, error) {
	handle, err := db.EnsureHandle(prefixes.ClaimToChannel)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewClaimToChannelKey(claimHash, txNum, position)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.ClaimToChannelValueUnpack(rawValue)
	return value.SigningHash, nil
}

func (db *ReadOnlyDBColumnFamily) GetActiveAmount(claimHash []byte, txoType uint8, height uint32) (uint64, error) {
	handle, err := db.EnsureHandle(prefixes.ActiveAmount)
	if err != nil {
		return 0, err
	}

	startKey := prefixes.NewActiveAmountKey(claimHash, txoType, 0)
	endKey := prefixes.NewActiveAmountKey(claimHash, txoType, height+1)

	startKeyRaw := startKey.PartialPack(3)
	endKeyRaw := endKey.PartialPack(3)
	// Prefix and handle
	options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.ActiveAmount}).WithCfHandle(handle)
	// Start and stop bounds
	options = options.WithStart(startKeyRaw).WithStop(endKeyRaw)
	// Don't include the key
	options = options.WithIncludeKey(false).WithIncludeValue(true)

	ch := IterCF(db.DB, options)
	var sum uint64 = 0
	for kv := range ch {
		sum += kv.Value.(*prefixes.ActiveAmountValue).Amount
	}

	return sum, nil
}
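
// Illustrative only (not called elsewhere in this package): the two activated
// TXO classes could be summed to approximate an effective amount at a given
// height, assuming the same TXO-type constants used by GetActivationFull below.
//
//	claims, _ := db.GetActiveAmount(claimHash, prefixes.ActivateClaimTXOType, height)
//	supports, _ := db.GetActiveAmount(claimHash, prefixes.ActivatedSupportTXOType, height)
//	total := claims + supports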

func (db *ReadOnlyDBColumnFamily) GetEffectiveAmount(claimHash []byte, supportOnly bool) (uint64, error) {
	handle, err := db.EnsureHandle(prefixes.EffectiveAmount)
	if err != nil {
		return 0, err
	}

	key := prefixes.EffectiveAmountKey{Prefix: []byte{prefixes.EffectiveAmount}, ClaimHash: claimHash}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return 0, err
	} else if slice.Size() == 0 {
		return 0, nil
	}

	value := prefixes.EffectiveAmountValue{}
	value.UnpackValue(slice.Data())
	var amount uint64
	if supportOnly {
		amount += value.ActivatedSupportSum
	} else {
		amount += value.ActivatedSum
	}
	return amount, nil
}

func (db *ReadOnlyDBColumnFamily) GetSupportAmount(claimHash []byte) (uint64, error) {
	handle, err := db.EnsureHandle(prefixes.SupportAmount)
	if err != nil {
		return 0, err
	}

	key := prefixes.NewSupportAmountKey(claimHash)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return 0, err
	} else if slice.Size() == 0 {
		return 0, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.SupportAmountValueUnpack(rawValue)
	return value.Amount, nil
}

func (db *ReadOnlyDBColumnFamily) GetTxHash(txNum uint32) ([]byte, error) {
	// TODO: caching
	handle, err := db.EnsureHandle(prefixes.TxHash)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewTxHashKey(txNum)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	}
	if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	return rawValue, nil
}

func (db *ReadOnlyDBColumnFamily) GetActivation(txNum uint32, position uint16) (uint32, error) {
	return db.GetActivationFull(txNum, position, false)
}

func (db *ReadOnlyDBColumnFamily) GetActivationFull(txNum uint32, position uint16, isSupport bool) (uint32, error) {
	var typ uint8

	handle, err := db.EnsureHandle(prefixes.ActivatedClaimAndSupport)
	if err != nil {
		return 0, err
	}

	if isSupport {
		typ = prefixes.ActivatedSupportTXOType
	} else {
		typ = prefixes.ActivateClaimTXOType
	}

	key := prefixes.NewActivationKey(typ, txNum, position)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return 0, err
	}
	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.ActivationValueUnpack(rawValue)
	// Does this need to explicitly return an int64, in case the uint32 overflows the max of an int?
	return value.Height, nil
}

func (db *ReadOnlyDBColumnFamily) GetClaimTxo(claim []byte) (*prefixes.ClaimToTXOValue, error) {
	return db.GetCachedClaimTxo(claim, false)
}

func (db *ReadOnlyDBColumnFamily) GetCachedClaimTxo(claim []byte, useCache bool) (*prefixes.ClaimToTXOValue, error) {
	// TODO: implement cache
	handle, err := db.EnsureHandle(prefixes.ClaimToTXO)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewClaimToTXOKey(claim)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	}
	if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.ClaimToTXOValueUnpack(rawValue)
	return value, nil
}

func (db *ReadOnlyDBColumnFamily) ControllingClaimIter() <-chan *prefixes.PrefixRowKV {
	handle, err := db.EnsureHandle(prefixes.ClaimTakeover)
	if err != nil {
		return nil
	}

	key := prefixes.NewClaimTakeoverKey("")
	var rawKeyPrefix []byte = nil
	rawKeyPrefix = key.PartialPack(0)
	options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
	options = options.WithIncludeValue(true) //.WithIncludeStop(true)
	ch := IterCF(db.DB, options)
	return ch
}

func (db *ReadOnlyDBColumnFamily) GetControllingClaim(name string) (*prefixes.ClaimTakeoverValue, error) {
	handle, err := db.EnsureHandle(prefixes.ClaimTakeover)
	if err != nil {
		return nil, err
	}

	log.Println(name)
	key := prefixes.NewClaimTakeoverKey(name)
	rawKey := key.PackKey()
	log.Println(hex.EncodeToString(rawKey))
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()

	if err != nil {
		return nil, err
	}
	if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.ClaimTakeoverValueUnpack(rawValue)
	return value, nil
}

func (db *ReadOnlyDBColumnFamily) FsGetClaimByHash(claimHash []byte) (*ResolveResult, error) {
	claim, err := db.GetCachedClaimTxo(claimHash, true)
	if err != nil {
		return nil, err
	}

	activation, err := db.GetActivation(claim.TxNum, claim.Position)
	if err != nil {
		return nil, err
	}

	log.Printf("%#v\n%#v\n%#v\n", claim, hex.EncodeToString(claimHash), activation)
	return PrepareResolveResult(
		db,
		claim.TxNum,
		claim.Position,
		claimHash,
		claim.Name,
		claim.RootTxNum,
		claim.RootPosition,
		activation,
		claim.ChannelSignatureIsValid,
	)
}

func (db *ReadOnlyDBColumnFamily) GetTx(txhash *chainhash.Hash) ([]byte, *wire.MsgTx, error) {
	// Lookup in MempoolTx first.
	raw, tx, err := db.getMempoolTx(txhash)
	if err == nil && raw != nil && tx != nil {
		return raw, tx, err
	}

	handle, err := db.EnsureHandle(prefixes.Tx)
	if err != nil {
		return nil, nil, err
	}

	key := prefixes.TxKey{Prefix: []byte{prefixes.Tx}, TxHash: txhash}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, nil, err
	}
	if slice.Size() == 0 {
		return nil, nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.TxValue{}
	value.UnpackValue(rawValue)
	var msgTx wire.MsgTx
	err = msgTx.Deserialize(bytes.NewReader(value.RawTx))
	if err != nil {
		return nil, nil, err
	}
	return value.RawTx, &msgTx, nil
}

func (db *ReadOnlyDBColumnFamily) getMempoolTx(txhash *chainhash.Hash) ([]byte, *wire.MsgTx, error) {
	handle, err := db.EnsureHandle(prefixes.MempoolTx)
	if err != nil {
		return nil, nil, err
	}

	key := prefixes.MempoolTxKey{Prefix: []byte{prefixes.Tx}, TxHash: txhash}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, nil, err
	}
	if slice.Size() == 0 {
		return nil, nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.MempoolTxValue{}
	value.UnpackValue(rawValue)
	var msgTx wire.MsgTx
	err = msgTx.Deserialize(bytes.NewReader(value.RawTx))
	if err != nil {
		return nil, nil, err
	}
	return value.RawTx, &msgTx, nil
}

func (db *ReadOnlyDBColumnFamily) GetTxCount(height uint32) (*prefixes.TxCountValue, error) {
	handle, err := db.EnsureHandle(prefixes.TxCount)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewTxCountKey(height)
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	}
	if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.TxCountValueUnpack(rawValue)
	return value, nil
}

func (db *ReadOnlyDBColumnFamily) GetTxHeight(txhash *chainhash.Hash) (uint32, error) {
	handle, err := db.EnsureHandle(prefixes.TxNum)
	if err != nil {
		return 0, err
	}

	key := prefixes.TxNumKey{Prefix: []byte{prefixes.TxNum}, TxHash: txhash}
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return 0, err
	}
	if slice.Size() == 0 {
		return 0, nil
	}

	// No slice copy needed. Value will be abandoned.
	value := prefixes.TxNumValueUnpack(slice.Data())
	// Map the global tx number to its block height by bisecting the cumulative tx counts.
	height := stack.BisectRight(db.TxCounts, []uint32{value.TxNum})[0]
	return height, nil
}
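
// Worked example (illustrative, assuming BisectRight behaves like Python's
// bisect_right over the cumulative tx counts): if TxCounts were [1, 3, 6]
// (totals through heights 0, 1, 2), then global tx number 4 bisects to
// height 2, and its position within that block is 4 - TxCounts[1] = 1,
// which is how GetTxMerkle below derives txPos.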

type TxMerkle struct {
	TxHash *chainhash.Hash
	RawTx  []byte
	Height int
	Pos    uint32
	Merkle []*chainhash.Hash
}

// merklePath computes the merkle path for the transaction at position pos within blockTxs.
// The resulting merkle path (aka merkle branch, or merkle) is a list of TX hashes
// which are in sibling relationship with TX nodes on the path to the root.
func merklePath(pos uint32, blockTxs, partial []*chainhash.Hash) []*chainhash.Hash {
	parent := func(p uint32) uint32 {
		return p >> 1
	}
	sibling := func(p uint32) uint32 {
		if p%2 == 0 {
			return p + 1
		} else {
			return p - 1
		}
	}
	p := parent(pos)
	if p == 0 {
		// No parent, path is complete.
		return partial
	}
	// Add sibling to partial path and proceed to parent TX.
	return merklePath(p, blockTxs, append(partial, blockTxs[sibling(pos)]))
}
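
// A minimal sketch (not used elsewhere in this package) of how a client could
// fold a merkle branch back up to a block's merkle root, assuming the usual
// double-SHA256 pairing rule and that pos is the transaction's index in the
// block. Shown for orientation only; it is not wired to merklePath above.
func exampleMerkleRootFromPath(txHash *chainhash.Hash, pos uint32, branch []*chainhash.Hash) chainhash.Hash {
	node := *txHash
	for _, sibling := range branch {
		var pair [2 * chainhash.HashSize]byte
		if pos%2 == 0 {
			// Even index: current node is the left child.
			copy(pair[:chainhash.HashSize], node[:])
			copy(pair[chainhash.HashSize:], sibling[:])
		} else {
			// Odd index: current node is the right child.
			copy(pair[:chainhash.HashSize], sibling[:])
			copy(pair[chainhash.HashSize:], node[:])
		}
		node = chainhash.DoubleHashH(pair[:])
		pos >>= 1
	}
	return node
}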

func (db *ReadOnlyDBColumnFamily) GetTxMerkle(tx_hashes []chainhash.Hash) ([]TxMerkle, error) {
	selectedTxNum := make([]*IterOptions, 0, len(tx_hashes))
	for _, txhash := range tx_hashes {
		key := prefixes.TxNumKey{Prefix: []byte{prefixes.TxNum}, TxHash: &txhash}
		log.Debugf("%v", key)
		opt, err := db.selectFrom(key.Prefix, &key, &key)
		if err != nil {
			return nil, err
		}
		selectedTxNum = append(selectedTxNum, opt...)
	}

	selectTxByTxNum := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
		txNumKey := in[0].Key.(*prefixes.TxNumKey)
		log.Debugf("%v", txNumKey.TxHash.String())
		out := make([]*IterOptions, 0, 100)
		startKey := &prefixes.TxKey{
			Prefix: []byte{prefixes.Tx},
			TxHash: txNumKey.TxHash,
		}
		endKey := &prefixes.TxKey{
			Prefix: []byte{prefixes.Tx},
			TxHash: txNumKey.TxHash,
		}
		selectedTx, err := db.selectFrom([]byte{prefixes.Tx}, startKey, endKey)
		if err != nil {
			return nil, err
		}
		out = append(out, selectedTx...)
		return out, nil
	}

	blockTxsCache := make(map[uint32][]*chainhash.Hash)
	results := make([]TxMerkle, 0, 500)
	for kvs := range innerJoin(db.DB, iterate(db.DB, selectedTxNum), selectTxByTxNum) {
		if err := checkForError(kvs); err != nil {
			return results, err
		}
		txNumKey, txNumVal := kvs[0].Key.(*prefixes.TxNumKey), kvs[0].Value.(*prefixes.TxNumValue)
		_, txVal := kvs[1].Key.(*prefixes.TxKey), kvs[1].Value.(*prefixes.TxValue)
		txHeight := stack.BisectRight(db.TxCounts, []uint32{txNumVal.TxNum})[0]
		txPos := txNumVal.TxNum - db.TxCounts.Get(txHeight-1)
		// We need all the TX hashes in order to select out the relevant ones.
		if _, ok := blockTxsCache[txHeight]; !ok {
			txs, err := db.GetBlockTXs(txHeight)
			if err != nil {
				return results, err
			}
			blockTxsCache[txHeight] = txs
		}
		blockTxs := blockTxsCache[txHeight]
		results = append(results, TxMerkle{
			TxHash: txNumKey.TxHash,
			RawTx:  txVal.RawTx,
			Height: int(txHeight),
			Pos:    txPos,
			Merkle: merklePath(txPos, blockTxs, []*chainhash.Hash{}),
		})
	}
	return results, nil
}

func (db *ReadOnlyDBColumnFamily) GetClaimByID(claimID string) ([]*ExpandedResolveResult, []*ExpandedResolveResult, error) {
	rows := make([]*ExpandedResolveResult, 0)
	extras := make([]*ExpandedResolveResult, 0)
	claimHash, err := hex.DecodeString(claimID)
	if err != nil {
		return nil, nil, err
	}

	stream, err := db.FsGetClaimByHash(claimHash)
	if err != nil {
		return nil, nil, err
	}
	var res = NewExpandedResolveResult()
	res.Stream = &optionalResolveResultOrError{res: stream}
	rows = append(rows, res)

	if stream != nil && stream.ChannelHash != nil {
		channel, err := db.FsGetClaimByHash(stream.ChannelHash)
		if err != nil {
			return nil, nil, err
		}
		var res = NewExpandedResolveResult()
		res.Channel = &optionalResolveResultOrError{res: channel}
		extras = append(extras, res)
	}

	if stream != nil && stream.RepostedClaimHash != nil {
		repost, err := db.FsGetClaimByHash(stream.RepostedClaimHash)
		if err != nil {
			return nil, nil, err
		}
		var res = NewExpandedResolveResult()
		res.Repost = &optionalResolveResultOrError{res: repost}
		extras = append(extras, res)
	}

	return rows, extras, nil
}

// GetDBState reads and unpacks the DBState row. It returns nil (with no
// error) when the row does not exist.
func (db *ReadOnlyDBColumnFamily) GetDBState() (*prefixes.DBStateValue, error) {
	handle, err := db.EnsureHandle(prefixes.DBState)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewDBStateKey()
	rawKey := key.PackKey()
	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.DBStateValueUnpack(rawValue)
	return value, nil
}
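
// exampleHaveDBState is a minimal usage sketch added for illustration (not
// part of the original file). It demonstrates the three outcomes of
// GetDBState: an error, a missing DBState row (nil, nil), or a loaded state.
func exampleHaveDBState(db *ReadOnlyDBColumnFamily) (bool, error) {
	state, err := db.GetDBState()
	if err != nil {
		return false, err
	}
	return state != nil, nil
}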

// BidOrderNameIter iterates the BidOrder rows for the given normalized name.
// It returns a nil channel if the column family handle cannot be obtained.
func (db *ReadOnlyDBColumnFamily) BidOrderNameIter(normalizedName string) <-chan *prefixes.PrefixRowKV {
	handle, err := db.EnsureHandle(prefixes.BidOrder)
	if err != nil {
		return nil
	}

	key := prefixes.NewBidOrderKey(normalizedName)
	rawKeyPrefix := key.PartialPack(1)
	options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
	options = options.WithIncludeValue(true) //.WithIncludeStop(true)
	ch := IterCF(db.DB, options)
	return ch
}
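
// exampleCountBidOrderRows is a minimal usage sketch added for illustration
// (not part of the original file). It drains the BidOrder iterator for a
// name and counts the rows; consuming the channel to completion also lets
// the underlying iterator finish its cleanup.
func exampleCountBidOrderRows(db *ReadOnlyDBColumnFamily, normalizedName string) int {
	ch := db.BidOrderNameIter(normalizedName)
	if ch == nil {
		// EnsureHandle failed; the iterator reports this as a nil channel.
		return 0
	}
	count := 0
	for range ch {
		count++
	}
	return count
}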

// ClaimShortIdIter iterates the ClaimShortID rows for the given normalized
// name, narrowed to a (possibly partial) claim ID when one is supplied.
// It returns a nil channel if the column family handle cannot be obtained.
func (db *ReadOnlyDBColumnFamily) ClaimShortIdIter(normalizedName string, claimId string) <-chan *prefixes.PrefixRowKV {
	handle, err := db.EnsureHandle(prefixes.ClaimShortIdPrefix)
	if err != nil {
		return nil
	}

	key := prefixes.NewClaimShortIDKey(normalizedName, claimId)
	var rawKeyPrefix []byte
	if claimId != "" {
		rawKeyPrefix = key.PartialPack(2)
	} else {
		rawKeyPrefix = key.PartialPack(1)
	}

	options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
	options = options.WithIncludeValue(true) //.WithIncludeStop(true)
	ch := IterCF(db.DB, options)
	return ch
}
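
// exampleFirstShortIDRow is a minimal usage sketch added for illustration
// (not part of the original file). It returns the first ClaimShortID row for
// a name (optionally narrowed by a partial claim ID) while still draining the
// channel so the iterator can finish.
func exampleFirstShortIDRow(db *ReadOnlyDBColumnFamily, normalizedName, partialClaimId string) *prefixes.PrefixRowKV {
	ch := db.ClaimShortIdIter(normalizedName, partialClaimId)
	if ch == nil {
		return nil
	}
	var first *prefixes.PrefixRowKV
	for kv := range ch {
		if first == nil {
			first = kv
		}
		// Keep consuming the remaining rows rather than abandoning the channel.
	}
	return first
}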

// GetCachedClaimHash looks up the TXOToClaim entry for the given TXO
// (txNum, position). It returns nil (with no error) when no claim controls
// that output.
func (db *ReadOnlyDBColumnFamily) GetCachedClaimHash(txNum uint32, position uint16) (*prefixes.TXOToClaimValue, error) {
	// TODO: implement cache
	handle, err := db.EnsureHandle(prefixes.TXOToClaim)
	if err != nil {
		return nil, err
	}

	key := prefixes.NewTXOToClaimKey(txNum, position)
	rawKey := key.PackKey()

	slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
	defer slice.Free()
	if err != nil {
		return nil, err
	} else if slice.Size() == 0 {
		return nil, nil
	}

	rawValue := make([]byte, len(slice.Data()))
	copy(rawValue, slice.Data())
	value := prefixes.TXOToClaimValueUnpack(rawValue)
	return value, nil
}
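
// exampleTXOHasClaim is a minimal usage sketch added for illustration (not
// part of the original file). It shows how GetCachedClaimHash distinguishes
// "no claim at this TXO" (nil, nil) from a database error.
func exampleTXOHasClaim(db *ReadOnlyDBColumnFamily, txNum uint32, position uint16) (bool, error) {
	v, err := db.GetCachedClaimHash(txNum, position)
	if err != nil {
		return false, err
	}
	return v != nil, nil
}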

// GetBlockerHash gets the hashes of the blocker and filterer of the claim,
// if any, checking the claim itself, its reposted claim, and its channel.
// TODO: this currently converts the byte arrays to strings, which is not
// very efficient. Might want to figure out a better way to do this.
func (db *ReadOnlyDBColumnFamily) GetBlockerHash(claimHash, repostedClaimHash, channelHash []byte) ([]byte, []byte, error) {
	claimHashStr := string(claimHash)
	repostedClaimHashStr := string(repostedClaimHash)
	channelHashStr := string(channelHash)

	var blockedHash, filteredHash []byte

	blockedHash = db.BlockedStreams[claimHashStr]
	if blockedHash == nil {
		blockedHash = db.BlockedStreams[repostedClaimHashStr]
	}
	if blockedHash == nil {
		blockedHash = db.BlockedChannels[claimHashStr]
	}
	if blockedHash == nil {
		blockedHash = db.BlockedChannels[repostedClaimHashStr]
	}
	if blockedHash == nil {
		blockedHash = db.BlockedChannels[channelHashStr]
	}

	filteredHash = db.FilteredStreams[claimHashStr]
	if filteredHash == nil {
		filteredHash = db.FilteredStreams[repostedClaimHashStr]
	}
	if filteredHash == nil {
		filteredHash = db.FilteredChannels[claimHashStr]
	}
	if filteredHash == nil {
		filteredHash = db.FilteredChannels[repostedClaimHashStr]
	}
	if filteredHash == nil {
		filteredHash = db.FilteredChannels[channelHashStr]
	}

	return blockedHash, filteredHash, nil
}
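
// exampleIsBlockedOrFiltered is a minimal usage sketch added for illustration
// (not part of the original file). It reports whether a claim is blocked or
// filtered through any of its own hash, its reposted claim, or its channel.
func exampleIsBlockedOrFiltered(db *ReadOnlyDBColumnFamily, claimHash, repostedClaimHash, channelHash []byte) (blocked, filtered bool, err error) {
	blockerHash, filtererHash, err := db.GetBlockerHash(claimHash, repostedClaimHash, channelHash)
	if err != nil {
		return false, false, err
	}
	return blockerHash != nil, filtererHash != nil, nil
}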