Compare commits

...

44 commits

Author SHA1 Message Date
Jonathan Moody
d2193e980a
blockchain.transaction.broadcast implementation (#80)
* Generate secondary hashXNotification(s) on every headerNotification.

* Attempt LBCD connection with rpc.Client.

* Optional --daemon-url.

* Correct HashXStatusKey field. Should be HASHX_LEN.

* Connect to lbcd using lbcd/rpcclient library.

* Handle deprecation of node.js 12 actions.

* Add --daemon-ca-path argument and establish HTTPS connection if specified.

* Remove dead code. Tighten definition of TransactionBroadcastReq.

* Correct default daemon URL.
2022-12-16 13:54:19 -06:00
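For context: a connection to lbcd with the lbcd/rpcclient library along the lines this commit describes (rpc.Client over HTTP POST, upgraded to HTTPS when --daemon-ca-path is given) might look like the sketch below. The helper name and credentials are illustrative, not taken from the PR.

package example

import (
	"os"

	"github.com/lbryio/lbcd/rpcclient"
)

// connectDaemon is a hypothetical helper showing the rpcclient setup.
func connectDaemon(daemonURL, caPath string) (*rpcclient.Client, error) {
	cfg := &rpcclient.ConnConfig{
		Host:         daemonURL,
		User:         "rpcuser", // illustrative credentials
		Pass:         "rpcpass",
		HTTPPostMode: true,         // lbcd's JSON-RPC is served over HTTP POST
		DisableTLS:   caPath == "", // plain HTTP unless a CA is provided
	}
	if caPath != "" {
		cert, err := os.ReadFile(caPath)
		if err != nil {
			return nil, err
		}
		cfg.Certificates = cert // trust the daemon's CA and use HTTPS
	}
	return rpcclient.New(cfg, nil) // nil: no websocket notification handlers
}

A blockchain.transaction.broadcast handler can then hand the decoded transaction to the client's SendRawTransaction call.
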
Jeffrey Picard
711c4b4b7b
WIP: Json rpc federation, search/getclaimbyid, and shutdown (#76)
* Cleanup shutdown and peers subscribe

this has errors currently, need to figure out data race

* fixed data race finally....

* getclaimbyid and search

* hook jsonrpc peers subscribe into current federation

* cleanup some peers stuff

* remove commented code

* tie into session manager in jsonrpc peers subscribe

* use int for port args

* cleanup and fix a bunch of compiler warnings

* use logrus everywhere

* cleanup test
2022-12-07 11:01:36 -05:00
Jonathan Moody
317cdf7129
WIP: blockchain.transaction.yyy JSON RPC implementations (#78)
* Partial blockchain.transaction.yyy RPC implementations.

* Register RPC service object.

* Move session manager start/stop to a better place.

* Attempt to fill in the details of transaction.get_batch,
including merkle path.

* Correct interpretation of DBStateValue Genesis hash.

* Convert Args.Port to int and validate. Run UDP ping server on JSONRPCPort too.

* Add BlockHeader to HeightHash notification.

* Limit session-based JSON RPC service to IPv4.
Client not ready for IPv6.

* Adapt to new HeightHash struct.

* Fine tune JSON RPC handlers and types to match lbry-sdk expectations.
Implement UnmarshalJSON()/MarshalJSON() for several types.

* Add more special handling of DBStateValue.Genesis hash.

* Set IncludeStop=false generally to avoid returning extra rows.
Other misc fixes.
2022-12-06 16:14:28 -05:00
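The MarshalJSON()/UnmarshalJSON() work mentioned above is the usual Go pattern for matching a client's wire format. As a generic illustration (a sketch, not one of the PR's actual types), a byte field that the client expects as a hex string can be wrapped like this:

package example

import (
	"encoding/hex"
	"encoding/json"
)

// HexBytes is a hypothetical wrapper that serializes raw bytes as the
// JSON hex string an Electrum-style client expects.
type HexBytes []byte

func (b HexBytes) MarshalJSON() ([]byte, error) {
	return json.Marshal(hex.EncodeToString(b))
}

func (b *HexBytes) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	decoded, err := hex.DecodeString(s)
	if err != nil {
		return err
	}
	*b = decoded
	return nil
}
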
Jonathan Moody
e070e8a51e
JSON RPC compatibility workarounds to support lbry-sdk (#75)
* Fix log spam (Already shutting down...)

* Workaround to allow lbry-sdk to call server.version and server.features.
Incoming/outgoing JSON is patched using yet another codec (jsonPatchingCodec).
Add more logging of raw/patched JSON.

* Elaborate comment on jsonPatchingCodec.
2022-10-29 18:42:24 +03:00
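The codec itself is not shown in this log, but the idea is to wrap the regular gorilla/rpc codec and rewrite the raw JSON on the way in (and similarly on the way out). A rough sketch, assuming gorilla/rpc/v2's Codec interface; patchJSON is a hypothetical stand-in for the PR's actual patching rules:

package example

import (
	"bytes"
	"io"
	"net/http"

	"github.com/gorilla/rpc/v2"
	log "github.com/sirupsen/logrus"
)

// patchJSON is a hypothetical stand-in: rewrite whatever lbry-sdk sends
// for server.version/server.features into the shape the handlers expect.
func patchJSON(raw []byte) []byte { return raw }

// jsonPatchingCodec sketch: intercept the request body, patch it, then
// delegate decoding to the wrapped codec.
type jsonPatchingCodec struct {
	inner rpc.Codec
}

func (c *jsonPatchingCodec) NewRequest(r *http.Request) rpc.CodecRequest {
	if body, err := io.ReadAll(r.Body); err == nil {
		patched := patchJSON(body)
		log.Debugf("raw: %s patched: %s", body, patched)
		r.Body = io.NopCloser(bytes.NewReader(patched))
	}
	return c.inner.NewRequest(r)
}
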
Jeffrey Picard
cd7f20a461
Server endpoints goroutine refactor (#69)
* server.xxx endpoints

Additional server endpoints in jsonrpc and also some refactoring

* server.banner

* more endpoints

* use lbry.go stop pattern

* set genesis hash properly

* updates and refactors

* remove shutdowncalled and itmut

* remove OpenIterators

* remove shutdown and done channels from db and use stop group

* currently broken, incorporated stop groups into the session manager

* set the rest of the default args for tests

* add default json rpc http port and cleanup

* tests for new jsonrpc endpoints

* cleanup and add manage goroutine to stopper pattern

* cleanup

* NewDebug

* asdf

* fix
2022-10-25 08:48:13 +03:00
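The lbry.go stop pattern adopted here shows up throughout the diffs below: each goroutine registers with a stop.Group, watches its channel, and the owner signals and waits on shutdown. In outline (a minimal sketch assuming the usual lbry.go stopper API):

package example

import (
	"time"

	"github.com/lbryio/lbry.go/v3/extras/stop"
)

func runAndStop() {
	grp := stop.New()

	grp.Add(1)
	go func() {
		defer grp.Done()
		for {
			select {
			case <-grp.Ch(): // closed once shutdown begins
				return
			case <-time.After(10 * time.Millisecond):
				// periodic work
			}
		}
	}()

	// On shutdown: signal the group and block until every member is Done().
	grp.StopAndWait()
}
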
Jeffrey Picard
b85556499f v0.2022.10.05.1 2022-10-05 06:25:42 +03:00
Jeffrey Picard
8eb7841600
little fixes for debugging and shutdown (#67) 2022-10-05 06:24:42 +03:00
Jeffrey Picard
6d4b9b5e37 v0.2022.10.04.1 2022-10-04 20:52:47 +03:00
Jeffrey Picard
537b8c7ddd
integration testing scripts (#64)
* integration testing scripts

some scripts for integration testing and a Dockerfile for an action.
Still need to figure out how to properly run a more realistic version
in CI.

* update

* changes

* db shutdown race condition fix

* changes per pr

* changes per code review

* fix testing

* add shutdowncalled bool to db
2022-10-04 20:25:44 +03:00
Jonathan Moody
8fb3db8136
Add subscribe/unsubscribe RPCs. Add session, sessionManager, and serve JSON RPC (without HTTP). (#66)
* Move and rename BlockchainCodec, BlockchainCodecRequest.
These are not specifically "blockchain", rather they are
specific to how gorilla/rpc works.

* Move claimtrie-related service/handlers to jsonrpc_claimtrie.go.

* Pull out decode logic into named func newBlockHeaderElectrum().

* Rename BlockchainService -> BlockchainBlockService.

* Drop http.Request arg from handlers, and use RegisterTCPService().

* Implement GetStatus() to pull data from HashXStatus table.

* Make the service objects independent, so we don't have inheritance.

* Add core session/subscription logic (session.go).
Implement subscribe/unsubscribe handlers.

* Support both pure JSON and JSON-over-HTTP services.
Forward NotifierChan messages to sessionManager.

* Only assign default port (50001) if neither --json-rpc-port nor
--json-rpc-http-port are specified.

* Handle failures with goto instead of break. Update error logging.

* Add --max-sessions, --session-timeout args. Enforce max sessions.

* Changes to make session.go testable. Conn created with Pipe()
used in testing has no unique Addr.

* Add tests for headers, headers.subscribe, address.subscribe.

* HashXStatus, HashXMempoolStatus not populated by default. Fix GetStatus().

* Use time.Ticker object to drive management activity.
2022-10-04 17:05:06 +03:00
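The time.Ticker-driven management loop from the last bullet might look roughly like this sketch; the types and fields here are hypothetical, not the PR's actual sessionManager:

package example

import (
	"net"
	"sync"
	"time"

	"github.com/lbryio/lbry.go/v3/extras/stop"
)

type session struct {
	conn     net.Conn
	lastSeen time.Time
}

type sessionManager struct {
	mu             sync.Mutex
	sessions       map[string]*session
	sessionTimeout time.Duration
	grp            *stop.Group
}

// manage prunes idle sessions on every tick and exits on shutdown.
func (sm *sessionManager) manage(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			sm.mu.Lock()
			for id, sess := range sm.sessions {
				if time.Since(sess.lastSeen) > sm.sessionTimeout {
					sess.conn.Close() // enforce --session-timeout
					delete(sm.sessions, id)
				}
			}
			sm.mu.Unlock()
		case <-sm.grp.Ch(): // stop group shutdown
			return
		}
	}
}
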
Jonathan Moody
979d0d16b6
Adjust EffectiveAmountValue to include ActivatedSupportSum. (#61)
Make use of this in GetEffectiveAmount() tests.
2022-09-16 18:03:15 +03:00
Jonathan Moody
7d24ff82bf
Merge pull request #55 from moodyjon/blockchain_rpc1
Add some blockchain RPC handlers and DB fetching routines
2022-09-14 10:23:34 -05:00
Jonathan Moody
86a287ec69 Skip TestUDPPing if production server is down. 2022-09-13 15:35:54 -05:00
Jonathan Moody
891f63fb5c Update jsonrpc_blockchain_tests.go for new sample data. 2022-09-13 15:33:24 -05:00
Jonathan Moody
789974227f Add sample data from test_variety_of_transactions_and_longish_history.
Rework tests to use the sample data.
2022-09-13 15:05:11 -05:00
Jonathan Moody
71e79c553e One more RPC (get_server_height), and update comment
to include full RPC name.
2022-09-08 13:17:52 -05:00
Jonathan Moody
b298454727 Fix RPC handler registration and BlockGetChunkResp name. 2022-09-08 11:50:06 -05:00
Jonathan Moody
8c8871b4d2 Register blockchain.* handlers in jsonrpc_service.go. 2022-09-07 15:01:47 -05:00
Jonathan Moody
20e32437e9 Rename blockchain.go -> jsonrpc_blockchain.go. 2022-09-07 14:32:34 -05:00
Jonathan Moody
d0d6145f9d Refactor blockchain.go handlers to be compatible with gorilla/rpc.
Add speculative BlockchainCodecRequest which might handle rewriting
incoming method names.
2022-09-07 14:32:34 -05:00
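gorilla/rpc dispatches on method names of the form "Service.Method", while Electrum clients send dotted lowercase names like "blockchain.block.get_chunk", so the codec request has to translate between the two. A sketch of the idea (the PR's actual mapping may differ):

package example

import "strings"

// translateMethod: "blockchain.block.get_chunk" -> "BlockchainBlock.Get_chunk".
// Illustrative only.
func translateMethod(method string) string {
	upperFirst := func(s string) string {
		if s == "" {
			return s
		}
		return strings.ToUpper(s[:1]) + s[1:]
	}
	parts := strings.Split(method, ".")
	if len(parts) < 2 {
		return method
	}
	service := ""
	for _, p := range parts[:len(parts)-1] {
		service += upperFirst(p)
	}
	return service + "." + upperFirst(parts[len(parts)-1])
}
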
Jonathan Moody
90afae7cd5 Add scripthash variants of RPC handlers. 2022-09-07 14:32:34 -05:00
Jonathan Moody
f5b8f2ce0d Add --chain=X argument, and use it to determine chain when DB is empty. 2022-09-07 14:32:34 -05:00
Jonathan Moody
321bcf6420 Fix encoding of TX and Block hashes in response. 2022-09-07 14:32:34 -05:00
Jonathan Moody
293a3f685e Fix some DB logic and add tests (blockchain_test.go). 2022-09-07 14:32:34 -05:00
Jonathan Moody
f55a5ed777 Continuing blockchain RPC handler work. Add JSON tags.
Fetch Height using TxCounts.
2022-09-07 14:32:34 -05:00
Jonathan Moody
50f7e91ead Infer chain (mainnet, testnet3, regtest) based on DBStateValue.
Correct typo DDVersion -> DBVersion. Misc logging improvements.
2022-09-07 14:32:34 -05:00
Jonathan Moody
fe18c70bf7 Add some blockchain RPC handlers and database fetching routines. 2022-09-07 14:32:20 -05:00
Jeffrey Picard
9403d84a83
WIP: Resolve json rpc (#57)
* jsonrpc

* update readme for open file limits

* add CGO flags to readme

* remove unneeded logging

* don't start jsonrpc server in unit tests

* cleanup and add args for json rpc

* correct rpc default port

* remove unused test_rpc.sh script

Co-authored-by: Ubuntu <ubuntu@ns5010184.ip-15-235-15.net>
2022-09-07 21:36:07 +03:00
Jonathan Moody
09fd939b60
Merge pull request #60 from moodyjon/slicebacked_type_param
Update go.mod, go.sum for use of constraints (x/exp).
2022-09-07 07:22:01 -05:00
Jonathan Moody
dc9b4ada2a Update go.mod, go.sum for use of constraints (x/exp). 2022-09-07 07:13:51 -05:00
Jonathan Moody
c38134b645
Merge pull request #56 from moodyjon/slicebacked_type_param
Add element type param T to SliceBacked[T]. Require T satisfy
2022-09-07 05:39:34 -05:00
Jonathan Moody
d025ea1616
Add "on: pull_request" to worflow. (#59) 2022-09-06 22:13:13 +03:00
Jonathan Moody
5b690ff2ff
Merge pull request #58 from moodyjon/hashx_history_fix
Fix struct annotation for HashXHistoryValue. TxNums now little-endian.
2022-09-06 13:03:07 -05:00
Jonathan Moody
aa16207aa5 Fix struct annotation for HashXHistoryValue. TxNums now little-endian. 2022-09-06 12:57:45 -05:00
Jonathan Moody
78b9a625eb
Merge pull request #54 from moodyjon/hashx_history_fix
Payload of HashXHistoryValue should be an array of uint32 representing "txnums"
2022-09-06 12:40:06 -05:00
Jonathan Moody
e46ac7c913 HashXHistoryValue TxNums are unique in that they are little-endian
(at least when written by Python scribe on ARM64 Mac or x86).
2022-09-01 13:01:16 -05:00
Jonathan Moody
8ac89195db Add element type param T to SliceBacked[T]. Require T satisfy
constraints.Ordered to make BisectRight() statically type-safe.
2022-08-30 16:24:43 -05:00
Jonathan Moody
4e11433325 Payload of HashXHistoryValue should be an array of uint32 representing "txnums". 2022-08-26 10:18:01 -04:00
Jonathan Moody
9d9c73f97f
Add RepostedCount, EffectiveAmount prefix rows (#51)
* Rename prefix EffectiveAmount -> BidOrder.

* Add RepostedCount, EffectiveAmount prefix rows. Add testdata.

* Update db_get.go helpers to use EffectiveAmount, RepostedCount
tables. Update tests.
2022-08-26 16:24:39 +03:00
Jeffrey Picard
cbdcc5faeb v0.2022.08.16.1 2022-08-16 14:56:19 +03:00
Jeffrey Picard
3a53f46114
Updates for build (#50)
* Updates for build

* go 1.18.1 in dockerfile

* use go 1.18.5

* trying this ...

* asdf
2022-08-16 14:52:26 +03:00
Jonathan Moody
071aa2a7ad
Catchup to python-herald schema. Plus lots of refactoring. (#49)
* Make prefixes_test.go more resilient against garbage left
by a prior crash. Also correct error logging.

* Don't do the ones' complement thing with DBStateValue fields
HistFlushCount, CompFlushCount, CompCursor. Python-herald
doesn't do it, and it presents one more irregular case for
(un)marshalling fields.

* Simplify type-specific partial packing, and simplify dispatch for pack key/value.

* Add struct field annotations and refactor to prepare for
use of "restruct" generic packing/unpacking.

* Add dynamic pack/unpack based on "restruct" module.
Dispatch normal pack/unpack through tableRegistry[] map
instead of switch.

* Add 5 new prefixes/tables (TrendingNotifications..HashXMempoolStatus).

* Undo rename. TouchedOrDeleted -> ClaimDiff.

* Fixup callers of eliminated partial pack functions. Have them use key.PartialPack(n).

* Add pluggable SerializationAPI. Use it in prefixes_test.
Populate PrefixRowKV.RawKey,RawValue when appropriate.

* Undo accidental bump of rocksdb version.

* Add .vscode dir to gitignore.

* Fix ClaimToChannelValue annotation. Implement BlockTxsValue workaround
as I can't find the right annotation to get it marshalled/unmarshalled.

* Strengthen partial packing verification. Fix bugs
in UnpackKey/UnpackValue for new types.

* Remove .DS_Store, and ignore in future.

* Fix MempoolTxKey, TouchedHashXValue. Remove some unneeded struct tags.

* Generate test data and complete the tests for the new tables.
Add Fuzz tests for TouchedHashXKey, TouchedHashXValue with
happy path test data (only).

* Move tableRegistry to prefixes.go and rename it prefixRegistry.
Other minor fixes, comments.

* Add test that runs through GetPrefixes() contents, and verifies
they are registered in prefixRegistry.
2022-08-16 08:45:41 +03:00
Jeffrey Picard
b018217899
fix release script (#48) 2022-08-10 21:06:56 +03:00
Jack Robison
13479794ed
Update readme.md 2022-08-10 11:05:39 -04:00
64 changed files with 15864 additions and 1350 deletions


@@ -1,12 +1,12 @@
name: 'Build and Test Hub'
on:
push:
on: ["push", "pull_request"]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Build and Test
uses: ./

.gitignore (3 changes)

@@ -1 +1,4 @@
.idea/
.vscode/
.DS_Store
.venv


@@ -2,4 +2,4 @@ name: 'Build and Test'
description: 'Build and test hub'
runs:
using: 'docker'
image: 'jeffreypicard/hub-github-action:dev'
image: 'jeffreypicard/hub-github-action:dev'

db/db.go (170 changes)

@@ -15,6 +15,7 @@ import (
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/herald.go/internal/metrics"
pb "github.com/lbryio/herald.go/protobuf/go"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/linxGnu/grocksdb"
log "github.com/sirupsen/logrus"
@@ -48,18 +49,17 @@ type ReadOnlyDBColumnFamily struct {
DB *grocksdb.DB
Handles map[string]*grocksdb.ColumnFamilyHandle
Opts *grocksdb.ReadOptions
TxCounts *stack.SliceBacked
TxCounts *stack.SliceBacked[uint32]
Height uint32
LastState *prefixes.DBStateValue
Headers *stack.SliceBacked
Headers *stack.SliceBacked[[]byte]
BlockingChannelHashes [][]byte
FilteringChannelHashes [][]byte
BlockedStreams map[string][]byte
BlockedChannels map[string][]byte
FilteredStreams map[string][]byte
FilteredChannels map[string][]byte
ShutdownChan chan struct{}
DoneChan chan struct{}
Grp *stop.Group
Cleanup func()
}
@@ -318,9 +318,29 @@ func intMin(a, b int) int {
return b
}
// FIXME: This was copied from the signal.go file, maybe move it to a more common place?
// interruptRequested returns true when the channel returned by
// interruptListener was closed. This simplifies early shutdown slightly since
// the caller can just use an if statement instead of a select.
func interruptRequested(interrupted <-chan struct{}) bool {
select {
case <-interrupted:
return true
default:
}
return false
}
func IterCF(db *grocksdb.DB, opts *IterOptions) <-chan *prefixes.PrefixRowKV {
ch := make(chan *prefixes.PrefixRowKV)
// Check if we've been told to shutdown in between getting created and getting here
if opts.Grp != nil && interruptRequested(opts.Grp.Ch()) {
opts.Grp.Done()
return ch
}
ro := grocksdb.NewDefaultReadOptions()
ro.SetFillCache(opts.FillCache)
it := db.NewIteratorCF(ro, opts.CfHandle)
@@ -336,6 +356,10 @@ func IterCF(db *grocksdb.DB, opts *IterOptions) <-chan *prefixes.PrefixRowKV {
it.Close()
close(ch)
ro.Destroy()
if opts.Grp != nil {
// opts.Grp.DoneNamed(iterKey)
opts.Grp.Done()
}
}()
var prevKey []byte
@@ -355,6 +379,9 @@ func IterCF(db *grocksdb.DB, opts *IterOptions) <-chan *prefixes.PrefixRowKV {
if kv = opts.ReadRow(&prevKey); kv != nil {
ch <- kv
}
if opts.Grp != nil && interruptRequested(opts.Grp.Ch()) {
return
}
}
}()
@@ -406,6 +433,75 @@ func Iter(db *grocksdb.DB, opts *IterOptions) <-chan *prefixes.PrefixRowKV {
return ch
}
func (db *ReadOnlyDBColumnFamily) selectFrom(prefix []byte, startKey, stopKey prefixes.BaseKey) ([]*IterOptions, error) {
handle, err := db.EnsureHandle(prefix[0])
if err != nil {
return nil, err
}
// Prefix and handle
options := NewIterateOptions().WithDB(db).WithPrefix(prefix).WithCfHandle(handle)
// Start and stop bounds
options = options.WithStart(startKey.PackKey()).WithStop(stopKey.PackKey()).WithIncludeStop(false)
// Include the key and value
options = options.WithIncludeKey(true).WithIncludeValue(true)
return []*IterOptions{options}, nil
}
func iterate(db *grocksdb.DB, opts []*IterOptions) <-chan []*prefixes.PrefixRowKV {
out := make(chan []*prefixes.PrefixRowKV)
routine := func() {
for i, o := range opts {
j := 0
for kv := range IterCF(db, o) {
row := make([]*prefixes.PrefixRowKV, 0, 1)
row = append(row, kv)
log.Debugf("iterate[%v][%v] %#v -> %#v", i, j, kv.Key, kv.Value)
out <- row
j++
}
}
close(out)
}
go routine()
return out
}
func innerJoin(db *grocksdb.DB, in <-chan []*prefixes.PrefixRowKV, selectFn func([]*prefixes.PrefixRowKV) ([]*IterOptions, error)) <-chan []*prefixes.PrefixRowKV {
out := make(chan []*prefixes.PrefixRowKV)
routine := func() {
for kvs := range in {
selected, err := selectFn(kvs)
if err != nil {
out <- []*prefixes.PrefixRowKV{{Error: err}}
close(out)
return
}
for kv := range iterate(db, selected) {
row := make([]*prefixes.PrefixRowKV, 0, len(kvs)+1)
row = append(row, kvs...)
row = append(row, kv...)
for i, kv := range row {
log.Debugf("row[%v] %#v -> %#v", i, kv.Key, kv.Value)
}
out <- row
}
}
close(out)
return
}
go routine()
return out
}
func checkForError(kvs []*prefixes.PrefixRowKV) error {
for _, kv := range kvs {
if kv.Error != nil {
return kv.Error
}
}
return nil
}
//
// GetDB functions that open and return a db
//
@@ -435,7 +531,7 @@ func GetWriteDBCF(name string) (*grocksdb.DB, []*grocksdb.ColumnFamilyHandle, er
}
// GetProdDB returns a db that is used for production.
func GetProdDB(name string, secondaryPath string) (*ReadOnlyDBColumnFamily, func(), error) {
func GetProdDB(name string, secondaryPath string, grp *stop.Group) (*ReadOnlyDBColumnFamily, error) {
prefixNames := prefixes.GetPrefixes()
// additional prefixes that aren't in the code explicitly
cfNames := []string{"default", "e", "d", "c"}
@@ -444,7 +540,7 @@ func GetProdDB(name string, secondaryPath string) (*ReadOnlyDBColumnFamily, func
cfNames = append(cfNames, cfName)
}
db, err := GetDBColumnFamilies(name, secondaryPath, cfNames)
db, err := GetDBColumnFamilies(name, secondaryPath, cfNames, grp)
cleanupFiles := func() {
err = os.RemoveAll(secondaryPath)
@@ -454,7 +550,8 @@ func GetProdDB(name string, secondaryPath string) (*ReadOnlyDBColumnFamily, func
}
if err != nil {
return nil, cleanupFiles, err
cleanupFiles()
return nil, err
}
cleanupDB := func() {
@@ -463,11 +560,11 @@ func GetProdDB(name string, secondaryPath string) (*ReadOnlyDBColumnFamily, func
}
db.Cleanup = cleanupDB
return db, cleanupDB, nil
return db, nil
}
// GetDBColumnFamilies gets a db with the specified column families and secondary path.
func GetDBColumnFamilies(name string, secondayPath string, cfNames []string) (*ReadOnlyDBColumnFamily, error) {
func GetDBColumnFamilies(name string, secondayPath string, cfNames []string, grp *stop.Group) (*ReadOnlyDBColumnFamily, error) {
opts := grocksdb.NewDefaultOptions()
roOpts := grocksdb.NewDefaultReadOptions()
cfOpt := grocksdb.NewDefaultOptions()
@@ -482,12 +579,13 @@ func GetDBColumnFamilies(name string, secondayPath string, cfNames []string) (*R
// db, handles, err := grocksdb.OpenDbColumnFamilies(opts, name, cfNames, cfOpts)
if err != nil {
log.Errorf("open db as secondary failed: %v", err)
return nil, err
}
var handlesMap = make(map[string]*grocksdb.ColumnFamilyHandle)
for i, handle := range handles {
log.Printf("%d: %+v\n", i, handle)
log.Printf("handle %d(%s): %+v\n", i, cfNames[i], handle)
handlesMap[cfNames[i]] = handle
}
@@ -503,8 +601,7 @@ func GetDBColumnFamilies(name string, secondayPath string, cfNames []string) (*R
LastState: nil,
Height: 0,
Headers: nil,
ShutdownChan: make(chan struct{}),
DoneChan: make(chan struct{}),
Grp: grp,
}
err = myDB.ReadDBState() //TODO: Figure out right place for this
@@ -556,7 +653,7 @@ func (db *ReadOnlyDBColumnFamily) Advance(height uint32) {
}
txCount := txCountObj.TxCount
if db.TxCounts.GetTip().(uint32) >= txCount {
if db.TxCounts.GetTip() >= txCount {
log.Error("current tip should be less than new txCount",
"tx count tip:", db.TxCounts.GetTip(), "tx count:", txCount)
}
@@ -573,21 +670,22 @@ func (db *ReadOnlyDBColumnFamily) Unwind() {
// Shutdown shuts down the db.
func (db *ReadOnlyDBColumnFamily) Shutdown() {
db.ShutdownChan <- struct{}{}
<-db.DoneChan
log.Println("Calling cleanup...")
db.Cleanup()
log.Println("Leaving Shutdown...")
}
// RunDetectChanges Go routine the runs continuously while the hub is active
// to keep the db readonly view up to date and handle reorgs on the
// blockchain.
func (db *ReadOnlyDBColumnFamily) RunDetectChanges(notifCh chan *internal.HeightHash) {
func (db *ReadOnlyDBColumnFamily) RunDetectChanges(notifCh chan<- interface{}) {
db.Grp.Add(1)
go func() {
lastPrint := time.Now()
for {
// FIXME: Figure out best sleep interval
if time.Since(lastPrint) > time.Second {
log.Debug("DetectChanges:", db.LastState)
log.Debugf("DetectChanges: %#v", db.LastState)
lastPrint = time.Now()
}
err := db.detectChanges(notifCh)
@@ -595,8 +693,8 @@ func (db *ReadOnlyDBColumnFamily) RunDetectChanges(notifCh chan *internal.Height
log.Infof("Error detecting changes: %#v", err)
}
select {
case <-db.ShutdownChan:
db.DoneChan <- struct{}{}
case <-db.Grp.Ch():
db.Grp.Done()
return
case <-time.After(time.Millisecond * 10):
}
@@ -605,7 +703,7 @@ }
}
// DetectChanges keep the rocksdb db in sync and handle reorgs
func (db *ReadOnlyDBColumnFamily) detectChanges(notifCh chan *internal.HeightHash) error {
func (db *ReadOnlyDBColumnFamily) detectChanges(notifCh chan<- interface{}) error {
err := db.DB.TryCatchUpWithPrimary()
if err != nil {
return err
@@ -636,11 +734,10 @@ func (db *ReadOnlyDBColumnFamily) detectChanges(notifCh chan *internal.HeightHas
if err != nil {
return err
}
curHeaderObj := db.Headers.GetTip()
if curHeaderObj == nil {
curHeader := db.Headers.GetTip()
if curHeader == nil {
break
}
curHeader := curHeaderObj.([]byte)
log.Debugln("lastHeightHeader: ", hex.EncodeToString(lastHeightHeader))
log.Debugln("curHeader: ", hex.EncodeToString(curHeader))
if bytes.Equal(curHeader, lastHeightHeader) {
@@ -678,7 +775,12 @@
log.Info("error getting block hash: ", err)
return err
}
notifCh <- &internal.HeightHash{Height: uint64(height), BlockHash: hash}
header, err := db.GetHeader(height)
if err != nil {
log.Info("error getting block header: ", err)
return err
}
notifCh <- &internal.HeightHash{Height: uint64(height), BlockHash: hash, BlockHeader: header}
}
//TODO: ClearCache
log.Warn("implement cache clearing")
@@ -716,13 +818,13 @@ func (db *ReadOnlyDBColumnFamily) InitHeaders() error {
}
//TODO: figure out a reasonable default and make it a constant
db.Headers = stack.NewSliceBacked(12000)
db.Headers = stack.NewSliceBacked[[]byte](12000)
startKey := prefixes.NewHeaderKey(0)
// endKey := prefixes.NewHeaderKey(db.LastState.Height)
startKeyRaw := startKey.PackKey()
// endKeyRaw := endKey.PackKey()
options := NewIterateOptions().WithPrefix([]byte{prefixes.Header}).WithCfHandle(handle)
options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.Header}).WithCfHandle(handle)
options = options.WithIncludeKey(false).WithIncludeValue(true) //.WithIncludeStop(true)
options = options.WithStart(startKeyRaw) //.WithStop(endKeyRaw)
@@ -743,9 +845,9 @@ func (db *ReadOnlyDBColumnFamily) InitTxCounts() error {
return err
}
db.TxCounts = stack.NewSliceBacked(InitialTxCountSize)
db.TxCounts = stack.NewSliceBacked[uint32](InitialTxCountSize)
options := NewIterateOptions().WithPrefix([]byte{prefixes.TxCount}).WithCfHandle(handle)
options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.TxCount}).WithCfHandle(handle)
options = options.WithIncludeKey(false).WithIncludeValue(true).WithIncludeStop(true)
ch := IterCF(db.DB, options)
@@ -850,8 +952,8 @@ func ReadPrefixN(db *grocksdb.DB, prefix []byte, n int) []*prefixes.PrefixRowKV
value := it.Value()
res[i] = &prefixes.PrefixRowKV{
Key: key.Data(),
Value: value.Data(),
RawKey: key.Data(),
RawValue: value.Data(),
}
key.Free()
@@ -908,8 +1010,8 @@ func readWriteRawNCF(db *grocksdb.DB, options *IterOptions, out string, n int, f
if i >= n {
return
}
key := kv.Key.([]byte)
value := kv.Value.([]byte)
key := kv.RawKey
value := kv.RawValue
keyHex := hex.EncodeToString(key)
valueHex := hex.EncodeToString(value)
//log.Println(keyHex)
@@ -947,8 +1049,8 @@ func ReadWriteRawN(db *grocksdb.DB, options *IterOptions, out string, n int) {
if i >= n {
return
}
key := kv.Key.([]byte)
value := kv.Value.([]byte)
key := kv.RawKey
value := kv.RawValue
keyHex := hex.EncodeToString(key)
valueHex := hex.EncodeToString(value)
log.Println(keyHex)


@@ -3,12 +3,19 @@ package db
// db_get.go contains the basic access functions to the database.
import (
"bytes"
"crypto/sha256"
"encoding/hex"
"fmt"
"log"
"math"
"github.com/lbryio/herald.go/db/prefixes"
"github.com/lbryio/herald.go/db/stack"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/wire"
"github.com/linxGnu/grocksdb"
log "github.com/sirupsen/logrus"
)
// GetExpirationHeight returns the expiration height for the given height. Uses
@@ -61,6 +68,57 @@ func (db *ReadOnlyDBColumnFamily) GetBlockHash(height uint32) ([]byte, error) {
return rawValue, nil
}
func (db *ReadOnlyDBColumnFamily) GetBlockTXs(height uint32) ([]*chainhash.Hash, error) {
handle, err := db.EnsureHandle(prefixes.BlockTXs)
if err != nil {
return nil, err
}
key := prefixes.BlockTxsKey{
Prefix: []byte{prefixes.BlockTXs},
Height: height,
}
slice, err := db.DB.GetCF(db.Opts, handle, key.PackKey())
defer slice.Free()
if err != nil {
return nil, err
}
if slice.Size() == 0 {
return nil, nil
}
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.BlockTxsValueUnpack(rawValue)
return value.TxHashes, nil
}
func (db *ReadOnlyDBColumnFamily) GetTouchedHashXs(height uint32) ([][]byte, error) {
handle, err := db.EnsureHandle(prefixes.TouchedHashX)
if err != nil {
return nil, err
}
key := prefixes.TouchedHashXKey{
Prefix: []byte{prefixes.TouchedHashX},
Height: height,
}
slice, err := db.DB.GetCF(db.Opts, handle, key.PackKey())
defer slice.Free()
if err != nil {
return nil, err
}
if slice.Size() == 0 {
return nil, nil
}
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.TouchedHashXValue{}
value.UnpackValue(rawValue)
return value.TouchedHashXs, nil
}
func (db *ReadOnlyDBColumnFamily) GetHeader(height uint32) ([]byte, error) {
handle, err := db.EnsureHandle(prefixes.Header)
if err != nil {
@@ -82,6 +140,260 @@ func (db *ReadOnlyDBColumnFamily) GetHeader(height uint32) ([]byte, error) {
return rawValue, nil
}
func (db *ReadOnlyDBColumnFamily) GetHeaders(height uint32, count uint32) ([][112]byte, error) {
handle, err := db.EnsureHandle(prefixes.Header)
if err != nil {
return nil, err
}
startKeyRaw := prefixes.NewHeaderKey(height).PackKey()
endKeyRaw := prefixes.NewHeaderKey(height + count).PackKey()
options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.Header}).WithCfHandle(handle)
options = options.WithIncludeKey(false).WithIncludeValue(true) //.WithIncludeStop(true)
options = options.WithStart(startKeyRaw).WithStop(endKeyRaw)
result := make([][112]byte, 0, count)
for kv := range IterCF(db.DB, options) {
h := [112]byte{}
copy(h[:], kv.Value.(*prefixes.BlockHeaderValue).Header[:112])
result = append(result, h)
}
return result, nil
}
func (db *ReadOnlyDBColumnFamily) GetBalance(hashX []byte) (uint64, uint64, error) {
handle, err := db.EnsureHandle(prefixes.UTXO)
if err != nil {
return 0, 0, err
}
startKey := prefixes.UTXOKey{
Prefix: []byte{prefixes.UTXO},
HashX: hashX,
TxNum: 0,
Nout: 0,
}
endKey := prefixes.UTXOKey{
Prefix: []byte{prefixes.UTXO},
HashX: hashX,
TxNum: math.MaxUint32,
Nout: math.MaxUint16,
}
startKeyRaw := startKey.PackKey()
endKeyRaw := endKey.PackKey()
// Prefix and handle
options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.UTXO}).WithCfHandle(handle)
// Start and stop bounds
options = options.WithStart(startKeyRaw).WithStop(endKeyRaw).WithIncludeStop(true)
// Don't include the key
options = options.WithIncludeKey(false).WithIncludeValue(true)
ch := IterCF(db.DB, options)
var confirmed uint64 = 0
var unconfirmed uint64 = 0 // TODO
for kv := range ch {
confirmed += kv.Value.(*prefixes.UTXOValue).Amount
}
return confirmed, unconfirmed, nil
}
type TXOInfo struct {
TxHash *chainhash.Hash
TxPos uint16
Height uint32
Value uint64
}
func (db *ReadOnlyDBColumnFamily) GetUnspent(hashX []byte) ([]TXOInfo, error) {
startKey := &prefixes.UTXOKey{
Prefix: []byte{prefixes.UTXO},
HashX: hashX,
TxNum: 0,
Nout: 0,
}
endKey := &prefixes.UTXOKey{
Prefix: []byte{prefixes.UTXO},
HashX: hashX,
TxNum: math.MaxUint32,
Nout: math.MaxUint16,
}
selectedUTXO, err := db.selectFrom([]byte{prefixes.UTXO}, startKey, endKey)
if err != nil {
return nil, err
}
selectTxHashByTxNum := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
historyKey := in[0].Key.(*prefixes.UTXOKey)
out := make([]*IterOptions, 0, 100)
startKey := &prefixes.TxHashKey{
Prefix: []byte{prefixes.TxHash},
TxNum: historyKey.TxNum,
}
endKey := &prefixes.TxHashKey{
Prefix: []byte{prefixes.TxHash},
TxNum: historyKey.TxNum,
}
selectedTxHash, err := db.selectFrom([]byte{prefixes.TxHash}, startKey, endKey)
if err != nil {
return nil, err
}
out = append(out, selectedTxHash...)
return out, nil
}
results := make([]TXOInfo, 0, 1000)
for kvs := range innerJoin(db.DB, iterate(db.DB, selectedUTXO), selectTxHashByTxNum) {
if err := checkForError(kvs); err != nil {
return results, err
}
utxoKey := kvs[0].Key.(*prefixes.UTXOKey)
utxoValue := kvs[0].Value.(*prefixes.UTXOValue)
txhashValue := kvs[1].Value.(*prefixes.TxHashValue)
results = append(results,
TXOInfo{
TxHash: txhashValue.TxHash,
TxPos: utxoKey.Nout,
Height: stack.BisectRight(db.TxCounts, []uint32{utxoKey.TxNum})[0],
Value: utxoValue.Amount,
},
)
}
return results, nil
}
type TxInfo struct {
TxHash *chainhash.Hash
Height uint32
}
func (db *ReadOnlyDBColumnFamily) GetHistory(hashX []byte) ([]TxInfo, error) {
startKey := &prefixes.HashXHistoryKey{
Prefix: []byte{prefixes.HashXHistory},
HashX: hashX,
Height: 0,
}
endKey := &prefixes.HashXHistoryKey{
Prefix: []byte{prefixes.HashXHistory},
HashX: hashX,
Height: math.MaxUint32,
}
selectedHistory, err := db.selectFrom([]byte{prefixes.HashXHistory}, startKey, endKey)
if err != nil {
return nil, err
}
selectTxHashByTxNums := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
historyValue := in[0].Value.(*prefixes.HashXHistoryValue)
out := make([]*IterOptions, 0, 100)
for _, txnum := range historyValue.TxNums {
startKey := &prefixes.TxHashKey{
Prefix: []byte{prefixes.TxHash},
TxNum: txnum,
}
endKey := &prefixes.TxHashKey{
Prefix: []byte{prefixes.TxHash},
TxNum: txnum,
}
selectedTxHash, err := db.selectFrom([]byte{prefixes.TxHash}, startKey, endKey)
if err != nil {
return nil, err
}
out = append(out, selectedTxHash...)
}
return out, nil
}
results := make([]TxInfo, 0, 1000)
for kvs := range innerJoin(db.DB, iterate(db.DB, selectedHistory), selectTxHashByTxNums) {
if err := checkForError(kvs); err != nil {
return results, err
}
historyKey := kvs[0].Key.(*prefixes.HashXHistoryKey)
txHashValue := kvs[1].Value.(*prefixes.TxHashValue)
results = append(results, TxInfo{
TxHash: txHashValue.TxHash,
Height: historyKey.Height,
})
}
return results, nil
}
func (db *ReadOnlyDBColumnFamily) GetStatus(hashX []byte) ([]byte, error) {
// Lookup in HashXMempoolStatus first.
status, err := db.getMempoolStatus(hashX)
if err == nil && status != nil {
log.Debugf("(mempool) status(%#v) -> %#v", hashX, status)
return status, err
}
// No indexed mempool status. Lookup in HashXStatus second.
handle, err := db.EnsureHandle(prefixes.HashXStatus)
if err != nil {
return nil, err
}
key := &prefixes.HashXStatusKey{
Prefix: []byte{prefixes.HashXStatus},
HashX: hashX,
}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err == nil && slice.Size() > 0 {
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.HashXStatusValue{}
value.UnpackValue(rawValue)
log.Debugf("status(%#v) -> %#v", hashX, value.Status)
return value.Status, nil
}
// No indexed status. Fall back to enumerating HashXHistory.
txs, err := db.GetHistory(hashX)
if err != nil {
return nil, err
}
if len(txs) == 0 {
return []byte{}, err
}
hash := sha256.New()
for _, tx := range txs {
hash.Write([]byte(fmt.Sprintf("%s:%d:", tx.TxHash.String(), tx.Height)))
}
// TODO: Mempool history
return hash.Sum(nil), err
}
func (db *ReadOnlyDBColumnFamily) getMempoolStatus(hashX []byte) ([]byte, error) {
handle, err := db.EnsureHandle(prefixes.HashXMempoolStatus)
if err != nil {
return nil, err
}
key := &prefixes.HashXMempoolStatusKey{
Prefix: []byte{prefixes.HashXMempoolStatus},
HashX: hashX,
}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return nil, err
} else if slice.Size() == 0 {
return nil, nil
}
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.HashXMempoolStatusValue{}
value.UnpackValue(rawValue)
return value.Status, nil
}
// GetStreamsAndChannelRepostedByChannelHashes returns a map of streams and channel hashes that are reposted by the given channel hashes.
func (db *ReadOnlyDBColumnFamily) GetStreamsAndChannelRepostedByChannelHashes(reposterChannelHashes [][]byte) (map[string][]byte, map[string][]byte, error) {
handle, err := db.EnsureHandle(prefixes.ChannelToClaim)
@@ -94,8 +406,8 @@ func (db *ReadOnlyDBColumnFamily) GetStreamsAndChannelRepostedByChannelHashes(re
for _, reposterChannelHash := range reposterChannelHashes {
key := prefixes.NewChannelToClaimKeyWHash(reposterChannelHash)
rawKeyPrefix := prefixes.ChannelToClaimKeyPackPartial(key, 1)
options := NewIterateOptions().WithCfHandle(handle).WithPrefix(rawKeyPrefix)
rawKeyPrefix := key.PartialPack(1)
options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options = options.WithIncludeKey(false).WithIncludeValue(true)
ch := IterCF(db.DB, options)
// for stream := range Iterate(db.DB, prefixes.ChannelToClaim, []byte{reposterChannelHash}, false) {
@@ -167,9 +479,9 @@ func (db *ReadOnlyDBColumnFamily) GetShortClaimIdUrl(name string, normalizedName
partialClaimId := claimId[:j]
partialKey := prefixes.NewClaimShortIDKey(normalizedName, partialClaimId)
log.Printf("partialKey: %#v\n", partialKey)
keyPrefix := prefixes.ClaimShortIDKeyPackPartial(partialKey, 2)
keyPrefix := partialKey.PartialPack(2)
// Prefix and handle
options := NewIterateOptions().WithPrefix(prefix).WithCfHandle(handle)
options := NewIterateOptions().WithDB(db).WithPrefix(prefix).WithCfHandle(handle)
// Start and stop bounds
options = options.WithStart(keyPrefix).WithStop(keyPrefix)
// Don't include the key
@@ -212,28 +524,25 @@ func (db *ReadOnlyDBColumnFamily) GetRepost(claimHash []byte) ([]byte, error) {
}
func (db *ReadOnlyDBColumnFamily) GetRepostedCount(claimHash []byte) (int, error) {
handle, err := db.EnsureHandle(prefixes.RepostedClaim)
handle, err := db.EnsureHandle(prefixes.RepostedCount)
if err != nil {
return 0, err
}
key := prefixes.NewRepostedKey(claimHash)
keyPrefix := prefixes.RepostedKeyPackPartial(key, 1)
// Prefix and handle
options := NewIterateOptions().WithPrefix(keyPrefix).WithCfHandle(handle)
// Start and stop bounds
// options = options.WithStart(keyPrefix)
// Don't include the key
options = options.WithIncludeValue(false)
key := prefixes.RepostedCountKey{Prefix: []byte{prefixes.RepostedCount}, ClaimHash: claimHash}
rawKey := key.PackKey()
var i int = 0
ch := IterCF(db.DB, options)
for range ch {
i++
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return 0, err
} else if slice.Size() == 0 {
return 0, nil
}
return i, nil
value := prefixes.RepostedCountValue{}
value.UnpackValue(slice.Data())
return int(value.RepostedCount), nil
}
func (db *ReadOnlyDBColumnFamily) GetChannelForClaim(claimHash []byte, txNum uint32, position uint16) ([]byte, error) {
@@ -265,12 +574,12 @@ func (db *ReadOnlyDBColumnFamily) GetActiveAmount(claimHash []byte, txoType uint
}
startKey := prefixes.NewActiveAmountKey(claimHash, txoType, 0)
endKey := prefixes.NewActiveAmountKey(claimHash, txoType, height)
endKey := prefixes.NewActiveAmountKey(claimHash, txoType, height+1)
startKeyRaw := prefixes.ActiveAmountKeyPackPartial(startKey, 3)
endKeyRaw := prefixes.ActiveAmountKeyPackPartial(endKey, 3)
startKeyRaw := startKey.PartialPack(3)
endKeyRaw := endKey.PartialPack(3)
// Prefix and handle
options := NewIterateOptions().WithPrefix([]byte{prefixes.ActiveAmount}).WithCfHandle(handle)
options := NewIterateOptions().WithDB(db).WithPrefix([]byte{prefixes.ActiveAmount}).WithCfHandle(handle)
// Start and stop bounds
options = options.WithStart(startKeyRaw).WithStop(endKeyRaw)
// Don't include the key
@@ -286,21 +595,30 @@ }
}
func (db *ReadOnlyDBColumnFamily) GetEffectiveAmount(claimHash []byte, supportOnly bool) (uint64, error) {
supportAmount, err := db.GetActiveAmount(claimHash, prefixes.ActivatedSupportTXOType, db.Height+1)
handle, err := db.EnsureHandle(prefixes.EffectiveAmount)
if err != nil {
return 0, err
}
key := prefixes.EffectiveAmountKey{Prefix: []byte{prefixes.EffectiveAmount}, ClaimHash: claimHash}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return 0, err
} else if slice.Size() == 0 {
return 0, nil
}
value := prefixes.EffectiveAmountValue{}
value.UnpackValue(slice.Data())
var amount uint64
if supportOnly {
return supportAmount, nil
amount += value.ActivatedSupportSum
} else {
amount += value.ActivatedSum
}
activationAmount, err := db.GetActiveAmount(claimHash, prefixes.ActivateClaimTXOType, db.Height+1)
if err != nil {
return 0, err
}
return activationAmount + supportAmount, nil
return amount, nil
}
func (db *ReadOnlyDBColumnFamily) GetSupportAmount(claimHash []byte) (uint64, error) {
@@ -416,8 +734,8 @@ func (db *ReadOnlyDBColumnFamily) ControllingClaimIter() <-chan *prefixes.Prefix
key := prefixes.NewClaimTakeoverKey("")
var rawKeyPrefix []byte = nil
rawKeyPrefix = prefixes.ClaimTakeoverKeyPackPartial(key, 0)
options := NewIterateOptions().WithCfHandle(handle).WithPrefix(rawKeyPrefix)
rawKeyPrefix = key.PartialPack(0)
options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options = options.WithIncludeValue(true) //.WithIncludeStop(true)
ch := IterCF(db.DB, options)
return ch
@@ -474,6 +792,70 @@ func (db *ReadOnlyDBColumnFamily) FsGetClaimByHash(claimHash []byte) (*ResolveRe
)
}
func (db *ReadOnlyDBColumnFamily) GetTx(txhash *chainhash.Hash) ([]byte, *wire.MsgTx, error) {
// Lookup in MempoolTx first.
raw, tx, err := db.getMempoolTx(txhash)
if err == nil && raw != nil && tx != nil {
return raw, tx, err
}
handle, err := db.EnsureHandle(prefixes.Tx)
if err != nil {
return nil, nil, err
}
key := prefixes.TxKey{Prefix: []byte{prefixes.Tx}, TxHash: txhash}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return nil, nil, err
}
if slice.Size() == 0 {
return nil, nil, nil
}
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.TxValue{}
value.UnpackValue(rawValue)
var msgTx wire.MsgTx
err = msgTx.Deserialize(bytes.NewReader(value.RawTx))
if err != nil {
return nil, nil, err
}
return value.RawTx, &msgTx, nil
}
func (db *ReadOnlyDBColumnFamily) getMempoolTx(txhash *chainhash.Hash) ([]byte, *wire.MsgTx, error) {
handle, err := db.EnsureHandle(prefixes.MempoolTx)
if err != nil {
return nil, nil, err
}
key := prefixes.MempoolTxKey{Prefix: []byte{prefixes.Tx}, TxHash: txhash}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return nil, nil, err
}
if slice.Size() == 0 {
return nil, nil, nil
}
rawValue := make([]byte, len(slice.Data()))
copy(rawValue, slice.Data())
value := prefixes.MempoolTxValue{}
value.UnpackValue(rawValue)
var msgTx wire.MsgTx
err = msgTx.Deserialize(bytes.NewReader(value.RawTx))
if err != nil {
return nil, nil, err
}
return value.RawTx, &msgTx, nil
}
func (db *ReadOnlyDBColumnFamily) GetTxCount(height uint32) (*prefixes.TxCountValue, error) {
handle, err := db.EnsureHandle(prefixes.TxCount)
if err != nil {
@@ -497,6 +879,162 @@ func (db *ReadOnlyDBColumnFamily) GetTxCount(height uint32) (*prefixes.TxCountVa
return value, nil
}
func (db *ReadOnlyDBColumnFamily) GetTxHeight(txhash *chainhash.Hash) (uint32, error) {
handle, err := db.EnsureHandle(prefixes.TxNum)
if err != nil {
return 0, err
}
key := prefixes.TxNumKey{Prefix: []byte{prefixes.TxNum}, TxHash: txhash}
rawKey := key.PackKey()
slice, err := db.DB.GetCF(db.Opts, handle, rawKey)
defer slice.Free()
if err != nil {
return 0, err
}
if slice.Size() == 0 {
return 0, nil
}
// No slice copy needed. Value will be abandoned.
value := prefixes.TxNumValueUnpack(slice.Data())
height := stack.BisectRight(db.TxCounts, []uint32{value.TxNum})[0]
return height, nil
}
type TxMerkle struct {
TxHash *chainhash.Hash
RawTx []byte
Height int
Pos uint32
Merkle []*chainhash.Hash
}
// merklePath selects specific transactions by position within blockTxs.
// The resulting merkle path (aka merkle branch, or merkle) is a list of TX hashes
// which are in sibling relationship with TX nodes on the path to the root.
func merklePath(pos uint32, blockTxs, partial []*chainhash.Hash) []*chainhash.Hash {
parent := func(p uint32) uint32 {
return p >> 1
}
sibling := func(p uint32) uint32 {
if p%2 == 0 {
return p + 1
} else {
return p - 1
}
}
p := parent(pos)
if p == 0 {
// No parent, path is complete.
return partial
}
// Add sibling to partial path and proceed to parent TX.
return merklePath(p, blockTxs, append(partial, blockTxs[sibling(pos)]))
}
func (db *ReadOnlyDBColumnFamily) GetTxMerkle(tx_hashes []chainhash.Hash) ([]TxMerkle, error) {
selectedTxNum := make([]*IterOptions, 0, len(tx_hashes))
for _, txhash := range tx_hashes {
key := prefixes.TxNumKey{Prefix: []byte{prefixes.TxNum}, TxHash: &txhash}
log.Debugf("%v", key)
opt, err := db.selectFrom(key.Prefix, &key, &key)
if err != nil {
return nil, err
}
selectedTxNum = append(selectedTxNum, opt...)
}
selectTxByTxNum := func(in []*prefixes.PrefixRowKV) ([]*IterOptions, error) {
txNumKey := in[0].Key.(*prefixes.TxNumKey)
log.Debugf("%v", txNumKey.TxHash.String())
out := make([]*IterOptions, 0, 100)
startKey := &prefixes.TxKey{
Prefix: []byte{prefixes.Tx},
TxHash: txNumKey.TxHash,
}
endKey := &prefixes.TxKey{
Prefix: []byte{prefixes.Tx},
TxHash: txNumKey.TxHash,
}
selectedTx, err := db.selectFrom([]byte{prefixes.Tx}, startKey, endKey)
if err != nil {
return nil, err
}
out = append(out, selectedTx...)
return out, nil
}
blockTxsCache := make(map[uint32][]*chainhash.Hash)
results := make([]TxMerkle, 0, 500)
for kvs := range innerJoin(db.DB, iterate(db.DB, selectedTxNum), selectTxByTxNum) {
if err := checkForError(kvs); err != nil {
return results, err
}
txNumKey, txNumVal := kvs[0].Key.(*prefixes.TxNumKey), kvs[0].Value.(*prefixes.TxNumValue)
_, txVal := kvs[1].Key.(*prefixes.TxKey), kvs[1].Value.(*prefixes.TxValue)
txHeight := stack.BisectRight(db.TxCounts, []uint32{txNumVal.TxNum})[0]
txPos := txNumVal.TxNum - db.TxCounts.Get(txHeight-1)
// We need all the TX hashes in order to select out the relevant ones.
if _, ok := blockTxsCache[txHeight]; !ok {
txs, err := db.GetBlockTXs(txHeight)
if err != nil {
return results, err
}
blockTxsCache[txHeight] = txs
}
blockTxs := blockTxsCache[txHeight]
results = append(results, TxMerkle{
TxHash: txNumKey.TxHash,
RawTx: txVal.RawTx,
Height: int(txHeight),
Pos: txPos,
Merkle: merklePath(txPos, blockTxs, []*chainhash.Hash{}),
})
}
return results, nil
}
func (db *ReadOnlyDBColumnFamily) GetClaimByID(claimID string) ([]*ExpandedResolveResult, []*ExpandedResolveResult, error) {
rows := make([]*ExpandedResolveResult, 0)
extras := make([]*ExpandedResolveResult, 0)
claimHash, err := hex.DecodeString(claimID)
if err != nil {
return nil, nil, err
}
stream, err := db.FsGetClaimByHash(claimHash)
if err != nil {
return nil, nil, err
}
var res = NewExpandedResolveResult()
res.Stream = &optionalResolveResultOrError{res: stream}
rows = append(rows, res)
if stream != nil && stream.ChannelHash != nil {
channel, err := db.FsGetClaimByHash(stream.ChannelHash)
if err != nil {
return nil, nil, err
}
var res = NewExpandedResolveResult()
res.Channel = &optionalResolveResultOrError{res: channel}
extras = append(extras, res)
}
if stream != nil && stream.RepostedClaimHash != nil {
repost, err := db.FsGetClaimByHash(stream.RepostedClaimHash)
if err != nil {
return nil, nil, err
}
var res = NewExpandedResolveResult()
res.Repost = &optionalResolveResultOrError{res: repost}
extras = append(extras, res)
}
return rows, extras, nil
}
func (db *ReadOnlyDBColumnFamily) GetDBState() (*prefixes.DBStateValue, error) {
handle, err := db.EnsureHandle(prefixes.DBState)
if err != nil {
@@ -519,16 +1057,16 @@ func (db *ReadOnlyDBColumnFamily) GetDBState() (*prefixes.DBStateValue, error) {
return value, nil
}
func (db *ReadOnlyDBColumnFamily) EffectiveAmountNameIter(normalizedName string) <-chan *prefixes.PrefixRowKV {
handle, err := db.EnsureHandle(prefixes.EffectiveAmount)
func (db *ReadOnlyDBColumnFamily) BidOrderNameIter(normalizedName string) <-chan *prefixes.PrefixRowKV {
handle, err := db.EnsureHandle(prefixes.BidOrder)
if err != nil {
return nil
}
key := prefixes.NewEffectiveAmountKey(normalizedName)
key := prefixes.NewBidOrderKey(normalizedName)
var rawKeyPrefix []byte = nil
rawKeyPrefix = prefixes.EffectiveAmountKeyPackPartial(key, 1)
options := NewIterateOptions().WithCfHandle(handle).WithPrefix(rawKeyPrefix)
rawKeyPrefix = key.PartialPack(1)
options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options = options.WithIncludeValue(true) //.WithIncludeStop(true)
ch := IterCF(db.DB, options)
return ch
@@ -542,11 +1080,11 @@ func (db *ReadOnlyDBColumnFamily) ClaimShortIdIter(normalizedName string, claimI
key := prefixes.NewClaimShortIDKey(normalizedName, claimId)
var rawKeyPrefix []byte = nil
if claimId != "" {
rawKeyPrefix = prefixes.ClaimShortIDKeyPackPartial(key, 2)
rawKeyPrefix = key.PartialPack(2)
} else {
rawKeyPrefix = prefixes.ClaimShortIDKeyPackPartial(key, 1)
rawKeyPrefix = key.PartialPack(1)
}
options := NewIterateOptions().WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options = options.WithIncludeValue(true) //.WithIncludeStop(true)
ch := IterCF(db.DB, options)
return ch


@@ -11,6 +11,7 @@ import (
"strings"
"github.com/lbryio/herald.go/db/prefixes"
"github.com/lbryio/herald.go/db/stack"
"github.com/lbryio/herald.go/internal"
pb "github.com/lbryio/herald.go/protobuf/go"
lbryurl "github.com/lbryio/lbry.go/v3/url"
@@ -40,7 +41,8 @@ func PrepareResolveResult(
return nil, err
}
height, createdHeight := db.TxCounts.TxCountsBisectRight(txNum, rootTxNum)
heights := stack.BisectRight(db.TxCounts, []uint32{txNum, rootTxNum})
height, createdHeight := heights[0], heights[1]
lastTakeoverHeight := controllingClaim.Height
expirationHeight := GetExpirationHeight(height)
@@ -86,7 +88,7 @@
return nil, err
}
repostTxPostition = repostTxo.Position
repostHeight, _ = db.TxCounts.TxCountsBisectRight(repostTxo.TxNum, rootTxNum)
repostHeight = stack.BisectRight(db.TxCounts, []uint32{repostTxo.TxNum})[0]
}
}
@@ -122,7 +124,7 @@
return nil, err
}
channelTxPostition = channelVals.Position
channelHeight, _ = db.TxCounts.TxCountsBisectRight(channelVals.TxNum, rootTxNum)
channelHeight = stack.BisectRight(db.TxCounts, []uint32{channelVals.TxNum})[0]
}
}
@@ -173,8 +175,8 @@ func (db *ReadOnlyDBColumnFamily) ResolveParsedUrl(parsed *PathSegment) (*Resolv
for kv := range ch {
key := kv.Key.(*prefixes.ClaimTakeoverKey)
val := kv.Value.(*prefixes.ClaimTakeoverValue)
log.Warnf("ClaimTakeoverKey: %#v", key)
log.Warnf("ClaimTakeoverValue: %#v", val)
log.Tracef("ClaimTakeoverKey: %#v", key)
log.Tracef("ClaimTakeoverValue: %#v", val)
}
controlling, err := db.GetControllingClaim(normalizedName)
log.Warnf("controlling: %#v", controlling)
@@ -281,15 +283,15 @@
// Resolve by amount ordering
log.Warn("resolving by amount ordering")
ch := db.EffectiveAmountNameIter(normalizedName)
ch := db.BidOrderNameIter(normalizedName)
var i = 0
for kv := range ch {
if i+1 < amountOrder {
i++
continue
}
key := kv.Key.(*prefixes.EffectiveAmountKey)
claimVal := kv.Value.(*prefixes.EffectiveAmountValue)
key := kv.Key.(*prefixes.BidOrderKey)
claimVal := kv.Value.(*prefixes.BidOrderValue)
claimTxo, err := db.GetCachedClaimTxo(claimVal.ClaimHash, true)
if err != nil {
return nil, err
@@ -323,8 +325,8 @@ func (db *ReadOnlyDBColumnFamily) ResolveClaimInChannel(channelHash []byte, norm
}
key := prefixes.NewChannelToClaimKey(channelHash, normalizedName)
rawKeyPrefix := prefixes.ChannelToClaimKeyPackPartial(key, 2)
options := NewIterateOptions().WithCfHandle(handle).WithPrefix(rawKeyPrefix)
rawKeyPrefix := key.PartialPack(2)
options := NewIterateOptions().WithDB(db).WithCfHandle(handle).WithPrefix(rawKeyPrefix)
options = options.WithIncludeValue(true) //.WithIncludeStop(true)
ch := IterCF(db.DB, options)
// TODO: what's a good default size for this?


@@ -4,7 +4,6 @@ import (
"bytes"
"encoding/csv"
"encoding/hex"
"log"
"os"
"strings"
"testing"
@@ -12,7 +11,9 @@ import (
dbpkg "github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/db/prefixes"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/linxGnu/grocksdb"
log "github.com/sirupsen/logrus"
)
////////////////////////////////////////////////////////////////////////////////
@@ -20,7 +21,7 @@ import (
////////////////////////////////////////////////////////////////////////////////
// OpenAndFillTmpDBColumnFamlies opens a db and fills it with data from a csv file using the given column family names
func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFamily, [][]string, func(), error) {
func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFamily, [][]string, error) {
log.Println(filePath)
file, err := os.Open(filePath)
@@ -30,7 +31,7 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
reader := csv.NewReader(file)
records, err := reader.ReadAll()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
wOpts := grocksdb.NewDefaultWriteOptions()
@@ -38,7 +39,7 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
opts.SetCreateIfMissing(true)
db, err := grocksdb.OpenDb(opts, "tmp")
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
var handleMap map[string]*grocksdb.ColumnFamilyHandle = make(map[string]*grocksdb.ColumnFamilyHandle)
@@ -53,7 +54,7 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
log.Println(cfName)
handle, err := db.CreateColumnFamily(opts, cfName)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
handleMap[cfName] = handle
}
@@ -67,16 +68,16 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
for _, record := range records[1:] {
cf := record[0]
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
handle := handleMap[string(cf)]
key, err := hex.DecodeString(record[1])
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
val, err := hex.DecodeString(record[2])
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
db.PutCF(wOpts, handle, key, val)
}
@@ -93,6 +94,8 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
LastState: nil,
Height: 0,
Headers: nil,
Grp: stop.New(),
Cleanup: toDefer,
}
// err = dbpkg.ReadDBState(myDB) //TODO: Figure out right place for this
@@ -102,7 +105,7 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
err = myDB.InitTxCounts()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
// err = dbpkg.InitHeaders(myDB)
@@ -110,7 +113,7 @@ func OpenAndFillTmpDBColumnFamlies(filePath string) (*dbpkg.ReadOnlyDBColumnFami
// return nil, nil, nil, err
// }
return myDB, records, toDefer, nil
return myDB, records, nil
}
// OpenAndFillTmpDBCF opens a db and fills it with data from a csv file
@@ -241,17 +244,18 @@ func CatCSV(filePath string) {
func TestCatFullDB(t *testing.T) {
t.Skip("Skipping full db test")
grp := stop.New()
// url := "lbry://@lothrop#2/lothrop-livestream-games-and-code#c"
// "lbry://@lbry", "lbry://@lbry#3", "lbry://@lbry3f", "lbry://@lbry#3fda836a92faaceedfe398225fb9b2ee2ed1f01a", "lbry://@lbry:1", "lbry://@lbry$1"
// url := "lbry://@Styxhexenhammer666#2/legacy-media-baron-les-moonves-(cbs#9"
// url := "lbry://@lbry"
// url := "lbry://@lbry#3fda836a92faaceedfe398225fb9b2ee2ed1f01a"
dbPath := "/mnt/sda/wallet_server/_data/lbry-rocksdb/"
dbPath := "/mnt/sda1/wallet_server/_data/lbry-rocksdb/"
// dbPath := "/mnt/d/data/snapshot_1072108/lbry-rocksdb/"
secondaryPath := "asdf"
db, toDefer, err := dbpkg.GetProdDB(dbPath, secondaryPath)
db, err := dbpkg.GetProdDB(dbPath, secondaryPath, grp)
defer db.Shutdown()
defer toDefer()
if err != nil {
t.Error(err)
return
@@ -271,6 +275,7 @@
// TestOpenFullDB Tests running a resolve on a full db.
func TestOpenFullDB(t *testing.T) {
t.Skip("Skipping full db test")
grp := stop.New()
// url := "lbry://@lothrop#2/lothrop-livestream-games-and-code#c"
// "lbry://@lbry", "lbry://@lbry#3", "lbry://@lbry3f", "lbry://@lbry#3fda836a92faaceedfe398225fb9b2ee2ed1f01a", "lbry://@lbry:1", "lbry://@lbry$1"
// url := "lbry://@Styxhexenhammer666#2/legacy-media-baron-les-moonves-(cbs#9"
@@ -278,11 +283,11 @@
// url := "lbry://@lbry#3fda836a92faaceedfe398225fb9b2ee2ed1f01a"
// url := "lbry://@lbry$1"
url := "https://lbry.tv/@lothrop:2/lothrop-livestream-games-and-code:c"
dbPath := "/mnt/sda/wallet_server/_data/lbry-rocksdb/"
dbPath := "/mnt/sda1/wallet_server/_data/lbry-rocksdb/"
// dbPath := "/mnt/d/data/snapshot_1072108/lbry-rocksdb/"
secondaryPath := "asdf"
db, toDefer, err := dbpkg.GetProdDB(dbPath, secondaryPath)
defer toDefer()
db, err := dbpkg.GetProdDB(dbPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
@@ -296,12 +301,13 @@
func TestResolve(t *testing.T) {
url := "lbry://@Styxhexenhammer666#2/legacy-media-baron-les-moonves-(cbs#9"
filePath := "../testdata/FULL_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
defer toDefer()
expandedResolveResult := db.Resolve(url)
log.Printf("%#v\n", expandedResolveResult)
if expandedResolveResult != nil && expandedResolveResult.Channel != nil {
@@ -315,11 +321,11 @@
func TestGetDBState(t *testing.T) {
filePath := "../testdata/s_resolve.csv"
want := uint32(1072108)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
state, err := db.GetDBState()
if err != nil {
t.Error(err)
@@ -331,16 +337,50 @@
}
func TestGetRepostedClaim(t *testing.T) {
t.Skip("skipping obsolete? test of prefix W (Reposted)")
channelHash, _ := hex.DecodeString("2556ed1cab9d17f2a9392030a9ad7f5d138f11bd")
want := 5
// Should be non-existent
channelHash2, _ := hex.DecodeString("2556ed1cab9d17f2a9392030a9ad7f5d138f11bf")
filePath := "../testdata/W_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
count, err := db.GetRepostedCount(channelHash)
if err != nil {
t.Error(err)
}
log.Println(count)
if count != want {
t.Errorf("Expected %d, got %d", want, count)
}
count2, err := db.GetRepostedCount(channelHash2)
if err != nil {
t.Error(err)
}
if count2 != 0 {
t.Errorf("Expected 0, got %d", count2)
}
}
func TestGetRepostedCount(t *testing.T) {
channelHash, _ := hex.DecodeString("2556ed1cab9d17f2a9392030a9ad7f5d138f11bd")
want := 5
// Should be non-existent
channelHash2, _ := hex.DecodeString("2556ed1cab9d17f2a9392030a9ad7f5d138f11bf")
filePath := "../testdata/j_resolve.csv"
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
count, err := db.GetRepostedCount(channelHash)
if err != nil {
@@ -373,11 +413,11 @@ func TestGetRepost(t *testing.T) {
channelHash2, _ := hex.DecodeString("000009ca6e0caaaef16872b4bd4f6f1b8c2363e2")
filePath := "../testdata/V_resolve.csv"
// want := uint32(3670)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
res, err := db.GetRepost(channelHash)
if err != nil {
@@ -407,11 +447,12 @@ func TestGetClaimsInChannelCount(t *testing.T) {
channelHash, _ := hex.DecodeString("2556ed1cab9d17f2a9392030a9ad7f5d138f11bd")
filePath := "../testdata/Z_resolve.csv"
want := uint32(3670)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
count, err := db.GetClaimsInChannelCount(channelHash)
if err != nil {
t.Error(err)
@ -436,11 +477,12 @@ func TestGetShortClaimIdUrl(t *testing.T) {
var position uint16 = 0
filePath := "../testdata/F_resolve.csv"
log.Println(filePath)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
shortUrl, err := db.GetShortClaimIdUrl(name, normalName, claimHash, rootTxNum, position)
if err != nil {
t.Error(err)
@ -454,11 +496,11 @@ func TestClaimShortIdIter(t *testing.T) {
filePath := "../testdata/F_cat.csv"
normalName := "cat"
claimId := "0"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
ch := db.ClaimShortIdIter(normalName, claimId)
@ -484,11 +526,12 @@ func TestGetTXOToClaim(t *testing.T) {
var txNum uint32 = 1456296
var position uint16 = 0
filePath := "../testdata/G_2.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
val, err := db.GetCachedClaimHash(txNum, position)
if err != nil {
t.Error(err)
@ -512,11 +555,11 @@ func TestGetClaimToChannel(t *testing.T) {
var val []byte = nil
filePath := "../testdata/I_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
val, err = db.GetChannelForClaim(claimHash, txNum, position)
if err != nil {
@ -536,26 +579,69 @@ func TestGetClaimToChannel(t *testing.T) {
}
}
func TestGetEffectiveAmount(t *testing.T) {
filePath := "../testdata/S_resolve.csv"
want := uint64(586370959900)
claimHashStr := "2556ed1cab9d17f2a9392030a9ad7f5d138f11bd"
func TestGetEffectiveAmountSupportOnly(t *testing.T) {
filePath := "../testdata/Si_resolve.csv"
want := uint64(20000006)
claimHashStr := "00000324e40fcb63a0b517a3660645e9bd99244a"
claimHash, _ := hex.DecodeString(claimHashStr)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
db.Height = 1116054
db.Height = 999999999
amount, err := db.GetEffectiveAmount(claimHash, true)
if err != nil {
t.Error(err)
}
if amount != want {
t.Errorf("Expected %d, got %d", want, amount)
}
// Cross-check against iterator-based implementation.
iteratorAmount, err := db.GetActiveAmount(claimHash, prefixes.ActivatedSupportTXOType, db.Height)
if err != nil {
t.Error(err)
}
if iteratorAmount != want {
t.Errorf("Expected %d, got %d", want, iteratorAmount)
}
}
func TestGetEffectiveAmount(t *testing.T) {
filePath := "../testdata/Si_resolve.csv"
want := uint64(21000006)
claimHashStr := "00000324e40fcb63a0b517a3660645e9bd99244a"
claimHash, _ := hex.DecodeString(claimHashStr)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
db.Height = 999999999
amount, err := db.GetEffectiveAmount(claimHash, false)
if err != nil {
t.Error(err)
}
if amount != want {
t.Errorf("Expected %d, got %d", want, amount)
}
// Cross-check against iterator-based implementation.
iteratorAmount1, err := db.GetActiveAmount(claimHash, prefixes.ActivatedSupportTXOType, db.Height)
if err != nil {
t.Error(err)
}
iteratorAmount2, err := db.GetActiveAmount(claimHash, prefixes.ActivateClaimTXOType, db.Height)
if err != nil {
t.Error(err)
}
if iteratorAmount1+iteratorAmount2 != want {
t.Errorf("Expected %d, got %d (%d + %d)", want, iteratorAmount1+iteratorAmount2, iteratorAmount1, iteratorAmount2)
}
}
func TestGetSupportAmount(t *testing.T) {
@ -566,11 +652,12 @@ func TestGetSupportAmount(t *testing.T) {
t.Error(err)
}
filePath := "../testdata/a_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
res, err := db.GetSupportAmount(claimHash)
if err != nil {
t.Error(err)
@ -586,11 +673,12 @@ func TestGetTxHash(t *testing.T) {
want := "54e14ff0c404c29b3d39ae4d249435f167d5cd4ce5a428ecb745b3df1c8e3dde"
filePath := "../testdata/X_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
resHash, err := db.GetTxHash(txNum)
if err != nil {
t.Error(err)
@ -628,11 +716,12 @@ func TestGetActivation(t *testing.T) {
txNum := uint32(0x6284e3)
position := uint16(0x0)
want := uint32(0xa6b65)
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
}
defer toDefer()
activation, err := db.GetActivation(txNum, position)
if err != nil {
t.Error(err)
@ -659,12 +748,13 @@ func TestGetClaimToTXO(t *testing.T) {
return
}
filePath := "../testdata/E_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
defer toDefer()
res, err := db.GetCachedClaimTxo(claimHash, true)
if err != nil {
t.Error(err)
@ -688,12 +778,13 @@ func TestGetControllingClaim(t *testing.T) {
claimName := internal.NormalizeName("@Styxhexenhammer666")
claimHash := "2556ed1cab9d17f2a9392030a9ad7f5d138f11bd"
filePath := "../testdata/P_resolve.csv"
db, _, toDefer, err := OpenAndFillTmpDBColumnFamlies(filePath)
db, _, err := OpenAndFillTmpDBColumnFamlies(filePath)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
defer toDefer()
res, err := db.GetControllingClaim(claimName)
if err != nil {
t.Error(err)
@ -729,9 +820,9 @@ func TestIter(t *testing.T) {
// log.Println(kv.Key)
gotKey := kv.Key.(*prefixes.RepostedKey).PackKey()
keyPartial3 := prefixes.RepostedKeyPackPartial(kv.Key.(*prefixes.RepostedKey), 3)
keyPartial2 := prefixes.RepostedKeyPackPartial(kv.Key.(*prefixes.RepostedKey), 2)
keyPartial1 := prefixes.RepostedKeyPackPartial(kv.Key.(*prefixes.RepostedKey), 1)
keyPartial3 := kv.Key.(*prefixes.RepostedKey).PartialPack(3)
keyPartial2 := kv.Key.(*prefixes.RepostedKey).PartialPack(2)
keyPartial1 := kv.Key.(*prefixes.RepostedKey).PartialPack(1)
// Check pack partial for sanity
if !bytes.HasPrefix(gotKey, keyPartial3) {


@ -6,6 +6,7 @@ import (
"bytes"
"github.com/lbryio/herald.go/db/prefixes"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/linxGnu/grocksdb"
log "github.com/sirupsen/logrus"
@ -22,8 +23,11 @@ type IterOptions struct {
IncludeValue bool
RawKey bool
RawValue bool
CfHandle *grocksdb.ColumnFamilyHandle
It *grocksdb.Iterator
Grp *stop.Group
// DB *ReadOnlyDBColumnFamily
CfHandle *grocksdb.ColumnFamilyHandle
It *grocksdb.Iterator
Serializer *prefixes.SerializationAPI
}
// NewIterateOptions creates a default options structure for a db iterator.
@ -39,8 +43,11 @@ func NewIterateOptions() *IterOptions {
IncludeValue: false,
RawKey: false,
RawValue: false,
CfHandle: nil,
It: nil,
Grp: nil,
// DB: nil,
CfHandle: nil,
It: nil,
Serializer: prefixes.ProductionAPI,
}
}
@ -99,6 +106,18 @@ func (o *IterOptions) WithRawValue(rawValue bool) *IterOptions {
return o
}
func (o *IterOptions) WithDB(db *ReadOnlyDBColumnFamily) *IterOptions {
// o.Grp.AddNamed(1, iterKey)
o.Grp = stop.New(db.Grp)
o.Grp.Add(1)
return o
}
func (o *IterOptions) WithSerializer(serializer *prefixes.SerializationAPI) *IterOptions {
o.Serializer = serializer
return o
}
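// Typical construction chains these builder methods. A usage sketch (the
// prefix, serializer, and column-family handle are illustrative values;
// the same pattern appears in the prefix iterator tests):
//
//	opts := NewIterateOptions().
//		WithPrefix([]byte{prefixes.EffectiveAmount}).
//		WithSerializer(prefixes.ProductionAPI).
//		WithIncludeValue(true).
//		WithCfHandle(handle)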
// ReadRow reads a row from the db, returns nil when no more rows are available.
func (opts *IterOptions) ReadRow(prevKey *[]byte) *prefixes.PrefixRowKV {
it := opts.It
@ -117,8 +136,10 @@ func (opts *IterOptions) ReadRow(prevKey *[]byte) *prefixes.PrefixRowKV {
valueData := value.Data()
valueLen := len(valueData)
var outKey interface{} = nil
var outValue interface{} = nil
var outKey prefixes.BaseKey = nil
var outValue prefixes.BaseValue = nil
var rawOutKey []byte = nil
var rawOutValue []byte = nil
var err error = nil
log.Trace("keyData:", keyData)
@ -136,12 +157,12 @@ func (opts *IterOptions) ReadRow(prevKey *[]byte) *prefixes.PrefixRowKV {
newKeyData := make([]byte, keyLen)
copy(newKeyData, keyData)
if opts.IncludeKey && !opts.RawKey {
outKey, err = prefixes.UnpackGenericKey(newKeyData)
outKey, err = opts.Serializer.UnpackKey(newKeyData)
if err != nil {
log.Error(err)
}
} else if opts.IncludeKey {
outKey = newKeyData
rawOutKey = newKeyData
}
// Value could be quite large, so this setting could be important
@ -150,18 +171,20 @@ func (opts *IterOptions) ReadRow(prevKey *[]byte) *prefixes.PrefixRowKV {
newValueData := make([]byte, valueLen)
copy(newValueData, valueData)
if !opts.RawValue {
outValue, err = prefixes.UnpackGenericValue(newKeyData, newValueData)
outValue, err = opts.Serializer.UnpackValue(newKeyData, newValueData)
if err != nil {
log.Error(err)
}
} else {
outValue = newValueData
rawOutValue = newValueData
}
}
kv := &prefixes.PrefixRowKV{
Key: outKey,
Value: outValue,
Key: outKey,
Value: outValue,
RawKey: rawOutKey,
RawValue: rawOutValue,
}
*prevKey = newKeyData

db/prefixes/generic.go Normal file

@ -0,0 +1,230 @@
package prefixes
import (
"encoding/binary"
"fmt"
"reflect"
"strings"
"github.com/go-restruct/restruct"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/lbcd/chaincfg/chainhash"
)
func init() {
restruct.EnableExprBeta()
}
// Type OnesComplementEffectiveAmount (uint64) has to be encoded specially
// to get the desired sort ordering.
// Implement the Sizer, Packer, Unpacker interface to handle it manually.
func (amt *OnesComplementEffectiveAmount) SizeOf() int {
return 8
}
func (amt *OnesComplementEffectiveAmount) Pack(buf []byte, order binary.ByteOrder) ([]byte, error) {
binary.BigEndian.PutUint64(buf, OnesCompTwiddle64-uint64(*amt))
return buf[8:], nil
}
func (amt *OnesComplementEffectiveAmount) Unpack(buf []byte, order binary.ByteOrder) ([]byte, error) {
*amt = OnesComplementEffectiveAmount(OnesCompTwiddle64 - binary.BigEndian.Uint64(buf))
return buf[8:], nil
}
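// Worked example (illustrative; assumes OnesCompTwiddle64 is
// math.MaxUint64, as the name suggests): the amounts 100 and 200 pack to
// the big-endian keys 0xffffffffffffff9b and 0xffffffffffffff37, so the
// larger amount yields the lexicographically smaller key and is visited
// first by an ascending iterator; Unpack applies the same subtraction to
// recover the original amount.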
// Struct BlockTxsValue has a field TxHashes of type []*chainhash.Hash.
// I haven't been able to figure out the right annotations to make
// restruct.Pack,Unpack work automagically.
// Implement the Sizer, Packer, Unpacker interface to handle it manually.
func (kv *BlockTxsValue) SizeOf() int {
return 32 * len(kv.TxHashes)
}
func (kv *BlockTxsValue) Pack(buf []byte, order binary.ByteOrder) ([]byte, error) {
offset := 0
for _, h := range kv.TxHashes {
offset += copy(buf[offset:], h[:])
}
return buf[offset:], nil
}
func (kv *BlockTxsValue) Unpack(buf []byte, order binary.ByteOrder) ([]byte, error) {
offset := 0
kv.TxHashes = make([]*chainhash.Hash, len(buf)/32)
for i := range kv.TxHashes {
kv.TxHashes[i] = (*chainhash.Hash)(buf[offset : offset+32])
offset += 32
}
return buf[offset:], nil
}
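// Illustrative note: the value is a plain concatenation of 32-byte tx
// hashes, so len(buf)/32 recovers the hash count and Pack and Unpack are
// exact inverses of each other.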
// Struct BigEndianChainHash is a chainhash.Hash stored in external
// byte-order (opposite of other 32 byte chainhash.Hash values). In order
// to reuse chainhash.Hash we need to correct the byte-order.
// Currently this type is used for field Genesis of DBStateValue.
func (kv *BigEndianChainHash) SizeOf() int {
return chainhash.HashSize
}
func (kv *BigEndianChainHash) Pack(buf []byte, order binary.ByteOrder) ([]byte, error) {
offset := 0
hash := kv.CloneBytes()
// HACK: Instances of chainhash.Hash use the internal byte-order.
// Python scribe writes bytes of genesis hash in external byte-order.
internal.ReverseBytesInPlace(hash)
offset += copy(buf[offset:chainhash.HashSize], hash[:])
return buf[offset:], nil
}
func (kv *BigEndianChainHash) Unpack(buf []byte, order binary.ByteOrder) ([]byte, error) {
offset := 0
offset += copy(kv.Hash[:], buf[offset:32])
// HACK: Instances of chainhash.Hash use the internal byte-order.
// Python scribe writes bytes of genesis hash in external byte-order.
internal.ReverseBytesInPlace(kv.Hash[:])
return buf[offset:], nil
}
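// Illustrative note: Pack and Unpack each reverse the 32 bytes, so a hash
// written in external byte-order on disk round-trips through the internal
// chainhash.Hash representation unchanged.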
func genericNew(prefix []byte, key bool) (interface{}, error) {
t, ok := prefixRegistry[prefix[0]]
if !ok {
panic(fmt.Sprintf("not handled: prefix=%v", prefix))
}
if key {
return t.newKey(), nil
}
return t.newValue(), nil
}
func GenericPack(kv interface{}, fields int) ([]byte, error) {
// Locate the byte offset of the first excluded field.
offset := 0
if fields > 0 {
v := reflect.ValueOf(kv)
t := v.Type()
// Handle indirection to reach kind=Struct.
switch t.Kind() {
case reflect.Interface, reflect.Pointer:
v = v.Elem()
t = v.Type()
default:
panic(fmt.Sprintf("not handled: %v", t.Kind()))
}
count := 0
for _, sf := range reflect.VisibleFields(t) {
if !sf.IsExported() {
continue
}
if sf.Anonymous && strings.HasPrefix(sf.Name, "LengthEncoded") {
fields += 1 // Skip it but process NameLen and Name instead.
continue
}
if count > fields {
break
}
sz, err := restruct.SizeOf(v.FieldByIndex(sf.Index).Interface())
if err != nil {
panic(fmt.Sprintf("not handled: %v: %v", sf.Name, sf.Type.Kind()))
}
offset += sz
count += 1
}
}
// Pack the struct. No ability to partially pack.
buf, err := restruct.Pack(binary.BigEndian, kv)
if err != nil {
panic(fmt.Sprintf("not handled: %v", err))
}
// Return a prefix if some fields were excluded.
if fields > 0 {
return buf[:offset], nil
}
return buf, nil
}
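// Worked example (hypothetical key layout, for illustration only): for a
// key struct with fields Prefix (1 byte), A (uint32), and B (uint16),
// GenericPack(&k, 1) packs all three fields with restruct and truncates
// the buffer after Prefix and A (offset 1+4 = 5 bytes), matching what a
// hand-written PartialPack(1) returns for the same layout.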
func GenericUnpack(pfx []byte, key bool, buf []byte) (interface{}, error) {
kv, _ := genericNew(pfx, key)
err := restruct.Unpack(buf, binary.BigEndian, kv)
if err != nil {
panic(fmt.Sprintf("not handled: %v", err))
}
return kv, nil
}
func GetSerializationAPI(prefix []byte) *SerializationAPI {
t, ok := prefixRegistry[prefix[0]]
if !ok {
panic(fmt.Sprintf("not handled: prefix=%v", prefix))
}
if t.API != nil {
return t.API
}
return ProductionAPI
}
type SerializationAPI struct {
PackKey func(key BaseKey) ([]byte, error)
PackPartialKey func(key BaseKey, fields int) ([]byte, error)
PackValue func(value BaseValue) ([]byte, error)
UnpackKey func(key []byte) (BaseKey, error)
UnpackValue func(prefix []byte, value []byte) (BaseValue, error)
}
var ProductionAPI = &SerializationAPI{
PackKey: PackGenericKey,
PackPartialKey: PackPartialGenericKey,
PackValue: PackGenericValue,
UnpackKey: UnpackGenericKey,
UnpackValue: UnpackGenericValue,
}
var RegressionAPI_1 = &SerializationAPI{
PackKey: func(key BaseKey) ([]byte, error) {
return GenericPack(key, -1)
},
PackPartialKey: func(key BaseKey, fields int) ([]byte, error) {
return GenericPack(key, fields)
},
PackValue: func(value BaseValue) ([]byte, error) {
return GenericPack(value, -1)
},
UnpackKey: UnpackGenericKey,
UnpackValue: UnpackGenericValue,
}
var RegressionAPI_2 = &SerializationAPI{
PackKey: PackGenericKey,
PackPartialKey: PackPartialGenericKey,
PackValue: PackGenericValue,
UnpackKey: func(key []byte) (BaseKey, error) {
k, err := GenericUnpack(key, true, key)
return k.(BaseKey), err
},
UnpackValue: func(prefix []byte, value []byte) (BaseValue, error) {
k, err := GenericUnpack(prefix, false, value)
return k.(BaseValue), err
},
}
var RegressionAPI_3 = &SerializationAPI{
PackKey: func(key BaseKey) ([]byte, error) {
return GenericPack(key, -1)
},
PackPartialKey: func(key BaseKey, fields int) ([]byte, error) {
return GenericPack(key, fields)
},
PackValue: func(value BaseValue) ([]byte, error) {
return GenericPack(value, -1)
},
UnpackKey: func(key []byte) (BaseKey, error) {
k, err := GenericUnpack(key, true, key)
return k.(BaseKey), err
},
UnpackValue: func(prefix []byte, value []byte) (BaseValue, error) {
k, err := GenericUnpack(prefix, false, value)
return k.(BaseValue), err
},
}
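Taken together, these APIs let a caller round-trip keys through either the hand-written or the restruct-based backend and compare the results, which is what the regression tests do. A minimal sketch, assuming the TouchedHashXKey type (exercised by the fuzz tests further down) satisfies BaseKey:

package main

import (
	"fmt"

	"github.com/lbryio/herald.go/db/prefixes"
)

func main() {
	// Example key; the field values are arbitrary.
	key := &prefixes.TouchedHashXKey{
		Prefix: []byte{prefixes.TouchedHashX},
		Height: 100,
	}
	// Both backends should produce identical bytes for the same key.
	for _, api := range []*prefixes.SerializationAPI{prefixes.ProductionAPI, prefixes.RegressionAPI_1} {
		packed, err := api.PackKey(key)
		if err != nil {
			panic(err)
		}
		unpacked, err := api.UnpackKey(packed)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%x -> %+v\n", packed, unpacked)
	}
}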

File diff suppressed because it is too large


@ -2,18 +2,30 @@ package prefixes_test
import (
"bytes"
"crypto/rand"
"encoding/csv"
"encoding/hex"
"fmt"
"log"
"math"
"math/big"
"os"
"sort"
"testing"
dbpkg "github.com/lbryio/herald.go/db"
prefixes "github.com/lbryio/herald.go/db/prefixes"
"github.com/linxGnu/grocksdb"
log "github.com/sirupsen/logrus"
)
func TestPrefixRegistry(t *testing.T) {
for _, prefix := range prefixes.GetPrefixes() {
if prefixes.GetSerializationAPI(prefix) == nil {
t.Errorf("prefix %c not registered", prefix)
}
}
}
func testInit(filePath string) (*grocksdb.DB, [][]string, func(), *grocksdb.ColumnFamilyHandle) {
log.Println(filePath)
file, err := os.Open(filePath)
@ -28,12 +40,25 @@ func testInit(filePath string) (*grocksdb.DB, [][]string, func(), *grocksdb.Colu
columnFamily := records[0][0]
records = records[1:]
cleanupFiles := func() {
err = os.RemoveAll("./tmp")
if err != nil {
log.Println(err)
}
}
// wOpts := grocksdb.NewDefaultWriteOptions()
opts := grocksdb.NewDefaultOptions()
opts.SetCreateIfMissing(true)
db, err := grocksdb.OpenDb(opts, "tmp")
if err != nil {
log.Println(err)
// Garbage might have been left behind by a prior crash.
cleanupFiles()
db, err = grocksdb.OpenDb(opts, "tmp")
if err != nil {
log.Println(err)
}
}
handle, err := db.CreateColumnFamily(opts, columnFamily)
if err != nil {
@ -41,16 +66,30 @@ func testInit(filePath string) (*grocksdb.DB, [][]string, func(), *grocksdb.Colu
}
toDefer := func() {
db.Close()
err = os.RemoveAll("./tmp")
if err != nil {
log.Println(err)
}
cleanupFiles()
}
return db, records, toDefer, handle
}
func testGeneric(filePath string, prefix byte, numPartials int) func(*testing.T) {
return func(t *testing.T) {
APIs := []*prefixes.SerializationAPI{
prefixes.GetSerializationAPI([]byte{prefix}),
// Verify combinations of production vs. "restruct" implementations of
// serialization API (e.g. production Pack() with "restruct" Unpack()).
prefixes.RegressionAPI_1,
prefixes.RegressionAPI_2,
prefixes.RegressionAPI_3,
}
for _, api := range APIs {
opts := dbpkg.NewIterateOptions().WithPrefix([]byte{prefix}).WithSerializer(api).WithIncludeValue(true)
testGenericOptions(opts, filePath, prefix, numPartials)(t)
}
}
}
func testGenericOptions(options *dbpkg.IterOptions, filePath string, prefix byte, numPartials int) func(*testing.T) {
return func(t *testing.T) {
wOpts := grocksdb.NewDefaultWriteOptions()
@ -69,26 +108,34 @@ func testGeneric(filePath string, prefix byte, numPartials int) func(*testing.T)
db.PutCF(wOpts, handle, key, val)
}
// test prefix
options := dbpkg.NewIterateOptions().WithPrefix([]byte{prefix}).WithIncludeValue(true)
options = options.WithCfHandle(handle)
ch := dbpkg.IterCF(db, options)
var i = 0
for kv := range ch {
// log.Println(kv.Key)
gotKey, err := prefixes.PackGenericKey(prefix, kv.Key)
gotKey, err := options.Serializer.PackKey(kv.Key)
if err != nil {
log.Println(err)
}
if numPartials != kv.Key.NumFields() {
t.Errorf("key reports %v fields but %v expected", kv.Key.NumFields(), numPartials)
}
for j := 1; j <= numPartials; j++ {
keyPartial, _ := prefixes.PackPartialGenericKey(prefix, kv.Key, j)
keyPartial, _ := options.Serializer.PackPartialKey(kv.Key, j)
// Check pack partial for sanity
if !bytes.HasPrefix(gotKey, keyPartial) {
t.Errorf("%+v should be prefix of %+v\n", keyPartial, gotKey)
if j < numPartials {
if !bytes.HasPrefix(gotKey, keyPartial) || (len(keyPartial) >= len(gotKey)) {
t.Errorf("%+v should be prefix of %+v\n", keyPartial, gotKey)
}
} else {
if !bytes.Equal(gotKey, keyPartial) {
t.Errorf("%+v should be equal to %+v\n", keyPartial, gotKey)
}
}
}
got, err := prefixes.PackGenericValue(prefix, kv.Value)
got, err := options.Serializer.PackValue(kv.Value)
if err != nil {
log.Println(err)
}
@ -101,7 +148,7 @@ func testGeneric(filePath string, prefix byte, numPartials int) func(*testing.T)
log.Println(err)
}
if !bytes.Equal(gotKey, wantKey) {
t.Errorf("gotKey: %+v, wantKey: %+v\n", got, want)
t.Errorf("gotKey: %+v, wantKey: %+v\n", gotKey, wantKey)
}
if !bytes.Equal(got, want) {
t.Errorf("got: %+v, want: %+v\n", got, want)
@ -123,12 +170,12 @@ func testGeneric(filePath string, prefix byte, numPartials int) func(*testing.T)
if err != nil {
log.Println(err)
}
options2 := dbpkg.NewIterateOptions().WithStart(start).WithStop(stop).WithIncludeValue(true)
options2 := dbpkg.NewIterateOptions().WithSerializer(options.Serializer).WithStart(start).WithStop(stop).WithIncludeValue(true)
options2 = options2.WithCfHandle(handle)
ch2 := dbpkg.IterCF(db, options2)
i = 0
for kv := range ch2 {
got, err := prefixes.PackGenericValue(prefix, kv.Value)
got, err := options2.Serializer.PackValue(kv.Value)
if err != nil {
log.Println(err)
}
@ -216,7 +263,7 @@ func TestTXOToClaim(t *testing.T) {
func TestClaimShortID(t *testing.T) {
filePath := fmt.Sprintf("../../testdata/%c.csv", prefixes.ClaimShortIdPrefix)
testGeneric(filePath, prefixes.ClaimShortIdPrefix, 3)(t)
testGeneric(filePath, prefixes.ClaimShortIdPrefix, 4)(t)
}
func TestClaimToChannel(t *testing.T) {
@ -264,9 +311,9 @@ func TestActiveAmount(t *testing.T) {
testGeneric(filePath, prefixes.ActiveAmount, 5)(t)
}
func TestEffectiveAmount(t *testing.T) {
filePath := fmt.Sprintf("../../testdata/%c.csv", prefixes.EffectiveAmount)
testGeneric(filePath, prefixes.EffectiveAmount, 4)(t)
func TestBidOrder(t *testing.T) {
filePath := fmt.Sprintf("../../testdata/%c.csv", prefixes.BidOrder)
testGeneric(filePath, prefixes.BidOrder, 4)(t)
}
func TestRepost(t *testing.T) {
@ -279,6 +326,14 @@ func TestRepostedClaim(t *testing.T) {
testGeneric(filePath, prefixes.RepostedClaim, 3)(t)
}
func TestRepostedCount(t *testing.T) {
prefix := byte(prefixes.RepostedCount)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{20}, []int{4}, [][3]int{})
key := &prefixes.RepostedCountKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestClaimDiff(t *testing.T) {
filePath := fmt.Sprintf("../../testdata/%c.csv", prefixes.ClaimDiff)
testGeneric(filePath, prefixes.ClaimDiff, 1)(t)
@ -286,7 +341,7 @@ func TestClaimDiff(t *testing.T) {
func TestUTXO(t *testing.T) {
filePath := fmt.Sprintf("../../testdata/%c.csv", prefixes.UTXO)
testGeneric(filePath, prefixes.UTXO, 1)(t)
testGeneric(filePath, prefixes.UTXO, 3)(t)
}
func TestHashXUTXO(t *testing.T) {
@ -330,3 +385,183 @@ func TestUTXOKey_String(t *testing.T) {
})
}
}
func TestTrendingNotifications(t *testing.T) {
prefix := byte(prefixes.TrendingNotifications)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{4, 20}, []int{8, 8}, [][3]int{})
key := &prefixes.TrendingNotificationKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestMempoolTx(t *testing.T) {
prefix := byte(prefixes.MempoolTx)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{32}, []int{}, [][3]int{{20, 100, 1}})
key := &prefixes.MempoolTxKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestTouchedHashX(t *testing.T) {
prefix := byte(prefixes.TouchedHashX)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{4}, []int{}, [][3]int{{1, 5, 11}})
key := &prefixes.TouchedHashXKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestHashXStatus(t *testing.T) {
prefix := byte(prefixes.HashXStatus)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{20}, []int{32}, [][3]int{})
key := &prefixes.HashXStatusKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestHashXMempoolStatus(t *testing.T) {
prefix := byte(prefixes.HashXMempoolStatus)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{20}, []int{32}, [][3]int{})
key := &prefixes.HashXMempoolStatusKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func TestEffectiveAmount(t *testing.T) {
prefix := byte(prefixes.EffectiveAmount)
filePath := fmt.Sprintf("../../testdata/%c.csv", prefix)
//synthesizeTestData([]byte{prefix}, filePath, []int{20}, []int{8, 8}, [][3]int{})
key := &prefixes.EffectiveAmountKey{}
testGeneric(filePath, prefix, key.NumFields())(t)
}
func synthesizeTestData(prefix []byte, filePath string, keyFixed, valFixed []int, valVariable [][3]int) {
file, err := os.OpenFile(filePath, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0644)
if err != nil {
panic(err)
}
defer file.Close()
records := make([][2][]byte, 0, 20)
for r := 0; r < 20; r++ {
key := make([]byte, 0, 1000)
key = append(key, prefix...)
val := make([]byte, 0, 1000)
// Handle fixed columns of key.
for _, width := range keyFixed {
v := make([]byte, width)
rand.Read(v)
key = append(key, v...)
}
// Handle fixed columns of value.
for _, width := range valFixed {
v := make([]byte, width)
rand.Read(v)
val = append(val, v...)
}
// Handle variable length array in value. Each element is "chunk" size.
for _, w := range valVariable {
low, high, chunk := w[0], w[1], w[2]
n, _ := rand.Int(rand.Reader, big.NewInt(int64(high-low)))
v := make([]byte, chunk*(low+int(n.Int64())))
rand.Read(v)
val = append(val, v...)
}
records = append(records, [2][]byte{key, val})
}
sort.Slice(records, func(i, j int) bool { return bytes.Compare(records[i][0], records[j][0]) == -1 })
wr := csv.NewWriter(file)
wr.Write([]string{string(prefix), ""}) // column headers
for _, rec := range records {
encoded := []string{hex.EncodeToString(rec[0]), hex.EncodeToString(rec[1])}
err := wr.Write(encoded)
if err != nil {
panic(err)
}
}
wr.Flush()
}
// Fuzz tests for various Key and Value types (EXPERIMENTAL)
func FuzzTouchedHashXKey(f *testing.F) {
kvs := []prefixes.TouchedHashXKey{
{
Prefix: []byte{prefixes.TouchedHashX},
Height: 0,
},
{
Prefix: []byte{prefixes.TouchedHashX},
Height: 1,
},
{
Prefix: []byte{prefixes.TouchedHashX},
Height: math.MaxUint32,
},
}
for _, kv := range kvs {
seed := make([]byte, 0, 200)
seed = append(seed, kv.PackKey()...)
f.Add(seed)
}
f.Fuzz(func(t *testing.T, in []byte) {
t.Logf("testing: %+v", in)
out := make([]byte, 0, 200)
var kv prefixes.TouchedHashXKey
kv.UnpackKey(in)
out = append(out, kv.PackKey()...)
if len(in) >= 5 {
if !bytes.HasPrefix(in, out) {
t.Fatalf("%v: not equal after round trip: %v", in, out)
}
}
})
}
func FuzzTouchedHashXValue(f *testing.F) {
kvs := []prefixes.TouchedHashXValue{
{
TouchedHashXs: [][]byte{},
},
{
TouchedHashXs: [][]byte{
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
},
},
{
TouchedHashXs: [][]byte{
{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
},
},
{
TouchedHashXs: [][]byte{
{0xff, 0xff, 2, 3, 4, 5, 6, 7, 8, 9, 10},
{0, 1, 0xff, 0xff, 4, 5, 6, 7, 8, 9, 10},
{0, 1, 2, 3, 0xff, 0xff, 6, 7, 8, 9, 10},
},
},
}
for _, kv := range kvs {
seed := make([]byte, 0, 200)
seed = append(seed, kv.PackValue()...)
f.Add(seed)
}
f.Fuzz(func(t *testing.T, in []byte) {
t.Logf("testing: %+v", in)
out := make([]byte, 0, 200)
var kv prefixes.TouchedHashXValue
kv.UnpackValue(in)
out = append(out, kv.PackValue()...)
if len(in) >= 5 {
if !bytes.HasPrefix(in, out) {
t.Fatalf("%v: not equal after round trip: %v", in, out)
}
}
})
}


@ -7,23 +7,24 @@ import (
"sync"
"github.com/lbryio/herald.go/internal"
"golang.org/x/exp/constraints"
)
type SliceBacked struct {
slice []interface{}
type SliceBacked[T any] struct {
slice []T
len uint32
mut sync.RWMutex
}
func NewSliceBacked(size int) *SliceBacked {
return &SliceBacked{
slice: make([]interface{}, size),
func NewSliceBacked[T any](size int) *SliceBacked[T] {
return &SliceBacked[T]{
slice: make([]T, size),
len: 0,
mut: sync.RWMutex{},
}
}
func (s *SliceBacked) Push(v interface{}) {
func (s *SliceBacked[T]) Push(v T) {
s.mut.Lock()
defer s.mut.Unlock()
@ -35,64 +36,67 @@ func (s *SliceBacked) Push(v interface{}) {
s.len++
}
func (s *SliceBacked) Pop() interface{} {
func (s *SliceBacked[T]) Pop() T {
s.mut.Lock()
defer s.mut.Unlock()
if s.len == 0 {
return nil
var null T
return null
}
s.len--
return s.slice[s.len]
}
func (s *SliceBacked) Get(i uint32) interface{} {
func (s *SliceBacked[T]) Get(i uint32) T {
s.mut.RLock()
defer s.mut.RUnlock()
if i >= s.len {
return nil
var null T
return null
}
return s.slice[i]
}
func (s *SliceBacked) GetTip() interface{} {
func (s *SliceBacked[T]) GetTip() T {
s.mut.RLock()
defer s.mut.RUnlock()
if s.len == 0 {
return nil
var null T
return null
}
return s.slice[s.len-1]
}
func (s *SliceBacked) Len() uint32 {
func (s *SliceBacked[T]) Len() uint32 {
s.mut.RLock()
defer s.mut.RUnlock()
return s.len
}
func (s *SliceBacked) Cap() int {
func (s *SliceBacked[T]) Cap() int {
s.mut.RLock()
defer s.mut.RUnlock()
return cap(s.slice)
}
func (s *SliceBacked) GetSlice() []interface{} {
func (s *SliceBacked[T]) GetSlice() []T {
// This is not thread safe so I won't bother with locking
return s.slice
}
// This function is dangerous because it assumes underlying types
func (s *SliceBacked) TxCountsBisectRight(txNum, rootTxNum uint32) (uint32, uint32) {
func BisectRight[T constraints.Ordered](s *SliceBacked[T], searchKeys []T) []uint32 {
s.mut.RLock()
defer s.mut.RUnlock()
txCounts := s.slice[:s.Len()]
height := internal.BisectRight(txCounts, txNum)
createdHeight := internal.BisectRight(txCounts, rootTxNum)
found := make([]uint32, len(searchKeys))
for i, k := range searchKeys {
found[i] = internal.BisectRight(s.slice[:s.Len()], k)
}
return height, createdHeight
return found
}
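With SliceBacked now generic, the old TxCountsBisectRight special case reduces to a call of the generic BisectRight. A minimal sketch of the tx-counts use (the import path and values are assumptions for illustration):

package main

import (
	"fmt"

	"github.com/lbryio/herald.go/internal/stack" // import path assumed
)

func main() {
	// TxCounts stores the cumulative transaction count at each height,
	// so BisectRight maps tx numbers to the heights containing them.
	txCounts := stack.NewSliceBacked[uint32](4)
	for _, c := range []uint32{10, 25, 40, 60} { // example values
		txCounts.Push(c)
	}
	heights := stack.BisectRight(txCounts, []uint32{5, 25, 59})
	fmt.Println(heights) // [0 1 3]
}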


@ -10,7 +10,7 @@ import (
func TestPush(t *testing.T) {
var want uint32 = 3
stack := stack.NewSliceBacked(10)
stack := stack.NewSliceBacked[int](10)
stack.Push(0)
stack.Push(1)
@ -22,7 +22,7 @@ func TestPush(t *testing.T) {
}
func TestPushPop(t *testing.T) {
stack := stack.NewSliceBacked(10)
stack := stack.NewSliceBacked[int](10)
for i := 0; i < 5; i++ {
stack.Push(i)
@ -46,20 +46,20 @@ func TestPushPop(t *testing.T) {
}
}
func doPushes(stack *stack.SliceBacked, numPushes int) {
func doPushes(stack *stack.SliceBacked[int], numPushes int) {
for i := 0; i < numPushes; i++ {
stack.Push(i)
}
}
func doPops(stack *stack.SliceBacked, numPops int) {
func doPops(stack *stack.SliceBacked[int], numPops int) {
for i := 0; i < numPops; i++ {
stack.Pop()
}
}
func TestMultiThreaded(t *testing.T) {
stack := stack.NewSliceBacked(100000)
stack := stack.NewSliceBacked[int](100000)
go doPushes(stack, 100000)
go doPushes(stack, 100000)
@ -83,7 +83,7 @@ func TestMultiThreaded(t *testing.T) {
}
func TestGet(t *testing.T) {
stack := stack.NewSliceBacked(10)
stack := stack.NewSliceBacked[int](10)
for i := 0; i < 5; i++ {
stack.Push(i)
@ -99,6 +99,10 @@ func TestGet(t *testing.T) {
}
}
if got := stack.Get(5); got != 0 {
t.Errorf("got %v, want %v", got, 0)
}
slice := stack.GetSlice()
if len(slice) != 10 {
@ -107,7 +111,7 @@ func TestGet(t *testing.T) {
}
func TestLenCap(t *testing.T) {
stack := stack.NewSliceBacked(10)
stack := stack.NewSliceBacked[int](10)
if got := stack.Len(); got != 0 {
t.Errorf("got %v, want %v", got, 0)


@ -1,4 +1,4 @@
FROM jeffreypicard/hub-github-env
FROM jeffreypicard/hub-github-env:dev
COPY scripts/build_and_test.sh /build_and_test.sh
# COPY . /hub


@ -0,0 +1,13 @@
FROM jeffreypicard/hub-github-env:dev
COPY scripts/integration_tests.sh /integration_tests.sh
COPY scripts/cicd_integration_test_runner.sh /cicd_integration_test_runner.sh
COPY herald /herald
RUN apt install -y jq curl
ENV CGO_LDFLAGS "-L/usr/local/lib -lrocksdb -lstdc++ -lm -lz -lsnappy -llz4 -lzstd"
ENV CGO_CFLAGS "-I/usr/local/include/rocksdb"
ENV LD_LIBRARY_PATH /usr/local/lib
ENTRYPOINT ["/cicd_integration_test_runner.sh"]


@ -1,4 +1,4 @@
FROM golang:1.17.8-bullseye
FROM golang:1.18.5-bullseye
RUN apt-get update -y && \
apt-get upgrade -y && \

go.mod

@ -7,27 +7,33 @@ go 1.18
require (
github.com/ReneKroon/ttlcache/v2 v2.8.1
github.com/akamensky/argparse v1.2.2
github.com/go-restruct/restruct v1.2.0-alpha
github.com/gorilla/mux v1.7.3
github.com/gorilla/rpc v1.2.0
github.com/lbryio/lbcutil v1.0.202
github.com/lbryio/lbry.go/v3 v3.0.1-beta
github.com/linxGnu/grocksdb v1.6.42
github.com/olivere/elastic/v7 v7.0.24
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_model v0.2.0
github.com/sirupsen/logrus v1.8.1
golang.org/x/exp v0.0.0-20220907003533-145caa8ea1d0
golang.org/x/text v0.3.7
google.golang.org/grpc v1.46.0
google.golang.org/protobuf v1.27.1
gopkg.in/karalabe/cookiejar.v1 v1.0.0-20141109175019-e1490cae028c
)
require golang.org/x/crypto v0.0.0-20211209193657-4570a0811e8b // indirect
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f // indirect
github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd // indirect
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/lbryio/lbcd v0.22.201-beta-rc1
github.com/lbryio/lbcd v0.22.201-beta-rc4
github.com/mailru/easyjson v0.7.7 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
@ -35,9 +41,10 @@ require (
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/stretchr/testify v1.7.0 // indirect
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 // indirect
golang.org/x/crypto v0.0.0-20211209193657-4570a0811e8b // indirect
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20211123173158-ef496fb156ab // indirect
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f // indirect
google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)

go.sum

@ -63,15 +63,18 @@ github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJm
github.com/btcsuite/btcd v0.0.0-20190213025234-306aecffea32/go.mod h1:DrZx5ec/dmnfpw9KyYoQyYo7d0KEvTkk/5M/vbZjAr8=
github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ=
github.com/btcsuite/btcd v0.22.0-beta/go.mod h1:9n5ntfhhHQBIhUvlhDvD3Qg6fRUj4jkN0VB8L8svzOA=
github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f h1:bAs4lUbRJpnnkd9VhRV3jjAVU7DJVjMaK+IsvSeZvFo=
github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f/go.mod h1:TdznJufoqS23FtqVCzL0ZqgP5MqXbb4fg/WgDys70nA=
github.com/btcsuite/btcutil v0.0.0-20190207003914-4c204d697803/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg=
github.com/btcsuite/btcutil v0.0.0-20190425235716-9e5f4b9a998d/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg=
github.com/btcsuite/btcutil v1.0.3-0.20201208143702-a53e38424cce/go.mod h1:0DVlHczLPewLcPGEIeUEzfOJhqGPQ0mJJRDBtD307+o=
github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd h1:R/opQEbFEy9JGkIguV40SvRY1uliPX8ifOvi6ICsFCw=
github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd/go.mod h1:HHNXQzUsZCxOoE+CPiyCTO6x34Zs86zZUiwtpXoGdtg=
github.com/btcsuite/goleveldb v0.0.0-20160330041536-7834afc9e8cd/go.mod h1:F+uVaaLLH7j4eDXPRvw78tMflu7Ie2bzYOH4Y8rRKBY=
github.com/btcsuite/goleveldb v1.0.0/go.mod h1:QiK9vBlgftBg6rWQIj6wFzbPfRjiykIEhBH4obrXJ/I=
github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc=
github.com/btcsuite/snappy-go v1.0.0/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc=
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792 h1:R8vQdOQdZ9Y3SkEwmHoWBmX1DNXhXZqlTpq6s4tyJGc=
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY=
github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
@ -152,6 +155,7 @@ github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5Kwzbycv
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/felixge/fgprof v0.9.1/go.mod h1:7/HK6JFtFaARhIljgP2IV8rJLIoHDoOYoUphsnGvqxE=
github.com/flosch/pongo2 v0.0.0-20190707114632-bbf5a6c351f4/go.mod h1:T9YF2M40nIgbVgp3rreNmTged+9HrbNTIQf1PsaIiTA=
github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
@ -179,6 +183,8 @@ github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTg
github.com/go-martini/martini v0.0.0-20170121215854-22fa46961aab/go.mod h1:/P9AEU963A2AYjv4d1V5eVL1CQbEJq6aCNHDDjibzu8=
github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ozzo/ozzo-validation v3.6.0+incompatible/go.mod h1:gsEKFIVnabGBt6mXmxK0MoFy+cZoTJY6mu5Ll3LVLBU=
github.com/go-restruct/restruct v1.2.0-alpha h1:2Lp474S/9660+SJjpVxoKuWX09JsXHSrdV7Nv3/gkvc=
github.com/go-restruct/restruct v1.2.0-alpha/go.mod h1:KqrpKpn4M8OLznErihXTGLlsXFGeLxHUrLRRI/1YjGk=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
@ -233,6 +239,7 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
@ -248,7 +255,9 @@ github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORR
github.com/gopherjs/gopherjs v0.0.0-20190915194858-d3ddacdb130f/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.3 h1:gnP5JzjVOuiZD07fKKToCAOjS0yOpj/qPETTXCCS6hw=
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/rpc v1.2.0 h1:WvvdC2lNeT1SP32zrIce5l0ECBfbAlmrmSBsuc57wfk=
github.com/gorilla/rpc v1.2.0/go.mod h1:V4h9r+4sF5HnzqbwIez0fKSpANP0zlYd3qR7p36jkTQ=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.2.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
@ -340,19 +349,23 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxv
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvfxNnFqi74g=
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/lbryio/lbcd v0.22.100-beta/go.mod h1:u8SaFX4xdGMMR5xasBGfgApC8pvD4rnK2OujZnrq5gs=
github.com/lbryio/lbcd v0.22.100-beta-rc5/go.mod h1:9PbFSlHYX7WlnDQwcTxHVf1W35VAnRsattCSyKOO55g=
github.com/lbryio/lbcd v0.22.200-beta/go.mod h1:kNuzGWf808ipTGB0y0WogzsGv5BVM4Qv85Z+JYwC9FA=
github.com/lbryio/lbcd v0.22.201-beta-rc1 h1:FmzzApVj2RBXloLM2w9tLvN2xyTZjeyh+QC7GIw/wwo=
github.com/lbryio/lbcd v0.22.201-beta-rc1/go.mod h1:kNuzGWf808ipTGB0y0WogzsGv5BVM4Qv85Z+JYwC9FA=
github.com/lbryio/lbcd v0.22.201-beta-rc4 h1:Xh751Bh/GWRcP5bI6NJ2+zueo2otTcTWapFvFbryP5c=
github.com/lbryio/lbcd v0.22.201-beta-rc4/go.mod h1:Jgo48JDINhdOgHHR83J70Q6G42x3WAo9DI//QogcL+E=
github.com/lbryio/lbcutil v1.0.201/go.mod h1:gDHc/b+Rdz3J7+VB8e5/Bl9roVf8Q5/8FQCyuK9dXD0=
github.com/lbryio/lbcutil v1.0.202-rc3/go.mod h1:LGPtVBBzh4cFXfLFb8ginlFcbA2QwumLNFd0yk/as2o=
github.com/lbryio/lbcutil v1.0.202 h1:L0aRMs2bdCUAicD8Xe4NmUEvevDDea3qkIpCSACnftI=
github.com/lbryio/lbcutil v1.0.202/go.mod h1:LGPtVBBzh4cFXfLFb8ginlFcbA2QwumLNFd0yk/as2o=
github.com/lbryio/lbry.go/v2 v2.7.1/go.mod h1:sUhhSKqPNkiwgBqvBzJIqfLLzGH8hkDGrrO/HcaXzFc=
github.com/lbryio/lbry.go/v3 v3.0.1-beta h1:oIpQ5czhtdVSoWZCiOHE9SrqnNsahyCnMhXvXsd2IiM=
github.com/lbryio/lbry.go/v3 v3.0.1-beta/go.mod h1:v03OVXSBGNZNDfGoAVyjQV/ZOzBGQyTnWs3jpkssxGM=
@ -490,6 +503,7 @@ github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6So
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.21.0/go.mod h1:ZPhntP/xmq1nnND05hhpAh2QMhSsA4UN3MGZ6O2J3hM=
@ -595,6 +609,7 @@ go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
@ -630,6 +645,8 @@ golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm0
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20200513190911-00229845015e/go.mod h1:4M0jN8W1tt0AVLNr8HDosyJCDCDuyL9N9+3m7wDWgKw=
golang.org/x/exp v0.0.0-20211123021643-48cbe7f80d7c/go.mod h1:b9TAUYHmRtqA6klRHApnXMnj+OyLce4yF5cZCUbk2ps=
golang.org/x/exp v0.0.0-20220907003533-145caa8ea1d0 h1:17k44ji3KFYG94XS5QEFC8pyuOlMh3IoR+vkmTZmJJs=
golang.org/x/exp v0.0.0-20220907003533-145caa8ea1d0/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@ -640,6 +657,7 @@ golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
@ -753,8 +771,9 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210909193231-528a39cd75f3/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211123173158-ef496fb156ab h1:rfJ1bsoJQQIAoAxTxB7bme+vHrNkRw8CqfsYh9w54cw=
golang.org/x/sys v0.0.0-20211123173158-ef496fb156ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f h1:v4INt8xihDGvnrfjMDVXGxw9wrfxYyCjk0KbXjhR55s=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -805,6 +824,7 @@ golang.org/x/tools v0.0.0-20210112230658-8b4aab62c064/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.8-0.20211029000441-d6a9af8af023/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -873,6 +893,7 @@ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=


@ -1,10 +1,14 @@
package internal
import "sort"
import (
"sort"
"golang.org/x/exp/constraints"
)
// BisectRight returns the index of the first element in the list that is greater than or equal to the value.
// https://stackoverflow.com/questions/29959506/is-there-a-go-analog-of-pythons-bisect-module
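// For example, BisectRight([]uint32{10, 25, 40}, 25) returns 1, the first
// index whose element is >= 25; a value greater than every element returns
// len(arr).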
func BisectRight(arr []interface{}, val uint32) uint32 {
i := sort.Search(len(arr), func(i int) bool { return arr[i].(uint32) >= val })
func BisectRight[T constraints.Ordered](arr []T, val T) uint32 {
i := sort.Search(len(arr), func(i int) bool { return arr[i] >= val })
return uint32(i)
}


@ -4,6 +4,7 @@ package internal
// HeightHash struct for the height subscription endpoint.
type HeightHash struct {
Height uint64
BlockHash []byte
Height uint64
BlockHash []byte
BlockHeader []byte
}

main.go

@ -10,8 +10,10 @@ import (
"github.com/lbryio/herald.go/internal"
pb "github.com/lbryio/herald.go/protobuf/go"
"github.com/lbryio/herald.go/server"
"github.com/lbryio/lbry.go/v3/extras/stop"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
func main() {
@ -27,31 +29,22 @@ func main() {
if args.CmdType == server.ServeCmd {
// This will cancel goroutines when the server finishes.
ctxWCancel, cancel := context.WithCancel(ctx)
defer cancel()
stopGroup := stop.New()
initsignals()
interrupt := interruptListener()
s := server.MakeHubServer(ctxWCancel, args)
s := server.MakeHubServer(stopGroup, args)
go s.Run()
defer func() {
log.Println("Shutting down server...")
s.EsClient.Stop()
s.GrpcServer.GracefulStop()
s.DB.Shutdown()
log.Println("Returning from main...")
}()
defer s.Stop()
<-interrupt
return
}
conn, err := grpc.Dial("localhost:"+args.Port,
grpc.WithInsecure(),
conn, err := grpc.Dial("localhost:"+fmt.Sprintf("%d", args.Port),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {


@ -10,7 +10,7 @@ This project will eventually subsume and replace the
[herald](https://github.com/lbryio/hub/blob/master/docs/docker_examples/hub-compose.yml#L38)
and the [lighthouse](https://github.com/lbryio/lighthouse) search provider.
![](./diagram.png)
![](https://raw.githubusercontent.com/lbryio/hub/master/docs/diagram.png)
## Installation
@ -72,6 +72,8 @@ tar xfzv rocksdb-6.29.5.tar.gz
cd rocksdb-6.29.5
make static_lib
sudo make install
export CGO_CFLAGS="-I/usr/local/lib"
export CGO_LDFLAGS="-L/usr/local/lib -lrocksdb -lstdc++ -lm -lz -lsnappy -llz4 -lzstd -lbz2"
```
```
@ -80,6 +82,19 @@ https://github.com/protocolbuffers/protobuf/releases/download/v3.17.1/protobuf-a
If you can run `./protobuf/build.sh` without errors, you have `go` and `protoc` installed correctly.
On Linux you probably need to increase the open file limits:
```
ulimit -n 1000000
sysctl -w fs.file-max=1000000
```
and in `/etc/security/limits.conf` or `/etc/sysctl.conf` set:
```
fs.file-max = 1000000
```
Finally, run the block processor as described under Usage.
### Running from Source

requirements.txt Normal file

@ -0,0 +1,16 @@
certifi==2022.6.15
cffi==1.15.1
charset-normalizer==2.1.0
cryptography==37.0.4
github3.py==3.2.0
grpcio==1.47.0
grpcio-tools==1.47.0
idna==3.3
protobuf==3.20.1
pycparser==2.21
PyJWT==2.4.0
python-dateutil==2.8.2
requests==2.28.1
six==1.16.0
uritemplate==4.1.1
urllib3==1.26.11


@ -0,0 +1,14 @@
#!/bin/bash
#
# cicd_integration_test_runner.sh
#
# simple script to kick off herald and call the integration testing
# script
#
# N.B. this currently just works locally until we figure out a way to have
# the data in the cicd environment.
#
./herald serve --db-path /mnt/sdb1/wallet_server/_data/lbry-rocksdb &
./integration_tests.sh

scripts/integration_tests.sh Executable file

@ -0,0 +1,235 @@
#!/bin/bash
#
# integration_tests.sh
#
# GitHub Action CI/CD based integration tests for herald.go
# These are smoke / sanity tests for the server behaving correctly on a "live"
# system, and look for reasonable response codes, not specific correct
# behavior. Those are covered in unit tests.
#
# N.B.
# For the curl based json tests the `id` field existing is needed.
#
# global variables
RES=(0)
FINALRES=0
# functions
function logical_or {
for res in ${RES[@]}; do
if [ $res -eq 1 -o $FINALRES -eq 1 ]; then
FINALRES=1
return
fi
done
}
function want_got {
if [ "${WANT}" != "${GOT}" ]; then
echo "WANT: ${WANT}"
echo "GOT: ${GOT}"
RES+=(1)
else
RES+=(0)
fi
}
function want_greater {
if [ ${WANT} -ge ${GOT} ]; then
echo "WANT: ${WANT}"
echo "GOT: ${GOT}"
RES+=(1)
else
RES+=(0)
fi
}
function test_command_with_want {
echo $CMD
GOT=`eval $CMD`
want_got
}
# grpc endpoint testing
read -r -d '' CMD <<- EOM
grpcurl -plaintext -d '{"value": ["@Styxhexenhammer666:2"]}' 127.0.0.1:50051 pb.Hub.Resolve
| jq .txos[0].txHash | sed 's/"//g'
EOM
WANT="VOFP8MQEwps9Oa5NJJQ18WfVzUzlpCjst0Wz3xyOPd4="
test_command_with_want
# GOT=`eval $CMD`
#want_got
##
## N.B. This is a degenerate case that takes a long time to run.
## The runtime should be fixed, but in the meantime, we definitely should
## ensure this behaves as expected.
##
## TODO: Test runtime doesn't exceed worst case.
##
#WANT=806389
#read -r -d '' CMD <<- EOM
# grpcurl -plaintext -d '{"value": ["foo"]}' 127.0.0.1:50051 pb.Hub.Resolve | jq .txos[0].height
#EOM
# test_command_with_want
# json rpc endpoint testing
## blockchain.block
### blockchain.block.get_chunk
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.block.get_chunk", "params": [0]}'
| jq .result | sed 's/"//g' | head -c 100
EOM
WANT="010000000000000000000000000000000000000000000000000000000000000000000000cc59e59ff97ac092b55e423aa549"
test_command_with_want
### blockchain.block.get_header
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.block.get_header", "params": []}'
| jq .result.timestamp
EOM
WANT=1446058291
test_command_with_want
### blockchain.block.headers
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.block.headers", "params": []}'
| jq .result.count
EOM
WANT=0
test_command_with_want
## blockchain.claimtrie
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.claimtrie.resolve", "params":[{"Data": ["@Styxhexenhammer666:2"]}]}'
| jq .result.txos[0].tx_hash | sed 's/"//g'
EOM
WANT="VOFP8MQEwps9Oa5NJJQ18WfVzUzlpCjst0Wz3xyOPd4="
test_command_with_want
## blockchain.address
### blockchain.address.get_balance
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.address.get_balance", "params":[{"Address": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq .result.confirmed
EOM
WANT=44415602186
test_command_with_want
## blockchain.address.get_history
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.address.get_history", "params":[{"Address": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq '.result.confirmed | length'
EOM
WANT=82
test_command_with_want
## blockchain.address.listunspent
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.address.listunspent", "params":[{"Address": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq '.result | length'
EOM
WANT=32
test_command_with_want
# blockchain.scripthash
## blockchain.scripthash.get_mempool
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.scripthash.get_mempool", "params":[{"scripthash": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq .error | sed 's/"//g'
EOM
WANT="encoding/hex: invalid byte: U+0047 'G'"
test_command_with_want
## blockchain.scripthash.get_history
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.scripthash.get_history", "params":[{"scripthash": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq .error | sed 's/"//g'
EOM
WANT="encoding/hex: invalid byte: U+0047 'G'"
test_command_with_want
## blockchain.scripthash.listunspent
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "blockchain.scripthash.listunspent", "params":[{"scripthash": "bGqWuXRVm5bBqLvLPEQQpvsNxJ5ubc6bwN"}]}'
| jq .error | sed 's/"//g'
EOM
WANT="encoding/hex: invalid byte: U+0047 'G'"
test_command_with_want
## server.banner
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "server.banner", "params":[]}'
| jq .result | sed 's/"//g'
EOM
WANT="You are connected to an 0.107.0 server."
test_command_with_want
## server.version
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "server.version", "params":[]}'
| jq .result | sed 's/"//g'
EOM
WANT="0.107.0"
test_command_with_want
## server.features
read -r -d '' CMD <<- EOM
curl http://127.0.0.1:50002/rpc -s -H "Content-Type: application/json"
--data '{"id": 1, "method": "server.features", "params":[]}'
EOM
WANT='{"result":{"hosts":{},"pruning":"","server_version":"0.107.0","protocol_min":"0.54.0","protocol_max":"0.199.0","genesis_hash":"9c89283ba0f3227f6c03b70216b9f665f0118d5e0fa729cedf4fb34d6a34f463","description":"Herald","payment_address":"","donation_address":"","daily_fee":"1.0","hash_function":"sha256","trending_algorithm":"fast_ar"},"error":null,"id":1}'
test_command_with_want
# metrics endpoint testing
WANT=0
GOT=$(curl http://127.0.0.1:2112/metrics -s | grep requests | grep resolve | awk '{print $NF}')
want_greater
# calculate return value
logical_or $RES
if [ $FINALRES -eq 1 ]; then
echo "Failed!"
exit 1
else
echo "Passed!"
exit 0
fi
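
For orientation: every request above is plain JSON RPC over HTTP against herald's default JSON RPC HTTP port (50002). A minimal Go sketch of the same probe the script makes with curl for server.version (the address and expected version are taken from the script above):

package main

import (
"bytes"
"encoding/json"
"fmt"
"net/http"
)

func main() {
// Build the same positional-params request body the script sends.
body, _ := json.Marshal(map[string]interface{}{
"id": 1, "method": "server.version", "params": []interface{}{},
})
resp, err := http.Post("http://127.0.0.1:50002/rpc", "application/json", bytes.NewReader(body))
if err != nil {
panic(err)
}
defer resp.Body.Close()
var out struct {
Result string `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
panic(err)
}
fmt.Println(out.Result) // "0.107.0" against the server under test
}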

View file

@ -152,7 +152,7 @@ def get_draft_prerelease_vars(args) -> (bool, bool):
def release(args):
gh = get_github()
repo = gh.repository('lbryio', 'hub')
repo = gh.repository('lbryio', 'herald.go')
try:
version_file = repo.file_contents('version.txt')
current_version = Version.from_content(version_file)

View file

@ -1,12 +1,16 @@
package server
import (
"fmt"
"log"
"net/url"
"os"
"strconv"
"strings"
"github.com/akamensky/argparse"
pb "github.com/lbryio/herald.go/protobuf/go"
"github.com/lbryio/lbcd/chaincfg"
)
const (
@ -17,21 +21,39 @@ const (
// Args struct contains the arguments to the hub server.
type Args struct {
CmdType int
Host string
Port string
DBPath string
EsHost string
EsPort string
PrometheusPort string
NotifierPort string
EsIndex string
RefreshDelta int
CacheTTL int
PeerFile string
Country string
BlockingChannelIds []string
FilteringChannelIds []string
CmdType int
Host string
Port int
DBPath string
Chain *string
DaemonURL *url.URL
DaemonCAPath string
EsHost string
EsPort int
PrometheusPort int
NotifierPort int
JSONRPCPort int
JSONRPCHTTPPort int
MaxSessions int
SessionTimeout int
EsIndex string
RefreshDelta int
CacheTTL int
PeerFile string
Banner *string
Country string
BlockingChannelIds []string
FilteringChannelIds []string
GenesisHash string
ServerVersion string
ProtocolMin string
ProtocolMax string
ServerDescription string
PaymentAddress string
DonationAddress string
DailyFee string
Debug bool
DisableEs bool
DisableLoadPeers bool
@ -43,21 +65,36 @@ type Args struct {
DisableResolve bool
DisableBlockingAndFiltering bool
DisableStartNotifier bool
DisableStartJSONRPC bool
}
const (
DefaultHost = "0.0.0.0"
DefaultPort = "50051"
DefaultDBPath = "/mnt/d/data/snapshot_1072108/lbry-rocksdb/" // FIXME
DefaultEsHost = "http://localhost"
DefaultEsIndex = "claims"
DefaultEsPort = "9200"
DefaultPrometheusPort = "2112"
DefaultNotifierPort = "18080"
DefaultRefreshDelta = 5
DefaultCacheTTL = 5
DefaultPeerFile = "peers.txt"
DefaultCountry = "US"
DefaultHost = "0.0.0.0"
DefaultPort = 50051
DefaultDBPath = "/mnt/d/data/snapshot_1072108/lbry-rocksdb/" // FIXME
DefaultEsHost = "http://localhost"
DefaultEsIndex = "claims"
DefaultEsPort = 9200
DefaultPrometheusPort = 2112
DefaultNotifierPort = 18080
DefaultJSONRPCPort = 50001
DefaultJSONRPCHTTPPort = 50002
DefaultMaxSessions = 10000
DefaultSessionTimeout = 300
DefaultRefreshDelta = 5
DefaultCacheTTL = 5
DefaultPeerFile = "peers.txt"
DefaultBannerFile = ""
DefaultCountry = "US"
HUB_PROTOCOL_VERSION = "0.107.0"
PROTOCOL_MIN = "0.54.0"
PROTOCOL_MAX = "0.199.0"
DefaultServerDescription = "Herald"
DefaultPaymentAddress = ""
DefaultDonationAddress = ""
DefaultDailyFee = "1.0"
DefaultDisableLoadPeers = false
DefaultDisableStartPrometheus = false
DefaultDisableStartUDP = false
@ -67,6 +104,7 @@ const (
DefaultDisableResolve = false
DefaultDisableBlockingAndFiltering = false
DisableStartNotifier = false
DisableStartJSONRPC = false
)
var (
@ -74,6 +112,73 @@ var (
DefaultFilteringChannelIds = []string{}
)
func loadBanner(bannerFile *string, serverVersion string) *string {
var banner string
data, err := os.ReadFile(*bannerFile)
if err != nil {
banner = fmt.Sprintf("You are connected to an %s server.", serverVersion)
} else {
banner = string(data)
}
/*
banner := os.Getenv("BANNER")
if banner == "" {
return nil
}
*/
return &banner
}
// MakeDefaultArgs creates a default set of arguments for testing the server.
func MakeDefaultTestArgs() *Args {
args := &Args{
CmdType: ServeCmd,
Host: DefaultHost,
Port: DefaultPort,
DBPath: DefaultDBPath,
EsHost: DefaultEsHost,
EsPort: DefaultEsPort,
PrometheusPort: DefaultPrometheusPort,
NotifierPort: DefaultNotifierPort,
JSONRPCPort: DefaultJSONRPCPort,
JSONRPCHTTPPort: DefaultJSONRPCHTTPPort,
MaxSessions: DefaultMaxSessions,
SessionTimeout: DefaultSessionTimeout,
EsIndex: DefaultEsIndex,
RefreshDelta: DefaultRefreshDelta,
CacheTTL: DefaultCacheTTL,
PeerFile: DefaultPeerFile,
Banner: nil,
Country: DefaultCountry,
GenesisHash: chaincfg.TestNet3Params.GenesisHash.String(),
ServerVersion: HUB_PROTOCOL_VERSION,
ProtocolMin: PROTOCOL_MIN,
ProtocolMax: PROTOCOL_MAX,
ServerDescription: DefaultServerDescription,
PaymentAddress: DefaultPaymentAddress,
DonationAddress: DefaultDonationAddress,
DailyFee: DefaultDailyFee,
DisableEs: true,
Debug: true,
DisableLoadPeers: true,
DisableStartPrometheus: true,
DisableStartUDP: true,
DisableWritePeers: true,
DisableRocksDBRefresh: true,
DisableResolve: true,
DisableBlockingAndFiltering: true,
DisableStartNotifier: true,
DisableStartJSONRPC: true,
}
return args
}
// GetEnvironment takes the environment variables as an array of strings
// and a getkeyval function to turn it into a map.
func GetEnvironment(data []string, getkeyval func(item string) (key, val string)) map[string]string {
@ -99,27 +204,58 @@ func GetEnvironmentStandard() map[string]string {
func ParseArgs(searchRequest *pb.SearchRequest) *Args {
environment := GetEnvironmentStandard()
parser := argparse.NewParser("hub", "hub server and client")
parser := argparse.NewParser("herald", "herald server and client")
serveCmd := parser.NewCommand("serve", "start the hub server")
serveCmd := parser.NewCommand("serve", "start the herald server")
searchCmd := parser.NewCommand("search", "claim search")
dbCmd := parser.NewCommand("db", "db testing")
defaultDaemonURL := "http://localhost:9245"
if u, ok := environment["DAEMON_URL"]; ok {
defaultDaemonURL = u
}
validateURL := func(arg []string) error {
_, err := url.Parse(arg[0])
return err
}
validatePort := func(arg []string) error {
_, err := strconv.ParseUint(arg[0], 10, 16)
return err
}
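// Illustrative: ParseUint(arg[0], 10, 16) accepts exactly the uint16
// range, so "50001" validates while "70000" or "abc" are rejected.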
// main server config arguments
host := parser.String("", "rpchost", &argparse.Options{Required: false, Help: "RPC host", Default: DefaultHost})
port := parser.String("", "rpcport", &argparse.Options{Required: false, Help: "RPC port", Default: DefaultPort})
port := parser.Int("", "rpcport", &argparse.Options{Required: false, Help: "RPC port", Validate: validatePort, Default: DefaultPort})
dbPath := parser.String("", "db-path", &argparse.Options{Required: false, Help: "RocksDB path", Default: DefaultDBPath})
chain := parser.Selector("", "chain", []string{chaincfg.MainNetParams.Name, chaincfg.TestNet3Params.Name, chaincfg.RegressionNetParams.Name, "testnet"},
&argparse.Options{Required: false, Help: "Which chain to use, default is 'mainnet'. Values 'regtest' and 'testnet' are for testing", Default: chaincfg.MainNetParams.Name})
daemonURLStr := parser.String("", "daemon-url", &argparse.Options{Required: false, Help: "URL for rpc to lbrycrd or lbcd, <rpcuser>:<rpcpassword>@<lbcd rpc ip>:<lbcd rpc port>.", Validate: validateURL, Default: defaultDaemonURL})
daemonCAPath := parser.String("", "daemon-ca-path", &argparse.Options{Required: false, Help: "Path to the lbcd CA file. Use SSL certificate to verify connection to lbcd."})
esHost := parser.String("", "eshost", &argparse.Options{Required: false, Help: "elasticsearch host", Default: DefaultEsHost})
esPort := parser.String("", "esport", &argparse.Options{Required: false, Help: "elasticsearch port", Default: DefaultEsPort})
prometheusPort := parser.String("", "prometheus-port", &argparse.Options{Required: false, Help: "prometheus port", Default: DefaultPrometheusPort})
notifierPort := parser.String("", "notifier-port", &argparse.Options{Required: false, Help: "notifier port", Default: DefaultNotifierPort})
esPort := parser.Int("", "esport", &argparse.Options{Required: false, Help: "elasticsearch port", Default: DefaultEsPort})
prometheusPort := parser.Int("", "prometheus-port", &argparse.Options{Required: false, Help: "prometheus port", Default: DefaultPrometheusPort})
notifierPort := parser.Int("", "notifier-port", &argparse.Options{Required: false, Help: "notifier port", Default: DefaultNotifierPort})
jsonRPCPort := parser.Int("", "json-rpc-port", &argparse.Options{Required: false, Help: "JSON RPC port", Validate: validatePort, Default: DefaultJSONRPCPort})
jsonRPCHTTPPort := parser.Int("", "json-rpc-http-port", &argparse.Options{Required: false, Help: "JSON RPC over HTTP port", Validate: validatePort, Default: DefaultJSONRPCHTTPPort})
maxSessions := parser.Int("", "max-sessions", &argparse.Options{Required: false, Help: "Maximum number of electrum clients that can be connected", Default: DefaultMaxSessions})
sessionTimeout := parser.Int("", "session-timeout", &argparse.Options{Required: false, Help: "Session inactivity timeout (seconds)", Default: DefaultSessionTimeout})
esIndex := parser.String("", "esindex", &argparse.Options{Required: false, Help: "elasticsearch index name", Default: DefaultEsIndex})
refreshDelta := parser.Int("", "refresh-delta", &argparse.Options{Required: false, Help: "elasticsearch index refresh delta in seconds", Default: DefaultRefreshDelta})
cacheTTL := parser.Int("", "cachettl", &argparse.Options{Required: false, Help: "Cache TTL in minutes", Default: DefaultCacheTTL})
peerFile := parser.String("", "peerfile", &argparse.Options{Required: false, Help: "Initial peer file for federation", Default: DefaultPeerFile})
bannerFile := parser.String("", "bannerfile", &argparse.Options{Required: false, Help: "Banner file server.banner", Default: DefaultBannerFile})
country := parser.String("", "country", &argparse.Options{Required: false, Help: "Country this node is running in. Default US.", Default: DefaultCountry})
blockingChannelIds := parser.StringList("", "blocking-channel-ids", &argparse.Options{Required: false, Help: "Blocking channel ids", Default: DefaultBlockingChannelIds})
filteringChannelIds := parser.StringList("", "filtering-channel-ids", &argparse.Options{Required: false, Help: "Filtering channel ids", Default: DefaultFilteringChannelIds})
// arguments for server features
serverDescription := parser.String("", "server-description", &argparse.Options{Required: false, Help: "Server description", Default: DefaultServerDescription})
paymentAddress := parser.String("", "payment-address", &argparse.Options{Required: false, Help: "Payment address", Default: DefaultPaymentAddress})
donationAddress := parser.String("", "donation-address", &argparse.Options{Required: false, Help: "Donation address", Default: DefaultDonationAddress})
dailyFee := parser.String("", "daily-fee", &argparse.Options{Required: false, Help: "Daily fee", Default: DefaultDailyFee})
// flags for disabling features
debug := parser.Flag("", "debug", &argparse.Options{Required: false, Help: "enable debug logging", Default: false})
disableEs := parser.Flag("", "disable-es", &argparse.Options{Required: false, Help: "Disable elastic search, for running/testing independently", Default: false})
disableLoadPeers := parser.Flag("", "disable-load-peers", &argparse.Options{Required: false, Help: "Disable load peers from disk at startup", Default: DefaultDisableLoadPeers})
@ -131,7 +267,9 @@ func ParseArgs(searchRequest *pb.SearchRequest) *Args {
disableResolve := parser.Flag("", "disable-resolve", &argparse.Options{Required: false, Help: "Disable resolve endpoint (and rocksdb loading)", Default: DefaultDisableResolve})
disableBlockingAndFiltering := parser.Flag("", "disable-blocking-and-filtering", &argparse.Options{Required: false, Help: "Disable blocking and filtering of channels and streams", Default: DefaultDisableBlockingAndFiltering})
disableStartNotifier := parser.Flag("", "disable-start-notifier", &argparse.Options{Required: false, Help: "Disable start notifier", Default: DisableStartNotifier})
disableStartJSONRPC := parser.Flag("", "disable-start-jsonrpc", &argparse.Options{Required: false, Help: "Disable start jsonrpc endpoint", Default: DisableStartJSONRPC})
// search command arguments
text := parser.String("", "text", &argparse.Options{Required: false, Help: "text query"})
name := parser.String("", "name", &argparse.Options{Required: false, Help: "name"})
claimType := parser.String("", "claim_type", &argparse.Options{Required: false, Help: "claim_type"})
@ -148,22 +286,52 @@ func ParseArgs(searchRequest *pb.SearchRequest) *Args {
log.Fatalln(parser.Usage(err))
}
// Use default JSON RPC port only if *neither* JSON RPC arg is specified.
if *jsonRPCPort == 0 && *jsonRPCHTTPPort == 0 {
*jsonRPCPort = DefaultJSONRPCPort
}
daemonURL, err := url.Parse(*daemonURLStr)
if err != nil {
log.Fatalf("URL parse failed: %v", err)
}
banner := loadBanner(bannerFile, HUB_PROTOCOL_VERSION)
args := &Args{
CmdType: SearchCmd,
Host: *host,
Port: *port,
DBPath: *dbPath,
EsHost: *esHost,
EsPort: *esPort,
PrometheusPort: *prometheusPort,
NotifierPort: *notifierPort,
EsIndex: *esIndex,
RefreshDelta: *refreshDelta,
CacheTTL: *cacheTTL,
PeerFile: *peerFile,
Country: *country,
BlockingChannelIds: *blockingChannelIds,
FilteringChannelIds: *filteringChannelIds,
CmdType: SearchCmd,
Host: *host,
Port: *port,
DBPath: *dbPath,
Chain: chain,
DaemonURL: daemonURL,
DaemonCAPath: *daemonCAPath,
EsHost: *esHost,
EsPort: *esPort,
PrometheusPort: *prometheusPort,
NotifierPort: *notifierPort,
JSONRPCPort: *jsonRPCPort,
JSONRPCHTTPPort: *jsonRPCHTTPPort,
MaxSessions: *maxSessions,
SessionTimeout: *sessionTimeout,
EsIndex: *esIndex,
RefreshDelta: *refreshDelta,
CacheTTL: *cacheTTL,
PeerFile: *peerFile,
Banner: banner,
Country: *country,
BlockingChannelIds: *blockingChannelIds,
FilteringChannelIds: *filteringChannelIds,
GenesisHash: "",
ServerVersion: HUB_PROTOCOL_VERSION,
ProtocolMin: PROTOCOL_MIN,
ProtocolMax: PROTOCOL_MAX,
ServerDescription: *serverDescription,
PaymentAddress: *paymentAddress,
DonationAddress: *donationAddress,
DailyFee: *dailyFee,
Debug: *debug,
DisableEs: *disableEs,
DisableLoadPeers: *disableLoadPeers,
@ -175,6 +343,7 @@ func ParseArgs(searchRequest *pb.SearchRequest) *Args {
DisableResolve: *disableResolve,
DisableBlockingAndFiltering: *disableBlockingAndFiltering,
DisableStartNotifier: *disableStartNotifier,
DisableStartJSONRPC: *disableStartJSONRPC,
}
if esHost, ok := environment["ELASTIC_HOST"]; ok {
@ -186,11 +355,17 @@ func ParseArgs(searchRequest *pb.SearchRequest) *Args {
}
if esPort, ok := environment["ELASTIC_PORT"]; ok {
args.EsPort = esPort
args.EsPort, err = strconv.Atoi(esPort)
if err != nil {
log.Fatal(err)
}
}
if prometheusPort, ok := environment["GOHUB_PROMETHEUS_PORT"]; ok {
args.PrometheusPort = prometheusPort
args.PrometheusPort, err = strconv.Atoi(prometheusPort)
if err != nil {
log.Fatal(err)
}
}
/*

View file

@ -3,17 +3,19 @@ package server
import (
"bufio"
"context"
"log"
"math"
"net"
"os"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/lbryio/herald.go/internal/metrics"
pb "github.com/lbryio/herald.go/protobuf/go"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
// Peer holds relevant information about peers that we know about.
@ -86,7 +88,7 @@ func (s *Server) getAndSetExternalIp(ip, port string) error {
// storing them as known peers. Returns a map of peerKey -> object
func (s *Server) loadPeers() error {
peerFile := s.Args.PeerFile
port := s.Args.Port
port := strconv.Itoa(s.Args.Port)
// First we make sure our server has come up, so we can answer back to peers.
var failures = 0
@ -98,7 +100,7 @@ retry:
time.Sleep(time.Second * time.Duration(math.Pow(float64(failures), 2)))
conn, err := grpc.DialContext(ctx,
"0.0.0.0:"+port,
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
@ -171,7 +173,7 @@ func (s *Server) subscribeToPeer(peer *Peer) error {
conn, err := grpc.DialContext(ctx,
peer.Address+":"+peer.Port,
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -181,12 +183,12 @@ func (s *Server) subscribeToPeer(peer *Peer) error {
msg := &pb.ServerMessage{
Address: s.ExternalIP.String(),
Port: s.Args.Port,
Port: strconv.Itoa(s.Args.Port),
}
c := pb.NewHubClient(conn)
log.Printf("%s:%s subscribing to %+v\n", s.ExternalIP, s.Args.Port, peer)
log.Printf("%s:%d subscribing to %+v\n", s.ExternalIP, s.Args.Port, peer)
_, err = c.PeerSubscribe(ctx, msg)
if err != nil {
return err
@ -207,7 +209,7 @@ func (s *Server) helloPeer(peer *Peer) (*pb.HelloMessage, error) {
conn, err := grpc.DialContext(ctx,
peer.Address+":"+peer.Port,
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -219,12 +221,12 @@ func (s *Server) helloPeer(peer *Peer) (*pb.HelloMessage, error) {
c := pb.NewHubClient(conn)
msg := &pb.HelloMessage{
Port: s.Args.Port,
Port: strconv.Itoa(s.Args.Port),
Host: s.ExternalIP.String(),
Servers: []*pb.ServerMessage{},
}
log.Printf("%s:%s saying hello to %+v\n", s.ExternalIP, s.Args.Port, peer)
log.Printf("%s:%d saying hello to %+v\n", s.ExternalIP, s.Args.Port, peer)
res, err := c.Hello(ctx, msg)
if err != nil {
log.Println(err)
@ -277,7 +279,7 @@ func (s *Server) notifyPeer(peerToNotify *Peer, newPeer *Peer) error {
conn, err := grpc.DialContext(ctx,
peerToNotify.Address+":"+peerToNotify.Port,
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -345,15 +347,15 @@ func (s *Server) addPeer(newPeer *Peer, ping bool, subscribe bool) error {
}
}
if s.Args.Port == newPeer.Port &&
if strconv.Itoa(s.Args.Port) == newPeer.Port &&
(localHosts[newPeer.Address] || newPeer.Address == s.ExternalIP.String()) {
log.Printf("%s:%s addPeer: Self peer, skipping...\n", s.ExternalIP, s.Args.Port)
log.Printf("%s:%d addPeer: Self peer, skipping...\n", s.ExternalIP, s.Args.Port)
return nil
}
k := peerKey(newPeer)
log.Printf("%s:%s adding peer %+v\n", s.ExternalIP, s.Args.Port, newPeer)
log.Printf("%s:%d adding peer %+v\n", s.ExternalIP, s.Args.Port, newPeer)
if oldServer, loaded := s.PeerServersLoadOrStore(newPeer); !loaded {
if ping {
_, err := s.helloPeer(newPeer)
@ -369,6 +371,10 @@ func (s *Server) addPeer(newPeer *Peer, ping bool, subscribe bool) error {
metrics.PeersKnown.Inc()
s.writePeers()
s.notifyPeerSubs(newPeer)
// This is awkward because we mix gRPC and JSON RPC notifications here.
// Do we still want the custom gRPC path?
log.Warn("Sending peer to NotifierChan")
s.NotifierChan <- peerNotification{newPeer.Address, newPeer.Port}
// Subscribe to all our peers for now
if subscribe {
@ -415,7 +421,7 @@ func (s *Server) makeHelloMessage() *pb.HelloMessage {
s.PeerServersMut.RUnlock()
return &pb.HelloMessage{
Port: s.Args.Port,
Port: strconv.Itoa(s.Args.Port),
Host: s.ExternalIP.String(),
Servers: servers,
}
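
Every dial site in this file swaps the deprecated grpc.WithInsecure() for the credentials-based equivalent; the two are behaviorally identical (plaintext transport). A minimal, self-contained sketch of the new pattern in isolation (the target address is a placeholder):

package main

import (
"context"
"log"
"time"

"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)

func main() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// insecure.NewCredentials() replaces the deprecated grpc.WithInsecure().
conn, err := grpc.DialContext(ctx,
"localhost:50051", // placeholder peer address
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
}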

View file

@ -4,23 +4,31 @@ import (
"bufio"
"context"
"fmt"
"log"
"net"
"os"
"strconv"
"strings"
"testing"
"github.com/lbryio/herald.go/internal/metrics"
pb "github.com/lbryio/herald.go/protobuf/go"
server "github.com/lbryio/herald.go/server"
"github.com/lbryio/herald.go/server"
"github.com/lbryio/lbry.go/v3/extras/stop"
dto "github.com/prometheus/client_model/go"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
// lineCountFile takes a fileName and counts the number of lines in it.
func lineCountFile(fileName string) int {
f, err := os.Open(fileName)
defer f.Close()
defer func() {
err := f.Close()
if err != nil {
log.Warn(err)
}
}()
if err != nil {
log.Println(err)
return 0
@ -44,41 +52,12 @@ func removeFile(fileName string) {
}
}
// makeDefaultArgs creates a default set of arguments for testing the server.
func makeDefaultArgs() *server.Args {
args := &server.Args{
CmdType: server.ServeCmd,
Host: server.DefaultHost,
Port: server.DefaultPort,
DBPath: server.DefaultDBPath,
EsHost: server.DefaultEsHost,
EsPort: server.DefaultEsPort,
PrometheusPort: server.DefaultPrometheusPort,
NotifierPort: server.DefaultNotifierPort,
EsIndex: server.DefaultEsIndex,
RefreshDelta: server.DefaultRefreshDelta,
CacheTTL: server.DefaultCacheTTL,
PeerFile: server.DefaultPeerFile,
Country: server.DefaultCountry,
DisableEs: true,
Debug: true,
DisableLoadPeers: true,
DisableStartPrometheus: true,
DisableStartUDP: true,
DisableWritePeers: true,
DisableRocksDBRefresh: true,
DisableResolve: true,
DisableBlockingAndFiltering: true,
DisableStartNotifier: true,
}
return args
}
// TestAddPeer tests the ability to add peers
func TestAddPeer(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
// ctx := context.Background()
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args.DisableStartNotifier = false
tests := []struct {
name string
@ -128,6 +107,7 @@ func TestAddPeer(t *testing.T) {
if got != tt.want {
t.Errorf("len(server.PeerServers) = %d, want %d\n", got, tt.want)
}
hubServer.Stop()
})
}
@ -135,9 +115,10 @@ func TestAddPeer(t *testing.T) {
// TestPeerWriter tests that peers get written properly
func TestPeerWriter(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args.DisableWritePeers = false
args.DisableStartNotifier = false
tests := []struct {
name string
@ -172,17 +153,16 @@ func TestPeerWriter(t *testing.T) {
Port: "50051",
}
}
//log.Printf("Adding peer %+v\n", peer)
err := hubServer.AddPeerExported()(peer, false, false)
if err != nil {
log.Println(err)
}
}
//log.Println("Counting lines...")
got := lineCountFile(hubServer.Args.PeerFile)
if got != tt.want {
t.Errorf("lineCountFile(peers.txt) = %d, want %d", got, tt.want)
}
hubServer.Stop()
})
}
@ -191,10 +171,13 @@ func TestPeerWriter(t *testing.T) {
// TestAddPeerEndpoint tests the ability to add peers
func TestAddPeerEndpoint(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
args2 := makeDefaultArgs()
args2.Port = "50052"
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args.DisableStartNotifier = false
args2 := server.MakeDefaultTestArgs()
args2.DisableStartNotifier = false
args2.Port = 50052
args2.NotifierPort = 18081
tests := []struct {
name string
@ -224,9 +207,8 @@ func TestAddPeerEndpoint(t *testing.T) {
metrics.PeersKnown.Set(0)
go hubServer.Run()
go hubServer2.Run()
//go hubServer.Run()
conn, err := grpc.Dial("localhost:"+args.Port,
grpc.WithInsecure(),
conn, err := grpc.Dial("localhost:"+strconv.Itoa(args.Port),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -245,8 +227,6 @@ func TestAddPeerEndpoint(t *testing.T) {
log.Println(err)
}
hubServer.GrpcServer.GracefulStop()
hubServer2.GrpcServer.GracefulStop()
got1 := hubServer.GetNumPeersExported()()
got2 := hubServer2.GetNumPeersExported()()
if got1 != tt.wantServerOne {
@ -255,6 +235,8 @@ func TestAddPeerEndpoint(t *testing.T) {
if got2 != tt.wantServerTwo {
t.Errorf("len(hubServer2.PeerServers) = %d, want %d\n", got2, tt.wantServerTwo)
}
hubServer.Stop()
hubServer2.Stop()
})
}
@ -262,12 +244,17 @@ func TestAddPeerEndpoint(t *testing.T) {
// TestAddPeerEndpoint2 tests the ability to add peers
func TestAddPeerEndpoint2(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
args2 := makeDefaultArgs()
args3 := makeDefaultArgs()
args2.Port = "50052"
args3.Port = "50053"
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args2 := server.MakeDefaultTestArgs()
args3 := server.MakeDefaultTestArgs()
args2.Port = 50052
args3.Port = 50053
args.DisableStartNotifier = false
args2.DisableStartNotifier = false
args3.DisableStartNotifier = false
args2.NotifierPort = 18081
args3.NotifierPort = 18082
tests := []struct {
name string
@ -292,8 +279,8 @@ func TestAddPeerEndpoint2(t *testing.T) {
go hubServer.Run()
go hubServer2.Run()
go hubServer3.Run()
conn, err := grpc.Dial("localhost:"+args.Port,
grpc.WithInsecure(),
conn, err := grpc.Dial("localhost:"+strconv.Itoa(args.Port),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -321,9 +308,6 @@ func TestAddPeerEndpoint2(t *testing.T) {
log.Println(err)
}
hubServer.GrpcServer.GracefulStop()
hubServer2.GrpcServer.GracefulStop()
hubServer3.GrpcServer.GracefulStop()
got1 := hubServer.GetNumPeersExported()()
got2 := hubServer2.GetNumPeersExported()()
got3 := hubServer3.GetNumPeersExported()()
@ -336,6 +320,9 @@ func TestAddPeerEndpoint2(t *testing.T) {
if got3 != tt.wantServerThree {
t.Errorf("len(hubServer3.PeerServers) = %d, want %d\n", got3, tt.wantServerThree)
}
hubServer.Stop()
hubServer2.Stop()
hubServer3.Stop()
})
}
@ -343,12 +330,17 @@ func TestAddPeerEndpoint2(t *testing.T) {
// TestAddPeerEndpoint3 tests the ability to add peers
func TestAddPeerEndpoint3(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
args2 := makeDefaultArgs()
args3 := makeDefaultArgs()
args2.Port = "50052"
args3.Port = "50053"
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args2 := server.MakeDefaultTestArgs()
args3 := server.MakeDefaultTestArgs()
args2.Port = 50052
args3.Port = 50053
args.DisableStartNotifier = false
args2.DisableStartNotifier = false
args3.DisableStartNotifier = false
args2.NotifierPort = 18081
args3.NotifierPort = 18082
tests := []struct {
name string
@ -373,15 +365,15 @@ func TestAddPeerEndpoint3(t *testing.T) {
go hubServer.Run()
go hubServer2.Run()
go hubServer3.Run()
conn, err := grpc.Dial("localhost:"+args.Port,
grpc.WithInsecure(),
conn, err := grpc.Dial("localhost:"+strconv.Itoa(args.Port),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
log.Fatalf("did not connect: %v", err)
}
conn2, err := grpc.Dial("localhost:50052",
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithBlock(),
)
if err != nil {
@ -410,9 +402,9 @@ func TestAddPeerEndpoint3(t *testing.T) {
log.Println(err)
}
hubServer.GrpcServer.GracefulStop()
hubServer2.GrpcServer.GracefulStop()
hubServer3.GrpcServer.GracefulStop()
hubServer.Stop()
hubServer2.Stop()
hubServer3.Stop()
got1 := hubServer.GetNumPeersExported()()
got2 := hubServer2.GetNumPeersExported()()
got3 := hubServer3.GetNumPeersExported()()
@ -432,11 +424,11 @@ func TestAddPeerEndpoint3(t *testing.T) {
// TestAddPeer tests the ability to add peers
func TestUDPServer(t *testing.T) {
ctx := context.Background()
args := makeDefaultArgs()
ctx := stop.NewDebug()
args := server.MakeDefaultTestArgs()
args2 := server.MakeDefaultTestArgs()
args2.Port = 50052
args.DisableStartUDP = false
args2 := makeDefaultArgs()
args2.Port = "50052"
args2.DisableStartUDP = false
tests := []struct {
@ -467,18 +459,18 @@ func TestUDPServer(t *testing.T) {
log.Println(err)
}
hubServer.GrpcServer.GracefulStop()
hubServer2.GrpcServer.GracefulStop()
hubServer.Stop()
hubServer2.Stop()
got1 := hubServer.ExternalIP.String()
if got1 != tt.want {
t.Errorf("hubServer.ExternalIP = %s, want %s\n", got1, tt.want)
t.Errorf("hubServer.Args.Port = %s\n", hubServer.Args.Port)
t.Errorf("hubServer.Args.Port = %d\n", hubServer.Args.Port)
}
got2 := hubServer2.ExternalIP.String()
if got2 != tt.want {
t.Errorf("hubServer2.ExternalIP = %s, want %s\n", got2, tt.want)
t.Errorf("hubServer2.Args.Port = %s\n", hubServer2.Args.Port)
t.Errorf("hubServer2.Args.Port = %d\n", hubServer2.Args.Port)
}
})
}

View file

@ -0,0 +1,924 @@
package server
import (
"bytes"
"compress/zlib"
"crypto/sha256"
"encoding/base64"
"encoding/binary"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcd/wire"
"github.com/lbryio/lbcutil"
"golang.org/x/exp/constraints"
log "github.com/sirupsen/logrus"
)
// BlockchainBlockService methods handle "blockchain.block.*" RPCs
type BlockchainBlockService struct {
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
}
// BlockchainBlockService methods handle "blockchain.headers.*" RPCs
type BlockchainHeadersService struct {
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
// needed for subscribe/unsubscribe
sessionMgr *sessionManager
session *session
}
// BlockchainAddressService methods handle "blockchain.address.*" RPCs
type BlockchainAddressService struct {
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
// needed for subscribe/unsubscribe
sessionMgr *sessionManager
session *session
}
// BlockchainScripthashService methods handle "blockchain.scripthash.*" RPCs
type BlockchainScripthashService struct {
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
// needed for subscribe/unsubscribe
sessionMgr *sessionManager
session *session
}
// BlockchainTransactionService methods handle "blockchain.transaction.*" RPCs
type BlockchainTransactionService struct {
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
// needed for broadcast TX
sessionMgr *sessionManager
}
const CHUNK_SIZE = 96
const MAX_CHUNK_SIZE = 40960
const HEADER_SIZE = wire.MaxBlockHeaderPayload
const HASHX_LEN = 11
func min[Ord constraints.Ordered](x, y Ord) Ord {
if x < y {
return x
}
return y
}
func max[Ord constraints.Ordered](x, y Ord) Ord {
if x > y {
return x
}
return y
}
type BlockHeaderElectrum struct {
Version uint32 `json:"version"`
PrevBlockHash string `json:"prev_block_hash"`
MerkleRoot string `json:"merkle_root"`
ClaimTrieRoot string `json:"claim_trie_root"`
Timestamp uint32 `json:"timestamp"`
Bits uint32 `json:"bits"`
Nonce uint32 `json:"nonce"`
BlockHeight uint32 `json:"block_height"`
}
func newBlockHeaderElectrum(header *[HEADER_SIZE]byte, height uint32) *BlockHeaderElectrum {
var h1, h2, h3 chainhash.Hash
h1.SetBytes(header[4:36])
h2.SetBytes(header[36:68])
h3.SetBytes(header[68:100])
return &BlockHeaderElectrum{
Version: binary.LittleEndian.Uint32(header[0:]),
PrevBlockHash: h1.String(),
MerkleRoot: h2.String(),
ClaimTrieRoot: h3.String(),
Timestamp: binary.LittleEndian.Uint32(header[100:]),
Bits: binary.LittleEndian.Uint32(header[104:]),
Nonce: binary.LittleEndian.Uint32(header[108:]),
BlockHeight: height,
}
}
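// For reference, the slicing above assumes LBRY's 112-byte header
// layout (HEADER_SIZE == wire.MaxBlockHeaderPayload):
//
//	[0:4)     version          little-endian uint32
//	[4:36)    prev block hash
//	[36:68)   merkle root
//	[68:100)  claim trie root
//	[100:104) timestamp        little-endian uint32
//	[104:108) bits             little-endian uint32
//	[108:112) nonce            little-endian uint32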
type BlockGetServerHeightReq struct{}
type BlockGetServerHeightResp uint32
// blockchain.block.get_server_height
func (s *BlockchainBlockService) Get_server_height(req *BlockGetServerHeightReq, resp **BlockGetServerHeightResp) error {
if s.DB == nil || s.DB.LastState == nil {
return fmt.Errorf("unknown height")
}
result := BlockGetServerHeightResp(s.DB.LastState.Height)
*resp = &result
return nil
}
type BlockGetChunkReq uint32
type BlockGetChunkResp string
// 'blockchain.block.get_chunk'
func (s *BlockchainBlockService) Get_chunk(req *BlockGetChunkReq, resp **BlockGetChunkResp) error {
index := uint32(*req)
db_headers, err := s.DB.GetHeaders(index*CHUNK_SIZE, CHUNK_SIZE)
if err != nil {
log.Warn(err)
return err
}
raw := make([]byte, 0, HEADER_SIZE*len(db_headers))
for _, h := range db_headers {
raw = append(raw, h[:]...)
}
headers := BlockGetChunkResp(hex.EncodeToString(raw))
*resp = &headers
return err
}
type BlockGetHeaderReq uint32
type BlockGetHeaderResp struct {
BlockHeaderElectrum
}
// 'blockchain.block.get_header'
func (s *BlockchainBlockService) Get_header(req *BlockGetHeaderReq, resp **BlockGetHeaderResp) error {
height := uint32(*req)
headers, err := s.DB.GetHeaders(height, 1)
if err != nil {
log.Warn(err)
return err
}
if len(headers) < 1 {
return errors.New("not found")
}
*resp = &BlockGetHeaderResp{*newBlockHeaderElectrum(&headers[0], height)}
return err
}
type BlockHeadersReq struct {
StartHeight uint32 `json:"start_height"`
Count uint32 `json:"count"`
CpHeight uint32 `json:"cp_height"`
B64 bool `json:"b64"`
}
func (req *BlockHeadersReq) UnmarshalJSON(b []byte) error {
var params [4]interface{}
err := json.Unmarshal(b, &params)
if err != nil {
return err
}
switch params[0].(type) {
case float64:
req.StartHeight = uint32(params[0].(float64))
default:
return fmt.Errorf("expected numeric argument #0 (start_height)")
}
switch params[1].(type) {
case float64:
req.Count = uint32(params[1].(float64))
default:
return fmt.Errorf("expected numeric argument #1 (count)")
}
switch params[2].(type) {
case float64:
req.CpHeight = uint32(params[2].(float64))
default:
return fmt.Errorf("expected numeric argument #2 (cp_height)")
}
switch params[3].(type) {
case bool:
req.B64 = params[3].(bool)
default:
return fmt.Errorf("expected boolean argument #3 (b64)")
}
return nil
}
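// Illustrative positional form accepted by the unmarshaler above:
//
//	{"id": 1, "method": "blockchain.block.headers", "params": [0, 96, 0, false]}
//
// decodes to BlockHeadersReq{StartHeight: 0, Count: 96, CpHeight: 0, B64: false}.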
type BlockHeadersResp struct {
Base64 string `json:"base64,omitempty"`
Hex string `json:"hex"`
Count uint32 `json:"count"`
Max uint32 `json:"max"`
Branch string `json:"branch,omitempty"`
Root string `json:"root,omitempty"`
}
// 'blockchain.block.headers'
func (s *BlockchainBlockService) Headers(req *BlockHeadersReq, resp **BlockHeadersResp) error {
count := min(req.Count, MAX_CHUNK_SIZE)
db_headers, err := s.DB.GetHeaders(req.StartHeight, count)
if err != nil {
log.Warn(err)
return err
}
count = uint32(len(db_headers))
raw := make([]byte, 0, HEADER_SIZE*count)
for _, h := range db_headers {
raw = append(raw, h[:]...)
}
result := &BlockHeadersResp{
Count: count,
Max: MAX_CHUNK_SIZE,
}
if req.B64 {
zipped := bytes.Buffer{}
w := zlib.NewWriter(&zipped)
w.Write(raw)
w.Close()
result.Base64 = base64.StdEncoding.EncodeToString(zipped.Bytes())
} else {
result.Hex = hex.EncodeToString(raw)
}
if count > 0 && req.CpHeight > 0 {
// TODO
//last_height := height + count - 1
}
*resp = result
return err
}
type HeadersSubscribeReq struct {
Raw bool `json:"raw"`
}
func (req *HeadersSubscribeReq) UnmarshalJSON(b []byte) error {
var params [1]interface{}
err := json.Unmarshal(b, &params)
if err != nil {
return err
}
switch params[0].(type) {
case bool:
req.Raw = params[0].(bool)
default:
return fmt.Errorf("expected bool argument #0 (raw)")
}
return nil
}
type HeadersSubscribeResp struct {
BlockHeaderElectrum
}
type HeadersSubscribeRawResp struct {
Hex string `json:"hex"`
Height uint32 `json:"height"`
}
// 'blockchain.headers.subscribe'
func (s *BlockchainHeadersService) Subscribe(req *HeadersSubscribeReq, resp *interface{}) error {
if s.sessionMgr == nil || s.session == nil {
return errors.New("no session, rpc not supported")
}
s.sessionMgr.headersSubscribe(s.session, req.Raw, true /*subscribe*/)
height := s.DB.Height
if s.DB.LastState != nil {
height = s.DB.LastState.Height
}
headers, err := s.DB.GetHeaders(height, 1)
if err != nil {
s.sessionMgr.headersSubscribe(s.session, req.Raw, false /*subscribe*/)
return err
}
if len(headers) < 1 {
return errors.New("not found")
}
if req.Raw {
*resp = &HeadersSubscribeRawResp{
Hex: hex.EncodeToString(headers[0][:]),
Height: height,
}
} else {
*resp = &HeadersSubscribeResp{*newBlockHeaderElectrum(&headers[0], height)}
}
return err
}
func decodeScriptHash(scripthash string) ([]byte, error) {
sh, err := hex.DecodeString(scripthash)
if err != nil {
return nil, err
}
if len(sh) != chainhash.HashSize {
return nil, fmt.Errorf("invalid scripthash: %v (length %v)", scripthash, len(sh))
}
internal.ReverseBytesInPlace(sh)
return sh, nil
}
func hashX(scripthash []byte) []byte {
return scripthash[:HASHX_LEN]
}
func hashXScript(script []byte, coin *chaincfg.Params) []byte {
if _, err := txscript.ExtractClaimScript(script); err == nil {
baseScript := txscript.StripClaimScriptPrefix(script)
if class, addrs, _, err := txscript.ExtractPkScriptAddrs(baseScript, coin); err == nil {
switch class {
case txscript.PubKeyHashTy, txscript.ScriptHashTy, txscript.PubKeyTy:
script, _ := txscript.PayToAddrScript(addrs[0])
return hashXScript(script, coin)
}
}
}
sum := sha256.Sum256(script)
return sum[:HASHX_LEN]
}
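// Note: for a plain (non-claim) script the claim branch above is a
// no-op and hashX is simply the first HASHX_LEN (11) bytes of the
// script's SHA-256 digest; claim scripts are first reduced to the
// equivalent payment script, so a claim output and a plain payment to
// the same address share one hashX.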
type AddressGetBalanceReq struct {
Address string `json:"address"`
}
type AddressGetBalanceResp struct {
Confirmed uint64 `json:"confirmed"`
Unconfirmed uint64 `json:"unconfirmed"`
}
// 'blockchain.address.get_balance'
func (s *BlockchainAddressService) Get_balance(req *AddressGetBalanceReq, resp **AddressGetBalanceResp) error {
address, err := lbcutil.DecodeAddress(req.Address, s.Chain)
if err != nil {
log.Warn(err)
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
log.Warn(err)
return err
}
hashX := hashXScript(script, s.Chain)
confirmed, unconfirmed, err := s.DB.GetBalance(hashX)
if err != nil {
log.Warn(err)
return err
}
*resp = &AddressGetBalanceResp{confirmed, unconfirmed}
return err
}
type scripthashGetBalanceReq struct {
ScriptHash string `json:"scripthash"`
}
type ScripthashGetBalanceResp struct {
Confirmed uint64 `json:"confirmed"`
Unconfirmed uint64 `json:"unconfirmed"`
}
// 'blockchain.scripthash.get_balance'
func (s *BlockchainScripthashService) Get_balance(req *scripthashGetBalanceReq, resp **ScripthashGetBalanceResp) error {
scripthash, err := decodeScriptHash(req.ScriptHash)
if err != nil {
log.Warn(err)
return err
}
hashX := hashX(scripthash)
confirmed, unconfirmed, err := s.DB.GetBalance(hashX)
if err != nil {
log.Warn(err)
return err
}
*resp = &ScripthashGetBalanceResp{confirmed, unconfirmed}
return err
}
type AddressGetHistoryReq struct {
Address string `json:"address"`
}
func (req *AddressGetHistoryReq) UnmarshalJSON(b []byte) error {
var params [1]interface{}
err := json.Unmarshal(b, &params)
if err != nil {
return err
}
switch params[0].(type) {
case string:
req.Address = params[0].(string)
default:
return fmt.Errorf("expected string argument #0 (address)")
}
return nil
}
type TxInfo struct {
TxHash string `json:"tx_hash"`
Height uint32 `json:"height"`
}
type TxInfoFee struct {
TxInfo
Fee uint64 `json:"fee"`
}
type AddressGetHistoryResp []TxInfoFee
// 'blockchain.address.get_history'
func (s *BlockchainAddressService) Get_history(req *AddressGetHistoryReq, resp **AddressGetHistoryResp) error {
address, err := lbcutil.DecodeAddress(req.Address, s.Chain)
if err != nil {
log.Warn(err)
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
log.Warn(err)
return err
}
hashX := hashXScript(script, s.Chain)
dbTXs, err := s.DB.GetHistory(hashX)
if err != nil {
log.Warn(err)
return err
}
confirmed := make([]TxInfo, 0, len(dbTXs))
for _, tx := range dbTXs {
confirmed = append(confirmed,
TxInfo{
TxHash: tx.TxHash.String(),
Height: tx.Height,
})
}
unconfirmed := []TxInfoFee{} // TODO
result := make(AddressGetHistoryResp, len(confirmed)+len(unconfirmed))
i := 0
for _, tx := range confirmed {
result[i].TxInfo = tx
i += 1
}
for _, tx := range unconfirmed {
result[i] = tx
i += 1
}
*resp = &result
return err
}
type ScripthashGetHistoryReq struct {
ScriptHash string `json:"scripthash"`
}
type ScripthashGetHistoryResp struct {
Confirmed []TxInfo `json:"confirmed"`
Unconfirmed []TxInfoFee `json:"unconfirmed"`
}
// 'blockchain.scripthash.get_history'
func (s *BlockchainScripthashService) Get_history(req *ScripthashGetHistoryReq, resp **ScripthashGetHistoryResp) error {
scripthash, err := decodeScriptHash(req.ScriptHash)
if err != nil {
log.Warn(err)
return err
}
hashX := hashX(scripthash)
dbTXs, err := s.DB.GetHistory(hashX)
if err != nil {
log.Warn(err)
return err
}
confirmed := make([]TxInfo, 0, len(dbTXs))
for _, tx := range dbTXs {
confirmed = append(confirmed,
TxInfo{
TxHash: tx.TxHash.String(),
Height: tx.Height,
})
}
result := &ScripthashGetHistoryResp{
Confirmed: confirmed,
Unconfirmed: []TxInfoFee{}, // TODO
}
*resp = result
return err
}
type AddressGetMempoolReq struct {
Address string `json:"address"`
}
type AddressGetMempoolResp []TxInfoFee
// 'blockchain.address.get_mempool'
func (s *BlockchainAddressService) Get_mempool(req *AddressGetMempoolReq, resp **AddressGetMempoolResp) error {
address, err := lbcutil.DecodeAddress(req.Address, s.Chain)
if err != nil {
log.Warn(err)
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
log.Warn(err)
return err
}
hashX := hashXScript(script, s.Chain)
// TODO...
internal.ReverseBytesInPlace(hashX)
unconfirmed := make([]TxInfoFee, 0, 100)
result := AddressGetMempoolResp(unconfirmed)
*resp = &result
return err
}
type ScripthashGetMempoolReq struct {
ScriptHash string `json:"scripthash"`
}
type ScripthashGetMempoolResp []TxInfoFee
// 'blockchain.scripthash.get_mempool'
func (s *BlockchainScripthashService) Get_mempool(req *ScripthashGetMempoolReq, resp **ScripthashGetMempoolResp) error {
scripthash, err := decodeScriptHash(req.ScriptHash)
if err != nil {
log.Warn(err)
return err
}
hashX := hashX(scripthash)
// TODO...
internal.ReverseBytesInPlace(hashX)
unconfirmed := make([]TxInfoFee, 0, 100)
result := ScripthashGetMempoolResp(unconfirmed)
*resp = &result
return err
}
type AddressListUnspentReq struct {
Address string `json:"address"`
}
type TXOInfo struct {
TxHash string `json:"tx_hash"`
TxPos uint16 `json:"tx_pos"`
Height uint32 `json:"height"`
Value uint64 `json:"value"`
}
type AddressListUnspentResp []TXOInfo
// 'blockchain.address.listunspent'
func (s *BlockchainAddressService) Listunspent(req *AddressListUnspentReq, resp **AddressListUnspentResp) error {
address, err := lbcutil.DecodeAddress(req.Address, s.Chain)
if err != nil {
log.Warn(err)
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
log.Warn(err)
return err
}
hashX := hashXScript(script, s.Chain)
dbTXOs, err := s.DB.GetUnspent(hashX)
unspent := make([]TXOInfo, 0, len(dbTXOs))
for _, txo := range dbTXOs {
unspent = append(unspent,
TXOInfo{
TxHash: txo.TxHash.String(),
TxPos: txo.TxPos,
Height: txo.Height,
Value: txo.Value,
})
}
result := AddressListUnspentResp(unspent)
*resp = &result
return err
}
type ScripthashListUnspentReq struct {
ScriptHash string `json:"scripthash"`
}
type ScripthashListUnspentResp []TXOInfo
// 'blockchain.scripthash.listunspent'
func (s *BlockchainScripthashService) Listunspent(req *ScripthashListUnspentReq, resp **ScripthashListUnspentResp) error {
scripthash, err := decodeScriptHash(req.ScriptHash)
if err != nil {
log.Warn(err)
return err
}
hashX := hashX(scripthash)
dbTXOs, err := s.DB.GetUnspent(hashX)
unspent := make([]TXOInfo, 0, len(dbTXOs))
for _, txo := range dbTXOs {
unspent = append(unspent,
TXOInfo{
TxHash: txo.TxHash.String(),
TxPos: txo.TxPos,
Height: txo.Height,
Value: txo.Value,
})
}
result := ScripthashListUnspentResp(unspent)
*resp = &result
return err
}
type AddressSubscribeReq []string
type AddressSubscribeResp []string
// 'blockchain.address.subscribe'
func (s *BlockchainAddressService) Subscribe(req *AddressSubscribeReq, resp **AddressSubscribeResp) error {
if s.sessionMgr == nil || s.session == nil {
return errors.New("no session, rpc not supported")
}
result := make([]string, 0, len(*req))
for _, addr := range *req {
address, err := lbcutil.DecodeAddress(addr, s.Chain)
if err != nil {
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
return err
}
hashX := hashXScript(script, s.Chain)
s.sessionMgr.hashXSubscribe(s.session, hashX, addr, true /*subscribe*/)
status, err := s.DB.GetStatus(hashX)
if err != nil {
return err
}
result = append(result, hex.EncodeToString(status))
}
*resp = (*AddressSubscribeResp)(&result)
return nil
}
// 'blockchain.address.unsubscribe'
func (s *BlockchainAddressService) Unsubscribe(req *AddressSubscribeReq, resp **AddressSubscribeResp) error {
if s.sessionMgr == nil || s.session == nil {
return errors.New("no session, rpc not supported")
}
for _, addr := range *req {
address, err := lbcutil.DecodeAddress(addr, s.Chain)
if err != nil {
return err
}
script, err := txscript.PayToAddrScript(address)
if err != nil {
return err
}
hashX := hashXScript(script, s.Chain)
s.sessionMgr.hashXSubscribe(s.session, hashX, addr, false /*subscribe*/)
}
*resp = (*AddressSubscribeResp)(nil)
return nil
}
type ScripthashSubscribeReq string
type ScripthashSubscribeResp string
// 'blockchain.scripthash.subscribe'
func (s *BlockchainScripthashService) Subscribe(req *ScripthashSubscribeReq, resp **ScripthashSubscribeResp) error {
if s.sessionMgr == nil || s.session == nil {
return errors.New("no session, rpc not supported")
}
var result string
scripthash, err := decodeScriptHash(string(*req))
if err != nil {
return err
}
hashX := hashX(scripthash)
s.sessionMgr.hashXSubscribe(s.session, hashX, string(*req), true /*subscribe*/)
status, err := s.DB.GetStatus(hashX)
if err != nil {
return err
}
result = hex.EncodeToString(status)
*resp = (*ScripthashSubscribeResp)(&result)
return nil
}
// 'blockchain.scripthash.unsubscribe'
func (s *BlockchainScripthashService) Unsubscribe(req *ScripthashSubscribeReq, resp **ScripthashSubscribeResp) error {
if s.sessionMgr == nil || s.session == nil {
return errors.New("no session, rpc not supported")
}
scripthash, err := decodeScriptHash(string(*req))
if err != nil {
return err
}
hashX := hashX(scripthash)
s.sessionMgr.hashXSubscribe(s.session, hashX, string(*req), false /*subscribe*/)
*resp = (*ScripthashSubscribeResp)(nil)
return nil
}
type TransactionBroadcastReq [1]string
type TransactionBroadcastResp string
// 'blockchain.transaction.broadcast'
func (s *BlockchainTransactionService) Broadcast(req *TransactionBroadcastReq, resp **TransactionBroadcastResp) error {
if s.sessionMgr == nil {
return errors.New("no session manager, rpc not supported")
}
strTx := string((*req)[0])
rawTx, err := hex.DecodeString(strTx)
if err != nil {
return err
}
txhash, err := s.sessionMgr.broadcastTx(rawTx)
if err != nil {
return err
}
result := txhash.String()
*resp = (*TransactionBroadcastResp)(&result)
return nil
}
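// Note: broadcastTx hands the raw transaction bytes to the session
// manager, which relays them to the attached lbcd node via its RPC
// client and returns the resulting tx hash.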
type TransactionGetReq string
type TXFullDetail struct {
Height int `json:"block_height"`
Pos uint32 `json:"pos"`
Merkle []string `json:"merkle"`
}
type TXDetail struct {
Height int `json:"block_height"`
}
type TXGetItem struct {
TxHash string
TxRaw string
Detail interface{} // TXFullDetail or TXDetail struct
}
type TransactionGetResp TXGetItem
// 'blockchain.transaction.get'
func (s *BlockchainTransactionService) Get(req *TransactionGetReq, resp **TransactionGetResp) error {
txids := [1]string{string(*req)}
request := TransactionGetBatchReq(txids[:])
var response *TransactionGetBatchResp
err := s.Get_batch(&request, &response)
if err != nil {
return err
}
if len(*response) < 1 {
return errors.New("tx not found")
}
switch (*response)[0].Detail.(type) {
case TXFullDetail:
break
case TXDetail:
default:
return errors.New("tx not confirmed")
}
*resp = (*TransactionGetResp)(&(*response)[0])
return err
}
type TransactionGetBatchReq []string
func (req *TransactionGetBatchReq) UnmarshalJSON(b []byte) error {
var params []interface{}
err := json.Unmarshal(b, &params)
if err != nil {
return err
}
if len(params) > 100 {
return fmt.Errorf("too many tx hashes in request: %v", len(params))
}
for i, txhash := range params {
switch txhash.(type) {
case string:
*req = append(*req, txhash.(string))
default:
return fmt.Errorf("expected string argument #%d (tx_hash)", i)
}
}
return nil
}
type TransactionGetBatchResp []TXGetItem
func (resp *TransactionGetBatchResp) MarshalJSON() ([]byte, error) {
// encode key/value pairs as variable length JSON object
var buf bytes.Buffer
enc := json.NewEncoder(&buf)
buf.WriteString("{")
for i, r := range *resp {
if i > 0 {
buf.WriteString(",")
}
txhash, raw, detail := r.TxHash, r.TxRaw, r.Detail
err := enc.Encode(txhash)
if err != nil {
return nil, err
}
buf.WriteString(":[")
err = enc.Encode(raw)
if err != nil {
return nil, err
}
buf.WriteString(",")
err = enc.Encode(detail)
if err != nil {
return nil, err
}
buf.WriteString("]")
}
buf.WriteString("}")
return buf.Bytes(), nil
}
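// Illustrative output shape: the batch response marshals to a JSON
// object keyed by txid rather than an array, e.g.
//
//	{"<txid>": ["<raw tx hex>", {"block_height": 500, "pos": 1, "merkle": ["..."]}]}
//
// matching the form lbry-sdk expects for transaction.get_batch.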
// 'blockchain.transaction.get_batch'
func (s *BlockchainTransactionService) Get_batch(req *TransactionGetBatchReq, resp **TransactionGetBatchResp) error {
if len(*req) > 100 {
return fmt.Errorf("too many tx hashes in request: %v", len(*req))
}
tx_hashes := make([]chainhash.Hash, 0, len(*req))
for i, txid := range *req {
tx_hashes = append(tx_hashes, chainhash.Hash{})
if err := chainhash.Decode(&tx_hashes[i], txid); err != nil {
return err
}
}
dbResult, err := s.DB.GetTxMerkle(tx_hashes)
if err != nil {
return err
}
result := make([]TXGetItem, 0, len(dbResult))
for _, r := range dbResult {
merkles := make([]string, len(r.Merkle))
for i, m := range r.Merkle {
merkles[i] = m.String()
}
detail := TXFullDetail{
Height: r.Height,
Pos: r.Pos,
Merkle: merkles,
}
result = append(result, TXGetItem{r.TxHash.String(), hex.EncodeToString(r.RawTx), detail}) // store the value, not a pointer, so the TXFullDetail type switches in Get/Get_merkle/Info match
}
*resp = (*TransactionGetBatchResp)(&result)
return err
}
type TransactionGetMerkleReq struct {
TxHash string `json:"tx_hash"`
Height uint32 `json:"height"`
}
type TransactionGetMerkleResp TXGetItem
// 'blockchain.transaction.get_merkle'
func (s *BlockchainTransactionService) Get_merkle(req *TransactionGetMerkleReq, resp **TransactionGetMerkleResp) error {
txids := [1]string{string(req.TxHash)}
request := TransactionGetBatchReq(txids[:])
var response *TransactionGetBatchResp
err := s.Get_batch(&request, &response)
if err != nil {
return err
}
if len(*response) < 1 {
return errors.New("tx not found")
}
switch (*response)[0].Detail.(type) {
case TXFullDetail:
break
case TXDetail:
default:
return errors.New("tx not confirmed")
}
*resp = (*TransactionGetMerkleResp)(&(*response)[0])
return err
}
type TransactionGetHeightReq string
type TransactionGetHeightResp uint32
// 'blockchain.transaction.get_height'
func (s *BlockchainTransactionService) Get_height(req *TransactionGetHeightReq, resp **TransactionGetHeightResp) error {
txid := string(*(req))
txhash, err := chainhash.NewHashFromStr(txid)
if err != nil {
return err
}
height, err := s.DB.GetTxHeight(txhash)
*resp = (*TransactionGetHeightResp)(&height)
return err
}
type TransactionInfoReq string
type TransactionInfoResp TXGetItem
// 'blockchain.transaction.info'
func (s *BlockchainTransactionService) Info(req *TransactionInfoReq, resp **TransactionInfoResp) error {
txids := [1]string{string(*req)}
request := TransactionGetBatchReq(txids[:])
var response *TransactionGetBatchResp
err := s.Get_batch(&request, &response)
if err != nil {
return err
}
if len(*response) < 1 {
return errors.New("tx not found")
}
switch (*response)[0].Detail.(type) {
case TXFullDetail:
break
case TXDetail:
default:
if (*response)[0].TxHash == "" {
return errors.New("no such mempool or blockchain transaction")
}
}
*resp = (*TransactionInfoResp)(&(*response)[0])
return err
}

View file

@ -0,0 +1,469 @@
package server
import (
"encoding/hex"
"encoding/json"
"net"
"strconv"
"sync"
"testing"
"github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/txscript"
"github.com/lbryio/lbcutil"
"github.com/lbryio/lbry.go/v3/extras/stop"
)
// Source: test_variety_of_transactions_and_longish_history (lbry-sdk/tests/integration/transactions)
const regTestDBPath = "../testdata/test_variety_of_transactions/lbry-rocksdb"
const regTestHeight = 502
var regTestAddrs = [30]string{
"mtgiQkd35xpx3TaZ4RBNirf3uSMQ8tXQ7z",
"mqMjBtzGTtRty7Y54RqeNLk9QE8rYUfpm3",
"n2q8ASDZmib4adu2eU4dPvVvjYeU97pks4",
"mzxYWTJogAtduNaeyH9pSSmBSPkJj33HDJ",
"mweCKeZkeUUi8RQdHry3Mziphb87vCwiiW",
"mp7ZuiZgBNJHFX6DVmeZrCj8SuzVQNDLwb",
"n2zZoBocGCcxe6jFo1anbbAsUFMPXdYfnY",
"msps28KwRJF77DxhzqD98prdwCrZwdUxJc",
"mjvkjuss63pq2mpsRn4Q5tsNKVMLG9qUt7",
"miF9cJn8HiX6vsorRDXtZEgcW7BeWowqkX",
"mx87wRYFchYaLjXyNaboMuEMRLRboFSPDD",
"mhvb94idtQvTSCQk9EB16wLLkSrbWizPRG",
"mx3Fu8FDM4nKR9VYtHWPtSGKVt1D588Ay1",
"mhqvhX7kLNQ2bUNWZxMhE1z6QEJKrqdV8T",
"mgekw8L4xEezFtkYdSarL4sk5Sc8n9UtzG",
"myhFrTz99ZHwbGo7qV4D7fJKfji7YJ3vZ8",
"mnf8UCVoo6DBq6Tg4QpnFFdV1mFVHi43TF",
"mn7hKyh6EA8oLAPkvTd9vPEgzLRejLxkj2",
"msfarwFff7LX6DkXk295x3YMnJtR5Yw8uy",
"mn8sUv6ryiLn4kzssBTqNaB1oL6qcKDzJ4",
"mhwgeQFyi1z1RxNR1CphE8PcwG2xBWcxDp",
"n2jKpDXhVaQHiKqhdQYwwykhoYtKtbh8P1",
"mhnt4btqpAuiNwjAfFxPEaA4ekCE8faRYN",
"mmTFCt6Du1VsdxSKc7f21vYsT75KnRy7NM",
"mm1nx1xSmgRponM5tmdq15KREa7f6M36La",
"mxMXmMKUqoj19hxEA5r3hZJgirT6nCQh14",
"mx2L4iqNGzpuNNsDmjvCpcomefDWLAjdv1",
"mohJcUzQdCYL7nEySKNQC8PUzowNS5gGvo",
"mjv1vErZiDXsh9TvBDGCBpzobZx7aVYuy7",
"mwDPTZzHsM6p1DfDnBeojDLRCDceTcejkT",
}
// const dbPath := "/Users/swdev1/hub/scribe_db.599529/lbry-rocksdb"
// const dbPath := "/mnt/d/data/snapshot_1072108/lbry-rocksdb"
func TestServerGetHeight(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
s := &BlockchainBlockService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
req := BlockGetServerHeightReq{}
var resp *BlockGetServerHeightResp
err = s.Get_server_height(&req, &resp)
if err != nil {
t.Errorf("handler err: %v", err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("unmarshal err: %v", err)
}
t.Logf("resp: %v", string(marshalled))
if string(marshalled) != strconv.FormatInt(regTestHeight, 10) {
t.Errorf("bad height: %v", string(marshalled))
}
}
func TestGetChunk(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
s := &BlockchainBlockService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for index := 0; index < 10; index++ {
req := BlockGetChunkReq(index)
var resp *BlockGetChunkResp
err := s.Get_chunk(&req, &resp)
if err != nil {
t.Errorf("index: %v handler err: %v", index, err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("index: %v unmarshal err: %v", index, err)
}
t.Logf("index: %v resp: %v", index, string(marshalled))
switch index {
case 0, 1, 2, 3, 4:
if len(*resp) != (CHUNK_SIZE * HEADER_SIZE * 2) {
t.Errorf("index: %v bad length: %v", index, len(*resp))
}
case 5:
if len(*resp) != 23*112*2 {
t.Errorf("index: %v bad length: %v", index, len(*resp))
}
default:
if len(*resp) != 0 {
t.Errorf("index: %v bad length: %v", index, len(*resp))
}
}
}
}
func TestGetHeader(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
s := &BlockchainBlockService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for height := 0; height < 700; height += 100 {
req := BlockGetHeaderReq(height)
var resp *BlockGetHeaderResp
err := s.Get_header(&req, &resp)
if err != nil && height <= 500 {
t.Errorf("height: %v handler err: %v", height, err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("height: %v unmarshal err: %v", height, err)
}
t.Logf("height: %v resp: %v", height, string(marshalled))
}
}
func TestHeaders(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
s := &BlockchainBlockService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for height := uint32(0); height < 700; height += 100 {
req := BlockHeadersReq{
StartHeight: height,
Count: 1,
CpHeight: 0,
B64: false,
}
var resp *BlockHeadersResp
err := s.Headers(&req, &resp)
if err != nil {
t.Errorf("Headers: %v", err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("height: %v unmarshal err: %v", height, err)
}
t.Logf("height: %v resp: %v", height, string(marshalled))
}
}
func TestHeadersSubscribe(t *testing.T) {
args := MakeDefaultTestArgs()
grp := stop.NewDebug()
secondaryPath := "asdf"
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
defer db.Shutdown()
if err != nil {
t.Error(err)
return
}
sm := newSessionManager(nil, db, args, grp, &chaincfg.RegressionNetParams, nil)
sm.start()
defer sm.stop()
client1, server1 := net.Pipe()
sess1 := sm.addSession(server1)
client2, server2 := net.Pipe()
sess2 := sm.addSession(server2)
// Set up logic to read a notification.
var received sync.WaitGroup
recv := func(client net.Conn) {
defer received.Done()
buf := make([]byte, 1024)
n, err := client.Read(buf)
if err != nil {
t.Errorf("read err: %v", err)
}
t.Logf("len: %v notification: %v", n, string(buf))
}
received.Add(2)
go recv(client1)
go recv(client2)
s1 := &BlockchainHeadersService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
sessionMgr: sm,
session: sess1,
}
s2 := &BlockchainHeadersService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
sessionMgr: sm,
session: sess2,
}
// Subscribe with Raw: false.
req1 := HeadersSubscribeReq{Raw: false}
var r any
err = s1.Subscribe(&req1, &r)
if err != nil {
t.Errorf("handler err: %v", err)
}
resp1 := r.(*HeadersSubscribeResp)
marshalled1, err := json.MarshalIndent(resp1, "", " ")
if err != nil {
t.Errorf("unmarshal err: %v", err)
}
// Subscribe with Raw: true.
t.Logf("resp: %v", string(marshalled1))
req2 := HeadersSubscribeReq{Raw: true}
err = s2.Subscribe(&req2, &r)
if err != nil {
t.Errorf("handler err: %v", err)
}
resp2 := r.(*HeadersSubscribeRawResp)
marshalled2, err := json.MarshalIndent(resp2, "", " ")
if err != nil {
t.Errorf("unmarshal err: %v", err)
}
t.Logf("resp: %v", string(marshalled2))
// Now send a notification.
header500, err := hex.DecodeString("00000020e9537f98ae80a3aa0936dd424439b2b9305e5e9d9d5c7aa571b4422c447741e739b3109304ed4f0330d6854271db17da221559a46b68db4ceecfebd9f0c75dbe0100000000000000000000000000000000000000000000000000000000000000b3e02063ffff7f2001000000")
if err != nil {
t.Errorf("decode err: %v", err)
}
note1 := headerNotification{
HeightHash: internal.HeightHash{Height: 500, BlockHeader: header500},
blockHeaderElectrum: nil,
blockHeaderStr: "",
}
t.Logf("sending notification")
sm.doNotify(note1)
t.Logf("waiting to receive notification(s)...")
received.Wait()
}
func TestGetBalance(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
if err != nil {
t.Error(err)
return
}
defer db.Shutdown()
s := &BlockchainAddressService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for _, addr := range regTestAddrs {
req := AddressGetBalanceReq{addr}
var resp *AddressGetBalanceResp
err := s.Get_balance(&req, &resp)
if err != nil {
t.Errorf("address: %v handler err: %v", addr, err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("address: %v unmarshal err: %v", addr, err)
}
t.Logf("address: %v resp: %v", addr, string(marshalled))
}
}
func TestGetHistory(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
if err != nil {
t.Error(err)
return
}
defer db.Shutdown()
s := &BlockchainAddressService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for _, addr := range regTestAddrs {
req := AddressGetHistoryReq{addr}
var resp *AddressGetHistoryResp
err := s.Get_history(&req, &resp)
if err != nil {
t.Errorf("address: %v handler err: %v", addr, err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("address: %v unmarshal err: %v", addr, err)
}
t.Logf("address: %v resp: %v", addr, string(marshalled))
}
}
func TestListUnspent(t *testing.T) {
secondaryPath := "asdf"
grp := stop.NewDebug()
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
if err != nil {
t.Error(err)
return
}
defer db.Shutdown()
s := &BlockchainAddressService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
}
for _, addr := range regTestAddrs {
req := AddressListUnspentReq{addr}
var resp *AddressListUnspentResp
err := s.Listunspent(&req, &resp)
if err != nil {
t.Errorf("address: %v handler err: %v", addr, err)
}
marshalled, err := json.MarshalIndent(resp, "", " ")
if err != nil {
t.Errorf("address: %v unmarshal err: %v", addr, err)
}
t.Logf("address: %v resp: %v", addr, string(marshalled))
}
}
func TestAddressSubscribe(t *testing.T) {
args := MakeDefaultTestArgs()
grp := stop.NewDebug()
secondaryPath := "asdf"
db, err := db.GetProdDB(regTestDBPath, secondaryPath, grp)
if err != nil {
t.Error(err)
return
}
defer db.Shutdown()
sm := newSessionManager(nil, db, args, grp, &chaincfg.RegressionNetParams, nil)
sm.start()
defer sm.stop()
client1, server1 := net.Pipe()
sess1 := sm.addSession(server1)
client2, server2 := net.Pipe()
sess2 := sm.addSession(server2)
// Set up logic to read a notification.
var received sync.WaitGroup
recv := func(client net.Conn) {
defer received.Done()
buf := make([]byte, 1024)
n, err := client.Read(buf)
if err != nil {
t.Errorf("read err: %v", err)
}
t.Logf("len: %v notification: %v", n, string(buf))
}
received.Add(2)
go recv(client1)
go recv(client2)
s1 := &BlockchainAddressService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
sessionMgr: sm,
session: sess1,
}
s2 := &BlockchainAddressService{
DB: db,
Chain: &chaincfg.RegressionNetParams,
sessionMgr: sm,
session: sess2,
}
addr1, addr2 := regTestAddrs[1], regTestAddrs[2]
// Subscribe to addr1 and addr2.
req1 := AddressSubscribeReq{addr1, addr2}
var resp1 *AddressSubscribeResp
err = s1.Subscribe(&req1, &resp1)
if err != nil {
t.Errorf("handler err: %v", err)
}
marshalled1, err := json.MarshalIndent(resp1, "", " ")
if err != nil {
t.Errorf("unmarshal err: %v", err)
}
t.Logf("resp: %v", string(marshalled1))
// Subscribe to addr2 only.
req2 := AddressSubscribeReq{addr2}
var resp2 *AddressSubscribeResp
err = s2.Subscribe(&req2, &resp2)
if err != nil {
t.Errorf("handler err: %v", err)
}
marshalled2, err := json.MarshalIndent(resp2, "", " ")
if err != nil {
t.Errorf("unmarshal err: %v", err)
}
t.Logf("resp: %v", string(marshalled2))
// Now send a notification for addr2.
address, _ := lbcutil.DecodeAddress(addr2, sm.chain)
script, _ := txscript.PayToAddrScript(address)
note := hashXNotification{}
copy(note.hashX[:], hashXScript(script, sm.chain))
status, err := hex.DecodeString((*resp1)[1])
if err != nil {
t.Errorf("decode err: %v", err)
}
note.status = append(note.status, status...)
t.Logf("sending notification")
sm.doNotify(note)
t.Logf("waiting to receive notification(s)...")
received.Wait()
}

108
server/jsonrpc_claimtrie.go Normal file

@ -0,0 +1,108 @@
package server
import (
"context"
"fmt"
"github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/internal/metrics"
pb "github.com/lbryio/herald.go/protobuf/go"
"github.com/prometheus/client_golang/prometheus"
log "github.com/sirupsen/logrus"
)
type ClaimtrieService struct {
DB *db.ReadOnlyDBColumnFamily
Server *Server
}
type ResolveData struct {
Data []string `json:"data"`
}
type Result struct {
Data string `json:"data"`
}
type GetClaimByIDData struct {
ClaimID string `json:"claim_id"`
}
// Resolve is the json rpc endpoint for 'blockchain.claimtrie.resolve'.
func (t *ClaimtrieService) Resolve(args *ResolveData, result **pb.Outputs) error {
log.Println("Resolve")
res, err := InternalResolve(args.Data, t.DB)
*result = res
return err
}
// Search is the json rpc endpoint for 'blockchain.claimtrie.search'.
func (t *ClaimtrieService) Search(args *pb.SearchRequest, result **pb.Outputs) error {
log.Println("Search")
if t.Server == nil {
log.Warnln("Server is nil in Search")
*result = nil
return nil
}
ctx := context.Background()
res, err := t.Server.Search(ctx, args)
*result = res
return err
}
// GetClaimByID is the json rpc endpoint for 'blockchain.claimtrie.getclaimbyid'.
func (t *ClaimtrieService) GetClaimByID(args *GetClaimByIDData, result **pb.Outputs) error {
log.Println("GetClaimByID")
if len(args.ClaimID) != 40 {
*result = nil
return fmt.Errorf("len(claim_id) != 40")
}
rows, extras, err := t.DB.GetClaimByID(args.ClaimID)
if err != nil {
*result = nil
return err
}
metrics.RequestsCount.With(prometheus.Labels{"method": "blockchain.claimtrie.getclaimbyid"}).Inc()
// FIXME: this has txos and extras and so does GetClaimById?
allTxos := make([]*pb.Output, 0)
allExtraTxos := make([]*pb.Output, 0)
for _, row := range rows {
txos, extraTxos, err := row.ToOutputs()
if err != nil {
*result = nil
return err
}
// TODO: there may be a more efficient way to do this.
allTxos = append(allTxos, txos...)
allExtraTxos = append(allExtraTxos, extraTxos...)
}
for _, extra := range extras {
txos, extraTxos, err := extra.ToOutputs()
if err != nil {
*result = nil
return err
}
// TODO: there may be a more efficient way to do this.
allTxos = append(allTxos, txos...)
allExtraTxos = append(allExtraTxos, extraTxos...)
}
res := &pb.Outputs{
Txos: allTxos,
ExtraTxos: allExtraTxos,
Total: uint32(len(allTxos) + len(allExtraTxos)),
Offset: 0, //TODO
Blocked: nil, //TODO
BlockedTotal: 0, //TODO
}
log.Warn(res)
*result = res
return nil
}
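For illustration only: a resolve call against this service over the session-based JSON RPC port might look like the following newline-framed JSON (the port and URL are placeholders, not values taken from this change). The single-object params element decodes into ResolveData, and the result is the pb.Outputs message serialized as JSON.

echo '{"id": 1, "method": "blockchain.claimtrie.resolve", "params": [{"data": ["lbry://example"]}]}' | nc localhost 50001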

39
server/jsonrpc_peers.go Normal file

@ -0,0 +1,39 @@
package server
import (
"errors"
log "github.com/sirupsen/logrus"
)
type PeersService struct {
Server *Server
// needed for subscribe/unsubscribe
sessionMgr *sessionManager
session *session
}
type PeersSubscribeReq struct {
Ip string `json:"ip"`
Host string `json:"host"`
Details []string `json:"details"`
}
type PeersSubscribeResp string
// Subscribe is the json rpc endpoint for 'server.peers.subscribe'.
func (t *PeersService) Subscribe(req *PeersSubscribeReq, res **PeersSubscribeResp) error {
log.Println("PeersSubscribe")
// var port = "50001"
// FIXME: Get the actual port from the request details
if t.sessionMgr == nil || t.session == nil {
*res = nil
return errors.New("no session, rpc not supported")
}
t.sessionMgr.peersSubscribe(t.session, true /*subscribe*/)
*res = nil
return nil
}

88
server/jsonrpc_server.go Normal file

@ -0,0 +1,88 @@
package server
import (
log "github.com/sirupsen/logrus"
)
type ServerService struct {
Args *Args
}
type ServerFeatureService struct {
Args *Args
}
type ServerFeaturesReq struct{}
type ServerFeaturesRes struct {
Hosts map[string]string `json:"hosts"`
Pruning string `json:"pruning"`
ServerVersion string `json:"server_version"`
ProtocolMin string `json:"protocol_min"`
ProtocolMax string `json:"protocol_max"`
GenesisHash string `json:"genesis_hash"`
Description string `json:"description"`
PaymentAddress string `json:"payment_address"`
DonationAddress string `json:"donation_address"`
DailyFee string `json:"daily_fee"`
HashFunction string `json:"hash_function"`
TrendingAlgorithm string `json:"trending_algorithm"`
}
// Features is the json rpc endpoint for 'server.features'.
func (t *ServerService) Features(req *ServerFeaturesReq, res **ServerFeaturesRes) error {
log.Println("Features")
features := &ServerFeaturesRes{
Hosts: map[string]string{},
Pruning: "",
ServerVersion: HUB_PROTOCOL_VERSION,
ProtocolMin: PROTOCOL_MIN,
ProtocolMax: PROTOCOL_MAX,
GenesisHash: t.Args.GenesisHash,
Description: t.Args.ServerDescription,
PaymentAddress: t.Args.PaymentAddress,
DonationAddress: t.Args.DonationAddress,
DailyFee: t.Args.DailyFee,
HashFunction: "sha256",
TrendingAlgorithm: "fast_ar",
}
*res = features
return nil
}
type ServerBannerService struct {
Args *Args
}
type ServerBannerReq struct{}
type ServerBannerRes string
// Banner is the json rpc endpoint for 'server.banner'.
func (t *ServerService) Banner(req *ServerBannerReq, res **ServerBannerRes) error {
log.Println("Banner")
*res = (*ServerBannerRes)(t.Args.Banner)
return nil
}
type ServerVersionService struct {
Args *Args
}
type ServerVersionReq [2]string // [client_name, client_version]
type ServerVersionRes [2]string // [version, protocol_version]
// Version is the json rpc endpoint for 'server.version'.
func (t *ServerService) Version(req *ServerVersionReq, res **ServerVersionRes) error {
// FIXME: We may need to do the computation of a negotiated version here.
// Also return an error if client is not supported?
result := [2]string{t.Args.ServerVersion, t.Args.ServerVersion}
*res = (*ServerVersionRes)(&result)
log.Printf("Version(%v) -> %v", *req, **res)
return nil
}
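As a sketch of the server.version exchange on the session-based port (ids and version strings are illustrative): the client sends flat string params, the jsonPatchingCodec defined in server/session.go wraps them into an inner list, and the handler currently echoes the configured server version twice.

--> {"id": 1, "method": "server.version", "params": ["lbry-sdk", "0.102.0"]}
<-- {"jsonrpc": "2.0", "id": 1, "result": ["<ServerVersion>", "<ServerVersion>"]}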

154
server/jsonrpc_service.go Normal file

@ -0,0 +1,154 @@
package server
import (
"fmt"
"net"
"net/http"
"strconv"
"strings"
gorilla_mux "github.com/gorilla/mux"
gorilla_rpc "github.com/gorilla/rpc"
gorilla_json "github.com/gorilla/rpc/json"
log "github.com/sirupsen/logrus"
"golang.org/x/net/netutil"
)
type gorillaRpcCodec struct {
gorilla_rpc.Codec
}
func (c *gorillaRpcCodec) NewRequest(r *http.Request) gorilla_rpc.CodecRequest {
return &gorillaRpcCodecRequest{c.Codec.NewRequest(r)}
}
// gorillaRpcCodecRequest provides ability to rewrite the incoming
// request "method" field. For example:
// blockchain.block.get_header -> blockchain_block.Get_header
// blockchain.address.listunspent -> blockchain_address.Listunspent
// This makes the "method" string compatible with Gorilla/RPC
// requirements.
type gorillaRpcCodecRequest struct {
gorilla_rpc.CodecRequest
}
func (cr *gorillaRpcCodecRequest) Method() (string, error) {
rawMethod, err := cr.CodecRequest.Method()
if err != nil {
return rawMethod, err
}
parts := strings.Split(rawMethod, ".")
if len(parts) < 2 {
return rawMethod, fmt.Errorf("blockchain rpc: service/method ill-formed: %q", rawMethod)
}
service := strings.Join(parts[0:len(parts)-1], "_")
method := parts[len(parts)-1]
if len(method) < 1 {
return rawMethod, fmt.Errorf("blockchain rpc: method ill-formed: %q", method)
}
method = strings.ToUpper(string(method[0])) + string(method[1:])
return service + "." + method, err
}
// StartJsonRPC starts the json rpc server and registers the endpoints.
func (s *Server) StartJsonRPC() error {
// Set up the pure JSONRPC server with persistent connections/sessions.
if s.Args.JSONRPCPort != 0 {
port := ":" + strconv.Itoa(s.Args.JSONRPCPort)
laddr, err := net.ResolveTCPAddr("tcp4", port)
if err != nil {
log.Errorf("ResoveIPAddr: %v\n", err)
goto fail1
}
listener, err := net.ListenTCP("tcp4", laddr)
if err != nil {
log.Errorf("ListenTCP: %v\n", err)
goto fail1
}
log.Infof("JSONRPC server listening on %s", listener.Addr().String())
s.sessionManager.start()
acceptConnections := func(listener net.Listener) {
defer s.sessionManager.stop()
for {
conn, err := listener.Accept()
if err != nil {
log.Errorf("Accept: %v\n", err)
break
}
log.Infof("Accepted: %v", conn.RemoteAddr())
s.sessionManager.addSession(conn)
}
}
go acceptConnections(netutil.LimitListener(listener, s.sessionManager.sessionsMax))
}
fail1:
// Set up the JSONRPC over HTTP server.
if s.Args.JSONRPCHTTPPort != 0 {
s1 := gorilla_rpc.NewServer() // Create a new RPC server
// Register the type of data requested as JSON, with custom codec.
s1.RegisterCodec(&gorillaRpcCodec{gorilla_json.NewCodec()}, "application/json")
// Register "blockchain.claimtrie.*"" handlers.
claimtrieSvc := &ClaimtrieService{s.DB, s}
err := s1.RegisterTCPService(claimtrieSvc, "blockchain_claimtrie")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
// Register "blockchain.{block,address,scripthash,transaction}.*" handlers.
blockchainSvc := &BlockchainBlockService{s.DB, s.Chain}
err = s1.RegisterTCPService(blockchainSvc, "blockchain_block")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
err = s1.RegisterTCPService(&BlockchainHeadersService{s.DB, s.Chain, s.sessionManager, nil}, "blockchain_headers")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
err = s1.RegisterTCPService(&BlockchainAddressService{s.DB, s.Chain, s.sessionManager, nil}, "blockchain_address")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
err = s1.RegisterTCPService(&BlockchainScripthashService{s.DB, s.Chain, s.sessionManager, nil}, "blockchain_scripthash")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
err = s1.RegisterTCPService(&BlockchainTransactionService{s.DB, s.Chain, s.sessionManager}, "blockchain_transaction")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
// Register "server.{features,banner,version}" handlers.
serverSvc := &ServerService{s.Args}
err = s1.RegisterTCPService(serverSvc, "server")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
// Register "server.peers" handlers.
peersSvc := &PeersService{Server: s}
err = s1.RegisterTCPService(peersSvc, "server_peers")
if err != nil {
log.Errorf("RegisterTCPService: %v\n", err)
goto fail2
}
r := gorilla_mux.NewRouter()
r.Handle("/rpc", s1)
port := ":" + strconv.FormatUint(uint64(s.Args.JSONRPCHTTPPort), 10)
log.Infof("HTTP JSONRPC server listening on %s", port)
log.Fatal(http.ListenAndServe(port, r))
}
fail2:
return nil
}
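A minimal Go client sketch for exercising the session-based port (the address and client version strings are assumptions for illustration; net/rpc/jsonrpc happens to frame requests the way the session codec accepts):

package main

import (
	"log"
	"net"
	"net/rpc/jsonrpc"
)

func main() {
	// Dial the persistent JSON RPC port (s.Args.JSONRPCPort on the server side).
	conn, err := net.Dial("tcp", "localhost:50001")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := jsonrpc.NewClient(conn)
	// Dotted method names are rewritten server-side,
	// e.g. "server.version" -> "server.Version".
	var resp [2]string // [version, protocol_version]
	err = client.Call("server.version", [2]string{"example-client", "0.0.1"}, &resp)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("server.version -> %v", resp)
}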


@ -2,6 +2,7 @@ package server
import (
"encoding/binary"
"fmt"
"net"
"github.com/lbryio/herald.go/internal"
@ -52,15 +53,27 @@ func (s *Server) DoNotify(heightHash *internal.HeightHash) error {
// RunNotifier runs the notifying action forever
func (s *Server) RunNotifier() error {
for heightHash := range s.NotifierChan {
s.DoNotify(heightHash)
for notification := range s.NotifierChan {
switch note := notification.(type) {
case internal.HeightHash:
heightHash := note
s.DoNotify(&heightHash)
// Do we need this?
// case peerNotification:
// peer, _ := notification.(peerNotification)
// s.notifyPeerSubs(&Peer{Address: peer.address, Port: peer.port})
default:
logrus.Warn("unknown notification type")
}
s.sessionManager.doNotify(notification)
}
return nil
}
// NotifierServer implements the TCP protocol for height/blockheader notifications
func (s *Server) NotifierServer() error {
address := ":" + s.Args.NotifierPort
s.Grp.Add(1)
address := ":" + fmt.Sprintf("%d", s.Args.NotifierPort)
addr, err := net.ResolveTCPAddr("tcp", address)
if err != nil {
return err
@ -72,11 +85,27 @@ func (s *Server) NotifierServer() error {
}
defer listen.Close()
rdyCh := make(chan bool)
for {
var conn net.Conn
var err error
logrus.Info("Waiting for connection")
conn, err := listen.Accept()
go func() {
conn, err = listen.Accept()
rdyCh <- true
}()
select {
case <-s.Grp.Ch():
s.Grp.Done()
return nil
case <-rdyCh:
logrus.Info("Connection accepted")
}
if err != nil {
logrus.Warn(err)
continue


@ -1,7 +1,6 @@
package server_test
import (
"context"
"encoding/hex"
"fmt"
"net"
@ -10,11 +9,32 @@ import (
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/herald.go/server"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/sirupsen/logrus"
)
const defaultBufferSize = 1024
func subReady(s *server.Server) error {
sleepTime := time.Millisecond * 100
for {
if sleepTime > time.Second {
return fmt.Errorf("timeout")
}
s.HeightSubsMut.RLock()
if len(s.HeightSubs) > 0 {
s.HeightSubsMut.RUnlock()
return nil
}
s.HeightSubsMut.RUnlock()
logrus.Warn("waiting for subscriber")
time.Sleep(sleepTime)
sleepTime = sleepTime * 2
}
}
func tcpConnReady(addr string) (net.Conn, error) {
sleepTime := time.Millisecond * 100
for {
@ -47,14 +67,14 @@ func tcpRead(conn net.Conn) ([]byte, error) {
}
func TestNotifierServer(t *testing.T) {
args := makeDefaultArgs()
ctx := context.Background()
args := server.MakeDefaultTestArgs()
ctx := stop.NewDebug()
hub := server.MakeHubServer(ctx, args)
go hub.NotifierServer()
go hub.RunNotifier()
addr := fmt.Sprintf(":%s", args.NotifierPort)
addr := fmt.Sprintf(":%d", args.NotifierPort)
logrus.Info(addr)
conn, err := tcpConnReady(addr)
if err != nil {
@ -76,11 +96,14 @@ func TestNotifierServer(t *testing.T) {
// Hacky but needed because if the reader isn't ready
// before the writer sends it won't get the data
time.Sleep(time.Second)
err = subReady(hub)
if err != nil {
t.Fatal(err)
}
hash, _ := hex.DecodeString("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
logrus.Warn("sending hash")
hub.NotifierChan <- &internal.HeightHash{Height: 1, BlockHash: hash}
hub.NotifierChan <- internal.HeightHash{Height: 1, BlockHash: hash}
res := <-resCh
logrus.Info(string(res))


@ -11,6 +11,7 @@ import (
pb "github.com/lbryio/herald.go/protobuf/go"
server "github.com/lbryio/herald.go/server"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/olivere/elastic/v7"
)
@ -55,13 +56,14 @@ func TestSearch(t *testing.T) {
w.Write([]byte(resp))
}
context := context.Background()
args := makeDefaultArgs()
hubServer := server.MakeHubServer(context, args)
ctx := context.Background()
stopGroup := stop.NewDebug()
args := server.MakeDefaultTestArgs()
hubServer := server.MakeHubServer(stopGroup, args)
req := &pb.SearchRequest{
Text: "asdf",
}
out, err := hubServer.Search(context, req)
out, err := hubServer.Search(ctx, req)
if err != nil {
log.Println(err)
}


@ -4,27 +4,31 @@ import (
"context"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"hash"
"io/ioutil"
"log"
golog "log"
"net"
"net/http"
"os"
"regexp"
"strconv"
"sync"
"time"
"github.com/ReneKroon/ttlcache/v2"
"github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/herald.go/internal/metrics"
"github.com/lbryio/herald.go/meta"
pb "github.com/lbryio/herald.go/protobuf/go"
"github.com/lbryio/lbcd/chaincfg"
lbcd "github.com/lbryio/lbcd/rpcclient"
"github.com/lbryio/lbry.go/v3/extras/stop"
"github.com/olivere/elastic/v7"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
logrus "github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/reflection"
)
@ -35,6 +39,8 @@ type Server struct {
MultiSpaceRe *regexp.Regexp
WeirdCharsRe *regexp.Regexp
DB *db.ReadOnlyDBColumnFamily
Chain *chaincfg.Params
DaemonClient *lbcd.Client
EsClient *elastic.Client
QueryCache *ttlcache.Cache
S256 *hash.Hash
@ -50,7 +56,10 @@ type Server struct {
ExternalIP net.IP
HeightSubs map[net.Addr]net.Conn
HeightSubsMut sync.RWMutex
NotifierChan chan *internal.HeightHash
NotifierChan chan interface{}
Grp *stop.Group
notiferListener *net.TCPListener
sessionManager *sessionManager
pb.UnimplementedHubServer
}
@ -131,7 +140,8 @@ func (s *Server) PeerServersLoadOrStore(peer *Peer) (actual *Peer, loaded bool)
// Run "main" function for starting the server. This blocks.
func (s *Server) Run() {
l, err := net.Listen("tcp", ":"+s.Args.Port)
address := ":" + strconv.Itoa(s.Args.Port)
l, err := net.Listen("tcp", address)
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
@ -146,54 +156,94 @@ func (s *Server) Run() {
}
}
func LoadDatabase(args *Args) (*db.ReadOnlyDBColumnFamily, error) {
tmpName, err := ioutil.TempDir("", "go-lbry-hub")
func (s *Server) Stop() {
log.Println("Shutting down server...")
if s.EsClient != nil {
log.Println("Stopping es client...")
s.EsClient.Stop()
}
if s.GrpcServer != nil {
log.Println("Stopping grpc server...")
s.GrpcServer.GracefulStop()
}
log.Println("Stopping other server threads...")
s.Grp.StopAndWait()
if s.DB != nil {
log.Println("Stopping database connection...")
s.DB.Shutdown()
}
log.Println("Returning from Stop...")
}
func LoadDatabase(args *Args, grp *stop.Group) (*db.ReadOnlyDBColumnFamily, error) {
tmpName, err := os.MkdirTemp("", "go-lbry-hub")
if err != nil {
logrus.Info(err)
log.Info(err)
log.Fatal(err)
}
logrus.Info("tmpName", tmpName)
log.Info("tmpName", tmpName)
if err != nil {
logrus.Info(err)
log.Info(err)
}
myDB, _, err := db.GetProdDB(args.DBPath, tmpName)
// dbShutdown = func() {
// db.Shutdown(myDB)
// }
myDB, err := db.GetProdDB(args.DBPath, tmpName, grp)
if err != nil {
// Can't load the db, fail loudly
logrus.Info(err)
log.Info(err)
log.Fatalln(err)
}
if myDB.LastState != nil {
log.Infof("DB version: %v", myDB.LastState.DBVersion)
log.Infof("height: %v", myDB.LastState.Height)
log.Infof("genesis: %v", myDB.LastState.Genesis.String())
log.Infof("tip: %v", myDB.LastState.Tip.String())
log.Infof("tx count: %v", myDB.LastState.TxCount)
}
blockingChannelHashes := make([][]byte, 0, 10)
blockingIds := make([]string, 0, 10)
filteringChannelHashes := make([][]byte, 0, 10)
filteringIds := make([]string, 0, 10)
for _, id := range args.BlockingChannelIds {
hash, err := hex.DecodeString(id)
if err != nil {
logrus.Warn("Invalid channel id: ", id)
log.Warn("Invalid channel id: ", id)
continue
}
blockingChannelHashes = append(blockingChannelHashes, hash)
blockingIds = append(blockingIds, id)
}
for _, id := range args.FilteringChannelIds {
hash, err := hex.DecodeString(id)
if err != nil {
logrus.Warn("Invalid channel id: ", id)
log.Warn("Invalid channel id: ", id)
continue
}
filteringChannelHashes = append(filteringChannelHashes, hash)
filteringIds = append(filteringIds, id)
}
myDB.BlockingChannelHashes = blockingChannelHashes
myDB.FilteringChannelHashes = filteringChannelHashes
if len(filteringIds) > 0 {
log.Infof("filtering claims reposted by channels: %+s", filteringIds)
}
if len(blockingIds) > 0 {
log.Infof("blocking claims reposted by channels: %+s", blockingIds)
}
return myDB, nil
}
// MakeHubServer takes the arguments given to a hub when it's started and
// initializes everything. It loads information about previously known peers,
// creates needed internal data structures, and initializes goroutines.
func MakeHubServer(ctx context.Context, args *Args) *Server {
func MakeHubServer(grp *stop.Group, args *Args) *Server {
grpcServer := grpc.NewServer(grpc.NumStreamWorkers(0))
multiSpaceRe, err := regexp.Compile(`\s{2,}`)
@ -206,9 +256,34 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
log.Fatal(err)
}
var client *elastic.Client = nil
var lbcdClient *lbcd.Client = nil
if args.DaemonURL != nil && args.DaemonURL.Host != "" {
var rpcCertificate []byte
if args.DaemonCAPath != "" {
rpcCertificate, err = ioutil.ReadFile(args.DaemonCAPath)
if err != nil {
log.Fatalf("failed to read SSL certificate from path: %v", args.DaemonCAPath)
}
}
log.Warnf("connecting to lbcd daemon at %v...", args.DaemonURL.Host)
password, _ := args.DaemonURL.User.Password()
cfg := &lbcd.ConnConfig{
Host: args.DaemonURL.Host,
User: args.DaemonURL.User.Username(),
Pass: password,
HTTPPostMode: true,
DisableTLS: rpcCertificate == nil,
Certificates: rpcCertificate,
}
lbcdClient, err = lbcd.New(cfg, nil)
if err != nil {
log.Fatalf("lbcd daemon connection failed: %v", err)
}
}
var esClient *elastic.Client = nil
if !args.DisableEs {
esUrl := args.EsHost + ":" + args.EsPort
esUrl := args.EsHost + ":" + fmt.Sprintf("%d", args.EsPort)
opts := []elastic.ClientOptionFunc{
elastic.SetSniff(true),
elastic.SetSnifferTimeoutStartup(time.Second * 60),
@ -216,9 +291,9 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
elastic.SetURL(esUrl),
}
if args.Debug {
opts = append(opts, elastic.SetTraceLog(log.New(os.Stderr, "[[ELASTIC]]", 0)))
opts = append(opts, elastic.SetTraceLog(golog.New(os.Stderr, "[[ELASTIC]]", 0)))
}
client, err = elastic.NewClient(opts...)
esClient, err = elastic.NewClient(opts...)
if err != nil {
log.Fatal(err)
}
@ -242,21 +317,63 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
//TODO: is this the right place to load the db?
var myDB *db.ReadOnlyDBColumnFamily
// var dbShutdown = func() {}
if !args.DisableResolve {
myDB, err = LoadDatabase(args)
myDB, err = LoadDatabase(args, grp)
if err != nil {
logrus.Warning(err)
log.Warning(err)
}
}
// Determine which chain to use based on db and cli values
dbChain := (*chaincfg.Params)(nil)
if myDB != nil && myDB.LastState != nil {
// The chain params can be inferred from DBStateValue.
switch myDB.LastState.Genesis.Hash {
case *chaincfg.MainNetParams.GenesisHash:
dbChain = &chaincfg.MainNetParams
case *chaincfg.TestNet3Params.GenesisHash:
dbChain = &chaincfg.TestNet3Params
case *chaincfg.RegressionNetParams.GenesisHash:
dbChain = &chaincfg.RegressionNetParams
}
}
cliChain := (*chaincfg.Params)(nil)
if args.Chain != nil {
switch *args.Chain {
case chaincfg.MainNetParams.Name:
cliChain = &chaincfg.MainNetParams
case chaincfg.TestNet3Params.Name, "testnet":
cliChain = &chaincfg.TestNet3Params
case chaincfg.RegressionNetParams.Name:
cliChain = &chaincfg.RegressionNetParams
}
}
chain := chaincfg.MainNetParams
if dbChain != nil && cliChain != nil {
if dbChain != cliChain {
log.Warnf("network: %v (from db) conflicts with %v (from cli)", dbChain.Name, cliChain.Name)
}
chain = *dbChain
} else if dbChain != nil {
chain = *dbChain
} else if cliChain != nil {
chain = *cliChain
}
log.Infof("network: %v", chain.Name)
args.GenesisHash = chain.GenesisHash.String()
sessionGrp := stop.New(grp)
s := &Server{
GrpcServer: grpcServer,
Args: args,
MultiSpaceRe: multiSpaceRe,
WeirdCharsRe: weirdCharsRe,
DB: myDB,
EsClient: client,
Chain: &chain,
DaemonClient: lbcdClient,
EsClient: esClient,
QueryCache: cache,
S256: &s256,
LastRefreshCheck: time.Now(),
@ -271,27 +388,39 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
ExternalIP: net.IPv4(127, 0, 0, 1),
HeightSubs: make(map[net.Addr]net.Conn),
HeightSubsMut: sync.RWMutex{},
NotifierChan: make(chan *internal.HeightHash),
NotifierChan: make(chan interface{}, 1),
Grp: grp,
sessionManager: nil,
}
// FIXME: HACK
s.sessionManager = newSessionManager(s, myDB, args, sessionGrp, &chain, lbcdClient)
// Start up our background services
if !args.DisableResolve && !args.DisableRocksDBRefresh {
logrus.Info("Running detect changes")
log.Info("Running detect changes")
myDB.RunDetectChanges(s.NotifierChan)
}
if !args.DisableBlockingAndFiltering {
myDB.RunGetBlocksAndFilters()
}
if !args.DisableStartPrometheus {
go s.prometheusEndpoint(s.Args.PrometheusPort, "metrics")
go s.prometheusEndpoint(fmt.Sprintf("%d", s.Args.PrometheusPort), "metrics")
}
if !args.DisableStartUDP {
go func() {
err := s.UDPServer()
err := s.UDPServer(s.Args.Port)
if err != nil {
log.Println("UDP Server failed!", err)
log.Errorf("UDP Server (%d) failed! %v", s.Args.Port, err)
}
}()
if s.Args.JSONRPCPort != 0 {
go func() {
err := s.UDPServer(s.Args.JSONRPCPort)
if err != nil {
log.Errorf("UDP Server (%d) failed! %v", s.Args.JSONRPCPort, err)
}
}()
}
}
if !args.DisableStartNotifier {
go func() {
@ -307,6 +436,14 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
}
}()
}
if !args.DisableStartJSONRPC {
go func() {
err := s.StartJsonRPC()
if err != nil {
log.Println("JSONRPC Server failed!", err)
}
}()
}
// Load peers from disk and subscribe to one if there are any
if !args.DisableLoadPeers {
go func() {
@ -324,7 +461,7 @@ func MakeHubServer(ctx context.Context, args *Args) *Server {
// for this hub to allow for metric tracking.
func (s *Server) prometheusEndpoint(port string, endpoint string) {
http.Handle("/"+endpoint, promhttp.Handler())
log.Println(fmt.Sprintf("listening on :%s /%s", port, endpoint))
log.Printf("listening on :%s /%s\n", port, endpoint)
err := http.ListenAndServe(":"+port, nil)
log.Fatalln("Shouldn't happen??!?!", err)
}
@ -446,14 +583,24 @@ func (s *Server) HeightHashSubscribe() error {
return nil
}
// Resolve is the gRPC endpoint for resolve.
func (s *Server) Resolve(ctx context.Context, args *pb.StringArray) (*pb.Outputs, error) {
return InternalResolve(args.Value, s.DB)
}
// InternalResolve takes an array of urls and resolves them to their transactions.
func InternalResolve(urls []string, DB *db.ReadOnlyDBColumnFamily) (*pb.Outputs, error) {
if DB == nil {
return nil, errors.New("db is nil")
// return nil, nil
}
metrics.RequestsCount.With(prometheus.Labels{"method": "resolve"}).Inc()
allTxos := make([]*pb.Output, 0)
allExtraTxos := make([]*pb.Output, 0)
for _, url := range args.Value {
res := s.DB.Resolve(url)
for _, url := range urls {
res := DB.Resolve(url)
txos, extraTxos, err := res.ToOutputs()
if err != nil {
return nil, err
@ -472,7 +619,7 @@ func (s *Server) Resolve(ctx context.Context, args *pb.StringArray) (*pb.Outputs
BlockedTotal: 0, //TODO
}
logrus.Warn(res)
log.Warn(res)
return res, nil
}


@ -1,5 +1,11 @@
package server
import (
"github.com/lbryio/herald.go/db"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbry.go/v3/extras/stop"
)
func (s *Server) AddPeerExported() func(*Peer, bool, bool) error {
return s.addPeer
}
@ -7,3 +13,7 @@ func (s *Server) AddPeerExported() func(*Peer, bool, bool) error {
func (s *Server) GetNumPeersExported() func() int64 {
return s.getNumPeers
}
func NewSessionManagerExported(server *Server, db *db.ReadOnlyDBColumnFamily, args *Args, grp *stop.Group, chain *chaincfg.Params) *sessionManager {
return newSessionManager(server, db, args, grp, chain, nil)
}

631
server/session.go Normal file

@ -0,0 +1,631 @@
package server
import (
"bytes"
"encoding/hex"
"encoding/json"
"fmt"
"net"
"net/rpc"
"net/rpc/jsonrpc"
"strings"
"sync"
"time"
"unsafe"
"github.com/lbryio/herald.go/db"
"github.com/lbryio/herald.go/internal"
"github.com/lbryio/lbcd/chaincfg"
"github.com/lbryio/lbcd/chaincfg/chainhash"
lbcd "github.com/lbryio/lbcd/rpcclient"
"github.com/lbryio/lbcd/wire"
"github.com/lbryio/lbry.go/v3/extras/stop"
log "github.com/sirupsen/logrus"
)
type headerNotification struct {
internal.HeightHash
blockHeaderElectrum *BlockHeaderElectrum
blockHeaderStr string
}
type hashXNotification struct {
hashX [HASHX_LEN]byte
status []byte
statusStr string
}
type peerNotification struct {
address string
port string
}
type session struct {
id uintptr
addr net.Addr
conn net.Conn
// hashXSubs maps hashX to the original subscription key (address or scripthash)
hashXSubs map[[HASHX_LEN]byte]string
// headersSub indicates header subscription
headersSub bool
// peersSub indicates peer subscription
peersSub bool
// headersSubRaw indicates the header subscription mode
headersSubRaw bool
// client provides the ability to send notifications
client rpc.ClientCodec
clientSeq uint64
// lastRecv records time of last incoming data
lastRecv time.Time
// lastSend records time of last outgoing data
lastSend time.Time
}
func (s *session) doNotify(notification interface{}) {
var method string
var params interface{}
switch note := notification.(type) {
case headerNotification:
if !s.headersSub {
return
}
heightHash := note.HeightHash
method = "blockchain.headers.subscribe"
if s.headersSubRaw {
header := note.blockHeaderStr
if len(header) == 0 {
header = hex.EncodeToString(note.BlockHeader[:])
}
params = &HeadersSubscribeRawResp{
Hex: header,
Height: uint32(heightHash.Height),
}
} else {
header := note.blockHeaderElectrum
if header == nil { // not initialized
header = newBlockHeaderElectrum((*[HEADER_SIZE]byte)(note.BlockHeader), uint32(heightHash.Height))
}
params = header
}
case hashXNotification:
orig, ok := s.hashXSubs[note.hashX]
if !ok {
return
}
if len(orig) == 64 {
method = "blockchain.scripthash.subscribe"
} else {
method = "blockchain.address.subscribe"
}
status := note.statusStr
if len(status) == 0 {
status = hex.EncodeToString(note.status)
}
params = []string{orig, status}
case peerNotification:
if !s.peersSub {
return
}
method = "server.peers.subscribe"
params = []string{note.address, note.port}
default:
log.Warnf("unknown notification type: %v", notification)
return
}
// Send the notification.
s.clientSeq += 1
req := &rpc.Request{
ServiceMethod: method,
Seq: s.clientSeq,
}
err := s.client.WriteRequest(req, params)
if err != nil {
log.Warnf("error: %v", err)
}
// Bump last send time.
s.lastSend = time.Now()
}
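On the wire this produces a client-style JSON RPC request. For a raw headers subscription it would look roughly like the following (hex truncated, id taken from the session's clientSeq, field names assuming hex/height JSON tags on HeadersSubscribeRawResp):

{"method": "blockchain.headers.subscribe", "params": [{"hex": "00000020...", "height": 500}], "id": 7}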
type sessionMap map[uintptr]*session
type sessionManager struct {
// sessionsMut protects sessions, headerSubs, hashXSubs state
sessionsMut sync.RWMutex
sessions sessionMap
// sessionsWait sync.WaitGroup
grp *stop.Group
sessionsMax int
sessionTimeout time.Duration
manageTicker *time.Ticker
db *db.ReadOnlyDBColumnFamily
args *Args
server *Server
chain *chaincfg.Params
lbcd *lbcd.Client
// peerSubs are sessions subscribed via 'server.peers.subscribe'
peerSubs sessionMap
// headerSubs are sessions subscribed via 'blockchain.headers.subscribe'
headerSubs sessionMap
// hashXSubs are sessions subscribed via 'blockchain.{address,scripthash}.subscribe'
hashXSubs map[[HASHX_LEN]byte]sessionMap
}
func newSessionManager(server *Server, db *db.ReadOnlyDBColumnFamily, args *Args, grp *stop.Group, chain *chaincfg.Params, lbcd *lbcd.Client) *sessionManager {
return &sessionManager{
sessions: make(sessionMap),
grp: grp,
sessionsMax: args.MaxSessions,
sessionTimeout: time.Duration(args.SessionTimeout) * time.Second,
manageTicker: time.NewTicker(time.Duration(max(5, args.SessionTimeout/20)) * time.Second),
db: db,
args: args,
server: server,
chain: chain,
lbcd: lbcd,
peerSubs: make(sessionMap),
headerSubs: make(sessionMap),
hashXSubs: make(map[[HASHX_LEN]byte]sessionMap),
}
}
func (sm *sessionManager) start() {
sm.grp.Add(1)
go sm.manage()
}
func (sm *sessionManager) stop() {
sm.sessionsMut.Lock()
defer sm.sessionsMut.Unlock()
sm.headerSubs = make(sessionMap)
sm.hashXSubs = make(map[[HASHX_LEN]byte]sessionMap)
for _, sess := range sm.sessions {
sess.client.Close()
sess.conn.Close()
}
sm.sessions = make(sessionMap)
}
func (sm *sessionManager) manage() {
for {
sm.sessionsMut.Lock()
for _, sess := range sm.sessions {
if time.Since(sess.lastRecv) > sm.sessionTimeout {
sm.removeSessionLocked(sess)
log.Infof("session %v timed out", sess.addr.String())
}
}
sm.sessionsMut.Unlock()
// Wait for next management clock tick.
select {
case <-sm.grp.Ch():
sm.grp.Done()
return
case <-sm.manageTicker.C:
continue
}
}
}
func (sm *sessionManager) addSession(conn net.Conn) *session {
sm.sessionsMut.Lock()
sess := &session{
addr: conn.RemoteAddr(),
conn: conn,
hashXSubs: make(map[[HASHX_LEN]byte]string),
client: jsonrpc.NewClientCodec(conn),
lastRecv: time.Now(),
}
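// Derive a unique session id from the session object's own address.
// The sessions map holds the pointer, so the object (and its id) stay stable.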
sess.id = uintptr(unsafe.Pointer(sess))
sm.sessions[sess.id] = sess
sm.sessionsMut.Unlock()
// Create a new RPC server. These services are linked to the
// session, which allows RPC handlers to know the session for
// each request and update subscriptions.
s1 := rpc.NewServer()
// Register "server.{features,banner,version}" handlers.
serverSvc := &ServerService{sm.args}
err := s1.RegisterName("server", serverSvc)
if err != nil {
log.Errorf("RegisterName: %v\n", err)
}
// Register "server.peers" handlers.
peersSvc := &PeersService{Server: sm.server}
err = s1.RegisterName("server.peers", peersSvc)
if err != nil {
log.Errorf("RegisterName: %v\n", err)
}
// Register "blockchain.claimtrie.*"" handlers.
claimtrieSvc := &ClaimtrieService{sm.db, sm.server}
err = s1.RegisterName("blockchain.claimtrie", claimtrieSvc)
if err != nil {
log.Errorf("RegisterName: %v\n", err)
}
// Register "blockchain.{block,address,scripthash,transaction}.*" handlers.
blockchainSvc := &BlockchainBlockService{sm.db, sm.chain}
err = s1.RegisterName("blockchain.block", blockchainSvc)
if err != nil {
log.Errorf("RegisterName: %v\n", err)
goto fail
}
err = s1.RegisterName("blockchain.headers", &BlockchainHeadersService{sm.db, sm.chain, sm, sess})
if err != nil {
log.Errorf("RegisterName: %v\n", err)
goto fail
}
err = s1.RegisterName("blockchain.address", &BlockchainAddressService{sm.db, sm.chain, sm, sess})
if err != nil {
log.Errorf("RegisterName: %v\n", err)
goto fail
}
err = s1.RegisterName("blockchain.scripthash", &BlockchainScripthashService{sm.db, sm.chain, sm, sess})
if err != nil {
log.Errorf("RegisterName: %v\n", err)
goto fail
}
err = s1.RegisterName("blockchain.transaction", &BlockchainTransactionService{sm.db, sm.chain, sm})
if err != nil {
log.Errorf("RegisterName: %v\n", err)
goto fail
}
sm.grp.Add(1)
go func() {
s1.ServeCodec(&sessionServerCodec{jsonrpc.NewServerCodec(newJsonPatchingCodec(conn)), sess})
log.Infof("session %v goroutine exit", sess.addr.String())
sm.grp.Done()
}()
return sess
fail:
sm.removeSession(sess)
return nil
}
func (sm *sessionManager) removeSession(sess *session) {
sm.sessionsMut.Lock()
defer sm.sessionsMut.Unlock()
sm.removeSessionLocked(sess)
}
func (sm *sessionManager) removeSessionLocked(sess *session) {
if sess.headersSub {
delete(sm.headerSubs, sess.id)
}
for hashX := range sess.hashXSubs {
subs, ok := sm.hashXSubs[hashX]
if !ok {
continue
}
delete(subs, sess.id)
}
delete(sm.sessions, sess.id)
sess.client.Close()
sess.conn.Close()
}
func (sm *sessionManager) broadcastTx(rawTx []byte) (*chainhash.Hash, error) {
var msgTx wire.MsgTx
err := msgTx.Deserialize(bytes.NewReader(rawTx))
if err != nil {
return nil, err
}
return sm.lbcd.SendRawTransaction(&msgTx, false)
}
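This backs the blockchain.transaction.broadcast handler: assuming the transaction handler hex-decodes the param before calling broadcastTx, a client request might look like this (raw tx hex elided, id illustrative):

{"id": 3, "method": "blockchain.transaction.broadcast", "params": ["0100000001..."]}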
func (sm *sessionManager) peersSubscribe(sess *session, subscribe bool) {
sm.sessionsMut.Lock()
defer sm.sessionsMut.Unlock()
if subscribe {
sm.peerSubs[sess.id] = sess
sess.peersSub = true
return
}
delete(sm.peerSubs, sess.id)
sess.peersSub = false
}
func (sm *sessionManager) headersSubscribe(sess *session, raw bool, subscribe bool) {
sm.sessionsMut.Lock()
defer sm.sessionsMut.Unlock()
if subscribe {
sm.headerSubs[sess.id] = sess
sess.headersSub = true
sess.headersSubRaw = raw
return
}
delete(sm.headerSubs, sess.id)
sess.headersSub = false
sess.headersSubRaw = false
}
func (sm *sessionManager) hashXSubscribe(sess *session, hashX []byte, original string, subscribe bool) {
sm.sessionsMut.Lock()
defer sm.sessionsMut.Unlock()
var key [HASHX_LEN]byte
copy(key[:], hashX)
subs, ok := sm.hashXSubs[key]
if subscribe {
if !ok {
subs = make(sessionMap)
sm.hashXSubs[key] = subs
}
subs[sess.id] = sess
sess.hashXSubs[key] = original
return
}
if ok {
delete(subs, sess.id)
if len(subs) == 0 {
delete(sm.hashXSubs, key)
}
}
delete(sess.hashXSubs, key)
}
func (sm *sessionManager) doNotify(notification interface{}) {
switch note := notification.(type) {
case internal.HeightHash:
// The HeightHash notification translates to headerNotification.
notification = &headerNotification{HeightHash: note}
}
sm.sessionsMut.RLock()
var subsCopy sessionMap
switch note := notification.(type) {
case headerNotification:
log.Infof("header notification @ %#v", note)
subsCopy = sm.headerSubs
if len(subsCopy) > 0 {
hdr := [HEADER_SIZE]byte{}
copy(hdr[:], note.BlockHeader)
note.blockHeaderElectrum = newBlockHeaderElectrum(&hdr, uint32(note.Height))
note.blockHeaderStr = hex.EncodeToString(note.BlockHeader[:])
}
case hashXNotification:
log.Infof("hashX notification @ %#v", note)
hashXSubs, ok := sm.hashXSubs[note.hashX]
if ok {
subsCopy = hashXSubs
}
if len(subsCopy) > 0 {
note.statusStr = hex.EncodeToString(note.status)
}
case peerNotification:
subsCopy = sm.peerSubs
default:
log.Warnf("unknown notification type: %v", notification)
}
sm.sessionsMut.RUnlock()
// Deliver notification to relevant sessions.
for _, sess := range subsCopy {
sess.doNotify(notification)
}
// Produce secondary hashXNotification(s) corresponding to the headerNotification.
switch note := notification.(type) {
case headerNotification:
touched, err := sm.db.GetTouchedHashXs(uint32(note.Height))
if err != nil {
log.Errorf("failed to get touched hashXs at height %v, error: %v", note.Height, err)
break
}
for _, hashX := range touched {
hashXstatus, err := sm.db.GetStatus(hashX)
if err != nil {
log.Errorf("failed to get status of hashX %v, error: %v", hashX, err)
continue
}
note2 := hashXNotification{}
copy(note2.hashX[:], hashX)
note2.status = hashXstatus
sm.doNotify(note2)
}
}
}
type sessionServerCodec struct {
rpc.ServerCodec
sess *session
}
// ReadRequestHeader provides ability to rewrite the incoming
// request "method" field. For example:
//
// blockchain.block.get_header -> blockchain.block.Get_header
// blockchain.address.listunspent -> blockchain.address.Listunspent
//
// This makes the "method" string compatible with rpc.Server
// requirements.
func (c *sessionServerCodec) ReadRequestHeader(req *rpc.Request) error {
log.Infof("from %v receive header", c.sess.addr.String())
err := c.ServerCodec.ReadRequestHeader(req)
if err != nil {
log.Warnf("error: %v", err)
return err
}
log.Infof("from %v receive header: %#v", c.sess.addr.String(), *req)
rawMethod := req.ServiceMethod
parts := strings.Split(rawMethod, ".")
if len(parts) < 2 {
return fmt.Errorf("blockchain rpc: service/method ill-formed: %q", rawMethod)
}
service := strings.Join(parts[0:len(parts)-1], ".")
method := parts[len(parts)-1]
if len(method) < 1 {
return fmt.Errorf("blockchain rpc: method ill-formed: %q", method)
}
method = strings.ToUpper(string(method[0])) + string(method[1:])
req.ServiceMethod = service + "." + method
return err
}
// ReadRequestBody wraps the regular implementation, but updates session stats too.
func (c *sessionServerCodec) ReadRequestBody(params any) error {
log.Infof("from %v receive body", c.sess.addr.String())
err := c.ServerCodec.ReadRequestBody(params)
if err != nil {
log.Warnf("error: %v", err)
return err
}
log.Infof("from %v receive body: %#v", c.sess.addr.String(), params)
// Bump last receive time.
c.sess.lastRecv = time.Now()
return err
}
// WriteResponse wraps the regular implementation, but updates session stats too.
func (c *sessionServerCodec) WriteResponse(resp *rpc.Response, reply any) error {
log.Infof("respond to %v", c.sess.addr.String())
err := c.ServerCodec.WriteResponse(resp, reply)
if err != nil {
return err
}
// Bump last send time.
c.sess.lastSend = time.Now()
return err
}
// serverRequest is a duplicate of serverRequest from
// net/rpc/jsonrpc/server.go with an added Version which
// we can check.
type serverRequest struct {
Version string `json:"jsonrpc"`
Method string `json:"method"`
Params *json.RawMessage `json:"params"`
Id *json.RawMessage `json:"id"`
}
// serverResponse is a duplicate of serverResponse from
// net/rpc/jsonrpc/server.go with an added Version which
// we can set at will.
type serverResponse struct {
Version string `json:"jsonrpc"`
Id *json.RawMessage `json:"id"`
Result any `json:"result,omitempty"`
Error any `json:"error,omitempty"`
}
// jsonPatchingCodec is able to intercept the JSON requests/responses
// and tweak them. Currently, it appears we need to make several changes:
// 1) add "jsonrpc": "2.0" (or "jsonrpc": "1.0") in response
// 2) add newline to frame response
// 3) add "params": [] when "params" is missing
// 4) replace params ["arg1", "arg2", ...] with [["arg1", "arg2", ...]]
type jsonPatchingCodec struct {
conn net.Conn
inBuffer *bytes.Buffer
dec *json.Decoder
enc *json.Encoder
outBuffer *bytes.Buffer
}
func newJsonPatchingCodec(conn net.Conn) *jsonPatchingCodec {
buf1, buf2 := bytes.NewBuffer(nil), bytes.NewBuffer(nil)
return &jsonPatchingCodec{
conn: conn,
inBuffer: buf1,
dec: json.NewDecoder(buf1),
enc: json.NewEncoder(buf2),
outBuffer: buf2,
}
}
func (c *jsonPatchingCodec) Read(p []byte) (n int, err error) {
if c.outBuffer.Len() > 0 {
// Return remaining decoded bytes.
return c.outBuffer.Read(p)
}
// Buffer contents consumed. Try to decode more JSON.
// Read until framing newline. This allows us to print the raw request.
for !bytes.ContainsAny(c.inBuffer.Bytes(), "\n") {
var buf [1024]byte
n, err = c.conn.Read(buf[:])
if err != nil {
return 0, err
}
c.inBuffer.Write(buf[:n])
}
log.Infof("raw request: %v", c.inBuffer.String())
var req serverRequest
err = c.dec.Decode(&req)
if err != nil {
return 0, err
}
if req.Params != nil {
n := len(*req.Params)
if n < 2 || (*req.Params)[0] != '[' && (*req.Params)[n-1] != ']' {
// This is an error, but we're not going to try to correct it.
goto encode
}
// FIXME: The heuristics here don't cover all possibilities.
// For example: [{obj1}, {obj2}] or ["foo,bar"] would not
// be handled correctly.
bracketed := (*req.Params)[1 : n-1]
n = len(bracketed)
if n > 1 && (bracketed[0] == '{' || bracketed[0] == '[') {
// Probable single object or list argument.
goto encode
}
// The params look like ["arg1", "arg2", "arg3", ...].
// We're in trouble because our jsonrpc library does not
// handle this. So pack these args in an inner list.
// The handler method will receive ONE list argument.
params := json.RawMessage(fmt.Sprintf("[[%s]]", bracketed))
req.Params = &params
} else {
// Add empty argument list if params omitted.
params := json.RawMessage("[]")
req.Params = &params
}
encode:
// Encode the request. This allows us to print the patched request.
buf, err := json.Marshal(req)
if err != nil {
return 0, err
}
log.Infof("patched request: %v", string(buf))
err = c.enc.Encode(req)
if err != nil {
return 0, err
}
return c.outBuffer.Read(p)
}
func (c *jsonPatchingCodec) Write(p []byte) (n int, err error) {
log.Infof("raw response: %v", string(p))
var resp serverResponse
err = json.Unmarshal(p, &resp)
if err != nil {
return 0, err
}
// Add "jsonrpc": "2.0" if missing.
if len(resp.Version) == 0 {
resp.Version = "2.0"
}
buf, err := json.Marshal(resp)
if err != nil {
return 0, err
}
log.Infof("patched response: %v", string(buf))
// Add newline for framing.
return c.conn.Write(append(buf, '\n'))
}
func (c *jsonPatchingCodec) Close() error {
return c.conn.Close()
}
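To make the four patching rules concrete, a sketch of the rewriting (whitespace added, values illustrative):

raw request:      {"method": "server.version", "params": ["lbry-sdk", "0.102.0"], "id": 1}
patched request:  {"jsonrpc": "", "method": "server.version", "params": [["lbry-sdk", "0.102.0"]], "id": 1}

raw response:     {"id": 1, "result": ["0.107.2", "0.107.2"], "error": null}
patched response: {"jsonrpc": "2.0", "id": 1, "result": ["0.107.2", "0.107.2"]} followed by a newline for framing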


@ -219,8 +219,9 @@ func UDPPing(ip, port string) (*SPVPong, error) {
// UDPServer is a goroutine that starts a UDP server implementing the hub's
// Ping/Pong protocol, so hubs can discover each other without making full TCP
// connections.
func (s *Server) UDPServer() error {
address := ":" + s.Args.Port
func (s *Server) UDPServer(port int) error {
address := ":" + strconv.Itoa(port)
tip := make([]byte, 32)
addr, err := net.ResolveUDPAddr("udp", address)
if err != nil {


@ -11,7 +11,7 @@ import (
// TestUDPPing tests UDPPing correctness against prod server.
func TestUDPPing(t *testing.T) {
args := makeDefaultArgs()
args := server.MakeDefaultTestArgs()
args.DisableStartUDP = true
tests := []struct {
@ -39,10 +39,10 @@ func TestUDPPing(t *testing.T) {
toPort := "50001"
pong, err := server.UDPPing(toAddr, toPort)
gotCountry := pong.DecodeCountry()
if err != nil {
log.Println(err)
t.Skipf("ping failed: %v", err)
}
gotCountry := pong.DecodeCountry()
res, err := exec.Command("dig", "@resolver4.opendns.com", "myip.opendns.com", "+short").Output()


@ -8,12 +8,13 @@ import (
"os"
"os/signal"
"github.com/lbryio/lbry.go/v3/extras/stop"
log "github.com/sirupsen/logrus"
)
// shutdownRequestChannel is used to initiate shutdown from one of the
// subsystems using the same code paths as when an interrupt signal is received.
var shutdownRequestChannel = make(chan struct{})
var shutdownRequestChannel = make(stop.Chan)
// interruptSignals defines the default signals to catch in order to do a proper
// shutdown. This may be modified during init depending on the platform.
@ -48,7 +49,6 @@ func interruptListener() <-chan struct{} {
case sig := <-interruptChannel:
log.Infof("Received signal (%s). Already "+
"shutting down...", sig)
case <-shutdownRequestChannel:
log.Info("Shutdown requested. Already " +
"shutting down...")


@ -1,6 +1,6 @@
S,,
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd02000a6b67006286030000,0000007615cbad28
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd02000a706a0063105c0000,000000000bebc200
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd01000a6b67006286030000,0000007615cbad28
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd01000a706a0063105c0000,000000000bebc200
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd02000a73ea006367550000,0000000005f5e100
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd02000a7d63006469750000,0000000db0b7c894
S,532556ed1cab9d17f2a9392030a9ad7f5d138f11bd02000a7ebf00648c480000,00000000b2d05e00


18
testdata/Si_resolve.csv vendored Normal file

@ -0,0 +1,18 @@
Si,,
S,53000000a420c44374f4f399ab4807fa1901eefc8701000e94ad0297ec210000,00000000000f4240
S,53000000c27eef5ea69e0d73f118826c7e326bb46901000773de00371d660000,000000001dcd6500
S,5300000110e40894573f528c393fbcec7a472ec85301000d069c01516b320000,0000000000989680
S,5300000324e40fcb63a0b517a3660645e9bd99244a01000f2fd8030bb6ba0000,00000000000f4240
S,5300000324e40fcb63a0b517a3660645e9bd99244a02000f2ff4030bc8a50000,0000000001312d00
S,5300000324e40fcb63a0b517a3660645e9bd99244a02000f2ff6030bc8b00000,0000000000000003
S,5300000324e40fcb63a0b517a3660645e9bd99244a02000f2ff7030bc8b10000,0000000000000002
S,5300000324e40fcb63a0b517a3660645e9bd99244a02000f2ff9030bc8cf0000,0000000000000001
S,53000003d1538a0f19f5cd4bc1a62cc294f5c8993401000c816a011c7c990000,00000000000f4240
S,53000008d47beeff8325e795a8604226145b01702b01000ef1ed02dbb2a20000,00000000000186a0
S,5300000906499e073e94370ceff37cb21c2821244401000fa7c40369842d0000,00000000000186a0
S,5300000906499e073e94370ceff37cb21c2821244402000fa7c403698fff0000,00000000000000a1
S,5300000906499e073e94370ceff37cb21c2821244402000fa7c80369f0010000,000000000000000f
S,53000009c3172e034a255f3c03566dca84bb9f046a01000e07020225c69c0000,000000000007a120
S,53000009ca6e0caaaef16872b4bd4f6f1b8c2363e201000eb5af02b169560000,00000000000f4240
i,6900000324e40fcb63a0b517a3660645e9bd99244a,0000000001406f460000000001312d06
i,6900000906499e073e94370ceff37cb21c28212444,000000000001875000000000000000b0

21
testdata/c.csv vendored Normal file

@ -0,0 +1,21 @@
c,
631457da9061c90a8fd211994ba8e3701a76c43fa66937673f,e41d47b10d8b768793c75e4b2bb35784
632de81b0213a1b6e390e4c1859dba94b2b0e9a74e360a2b9e,1326b6b9eb9ad8ecc591aa54c365dafa
63325af388b77d3ed3df8a5b1483b83fb0b5153ad51de15ac0,b3985bb638840f1c0c7aadaa32848fc1
6339c7574004d908068b73e2f898a241dceaa19d2e4f5fd2c6,b55b277d1598b93cad3cbcdbdc796c04
6363d895c26d023913ae5c84680d8acbf0c4b2dd6fa1842a2c,9c33af364b69814cc868fadc48547ef9
637d45cd2b29ba27353f889660780d2c5edd0d490058c06dd1,6597a63fa0de8aaf717e031029830cc1
637e1d5b825273eaf7457f40d97fc18ab2f99e25552e14e185,2c9e0e7145297d8eaee06f36567a529c
638151f59e498873ef82ef0271186f0b60b9ceeaa10aec120e,9b64b4276a1059e4ecf19560b566d503
6384e22f9b0fc6f63c9a221786091ecf02b0df2925895b8132,e12f4a8a130f1419ff4ae3a9bb8a31ee
63a92ad4fe7abbf72db94f49092764329c4d9b5cf30115eb2a,152300368cecfaf42debe1e7cccba9cc
63ab7cc5574087640b78b46e9548cfbefabc581e479883eb70,1f8e2f0abf79e263c3bd3fa29085f454
63b7cceb793d1e8a3729c9f9bc7a580b7d3d1b42a3c13c5e99,fb5b20d556d3362da5e4e880b8feec7a
63b9b943c661dfad86644fdf34d956273996a261692227d6a9,8b4aeb0ad6f6275025df1fb2a173c5a7
63bba32a7015a47db0da6381c30f95200858637fb82cf367ee,83841279d3c9a345e87f02ba431479fe
63beea81eeec6fadf422df5800d013278ccd351dc77cabf363,d3ea0bcc5e7a5453855d96220fc02e97
63bf6872e4541eaa7ffe0659e11eff43520a6571a634576c56,d01ae01321c2617c17767446f624a348
63cce2b1651ed5575052abbb75747d059b5a54e09c7a330b56,46a4dbf4d155da400b30038a0ccd3bdc
63d5165b6b9c42249409c8e616fc17481bd296f69d0b4564f2,a18bff62b8cbe7aea8a46aa2e83432a3
63e616d85d1425ea0686aa58438ff416db5176da015cef2eb3,8c1e763b02f9f3f1b4c6f0e5dd18cb19
63f5476e70301ba6fdd6d0317b2c03d678e2623ee66fd4110a,f04df6c132e1d2d14feeb17ca34b65f3

21
testdata/d.csv vendored Normal file

@ -0,0 +1,21 @@
d,
64188d8e8e56c823919ba5eea5b60d0e2a27b313b314a83cd79ec882e042ba47d1,27f60d5852ab8e9538b5c35891ebd915c14b02a679607b01ae33e040a816685fba36f7e9918136dba9999c13cc
64254b85d06da94e2c7723699a684dfcf38664bcadb4e6aa35541cd5b2975bbcb9,fbc9d8e21a2192182aba69c73a6e3f7f56ba2fac8a634ef1f0b16625a12db3757c27dbddd74c3e598005a7c529f13410d4ff3a02456164e973040dec661f78106441
642984b5855a4a1894d881f82d3703f184e6c1b380daa5d09147c98c1b71bee9ea,3ff17d6d132128a85f8262399a6ee09401672ec20e668ff70fe63024753d8b9ecd915720e2fc4b52d857034b066c2e316ab2d2d3c77d20649bfdd1e86d7f0ffa1b44302989e1f103470aebbaf4
64299c1c1b5dabf41bd83f3c91efce9eb5c0acd635dc6e669b42c3bf27cc4dc418,144ab7485a18bdfc8ed9543e1d5783941d602f9b012441da55f028b37d679f046173b4ab1c10e424
6435d0497f800004c1a23d3471242dbcf8012eb45792621e2185d675b1c3a21021,a03bf241d35ac46c51aad53c83b2f445fc8e97654e843b0d83b0ba85b0d8130c9e7c7b13bb4d6157f5f73df8c80e4f4851d29c0501e8fcba518d3dbd80c0e87e94ec1bc781e0f6092fd0d4749c418afd
644515ee2686c2e0410a965fae5a8ff3e707bab2ba3969d9557ab529aa219da650,662ce7d0284408744733f63ea84cb9db34413f261913c3fce59933a196458b3a1e9b52a636af1fb778a0edaedae51be1aedb09b9d605e1e7ef8c0da3e8eba9b99d723a9c1635473554b0bf45db5fb790a110f0d3f89cbe
6458f48aa991fc0a2c6f79f138fcc758646b025fce9d02525ee077dbbb56c64043,a48b7d67a08ebf8a9298c7b6576a1daae2e0b8fcc35fc95bd7097c54fed39df5bab602e389e1378523688109525e8be4b23d
645b00b38d41e9e74d7af8b88c6840deacd9af74a25de3f352440b0087a111af2e,0d6b55f6eae73445f41335666b345be2afc15989331f8478efd86f7c420d7f71cd6a23723a25c1da963dce93e5993a74529a4cddced9ca3a6ede21b597ba2c26d2
645c00301ef63070ab0912e3378b2d59d19953a74143b584d686e59638ede0250c,16fa8a614ee7bc188c92772bd8f41311e518ea04a4063eae2e3f0ac6c86fcb34a821afe711c4cabe6a6b4245dec139
645c241e29e0a3e406f4a908faa7d39df87c91190fb3e073b006d22f6695735873,84b2dd6db4cdd508d31f4fa0ca561f90d0cdffdb958cf8a5d297260d
6468c52a1fbf769451bcd1c99021ee0b309ae67bb5f03e83ab50674bb959e5845c,ae39e4716dc15ece68c57794720d787193b28632e13dea5050e95f1f251674370ef3aa64
646acbb4b11cfa5ead5a2c38515ace8f4fc87d39c3cf8866401900ee822e8ce238,c31db7d0ce2537e1fe0c6fc9cd4e84d5c9f73df537425f1035938fa49fb0f9334f86be59b8
6478d257a7fd6779ad36b351e88cc9f34e55cf8d200bc3f095505168d31dafc21c,f8e3051555b19ecc5af92ba46f7db73190d9e1e0ecf84c259cad97371480ea3c7c5036157fad5c1d0d008bf1ab4ae558b78f4426a9303cc53401b9085b5c23966f48fbb1d76809ea3376e3d08a6d10b048d06da6a5ff32
64b099e855102c54d054907e42637536b93f2b5c8482795a4d89bd420dff876fe3,19bfabe9d9633c1741bf051db2ba9b0d0b265a66ac9869ce
64b567cd2cb2d61062b66aeb2364f7bf3fc706f67ecf34674fdfc0b793587c6e3b,ccfc02a82b2e0f925a53aff5c040e610af1eee11f2aba92a9ce57e975c1937fb7888e9da98712bc5be906f0ed4946077f4ecb7d5c2fd167d892a67
64bfd045aaaeded94be7a756ca44bf3c3b1825c32ce8df02023ba5349aab3cae4e,2a890e23f7282e5d38f5575e83d72b369c365a4772b0f109ce
64c3fbfe842cf0e183d79b9340da544ac8afeee1351f4d67ba407afd0db8dc20b7,df3b8fc3e4b169c0cbeeb701ddc8a50ea4dab3ce5a32553bc5be28e5cd1c65a76669fa71c141c639965f8a7d71ef93f2a193cf9025a67509ac7bae8152a6e36a3c283e3186dc35ed11de23810a1cbe13b0889f465b8e70dfc96671821a4504c0
64c610888ad1cb913b13be9f52e51269bfa664862b213d102838cfa04350eb3431,7a065900bc937ec5426525b13375ccc7f07b1230a3369eb6a107ba5a253182a2660ebe7f45
64d41e007768c674b134ff3f75b7c682a08fe673929673a445cd2e176b63d5aff5,9fd9c6ceee853474dbd77c73640befc524d8e3f3
64ee07557244e772cf9384d37ace73921388c05a8cadcab8aa17e82935bd5b95a7,4f396aef717bd3b9f57ca99af6db26114794c059472b8951dfe0cf588f35c7c74a91dbbac4f26faa565c18fb5b7d0ddbef53ae92945bf74e3f81a453d6b16b03208dbf5ae310f0

testdata/e.csv vendored Normal file (21 additions)

@@ -0,0 +1,21 @@
e,
6500f23ec1,7b471b15ac811403113bf4
654b6af788,7c38d58c240503b936f4c1204a4ed317680f6fbc09c95c4d6ab2598f31d3e09e9a
654dceae45,2b36ece4081037b0ec8136d4a41a667f9736548ff85892fb178ed0008ea17fe7582985b489d9d3c455d23b1b
65673f9cef,8cc057ce0c7190316c9269a6e2807e63417637b5f82eef1f94762e584191166662f6a446199ab950a6b96a98
656845f85a,4ef94f090853d39618c561f4d6b1dab800b3fd46b95c56641079f36f8e3d8c3d24126ef86be8d456e93a5d4c
656fd477dc,08e664da615c0dd584b91e210848ea2949dc60c555bc
6575c86b58,421fb2a0f544ae76b850b45af8749b65eb5880fca17f6ba9b70cc9f6746cf04632
6585892310,c2043f7e7ff3b392d46c381682da2f60baf85c34ed6e9f5a2a5cced6f972b9847b
659459b414,8f8a3713c0abe3c94ef3aa4b449693df448683aa6192395d4bd61c66ef71f69e89
659839e3bd,6baddd761d7c6b8bbc8dce4f7a0240f4db5bbe19b9eb0874ff3b8c1d0fd5ba48ff
65a0e881ac,c7ccd582382f46df2095dff1d484af80f40fff68a3a92397d413a9818260e18cd40d2d35b4072dea89eb0d08
65b4164cd2,6b8bcfd57d29fb94128767b24e4b09f3f6fbf1773785
65b8989fc8,7e712054cbb6dc0e292684
65b9996832,997ed9e6c10df1c78f3e1f
65d805f1ba,3af5fcf80e392d3daec547de5d9171d9c24a79c5e3cc5551ea432377c277f58aa0
65edc9cdf2,7e37479e9bb38fc69e1b0d
65ef0d9209,c88ffcfba33856508b4ba58c82b65cf60927ffaa45faf1f671b27965ab7e87fc4e
65f2b2764b,2a5cc7a625a03a55170954202ba6a95675acbb79897a79256c6913deeb583918198769fe1e2e4c2802623315
65f72d65f3,77ef24d0a1a6d1c17580a8612cccd8398148834ff341
65ffbd56f8,2a015033fd5beb3320f748a4589a5eb81d9a5241ab3c561341f1ae2de993957dc29a273e6056c5676e5ebabc

testdata/f.csv vendored Normal file (21 additions)

@@ -0,0 +1,21 @@
f,
660d649ba1defa4ab5ab71f8,919be5811844077f4660af66afa9a59a5ad17cf5c541524e780fe2137bfa250c
6623c6895027f70a5330bbcb,8dadcde1a6f676d4004eacd399f825006ddf136d1e92b1c92113377b3e1741b4
664f095b24484ebce8f31fbf,c0c4a751f569c1f9c01531f57ba674b2ad2338d9c08f9e9fc85b0209d15466b2
665201a38de7d7243df717c9,d9293577cc0d51fe3a5bee78fea9b2b2222e6c2aa0d26a4ef4bfb7dd095587e8
665328b2449e537b0ca4733f,624f80a361e47c7eb1b815e8714a40f67b4f642a5546547a3fcb5bf5593d8fab
665ec882021f55b1fbaa5fad,1e917fbc04385290d654f711bdef12773dd54b6b5ea26fe2a9d58ed051f2cb7f
6671c131cd433750ba6d3908,a2ebfbdf7a23024c340a45f201645aa46f48bc1fdd8d34ed83fcffbf1ee90523
667fb93d9ae877ba11f337f2,4710649e06619e13250754937e9c17c20b07434751171aac2f2f78b184aa0146
668ed5f39a5db059dc326137,8dd8ca749b87f43e290904749a546fe319c9d53e765f065bb8beb234a117655e
66951782f6ba94f2b71e46d0,4f5c9434dd0886c57c2530991cebd973e1b50d5ba8fcfc019e54561217a49bbb
66970565dfe2b01cad49b73a,f6ca0ae18c896d9bc97c5a9d0c3a06256485f59c77fb91780b213f933b80f48b
669f6a30a6712062da0cc271,5c6604bfd63b871daceb7893dd618850458974fe4108871c1a1323fb8ae34e4e
66a9a7b89b78553592acf3df,0561f28c3a5ea0027ecb3c53fa068772a6b7cb73d23104a14f9aba8cd1f070a2
66aba81567ba48f001f843f0,b0f6ae2c1db8263f7e11fc79423109e718d1f3c30bd123c4243401b5e4f1fee6
66b569cc3d28be4466fb28d1,ecee392ad8217f325508ba38d280436fb0a520b79a9627e5e18197bf55540885
66d4662cd100d66055917d63,5762a8ac767fa30d2ca76db7081f8a2e4f5da4f0bf92d29e1322da9a154cc3d6
66d6fa6ac71d0255dd3f185d,5fc193e5e51b3bd8e95f4eb9df63236da7abf678fc47c0b339ceb5c127d0f488
66e5b6c7c231a02a32eedd83,58c70ffbfada12550f24bf7931cee06eb2e267dec3560e2e46843e383415f163
66e673cce02c2163f756491e,b8db43d1f6e62361e2e3b8fa765f79c08ddfb3035caa06f8250d6d1b063a7140
66fc4ad75184e6029c805d94,fc7ac5e785f73732d95183d6bdc3423d41a074fc3f04b1304bae1efa652edde1

testdata/g.csv vendored Normal file (21 additions)

@@ -0,0 +1,21 @@
g,
6702c124856d5168381a3297,575696fd653a4de2f9a8c1f580cf0c229631b0f5d95fceb354cda133e2eb2d34
6707f1511e3a2cb28493f91b,ba368e0f859ee36da8701df1c0b52cbf0c0f8a4b1a91f6d0db83a408f5a937d1
6707fd4213cae8d5342a98ba,bd3a44d30f66444f8732119bc7e0cf0bb47f8f0ab2840987fc06b629f3e6d3f4
6710294a5693224a6222404b,de35a8ea0a26d17445e2f509db23188961b5cd1229b96d2411565adf63731b5c
6716a9f84e02143b50d9034a,5823640ae4529f8df2dab20386c887d0a1ba1ffa4583b99dff761c01f670c2fa
672e51bc65c9b97d482b0b72,0687df449bd8cb8d8f526f4189973d084d786ab0927d81c127f56b03c61aa955
67682620db65932047689e5e,b262d40758edb28d1c04fa3a24d8268990516de6846ad94d002ce55640866239
676e8c320dbbf5eebc2969a9,c9e2a8e7181a70e2a488b884c8baadb4043a075c6876cb012c67fbec5aa9f615
6772e2ac48891ee3c2c72783,985a9c9ee7a0626d78dab431e663289762ce6959be314f91f7b08b1466097fd6
67847dd1dac117b85d1e20d9,62e6b1b8c2961703a90276dcde6dad182b2d14e23f27dccc927cca7770b9890e
678f49948c72b7295f12092a,2e7c456dac5206c5627736924e96ac016a09a88ec5f4835fbe0cf9e294611c88
67948b9633ab2ec07d752593,66b5c54b3a685de3ea18f9e69254eec065eb3207ac1f93494fdcd585e9a267a0
679674c162db8d3bb57c434f,05425880d80258f7441859b3494415a3fd7398c9e209a19674abd48372b283c6
67a8d3f17df85502bd644a36,1efce69a3a05c505e9f9cc5c2241d02099c043d934389b430fd8b185e6dfe6cb
67bad7f4fb3c6828b6fc4624,04a1c0a7ffe7acbf974ca18cf3debbd8e1be3d6703f842f57ef14af6d4c336d3
67c13fb0c65acca5520bc2f5,7fdc6989cd778baad45cd98358ea060237b169a4aeaeb14da6ac4686b7858c9f
67d4314588b4424b0ee02653,c63fd7a85a533b8591577bab805104708ba5458fab0e343d46b3e24a28b92cb5
67d734244f85f32a58e34e2d,d19a6307c24470b3973973319770bdb896218bb58d1f2d07c7226266075057d0
67d9c159c5d5e407e6b0a4ca,89cbdb903fdfe0b44e74b0a69eed3de7029f18c28f77e5509f8ace766ab86610
67fafc73d674250f11e559ab,1752ffbf9807bb2e4e480bf045b4bacc472befe755287384b5a526065a58c065

testdata/i.csv vendored Normal file (21 additions)

@@ -0,0 +1,21 @@
i,
691d3476414324a257c62079b055446cdfdb58fcb7,3fc1f36ad9acdae3160db55befe1fdcf
692514be0d49b250058c2c59c62b07383705d71c54,055bf209d8b7132f1743aee761d9c64d
6932de7e3a7bae762b9533e955fd2b2444f6656aa7,9371781e0655df3b914d18da42605b3d
6938e5c7d134233b0758e54f5eacb9dcee412f51f9,12079ef8dffde085385b2aafe7e8be53
693a56c48c3ec6bc90fdd02e0f0e2f01472c3f13f5,8c422f4f35e4170079a13f3e9f25a9db
693fe5c0979c7c4892486d478c8c57b75f0fa9bba3,8eeaafae4e906ccc36ec29bc3d4f1676
694abea2af1c27003a1c777e84891a0f81b3b5a382,fe24b3d28f8cf49fad2d4548726ac8bd
694c245cf621a28925da7d84e77137a1d54085d1b8,c04cf11c401c4fbc8976825e9b6db9ca
6951010e69f84df87d09cdae7706e25ecdc10a8a6f,a93d6f9c06d1e807c1d674828167cd7c
695661d8955be621b5f44c7965f517c17d2d1d48c6,8b945701a1d2920c3e6283ab7fda14ee
696fac500a5674eaa078abc71ceb48006c14a3f6aa,1f8000aec89b229349aa154e72fd4de3
697506379203bd2f8980d13c766966e400509e28f9,5ce938e06a98aa8b8bb0cfea5dce7e33
6975c5f2cdc6e8fdb64682557b1bcbb92f52c6113f,2817aa0f0806bb276e1af4af42504720
6984b87daaba891147b4c0f25c15703b2640df9833,169009ea3ff014d352a13152c8d39999
699f3d1a3f634bb035c626cdfa927caa59e2617bc4,8f3e2352ed874155a3aa3dd90a61430e
69b9dfcdaced1d6d696fab02be31cbee50cbffcdf9,281d9817a5360c1be0ac7710132adebe
69ca1fa3e939d061d74163b2da17c3d2b926484c5e,40ecc3bd5dc6b871ce3095456e043427
69cea59483161df9420027c4328f85382871798ee4,3919862cc4f0f910a954ffc4d08a6195
69d8ff3b5f44f5585e7d7e1349d1d62ba3fbe6831c,d48bd4f6c44ef8b6aabeb6b6e1a61894
69f22b1918f28b1e10d2946f84f6f3c8fa25865ba3,b49011d36a56e0dbe5a5cbce94159f66

testdata/i_resolve.csv vendored Normal file (4 additions)

@@ -0,0 +1,4 @@
i,,
i,692556ed1cab9d17f2a9392030a9ad7f5d138f11bd,0000007615cbad28
i,692556ed1cab9d17f2a9392030a9ad7f5d138faf01,000000000bebc200
i,692556ed1cab9d17f2a9392030a9ad7f5d138fb074,0000000005f5e100

testdata/j.csv vendored Normal file (21 additions)

@@ -0,0 +1,21 @@
j,
6a2bb6a2e0505748602cb9a194ba8ea4abb6935407,cc786896
6a44f45448398072520cd2415044dc3fbfc4f77d94,b5d69fdb
6a464f73e50c5ac613e29959eaf7862989381fd2d7,f4a3151d
6a615c78bcea987123689221ec5546f4555c7ddf4d,02e0ca23
6a86f3151c381d0e7061583051ea2de133976cab73,b1f56fd8
6a875a5f2579fce1aed7b452dbcfb982161d9d35ad,fbe72e11
6a8edc85a5a8aa78fd6a7f0a9e3755121238ae5dcb,2f3ec916
6a90efc239731fa0b83c2a386c1426e8768ceb2123,6b8b1649
6a951540c279d1286d7800d205aea75f514b9e8fdb,e78656c9
6aa687dae05e6d629d5056e1af651519dfc669f40c,07665a81
6abaa8f75ae7182dfa70b293317acd3aaa8d021b5f,f51abc2b
6abc6bcaf274827e976bfa8ee5801d24c4b37bb77b,d171f0fe
6ac5717e0820d8bcf758690666a7dff87850e58af1,afbe5e50
6ac6cfb7ee16de9c7f6498939558881ffa346f0918,00a40c49
6ad24f0b126ae7bcdfb70f51b3ade58bbfd22dc94c,739a1ba9
6ad89bcd32e80b4b89b6ac066d87e1a1356d7d5e4e,5605f288
6ae49f7dcc373786b526e4393ff46d300ee5f4a9dd,dfe41d24
6aedfe781fb0ce858856eff6aacc2206525545e476,59508a47
6aeffec292f14000b7c073b861f2ad83c5511a2df8,afe94781
6afbdd9ec076dbf81511264f00f021c9667e52cb67,51ffc92a

testdata/j_resolve.csv vendored Normal file (4 additions)

@@ -0,0 +1,4 @@
j,,
j,6a2556ed1cab9d17f2a9392030a9ad7f5d138f11bd,00000005
j,6a255761310145baa958b5587d9b5571423e5a0d3c,00000005
j,6a255761310145baa958b5587d9b5571423f00c85b,0000000a
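
All of these fixtures share one shape: the first row names the table prefix, and every later row carries hex-encoded key/value columns (the `*_resolve.csv` files add a leading prefix column, so the key and value are always the last two fields). Note that each key already begins with the ASCII prefix byte, e.g. `0x6a` is `j`. Below is a minimal Go sketch of how such a fixture could be loaded; `readTestData`, the `kv` type, and the path are illustrative assumptions, not the repository's actual loader.

```go
// Minimal sketch (assumptions noted above), not the repository's loader.
package main

import (
	"encoding/csv"
	"encoding/hex"
	"fmt"
	"os"
)

type kv struct {
	key, value []byte
}

func readTestData(path string) ([]kv, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	r := csv.NewReader(f)
	r.FieldsPerRecord = -1 // header and data rows differ in column count
	records, err := r.ReadAll()
	if err != nil {
		return nil, err
	}

	var rows []kv
	for _, rec := range records[1:] { // skip the prefix header row
		// Key and value are always the last two fields, whether or not
		// the row has a leading prefix column.
		key, err := hex.DecodeString(rec[len(rec)-2])
		if err != nil {
			return nil, err
		}
		value, err := hex.DecodeString(rec[len(rec)-1])
		if err != nil {
			return nil, err
		}
		rows = append(rows, kv{key, value})
	}
	return rows, nil
}

func main() {
	rows, err := readTestData("testdata/j.csv")
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d key/value rows\n", len(rows))
}
```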
1 j
2 j 6a2556ed1cab9d17f2a9392030a9ad7f5d138f11bd 00000005
3 j 6a255761310145baa958b5587d9b5571423e5a0d3c 00000005
4 j 6a255761310145baa958b5587d9b5571423f00c85b 0000000a

Binary file not shown.

@@ -0,0 +1 @@
MANIFEST-000004

@@ -0,0 +1 @@
2c7866e6-c325-4a0e-bfac-4cb01a746a45
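
The two one-line files above look like RocksDB bookkeeping files from a checked-in test database: `CURRENT` names the active manifest, and `IDENTITY` stores the database's UUID. A small sanity-check sketch follows; the `dbDir` path is an assumption, since the diff view suppresses the actual file names.

```go
// Hypothetical sanity check for a vendored RocksDB test database.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dbDir := "testdata/test_db" // assumed location; not shown in the diff

	// CURRENT holds the name of the active manifest, e.g. "MANIFEST-000004".
	current, err := os.ReadFile(filepath.Join(dbDir, "CURRENT"))
	if err != nil {
		panic(err)
	}
	fmt.Println("active manifest:", strings.TrimSpace(string(current)))

	// IDENTITY holds the UUID assigned to the database at creation time.
	identity, err := os.ReadFile(filepath.Join(dbDir, "IDENTITY"))
	if err != nil {
		panic(err)
	}
	fmt.Println("db identity:", strings.TrimSpace(string(identity)))
}
```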

File diff suppressed because it is too large.

Binary file not shown.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -1 +1 @@
-v0.2022.08.09.1
+v0.2022.10.05.1