WIP: Json rpc federation, search/getclaimbyid, and shutdown #76
Reference: LBRYCommunity/herald.go#76
@@ -28,0 +67,4 @@
metrics.RequestsCount.With(prometheus.Labels{"method": "blockchain.claimtrie.getclaimbyid"}).Inc()
// FIXME: this has txos and extras and so does GetClaimById?
allTxos := make([]*pb.Output, 0)
OK, I see why you pass the server.Server into the session manager, and also store it in ClaimtrieService.

The python hub has a SearchIndex class which contains the search functionality of the HubServerService: 75d64f9dc6/hub/herald/search.py (L29)

The SearchIndex is passed off to the session manager instead of the whole HubServerService: https://github.com/lbryio/hub/search?q=SearchIndex

Low priority to do this now, but splitting the search functionality (elastic search client, caches for search, etc.) from server.Server into a SearchIndex struct/obj would be nice. I'm OK with having the *Server held inside sessionManager for now.
for now.I don't see why this is disabled...
Might as well complete this by adding sessionManager to PeersService (or other appropriate pointer). You have sessionManager.peersSubs already in place.
@@ -69,3 +76,4 @@
address := ":" + fmt.Sprintf("%d", s.Args.NotifierPort)
addr, err := net.ResolveTCPAddr("tcp", address)
if err != nil {
return err
Probably need a peerNotification case here.

Rename peersBool -> peersSub to match the comment.
@@ -95,6 +100,13 @@ func (s *session) doNotify(notification interface{}) {
status = hex.EncodeToString(note.status)
}
@@ -19,3 +16,3 @@
shutdownRequestChannel = stopCh
func initsignals() {
interruptSignals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}
It looks like shutdownRequestChannel is not hooked up to anything now -- nobody sends anything on it that I can see. Hence you were able to add lines 52-54 in signal.go without triggering the spam problem again. It (shutdownRequestChannel) should be eliminated if it's not being used.
@@ -69,3 +76,4 @@
address := ":" + fmt.Sprintf("%d", s.Args.NotifierPort)
addr, err := net.ResolveTCPAddr("tcp", address)
if err != nil {
return err
Maybe. We need to refactor this whole section of code, but I think that ends up getting handled by the catch-all s.sessionManager.doNotify.
@@ -19,3 +16,3 @@
shutdownRequestChannel = stopCh
func initsignals() {
interruptSignals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}
I think I left this in because the intention is that this program can be run in either client or server mode, so there would ostensibly be a control endpoint on a local-only port: you could run the binary and send it a shutdown command instead of a ctrl+c. Maybe it's right to remove it for now anyway, but I feel like it's a feature we'll implement eventually.
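The idea above -- an in-process shutdown request that coexists with ctrl+c -- could be sketched like this. A minimal, hypothetical sketch: the real herald.go signal handling differs, and the channel here is fed by hand rather than by an actual control endpoint.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// shutdownRequestChannel lets a (hypothetical) local-only control
// endpoint request shutdown the same way a ctrl+c does: both feed
// one select loop.
var shutdownRequestChannel = make(chan struct{}, 1)

// waitForShutdown blocks until either an OS signal or an RPC-style
// shutdown request arrives, and reports which one it was.
func waitForShutdown() string {
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
	select {
	case <-interrupt:
		return "signal"
	case <-shutdownRequestChannel:
		return "rpc"
	}
}

func main() {
	// Simulate the control endpoint asking us to stop.
	shutdownRequestChannel <- struct{}{}
	fmt.Println("shutdown via:", waitForShutdown())
}
```

With this shape, removing shutdownRequestChannel now and reintroducing it alongside the control endpoint later would be a small change.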
This should be removed. After some refactoring, the stop group's StopAndWait gets called in the higher-up server Shutdown routine.
@@ -28,0 +67,4 @@
metrics.RequestsCount.With(prometheus.Labels{"method": "blockchain.claimtrie.getclaimbyid"}).Inc()
// FIXME: this has txos and extras and so does GetClaimById?
allTxos := make([]*pb.Output, 0)
Yup, this definitely needs to be refactored in the future.
@@ -69,3 +76,4 @@
address := ":" + fmt.Sprintf("%d", s.Args.NotifierPort)
addr, err := net.ResolveTCPAddr("tcp", address)
if err != nil {
return err
Never mind... I have been reading the code more, and NotifierChan is only populated with HeightHash notifications coming from observing the DB.
Ahh... I meant register the subscription in the sessionManager.peerSubs map. It needs registration there as it's communicating by JSON-RPC. Then inside Server.notifyPeerSubs, call s.sessionManager.doNotify() to forward a notification to the session-based mechanism.

The stuff in federation.go is more oriented towards the gRPC service (and managing the list of known peers). If PeersSubscribe comes in via gRPC, there's a separate mapping Server.PeerSubs for those.

https://github.com/lbryio/herald.go/pull/76/files#r1026987357
@@ -69,3 +76,4 @@
address := ":" + fmt.Sprintf("%d", s.Args.NotifierPort)
addr, err := net.ResolveTCPAddr("tcp", address)
if err != nil {
return err
I send it the new peers in federation.go addPeer, so I'm pretty sure this does get notified of new peers.

Ahh, yeah, just hooking it into the gRPC like this isn't going to quite work, I suppose, hahah.

f774823b2e

👍 🙏 Thank you!
@@ -57,3 +64,3 @@
var params interface{}
switch notification.(type) {
switch note := notification.(type) {
case headerNotification:
Nice!
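The change in this hunk is the Go type-switch binding idiom: `switch note := notification.(type)` binds `note` to the concrete type inside each case, so fields like note.status are reachable without a second type assertion. A minimal sketch (the notification types here are simplified stand-ins):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Simplified stand-ins for the server's notification types.
type headerNotification struct{ status []byte }
type peerNotification struct{ address string }

// doNotify demonstrates the binding type switch: within each case,
// note already has the concrete type, no extra assertion needed.
func doNotify(notification interface{}) string {
	switch note := notification.(type) {
	case headerNotification:
		return "header:" + hex.EncodeToString(note.status)
	case peerNotification:
		return "peer:" + note.address
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(doNotify(headerNotification{status: []byte{0xde, 0xad}}))
	fmt.Println(doNotify(peerNotification{address: "1.2.3.4"}))
}
```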
You should add validatePort for these.

Suggest adding validatePort for the port number arguments.