lbcd/rpcwebsocket.go

// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package main
import (
"bytes"
"container/list"
"crypto/subtle"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"sync"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/btcjson"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/btcsuite/fastsha256"
"github.com/btcsuite/golangcrypto/ripemd160"
"github.com/btcsuite/websocket"
)
const (
// websocketSendBufferSize is the number of elements the send channel
// can queue before blocking. Note that this only applies to requests
// handled directly in the websocket client input handler or the async
// handler since notifications have their own queuing mechanism
// independent of the send channel buffer.
websocketSendBufferSize = 50
)
type semaphore chan struct{}
func makeSemaphore(n int) semaphore {
return make(chan struct{}, n)
}
func (s semaphore) acquire() { s <- struct{}{} }
func (s semaphore) release() { <-s }
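
// For illustration only: a semaphore of size one bounds a client to a single
// long-running command at a time (the concrete wiring in this file may
// differ, so treat this as a sketch rather than the actual usage):
//
//	sem := makeSemaphore(1)
//	sem.acquire()
//	go func() {
//		defer sem.release()
//		// ... run the long-running command ...
//	}()
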
// timeZeroVal is simply the zero value for a time.Time and is used to avoid
// creating multiple instances.
var timeZeroVal time.Time
// wsCommandHandler describes a callback function used to handle a specific
// command.
type wsCommandHandler func(*wsClient, interface{}) (interface{}, error)
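
// For illustration, every handler registered in wsHandlers below has this
// shape (the command name and body here are hypothetical):
//
//	func handleExampleCommand(wsc *wsClient, icmd interface{}) (interface{}, error) {
//		// Type assert icmd to the expected btcjson command struct,
//		// do the work, and return a marshalable result or an error.
//		return nil, nil
//	}
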
// wsHandlers maps RPC command strings to appropriate websocket handler
// functions. This is set by init because help references wsHandlers and thus
// causes a dependency loop.
var wsHandlers map[string]wsCommandHandler
var wsHandlersBeforeInit = map[string]wsCommandHandler{
"help": handleWebsocketHelp,
"notifyblocks": handleNotifyBlocks,
"notifynewtransactions": handleNotifyNewTransactions,
"notifyreceived": handleNotifyReceived,
"notifyspent": handleNotifySpent,
"session": handleSession,
"stopnotifyblocks": handleStopNotifyBlocks,
"stopnotifynewtransactions": handleStopNotifyNewTransactions,
"stopnotifyspent": handleStopNotifySpent,
"stopnotifyreceived": handleStopNotifyReceived,
"rescan": handleRescan,
}
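
// wsHandlers itself is populated from wsHandlersBeforeInit by an init
// function elsewhere in the package; deferring the assignment to package
// initialization is what breaks the dependency loop mentioned above. A
// minimal sketch of that init:
//
//	func init() {
//		wsHandlers = wsHandlersBeforeInit
//	}
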
// WebsocketHandler handles a new websocket client by creating a new wsClient,
// starting it, and blocking until the connection closes. Since it blocks, it
// must be run in a separate goroutine. It should be invoked from the websocket
// server handler which runs each new connection in a new goroutine thereby
// satisfying the requirement.
func (s *rpcServer) WebsocketHandler(conn *websocket.Conn, remoteAddr string,
authenticated bool, isAdmin bool) {
// Clear the read deadline that was set before the websocket hijacked
// the connection.
conn.SetReadDeadline(timeZeroVal)
// Limit max number of websocket clients.
rpcsLog.Infof("New websocket client %s", remoteAddr)
if s.ntfnMgr.NumClients()+1 > cfg.RPCMaxWebsockets {
rpcsLog.Infof("Max websocket clients exceeded [%d] - "+
"disconnecting client %s", cfg.RPCMaxWebsockets,
remoteAddr)
conn.Close()
return
}
// Create a new websocket client to handle the new websocket connection
// and wait for it to shutdown. Once it has shutdown (and hence
// disconnected), remove it and any notifications it registered for.
client, err := newWebsocketClient(s, conn, remoteAddr, authenticated, isAdmin)
if err != nil {
rpcsLog.Errorf("Failed to serve client %s: %v", remoteAddr, err)
conn.Close()
return
}
s.ntfnMgr.AddClient(client)
client.Start()
client.WaitForShutdown()
s.ntfnMgr.RemoveClient(client)
rpcsLog.Infof("Disconnected websocket client %s", remoteAddr)
}
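
// As a hedged sketch of the caller side (the actual HTTP wiring lives in the
// rpc server setup and may differ in detail), each upgraded connection is run
// in its own goroutine by the websocket server, which simply calls the method
// above:
//
//	wsServer := websocket.Server{
//		Handler: func(ws *websocket.Conn) {
//			s.WebsocketHandler(ws, remoteAddr, authenticated, isAdmin)
//		},
//	}
//
// so the call may block for the lifetime of the client connection.
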
// wsNotificationManager is a connection and notification manager used for
// websockets. It allows websocket clients to register for notifications they
// are interested in. When an event happens elsewhere in the code such as
// transactions being added to the memory pool or block connects/disconnects,
// the notification manager is provided with the relevant details needed to
// figure out which websocket clients need to be notified based on what they
// have registered for and notifies them accordingly. It is also used to keep
// track of all connected websocket clients.
type wsNotificationManager struct {
// server is the RPC server the notification manager is associated with.
server *rpcServer
// queueNotification queues a notification for handling.
queueNotification chan interface{}
// notificationMsgs feeds notificationHandler with notifications
// and client (un)registration requests from a queue as well as
// registration and unregistration requests from clients.
notificationMsgs chan interface{}
// Access channel for current number of connected clients.
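	// The count is served by the notificationHandler goroutine, which owns
	// the set of connected clients and replies by sending len(clients)
	// over this channel, so no additional locking is needed.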
numClients chan int
// Shutdown handling
wg sync.WaitGroup
quit chan struct{}
}
// queueHandler manages a queue of empty interfaces, reading from in and
// sending the oldest unsent to out. This handler stops when either the in
// or quit channel is closed, and closes out before returning, without
// waiting to send any variables still remaining in the queue.
func queueHandler(in <-chan interface{}, out chan<- interface{}, quit <-chan struct{}) {
var q []interface{}
var dequeue chan<- interface{}
skipQueue := out
var next interface{}
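	// dequeue is nil whenever the queue is empty, which disables its send
	// case in the select below, while skipQueue is set to nil whenever
	// items remain queued so that new arrivals cannot jump ahead of them.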
out:
for {
select {
case n, ok := <-in:
if !ok {
// Sender closed input channel.
break out
}
// Either send to out immediately if skipQueue is
// non-nil (queue is empty) and reader is ready,
// or append to the queue and send later.
select {
case skipQueue <- n:
default:
q = append(q, n)
dequeue = out
skipQueue = nil
next = q[0]
}
case dequeue <- next:
copy(q, q[1:])
q[len(q)-1] = nil // avoid leak
q = q[:len(q)-1]
if len(q) == 0 {
dequeue = nil
skipQueue = out
} else {
next = q[0]
}
case <-quit:
break out
}
}
close(out)
}
// queueHandler maintains a queue of notifications and notification handler
// control messages.
func (m *wsNotificationManager) queueHandler() {
queueHandler(m.queueNotification, m.notificationMsgs, m.quit)
m.wg.Done()
}
// NotifyBlockConnected passes a block newly-connected to the best chain
// to the notification manager for block and transaction notification
// processing.
func (m *wsNotificationManager) NotifyBlockConnected(block *btcutil.Block) {
// As NotifyBlockConnected will be called by the block manager
// and the RPC server may no longer be running, use a select
// statement to unblock enqueuing the notification once the RPC
// server has begun shutting down.
select {
case m.queueNotification <- (*notificationBlockConnected)(block):
case <-m.quit:
}
}
// NotifyBlockDisconnected passes a block disconnected from the best chain
// to the notification manager for block notification processing.
func (m *wsNotificationManager) NotifyBlockDisconnected(block *btcutil.Block) {
// As NotifyBlockDisconnected will be called by the block manager
// and the RPC server may no longer be running, use a select
// statement to unblock enqueuing the notification once the RPC
// server has begun shutting down.
select {
case m.queueNotification <- (*notificationBlockDisconnected)(block):
case <-m.quit:
}
}
// NotifyMempoolTx passes a transaction accepted by mempool to the
// notification manager for transaction notification processing. If
// isNew is true, the tx is a new transaction, rather than one
// added to the mempool during a reorg.
func (m *wsNotificationManager) NotifyMempoolTx(tx *btcutil.Tx, isNew bool) {
n := &notificationTxAcceptedByMempool{
isNew: isNew,
tx: tx,
}
// As NotifyMempoolTx will be called by mempool and the RPC server
// may no longer be running, use a select statement to unblock
// enqueuing the notification once the RPC server has begun
// shutting down.
select {
case m.queueNotification <- n:
case <-m.quit:
}
}
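
// For illustration, the producers of these notifications are the block
// manager and the mempool, which call roughly:
//
//	s.ntfnMgr.NotifyBlockConnected(block)
//	s.ntfnMgr.NotifyMempoolTx(tx, true)
//
// Because each method also selects on m.quit, such calls return promptly
// during shutdown even though the queue is no longer being drained.
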
// Notification types
type notificationBlockConnected btcutil.Block
type notificationBlockDisconnected btcutil.Block
type notificationTxAcceptedByMempool struct {
isNew bool
tx *btcutil.Tx
}
// Notification control requests
type notificationRegisterClient wsClient
type notificationUnregisterClient wsClient
type notificationRegisterBlocks wsClient
type notificationUnregisterBlocks wsClient
type notificationRegisterNewMempoolTxs wsClient
type notificationUnregisterNewMempoolTxs wsClient
type notificationRegisterSpent struct {
wsc *wsClient
ops []*wire.OutPoint
}
type notificationUnregisterSpent struct {
wsc *wsClient
op *wire.OutPoint
}
type notificationRegisterAddr struct {
wsc *wsClient
addrs []string
}
type notificationUnregisterAddr struct {
wsc *wsClient
addr string
}
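
// Several of the types above are simple renames of btcutil.Block or wsClient
// so that callers can queue them with a cheap type conversion, for example:
//
//	m.queueNotification <- (*notificationBlockConnected)(block)
//
// and notificationHandler can recover the intent by switching on the concrete
// type without allocating wrapper structs.
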
// notificationHandler reads notifications and control messages from the queue
// handler and processes one at a time.
func (m *wsNotificationManager) notificationHandler() {
// clients is a map of all currently connected websocket clients.
clients := make(map[chan struct{}]*wsClient)
// Maps used to hold lists of websocket clients to be notified on
// certain events. Each websocket client also keeps maps for the events
// which have multiple triggers to make removal from these lists on
// connection close less horrendously expensive.
//
// Where possible, the quit channel is used as the unique id for a client
// since it is quite a bit more efficient than using the entire struct.
blockNotifications := make(map[chan struct{}]*wsClient)
txNotifications := make(map[chan struct{}]*wsClient)
watchedOutPoints := make(map[wire.OutPoint]map[chan struct{}]*wsClient)
watchedAddrs := make(map[string]map[chan struct{}]*wsClient)
out:
for {
select {
case n, ok := <-m.notificationMsgs:
if !ok {
// queueHandler quit.
break out
}
switch n := n.(type) {
case *notificationBlockConnected:
block := (*btcutil.Block)(n)
// Skip iterating through all txs if no
// tx notification requests exist.
if len(watchedOutPoints) != 0 || len(watchedAddrs) != 0 {
for _, tx := range block.Transactions() {
m.notifyForTx(watchedOutPoints,
watchedAddrs, tx, block)
}
}
if len(blockNotifications) != 0 {
m.notifyBlockConnected(blockNotifications,
block)
}
case *notificationBlockDisconnected:
m.notifyBlockDisconnected(blockNotifications,
(*btcutil.Block)(n))
case *notificationTxAcceptedByMempool:
if n.isNew && len(txNotifications) != 0 {
m.notifyForNewTx(txNotifications, n.tx)
}
m.notifyForTx(watchedOutPoints, watchedAddrs, n.tx, nil)
case *notificationRegisterBlocks:
wsc := (*wsClient)(n)
blockNotifications[wsc.quit] = wsc
case *notificationUnregisterBlocks:
wsc := (*wsClient)(n)
delete(blockNotifications, wsc.quit)
case *notificationRegisterClient:
wsc := (*wsClient)(n)
clients[wsc.quit] = wsc
case *notificationUnregisterClient:
wsc := (*wsClient)(n)
// Remove any requests made by the client as well as
// the client itself.
delete(blockNotifications, wsc.quit)
delete(txNotifications, wsc.quit)
for k := range wsc.spentRequests {
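// Copy the map key so a stable address can be passed below.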
op := k
m.removeSpentRequest(watchedOutPoints, wsc, &op)
}
for addr := range wsc.addrRequests {
m.removeAddrRequest(watchedAddrs, wsc, addr)
}
delete(clients, wsc.quit)
case *notificationRegisterSpent:
m.addSpentRequests(watchedOutPoints, n.wsc, n.ops)
case *notificationUnregisterSpent:
m.removeSpentRequest(watchedOutPoints, n.wsc, n.op)
case *notificationRegisterAddr:
m.addAddrRequests(watchedAddrs, n.wsc, n.addrs)
case *notificationUnregisterAddr:
m.removeAddrRequest(watchedAddrs, n.wsc, n.addr)
case *notificationRegisterNewMempoolTxs:
wsc := (*wsClient)(n)
txNotifications[wsc.quit] = wsc
case *notificationUnregisterNewMempoolTxs:
wsc := (*wsClient)(n)
delete(txNotifications, wsc.quit)
default:
rpcsLog.Warn("Unhandled notification type")
}
case m.numClients <- len(clients):
case <-m.quit:
// RPC server shutting down.
break out
}
}
for _, c := range clients {
c.Disconnect()
}
m.wg.Done()
}
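// The following example is an illustrative sketch and not part of the
// original file. It shows how other code might drive the handler above:
// registration requests are sent through the manager's exported methods,
// which queue typed notification values for the handler's select loop to
// dispatch. The function name is hypothetical.
func exampleRegisterClientNotifications(m *wsNotificationManager, wsc *wsClient) {
	// Request block connected/disconnected notifications for the client.
	m.RegisterBlockUpdates(wsc)

	// Request notifications for new transactions entering the mempool.
	m.RegisterNewMempoolTxsUpdates(wsc)
}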
// NumClients returns the number of clients actively being served.
func (m *wsNotificationManager) NumClients() (n int) {
select {
case n = <-m.numClients:
case <-m.quit: // Use default n (0) if server has shut down.
}
return
}
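// Illustrative sketch, not part of the original file: a connection handler
// could consult NumClients before accepting a new websocket client in order
// to enforce a maximum-client limit. Both the function name and the limit
// parameter are hypothetical.
func exampleCanAcceptClient(m *wsNotificationManager, limit int) bool {
	// Reject the connection when accepting it would exceed the limit.
	return m.NumClients()+1 <= limit
}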
// RegisterBlockUpdates requests block update notifications to the passed
// websocket client.
func (m *wsNotificationManager) RegisterBlockUpdates(wsc *wsClient) {
m.queueNotification <- (*notificationRegisterBlocks)(wsc)
}
// UnregisterBlockUpdates removes block update notifications for the passed
// websocket client.
func (m *wsNotificationManager) UnregisterBlockUpdates(wsc *wsClient) {
m.queueNotification <- (*notificationUnregisterBlocks)(wsc)
}
// notifyBlockConnected notifies websocket clients that have registered for
// block updates when a block is connected to the main chain.
func (*wsNotificationManager) notifyBlockConnected(clients map[chan struct{}]*wsClient,
block *btcutil.Block) {
// Notify interested websocket clients about the connected block.
ntfn := btcjson.NewBlockConnectedNtfn(block.Hash().String(),
int32(block.Height()), block.MsgBlock().Header.Timestamp.Unix())
marshalledJSON, err := btcjson.MarshalCmd(nil, ntfn)
if err != nil {
rpcsLog.Error("Failed to marshal block connected notification: "+
"%v", err)
return
}
for _, wsc := range clients {
wsc.QueueNotification(marshalledJSON)
}
}
// notifyBlockDisconnected notifies websocket clients that have registered for
// block updates when a block is disconnected from the main chain (due to a
// reorganize).
func (*wsNotificationManager) notifyBlockDisconnected(clients map[chan struct{}]*wsClient, block *btcutil.Block) {
// Skip notification creation if no clients have requested block
// connected/disconnected notifications.
if len(clients) == 0 {
return
}
// Notify interested websocket clients about the disconnected block.
ntfn := btcjson.NewBlockDisconnectedNtfn(block.Hash().String(),
int32(block.Height()), block.MsgBlock().Header.Timestamp.Unix())
marshalledJSON, err := btcjson.MarshalCmd(nil, ntfn)
if err != nil {
rpcsLog.Error("Failed to marshal block disconnected "+
"notification: %v", err)
return
}
for _, wsc := range clients {
wsc.QueueNotification(marshalledJSON)
}
}
// RegisterNewMempoolTxsUpdates requests notifications to the passed websocket
// client when new transactions are added to the memory pool.
func (m *wsNotificationManager) RegisterNewMempoolTxsUpdates(wsc *wsClient) {
m.queueNotification <- (*notificationRegisterNewMempoolTxs)(wsc)
}
// UnregisterNewMempoolTxsUpdates removes notifications to the passed websocket
// client when new transactions are added to the memory pool.
func (m *wsNotificationManager) UnregisterNewMempoolTxsUpdates(wsc *wsClient) {
m.queueNotification <- (*notificationUnregisterNewMempoolTxs)(wsc)
}
// notifyForNewTx notifies websocket clients that have registered for updates
// when a new transaction is added to the memory pool.
func (m *wsNotificationManager) notifyForNewTx(clients map[chan struct{}]*wsClient, tx *btcutil.Tx) {
txHashStr := tx.Hash().String()
mtx := tx.MsgTx()
var amount int64
for _, txOut := range mtx.TxOut {
amount += txOut.Value
}
ntfn := btcjson.NewTxAcceptedNtfn(txHashStr, btcutil.Amount(amount).ToBTC())
marshalledJSON, err := btcjson.MarshalCmd(nil, ntfn)
if err != nil {
rpcsLog.Errorf("Failed to marshal tx notification: %s", err.Error())
return
}
var verboseNtfn *btcjson.TxAcceptedVerboseNtfn
var marshalledJSONVerbose []byte
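// The verbose notification is marshalled lazily, at most once, and the
// marshalled bytes are reused for every client that requested verbose
// updates.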
for _, wsc := range clients {
if wsc.verboseTxUpdates {
if marshalledJSONVerbose != nil {
wsc.QueueNotification(marshalledJSONVerbose)
continue
}
net := m.server.server.chainParams
rawTx, err := createTxRawResult(net, mtx, txHashStr, nil,
"", 0, 0)
if err != nil {
return
}
verboseNtfn = btcjson.NewTxAcceptedVerboseNtfn(*rawTx)
marshalledJSONVerbose, err = btcjson.MarshalCmd(nil,
verboseNtfn)
if err != nil {
rpcsLog.Errorf("Failed to marshal verbose tx "+
"notification: %s", err.Error())
return
}
wsc.QueueNotification(marshalledJSONVerbose)
} else {
wsc.QueueNotification(marshalledJSON)
}
}
}
// RegisterSpentRequests requests a notification when each of the passed
// outpoints is confirmed spent (contained in a block connected to the main
// chain) for the passed websocket client. The request is automatically
// removed once the notification has been sent.
func (m *wsNotificationManager) RegisterSpentRequests(wsc *wsClient, ops []*wire.OutPoint) {
m.queueNotification <- &notificationRegisterSpent{
wsc: wsc,
ops: ops,
}
}
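// Illustrative sketch, not part of the original file: a caller with a set of
// outpoints of interest could register them for one-shot spend notifications
// on behalf of a connected client. The function name is hypothetical; the ops
// slice is assumed to have been built by the caller.
func exampleWatchOutpoints(m *wsNotificationManager, wsc *wsClient, ops []*wire.OutPoint) {
	// Queue the registration; the notification handler adds wsc to the
	// watched-outpoint set for each outpoint via addSpentRequests.
	m.RegisterSpentRequests(wsc, ops)
}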
// addSpentRequests modifies a map of watched outpoints to sets of websocket
// clients, adding a request to watch all of the outpoints in ops and to create
// and send a notification to the websocket client wsc when any of them is
// spent.
func (*wsNotificationManager) addSpentRequests(opMap map[wire.OutPoint]map[chan struct{}]*wsClient,
wsc *wsClient, ops []*wire.OutPoint) {
for _, op := range ops {
// Track the request in the client as well so it can be quickly
// removed on disconnect.
wsc.spentRequests[*op] = struct{}{}
// Add the client to the set of clients to notify when the outpoint
// is seen.  Create the map as needed.
cmap, ok := opMap[*op]
if !ok {
cmap = make(map[chan struct{}]*wsClient)
opMap[*op] = cmap
}
cmap[wsc.quit] = wsc
}
}
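// exampleClientsWatching is an illustrative sketch and not part of the
// original file: it highlights the shape of the watched-outpoint map built by
// addSpentRequests, which is keyed by outpoint and then by each interested
// client's quit channel (a unique, comparable handle for the client).
func exampleClientsWatching(opMap map[wire.OutPoint]map[chan struct{}]*wsClient, op wire.OutPoint) int {
	// len of a missing (nil) inner map is zero, so an unwatched outpoint
	// simply reports no watchers.
	return len(opMap[op])
}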
// UnregisterSpentRequest removes a request from the passed websocket client
// to be notified when the passed outpoint is confirmed spent (contained in a
// block connected to the main chain).
func (m *wsNotificationManager) UnregisterSpentRequest(wsc *wsClient, op *wire.OutPoint) {
m.queueNotification <- &notificationUnregisterSpent{
wsc: wsc,
op: op,
}
}
// removeSpentRequest modifies a map of watched outpoints to remove the
// websocket client wsc from the set of clients to be notified when a
// watched outpoint is spent. If wsc is the last client, the outpoint
// key is removed from the map.
func (*wsNotificationManager) removeSpentRequest(ops map[wire.OutPoint]map[chan struct{}]*wsClient,
wsc *wsClient, op *wire.OutPoint) {
// Remove the request tracking from the client.
delete(wsc.spentRequests, *op)
// Remove the client from the list to notify.
notifyMap, ok := ops[*op]
if !ok {
rpcsLog.Warnf("Attempt to remove nonexistent spent request "+
"for websocket client %s", wsc.addr)
return
}
delete(notifyMap, wsc.quit)
// Remove the map entry altogether if there are
// no more clients interested in it.
if len(notifyMap) == 0 {
delete(ops, *op)
}
}
// txHexString returns the serialized transaction encoded in hexadecimal.
func txHexString(tx *btcutil.Tx) string {
buf := bytes.NewBuffer(make([]byte, 0, tx.MsgTx().SerializeSize()))
// Ignore Serialize's error, as writing to a bytes.Buffer cannot fail.
tx.MsgTx().Serialize(buf)
return hex.EncodeToString(buf.Bytes())
}
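// exampleDecodeTxHex is an illustrative sketch and not part of the original
// file: it shows the inverse of txHexString, decoding the hex string produced
// above back into a wire.MsgTx.
func exampleDecodeTxHex(txHex string) (*wire.MsgTx, error) {
	serialized, err := hex.DecodeString(txHex)
	if err != nil {
		return nil, err
	}
	var msgTx wire.MsgTx
	if err := msgTx.Deserialize(bytes.NewReader(serialized)); err != nil {
		return nil, err
	}
	return &msgTx, nil
}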
// blockDetails creates a BlockDetails struct to include in btcjson websocket
// notifications from a block and a transaction's block index.
func blockDetails(block *btcutil.Block, txIndex int) *btcjson.BlockDetails {
if block == nil {
return nil
}
return &btcjson.BlockDetails{
Height: int32(block.Height()),
Hash: block.Hash().String(),
Index: txIndex,
Time: block.MsgBlock().Header.Timestamp.Unix(),
}
}
// newRedeemingTxNotification returns a new marshalled redeemingtx notification
// with the passed parameters.
func newRedeemingTxNotification(txHex string, index int, block *btcutil.Block) ([]byte, error) {
// Create and marshal the notification.
ntfn := btcjson.NewRedeemingTxNtfn(txHex, blockDetails(block, index))
return btcjson.MarshalCmd(nil, ntfn)
}
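// exampleQueueRedeemingTx is an illustrative sketch and not part of the
// original file: it shows how the helpers above fit together by serializing a
// transaction, building a redeemingtx notification (blockDetails returns nil
// for a nil block, marking the transaction as unconfirmed), and queueing the
// result on a single client.
func exampleQueueRedeemingTx(wsc *wsClient, tx *btcutil.Tx, block *btcutil.Block) {
	marshalledJSON, err := newRedeemingTxNotification(txHexString(tx), tx.Index(), block)
	if err != nil {
		rpcsLog.Errorf("Failed to marshal redeemingtx notification: %v", err)
		return
	}
	wsc.QueueNotification(marshalledJSON)
}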
// notifyForTxOuts examines each transaction output, notifying interested
// websocket clients of the transaction if an output spends to a watched
// address. A spent notification request is automatically registered for
// the client for each matching output.
func (m *wsNotificationManager) notifyForTxOuts(ops map[wire.OutPoint]map[chan struct{}]*wsClient,
addrs map[string]map[chan struct{}]*wsClient, tx *btcutil.Tx, block *btcutil.Block) {
// Nothing to do if nobody is listening for address notifications.
if len(addrs) == 0 {
return
}
txHex := ""
wscNotified := make(map[chan struct{}]struct{})
for i, txOut := range tx.MsgTx().TxOut {
_, txAddrs, _, err := txscript.ExtractPkScriptAddrs(
txOut.PkScript, m.server.server.chainParams)
if err != nil {
continue
}
for _, txAddr := range txAddrs {
cmap, ok := addrs[txAddr.EncodeAddress()]
if !ok {
continue
}
if txHex == "" {
txHex = txHexString(tx)
}
ntfn := btcjson.NewRecvTxNtfn(txHex, blockDetails(block,
tx.Index()))
marshalledJSON, err := btcjson.MarshalCmd(nil, ntfn)
if err != nil {
rpcsLog.Errorf("Failed to marshal processedtx notification: %v", err)
continue
}
op := []*wire.OutPoint{wire.NewOutPoint(tx.Hash(), uint32(i))}
for wscQuit, wsc := range cmap {
m.addSpentRequests(ops, wsc, op)
if _, ok := wscNotified[wscQuit]; !ok {
wscNotified[wscQuit] = struct{}{}
wsc.QueueNotification(marshalledJSON)
}
}
}
}
}
// notifyForTx examines the inputs and outputs of the passed transaction,
// notifying websocket clients of outputs spending to a watched address
// and inputs spending a watched outpoint.
func (m *wsNotificationManager) notifyForTx(ops map[wire.OutPoint]map[chan struct{}]*wsClient,
addrs map[string]map[chan struct{}]*wsClient, tx *btcutil.Tx, block *btcutil.Block) {
if len(ops) != 0 {
m.notifyForTxIns(ops, tx, block)
}
if len(addrs) != 0 {
m.notifyForTxOuts(ops, addrs, tx, block)
}
}
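// exampleHandleMempoolTx is an illustrative sketch and not part of the
// original file: it shows how the manager's main handler could drive
// notifyForTx for a newly accepted mempool transaction.  The watchedOutPoints
// and watchedAddrs parameters stand in for whatever state the real handler
// maintains; their names are assumptions made for the example.
func (m *wsNotificationManager) exampleHandleMempoolTx(
	watchedOutPoints map[wire.OutPoint]map[chan struct{}]*wsClient,
	watchedAddrs map[string]map[chan struct{}]*wsClient, tx *btcutil.Tx) {

	// A nil block marks the transaction as unconfirmed, so no BlockDetails
	// are attached and notifyForTxIns does not remove any spent requests.
	m.notifyForTx(watchedOutPoints, watchedAddrs, tx, nil)
}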
// notifyForTxIns examines the inputs of the passed transaction and sends
// interested websocket clients a redeemingtx notification if any inputs
// spend a watched output. If block is non-nil, any matching spent
// requests are removed.
func (m *wsNotificationManager) notifyForTxIns(ops map[wire.OutPoint]map[chan struct{}]*wsClient,
tx *btcutil.Tx, block *btcutil.Block) {
// Nothing to do if nobody is watching outpoints.
if len(ops) == 0 {
return
}
txHex := ""
wscNotified := make(map[chan struct{}]struct{})
for _, txIn := range tx.MsgTx().TxIn {
prevOut := &txIn.PreviousOutPoint
if cmap, ok := ops[*prevOut]; ok {
if txHex == "" {
txHex = txHexString(tx)
}
marshalledJSON, err := newRedeemingTxNotification(txHex, tx.Index(), block)
if err != nil {
rpcsLog.Warnf("Failed to marshal redeemingtx notification: %v", err)
continue
}
for wscQuit, wsc := range cmap {
if block != nil {
m.removeSpentRequest(ops, wsc, prevOut)
}
if _, ok := wscNotified[wscQuit]; !ok {
wscNotified[wscQuit] = struct{}{}
wsc.QueueNotification(marshalledJSON)
}
}
}
}
}
// RegisterTxOutAddressRequests requests notifications to the passed websocket
// client when a transaction output spends to any of the passed addresses.
func (m *wsNotificationManager) RegisterTxOutAddressRequests(wsc *wsClient, addrs []string) {
m.queueNotification <- &notificationRegisterAddr{
wsc: wsc,
addrs: addrs,
}
}
// addAddrRequests adds the websocket client wsc to the address to client set
// addrMap so wsc will be notified for any mempool or block transaction outputs
// spending to any of the addresses in addrs.
func (*wsNotificationManager) addAddrRequests(addrMap map[string]map[chan struct{}]*wsClient,
wsc *wsClient, addrs []string) {
for _, addr := range addrs {
// Track the request in the client as well so it can quickly be
// removed on disconnect.
wsc.addrRequests[addr] = struct{}{}
// Add the client to the set of clients to notify when an output
// paying to the address is seen.  Create the map as needed.
cmap, ok := addrMap[addr]
if !ok {
cmap = make(map[chan struct{}]*wsClient)
addrMap[addr] = cmap
}
cmap[wsc.quit] = wsc
}
}
// UnregisterTxOutAddressRequest removes a request from the passed websocket
// client to be notified when a transaction spends to the passed address.
func (m *wsNotificationManager) UnregisterTxOutAddressRequest(wsc *wsClient, addr string) {
m.queueNotification <- &notificationUnregisterAddr{
wsc: wsc,
addr: addr,
}
}
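// exampleToggleAddrNtfns is an illustrative sketch and not part of the
// original file: it pairs the register and unregister calls above to switch
// address notifications on or off for a single address.  The function name
// and the watch flag are assumptions made for the example only.
func exampleToggleAddrNtfns(m *wsNotificationManager, wsc *wsClient, addr string, watch bool) {
	if watch {
		m.RegisterTxOutAddressRequests(wsc, []string{addr})
		return
	}
	m.UnregisterTxOutAddressRequest(wsc, addr)
}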
// removeAddrRequest removes the websocket client wsc from the address to
// client set addrs so it will no longer receive notification updates for
// any transaction outputs sent to addr.
func (*wsNotificationManager) removeAddrRequest(addrs map[string]map[chan struct{}]*wsClient,
wsc *wsClient, addr string) {
// Remove the request tracking from the client.
delete(wsc.addrRequests, addr)
// Remove the client from the list to notify.
cmap, ok := addrs[addr]
if !ok {
rpcsLog.Warnf("Attempt to remove nonexistent addr request "+
"<%s> for websocket client %s", addr, wsc.addr)
return
}
delete(cmap, wsc.quit)
// Remove the map entry altogether if there are no more clients
// interested in it.
if len(cmap) == 0 {
delete(addrs, addr)
}
}
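// For illustration only (a sketch of the bookkeeping above, not additional
// behavior): interested clients are keyed first by address and then by each
// client's quit channel, while the client mirrors the registration in its own
// addrRequests set so everything can be torn down on disconnect. The address
// literal below is a placeholder.
//
//	addrs["<someAddress>"] = map[chan struct{}]*wsClient{wsc.quit: wsc}
//	wsc.addrRequests["<someAddress>"] = struct{}{}
//
// Deleting the last interested client removes the outer map entry entirely.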
// AddClient adds the passed websocket client to the notification manager.
func (m *wsNotificationManager) AddClient(wsc *wsClient) {
m.queueNotification <- (*notificationRegisterClient)(wsc)
}
// RemoveClient removes the passed websocket client and all notifications
// registered for it.
func (m *wsNotificationManager) RemoveClient(wsc *wsClient) {
select {
case m.queueNotification <- (*notificationUnregisterClient)(wsc):
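// Falling through on quit avoids blocking a disconnecting client forever
// when the manager has already been shut down and the queue handler is no
// longer draining queueNotification.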
case <-m.quit:
}
}
// Start starts the goroutines required for the manager to queue and process
// websocket client notifications.
func (m *wsNotificationManager) Start() {
m.wg.Add(2)
go m.queueHandler()
go m.notificationHandler()
}
// WaitForShutdown blocks until all notification manager goroutines have
// finished.
func (m *wsNotificationManager) WaitForShutdown() {
m.wg.Wait()
}
// Shutdown shuts down the manager, stopping the notification queue and
// notification handler goroutines.
func (m *wsNotificationManager) Shutdown() {
close(m.quit)
}
// newWsNotificationManager returns a new notification manager ready for use.
// See wsNotificationManager for more details.
func newWsNotificationManager(server *rpcServer) *wsNotificationManager {
return &wsNotificationManager{
server: server,
queueNotification: make(chan interface{}),
notificationMsgs: make(chan interface{}),
numClients: make(chan int),
quit: make(chan struct{}),
}
}
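// As a rough usage sketch (illustrative only; the wiring lives in the RPC
// server setup, and the rpcServer value is assumed to come from there), the
// manager is constructed once, started, fed clients and notifications, and
// then shut down before waiting for its goroutines to exit:
//
//	ntfnMgr := newWsNotificationManager(rpcServer)
//	ntfnMgr.Start()
//
//	// ... AddClient, RemoveClient, and notification feeds from other
//	// subsystems ...
//
//	ntfnMgr.Shutdown()
//	ntfnMgr.WaitForShutdown()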
// wsResponse houses a message to send to a connected websocket client as
// well as a channel to reply on when the message is sent.
type wsResponse struct {
msg []byte
doneChan chan bool
}
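// A small usage sketch (illustrative only; marshalledReply is assumed to be a
// marshalled JSON-RPC reply): a caller that needs to know when its message has
// been handed to the websocket can pass a buffered done channel and wait on
// it, since messages are queued on sendChan and written by the outHandler.
//
//	done := make(chan bool, 1)
//	c.SendMessage(marshalledReply, done)
//	<-done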
// wsClient provides an abstraction for handling a websocket client. The
// overall data flow is split into 3 main goroutines, a possible 4th goroutine
// for long-running operations (only started if a request is made), and a
// websocket manager which is used to allow things such as broadcasting
// requested notifications to all connected websocket clients. Inbound
// messages are read via the inHandler goroutine and generally dispatched to
// their own handler. However, certain potentially long-running operations,
// such as rescans, are sent to the asyncHandler goroutine and are limited to
// one at a time. There are two outbound message types: one for responding to
// client requests and another for async notifications. Responses to client
// requests use SendMessage, which employs a buffered channel, thereby limiting
// the number of outstanding requests that can be made. Notifications are sent
// via QueueNotification, which implements a queue via notificationQueueHandler
// to ensure sending notifications from other subsystems can't block.
// Ultimately, all messages are sent via the outHandler. A short illustrative
// sketch of the two outbound paths follows the struct definition.
type wsClient struct {
sync.Mutex
// server is the RPC server that is servicing the client.
server *rpcServer
// conn is the underlying websocket connection.
conn *websocket.Conn
// disconnected indicates whether or not the websocket client is
// disconnected.
disconnected bool
// addr is the remote address of the client.
addr string
// authenticated specifies whether a client has been authenticated
// and therefore is allowed to communicate over the websocket.
authenticated bool
// isAdmin specifies whether a client may change the state of the server;
// false means it only has access to the limited set of RPC calls.
isAdmin bool
// sessionID is a random ID generated for each client when connected.
// These IDs may be queried by a client using the session RPC. A change
// to the session ID indicates that the client reconnected.
sessionID uint64
// verboseTxUpdates specifies whether a client has requested verbose
// information about all new transactions.
verboseTxUpdates bool
// addrRequests is a set of addresses the caller has requested to be
// notified about. It is maintained here so all requests can be removed
// when a wallet disconnects. Owned by the notification manager.
addrRequests map[string]struct{}
// spentRequests is a set of unspent Outpoints a wallet has requested
// notifications for when they are spent by a processed transaction.
// Owned by the notification manager.
spentRequests map[wire.OutPoint]struct{}
// Networking infrastructure.
serviceRequestSem semaphore
ntfnChan chan []byte
sendChan chan wsResponse
quit chan struct{}
wg sync.WaitGroup
}
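// To make the outbound split described above concrete, here is an
// illustrative sketch (not additional behavior; the exact QueueNotification
// signature is assumed from the description rather than shown here):
//
//	// Reply path: bounded by the buffered sendChan and drained by outHandler.
//	c.SendMessage(marshalledReply, nil)
//
//	// Notification path: queued by notificationQueueHandler so subsystems
//	// feeding notifications are never blocked by a slow client.
//	c.QueueNotification(marshalledNtfn)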
// inHandler handles all incoming messages for the websocket connection. It
// must be run as a goroutine.
func (c *wsClient) inHandler() {
out:
for {
// Break out of the loop once the quit channel has been closed.
// Use a non-blocking select here so we fall through otherwise.
select {
case <-c.quit:
break out
default:
}
_, msg, err := c.conn.ReadMessage()
if err != nil {
// Log the error if it's not due to disconnecting.
if err != io.EOF {
rpcsLog.Errorf("Websocket receive error from "+
"%s: %v", c.addr, err)
}
break out
}
var request btcjson.Request
err = json.Unmarshal(msg, &request)
if err != nil {
if !c.authenticated {
break out
}
jsonErr := &btcjson.RPCError{
Code: btcjson.ErrRPCParse.Code,
Message: "Failed to parse request: " + err.Error(),
}
reply, err := createMarshalledReply(nil, nil, jsonErr)
if err != nil {
rpcsLog.Errorf("Failed to marshal parse failure "+
"reply: %v", err)
continue
}
c.SendMessage(reply, nil)
continue
}
// The JSON-RPC 1.0 spec defines that notifications must have their "id"
// set to null and states that notifications do not have a response.
//
// A JSON-RPC 2.0 notification is a request with "jsonrpc":"2.0", and
// without an "id" member. The specification states that notifications
// must not be responded to. JSON-RPC 2.0 permits the null value as a
// valid request id, therefore such requests are not notifications.
//
// Bitcoin Core serves requests with "id":null or even an absent "id",
// and responds to such requests with "id":null in the response.
//
// Btcd does not respond to any request without an "id" or with "id":null,
// regardless of the indicated JSON-RPC protocol version, unless RPC quirks
// are enabled. With RPC quirks enabled, such requests will be responded
// to if the request does not indicate a JSON-RPC version.
//
// RPC quirks can be enabled by the user to avoid compatibility issues
// with software relying on Core's behavior.
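// For illustration only, a few hypothetical requests (the "ping" method is
// arbitrary) and how the check below treats them:
//
//	{"jsonrpc":"2.0","method":"ping"}           -> no "id", dropped as a notification
//	{"jsonrpc":"2.0","id":null,"method":"ping"} -> also dropped; a null id unmarshals
//	                                               the same as a missing one and a
//	                                               JSON-RPC version was indicated
//	{"id":null,"method":"ping"}                 -> answered, but only when RPC quirks
//	                                               are enabled
//	{"id":1,"method":"ping"}                    -> always answered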
if request.ID == nil && !(cfg.RPCQuirks && request.Jsonrpc == "") {
if !c.authenticated {
break out
}
continue
}
cmd := parseCmd(&request)
if cmd.err != nil {
if !c.authenticated {
break out
}
reply, err := createMarshalledReply(cmd.id, nil, cmd.err)
if err != nil {
rpcsLog.Errorf("Failed to marshal parse failure "+
"reply: %v", err)
continue
}
c.SendMessage(reply, nil)
continue
}
rpcsLog.Debugf("Received command <%s> from %s", cmd.method, c.addr)
// Check auth. The client is immediately disconnected if the
// first request of an unauthenticated websocket client is not
// the authenticate request, an authenticate request is received
// when the client is already authenticated, or incorrect
// authentication credentials are provided in the request.
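// For example (illustrative only, with made-up credentials), the very first
// message expected from an unauthenticated client looks something like:
//
//	{"jsonrpc":"1.0","id":0,"method":"authenticate",
//	 "params":["rpcuser","rpcpass"]}
//
// where the two parameters are the RPC username and passphrase.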
switch authCmd, ok := cmd.cmd.(*btcjson.AuthenticateCmd); {
case c.authenticated && ok:
rpcsLog.Warnf("Websocket client %s is already authenticated",
c.addr)
break out
case !c.authenticated && !ok:
rpcsLog.Warnf("Unauthenticated websocket message " +
"received")
break out
case !c.authenticated:
// Check credentials.
login := authCmd.Username + ":" + authCmd.Passphrase
auth := "Basic " + base64.StdEncoding.EncodeToString([]byte(login))
authSha := fastsha256.Sum256([]byte(auth))
cmp := subtle.ConstantTimeCompare(authSha[:], c.server.authsha[:])
limitcmp := subtle.ConstantTimeCompare(authSha[:], c.server.limitauthsha[:])
if cmp != 1 && limitcmp != 1 {
rpcsLog.Warnf("Auth failure.")
break out
}
c.authenticated = true
c.isAdmin = cmp == 1
// Marshal and send response.
reply, err := createMarshalledReply(cmd.id, nil, nil)
if err != nil {
rpcsLog.Errorf("Failed to marshal authenticate reply: "+
"%v", err.Error())
continue
}
c.SendMessage(reply, nil)
continue
}
// Check if the client is using limited RPC credentials and
// error when not authorized to call this RPC.
if !c.isAdmin {
if _, ok := rpcLimited[request.Method]; !ok {
jsonErr := &btcjson.RPCError{
Code: btcjson.ErrRPCInvalidParams.Code,
Message: "limited user not authorized for this method",
}
// Marshal and send response.
reply, err := createMarshalledReply(request.ID, nil, jsonErr)
if err != nil {
rpcsLog.Errorf("Failed to marshal parse failure "+
"reply: %v", err)
continue
}
c.SendMessage(reply, nil)
continue
}
}
// Asynchronously handle the request. A semaphore is used to
// limit the number of concurrent requests currently being
// serviced. If the semaphore cannot be acquired, simply wait
// until a request finishes before reading the next RPC request
// from the websocket client.
//
// This could be a little fancier by timing out and erroring
// when it takes too long to service the request, but if that is
// done, the read of the next request should not be blocked by
// this semaphore, otherwise the next request will be read and
// will probably sit here for another few seconds before timing
// out as well. This will cause the total timeout duration for
// later requests to be much longer than the check here would
// imply.
//
// If a timeout is added, the semaphore acquiring should be
// moved inside of the new goroutine with a select statement
// that also reads a time.After channel. This will unblock the
// read of the next request from the websocket client and allow
// many requests to be waited on concurrently.
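// A rough sketch of that alternative, assuming a plain channel-backed
// semaphore (sem chan struct{}) and an arbitrary timeout, rather than the
// actual types used below:
//
//	go func() {
//		select {
//		case sem <- struct{}{}:
//			c.serviceRequest(cmd)
//			<-sem
//		case <-time.After(time.Minute):
//			// Marshal and send a "server busy" style error reply
//			// instead of servicing the request.
//		}
//	}()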
c.serviceRequestSem.acquire()
go func() {
c.serviceRequest(cmd)
c.serviceRequestSem.release()
}()
}
// Ensure the connection is closed.
c.Disconnect()
c.wg.Done()
rpcsLog.Tracef("Websocket client input handler done for %s", c.addr)
}
// serviceRequest services a parsed RPC request by looking up and executing the
// appropriate RPC handler. The response is marshalled and sent to the
// websocket client.
func (c *wsClient) serviceRequest(r *parsedRPCCmd) {
var (
result interface{}
err error
)
// Look up the websocket extension for the command, and if it doesn't
// exist, fall back to handling the command as a standard command.
wsHandler, ok := wsHandlers[r.method]
if ok {
result, err = wsHandler(c, r.cmd)
} else {
result, err = c.server.standardCmdResult(r, nil)
}
reply, err := createMarshalledReply(r.id, result, err)
if err != nil {
rpcsLog.Errorf("Failed to marshal reply for <%s> "+
"command: %v", r.method, err)
return
}
c.SendMessage(reply, nil)
}
// notificationQueueHandler handles the queuing of outgoing notifications for
// the websocket client. This runs as a muxer for various sources of input to
// ensure that queuing up notifications to be sent will not block. Otherwise,
// slow clients could bog down the other systems (such as the mempool or block
// manager) which are queuing the data. The data is passed on to outHandler to
// actually be written. It must be run as a goroutine.
func (c *wsClient) notificationQueueHandler() {
ntfnSentChan := make(chan bool, 1) // nonblocking sync
// pendingNtfns is used as a queue for notifications that are ready to
// be sent once there are no outstanding notifications currently being
// sent. The waiting flag is used over simply checking for items in the
// pending list to ensure cleanup knows what has and hasn't been sent
// to the outHandler. Currently no special cleanup is needed, however
// if something like a done channel is added to notifications in the
// future, not knowing what has and hasn't been sent to the outHandler
// (and thus who should respond to the done channel) would be
// problematic without using this approach.
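// As an illustrative trace of the loop below: if notifications A and B
// arrive back-to-back, A is handed to SendMessage immediately and waiting is
// set; B is pushed onto pendingNtfns; when ntfnSentChan fires for A, B is
// popped and sent; when it fires again for B, the pending list is empty and
// waiting is cleared.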
pendingNtfns := list.New()
waiting := false
out:
for {
select {
// This channel is notified when a message is being queued to
// be sent across the network socket. It will either send the
// message immediately if a send is not already in progress, or
// queue the message to be sent once the other pending messages
// are sent.
case msg := <-c.ntfnChan:
if !waiting {
c.SendMessage(msg, ntfnSentChan)
} else {
pendingNtfns.PushBack(msg)
}
waiting = true
// This channel is notified when a notification has been sent
// across the network socket.
case <-ntfnSentChan:
// No longer waiting if there are no more messages in
// the pending messages queue.
next := pendingNtfns.Front()
if next == nil {
waiting = false
continue
}
// Notify the outHandler about the next item to
// asynchronously send.
msg := pendingNtfns.Remove(next).([]byte)
c.SendMessage(msg, ntfnSentChan)
case <-c.quit:
break out
}
}
// Drain any wait channels before exiting so nothing is left waiting
// around to send.
cleanup:
for {
select {
case <-c.ntfnChan:
case <-ntfnSentChan:
default:
break cleanup
}
}
c.wg.Done()
rpcsLog.Tracef("Websocket client notification queue handler done "+
"for %s", c.addr)
}
// outHandler handles all outgoing messages for the websocket connection. It
// uses a buffered channel to serialize output messages while allowing the
// sender to continue running asynchronously. It must be run as a goroutine.
func (c *wsClient) outHandler() {
out:
for {
// Send any messages ready to be sent until the quit channel is
// closed.
select {
case r := <-c.sendChan:
err := c.conn.WriteMessage(websocket.TextMessage, r.msg)
if err != nil {
c.Disconnect()
break out
}
if r.doneChan != nil {
r.doneChan <- true
}
case <-c.quit:
break out
}
}
// Drain any wait channels before exiting so nothing is left waiting
// around to send.
cleanup:
for {
select {
case r := <-c.sendChan:
if r.doneChan != nil {
r.doneChan <- false
}
default:
break cleanup
}
}
c.wg.Done()
rpcsLog.Tracef("Websocket client output handler done for %s", c.addr)
}
// SendMessage sends the passed json to the websocket client. It is backed
// by a buffered channel, so it will not block until the send channel is full.
// Note, however, that QueueNotification must be used instead of this function
// for sending async notifications. This approach allows a limit to
// the number of outstanding requests a client can make without preventing or
// blocking on async notifications.
func (c *wsClient) SendMessage(marshalledJSON []byte, doneChan chan bool) {
// Don't send the message if disconnected.
if c.Disconnected() {
if doneChan != nil {
doneChan <- false
}
return
}
c.sendChan <- wsResponse{msg: marshalledJSON, doneChan: doneChan}
}
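
// The following is an illustrative sketch and not part of the original file:
// it shows how a caller might use SendMessage together with a done channel to
// wait for the result of the send. The helper name is hypothetical, and it is
// assumed here that the output handler signals true on the channel once the
// message has been written and false if the client disconnects first.
func exampleSendAndWait(c *wsClient, marshalledJSON []byte) bool {
	// Buffered so the output handler can signal without blocking even if
	// the caller stops waiting.
	done := make(chan bool, 1)
	c.SendMessage(marshalledJSON, done)
	return <-done
}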
// ErrClientQuit describes the error where a client send is not processed due
// to the client having already been disconnected or dropped.
var ErrClientQuit = errors.New("client quit")
// QueueNotification queues the passed notification to be sent to the websocket
// client. This function, as the name implies, is only intended for
// notifications since it has additional logic to prevent other subsystems, such
// as the memory pool and block manager, from blocking even when the send
// channel is full.
//
// If the client is in the process of shutting down, this function returns
// ErrClientQuit. This is intended to be checked by long-running notification
// handlers so they can stop processing when there is no more work to be done.
func (c *wsClient) QueueNotification(marshalledJSON []byte) error {
// Don't queue the message if disconnected.
if c.Disconnected() {
return ErrClientQuit
}
c.ntfnChan <- marshalledJSON
return nil
}
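
// Illustrative sketch only (not part of the original file): a long-running
// notifier, fed by a hypothetical channel of pre-marshalled notifications,
// that uses QueueNotification and stops cleanly once the client reports
// ErrClientQuit.
func exampleNotifyLoop(c *wsClient, notifications <-chan []byte) {
	for marshalledJSON := range notifications {
		if err := c.QueueNotification(marshalledJSON); err == ErrClientQuit {
			// The client is shutting down; no further work is needed.
			return
		}
	}
}
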
// Disconnected returns whether or not the websocket client is disconnected.
func (c *wsClient) Disconnected() bool {
c.Lock()
isDisconnected := c.disconnected
c.Unlock()
return isDisconnected
}
// Disconnect disconnects the websocket client.
func (c *wsClient) Disconnect() {
c.Lock()
defer c.Unlock()
// Nothing to do if already disconnected.
if c.disconnected {
return
}
rpcsLog.Tracef("Disconnecting websocket client %s", c.addr)
close(c.quit)
c.conn.Close()
c.disconnected = true
}
// Start begins processing input and output messages.
func (c *wsClient) Start() {
rpcsLog.Tracef("Starting websocket client %s", c.addr)
// Start processing input and output.
c.wg.Add(3)
go c.inHandler()
go c.notificationQueueHandler()
go c.outHandler()
}
// WaitForShutdown blocks until the websocket client goroutines are stopped
// and the connection is closed.
func (c *wsClient) WaitForShutdown() {
c.wg.Wait()
}
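
// Illustrative sketch only (not part of the original file): the expected
// lifecycle of a client from the caller's point of view. The shutdown channel
// is a hypothetical external signal; the real server decides when to call
// Disconnect.
func exampleRunClient(c *wsClient, shutdown <-chan struct{}) {
	c.Start()
	<-shutdown
	c.Disconnect()
	c.WaitForShutdown()
}
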
// newWebsocketClient returns a new websocket client given the notification
// manager, websocket connection, remote address, and whether or not the client
// has already been authenticated (via HTTP Basic access authentication). The
// returned client is ready to start. Once started, the client will process
// incoming and outgoing messages in separate goroutines complete with queuing
// and asynchronous handling for long-running operations.
func newWebsocketClient(server *rpcServer, conn *websocket.Conn,
remoteAddr string, authenticated bool, isAdmin bool) (*wsClient, error) {
sessionID, err := wire.RandomUint64()
if err != nil {
return nil, err
}
client := &wsClient{
conn: conn,
addr: remoteAddr,
authenticated: authenticated,
isAdmin: isAdmin,
sessionID: sessionID,
server: server,
addrRequests: make(map[string]struct{}),
spentRequests: make(map[wire.OutPoint]struct{}),
serviceRequestSem: makeSemaphore(cfg.RPCMaxConcurrentReqs),
ntfnChan: make(chan []byte, 1), // nonblocking sync
sendChan: make(chan wsResponse, websocketSendBufferSize),
quit: make(chan struct{}),
}
return client, nil
}
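
// Illustrative sketch only (not part of the original file): a common way to
// implement the kind of counting semaphore that serviceRequestSem provides.
// The real semaphore type and makeSemaphore are defined elsewhere in this
// file; the names below are hypothetical stand-ins. A buffered channel of
// empty structs bounds how many long-running service requests may run
// concurrently (the buffer size would come from cfg.RPCMaxConcurrentReqs).
type exampleSemaphore chan struct{}

func exampleAcquireAndRun(sem exampleSemaphore, work func()) {
	sem <- struct{}{} // blocks once the configured limit is reached
	go func() {
		defer func() { <-sem }() // release the slot when the work finishes
		work()
	}()
}
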
// handleWebsocketHelp implements the help command for websocket connections.
func handleWebsocketHelp(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.HelpCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
// Provide a usage overview of all commands when no specific command
// was specified.
var command string
if cmd.Command != nil {
command = *cmd.Command
}
if command == "" {
usage, err := wsc.server.helpCacher.rpcUsage(true)
if err != nil {
context := "Failed to generate RPC usage"
return nil, internalRPCError(err.Error(), context)
}
return usage, nil
}
// Check that the command asked for is supported and implemented.
// Search the list of websocket handlers as well as the main list of
// handlers since help should only be provided for commands in those lists.
valid := true
if _, ok := rpcHandlers[command]; !ok {
if _, ok := wsHandlers[command]; !ok {
valid = false
}
}
if !valid {
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCInvalidParameter,
Message: "Unknown command: " + command,
}
}
// Get the help for the command.
help, err := wsc.server.helpCacher.rpcMethodHelp(command)
if err != nil {
context := "Failed to generate help"
return nil, internalRPCError(err.Error(), context)
}
return help, nil
}
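
// Illustrative sketch only (not part of the original file): invoking the help
// handler directly with a manually constructed *btcjson.HelpCmd. In the
// running server the command arrives as a JSON-RPC request over the websocket
// and is dispatched to this handler; the direct call and the chosen command
// name are assumptions made purely for demonstration.
func exampleWebsocketHelp(wsc *wsClient) (interface{}, error) {
	command := "session"
	return handleWebsocketHelp(wsc, &btcjson.HelpCmd{Command: &command})
}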
// handleNotifyBlocks implements the notifyblocks command extension for
// websocket connections.
func handleNotifyBlocks(wsc *wsClient, icmd interface{}) (interface{}, error) {
wsc.server.ntfnMgr.RegisterBlockUpdates(wsc)
return nil, nil
}
// handleSession implements the session command extension for websocket
// connections.
func handleSession(wsc *wsClient, icmd interface{}) (interface{}, error) {
return &btcjson.SessionResult{SessionID: wsc.sessionID}, nil
}
// handleStopNotifyBlocks implements the stopnotifyblocks command extension for
// websocket connections.
func handleStopNotifyBlocks(wsc *wsClient, icmd interface{}) (interface{}, error) {
wsc.server.ntfnMgr.UnregisterBlockUpdates(wsc)
return nil, nil
}
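
// Illustrative sketch only (not part of the original file): registering and
// then unregistering block notifications for a client by calling the two
// handlers directly. Both handlers ignore their command argument, so nil is
// passed here; in practice they are invoked by the client's request dispatch.
func exampleToggleBlockNtfns(wsc *wsClient) error {
	if _, err := handleNotifyBlocks(wsc, nil); err != nil {
		return err
	}
	_, err := handleStopNotifyBlocks(wsc, nil)
	return err
}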
// handleNotifySpent implements the notifyspent command extension for
// websocket connections.
func handleNotifySpent(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.NotifySpentCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
outpoints, err := deserializeOutpoints(cmd.OutPoints)
if err != nil {
return nil, err
}
wsc.server.ntfnMgr.RegisterSpentRequests(wsc, outpoints)
return nil, nil
}
// handleNotifyNewTransactions implements the notifynewtransactions command
// extension for websocket connections.
func handleNotifyNewTransactions(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.NotifyNewTransactionsCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
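// An omitted Verbose flag is treated the same as an explicit false, so only
// a true value enables verbose mempool transaction updates for this client.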
wsc.verboseTxUpdates = cmd.Verbose != nil && *cmd.Verbose
wsc.server.ntfnMgr.RegisterNewMempoolTxsUpdates(wsc)
return nil, nil
}
// handleStopNotifyNewTransactions implements the stopnotifynewtransactions
// command extension for websocket connections.
func handleStopNotifyNewTransactions(wsc *wsClient, icmd interface{}) (interface{}, error) {
wsc.server.ntfnMgr.UnregisterNewMempoolTxsUpdates(wsc)
return nil, nil
}
// handleNotifyReceived implements the notifyreceived command extension for
// websocket connections.
func handleNotifyReceived(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.NotifyReceivedCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
// Decode addresses to validate input, but the string slice is used
// directly if they are all valid.
err := checkAddressValidity(cmd.Addresses)
if err != nil {
return nil, err
}
wsc.server.ntfnMgr.RegisterTxOutAddressRequests(wsc, cmd.Addresses)
return nil, nil
}
// handleStopNotifySpent implements the stopnotifyspent command extension for
// websocket connections.
func handleStopNotifySpent(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.StopNotifySpentCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
outpoints, err := deserializeOutpoints(cmd.OutPoints)
if err != nil {
return nil, err
}
for _, outpoint := range outpoints {
wsc.server.ntfnMgr.UnregisterSpentRequest(wsc, outpoint)
}
return nil, nil
}
// handleStopNotifyReceived implements the stopnotifyreceived command extension
// for websocket connections.
func handleStopNotifyReceived(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.StopNotifyReceivedCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
// Decode addresses to validate input, but the string slice is used
// directly if they are all valid.
err := checkAddressValidity(cmd.Addresses)
if err != nil {
return nil, err
}
for _, addr := range cmd.Addresses {
wsc.server.ntfnMgr.UnregisterTxOutAddressRequest(wsc, addr)
}
return nil, nil
}
// checkAddressValidity checks the validity of each address in the passed
// string slice. It does this by attempting to decode each address using the
// current active network parameters. If any single address fails to decode
// properly, the function returns an error. Otherwise, nil is returned.
func checkAddressValidity(addrs []string) error {
for _, addr := range addrs {
_, err := btcutil.DecodeAddress(addr, activeNetParams.Params)
if err != nil {
return &btcjson.RPCError{
Code: btcjson.ErrRPCInvalidAddressOrKey,
Message: fmt.Sprintf("Invalid address or key: %v",
addr),
}
}
}
return nil
}
// deserializeOutpoints deserializes each serialized outpoint.
func deserializeOutpoints(serializedOuts []btcjson.OutPoint) ([]*wire.OutPoint, error) {
outpoints := make([]*wire.OutPoint, 0, len(serializedOuts))
for i := range serializedOuts {
blockHash, err := chainhash.NewHashFromStr(serializedOuts[i].Hash)
if err != nil {
return nil, rpcDecodeHexError(serializedOuts[i].Hash)
}
index := serializedOuts[i].Index
outpoints = append(outpoints, wire.NewOutPoint(blockHash, index))
}
return outpoints, nil
}
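// Illustrative sketch (not part of the original file): a caller-supplied
// btcjson.OutPoint pairs a hex-encoded transaction hash with an output index,
// and deserializeOutpoints converts the slice to wire outpoints before they
// are handed to the notification manager. The all-zero hash below is only a
// placeholder.
//
//	serialized := []btcjson.OutPoint{{
//		Hash:  "0000000000000000000000000000000000000000000000000000000000000000",
//		Index: 0,
//	}}
//	outpoints, err := deserializeOutpoints(serialized)
//	if err != nil {
//		return nil, err // the hash did not parse as a valid hash string
//	}
//	wsc.server.ntfnMgr.RegisterSpentRequests(wsc, outpoints)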
type rescanKeys struct {
fallbacks map[string]struct{}
pubKeyHashes map[[ripemd160.Size]byte]struct{}
scriptHashes map[[ripemd160.Size]byte]struct{}
compressedPubKeys map[[33]byte]struct{}
uncompressedPubKeys map[[65]byte]struct{}
unspent map[wire.OutPoint]struct{}
}
// unspentSlice returns a slice of currently-unspent outpoints for the rescan
// lookup keys. This is primarily intended to be used to register outpoints
// for continuous notifications after a rescan has completed.
func (r *rescanKeys) unspentSlice() []*wire.OutPoint {
ops := make([]*wire.OutPoint, 0, len(r.unspent))
for op := range r.unspent {
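// Copy the map key before taking its address so each appended pointer
// refers to a distinct outpoint rather than the reused loop variable.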
opCopy := op
ops = append(ops, &opCopy)
}
return ops
}
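// Illustrative sketch (assumed caller, not part of the original file): once a
// rescan completes, the outpoints that are still unspent can be registered
// for continuous spend notifications in one call, e.g.
//
//	wsc.server.ntfnMgr.RegisterSpentRequests(wsc, lookups.unspentSlice())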
// ErrRescanReorg defines the error that is returned when an unrecoverable
// reorganize is detected during a rescan.
var ErrRescanReorg = btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Reorganize",
}
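// Illustrative sketch (not part of the original file): rescan code would
// typically return a pointer to this error when a freshly fetched block no
// longer connects to the previously processed one; lastBlockHash is an
// assumed variable holding the hash of that previous block.
//
//	if !blk.MsgBlock().Header.PrevBlock.IsEqual(lastBlockHash) {
//		return nil, &ErrRescanReorg
//	}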
// rescanBlock rescans all transactions in a single block. This is a helper
// function for handleRescan.
func rescanBlock(wsc *wsClient, lookups *rescanKeys, blk *btcutil.Block) {
for _, tx := range blk.Transactions() {
// Hexadecimal representation of this tx. Only created if
// needed, and reused for later notifications if already made.
var txHex string
// All inputs and outputs must be iterated through to correctly
// modify the unspent map, however, just a single notification
// for any matching transaction inputs or outputs should be
// created and sent.
spentNotified := false
recvNotified := false
for _, txin := range tx.MsgTx().TxIn {
if _, ok := lookups.unspent[txin.PreviousOutPoint]; ok {
delete(lookups.unspent, txin.PreviousOutPoint)
if spentNotified {
continue
}
if txHex == "" {
txHex = txHexString(tx)
}
marshalledJSON, err := newRedeemingTxNotification(txHex, tx.Index(), blk)
if err != nil {
rpcsLog.Errorf("Failed to marshal redeemingtx notification: %v", err)
continue
}
err = wsc.QueueNotification(marshalledJSON)
// Stop the rescan early if the websocket client
// disconnected.
if err == ErrClientQuit {
return
}
spentNotified = true
}
}
for txOutIdx, txout := range tx.MsgTx().TxOut {
_, addrs, _, _ := txscript.ExtractPkScriptAddrs(
txout.PkScript, wsc.server.server.chainParams)
for _, addr := range addrs {
switch a := addr.(type) {
case *btcutil.AddressPubKeyHash:
if _, ok := lookups.pubKeyHashes[*a.Hash160()]; !ok {
continue
}
case *btcutil.AddressScriptHash:
if _, ok := lookups.scriptHashes[*a.Hash160()]; !ok {
continue
}
case *btcutil.AddressPubKey:
found := false
switch sa := a.ScriptAddress(); len(sa) {
case 33: // Compressed
var key [33]byte
copy(key[:], sa)
if _, ok := lookups.compressedPubKeys[key]; ok {
found = true
}
case 65: // Uncompressed
var key [65]byte
copy(key[:], sa)
if _, ok := lookups.uncompressedPubKeys[key]; ok {
found = true
}
default:
rpcsLog.Warnf("Skipping rescanned pubkey of unknown "+
"serialized length %d", len(sa))
continue
}
// If the transaction output pays to the pubkey of
// a rescanned P2PKH address, include it as well.
if !found {
pkh := a.AddressPubKeyHash()
if _, ok := lookups.pubKeyHashes[*pkh.Hash160()]; !ok {
continue
}
}
default:
// A new address type must have been added. Encode as a
// payment address string and check the fallback map.
addrStr := addr.EncodeAddress()
_, ok := lookups.fallbacks[addrStr]
if !ok {
continue
}
}
outpoint := wire.OutPoint{
Hash: *tx.Hash(),
Index: uint32(txOutIdx),
}
lookups.unspent[outpoint] = struct{}{}
if recvNotified {
continue
}
if txHex == "" {
txHex = txHexString(tx)
}
ntfn := btcjson.NewRecvTxNtfn(txHex,
blockDetails(blk, tx.Index()))
marshalledJSON, err := btcjson.MarshalCmd(nil, ntfn)
if err != nil {
rpcsLog.Errorf("Failed to marshal recvtx notification: %v", err)
return
}
err = wsc.QueueNotification(marshalledJSON)
// Stop the rescan early if the websocket client
// disconnected.
if err == ErrClientQuit {
return
}
recvNotified = true
}
}
}
}
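// Illustrative sketch (assumed driver loop, not part of the original file):
// rescan handling code fetches each block in the requested range from the
// chain instance and hands it to rescanBlock, bailing out early once the
// client has disconnected. The chain, blockHashes, and lookups identifiers
// are assumptions here, not names taken from this file.
//
//	for i := range blockHashes {
//		blk, err := chain.BlockByHash(&blockHashes[i])
//		if err != nil {
//			rpcsLog.Errorf("Unable to fetch rescanned block %v: %v",
//				blockHashes[i], err)
//			return nil, &btcjson.RPCError{
//				Code:    btcjson.ErrRPCDatabase,
//				Message: "Failed to fetch block: " + err.Error(),
//			}
//		}
//		rescanBlock(wsc, &lookups, blk)
//		if wsc.Disconnected() {
//			return nil, nil
//		}
//	}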
// recoverFromReorg attempts to recover from a detected reorganize during a
// rescan. It fetches a new range of block hashes from the database and
// verifies that the new range of blocks is on the same fork as a previous
// range of blocks. If this condition does not hold true, the JSON-RPC error
// for an unrecoverable reorganize is returned.
func recoverFromReorg(chain *blockchain.BlockChain, minBlock, maxBlock int32,
lastBlock *chainhash.Hash) ([]chainhash.Hash, error) {
hashList, err := chain.HeightRange(minBlock, maxBlock)
if err != nil {
rpcsLog.Errorf("Error looking up block range: %v", err)
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Database error: " + err.Error(),
}
}
if lastBlock == nil || len(hashList) == 0 {
return hashList, nil
}
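// Verify that the new hashList is on the same fork as the last block
// that was processed from the previous range before returning it.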
blk, err := chain.BlockByHash(&hashList[0])
if err != nil {
rpcsLog.Errorf("Error looking up possibly reorged block: %v",
err)
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Database error: " + err.Error(),
}
}
jsonErr := descendantBlock(lastBlock, blk)
if jsonErr != nil {
return nil, jsonErr
}
return hashList, nil
}
// descendantBlock returns the appropriate JSON-RPC error if a current block
// fetched during a reorganize is not a direct child of the parent block hash.
func descendantBlock(prevHash *chainhash.Hash, curBlock *btcutil.Block) error {
curHash := &curBlock.MsgBlock().Header.PrevBlock
if !prevHash.IsEqual(curHash) {
rpcsLog.Errorf("Stopping rescan for reorged block %v "+
"(replaced by block %v)", prevHash, curHash)
return &ErrRescanReorg
}
return nil
}
// handleRescan implements the rescan command extension for websocket
// connections.
//
// NOTE: This does not smartly handle reorgs, and fixing that requires
// database changes (for safe, concurrent access to full block ranges, and
// support for chains other than the best chain). It will, however, detect
// whether a reorg removed a block that was previously processed, in which
// case the handler errors. Clients must handle this by finding a block
// still in the chain (perhaps from a rescanprogress notification) to
// resume their rescan.
func handleRescan(wsc *wsClient, icmd interface{}) (interface{}, error) {
cmd, ok := icmd.(*btcjson.RescanCmd)
if !ok {
return nil, btcjson.ErrRPCInternal
}
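// Decode the serialized outpoints from the request so they can be
// added to the set of unspent outputs to watch.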
outpoints := make([]*wire.OutPoint, 0, len(cmd.OutPoints))
for i := range cmd.OutPoints {
cmdOutpoint := &cmd.OutPoints[i]
txHash, err := chainhash.NewHashFromStr(cmdOutpoint.Hash)
if err != nil {
return nil, rpcDecodeHexError(cmdOutpoint.Hash)
}
outpoint := wire.NewOutPoint(txHash, cmdOutpoint.Index)
outpoints = append(outpoints, outpoint)
}
numAddrs := len(cmd.Addresses)
if numAddrs == 1 {
rpcsLog.Info("Beginning rescan for 1 address")
} else {
rpcsLog.Infof("Beginning rescan for %d addresses", numAddrs)
}
// Build lookup maps.
lookups := rescanKeys{
fallbacks: map[string]struct{}{},
pubKeyHashes: map[[ripemd160.Size]byte]struct{}{},
scriptHashes: map[[ripemd160.Size]byte]struct{}{},
compressedPubKeys: map[[33]byte]struct{}{},
uncompressedPubKeys: map[[65]byte]struct{}{},
unspent: map[wire.OutPoint]struct{}{},
}
var compressedPubkey [33]byte
var uncompressedPubkey [65]byte
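// Decode each rescan address and add it to the lookup map that
// matches its type.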
for _, addrStr := range cmd.Addresses {
addr, err := btcutil.DecodeAddress(addrStr, activeNetParams.Params)
if err != nil {
jsonErr := btcjson.RPCError{
Code: btcjson.ErrRPCInvalidAddressOrKey,
Message: "Rescan address " + addrStr + ": " +
err.Error(),
}
return nil, &jsonErr
}
switch a := addr.(type) {
case *btcutil.AddressPubKeyHash:
lookups.pubKeyHashes[*a.Hash160()] = struct{}{}
case *btcutil.AddressScriptHash:
lookups.scriptHashes[*a.Hash160()] = struct{}{}
case *btcutil.AddressPubKey:
pubkeyBytes := a.ScriptAddress()
switch len(pubkeyBytes) {
case 33: // Compressed
copy(compressedPubkey[:], pubkeyBytes)
lookups.compressedPubKeys[compressedPubkey] = struct{}{}
case 65: // Uncompressed
copy(uncompressedPubkey[:], pubkeyBytes)
lookups.uncompressedPubKeys[uncompressedPubkey] = struct{}{}
default:
jsonErr := btcjson.RPCError{
Code: btcjson.ErrRPCInvalidAddressOrKey,
Message: "Pubkey " + addrStr + " is of unknown length",
}
return nil, &jsonErr
}
default:
// A new address type must have been added. Use encoded
// payment address string as a fallback until a fast path
// is added.
lookups.fallbacks[addrStr] = struct{}{}
}
}
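// Add the requested outpoints to the set of unspent outputs being
// watched.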
for _, outpoint := range outpoints {
lookups.unspent[*outpoint] = struct{}{}
}
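// Look up the height of the block where the rescan begins.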
chain := wsc.server.chain
minBlockHash, err := chainhash.NewHashFromStr(cmd.BeginBlock)
if err != nil {
return nil, rpcDecodeHexError(cmd.BeginBlock)
}
minBlock, err := chain.BlockHeightByHash(minBlockHash)
if err != nil {
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCBlockNotFound,
Message: "Error getting block: " + err.Error(),
}
}
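// Default the end of the rescan to the maximum possible height and,
// when an end block is specified, replace it with that block's height.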
maxBlock := int32(math.MaxInt32)
if cmd.EndBlock != nil {
maxBlockHash, err := chainhash.NewHashFromStr(*cmd.EndBlock)
if err != nil {
return nil, rpcDecodeHexError(*cmd.EndBlock)
}
maxBlock, err = chain.BlockHeightByHash(maxBlockHash)
if err != nil {
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCBlockNotFound,
Message: "Error getting block: " + err.Error(),
}
}
}
// lastBlock and lastBlockHash track the previously-rescanned block.
// They equal nil when no previous blocks have been rescanned.
var lastBlock *btcutil.Block
var lastBlockHash *chainhash.Hash
// A ticker is created so that at least 10 seconds elapse between
// progress notifications sent to the websocket client during the rescan.
ticker := time.NewTicker(10 * time.Second)
defer ticker.Stop()
// Instead of fetching all block hashes at once, fetch in smaller chunks
// to ensure large rescans consume a limited amount of memory.
fetchRange:
for minBlock < maxBlock {
// Limit the max number of hashes to fetch at once to the
// maximum number of items allowed in a single inventory.
// This value could be higher since it's not creating inventory
// messages, but this mirrors the limiting logic used in the
// peer-to-peer protocol.
maxLoopBlock := maxBlock
if maxLoopBlock-minBlock > wire.MaxInvPerMsg {
maxLoopBlock = minBlock + wire.MaxInvPerMsg
}
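// Fetch the hashes of the main chain blocks in the current chunk of
// heights (at most wire.MaxInvPerMsg per iteration).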
hashList, err := chain.HeightRange(minBlock, maxLoopBlock)
if err != nil {
rpcsLog.Errorf("Error looking up block range: %v", err)
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Database error: " + err.Error(),
}
}
if len(hashList) == 0 {
// The rescan is finished if no block hashes for this
// range were successfully fetched and a stop block
// was provided.
if maxBlock != math.MaxInt32 {
break
}
// If the rescan is through the current block, set up
// the client to continue to receive notifications
// regarding all rescanned addresses and the current set
// of unspent outputs.
//
// This is done safely by temporarily grabbing exclusive
// access to the block manager. If no more blocks have
// been attached between this pause and the fetch above,
// then it is safe to register the websocket client for
// continuous notifications if necessary. Otherwise,
// continue the fetch loop again to rescan the new
// blocks (or error due to an irrecoverable reorganize).
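// Pause block processing while comparing the chain tip against the
// last rescanned block; the guard channel is closed below to resume.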
blockManager := wsc.server.server.blockManager
pauseGuard := blockManager.Pause()
best := blockManager.chain.BestSnapshot()
curHash := best.Hash
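// again remains true only if new blocks were attached while the range
// above was being fetched; otherwise it is safe to register the client
// for ongoing notifications.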
again := true
if lastBlockHash == nil || *lastBlockHash == *curHash {
again = false
n := wsc.server.ntfnMgr
n.RegisterSpentRequests(wsc, lookups.unspentSlice())
n.RegisterTxOutAddressRequests(wsc, cmd.Addresses)
}
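// Resume block processing.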
close(pauseGuard)
if err != nil {
rpcsLog.Errorf("Error fetching best block "+
"hash: %v", err)
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Database error: " +
err.Error(),
}
}
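// The chain tip moved while paused, so loop again to fetch and rescan
// the new blocks.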
if again {
continue
}
break
}
loopHashList:
for i := range hashList {
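// Attempt to load the full block for the next hash in the range.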
blk, err := chain.BlockByHash(&hashList[i])
if err != nil {
// Only handle reorgs if a block could not be
// found for the hash.
if dbErr, ok := err.(database.Error); !ok ||
dbErr.ErrorCode != database.ErrBlockNotFound {
rpcsLog.Errorf("Error looking up "+
"block: %v", err)
return nil, &btcjson.RPCError{
Code: btcjson.ErrRPCDatabase,
Message: "Database error: " +
err.Error(),
}
}
// If an absolute max block was specified, don't
// attempt to handle the reorg.
if maxBlock != math.MaxInt32 {
rpcsLog.Errorf("Stopping rescan for "+
"reorged block %v",
cmd.EndBlock)
return nil, &ErrRescanReorg
}
// If the lookup for the previously valid block
// hash failed, there may have been a reorg.
// Fetch a new range of block hashes and verify
// that the previously processed block (if there
// was any) still exists in the database. If it
// doesn't, we error.
//
// A goto is used to branch execution back to
// before the range was evaluated, as it must be
// reevaluated for the new hashList.
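// Restart the range at the height of the block that failed to load.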
minBlock += int32(i)
hashList, err = recoverFromReorg(chain,
minBlock, maxBlock, lastBlockHash)
if err != nil {
return nil, err
}
if len(hashList) == 0 {
break fetchRange
}
goto loopHashList
}
if i == 0 && lastBlockHash != nil {
// Ensure the new hashList is on the same fork
// as the last block from the old hashList.
jsonErr := descendantBlock(lastBlockHash, blk)
if jsonErr != nil {
return nil, jsonErr
}
}
// A select statement is used to stop rescans if the
// client requesting the rescan has disconnected.
select {
case <-wsc.quit:
rpcsLog.Debugf("Stopped rescan at height %v "+
"for disconnected client", blk.Height())
return nil, nil
default:
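				// Scan the block for relevant transactions and
				// remember it as the most recently processed
				// block.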
rescanBlock(wsc, &lookups, blk)
lastBlock = blk
lastBlockHash = blk.Hash()
}
			// Periodically notify the client of the rescan progress.
			// Continue with the next block if no progress notification
			// is needed yet.
select {
case <-ticker.C: // fallthrough
default:
continue
}
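			// A progress notification is due, so marshal a
			// rescanprogress notification for the current block and
			// queue it to the client.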
n := btcjson.NewRescanProgressNtfn(hashList[i].String(),
int32(blk.Height()),
blk.MsgBlock().Header.Timestamp.Unix())
mn, err := btcjson.MarshalCmd(nil, n)
if err != nil {
rpcsLog.Errorf("Failed to marshal rescan "+
"progress notification: %v", err)
continue
}
if err = wsc.QueueNotification(mn); err == ErrClientQuit {
// Finished if the client disconnected.
rpcsLog.Debugf("Stopped rescan at height %v "+
"for disconnected client", blk.Height())
return nil, nil
}
}
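		// Advance the range beyond the blocks processed in this batch
		// before fetching the next set of hashes.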
minBlock += int32(len(hashList))
}
// Notify websocket client of the finished rescan. Due to how btcd
// asynchronously queues notifications to not block calling code,
// there is no guarantee that any of the notifications created during
// rescan (such as rescanprogress, recvtx and redeemingtx) will be
// received before the rescan RPC returns. Therefore, another method
// is needed to safely inform clients that all rescan notifications have
// been sent.
n := btcjson.NewRescanFinishedNtfn(lastBlockHash.String(),
lastBlock.Height(),
lastBlock.MsgBlock().Header.Timestamp.Unix())
if mn, err := btcjson.MarshalCmd(nil, n); err != nil {
rpcsLog.Errorf("Failed to marshal rescan finished "+
"notification: %v", err)
} else {
		// The rescan is finished, so it no longer matters whether the
		// client has disconnected at this point; discard any error from
		// queueing the notification.
_ = wsc.QueueNotification(mn)
}
rpcsLog.Info("Finished rescan")
return nil, nil
}
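
// init installs the websocket command handlers.  Assigning them here rather
// than at the declaration of wsHandlersBeforeInit avoids an initialization
// dependency cycle.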
func init() {
wsHandlers = wsHandlersBeforeInit
}