3.1.4 Component-Based Architecture

The basis of the platform is the node software, built from shared modules grouped into internal services. The node architecture has the following features:

  • Networking is handled by ZeroMQ, a high-performance asynchronous messaging library that supports the organization of distributed and parallel computing.

  • The node-level consensus mechanism (Pluggable Consensus Algorithms) is implemented as a separate service and allows for the configuration of validation rules and voting mechanisms.

  • Transaction processing is delegated to a separate component, the transaction processor. The network can simultaneously process various types of transactions, called transaction families. Each transaction family is processed differently, has its own data formats, and has its own set of validation rules. Although these rules are tied to the transaction family (that is, to its transaction processor), the part of the node that implements the general verification procedures and interaction with other nodes is called the Validator.

  • Clients interact with the node through the REST API: they can receive information about the state of the network and the status of transactions, as well as submit (“throw in”) new transactions. This supports the micro-service architecture (a minimal client sketch follows this list). Clients include:

DASHBOARD (Explorer): a specialized web portal that provides information about the state of the node and the network.

CLI (Command Line Interface): a console that encapsulates individual node management commands.

Applications that implement the logic of working with the end node (for example, a wallet with tokens, a store of digital assets, or an external system acting as a source of information).

  • Each service inside a node runs in a separate Docker container, which allows the node to be represented as a set of connected virtual machines that collectively form a virtual supercomputer.

  • The ledger shared among all nodes is stored in a separate database, which allows quick access to all transaction records (LMDB is used in the basic version; it is focused on storing key-value pairs in memory, and the DAG is implemented on top of the database). The ledger is the key element of the storage subsystem, and the main efforts of the network are spent on synchronization and collision resolution. The ledger stores not only the transaction data itself but also general settings and topology data.
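Since clients reach the node through the REST API, a minimal read-side client sketch is shown below. It assumes the legacy Sawtooth-style REST API that DGT extends, reachable at localhost:8008; the base URL and the /blocks and /state endpoints are assumptions carried over from upstream Sawtooth, not a definitive description of DGT's own interface.

```python
# A minimal read-side REST client sketch (assumed legacy Sawtooth-style API).
import requests

BASE_URL = "http://localhost:8008"  # assumed REST API address of the node


def list_blocks(limit: int = 5) -> list:
    """Fetch the most recent blocks from the ledger."""
    resp = requests.get(f"{BASE_URL}/blocks", params={"limit": limit})
    resp.raise_for_status()
    return resp.json()["data"]


def get_state(address_prefix: str) -> list:
    """Read key-value entries from global state under a namespace prefix."""
    resp = requests.get(f"{BASE_URL}/state", params={"address": address_prefix})
    resp.raise_for_status()
    return resp.json()["data"]


if __name__ == "__main__":
    for block in list_blocks():
        print(block["header"]["block_num"], block["header_signature"][:16])
```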

The figure above shows the main subsystems of the node. They are listed below in the same order:

VALIDATOR

A group of subsystems organized around the process of obtaining, checking, and selecting data for insertion into the database (ledger).

1. NETWORKING

The subsystem is represented by the legacy ZeroMQ library and is responsible both for interactions with other nodes (TCP 8800) and for interactions with other node components, such as the transaction processor (TCP 4004), the consensus engine (TCP 5050), the REST API (TCP 4004), etc.
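As a rough illustration of the messaging pattern behind this interconnect, the self-contained pyzmq sketch below wires a DEALER socket (standing in for a component such as a transaction processor) to a ROUTER socket (standing in for the validator endpoint). It uses a demo port; real components exchange protobuf-wrapped messages over the ports listed above.

```python
# A self-contained pyzmq sketch of the DEALER/ROUTER pattern used on the
# node interconnect. The port and payloads are demo placeholders.
import zmq

ctx = zmq.Context()

# ROUTER stands in for the validator's interconnect endpoint.
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://127.0.0.1:5555")  # demo port; the node uses 4004/5050/8800

# DEALER stands in for a component (transaction processor, consensus engine, ...).
dealer = ctx.socket(zmq.DEALER)
dealer.setsockopt(zmq.IDENTITY, b"demo-component")
dealer.connect("tcp://127.0.0.1:5555")

dealer.send(b"ping")                         # real traffic is protobuf-wrapped
identity, payload = router.recv_multipart()  # ROUTER sees [identity, payload]
router.send_multipart([identity, b"pong"])   # reply is routed back by identity
print(dealer.recv())                         # b'pong'

ctx.destroy()
```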

2. ENCRYPTION

The library for generating hash functions and digital signatures. Sawtooth uses the ECDSA (elliptic curve) algorithm with secp256k1 parameters to generate private and public keys. The PyCryptodome library is used for forming the 64-byte digital signature in DER encoding. The cryptography is selected when the node is built, and it must be the same for the entire network.
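A minimal signing sketch follows. It uses the pure-Python ecdsa package rather than the node's own signing code, purely to illustrate the secp256k1 key pair and the 64-byte compact signature; a DER-encoded signature can be produced by swapping in sigencode_der.

```python
# A minimal secp256k1 signing sketch using the pure-Python `ecdsa` package
# (illustrative only; the node's own signing module may differ).
import hashlib

from ecdsa import SECP256k1, SigningKey
from ecdsa.util import sigencode_string  # compact r||s form; sigencode_der for DER

private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

payload = b"transaction payload"
signature = private_key.sign_deterministic(
    payload, hashfunc=hashlib.sha256, sigencode=sigencode_string
)

assert len(signature) == 64  # 32-byte r and 32-byte s
assert public_key.verify(signature, payload, hashfunc=hashlib.sha256)
print("signature:", signature.hex())
```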

3. STATE

Data warehouse, ledger, DAG. Although the storage that holds the ledger could in principle be arbitrary, Sawtooth uses LMDB (Lightning Memory-Mapped Database; the block-00.lmdb file) for the ledger (essentially the transactions) by default. The canonical configuration of the Sawtooth kernel also allows Redis to be used for this purpose. Both databases are NoSQL databases and store key-value pairs. DGT also accommodates the use of graph databases.
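For illustration, a minimal sketch of opening the ledger file with the Python lmdb bindings follows. The file path and the "main" sub-database name are assumptions: deployments keep the file under the validator's data directory, and the internal sub-database layout may differ.

```python
# A minimal sketch of inspecting an LMDB ledger file with the `lmdb` package.
# The path and sub-database name below are assumptions for illustration.
import lmdb

# subdir=False because block-00.lmdb is a single file, not a directory;
# max_dbs permits named sub-databases, which Sawtooth-style stores use.
env = lmdb.open("block-00.lmdb", subdir=False, readonly=True,
                lock=False, max_dbs=16)
main = env.open_db(b"main", create=False)  # assumed sub-database name

with env.begin(db=main) as txn:
    for key, value in txn.cursor():
        # Keys are record identifiers; values are serialized (protobuf) records.
        print(key[:16], len(value), "bytes")

env.close()
```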

4. JOURNAL

A group of components that process transactions and insert them into the ledger.

5. CONSENSUS

The platform supports so-called Dynamic Consensus; for this purpose, an abstract consensus mechanism is implemented. Consensus is accessed as a separate component (TCP 5050 within INTERCONNECT). Interaction with it is carried out through the Consensus API, an interface that allows a consensus engine, running as a separate process, to interact with the validator. Three related interfaces are used in this work:

  • Consensus.BlockPublisher – creates candidate blocks in three steps: initialization, validation, and finalization

  • Consensus.BlockVerifier – checks whether a candidate block was published following the agreed rules.

  • Consensus.ForkResolver – selects which of the competing forks becomes the chain head.

Using this mechanism, the “built-in” Sawtooth consensuses can be connected: DevMode (the simplest way to verify transactions), PoET, PoET-SGX, Raft, and others.

The F-BFT consensus has an extended interpretation: consensus is first achieved within a federation and then distributed to the entire network. The consensus process checks the rules for the selected transaction family, after which the vote is conducted.
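As a sketch of the abstract mechanism, the interfaces below express the three roles as Python abstract classes. The method names mirror the legacy Sawtooth consensus interface; the exact signatures are assumptions, not the platform's actual Consensus API.

```python
# A sketch of the three consensus roles as abstract interfaces. Method
# names mirror the legacy Sawtooth interface; signatures are assumptions.
from abc import ABC, abstractmethod


class BlockPublisher(ABC):
    """Creates candidate blocks: initialization, validation, finalization."""

    @abstractmethod
    def initialize_block(self, block_header) -> None: ...

    @abstractmethod
    def check_publish_block(self, block_header) -> bool: ...

    @abstractmethod
    def finalize_block(self, block_header) -> bytes: ...


class BlockVerifier(ABC):
    """Checks that a candidate block follows the agreed publication rules."""

    @abstractmethod
    def verify_block(self, block) -> bool: ...


class ForkResolver(ABC):
    """Chooses which of two competing forks becomes the chain head."""

    @abstractmethod
    def compare_forks(self, current_head, new_block) -> bool: ...
```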

6. ORACLE

An intelligent component for obtaining additional conditions regarding a transaction. For example, it can be used to detect fraudulent transactions or to select the names of organizations with the help of a neural network.

SERVICES

A set of connected components.

7. TRANSACTION PROCESSORS

A module implemented as a separate service (TCP 4004) that supports various transaction families.

In any implementation there is a settings management family (Settings Transaction Family), which saves configuration settings in the ledger. DGT adds another mandatory family, the topology processor. The transaction processors are responsible for verifying transactions and for all actions taken. The BGT family (a modification of the IntegerKey Transaction Family) is used as an additional testing component.
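A minimal sketch of a custom transaction family is shown below, written against the upstream Sawtooth Python SDK that DGT derives from (the SDK module paths are Sawtooth's; the demo_family name and payload format are invented for illustration). The handler declares which family it serves and enforces that family's validation rules in apply().

```python
# A minimal transaction-family handler sketch using the upstream Sawtooth
# Python SDK. Family name and payload format are illustrative only.
import hashlib

from sawtooth_sdk.processor.core import TransactionProcessor
from sawtooth_sdk.processor.exceptions import InvalidTransaction
from sawtooth_sdk.processor.handler import TransactionHandler

FAMILY_NAME = "demo_family"  # hypothetical family, for illustration
NAMESPACE = hashlib.sha512(FAMILY_NAME.encode()).hexdigest()[:6]


class DemoHandler(TransactionHandler):
    @property
    def family_name(self):
        return FAMILY_NAME

    @property
    def family_versions(self):
        return ["1.0"]

    @property
    def namespaces(self):
        return [NAMESPACE]

    def apply(self, transaction, context):
        # Family-specific validation rules live here.
        if not transaction.payload:
            raise InvalidTransaction("empty payload")
        key = transaction.payload.decode(errors="replace")
        address = NAMESPACE + hashlib.sha512(key.encode()).hexdigest()[:64]
        context.set_state({address: transaction.payload})


if __name__ == "__main__":
    # Registers the handler with the validator over the component port.
    processor = TransactionProcessor(url="tcp://localhost:4004")
    processor.add_handler(DemoHandler())
    processor.start()
```

Once started, the validator routes every transaction of this family to the handler's apply() method, which either writes state or rejects the transaction.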

8. REST API

A component that allows clients to interact with the node core via HTTP/JSON; internally, processing goes through ZMQ/Protobuf. The legacy API is described here. DGT significantly expands the existing APIs.
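To complement the read-side sketch earlier, below is a minimal write-side flow against the same assumed legacy-style API: submitting a pre-serialized, signed batch and polling its status. Building and signing the BatchList itself is left to a client SDK.

```python
# A minimal write-side REST sketch (assumed legacy Sawtooth-style API):
# submit a serialized BatchList and poll the batch status.
import requests

BASE_URL = "http://localhost:8008"  # assumed REST API address


def submit_batches(batch_list_bytes: bytes) -> dict:
    """POST a serialized, signed BatchList produced by a client SDK."""
    resp = requests.post(
        f"{BASE_URL}/batches",
        data=batch_list_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    resp.raise_for_status()
    return resp.json()  # contains a "link" for polling the status


def batch_status(batch_id: str) -> str:
    """Poll a batch's status: COMMITTED, PENDING, INVALID, or UNKNOWN."""
    resp = requests.get(f"{BASE_URL}/batch_statuses", params={"id": batch_id})
    resp.raise_for_status()
    return resp.json()["data"][0]["status"]
```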

CLIENTS

9. CLI

A command-line interface that allows for the processing of information inside a node through a standardized API. It is the primary node administration tool.

10. DASHBOARD

A component written specifically for DGT: a lightweight web portal for visualizing the main network parameters.

11. MOBILE APP

Implements the main business logic. Written by DGT as a digital content management tool (with a connection to the Magento platform).

Since many of DGT’s components run in the Docker environment, interaction occurs through those same network interfaces, which forms a specific interconnect, as shown in the figure below.
