
6.3.3 A Physical Network


Last updated 1 year ago

6.3.3.1 Network Deployment Preparation

It is typical to deploy a virtual network (see 6.3.2) for experimental work or application development. Operating the network in enterprise environments may require forming nodes on physically distinct servers. Servers may also be located on different physical networks protected by firewalls. The installation is broadly similar to deploying a virtual network (see 6.3.2), with the following amendments:

  • When assembling nodes located on physically different servers, additional parameters are used for the bash command, depending on whether the node is connected to closed segments (private network) or public segments – see below.

  • When servers are behind firewalls, network-supporting ports must be opened ([NET] parameter in the «upDgtCluster.sh» file)

  • In case of deploying nodes in an internal network, you must specify the node’s IP explicitly (the -H flag, host – see below)

  • Network deployment must be preceded by the design of its topology, including planning the size of clusters and segments (see 6.7.1)

  • The initial implementation of the network, also called the “static core” or “seed network”, is a group of nodes/clusters that form special trust relationships (the public keys of such nodes are known in advance and are recorded in the ledger at the time of kernel deployment). Joining of other nodes requires the processing of node certificates for private segments and/or dynamic joining in the case of public segments.

  • A node attached to the seed network is called an external node. To establish interaction with the network, an entry point must be defined – a gateway, a node of the source network through which the new node connects. Connecting to private (closed) and public segments differs:

o in case of a private segment, the attaching node has a concrete entry point (cluster number and cell number, as well as a verifiable and valid certificate that assigns a public key to the node)

o in case of a public segment, dynamic topology is used (the certificate is accepted but still verified, and joining is conducted through any available point, subject to the restrictions of the cluster).

Each network node is a server that simultaneously acts as a client of the rest of the network, gaining access through other nodes (gateways). To prepare for correct network deployment, it is suggested that you perform the following self-check:

| # | Action | Description | Complete? |
|---|--------|-------------|-----------|
| 1 | Proposition Alignment | Deployment goals and the objectives to be solved are defined; these largely determine the planned configuration. You have read the DGT licenses and guidelines (see 8.4) and have no barriers to using DGT. | ▢ |
| 2 | System Requirements | The platform is ready for the nodes: the hardware, the appropriate operating system, and the system software (Docker, Docker Compose) are installed. | ▢ |
| 3 | Check Environment | You have access to manage routing on your network. When connecting external nodes, you must set up ports. The network has no restrictions on the protocols and ports used by DGT. | ▢ |
| 4 | Network Design | The network topology is determined, including the number and size of clusters, private and public segments, and the gateways for connecting external nodes (see 6.7.1). | ▢ |
| 5 | Cryptography Design | A single cryptography package has been selected for the entire network. | ▢ |
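Parts of the self-check (notably the System Requirements and Check Environment rows) can be probed from the command line. The sketch below is an illustrative assumption, not a DGT-mandated script: it only verifies that the common Docker and Docker Compose CLIs respond.

```shell
#!/usr/bin/env bash
# Illustrative pre-deployment probe: counts which required tools are present.
set -u
ok=0; fail=0
check() {  # usage: check "<description>" <command> [args...]
  if "${@:2}" >/dev/null 2>&1; then
    echo "[ OK ] $1"; ok=$((ok+1))
  else
    echo "[FAIL] $1"; fail=$((fail+1))
  fi
}
check "Docker installed"         docker --version
check "Docker Compose installed" docker compose version
check "curl available"           curl --version
echo "checks passed: $ok, failed: $fail"
```

On a prepared server all three checks should report OK; any FAIL line points back to the corresponding row of the self-check table.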

6.3.3.2 Set up a physical network

Network deployment is determined by the selected configuration options (including topology, network environment, and cryptography). The following basic steps allow you to deploy a physical network:

  • Prepare the hardware platform and system software (one or more physical servers that meet the requirements for platform nodes).

  • Sequentially expand the seed-network nodes using the command:

bash upDgtCluster.sh -G -SC -H [SERVER_IP] -CB openssl -S [GATE_URL:GATE_PORT] [Num_Cluster] [Num_Node]

Here:

-G – a requirement to create or synchronize DAG starting from the genesis block.

-SC – a flag indicating the need for transactions to be signed by nodes.

-H [SERVER_IP] – host, the IP address of the physical server on which the node is running. This is important when launching a network inside an internal network; if omitted, the address is resolved as a public Internet address, and ports will have to be opened even if the network is internal.

-CB openssl/bitcoin – a flag that indicates the selected cryptography; cryptography must be the same for the entire network.

-S [GATE_URL:GATE_PORT] – a pointer to the gateway through which each subsequent node is connected (except for the first one; it is also unnecessary when deploying a virtual cluster).

[Num_Cluster] – the number of the cluster to which the current node is joining (“1” is recommended for the first node)

[Num_Node] – the number of the node joining (“1” is recommended for the first node)

  • Launch the Dashboard component (optional) with the command:

bash upDgtDashboard.sh -CB openssl

  • Check the correctness of the seed-network deployment using procedures similar to the virtual cluster checks: the BGT transaction check, the API check, and the Dashboard check (if it is running).
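The API check can be sketched as a simple reachability probe. The port 8108 follows the anchor-file example later in this section; the exact URL and path are assumptions to adapt to your deployment:

```shell
# Probe the node's REST API endpoint (sketch; URL and port are assumptions).
API_URL="${API_URL:-http://127.0.0.1:8108}"
if curl -fsS --max-time 5 "$API_URL" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="unreachable"
fi
echo "API at $API_URL is $STATUS"
```

An "unreachable" result usually means the node container is not running or the API port is not open on the server.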

  • Connect external nodes to the seed network:

bash upDgtCluster.sh -G -E -SC -CB openssl -P [NODE_PORT] -H [SERVER_IP] -N [NETWORK_NAME] -S [GATE_URL:GATE_PORT] [Num_Cluster] [Num_Node]

Here:

-G – a requirement to create or synchronize DAG, starting from the genesis block.

-E – flag indicating that the connected node is external

-SC – flag indicating the need for transactions to be signed by nodes

-P [NODE_PORT] – flag that defines the port opened on a remote node, through which a given node communicates with the network.

-H [SERVER_IP] – host, the IP address of the physical server on which the node is running. This is important when starting a network inside an internal network; if omitted, the address is resolved as a public Internet address, and ports will have to be opened even if the network is internal.

-CB openssl/bitcoin – a flag indicating the selected cryptography; cryptography must be the same for the entire network.

-S [GATE_URL:GATE_PORT] – a pointer to the gateway through which each subsequent node is connected (except for the first one; it is also not necessary in case of deploying a virtual cluster).

[Num_Cluster] – the number of the cluster to which the current node is connecting (“1” is recommended for the first node)

[Num_Node] – the number of the node that is connecting (“1” is recommended for the first node)

  • In case of connecting external nodes to a public segment, use the following command:

bash upDgtCluster.sh -G -E -P [NODE_PORT] -N my_host_net -S [GATE_URL:GATE_PORT] dyn 1

Here:

dyn 1 – a pointer to the dynamic topology and cluster to which the node wants to connect.

-S [GATE_URL:GATE_PORT] – you can specify the gateway as a link to a file (an anchor file in JSON format) hosted in the cloud (for example, Google Drive). For example:

https://drive.google.com/file/d/1o6SEUvogow432pIKQEL8-EEzNBinzW9R/view?usp=sharing

The anchor file contains a directory of available gateways to public networks and has the following structure (special services that provide dynamic SD-WAN configuration can also be used):

{
  "public":["tcp://validator-dgt-c1-1:8101","tcp://209.124.84.6:8101"],
  "private": [],
  "restapi": ["http://dgt-api:8108"]
}
  • After connecting external nodes, carry out the checks.
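A joining node (or an operator script) can pick a gateway out of such an anchor file. The sketch below recreates the example file locally and extracts the first public gateway with sed; this is an illustrative assumption about tooling, and a production setup would more likely download the file and parse it with jq:

```shell
# Recreate the example anchor file and select the first "public" gateway.
cat > anchor.json <<'EOF'
{
  "public":["tcp://validator-dgt-c1-1:8101","tcp://209.124.84.6:8101"],
  "private": [],
  "restapi": ["http://dgt-api:8108"]
}
EOF
GATE=$(sed -n 's/.*"public":\["\([^"]*\)".*/\1/p' anchor.json)
echo "selected gateway: $GATE"   # -> tcp://validator-dgt-c1-1:8101
```

The extracted value can then be passed to upDgtCluster.sh as the -S argument.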

6.3.3.3 DGT Network Example

This section provides an example of physical network configuration with the following settings:

- The network unites 7 nodes, from which three clusters are formed: cluster 1 (nodes c1-1, c1-2, c1-3), cluster 2 (nodes c2-1, c2-2, c2-3), and cluster 3 (sole node c3-1);

- A static core (seed network) represented by a virtual cluster of nodes located on one physical server, Dell Server-1 with IP = 192.168.1.134 (thus the initial network is represented by two clusters);

- Node c2-2 is located on a separate physical server, AMD Server-2 with IP = 192.168.1.16 and “completes” cluster 2 in the private seed-network segment.

- Nodes c2-3 and c3-1 are located on a separate physical server (AMD Server-3, IP = 192.168.1.126) as a virtual cluster and are placed in clusters 2 and 3, respectively;

- Node c1-1 acts as the only gateway for connecting external nodes to the seed network. Service and network (NET) ports are set automatically according to upDgtCluster.sh.
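In this example the node ports follow a visible pattern (8000 + 100·cluster + node, e.g. 8202 for node c2-2). This is an observation about the example's numbering only, not a documented DGT rule; a small helper illustrates it:

```shell
# Derive the example's node port from cluster and node numbers
# (pattern observed in this example only: 8000 + 100*cluster + node).
node_port() {  # usage: node_port <cluster> <node>
  echo $((8000 + $1 * 100 + $2))
}
echo "c2-2 port: $(node_port 2 2)"   # -> 8202
echo "c3-1 port: $(node_port 3 1)"   # -> 8301
```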

The structure of the testnet is shown schematically below:

The network is deployed as follows:

  • Installation of a virtual cluster (c1-1, c1-2, c1-3, c2-1) representing the seed network is done from the CLI of Dell Server-1:

bash upDgtCluster.sh  -G -SC -H 192.168.1.134 -CB openssl 1 1
bash upDgtCluster.sh  -G -SC -H 192.168.1.134 -CB openssl 1 2
bash upDgtCluster.sh  -G -SC -H 192.168.1.134 -CB openssl 1 3
bash upDgtCluster.sh  -G -SC -H 192.168.1.134 -CB openssl 2 1
  • The installation of the c2-2 node (external node in a private segment) is set up as follows:

bash upDgtCluster.sh -G -E -SC -CB openssl -P 8202 -H 192.168.1.16 -N net2022 -S tcp://192.168.1.134:8101 2 2

You should pay attention to the parameters:

- -P 8202 – the port of the c2-2 node through which communication with the network is maintained.

- -N net2022 – network name (domain name) must be the same for all nodes of this physical network.

- -H 192.168.1.16 – IP of the physical server (AMD Server-2) on which the node c2-2 is installed.

- -S tcp://192.168.1.134:8101 – pointer to the gateway to the network (node c1-1)

- 2 2 – cluster number and node number
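Before running the attach command on AMD Server-2, it can be useful to confirm that the gateway port on Dell Server-1 is reachable. This optional, bash-only probe (using the /dev/tcp redirection, no extra tools) is a sketch; the address and port are the ones from this example:

```shell
# Check that the seed-network gateway (node c1-1) accepts TCP connections.
GATE_HOST="192.168.1.134"
GATE_PORT=8101
if timeout 3 bash -c "exec 3<>/dev/tcp/$GATE_HOST/$GATE_PORT" 2>/dev/null; then
  echo "gateway $GATE_HOST:$GATE_PORT reachable"
else
  echo "gateway $GATE_HOST:$GATE_PORT not reachable (check firewall/NET ports)"
fi
```

A "not reachable" result typically means the [NET] port has not been opened on the gateway's firewall.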

  • Deploying nodes c2-3 and c3-1 from the CLI of AMD Server-3:

bash upDgtCluster.sh -G -E -SC -CB openssl -P 8203 -H 192.168.1.126 -N net2022 -S tcp://192.168.1.134:8101 2 3

bash upDgtCluster.sh -G -E -SC -CB openssl -P 8301 -H 192.168.1.126 -N net2022 -S tcp://192.168.1.134:8101 3 1


Figure 126 DGT Network Topology and Node Attaching
Figure 127 Example of DGT Network