
3.4.2 F-BFT: The Hierarchical Consensus Mechanism


Last updated 1 year ago

3.4.2.1 High-Level Protocol Overview

In the decentralized ecosystem of the DGT network, a unique form of Byzantine Fault Tolerance (BFT) consensus called Federated BFT (F-BFT) is employed to ensure reliable and scalable consensus. F-BFT is a hierarchical, cluster-based consensus algorithm that addresses the scalability challenges faced by traditional BFT algorithms, including Practical Byzantine Fault Tolerance (PBFT). While PBFT offers notable advantages, it suffers from scalability issues due to its O(n²) communication complexity.

To overcome these challenges, F-BFT leverages a novel network topology and a group-based approach to achieve consensus efficiently in the DGT network. This significantly reduces communication between nodes, yielding O(n) communication complexity and allowing consensus to scale across large distributed systems:

  • The DGT network follows a uniform architecture for its on-chain nodes, which can process various types of transactions through the transaction processor. Validators play a crucial role in verifying transactions and making proposals, while arbitrators are responsible for certifying transaction packets and blocks. The level of decentralization in the network is determined by the number of independent arbitrators, which contributes to the overall security and resilience of the system. Moreover, the consensus engine in DGT has the flexibility to dynamically switch between different cluster-level consensus mechanisms.

  • Within each cluster, consensus is achieved by ensuring that the number of Byzantine replicas f is less than one-third of the cluster's n nodes, that is, f < n/3. The collection of signatures during consensus can be performed using different modes:

    • The LAZY_MODE involves a simple vote count, like a threshold PBFT scheme, with leader rotation.

    • The FAST_RUN mode assumes a constant leader and a synchronous network, resembling the HotStuff scheme.

    • The ALPINE mode utilizes an aggregated cluster signature based on the Lagrange Interpolant model, providing an efficient consensus mechanism.

  • The transaction storage system in DGT adopts a hybrid approach, combining block organization with a top-directed DAG maintained in topological order. The DAG plays a critical role in determining network time and fragmentation through the HEARTBEAT mechanism, which governs events such as cluster leader rotation and token issuance order, ensuring proper coordination and synchronization within the network.

  • At the top level, transaction certification takes place within the ring of arbitrators. An arbitrator located outside the cluster receives a transaction and verifies it by consulting one or more adjacent arbitrator nodes. The confirmation process varies depending on the mode in use: for instance, in LAZY_MODE the transaction is immediately confirmed, while in FAST_RUN confirmation is based on the aggregated signature and its connection with the previous block. Participation of arbitrators in the ring is determined by a staking mechanism such as TPOS (Transaction Proof of Stake) or by the anchoring principle based on certificates, providing additional security and integrity to the overall transaction certification process.
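The three signature-collection modes can be contrasted in a minimal sketch; the acceptance threshold shown (a 2f + 1 quorum with f < n/3) is a simplifying assumption, since the modes differ mainly in how signatures are gathered rather than in the threshold itself:

```python
from enum import Enum

class ConsensusMode(Enum):
    LAZY_MODE = "lazy"    # simple vote count, rotating leader (threshold-PBFT-like)
    FAST_RUN = "fast"     # constant leader, synchronous network (HotStuff-like)
    ALPINE = "alpine"     # aggregated cluster signature (Lagrange interpolation)

def quorum_size(n: int) -> int:
    """Minimum votes for agreement among n replicas with f < n/3 Byzantine."""
    f = (n - 1) // 3      # maximum tolerable Byzantine replicas
    return 2 * f + 1

def cluster_accepts(mode: ConsensusMode, votes: int, n: int) -> bool:
    # All three modes need the same Byzantine quorum; they differ in how
    # signatures are collected, not in the acceptance threshold.
    return votes >= quorum_size(n)

print(quorum_size(4))  # 3: a 4-node cluster tolerates 1 Byzantine replica
```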

The communication scheme allows transaction proposals to propagate from the lowest-level clusters to the higher-level clusters, with consensus achieved at each level. The leader or connection point of each level coordinates the communication and consensus process, ensuring the integrity and correctness of the proposed transactions. By following this hierarchical structure, the network achieves consensus in a scalable and efficient manner, with each level contributing to the overall agreement on the transaction proposals.

Here is a rough outline of the F-BFT consensus algorithm:

  • Initialize the network into a "Cluster View" with a group of nodes, a set of connection points, leader selection, and permalink setup.

  • For each cluster:

    • Use the F-BFT consensus algorithm, which combines a hierarchical and cluster-based Byzantine Fault Tolerance (BFT) approach, to reach consensus within the cluster.

    • Broadcast the result with permalinks to other Arbitrator Rings by randomly selecting an Arbitrator.

  • Arbitrators collect the results of the F-BFT consensus from all clusters.

  • Reach a final consensus by combining the results of the F-BFT consensus from all clusters.

This outline describes the high-level steps of the F-BFT consensus algorithm, where each cluster independently reaches consensus using the F-BFT approach, and the results are then combined by the arbitrators to reach a final consensus. The use of permalinks and random selection of arbitrators helps in achieving a decentralized and fault-tolerant consensus process within the F-BFT framework.
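The steps above can be sketched end to end; cluster membership, the quorum rule, and the final combination rule are simplified assumptions for illustration, not the platform's actual implementation:

```python
import random

def cluster_consensus(votes: list[bool]) -> bool:
    """Cluster-level step: accept a proposal if a 2f + 1 quorum of the
    n = len(votes) members approves (tolerating f < n/3 Byzantine)."""
    n = len(votes)
    f = (n - 1) // 3
    return sum(votes) >= 2 * f + 1

def fbft_round(clusters: dict[str, list[bool]], arbitrators: list[str]) -> dict:
    """One F-BFT round: each cluster decides locally, a randomly chosen
    arbitrator carries its result to the arbitrator ring, and the ring
    certifies the proposal if every reporting cluster approved it."""
    results = {}
    for name, votes in clusters.items():
        approved = cluster_consensus(votes)
        carrier = random.choice(arbitrators)   # random arbitrator selection
        results[name] = (approved, carrier)
    final = all(ok for ok, _ in results.values())
    return {"clusters": results, "final_consensus": final}

out = fbft_round(
    {"c1": [True, True, True, False], "c2": [True] * 4},
    arbitrators=["A1", "A2", "A3"],
)
print(out["final_consensus"])   # True: both clusters reached quorum
```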

3.4.2.2 Protocol Correctness

The proof of the correctness of the protocol follows from the basic properties of the protocol: Liveness and Safety. Liveness ensures that progress can be made in the consensus algorithm, meaning that transactions will eventually be agreed upon and added to the blockchain. Safety ensures that the consensus algorithm guarantees the integrity and consistency of the blockchain, meaning that once a transaction is agreed upon and added to the blockchain, it cannot be altered or removed.

ASSUMPTION 1: At the cluster level, conflicting messages are discarded based on the consensus reached within the cluster.

DEFINITION 1: The process we are considering is limited to an inter-block time, during which a series of Byzantine messages may be received, some of which may conflict or be incorrect. We discard messages that have reached the arbitrators but have not been explicitly accepted or included in an approved block of the network.

ASSUMPTION 2: During the inter-block time, all permalinks are functioning, ensuring that messages generated in any of the clusters will reach the arbitrators and propagate throughout the network.

Proof (Liveness):

During the inter-block time, the arbitrators collaborate to achieve consensus on the validity and ordering of transactions. They reach an agreement on which transactions to include in the blockchain structure and ensure that any subsequent transactions can be negotiated and incorporated within the epoch. Therefore, by ensuring the presence of permalinks, enough honest arbitrators, and the collaboration among the arbitrators, the F-BFT consensus algorithm guarantees liveness within the inter-block time. ⬛

Empirical justification:

The reasonableness of this timing assumption can be argued as follows: although the property is used here to prove F-BFT, it holds for a wide range of networks, including Bitcoin, Ethereum, Algorand, and Avalanche. Below are averaged empirical data; a more rigorous argument can be made by taking into account models such as the Network-on-Chip (NoC) model (Matoussi, 2021).

Table 11 Inter-block, delivery, and negotiation times for major networks

| Network | Inter-Block Time | Network Delivery Time | Negotiation Time |
| --- | --- | --- | --- |
| Bitcoin | 10 minutes | 10 seconds | 1 minute |
| Ethereum | 15 seconds | 2 seconds | 5 seconds |
| Algorand | 4.5 seconds | 1 second | 2 seconds |
| Avalanche | 1-2 seconds | 0.5 second | 1 second |

Proof sketch (Lemma 3): The argument proceeds by cases:

  • Case 1: Same Arbitrator

  • Case 2: Different Arbitrators

Note: Lemma 3 is based on Lemma 1, which ensures that conflicts are resolved within each arbitrator, and on Assumption 3, which guarantees that the inter-block time is sufficiently larger than the network delivery time and the negotiation time. Together they provide a strong basis for the safety of the F-BFT consensus protocol.

3.4.2.3 Communication Model

The communication model defines the assumptions and constraints on how messages are exchanged between nodes in the network. It determines the level of synchrony or asynchrony in message delivery and the assumptions about message delays and failures. All three modes of F-BFT use a partially synchronous model. In this approach, the network is assumed to exhibit some degree of synchrony, but with the possibility of occasional delays or failures: most messages are delivered within a known and reasonable time frame, but some may be delayed or lost.
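Partial synchrony is commonly modeled with an unknown global stabilization time (GST), after which message delays are bounded by some Δ. The following toy check illustrates the assumption (the GST and Δ values are arbitrary, not protocol parameters):

```python
def respects_partial_synchrony(send_time: float, delay: float,
                               gst: float, delta: float) -> bool:
    """True if an observed delivery is consistent with the partially
    synchronous model: after the global stabilization time (GST) every
    message must arrive within delta; before GST no bound is promised."""
    if send_time >= gst:
        return delay <= delta
    return True  # pre-GST delays are unconstrained by the model

print(respects_partial_synchrony(120.0, 1.5, gst=100.0, delta=2.0))  # True
print(respects_partial_synchrony(120.0, 3.0, gst=100.0, delta=2.0))  # False
```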

Estimating the communication complexity in F-BFT involves analyzing the number of messages exchanged between nodes during the consensus process. The complexity can be evaluated based on parameters such as the number of nodes, the number of rounds in the consensus protocol, and the size of messages exchanged.

In a two-level consensus scheme with a hierarchical clustered network, where the low-level consensus is achieved using PBFT within clusters and PoS is used for arbitrators at the high level, we can explore the minimum communication complexity (C) required for consensus.

ASSUMPTION 4. Let's assume the minimum configuration for the PBFT + PoS setup, where the total number of nodes in the network is denoted as n, and the minimum number of nodes per cluster is set to 4 as required by PBFT. Additionally, we consider a minimum number of arbitrators, denoted as a, with only 2 of them randomly selected for consensus.

LEMMA 5. Based on Assumption 4, the minimum communication complexity C required to reach consensus is given by

Proof:

To prove this lemma, we need to consider the communication complexity at each level of the consensus scheme.

Therefore, the total communication complexity is given by

This minimum communication complexity ensures the secure operation of the hybrid/consortium architecture in the minimum configuration. By considering the minimum number of nodes per cluster and the minimum number of arbitrators, the system can maintain the necessary fault tolerance and security while minimizing the communication overhead associated with consensus protocols.

It's important to note that the specific values of n, a, and the minimum nodes per cluster can be adjusted according to the desired security and performance requirements of the system. The minimum communication complexity provides a baseline for achieving consensus in the most efficient manner while ensuring the integrity and availability of the network.
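The minimum configuration can be estimated with a rough message count; the per-phase PBFT cost and the pairwise arbitrator exchange below are illustrative assumptions, not the platform's exact accounting:

```python
def min_messages(n: int, m: int = 4, a: int = 2) -> int:
    """Rough message-count estimate for the minimum PBFT + PoS setup:
    n nodes split into clusters of m (PBFT needs m >= 4), with a
    arbitrators sampled for the high-level round.  Per-cluster PBFT is
    approximated as m * (m - 1) messages per phase over 3 phases; the
    arbitrator step as a * (a - 1) pairwise messages."""
    clusters = n // m
    pbft_msgs = 3 * m * (m - 1)      # pre-prepare / prepare / commit, roughly
    return clusters * pbft_msgs + a * (a - 1)

print(min_messages(16))   # 4 clusters * 36 messages + 2 = 146
```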

In more complex implementations, when all arbitrators are involved in communication or a two-level PBFT pass is used, the communication complexity can be computed as (Li et al., 2020):

Omitting the details of the calculations for the remaining configurations, we present the main results of the comparative characteristics:

Table 12 Communication Model Parameters

| Mode | Adv. Tolerance | Communication Model | Communication Complexity | Throughput | Latency | Finality | Leader | Optimistic Responsive |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F-BFT (LAZY_MODE) | f < n/3 | Partially Synchronous | O(n²) | High | Low | Deterministic | Rotation | No |
| F-BFT (FAST_RUN) | f < n/3 | Partially Synchronous | O(n) | High | Low | Probabilistic | Stable | Yes |
| F-BFT (ALPINE) | f < n/3 | Partially Synchronous | O(n) | High | Low | Probabilistic | Stable | Yes |

3.4.2.4 Transaction Validation

Transaction validation is a critical component of the DGT network, ensuring the accuracy and integrity of transactions. It involves multiple stages and utilizes various technical solutions to achieve consensus and maintain network security. The following key elements contribute to the transaction validation process in the DGT network:

  • Permalinks:

    • Permalinks serve as communication routes between nodes, facilitating efficient and reliable message exchange.

    • They allow nodes to establish connections and exchange information with one another.

    • Permalinks reduce the communication complexity and enable seamless transaction propagation across the network.

  • Ledger:

    • The ledger in the DGT network is structured as a block-based system with a directed acyclic graph (DAG).

    • It serves as the primary storage mechanism for recording and managing transactions.

    • The DAG structure allows for additional connections between transactions, enhancing network flexibility and efficiency.

    • The ledger also incorporates the concept of a Journal component, similar to the Sawtooth blockchain, which enables parallel processing and advanced batching management.

  • Cluster-Based Consensus:

    • The DGT network utilizes a cluster-based consensus mechanism.

    • Clusters are coordinated by a leader node that dynamically changes based on predefined criteria and timeouts.

    • Nodes within clusters participate in the validation process, voting on the correctness of transactions.

    • A sufficient number of votes, known as a quorum, is required to achieve consensus within a cluster.

  • External Arbitrators:

    • External arbitrators play a crucial role in the final verification of transactions.

    • They form a separate layer of consensus and provide an additional level of validation.

    • Arbitrators approve transactions before they are included in the ledger.

    • The consensus mechanism involving arbitrators ensures the integrity and reliability of the transaction validation process.

  • Transaction Families and Processors:

    • Different transaction types, known as transaction families, are supported in the DGT network.

    • Each transaction family has its own validation and processing requirements.

    • Transaction processors are responsible for validating and processing transactions within their respective families.

    • They ensure the correctness and compliance of transactions based on predefined rules and conditions.

  • Batching and Parallel Processing:

    • Transactions within the DGT network are wrapped in batches before being sent to the ledger.

    • Batching allows for efficient processing and optimization of transaction handling.

    • Parallel processing techniques are employed to enhance transaction throughput and performance.

  • Block Publishing:

    • Once transactions are validated and approved, they are included in blocks within the ledger.

    • The blocks are then published and distributed throughout the network.

    • Block publishing ensures the transparency and immutability of transactions within the DGT network.
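The elements above can be composed into a single pipeline sketch; the `Tx` shape, the stage order, and the function names are illustrative assumptions rather than the platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    family: str      # transaction family selects the processor
    payload: dict
    valid: bool      # outcome of the family processor's checks

def validate_batch(batch: list[Tx], cluster_votes: int, cluster_size: int,
                   arbitrator_ok: bool) -> list[Tx]:
    """Sketch of the validation pipeline: family-level checks, then a
    cluster quorum vote, then arbitrator approval before block publishing."""
    f = (cluster_size - 1) // 3
    if cluster_votes < 2 * f + 1:        # no cluster quorum reached
        return []
    if not arbitrator_ok:                # arbitrator ring rejected the batch
        return []
    return [tx for tx in batch if tx.valid]   # only processor-approved txs

block = validate_batch(
    [Tx("dec", {"to": "acct1"}, True), Tx("dec", {"to": "?"}, False)],
    cluster_votes=3, cluster_size=4, arbitrator_ok=True,
)
print(len(block))   # 1: only the processor-approved transaction survives
```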

DEFINITION 2: The digest of a TD message T is a Boolean expression that represents the stack of checks applied to the message: d(T) = c_1 ∧ c_2 ∧ … ∧ c_k. A message digest is considered to exist if d(T) evaluates to true.

DEFINITION 3 (Liveness): A protocol is considered alive if it ensures that any transaction T_i and any subsequent transaction T_j (where j > i) that have reached the arbitrators within the inter-block time can be negotiated and incorporated into the blockchain structure within the same epoch, given that the number of arbitrators n satisfies n ≥ 3f + 1 (where f is the number of Byzantine replicas among the arbitrators).

LEMMA 1 (Liveness): Under the assumption that all permalinks exist and the number of arbitrators n satisfies n ≥ 3f + 1, where f is the number of Byzantine replicas among the arbitrators, the F-BFT consensus algorithm ensures liveness within the inter-block time.

Proof: To prove liveness, we need to show that any transaction T_i and any subsequent transaction T_j (where j > i) that have reached the arbitrators within the inter-block time can be negotiated and incorporated into the blockchain structure within the epoch.

Since all permalinks exist, messages generated in any of the clusters are guaranteed to reach the arbitrators and propagate through the network. Within the inter-block time, the cluster-level consensus ensures that conflicting messages are discarded and only approved messages are considered. By the assumption that the number of arbitrators satisfies n ≥ 3f + 1, there are enough honest arbitrators to reach a consensus and include valid transactions in the blockchain structure. The F-BFT consensus algorithm leverages the cluster-level consensus and the ring of arbitrators to validate and certify transactions.
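The arbitrator bound used throughout these proofs is the standard BFT requirement n ≥ 3f + 1, which can be checked numerically (a minimal sketch, assuming that bound):

```python
def tolerates(n_arbitrators: int, byzantine: int) -> bool:
    """Liveness and safety require n >= 3f + 1 arbitrators to
    tolerate f Byzantine replicas among them."""
    return n_arbitrators >= 3 * byzantine + 1

print(tolerates(4, 1))   # True: 4 arbitrators survive 1 Byzantine replica
print(tolerates(6, 2))   # False: tolerating 2 Byzantine replicas needs 7
```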

LEMMA 2 (Safety): If any two conflicting transactions T_i and T_j have reached the ring of arbitrators within the inter-block time, then the conflict is resolved by the arbitrators under the condition n ≥ 3f + 1.

Proof: Suppose there exist two conflicting transactions T_i and T_j that have reached the ring of arbitrators within the inter-block time, and the conflict is not resolved by the arbitrators. This implies that d(T_i) ∩ d(T_j) ≠ ∅, meaning there is a common subset of checks that both messages pass.

However, since the conflict is not resolved, neither T_i nor T_j is included in the blockchain. This contradicts the definition of liveness, which states that any transaction that has reached the arbitrators within the epoch should be negotiated and incorporated into the blockchain structure.

By the assumption that the number of arbitrators satisfies n ≥ 3f + 1, where f is the number of Byzantine replicas among the arbitrators, there are enough honest arbitrators to reach a consensus. Therefore, there must be at least one honest arbitrator who ensures that one of the conflicting messages is included in the blockchain. Hence, by contradiction, if any two conflicting transactions T_i and T_j have reached the ring of arbitrators within the inter-block time, the conflict will be resolved by the arbitrators, and at least one of the conflicting messages will be included in the blockchain, satisfying the conditions of Lemma 2. ⬛

ASSUMPTION 3: In the F-BFT consensus protocol, assume that the inter-block time, denoted T_B, is sufficiently larger than the network delivery time of conflicting messages, denoted t_d, and the time needed to negotiate a block in the ring of arbitrators, denoted t_n. Under the constraints of the F-BFT consensus algorithm, the inter-block time can be expressed as:

T_B ≫ t_d + t_n

Network Latency: The network delivery time, t_d, represents the time it takes for messages to propagate through the network from one cluster to another. In a distributed network, latency can vary with factors such as congestion, distance, and communication protocols. By assuming that the inter-block time T_B is sufficiently larger than t_d, we provide a buffer for potential delays and ensure that conflicting messages have enough time to reach the arbitrators.

Negotiation Time: The negotiation time, t_n, refers to the time it takes for the arbitrators to reach a consensus and finalize a block. This involves exchanging and validating transaction information, checking for conflicts, and reaching an agreement. By assuming that T_B is sufficiently larger than t_n, we allow enough time for the arbitrators to perform the necessary computations and reach consensus on conflicting transactions.

System Scalability: As the F-BFT consensus protocol operates in a hierarchical, cluster-based architecture, it is designed to handle large-scale distributed systems. By assuming that T_B is sufficiently larger than both t_d and t_n, we account for potential increases in network size, complexity, and transaction volume. This ensures that the protocol can scale effectively while maintaining a high level of safety.
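Assumption 3 can be sanity-checked against the empirical figures in Table 11, with t_d and t_n denoting the delivery and negotiation times (a minimal sketch):

```python
def interblock_margin(t_block: int, t_delivery: int, t_negotiate: int) -> int:
    """Safety margin T_B - (t_d + t_n), in seconds; Assumption 3 requires
    this to be comfortably positive."""
    return t_block - (t_delivery + t_negotiate)

# Figures from Table 11, converted to seconds
print(interblock_margin(600, 10, 60))   # Bitcoin: 530
print(interblock_margin(15, 2, 5))      # Ethereum: 8
```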

LEMMA 3 (Safety): In the F-BFT consensus protocol, under the condition that the number of Byzantine replicas f among the n arbitrators satisfies f < n/3, if any two conflicting messages T_i and T_j have reached the ring of arbitrators, then the conflict is resolved and at least one of the messages will be included in the blockchain.

Case 1 (Same Arbitrator): If both T_i and T_j have reached the same arbitrator, then according to Lemma 1 the conflict is resolved by the arbitrator under the condition n ≥ 3f + 1. Therefore, at least one of the conflicting messages will be included in the blockchain.

Case 2 (Different Arbitrators): If T_i and T_j have reached different arbitrators, assume without loss of generality that T_i has reached arbitrator A and T_j has reached arbitrator B. By Assumption 3, the inter-block time T_B is larger than the network delivery time t_d and the negotiation time t_n. Thus, both T_i and T_j are guaranteed to be considered within the inter-block time T_B.

Since Lemma 1 guarantees that conflicts are resolved within a single arbitrator, if T_i and T_j have conflicting digests (d(T_i) ∩ d(T_j) ≠ ∅), then the conflict is resolved within each arbitrator. As a result, at least one of the conflicting messages will be included in the blockchain. Therefore, in both cases Lemma 3 holds, ensuring the safety of the F-BFT consensus protocol. ⬛

At the low-level cluster consensus, with a minimum of 4 nodes per cluster, PBFT requires O(m²) messages for complete consensus within each cluster. At the high-level arbitrator consensus, 2 randomly selected arbitrators participate; assuming each arbitrator communicates with all other arbitrators, a(a − 1) messages are exchanged among the arbitrators.

⬛

Here n is the total number of nodes and m is the number of nodes in a cluster. Note that the number of arbitrators is excluded from this formula, since an ideal optimum is assumed.

Figure 16 F-BFT Communication scheme
Figure 17 Transaction Validation