3.4.2 F-BFT: The Hierarchical Consensus Mechanism

3.4.2.1 High-Level Protocol Overview

In the decentralized ecosystem of the DGT network, a unique form of Byzantine Fault Tolerance (BFT) consensus called Federated BFT (F-BFT) is employed to ensure reliable and scalable consensus. F-BFT is a hierarchical and cluster-based consensus algorithm that addresses the scalability challenges faced by traditional BFT algorithms, including Practical Byzantine Fault Tolerance (PBFT). While PBFT offers notable advantages, it suffers from scalability issues due to its O(n²) communication complexity.

To overcome these challenges, F-BFT leverages a novel network topology and group-based approach to enhance the efficiency and scalability of achieving consensus in the DGT network. By utilizing this innovative approach, F-BFT significantly reduces communication times between nodes, resulting in an O(n) communication complexity. This optimization allows for improved scalability and efficiency in achieving consensus in distributed systems:

  • The DGT network follows a uniform architecture for its on-chain nodes, which can process various types of transactions through the transaction processor. Validators play a crucial role in verifying transactions and making proposals, while arbitrators are responsible for certifying transaction packets and blocks. The level of decentralization in the network is determined by the number of independent arbitrators, which contributes to the overall security and resilience of the system. Moreover, the consensus engine in DGT has the flexibility to dynamically switch between different cluster-level consensus mechanisms.

  • Within each cluster, consensus is achieved by ensuring that the number of Byzantine replicas f is less than one-third of the cluster size n, i.e., f < n/3. The collection of signatures during consensus can be performed using different modes:

    • The LAZY_MODE involves a simple vote count, like a threshold PBFT scheme, with leader rotation.

    • The FAST_RUN mode assumes a constant leader and a synchronous network, resembling the HotStuff scheme.

    • The ALPINE mode utilizes an aggregated cluster signature based on the Lagrange Interpolant model, providing an efficient consensus mechanism.

  • The transaction storage system in DGT adopts a hybrid approach, combining both block organization and a top-directed DAG graph that is maintained in topological order. The DAG plays a critical role in determining network time and fragmentation through the HEARTBEAT mechanism. This mechanism governs various events, including cluster leader rotation and token issuance order, ensuring proper coordination and synchronization within the network.

  • At the top level, transaction certification takes place within the ring of arbitrators. An arbitrator located outside the cluster receives a transaction and verifies it by consulting one or more adjacent arbiter nodes. The confirmation process varies depending on the mode being used: For instance, in LAZY_MODE, the transaction is immediately confirmed, while in FAST_RUN, confirmation is based on the aggregated signature and its connection with the previous block. The participation of arbitrators in the ring is determined by a staking mechanism like TPOS (Transaction Proof of Stake) or the anchoring principle based on certificates, providing additional security and integrity to the overall transaction certification process.
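The intra-cluster fault threshold described above (fewer than one-third Byzantine replicas) reduces to a simple quorum rule. A minimal sketch of that rule follows; the function names are illustrative and not part of DGT's codebase:

```python
def max_faulty(n: int) -> int:
    """Largest number of Byzantine replicas f a BFT cluster of n nodes tolerates (n >= 3f + 1)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes needed so that any two quorums intersect in at least one honest node: 2f + 1."""
    return 2 * max_faulty(n) + 1

def cluster_accepts(n: int, matching_votes: int) -> bool:
    """A proposal is accepted once a quorum of matching votes has been collected."""
    return matching_votes >= quorum_size(n)

# A minimal PBFT-style cluster of 4 nodes tolerates 1 fault and needs 3 matching votes.
print(max_faulty(4), quorum_size(4), cluster_accepts(4, 3))  # 1 3 True
```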

The communication scheme allows for the propagation of transaction proposals from the lowest-level clusters to the higher-level clusters, where consensus is achieved at each level. The leaders (or connection points) of each level coordinate the communication and consensus process, ensuring the integrity and correctness of the proposed transactions. By following this hierarchical structure, the network can achieve consensus in a scalable and efficient manner, with each level contributing to the overall agreement on the transaction proposals.

Here is a rough outline of the F-BFT consensus algorithm:

  • Initialize the network into a "Cluster View" with a group of nodes, a set of connection points, leader selection, and permalink setup.

  • For each cluster:

    • Run the cluster-level consensus round (LAZY_MODE, FAST_RUN, or ALPINE) to reach agreement within the cluster.

    • Broadcast the result with permalinks to other Arbitrator Rings by randomly selecting an Arbitrator.

  • Arbitrators collect the results of the F-BFT consensus from all clusters.

  • Reach a final consensus by combining the results of the F-BFT consensus from all clusters.

This outline describes the high-level steps of the F-BFT consensus algorithm, where each cluster independently reaches consensus using the F-BFT approach, and the results are then combined by the arbitrators to reach a final consensus. The use of permalinks and random selection of arbitrators helps in achieving a decentralized and fault-tolerant consensus process within the F-BFT framework.
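The outline above can be sketched in a few lines, with cluster consensus abstracted to a quorum of matching votes and arbitrator certification reduced to collecting per-cluster results; all names below are illustrative, not DGT's actual interfaces:

```python
import random

def cluster_consensus(votes: list, n: int) -> bool:
    # Intra-cluster round, abstracted to a 2f + 1 quorum of matching votes (n >= 3f + 1).
    f = (n - 1) // 3
    return sum(votes) >= 2 * f + 1

def fbft_round(clusters: dict, arbitrators: list) -> bool:
    certified = []
    for name, (votes, n) in clusters.items():
        if cluster_consensus(votes, n):
            # Broadcast the cluster result to the arbitrator ring via a randomly chosen arbitrator.
            arbiter = random.choice(arbitrators)
            certified.append((name, arbiter))
    # Final consensus: arbitrators combine the per-cluster results; here every cluster must certify.
    return len(certified) == len(clusters)

clusters = {"c0": ([1, 1, 1, 0], 4), "c1": ([1, 1, 1, 1], 4)}
print(fbft_round(clusters, ["a0", "a1"]))  # True: both 4-node clusters reach a 3-vote quorum
```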

3.4.2.2 Protocol Correctness

The proof of the protocol's correctness follows from its two basic properties: liveness and safety. Liveness ensures that progress can be made in the consensus algorithm, meaning that transactions will eventually be agreed upon and added to the blockchain. Safety guarantees the integrity and consistency of the blockchain, meaning that once a transaction is agreed upon and added to the blockchain, it cannot be altered or removed.

ASSUMPTION 1: At the cluster level, conflicting messages are discarded based on the consensus reached within the cluster.

DEFINITION 1: The process we are considering is limited to an inter-block time, during which a series of Byzantine messages may be received, some of which may conflict or be incorrect. We discard messages that have reached the arbitrators but have not been explicitly accepted or included in an approved block of the network.

ASSUMPTION 2: During the inter-block time, all permalinks are functioning, ensuring that messages generated in any of the clusters will reach the arbitrators and propagate throughout the network.

Proof:

During the inter-block time, the arbitrators collaborate to achieve consensus on the validity and ordering of transactions. They reach an agreement on which transactions to include in the blockchain structure and ensure that any subsequent transactions can be negotiated and incorporated within the epoch. Therefore, by ensuring the presence of permalinks, enough honest arbitrators, and the collaboration among the arbitrators, the F-BFT consensus algorithm guarantees liveness within the inter-block time. ⬛

Proof:

The reasonableness of this assumption can be justified based on the following considerations:

Although this property is used to prove F-BFT, the assumption holds for a wide range of networks, including Bitcoin, Ethereum, Algorand, and Avalanche. Below are averaged empirical data; a more rigorous proof can be made by taking into account models such as the Network-on-Chip (NoC) model (Matoussi, 2021).

Table 11 Inter-block time for selected networks

| Network | Inter-Block Time | Network Delivery Time | Negotiation Time |
| --- | --- | --- | --- |
| Bitcoin | 10 minutes | 10 seconds | 1 minute |
| Ethereum | 15 seconds | 2 seconds | 5 seconds |
| Algorand | 4.5 seconds | 1 second | 2 seconds |
| Avalanche | 1-2 seconds | 0.5 seconds | 1 second |

Proof:

  • Case 1: Same Arbitrator

  • Case 2: Different Arbitrators

Note: Lemma 3 is based on Lemma 1, which ensures that conflicts are resolved within each arbitrator, and Assumption 3, which guarantees that the inter-block time is sufficiently larger than the network delivery time and negotiation time. This combination of assumptions and lemma provides a strong basis for the safety of the F-BFT consensus protocol.
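The implicit requirement that the inter-block time dominates delivery plus negotiation time can be checked directly against the averaged figures in Table 11 (Avalanche taken at its 2-second upper bound):

```python
# (inter-block time, network delivery time, negotiation time), all in seconds, from Table 11.
timings = {
    "Bitcoin":   (600, 10, 60),
    "Ethereum":  (15, 2, 5),
    "Algorand":  (4.5, 1, 2),
    "Avalanche": (2, 0.5, 1),   # upper bound of the 1-2 s range
}
for net, (block, delivery, negotiation) in timings.items():
    margin = block / (delivery + negotiation)
    print(f"{net}: inter-block time is {margin:.1f}x delivery + negotiation")
```

Every ratio exceeds 1, consistent with the assumption, although the Avalanche margin is the tightest of the four.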

3.4.2.3 Communication Model

The communication model defines the assumptions and constraints on how messages are exchanged between nodes in the network: the level of synchrony or asynchrony in message delivery, and the assumptions about message delays and failures. All three modes of F-BFT use a partially synchronous model. In this approach, the network is assumed to exhibit some degree of synchrony: most messages are delivered within a known and reasonable time frame, but some may experience delays or failures.
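Partial synchrony is commonly modeled as an unknown Global Stabilization Time (GST) after which every message arrives within a known bound Δ. A toy model of that assumption, with all parameter values illustrative:

```python
import random

DELTA = 0.5   # known delivery bound after stabilization, in seconds
GST = 10.0    # unknown to the protocol; used only by this model

def delivery_delay(send_time: float) -> float:
    """Before GST, delays are effectively unbounded (here: up to 60 s); after GST they respect DELTA."""
    if send_time < GST:
        return random.uniform(0, 60.0)
    return random.uniform(0, DELTA)

# After stabilization, a timeout of DELTA never falsely suspects an honest sender.
print(all(delivery_delay(20.0) <= DELTA for _ in range(1000)))  # True
```

This is why timeout-driven mechanisms such as leader rotation remain safe before GST and become live after it.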

Estimating the communication complexity in F-BFT involves analyzing the number of messages exchanged between nodes during the consensus process. The complexity can be evaluated based on parameters such as the number of nodes, the number of rounds in the consensus protocol, and the size of messages exchanged.

In a two-level consensus scheme with a hierarchical clustered network, where the low-level consensus is achieved using PBFT within clusters and PoS is used for arbitrators at the high level, we can explore the minimum communication complexity (C) required for consensus.

ASSUMPTION 4. Let's assume the minimum configuration for the PBFT + PoS setup, where the total number of nodes in the network is denoted as n, and the minimum number of nodes per cluster is set to 4 as required by PBFT. Additionally, we consider a minimum number of arbitrators, denoted as a, with only 2 of them randomly selected for consensus.

LEMMA 5. Based on Assumption 4 the minimum communication complexity C required to reach consensus is given by

Proof:

To prove this lemma, we need to consider the communication complexity at each level of the consensus scheme.

Therefore, the total communication complexity is given by

This minimum communication complexity ensures the secure operation of the hybrid/consortium architecture in the minimum configuration. By considering the minimum number of nodes per cluster and the minimum number of arbitrators, the system can maintain the necessary fault tolerance and security while minimizing the communication overhead associated with consensus protocols.

It's important to note that the specific values of n, a, and the minimum nodes per cluster can be adjusted according to the desired security and performance requirements of the system. The minimum communication complexity provides a baseline for achieving consensus in the most efficient manner while ensuring the integrity and availability of the network.
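Since the closed-form expression of Lemma 5 is not reproduced here, the sketch below counts messages in one standard way under Assumption 4: a full PBFT exchange (pre-prepare, prepare, commit) inside each minimum-size 4-node cluster, plus one message to each of the 2 randomly selected arbitrators per cluster. This counting convention is an assumption for illustration, not DGT's exact formula:

```python
def pbft_messages(m: int) -> int:
    """Messages in one PBFT round among m replicas:
    (m - 1) pre-prepares + m(m - 1) prepares + m(m - 1) commits."""
    return (m - 1) + 2 * m * (m - 1)

def min_complexity(n: int, cluster_size: int = 4, arbiters: int = 2) -> int:
    """Total messages for n nodes split into minimum-size clusters,
    each cluster reporting its result to `arbiters` arbitrators."""
    clusters = n // cluster_size
    return clusters * (pbft_messages(cluster_size) + arbiters)

print(min_complexity(16))  # 4 clusters x (27 intra-cluster + 2 arbitrator messages) = 116
```

Because the intra-cluster cost is a constant (cluster size is fixed at 4), the total grows linearly in n, matching the O(n) claim for the hierarchical scheme.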

In more complex implementations, when all arbitrators are involved in communication or a two-level PBFT pass is used, the complexity of communication can be computed as (Li et al., 2020):

Omitting the details of the calculations for the remaining configurations, we present the main results of the comparative characteristics:

Table 12 Communication Model Parameters

| Mode | Adv. Tolerance | Communication Model | Communication Complexity | Throughput | Latency | Finality | Leader | Optimistic Responsive |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F-BFT (LAZY_MODE) | f < n/3 | Partially Synchronous | O(n²) | High | Low | Deterministic | Rotation | No |
| F-BFT (FAST_RUN) | f < n/3 | Partially Synchronous | O(n) | High | Low | Probabilistic | Stable | Yes |
| F-BFT (ALPINE) | f < n/3 | Partially Synchronous | O(n) | High | Low | Probabilistic | Stable | Yes |

3.4.2.4 Transaction Validation

Transaction validation is a critical component of the DGT network, ensuring the accuracy and integrity of transactions. It involves multiple stages and utilizes various technical solutions to achieve consensus and maintain network security. The following key elements contribute to the transaction validation process in the DGT network:

  • Permalinks:

    • Permalinks serve as communication routes between nodes, facilitating efficient and reliable message exchange.

    • They allow nodes to establish connections and exchange information with one another.

    • Permalinks reduce the communication complexity and enable seamless transaction propagation across the network.

  • Ledger:

    • The ledger in the DGT network is structured as a block-based system with a directed acyclic graph (DAG).

    • It serves as the primary storage mechanism for recording and managing transactions.

    • The DAG structure allows for additional connections between transactions, enhancing network flexibility and efficiency.

    • The ledger also incorporates the concept of a Journal component, similar to the Sawtooth blockchain, which enables parallel processing and advanced batching management.

  • Cluster-Based Consensus:

    • The DGT network utilizes a cluster-based consensus mechanism.

    • Clusters are coordinated by a leader node that dynamically changes based on predefined criteria and timeouts.

    • Nodes within clusters participate in the validation process, voting on the correctness of transactions.

    • A sufficient number of votes, known as a quorum, is required to achieve consensus within a cluster.

  • External Arbitrators:

    • External arbitrators play a crucial role in the final verification of transactions.

    • They form a separate layer of consensus and provide an additional level of validation.

    • Arbitrators approve transactions before they are included in the ledger.

    • The consensus mechanism involving arbitrators ensures the integrity and reliability of the transaction validation process.

  • Transaction Families and Processors:

    • Different transaction types, known as transaction families, are supported in the DGT network.

    • Each transaction family has its own validation and processing requirements.

    • Transaction processors are responsible for validating and processing transactions within their respective families.

    • They ensure the correctness and compliance of transactions based on predefined rules and conditions.

  • Batching and Parallel Processing:

    • Transactions within the DGT network are wrapped in batches before being sent to the ledger.

    • Batching allows for efficient processing and optimization of transaction handling.

    • Parallel processing techniques are employed to enhance transaction throughput and performance.

  • Block Publishing:

    • Once transactions are validated and approved, they are included in blocks within the ledger.

    • The blocks are then published and distributed throughout the network.

    • Block publishing ensures the transparency and immutability of transactions within the DGT network.
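The validation stages listed above can be sketched as a pipeline: transactions are grouped into batches, validated by a family-specific processor, voted on within a cluster, and finally approved by an arbitrator before block publishing. All class and function names below are illustrative, not DGT's actual API:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    family: str    # transaction family, selects the processor
    payload: dict

# Family-specific validation rules (transaction processors); illustrative only.
PROCESSORS = {
    "token": lambda tx: tx.payload.get("amount", 0) > 0,
}

def make_batches(txs, size=2):
    """Wrap transactions into fixed-size batches before they are sent to the ledger."""
    return [txs[i:i + size] for i in range(0, len(txs), size)]

def validate_batch(batch, cluster_votes, quorum, arbitrator_ok):
    """A batch enters a block only if its processors, the cluster quorum,
    and an external arbitrator all approve."""
    processed = all(PROCESSORS[tx.family](tx) for tx in batch)
    return processed and cluster_votes >= quorum and arbitrator_ok

txs = [Transaction("token", {"amount": 5}), Transaction("token", {"amount": 7})]
batch = make_batches(txs)[0]
print(validate_batch(batch, cluster_votes=3, quorum=3, arbitrator_ok=True))  # True
```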
