3.4.3 Cluster-Based Algorithms

The DGT protocol employs a combination of cluster-level algorithms: a Byzantine fault-tolerant (BFT) approach at the lower level and a proof-of-stake (PoS) mechanism for the Arbitrators. This hybrid model draws on the strengths of both approaches. The BFT algorithms provide fault tolerance and resilience to Byzantine behavior within clusters, while the PoS mechanism gives the network Sybil resistance and economic incentives through token staking.

Table 13 Cluster-level consensus algorithms overview

| # | Algorithm | Description | Pros | Cons | Reference |
| --- | --- | --- | --- | --- | --- |
| 1 | LAZY_WAY | Based on PBFT with leader rotation; uses a lazy leader-selection approach | Optimizes the performance and efficiency of the consensus process; minimizes communication overhead; reduces the need for frequent leader changes | May introduce delays in leader rotation; requires careful configuration for optimal performance | (Castro & Liskov, 1999) |
| 2 | FAST_RUN | Based on the pipelined (chained) HotStuff consensus algorithm | Accelerates the consensus process within clusters; pipelines the phases of the protocol; improves overall throughput and responsiveness | Requires careful tuning for optimal performance; more complex than traditional algorithms | (Jalalzai, Niu, & Feng, 2020) |
| 3 | ALPINE | Based on the Lagrange construction for PoS consensus; designed for Arbitrators | Provides a secure and resilient consensus mechanism; lets Arbitrators stake tokens and participate in consensus; incentivizes honest behavior through the token stake | Requires an understanding of PoS mechanics and token economics; staking and governance considerations must be carefully managed | (Bogdanov, Uteshev, & Khvatov, 2019) |

The use of several consensus algorithms in the DGT protocol is driven by specific motivations. PBFT (Practical Byzantine Fault Tolerance) is a reliable scheme for enterprise networks under moderate load, but its quadratic communication complexity makes it less practical in open and highly dynamic environments. The ALPINE scheme, in turn, can incorporate homomorphic encryption, which strengthens the security of complex transactions but may reduce the speed and efficiency of the consensus process.

The selection of the appropriate consensus algorithm depends on the specific requirements and considerations of the network. In scenarios where reliability and fault tolerance are paramount, PBFT may be a suitable choice. However, in more open and dynamic environments where scalability and speed are crucial, alternative algorithms such as FAST_RUN or LAZY_WAY may be preferred.

It is important to strike a balance between the reliability, performance, and security requirements of the network when choosing the appropriate consensus algorithm. The DGT protocol aims to provide flexibility and adaptability by offering a range of algorithms that can be tailored to specific use cases and network conditions.

3.4.3.1 LAZY_WAY: MLBFT-based consensus

The simplest variant is PBFT. PBFT (Practical Byzantine Fault Tolerance) is a relatively simple, deterministic consensus algorithm that guarantees finality once consensus is reached. While it is not the most performant option for cluster-level solutions, it offers predictable and reliable results.

Recent research has highlighted limitations and bottlenecks of PBFT (Alqahtani & Demirbas, Bottlenecks in Blockchain Consensus Protocols, 2021). Nevertheless, PBFT remains advantageous in certain scenarios, particularly because of its deterministic finality guarantee. In DGT, a modification of PBFT called Multi-Leader Byzantine Fault Tolerance (MLBFT) is employed (Alqahtani & Demirbas, BigBFT: A Multileader Byzantine Fault Tolerance Protocol for High Throughput, 2021). MLBFT improves system performance by introducing additional leaders, reducing the number of rounds required for consensus: the system can move on to the next round while previous rounds are still in progress. Moreover, MLBFT can easily fall back to regular PBFT, which simplifies testing and evaluation.

The specific requirements and trade-offs of an application or network should guide the choice of algorithm. While PBFT has performance limitations, its deterministic finality and simplicity make it a suitable choice in certain use cases, and the modified MLBFT algorithm used in DGT offers performance improvements while maintaining the essential guarantees of PBFT. MLBFT reaches consensus among multiple leaders and the participating nodes through the following steps (a sketch of the quorum logic follows the list):

  • Leader Selection: The network selects multiple leaders from the pool of participating nodes. The number of leaders and the selection mechanism may vary depending on the specific implementation. The leaders are responsible for proposing blocks of transactions and coordinating the consensus process.

  • Proposal Phase: Each leader independently creates a proposal for a new block of transactions. The proposed block includes the transactions that the leader believes should be included in the blockchain.

  • Pre-Prepare Phase: The leaders broadcast their proposed blocks to the other nodes in the network. Upon receiving the proposals, the nodes validate the transactions within the block and verify the digital signatures. They then send a pre-prepare message to all other nodes, indicating their acceptance of the proposed block.

  • Prepare Phase: After receiving pre-prepare messages from a sufficient number of nodes, each node broadcasts a prepare message to the network. The prepare message includes a hash of the proposed block, indicating the node's agreement on the block's validity.

  • Commit Phase: Once a node has received enough prepare messages, it broadcasts a commit message to the network. The commit message serves as a final confirmation of the block's validity and indicates that the node is ready to add the block to the blockchain.

  • Checkpointing: Periodically, the network reaches a checkpoint where all nodes agree on a certain block. Checkpointing helps optimize the consensus process by allowing nodes to discard older blocks from their memory, reducing storage requirements.

  • Block Addition: After the commit phase, the proposed block is added to the blockchain by all nodes. Once added, the block is considered final and cannot be altered.

  • Leader Rotation: To ensure fairness and prevent leader bias, the leadership role rotates among the participating nodes. The rotation mechanism may vary, but it typically ensures that each node has a chance to become a leader over time.
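As referenced above, here is a minimal sketch of the quorum logic behind the prepare and commit phases, assuming the classical PBFT quorum of 2f + 1 matching messages and a simple round-robin rotation of k concurrent leaders. All names in it (Node, on_prepare, leaders_for_round, and so on) are illustrative and do not correspond to the DGT codebase.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    n: int                                   # nodes in the cluster, n >= 3f + 1
    f: int                                   # tolerated Byzantine nodes
    prepares: dict = field(default_factory=lambda: defaultdict(set))
    commits: dict = field(default_factory=lambda: defaultdict(set))
    committed: set = field(default_factory=set)

    def quorum(self) -> int:
        # Classical PBFT quorum: 2f + 1 matching messages.
        return 2 * self.f + 1

    def leaders_for_round(self, round_no: int, k: int) -> list:
        # Round-robin rotation of k concurrent leaders (the multi-leader part).
        return [(round_no + i) % self.n for i in range(k)]

    def on_prepare(self, block_hash: str, sender: int) -> bool:
        # Returns True once this node has enough PREPAREs to broadcast COMMIT.
        self.prepares[block_hash].add(sender)
        return len(self.prepares[block_hash]) >= self.quorum()

    def on_commit(self, block_hash: str, sender: int) -> None:
        # With a commit quorum the block is final and joins the ledger.
        self.commits[block_hash].add(sender)
        if len(self.commits[block_hash]) >= self.quorum():
            self.committed.add(block_hash)

# Example: a 4-node cluster tolerating one faulty node (2f + 1 = 3).
node = Node(node_id=0, n=4, f=1)
for sender in range(3):
    node.on_prepare("blk-1", sender)
for sender in range(3):
    node.on_commit("blk-1", sender)
assert "blk-1" in node.committed
```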

By default, the platform is deployed in PBFT mode.

3.4.3.2 FAST_RUN: HotStuff adaptation

Fast-HotStuff is a consensus algorithm based on the HotStuff protocol, designed to provide fast and efficient consensus in distributed networks. It builds on the principles of classical Byzantine fault-tolerant (BFT) consensus algorithms but introduces optimizations to improve performance (a small model of its quorum certificates follows the list):

  • Linear View Change and Responsiveness: Fast-HotStuff achieves linear view change and responsiveness, overcoming a long-standing challenge in BFT consensus. Linear view change enables fast leader rotation, while responsiveness ensures consensus at wire-speed, reflecting the actual network latency.

  • Pipelining of Phases: Fast-HotStuff utilizes a chained structure that pipelines all phases into a unified propose-vote pattern. This simplifies the protocol, improves throughput, and reduces latency by overlapping proposal and commit phases.

  • Three-Chain Commit Rule: Fast-HotStuff implements a three-chain commit rule, requiring two uninterrupted rounds of communication and an additional round (not necessarily uninterrupted) among replicas to commit a block. This rule ensures timely certification and finality of blocks.

  • Voting Rule and Forking Prevention: Fast-HotStuff employs a specific voting rule where replicas accept and vote for proposals if they do not point to the predecessor of the grandparent of the latest block. This prevents forks, maintains high throughput, and reduces latency.

  • Threshold Signatures: Fast-HotStuff utilizes threshold signatures, allowing for efficient verification without revealing the identities of message senders. This reduces the cost of signature verification during view changes.

  • Quorum Certificate (QC) and Aggregated QC: Fast-HotStuff relies on quorum certificates (QC) and aggregated QC for block verification. QCs provide proof that a block has received votes from most nodes, and aggregated QC comprises signatures from distinct replicas. This ensures block validity and certification.

  • Efficient View Change: Fast-HotStuff incorporates an efficient view change mechanism. During view changes, replicas send NEWVIEW messages to the next primary, and the primary aggregates them into an Aggregated QC. Replicas verify the Aggregated QC to determine the highest QC and guarantee the extension of the proposed block.
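To make the QC and AggQC notions concrete, the following is a small illustrative model, assuming votes are tracked as sets of replica IDs and threshold-signature aggregation is abstracted away. It shows how a certificate is validated and how the highest QC is selected during a view change; the names QC, AggQC, is_valid_qc, and high_qc are invented for this sketch and are not identifiers from Fast-HotStuff or DGT.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QC:
    view: int              # view in which the block was certified
    block_hash: str        # block this certificate vouches for
    signers: frozenset     # replica IDs behind the aggregated signature

def is_valid_qc(qc: QC, n: int, f: int) -> bool:
    # A QC proves that at least n - f distinct replicas voted for the block.
    return len(qc.signers) >= n - f

@dataclass(frozen=True)
class AggQC:
    qcs: tuple             # highest QCs reported in n - f NEWVIEW messages

def high_qc(agg: AggQC) -> QC:
    # During view change, the new primary extends the block certified
    # in the highest view among the reported QCs.
    return max(agg.qcs, key=lambda qc: qc.view)

# Example: the primary aggregates NEWVIEW reports from 3 of 4 replicas.
q1 = QC(view=7, block_hash="b7", signers=frozenset({0, 1, 2}))
q2 = QC(view=8, block_hash="b8", signers=frozenset({1, 2, 3}))
agg = AggQC(qcs=(q1, q2, q1))
assert is_valid_qc(high_qc(agg), n=4, f=1) and high_qc(agg).view == 8
```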

Here are the main steps involved in the Fast-HotStuff consensus algorithm (a toy walk-through of one view follows the list):

  • Prepare Phase:

    • The primary replica waits for new-view messages (η) from n - f replicas.

    • The new-view message contains information about the current view, the latest PrepareQC known to the replica, and the replica's ID.

    • The primary creates and signs a prepare message (B) that includes the aggregated QC (AggQC), commands, current view, and replica information.

• The prepare message is broadcast to all replicas.

  • Precommit Phase:

    • The primary collects prepare votes from n - f replicas and builds a PrepareQC.

    • The primary broadcasts the PrepareQC to all replicas in a precommit message.

    • Upon receiving a valid precommit message, a replica responds with a precommit vote to the primary.

• Replicas also check whether they have committed the block pointed to by the highQC (highest quorum certificate).

  • Commit Phase:

    • Similar to the precommit phase, the primary collects precommit votes from n - f replicas and combines them into a PrecommitQC.

    • As replicas receive and verify the PrecommitQC, they execute the commands contained in the block.

    • Replicas increment their view number and start the next view.

  • New-View:

    • As replicas move to the next view, they send their NEWVIEW messages (η) to the next primary.

    • The primary aggregates the η messages and their signatures into an AggQC, which is sent back to all replicas.

    • Replicas verify the AggQC and determine the highest QC among the contained QC messages.

    • Replicas also verify the aggregated signature of the highQC or latest QC.

• Replicas increment their view number and start the next view when the NEXTVIEW utility interrupts the waiting period.
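The toy walk-through below strings these phases together for a single view from the primary's point of view, assuming all replicas are honest and reducing messages and cryptography to plain dictionaries. Every name in it (make_qc, run_view, the message fields) is an assumption of the sketch, not the actual protocol implementation.

```python
N, F = 4, 1                        # n >= 3f + 1
QUORUM = N - F                     # votes needed to form any QC

def make_qc(phase: str, view: int, block, voters: set) -> dict:
    # A QC records that n - f replicas voted in `phase` for `block`.
    assert len(voters) >= QUORUM, "not enough votes for a certificate"
    return {"phase": phase, "view": view, "block": block, "voters": voters}

def run_view(view: int, newview_qcs: list) -> tuple:
    # New-View: the primary waits for n - f NEWVIEW messages, each carrying
    # the sender's latest PrepareQC, and folds them into an AggQC.
    agg_qc = newview_qcs[:QUORUM]
    high = max(agg_qc, key=lambda qc: qc["view"])

    # Prepare: propose a block extending the highest QC; replicas vote.
    block = {"parent": high["block"], "view": view, "cmds": ["tx1", "tx2"]}
    prepare_qc = make_qc("prepare", view, block, set(range(QUORUM)))

    # Precommit: the PrepareQC circulates and precommit votes come back.
    precommit_qc = make_qc("precommit", view, block, set(range(QUORUM)))

    # Commit: replicas verifying the PrecommitQC execute the block's
    # commands and advance to view + 1.
    return block, precommit_qc

genesis = {"phase": "prepare", "view": 0, "block": "genesis",
           "voters": set(range(QUORUM))}
block, qc = run_view(1, [genesis] * QUORUM)
print("committed block extends:", block["parent"])   # -> genesis
```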

3.4.3.3 ALPINE: Lagrange Construction

This mode is experimental and is designed to work with special transactions. The ALPINE mode consensus mechanism, based on Lagrange Constructions, offers a robust and privacy-preserving approach to achieve consensus within a cluster. It leverages mathematical principles and cryptographic techniques to ensure the integrity of the shared decision-making process. Additionally, the scheme can be enhanced with homomorphic encryption for increased security.

This consensus mechanism finds applications in various scenarios where privacy and decentralized decision-making are paramount. For instance, it can be utilized in social or political voting transactions, ensuring the confidentiality of voters' choices while maintaining the integrity of the final decision. Additionally, ALPINE mode can be employed in network-centric or other environments where data from multiple sources must be aggregated securely in an untrusted setting.

Although ALPINE mode provides strong security guarantees and preserves privacy, it is not the fastest consensus scheme. The communication required to exchange polynomial shares and reconstruct the shared value can introduce more latency than other consensus algorithms; this trade-off in speed is compensated by the enhanced security and privacy that ALPINE mode provides. The consensus mechanism based on the Lagrange construction proceeds within a cluster as follows (a numerical sketch follows the list):

  • Generation of Polynomials:

    • Each node in the cluster, acting as a voter, generates a polynomial of degree ≤ k - 1, where k is the number of administrators (multiple leaders) in the cluster.

• The polynomial is constructed with arbitrary integer coefficients, except for the free term, which encodes the voting decision (e.g., +1 for "yes" and -1 for "no").

    • The generated polynomials are kept secret by the respective nodes.

  • Privacy Preservation:

    • Each node holds its own polynomial privately and does not disclose it to other nodes or administrators.

    • Privacy is maintained as nodes only communicate polynomial shares without revealing their individual polynomials.

  • Polynomial Shares:

    • Each node computes the value of its polynomial at specific points (x1, x2, ..., xk), where x values are distinct integers generated by the administrators.

    • The computed values, known as polynomial shares, are communicated to the corresponding administrators.

    • Each administrator receives polynomial shares from the nodes and accumulates them.

  • Reconstruction and Verification of the Shared Value:

    • Administrators collect the polynomial shares and compute the shared value by summing up the received shares.

    • The shared value represents the voting result and is calculated using Lagrange interpolation or a similar technique.

    • Verification is performed to ensure that the shared value is correctly reconstructed from the aggregated polynomial shares.

  • Consensus and Decision:

    • The administrators collectively determine the consensus based on the reconstructed shared value.

    • The decision-making process can involve various rules or algorithms specific to the application or use case.

    • The consensus outcome may represent the final decision, agreement, or result of the cluster-wide voting process.
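The following is a minimal numerical sketch of the scheme above, assuming plain integer arithmetic in place of the field arithmetic (and optional homomorphic encryption) that a hardened deployment would use; make_poly, lagrange_at_zero, and the fixed evaluation points are assumptions of the sketch. Because the sum of the voters' polynomials still has degree ≤ k - 1, the k administrators' accumulated shares determine it uniquely, and its value at x = 0 is exactly the sum of the secret votes.

```python
import random
from fractions import Fraction

K = 3                                  # number of administrators (leaders)
XS = [1, 2, 3]                         # distinct evaluation points x1..xk

def make_poly(vote: int, k: int = K) -> list:
    # Degree <= k - 1 polynomial whose free term is the secret vote (+1 / -1).
    return [vote] + [random.randint(-100, 100) for _ in range(k - 1)]

def eval_poly(coeffs: list, x: int) -> int:
    return sum(c * x ** i for i, c in enumerate(coeffs))

def lagrange_at_zero(points: list) -> Fraction:
    # Interpolate the unique degree <= k - 1 polynomial through `points`
    # and evaluate it at x = 0, recovering the summed free terms.
    total = Fraction(0)
    for xi, yi in points:
        li = Fraction(1)
        for xj, _ in points:
            if xj != xi:
                li *= Fraction(-xj, xi - xj)
        total += yi * li
    return total

votes = [+1, -1, +1, +1]               # each node's secret decision
polys = [make_poly(v) for v in votes]  # kept private by each node

# Each administrator m accumulates the shares p(x_m) sent by the nodes.
share_sums = [sum(eval_poly(p, x) for p in polys) for x in XS]

# Together the administrators reconstruct the tally (sum of free terms).
tally = lagrange_at_zero(list(zip(XS, share_sums)))
print("tally:", tally)                 # 2 -> "yes" wins
```

Note that no administrator ever sees an individual polynomial, only the sum of shares at its own point, which is what preserves voter privacy.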

Regarding scalability, ALPINE mode's communication complexity and reliance on multiple leaders can introduce challenges in large-scale networks. As the number of nodes and administrators increases, the coordination and communication overhead may grow, potentially impacting the scalability of the scheme.
