Leading the Blockchain Privacy Revolution: A Deep Dive into $Aleo's Latest Algorithm
@AleoHQ is a blockchain project focused on privacy protection, achieving both privacy and scalability through zero-knowledge proof technology (#ZKP). The core idea of $Aleo is to let users verify their identity and process data without disclosing personal information.
This article gives an overview of the $Aleo project and its latest developments, with a detailed interpretation of the puzzle algorithm update that the market has been watching closely. If this helps your understanding of @AleoHQ, feel free to give it a thumbs up.
Latest Algorithm Sneak Peek ;) TL;DR
The $Aleo network generates a ZK circuit randomly every hour;
Within that hour, miners try different nonces as circuit inputs, compute the witness (i.e., all the variables in the circuit; this computation is also called synthesis), and then compute the Merkle root of the witness to check whether it meets the mining difficulty target.
Because the circuit is random, this mining algorithm is unfriendly to GPUs and poses significant challenges for computational acceleration.
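Conceptually, the per-epoch mining loop looks roughly like the sketch below (Python, with a toy hash standing in for real circuit synthesis; all names are illustrative, not Aleo's actual API):

```python
import hashlib

def synthesize(epoch_hash: bytes, nonce: int) -> bytes:
    # Stand-in for circuit synthesis: the real puzzle computes a full R1CS
    # witness here; we just derive a digest from the epoch and the nonce.
    return hashlib.sha256(epoch_hash + nonce.to_bytes(8, "big")).digest()

def mine(epoch_hash: bytes, target: int, max_tries: int = 100_000):
    # Try successive nonces until the result, read as an integer, meets
    # the (toy) difficulty target; return the winning nonce or None.
    for nonce in range(max_tries):
        digest = synthesize(epoch_hash, nonce)
        if int.from_bytes(digest[:8], "big") < target:
            return nonce
    return None
```

With a real circuit, `synthesize` is the expensive step, and the hourly re-randomization of that circuit is what makes it hard to hard-wire into GPU kernels.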
Funding Background
$Aleo completed its Series A funding round in 2021, raising $28 million led by a16z, and completed a $200 million Series B in 2022, with investors including Kora Management, SoftBank Vision Fund 2, Tiger Global, Sea Capital, Slow Ventures, and Samsung Next. That round valued $Aleo at $1.45 billion.
Project Overview
Privacy
The core of $Aleo is the zero-knowledge proof (ZKP) technology, which allows transactions and smart contract executions to be carried out while maintaining privacy. User transaction details, such as sender and transaction amount, are hidden by default. This design not only protects user privacy but also allows for selective disclosure when necessary, making it very suitable for the development of DeFi applications.
Its main components include:
#Leo Programming Language: Adapted from #Rust and designed specifically for developing zero-knowledge applications (ZKApps), lowering the cryptography knowledge barrier for developers.
snarkVM and snarkOS: snarkVM executes computations off-chain and uploads only the verification results to the blockchain, improving efficiency. snarkOS secures data and computation and allows functions to be executed permissionlessly.
zkCloud: Provides a secure, private off-chain computing environment to support programming interactions between users, organizations, and DAOs.
$Aleo also offers an integrated development environment (IDE) and software development kit (SDK), enabling developers to quickly write and deploy applications; additionally, developers can deploy applications in $Aleo's program registry without relying on third parties, thus reducing platform risk.
Scalability
$Aleo adopts an off-chain processing approach, where transactions are first computed on user devices, and only the verification results are uploaded to the blockchain. This method significantly improves transaction processing speed and system scalability, avoiding network congestion and high fees similar to Ethereum.
Consensus Mechanism
$Aleo introduces AleoBFT, a hybrid architecture consensus mechanism that combines the instant finality of validators with the computational capabilities of provers. AleoBFT not only enhances the decentralization of the network but also improves performance and security.
Fast Block Finality: AleoBFT ensures that each block is immediately confirmed after generation, enhancing node stability and user experience.
Decentralization Guarantee: By separating block production from coinbase generation, validators are responsible for block generation, while provers perform proof computations, preventing a few entities from monopolizing the network.
Incentive Mechanism: Validators and provers share block rewards; provers are encouraged to stake tokens to become validators, thereby enhancing the network's level of decentralization and computational power.
$Aleo allows developers to create applications without gas limitations, making it particularly suitable for applications that require long-running processes, such as machine learning.
Current Developments
$Aleo will launch its incentive testnet on July 1st, and here are some important updates:
ARC-100 Voting Passed: The voting for ARC-100 ("Best Practices for Compliance for $Aleo Developers and Operators" proposal, addressing compliance, fund locking, and delayed transactions on the $Aleo network for security measures) has concluded with approval. The team is making final adjustments.
Validator Incentive Program: This program will start on July 1st and aims to validate the new puzzle mechanism. It will run until July 15th, during which 1 million $Aleo points will be allocated as rewards. The percentage of points generated by nodes will determine their share of the rewards, with each validator required to earn at least 100 tokens to qualify for rewards. Specific details have yet to be finalized.
Initial Supply and Circulating Supply: The initial supply is 1.5 billion tokens, with an initial circulating supply of approximately 10% (not yet finalized). These tokens come mainly from coinbase tasks (75 million), distributed over the first six months, along with rewards for staking, running validators, and operating verification nodes.
Testnet Beta Reset: This is the final network reset, and no new features will be added after completion; the network will be similar to the mainnet. The reset is intended to add ARC-41 and new puzzle functionality.
Code Freeze: The code freeze was completed a week ago.
Validator Node Expansion Plan: The initial number of validator nodes is 15, with a target of 50 within the year and ultimately 500. Becoming a delegator requires 10,000 tokens, and becoming a validator requires 10 million; these thresholds will gradually decrease over time.
Algorithm Update Interpretation
Alongside news of the latest testnet, $Aleo recently announced an update to the puzzle algorithm. The new algorithm no longer focuses on producing the zk proof itself: it removes the MSM and NTT computations (both widely used computational modules for generating zk proofs, which participants in previous testnets optimized to boost mining rewards) and instead focuses on generating the witness, the intermediate data produced before the proof. We give a brief introduction to the new algorithm based on the official puzzle spec (https://t.co/7Kk5OMfKX7) and code.
Consensus Process
At the consensus protocol level, provers produce computational results (solutions), while validators create blocks, aggregating and packaging solutions into them. The flow is as follows:
1/ Prover computes puzzles to construct solutions and broadcasts them to the network.
2/ Validator aggregates transactions and solutions for the next new block, ensuring that the number of solutions does not exceed the consensus limit (MAX_SOLUTIONS).
3/ A solution's legality is verified by comparing its epoch_hash against the latest_epoch_hash maintained by the validator, checking that the computed proof_target meets the latest_proof_target maintained by the validator, and confirming that the number of solutions included in the block is below the consensus limit.
4/ Valid solutions can earn consensus rewards.
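The validator-side checks in steps 2/ and 3/ can be sketched as follows (a minimal sketch; `MAX_SOLUTIONS` and the field names are placeholders, not snarkOS's actual types or values):

```python
MAX_SOLUTIONS = 4  # placeholder; the real limit is a consensus parameter

def solutions_are_valid(solutions, latest_epoch_hash, latest_proof_target):
    # A block may not include more solutions than the consensus limit.
    if len(solutions) > MAX_SOLUTIONS:
        return False
    for s in solutions:
        # Each solution must be built against the current epoch...
        if s["epoch_hash"] != latest_epoch_hash:
            return False
        # ...and its computed proof_target must meet the network's target.
        if s["proof_target"] < latest_proof_target:
            return False
    return True
```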
Synthesis Puzzle
The core of the latest algorithm is called the Synthesis Puzzle. Its central idea is to generate a common EpochProgram for each epoch; by constructing the R1CS proof circuit from the inputs and the EpochProgram, it produces the corresponding R1CS assignment (commonly called the witness), which serves as the leaf nodes of a Merkle tree. After all leaf nodes are computed, the Merkle root is generated and converted into the solution's proof_target. The detailed process and specification of the Synthesis Puzzle are as follows:
1/ Each puzzle computation starts from a nonce, which is constructed from the mining-reward address, the epoch_hash, and a random counter. Each time a new solution needs to be computed, a new nonce is obtained by updating the counter.
2/ Within each epoch, all provers in the network compute the same EpochProgram, which is sampled from the instruction set using a random number generated from the current epoch_hash. The sampling logic is:
· The instruction set is fixed, with each instruction containing one or more computational operations, each having a preset weight and operation count.
· During sampling, a random number is generated from the current epoch_hash, and instructions are drawn from the instruction set according to their weights and appended in order until the cumulative operation count reaches 97, at which point sampling stops.
· All instructions are then combined to form the EpochProgram.
3/ The nonce is used as a random seed to generate inputs for the EpochProgram.
4/ The R1CS corresponding to the EpochProgram and the input is assembled, and the witness (the R1CS assignment) is computed.
5/ Once all witnesses are computed, they are converted into the corresponding leaf-node sequence of a Merkle tree; the tree is an 8-ary Merkle tree of depth 8.
6/ The Merkle root is computed and converted into the solution's proof_target, which is checked against the current epoch's latest_proof_target. If it qualifies, the computation has succeeded, and the reward address, epoch_hash, and counter needed to reconstruct the input are submitted as the solution and broadcast.
7/ Within the same epoch, multiple solution calculations can be performed by iterating the counter to update the inputs for the EpochProgram.
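Putting steps 1/ through 7/ together, the flow can be sketched as below. Everything here is illustrative: SHA-256 stands in for snarkVM's circuit-friendly hash, the instruction set and its weights are made up, and each "witness" is a toy digest rather than a real R1CS assignment.

```python
import hashlib
import random

# Hypothetical instruction set: (name, sampling weight, operation count).
INSTRUCTION_SET = [("add", 5, 1), ("mul", 3, 2), ("hash", 1, 10), ("div", 2, 4)]
TARGET_OPS = 97      # sampling stops once cumulative operations reach 97
ARITY, DEPTH = 8, 8  # the witness Merkle tree is 8-ary with depth 8

def sample_epoch_program(epoch_hash: bytes):
    # 2/ weighted sampling of instructions, seeded by the epoch_hash,
    # until the cumulative operation count reaches TARGET_OPS.
    rng = random.Random(epoch_hash)
    names = [i[0] for i in INSTRUCTION_SET]
    weights = [i[1] for i in INSTRUCTION_SET]
    ops = {n: c for n, _, c in INSTRUCTION_SET}
    program, total = [], 0
    while total < TARGET_OPS:
        pick = rng.choices(names, weights=weights)[0]
        program.append(pick)
        total += ops[pick]
    return program

def make_nonce(address: bytes, epoch_hash: bytes, counter: int) -> bytes:
    # 1/ nonce derived from the reward address, epoch_hash and a counter.
    return hashlib.sha256(address + epoch_hash + counter.to_bytes(8, "big")).digest()

def witnesses(program, nonce: bytes):
    # 3/-4/ stand-in for synthesis: one toy "witness" per instruction.
    return [hashlib.sha256(nonce + insn.encode() + i.to_bytes(2, "big")).digest()
            for i, insn in enumerate(program)]

def merkle_root_8ary(leaves):
    # 5/ root of an 8-ary, depth-8 Merkle tree (short layers are padded).
    layer = [hashlib.sha256(l).digest() for l in leaves]
    pad = hashlib.sha256(b"").digest()
    for _ in range(DEPTH):
        if len(layer) % ARITY:
            layer += [pad] * (ARITY - len(layer) % ARITY)
        layer = [hashlib.sha256(b"".join(layer[i:i + ARITY])).digest()
                 for i in range(0, len(layer), ARITY)]
    return layer[0]

def try_solution(address: bytes, epoch_hash: bytes, counter: int, target: int):
    # 6/-7/ one attempt; iterate `counter` for further attempts in the epoch.
    program = sample_epoch_program(epoch_hash)
    nonce = make_nonce(address, epoch_hash, counter)
    root = merkle_root_8ary(witnesses(program, nonce))
    proof_target = int.from_bytes(root[:8], "big")
    return (address, epoch_hash, counter) if proof_target < target else None
```

Note how `sample_epoch_program` depends only on `epoch_hash`: every prover in an epoch runs the same program, and only the nonce (via the counter) varies between attempts.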
Changes and Impacts on Mining
After this update, the puzzle shifts from generating proofs to generating witnesses. The computation logic of all solutions within an epoch is identical, but it differs significantly across epochs.
From previous testnets we saw that many optimizations focused on using GPUs to accelerate the MSM and NTT computations in proof generation to improve mining efficiency; this update removes that part of the computation entirely. Meanwhile, witness generation comes from executing a program that changes with each epoch, and some of its instructions have serial dependencies on one another, so parallelization poses a significant challenge.
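As a concrete illustration of why serial dependencies block parallelism, consider a toy three-instruction program (hypothetical instructions, not drawn from Aleo's set): each step consumes the previous step's output, so the steps cannot run concurrently.

```python
def run_program(x: int) -> int:
    a = x * x + 1      # instruction 1
    b = (a * 3) % 97   # instruction 2: needs a, must wait for instruction 1
    c = (a + b) * b    # instruction 3: needs both a and b
    return c
```

Miners can still parallelize across counters, since each attempt is independent, but within a single attempt the dependency chain forces serial execution; that is the change that undercuts the old MSM/NTT-style GPU optimizations.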