DeFi

Oct 6, 2025

11 min read

Real-time Blockchains and the Rise of Onchain CLOBs

Introduction: The Need For Speed

Decentralized finance has long held the promise of building a more open, efficient, and global financial system. Early blockchains like Bitcoin established the foundation with immutable ledgers, and Ethereum ignited the revolution with smart contracts, enabling programmable and composable onchain finance. Yet, for all its innovation, DeFi has been constrained by a critical bottleneck: performance.

Scaling limitations, resulting in low transaction throughput and high fees, have largely prevented widespread institutional adoption and kept DeFi from truly competing with traditional finance. In order to replace centralized exchanges, a blockchain must order, match, and execute transactions in real-time. For context, the NASDAQ’s matching engine can process orders in under 100 microseconds, while the NYSE handles over a million messages per second. This is the world of high-frequency trading, where maximally efficient pricing and low-cost execution are not just advantages, they are requirements.

This performance gap is now closing. A multi-year, industry-wide race to solve the scaling dilemma has given rise to a new generation of real-time blockchains. These powerful networks are purpose-built to support the most efficient market structure known to finance: the onchain Central Limit Order Book (CLOB).

CLOBs are not new; they are the engine of modern finance, used by the world's largest exchanges to aggregate and match buy and sell orders. The breakthrough lies in bringing this battle-tested model fully onchain. By doing so, developers are combining the raw performance of traditional markets with the unique, native features of blockchain technology. This fusion of programmability, composability, and interoperability is creating a dynamic framework set to redefine how assets are transacted on a global scale.

While this may have seemed impossible just a few years ago, blockchain technology has progressed to a point where its infrastructure is ready to challenge global markets. In this report, we will assess the emerging landscape of these real-time blockchains and the specialized CLOBs ushering in this new era.

  1. The Real-Time Blockchain Landscape

The following list includes select leaders within the real-time blockchain landscape:

  • L1 blockchains: Solana, Aptos, Sui, Sei, and Monad

  • L2 rollups: MegaETH and Rise Chain

Each one of these blockchains introduces a unique approach to enabling fast, cheap, and efficient onchain markets. Below, we’ll describe the architecture and features of each, as well as how they optimize the performance of CLOB-based exchanges.

Solana

When the @solana L1 blockchain launched in March 2020, it helped pioneer the emerging category of real-time blockchains. Instead of sacrificing performance to reach a certain decentralization threshold, Solana was built to maximize performance via specialized infrastructure.

Fig 1. Solana Summary

Hybrid Consensus

In order to support significant throughput, Solana built an innovative hybrid consensus mechanism consisting of two main components: Proof of History (PoH) and a Proof of Stake (PoS) variant called Tower BFT.

Essentially, the PoH component serves as the record-keeping mechanism, or “decentralized clock,” of the PoS component. This allows Solana to process transactions in parallel by recording a verifiable sequence of hashed “ticks,” into which onchain events (such as transactions) are stamped, via SHA-256 hashing. Once transactions are ordered within a block based on PoH’s records, Tower BFT handles the finalization process by verifying that the transactions have been ordered accurately before voting on the block’s validity.
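To make the “decentralized clock” concrete, here is a minimal sketch of a PoH-style hash chain: each tick is the SHA-256 hash of the previous hash plus any event data, so the sequence can only be produced in order, yet anyone can verify it afterwards. This is an illustration of the concept, not Solana’s actual implementation (the function and field names are ours).

```python
import hashlib

def next_tick(prev_hash: bytes, event_data: bytes = b"") -> bytes:
    """One PoH-style 'tick': hash the previous hash together with any event data.

    Each tick depends on the one before it, so the chain must be produced
    serially, but it can be verified independently after the fact.
    """
    return hashlib.sha256(prev_hash + event_data).digest()

# Build a short chain: empty ticks interleaved with one recorded transaction.
h = hashlib.sha256(b"genesis").digest()
for i in range(5):
    event = b"tx: alice->bob 1 SOL" if i == 2 else b""
    h = next_tick(h, event)
    print(i, event.decode() or "<tick>", h.hex()[:16])
```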

Another key component of Solana’s consensus infrastructure is Turbine: the network’s block propagation protocol. Turbine’s job is to communicate block information across all of Solana’s validators. Rather than send the full block to 1,000+ nodes, which would impose massive bandwidth and communication overhead, Turbine shards blocks into “shreds.” Each shred can contain 1.28KB worth of data, making them over 1000x smaller than the average Solana block.

This process significantly improves network efficiency, specifically reducing latency by decreasing the amount of time spent on block propagation.
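As a rough illustration of the shredding step (the constants and names below are our own simplification, not Turbine’s actual code), a serialized block can be split into fixed-size chunks before being fanned out across the validator tree:

```python
SHRED_SIZE = 1280  # bytes, roughly the ~1.28KB figure cited above

def shred_block(block: bytes, shred_size: int = SHRED_SIZE) -> list[bytes]:
    """Split a serialized block into fixed-size 'shreds' for tree-based fan-out."""
    return [block[i:i + shred_size] for i in range(0, len(block), shred_size)]

block = b"\x00" * (128 * 1024)  # a hypothetical 128KB block
shreds = shred_block(block)
print(f"{len(shreds)} shreds of up to {SHRED_SIZE} bytes each")
```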

Sealevel Execution

While Solana’s unique consensus mechanism enables significant throughput capabilities, its Sealevel execution infrastructure ensures that latency remains low, even during spikes in activity.

Sealevel played a major role in advancing blockchain technology by serving as one of the first execution environments that could handle multiple smart contract transactions in parallel. At the time this technology was developed, single-threaded execution environments like Ethereum's EVM and other chains like @EOSIO only allowed sequential transaction processing, where transactions were executed one after another. However, Sealevel improved on this by tapping into the processing power of multiple cores in validator hardware (primarily CPUs, with GPUs used for signature verification rather than transaction execution), enabling a theoretical throughput of up to 65,000 transactions per second.

Once transactions are executed, Sealevel sends the resulting state changes, such as updated balances, to Solana’s AccountsDB database.

Coming Upgrades

Solana's 65,000 TPS breakthrough in 2020 redefined blockchain performance. Now, three groundbreaking upgrades are set to shatter that ceiling.

One such development is Firedancer, a new validator client built from the ground up in the C programming language. In addition to providing validators with increased client diversity (the Agave and Jito-Solana clients hold a >95% market share), Firedancer has also demonstrated the ability to support throughput of over 1 million transactions per second in a controlled testnet environment. In September 2024, Frankendancer, an early hybrid version of Firedancer, went live on Solana mainnet. However, Firedancer is expected to launch in full sometime in early 2026.

Two more developments that can make a notable impact on Solana’s performance are Votor and Rotor, both of which are part of the coming Alpenglow network upgrade. Alpenglow aims to bring significant improvements to Solana’s consensus performance through architectural optimizations.

Votor is designed to replace Tower BFT as the consensus mechanism responsible for block finalization. Its core innovation is reducing the number of communication rounds validators need to achieve finality. By streamlining this voting process, it targets a reduction in finality latency from the current ~12.8 seconds to 100-150 milliseconds, potentially a 100x improvement.

While Votor makes validator communication more efficient, Rotor’s job is to remove the bottlenecks caused by excessive data transfers between validators. Rotor builds on the current Turbine block propagation mechanism, aiming to improve on it by distributing blocks to validators faster and thereby supporting higher throughput levels.

The mainnet Alpenglow upgrade is expected to take place in Q1 2026, with Votor being implemented immediately upon launch and Rotor being introduced in a future version.

While these protocol-level upgrades push Solana toward million-TPS territory, certain applications demand even more specialized performance. Central limit order books (CLOBs), the backbone of professional trading, require microsecond-level execution and deterministic ordering that challenges even Solana's capabilities. That's where Bullet comes in: a Solana-based rollup (a.k.a. a network extension) purpose-built for CLOB markets, delivering the sub-millisecond latency and guaranteed transaction ordering that institutional DeFi demands.

Bullet

As the first true network extension of Solana, @bulletxyz_ is a sovereign execution environment built on the @sovereign_labs SDK, combining modular architecture with ZK cryptography. The platform's flagship product, BulletX, delivers a fully featured CLOB exchange with perpetual futures, spot trading, and integrated money markets. What sets this apart is the architecture: the matching engine runs natively within Bullet's Rust-based runtime, free from SVM constraints like compute unit limits or transaction-per-block restrictions.

Fig 2. Bullet Tech Stack

The technical stack leverages cutting-edge infrastructure across four layers. Execution happens on Bullet Core through a centralized sequencer that processes transactions in a streaming model, executing them immediately upon arrival rather than batching into blocks. This delivers soft confirmations in 1 millisecond, a 400x improvement over Solana's current 400ms block times. Settlement anchors to Solana L1 for security and finality, while @celestia handles data availability at up to 27 MB/s throughput. ZK proving through @SuccinctLabs' SP1 zkVM ensures verifiable execution guarantees without custody trade-offs.

This architecture enables critical innovations for professional trading. Application-specific sequencing prioritizes maker orders and cancellations over taker orders, protecting liquidity providers from adverse selection. The platform supports sophisticated order types including Post-Only variants and conditional orders, while a multi-tiered risk engine manages positions through initial margin, maintenance margin, community liquidation vaults, and an insurance fund.
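As a minimal sketch of what application-specific sequencing might look like, the snippet below orders a batch using a simple priority scheme (cancellations first, then maker/post-only orders, then takers), breaking ties by arrival time. This is an assumption-laden illustration of the concept, not Bullet's actual sequencer logic.

```python
# Lower number = sequenced earlier within a batch (hypothetical priority scheme).
PRIORITY = {"cancel": 0, "post_only": 1, "maker": 1, "taker": 2}

def sequence_batch(pending: list[dict]) -> list[dict]:
    """Order a batch so cancels and maker orders land before taker orders.

    Ties are broken by arrival time, preserving fairness within each class.
    """
    return sorted(pending, key=lambda m: (PRIORITY[m["kind"]], m["arrival_ns"]))

batch = [
    {"id": 1, "kind": "taker",     "arrival_ns": 100},
    {"id": 2, "kind": "cancel",    "arrival_ns": 150},
    {"id": 3, "kind": "post_only", "arrival_ns": 120},
]
print([m["id"] for m in sequence_batch(batch)])  # -> [2, 3, 1]
```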

Currently live on testnet, Bullet will initially launch as an optimistic rollup with ZK fraud proofs before transitioning to a full validity rollup architecture, significantly reducing settlement times. The roadmap includes BulletSVM, which will bring SVM compatibility to an execution environment built around Bullet Core, allowing developers to deploy Solana programs directly to Bullet's execution layer and transforming it from a single exchange into a composable financial ecosystem.

By remaining deeply integrated with Solana rather than creating another isolated chain, Bullet can tap directly into the network's $12B TVL and massive user base while providing the dedicated blockspace and sub-millisecond execution that institutional trading demands. This positions Bullet not just as Solana's answer to Hyperliquid, but as the blueprint for how general-purpose blockchains can support specialized, high-performance applications through modular extensions. Dive deeper in our recent article here.

Next up, we’ll cover two prominent CLOB DEXs within Solana’s powerful DeFi sector: Drift V2 and Pacifica.

Drift V2

Initially launching in 2021, @DriftProtocol has had a longstanding presence in the Solana ecosystem. The platform has gone through several changes over its lifespan, but Drift V2 has been its most successful iteration by far. Currently, Drift V2 supports margin trading for spot and perps markets, while also allowing users to earn yield via overcollateralized lending.

From an infrastructure standpoint, it uses a hybrid CLOB model which directs onchain orders into an offchain orderbook for matching, while keeping settlement fully onchain.

Drift V2 uses a specialized offchain matching engine which sources liquidity from three mechanisms:

The first is JIT (Just-In-Time) liquidity, which features a dutch auction marketplace where market makers can bid to fill orders at or better than the market price in ~5-second auctions. This allows for more efficient pricing than an AMM; rather than rely on virtual liquidity (i.e. the constant product function), you have the benefit of active and dynamic market makers.
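To illustrate the Dutch-auction mechanic, here is a simplified sketch in which the maximum acceptable fill price for a taker buy order moves linearly from a start price toward an end price over five seconds, with makers free to fill at or better than the current price. Drift's actual auction parameters and price schedule differ; this is only a conceptual sketch.

```python
def auction_price(start: float, end: float, elapsed_s: float, duration_s: float = 5.0) -> float:
    """Linearly interpolate the auction price from `start` toward `end`.

    For a taker buy order, `start` is the most favorable price and `end` is the
    worst acceptable one; makers may fill at or better than the current price.
    """
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)
    return start + (end - start) * t

# A taker buy order auctioned from $99.90 up to $100.10 over 5 seconds.
for elapsed in (0.0, 2.5, 5.0):
    print(f"t={elapsed:.1f}s  max fill price = {auction_price(99.90, 100.10, elapsed):.2f}")
```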

Drift V2’s offchain matching engine is also an ideal solution for market makers. Compared to fully onchain CLOBs, offchain solutions typically involve lower setup and infrastructure costs, while also reducing potential for latency arbitrage.

The second liquidity mechanism is constant liquidity, where Drift’s Virtual AMM (VAMM) serves as a backstop liquidity source when:

  • Market orders aren’t filled in auction

  • Any given asset’s price reaches resting order levels

While CLOB infrastructure poses clear advantages over AMMs in some ways, the main benefit of Drift’s VAMM is that it guarantees liquidity. Specifically, it makes it possible for Drift to support new markets even without bootstrapping market maker liquidity first. The VAMM can also participate in Drift V2’s JIT auctions (e.g. perform operations such as reducing inventory).

Drift V2’s third source of liquidity is its limit orderbook (DLOB). Within the DLOB, a network of Keeper bots handles limit orders by sorting them by size and age in an offchain orderbook. Then, when limit orders are triggered, Keepers submit them against the AMM to be filled. Additionally, all Keepers maintain independent orderbooks to maximize decentralization.

Drift incentivizes Keepers to be fast, give the best prices, and fill orders in an efficient manner (oldest and largest first) by paying them a portion of the taker fee for each trade they execute.

Drift V2’s DLOB was designed to effectively balance decentralization and computational efficiency.

Its efforts to stay decentralized include its network of hybrid offchain Keepers, which anyone can run, as well as its onchain storage and settlement of limit orders.

The CLOB also maintains a high capacity for computational efficiency by leaving the computationally-heavy order-filling logic offchain.

With over $110B in cumulative trading volume since mid-2023, Drift is currently Solana’s top CLOB DEX for trading perpetual derivatives. However, as the industry places more emphasis on CLOB performance, competition will only get stronger. One promising up-and-coming CLOB in the Solana ecosystem is Pacifica.

Pacifica

Like Drift V2, @pacifica_fi uses a hybrid model, featuring an offchain matching engine and orderbook, with onchain settlement and self-custody.

When orders are placed on Pacifica, offchain relayers aggregate them into a shared orderbook. Since this process occurs offchain, it avoids incurring a fee (~$0.00025) for every order placement/cancellation.

All orders received by Pacifica’s matching engine are sorted first by price (best bid/ask first), and then time (FIFO). After sorting the orders, the engine scans for matches. Typically, trading on Pacifica is facilitated by JIT liquidity (especially for large orders), but its infrastructure can also source external liquidity (dynamic liquidity providers and/or backstop AMMs) to maximize price efficiency.
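For readers unfamiliar with price-time priority, the toy matching engine below illustrates the idea: the best-priced resting orders are filled first, and orders at the same price are filled in FIFO order. This is purely illustrative; Pacifica's production engine is offchain and far more involved.

```python
from collections import deque

class Book:
    """Toy limit-order book with price-time (FIFO) priority matching."""

    def __init__(self):
        self.bids = {}  # price -> deque of resting order sizes (FIFO within a level)
        self.asks = {}

    def add(self, side: str, price: float, size: float):
        book = self.bids if side == "buy" else self.asks
        book.setdefault(price, deque()).append(size)

    def match(self, side: str, price: float, size: float) -> list[tuple]:
        """Match an incoming order against the opposite side, best price first."""
        book = self.asks if side == "buy" else self.bids
        crosses = (lambda p: p <= price) if side == "buy" else (lambda p: p >= price)
        fills = []
        for level in sorted(book, reverse=(side == "sell")):
            if size <= 0 or not crosses(level):
                break
            queue = book[level]
            while queue and size > 0:
                take = min(size, queue[0])
                fills.append((level, take))
                size -= take
                queue[0] -= take
                if queue[0] == 0:
                    queue.popleft()
            if not queue:
                del book[level]
        if size > 0:
            self.add(side, price, size)  # rest any unfilled remainder on the book
        return fills

book = Book()
book.add("sell", 100.0, 2)  # resting ask, first in time
book.add("sell", 100.0, 3)  # same price, later in time -> filled second (FIFO)
book.add("sell", 101.0, 5)
print(book.match("buy", 100.5, 4))  # -> [(100.0, 2), (100.0, 2)]
```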

By keeping its matching engine offchain, Pacifica can scale significantly to support HFT activity. Once trades have been matched, they’re batched and submitted as a single transaction, which is verified by the Solana runtime: matches are checked against oracle prices, and program-derived addresses are used for deterministic state updates. Any matches that fail verification are rolled back without execution. Lastly, the transaction is settled onchain.

Pacifica is seeing early success. Despite being in a closed beta since its mainnet launch on June 10th, it’s amassed over $5B in total volume and attracted over 6,000 active users.

To start earning points on Pacifica today, including a 10% bonus after October 15th, click here.

Fogo

While Solana pushes the boundaries of a general-purpose L1, a new wave of specialized blockchains is emerging from its ecosystem. These chains leverage the power of the Solana Virtual Machine (SVM) but make opinionated design choices to optimize for a single use case. A prime example of this trend is Fogo, a high-performance L1 blockchain designed from the ground up to be a trading-first environment.

Fig 3. Fogo Summary

Founded by veterans from Citadel, Jump Trading, and JPMorgan, Fogo’s mission is to create an onchain trading experience indistinguishable from centralized exchanges. To achieve this, the project adopts a pragmatic philosophy it calls “minimum viable decentralization, maximum viable performance.” This approach purposefully trades purist decentralization for the institutional-grade latency and throughput required by demanding market structures like CLOBs. Fogo’s architecture is built on three core pillars:

  1. Unified Firedancer Implementation. Fogo enshrines Firedancer as its canonical validator client. Instead of balancing multiple clients and being constrained by the slowest one, Fogo creates strong economic incentives that drive convergence towards a single, hyper-optimized implementation. This ensures the network consistently operates at the peak performance levels made possible by Firedancer’s C-based architecture, maximizing throughput and stability.

  2. Multi-Local Consensus. Fogo introduces a novel consensus model inspired by global financial markets. Validators are physically clustered in data centers within major financial hubs in North America, Europe, and Asia. This colocation minimizes physical latency, enabling sub-second finality within the active zone. Using a "follow the sun" model, the active consensus zone rotates geographically to align with peak trading hours, ensuring optimal performance around the clock. For resilience, the network maintains a global consensus mode as a failover, protecting against zone-specific disruptions.

  3. Curated Validator Set. Unlike permissionless networks, Fogo utilizes a curated and permissioned validator set. This allows the network to enforce a high performance bar, pruning under-provisioned nodes and ensuring all participants meet stringent operational standards. This controlled environment guarantees greater determinism and stability while enabling the implementation of features like built-in MEV prevention, which is critical for protecting traders and market makers.

These design decisions are particularly relevant for onchain CLOBs. Order book markets are uniquely sensitive to latency, sequencing, and data consistency. Fogo’s colocation model directly minimizes physical latency for order placements and cancellations. Its Firedancer-first approach guarantees the high throughput needed to handle institutional volume, and its curated validator design provides the stability and MEV resistance that market makers require to quote tight spreads. In short, Fogo embodies the thesis that the next wave of DeFi will be powered not by general-purpose blockchains, but by opinionated, performance-maximized networks explicitly built for trading.

Ambient Finance: A New Market Structure on Fogo

Fogo’s specialized architecture is already attracting protocols with novel designs, most notably Ambient Finance, a perpetual futures DEX building on the network. Rather than implementing a traditional continuous CLOB, Ambient introduces a dual-sided frequent batch auction (dsFBA) model to create a fairer and more efficient trading environment.

Instead of matching orders continuously as they arrive, Ambient’s system accumulates all orders throughout a block. At the end of the block, these orders are cleared simultaneously at a single price determined by an oracle feed like Pyth. This model fundamentally shifts the basis of competition from raw speed to price, directly mitigating two of the biggest challenges for onchain order books: latency arbitrage and MEV.
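A simplified sketch of how a batch auction of this kind might clear: orders accumulate during the block, and at the boundary every order willing to trade at the oracle price executes at that single price, with the smaller side fully filled and the larger side allocated pro-rata. The pro-rata treatment is our assumption for illustration; Ambient's actual dsFBA mechanics, including how imbalances are handled, may differ.

```python
def clear_batch(orders: list[dict], oracle_price: float) -> list[dict]:
    """Clear one batch: every crossing order trades at the single oracle price,
    so speed within the batch confers no advantage."""
    buys  = [o for o in orders if o["side"] == "buy"  and o["limit"] >= oracle_price]
    sells = [o for o in orders if o["side"] == "sell" and o["limit"] <= oracle_price]
    buy_qty, sell_qty = sum(o["qty"] for o in buys), sum(o["qty"] for o in sells)
    matched = min(buy_qty, sell_qty)  # smaller side fills fully, larger side pro-rata
    fills = []
    for o in buys + sells:
        side_qty = buy_qty if o["side"] == "buy" else sell_qty
        share = matched * o["qty"] / side_qty
        fills.append({"id": o["id"], "price": oracle_price, "qty": round(share, 6)})
    return fills

batch = [
    {"id": "a", "side": "buy",  "limit": 101.0, "qty": 10},
    {"id": "b", "side": "buy",  "limit":  99.0, "qty": 10},  # below oracle price, no fill
    {"id": "c", "side": "sell", "limit":  99.5, "qty":  6},
]
print(clear_batch(batch, oracle_price=100.0))
```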

By neutralizing the speed advantage, the dsFBA model offers several benefits. Harmful sniping strategies are neutralized, traders can receive price improvements if the market moves in their favor during the batch, and market makers are protected from having their stale orders picked off. This creates a hybrid execution environment that combines the fairness of an AMM with the capital efficiency of an order book.

Ambient's decision to build on Fogo is strategic, as running computationally intensive batch auctions every block would be impractical on slower, more generalized chains due to cost and congestion. Fogo’s high-throughput, low-latency architecture provides the ideal foundation, allowing the dsFBA mechanism to operate natively within SVM smart contracts and deliver a near CEX-level experience while retaining the transparency of DeFi.

The Origin of Aptos and Sui: The “Move Chains”

@Aptos and @SuiNetwork were both born out of Facebook’s Libra project (later renamed “Diem”). While Diem was ultimately shut down in 2022, its goal was to create a global blockchain-based system optimized for financial applications, specifically stablecoin payments.

An evolved form of Diem’s infrastructure lives on through Aptos and Sui. For example, both blockchains use Diem’s specialized Move language, as well as the MoveVM – one of the first challengers to Ethereum’s EVM. The MoveVM and the Move language are optimized to outperform the EVM in two primary categories: security and scalability.

Fig 4. Move vs Solidity

MoveVM

Within the Move language are powerful built-in security features, including its resource-oriented framework, static dispatch, and the Move Prover.

Move’s resource-oriented model prevents digital assets from being duplicated or lost. Unlike Solidity and other popular smart contract languages, Move naturally safeguards against reentrancy attacks, which are responsible for $325 million in stolen funds so far in 2025 alone.

Additionally, it enhances security by making multisig functionality safer and easier to implement correctly, which is particularly useful for enterprises and DAOs which require strict limitations on who can access and transfer funds. Double-spending and unauthorized access risks are also eliminated due to Move's linear type system, which ensures resources can only be moved, not copied or duplicated. This removes the need for callback functions, reducing the smart contracts’ complexity and thereby making them more readable, maintainable, and secure.

Move's static dispatch enables compile-time verification rather than runtime checks, while the Move Prover provides formal verification that mathematically proves contract properties before deployment. Together, these tools dramatically reduce development time and debugging costs by catching errors early and ensuring smart contracts are thoroughly audited before going live, eliminating the risk of deploying faulty or vulnerable logic.

Ultimately, the benefits of Move can be summed up in the following points:

  • Linear type system prevents resource duplication, loss, and reentrancy attacks

  • Compile-time verification catches errors before deployment, reducing debugging costs

  • Move Prover enables formal verification of smart contract properties

  • Resource-oriented design simplifies complex asset operations without callback functions

  • Parallel execution enabled by default through resource independence

In addition to providing critical security features, the MoveVM is built for high performance: it enables parallel execution by default, making it possible to create competitive CLOBs. Specifically, parallel execution allows for significant scalability without the need for sharding or rollups, eliminating the throughput bottleneck faced by many L1s.

While Aptos and Sui both benefit from Move's security and performance advantages, they implement fundamentally different approaches. Aptos maintains Move's original resource-oriented vision while pushing the boundaries of modular consensus and execution, while Sui reimagines the model entirely through an object-centric architecture.

Fig 5. Custom Move Implementations: Aptos vs Sui

Aptos

A defining property of Aptos is its modular infrastructure, enabling developers to create and upgrade various modules to support virtually any use case.

While this level of customization is important, Aptos also uses account-based infrastructure, which prioritizes the security and speed needed for institution-targeted financial applications (e.g. stablecoins, payments, CLOBs).

Fig 6. Aptos Summary

AptosBFT Consensus

At the core of Aptos is the AptosBFT consensus mechanism, which optimizes communication and synchronization between validators. To maximize efficiency, operations are handled via pipelining, which enables multiple consensus processes to take place concurrently, minimizing overall network latency.

A major optimization for AptosBFT was implemented earlier this year via the Zaptos upgrade, which reduced end-to-end latency on Aptos by over 40%, to approximately 900ms. Zaptos achieves this milestone by further optimizing AptosBFT’s parallelization process.

Specifically, Zaptos enables optimistic execution and optimistic commit. Optimistic execution enables validators to add blocks to the pipeline without waiting for their transactions to be ordered, while optimistic commit allows blocks to be committed to storage without waiting for state to be certified. Ultimately, these optimizations bring further reductions in latency as well as enhanced support for higher throughput, bringing Aptos another step closer to “real-time” performance.

Block-STM Execution

While AptosBFT optimizes for low latency, its Block-STM parallel execution engine is built to maximize throughput.

A core piece of Block-STM’s infrastructure is its collaborative scheduler, which coordinates execution and validation tasks in parallel. For example, the scheduler oversees key functions such as concurrent transactions and dynamic conflict resolution.

  • Concurrent transactions: Transactions in a block are executed optimistically in parallel across multiple threads (see the sketch below)

  • Dynamic conflict resolution: Conflicts are detected post-execution and resolved by re-executing affected transactions sequentially, ensuring deterministic outcomes without user input and allowing Aptos to efficiently handle complex and interdependent contracts
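The sketch below illustrates this optimistic-execute-then-validate pattern in heavily simplified form: every transaction runs in parallel against the pre-block state, and any transaction whose reads turn out to be stale is re-executed in block order. Real Block-STM uses a multi-version data structure and iterative re-validation; this only captures the core idea, and all names here are ours.

```python
from concurrent.futures import ThreadPoolExecutor

def make_tx(reads, writes, delta):
    """A toy transaction: each write key is set to (sum of the values it reads) + delta."""
    def run(state):
        read_vals = {k: state.get(k, 0) for k in reads}
        return set(reads), {k: sum(read_vals.values()) + delta for k in writes}
    return run

def execute_block(txs, state):
    # Phase 1: run every tx optimistically, in parallel, against the pre-block state.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda tx: tx(dict(state)), txs))
    # Phase 2: validate in block order; re-execute any tx that read data a prior tx wrote.
    written = set()
    for tx, (reads, writes) in zip(txs, results):
        if reads & written:            # conflict: this tx's optimistic reads were stale
            reads, writes = tx(state)  # re-execute against the up-to-date state
        state.update(writes)
        written |= set(writes)
    return state

txs = [
    make_tx(reads=["A"], writes=["A"], delta=5),  # A = A + 5
    make_tx(reads=["B"], writes=["B"], delta=1),  # independent: safe to run in parallel
    make_tx(reads=["A"], writes=["C"], delta=1),  # C = A + 1, conflicts with the first tx
]
print(execute_block(txs, {"A": 0, "B": 0, "C": 0}))  # -> {'A': 5, 'B': 1, 'C': 6}
```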

Due to its impressive performance, including executing an industry-wide record 326 million transactions on Aptos in a single day, Block-STM’s execution engine has been adopted by several other high-performance blockchains and rollups, including @SeiNetwork, @0xPolygon, and @Starknet. To put the magnitude of this achievement in perspective, 326 million transactions is roughly what Solana processes in a week; this is especially impressive considering that Solana sometimes processes half of all public blockchain transactions.

Coming Upgrades

While Aptos is already one of the most impressive blockchains in terms of performance, ongoing updates have demonstrated remarkable results in testnet environments. Two specific upgrades that stand out from the rest are Raptr and Shardines.

Raptr aims to reduce network latency by a further 20% (~150 milliseconds) by making improvements to Aptos’ consensus infrastructure. Currently, Raptr is in the process of a gradual rollout; the initial implementation (Baby Raptr) was launched in June, but there’s no public timetable for the next stages to go live. The primary upgrade Raptr has implemented thus far reduces the number of consensus rounds from 3 to 2, streamlining validator communication. With less communication overhead, blocks reach finality sooner.

On the execution side, Shardines aims to further improve Block-STM performance and bring throughput of 1M+ transactions per second to Aptos. One important aspect of this upgrade is that it decouples execution from consensus, enabling independent scaling for each. Beyond that, Shardines brings a completely new sharding architecture to blockchain execution engines by distributing execution across multiple shards within a single validator cluster. Unlike other sharding mechanisms, Shardines fragments execution activity without fragmenting state, ensuring validators remain completely aligned.

Between its current infrastructure and upcoming improvements, the performance of Aptos is clearly impressive. Additionally, its account-based infrastructure makes it a prime location for the next generation of high-performance DeFi apps, as well as institutions due to its extensive security benefits. Decibel demonstrates how Aptos's account-based model and sub-second finality create ideal conditions for fully onchain orderbook matching, a design choice that prioritizes security and simplicity over raw speed.

Decibel

Despite having some of the most performant infrastructure in web3, Aptos has yet to see breakthrough success from a CLOB-based app. @DecibelTrade's goal is to change this.

Decibel isn’t just a DEX; it aims to serve as Aptos’ onchain trading engine, unifying spot and perps markets while providing margin, vaults, and more via composable integrations. Due to Aptos’ flexible infrastructure, Decibel brings several interesting features to the CLOB space.

For example, it integrates Aptos X-chain accounts, which allow users to fund an account from external sources and ecosystems, such as Ethereum, Solana, or even a centralized exchange – all without having to switch wallets.

Decibel also reaps the benefits of having its own custom-built Trading VM, which:

  • Adds additional risk checks and routing logic

  • Makes latency-sensitive strategies (e.g. arbitrage) feasible

  • Ensures that matching, balance transfers, and event emission are all finalized inside a single transaction

Another way Decibel stands out from many currently-operating CLOBs is that ordering, matching, and settlement are all fully handled onchain.

While Drift, Pacifica, and BulletX all embrace the benefits of offchain matching, Decibel provides an alternative point of view. Specifically, its onchain matching infrastructure ensures that each match is protected by the same safety guarantees as the Aptos L1. Additionally, its matching engine can directly leverage Aptos’ impressive tech stack; with upcoming upgrades such as Raptr and Block-STM v2, Decibel will soon be able to achieve sub-20ms block times and over 1 million orders/second.

Sui

While Aptos’ infrastructure adheres more closely to Diem’s original MoveVM, the Sui network uses a fork called “Sui Move.”

As opposed to Aptos’s resource-oriented model, Sui uses an object storage model, which treats all data as independent objects. This enables more flexible storage of different types of data, including media such as images, audio, and video, making it better suited for games and other consumer-facing applications. Ultimately, Sui's model enables deterministic parallel execution without retry mechanisms, though it requires more complex state management.

Fig 7. Sui Summary

Mysticeti Consensus

Sui's consensus mechanism, Mysticeti, is unique in that it utilizes a DAG (Directed Acyclic Graph) structure for consensus ordering, while still producing a blockchain of committed checkpoints. However, the end goal is the same: Mysticeti’s primary responsibilities are to organize transactions into blocks and then verify them.

Specifically, Mysticeti reduces latency by enabling validators to perform two unique actions: sign/share blocks without certification, and propose blocks in parallel.

Its ability to sign and share blocks without certification marks a major difference from traditional BFT consensus mechanisms. While traditional BFT protocols require validators to agree on a block before sharing it, Mysticeti skips that step and allows validators to immediately broadcast their independently-signed blocks to the network.

The latter feature is made possible by object-level parallelism. Since data on Sui is treated as independent objects, parallel execution of transactions can be achieved more efficiently, as transactions can process independently if objects don’t overlap. Unlike Aptos’ optimistic parallelization, this creates more predictable outcomes and avoids potential latency disruptions caused by retroactive verification (which Aptos is vulnerable to).

Mysticeti is also designed to enable optimistic finality, as opposed to consensus finality, giving it the unique property of being able to finalize “simple” transactions (e.g. token transfers) locally (within any given validator) before network-wide consensus finality is reached. However, “complex” transactions, such as liquidity pool calculations, must be validated by the entire network.

Ultimately, Mysticeti combines benefits from both DAGs and blockchains to maximize efficiency, achieving ~200ms finality for simple transactions via the fast path and ~400ms for complex transactions requiring full consensus.

Parallel Execution

A central component of Sui’s execution infrastructure is Programmable Transaction Blocks (PTBs), which complement Sui’s object-based functionality by allowing for atomic multi-operation chains and narrow-scope object access. Essentially, this increases developers’ freedom to customize transaction types, optimizing for asset-heavy use cases and independent transactions (e.g. NFT mints, gaming transactions).

Sui’s unique architecture also gives it natural advantages to achieve strong execution performance.

First, Sui’s object-oriented infrastructure inherently localizes state to individual objects, which improves execution efficiency by making the state updating process less complex.

Additionally, Mysticeti’s DAG structure enables Sui to execute transactions in parallel by naturally representing transactions and their dependencies. While simple transactions can be executed and finalized at the local level, complex transactions are executed optimistically ahead of full consensus, a process similar to Aptos’ approach.

These inherent efficiencies in Sui’s unique infrastructure make it one of the best-performing blockchains in the industry, achieving throughput speeds of up to 297,000 transactions per second in a testnet environment.

Coming Upgrades

2025 has been a year of transformation for Sui. Its storage infrastructure went through a major upgrade in March with the launch of the Walrus decentralized storage protocol, followed by a large-scale consensus upgrade in May with Mysticeti V2.

However, one ongoing development in the Sui ecosystem that stands out is the release of SuiPlay0X1: Sui’s blockchain-native handheld gaming console. While not related to CLOBs, SuiPlay harnesses the unique features enabled by Sui’s object-oriented model to create a distinctive gaming product. It also highlights Sui’s massive performance capacity: the network can support its own dedicated gaming hardware while also hosting many other types of applications (DeFi, DePIN, etc.). The first batch of SuiPlay0X1 consoles was officially shipped in August, potentially marking the start of the first successful large-scale release of gaming hardware in web3.

Sui's object model enables a unique approach to CLOB infrastructure: DeepBook provides shared orderbook infrastructure that multiple frontends can access atomically, demonstrating the power of Sui's architecture for DeFi composability. Let’s explore this in more depth.

DeepBook

Similar to Decibel, the @DeepBookonSui CLOB isn’t a front-end app. Instead, it’s a trading infrastructure built to power DEXs on Sui, essentially functioning as a liquidity layer.

Another trait that DeepBook shares with Decibel is operating fully onchain.

Orders on DeepBook are represented as Sui objects containing information such as price, quantity, and side (bid/ask). These objects are atomically added to the overall order pool in a single transaction, where Move’s resource-oriented safety is utilized to prevent duplication or loss.

DeepBook’s onchain matching engine automatically matches orders via price-time priority. Specifically, the best bid/ask orders are filled first, and transactions with the same price are ordered by FIFO. Additionally, the matching process takes place onchain during execution.

After matching takes place, transactions are settled atomically via PTBs, which transfer assets (e.g. from seller’s wallet to buyer’s wallet) without intermediaries.

DeepBook V3, which is the CLOB’s most recent implementation, has handled ~$7.3B in trading volume since its launch in October 2024. While this amount isn’t comparable to some of the other major DEXs in the space, it’s still a significant part of the Sui ecosystem; in fact, DeepBook currently powers ~80% of Sui’s DEX volume.

Bluefin

When it comes to perpetual futures trading on Sui, @bluefinapp is a dominant force. In fact, it commands a vast majority (over 90%) of the network’s perps trading volume.

The Bluefin platform uses a hybrid architecture, offering both spot and perps trading. For spot, it uses a concentrated liquidity market maker (CLMM), powered by liquidity providers (LPs) who deposit liquidity into trading pair vaults at specific price ranges to earn fees. For perps, Bluefin uses its own offchain CLOB to maximize performance. Additionally, the platform offers a DEX aggregation service which routes trades in the most efficient manner within the Sui ecosystem.

As for perps markets, Bluefin recently launched Bluefin Pro, which replaces the platform’s former offchain perps CLOB with fully rebuilt, onchain infrastructure. Specifically, this leverages Sui’s parallel execution engine and recently-upgraded Mysticeti consensus to deliver sub-millisecond order matching and ~390 millisecond finality. In addition to being optimized for institutional-grade order size and frequency, Bluefin Pro also provides additional privacy via Sui’s TEE-based Nautilus infrastructure, ensuring that order matching and other trade logic operations are executed in a shielded environment.

Sei

Launched in August 2023, @SeiNetwork is an EVM-compatible L1 blockchain which is specifically optimized to support high-frequency trading activity.

Many builders in the industry have chosen alternatives to the EVM (e.g. Solana’s SVM, Aptos and Sui’s MoveVM) due to its perceived lack of performance. Developers who have opted for these “AltVMs” in recent years commonly cite EVM shortcomings such as:

  • Limited expressivity: EVM-based smart contracts have a very limited ability to express complex logic, which not only curbs potential innovation but also results in vulnerable code

  • Sequential processing: the EVM must process one transaction at a time, which greatly limits scalability

  • Language constraints: the EVM was purpose-built to run Solidity-based smart contracts, which poses a barrier to entry as "web2" developers must learn a new language to create EVM-compatible apps

However, Sei has opted to keep its EVM-compatible status by creating its own optimized version that’s built to support high-frequency trading activity. In fact, when Sei’s V2 upgrade launched in May 2024, it made Sei the first EVM-compatible L1 blockchain that enabled parallel transaction processing.

Sei’s V2 architecture is made up of 3 main components: Twin Turbo Consensus, a Parallelization Engine, and SeiDB.

Fig 8. Sei Summary

Twin Turbo Consensus

Sei’s Twin Turbo consensus mechanism is an enhanced version of Tendermint BFT, commonly used within the @cosmos ecosystem. However, Sei has engineered Tendermint to further minimize latency, achieving sub-500ms finality alongside throughput of 12,500 transactions per second. This specifically enables applications that require high-throughput activity, such as trading.

As the name Twin Turbo suggests, Sei’s consensus consists of two main components: intelligent block propagation and optimistic block processing.

Intelligent block propagation prioritizes transaction ordering over immediate state verification, allowing validators to locally reconstruct and broadcast full blocks without having to wait for all validators to receive all necessary information.

Optimistic block processing leverages Sei’s custom EVM client to execute transactions in parallel, working in tandem with the network’s parallelization engine.

Parallelization Engine

In order to achieve parallel execution, Sei uses a modified version of the EVM client, which increases throughput capacity via optimistic parallelization.

Optimistic parallelization allows transactions to execute in parallel before state resources are locked. In other words, the network predicts the dependencies of transactions, and can execute multiple transactions if they’re expected not to conflict with one another. This enables higher throughput as the network doesn’t have to wait for full assurance that transactions won’t conflict. Transactions which are later detected as invalid or conflicting are rolled back and resolved, similar to Aptos’ optimistic parallelization mechanism.

SeiDB

Sei’s storage layer, SeiDB, leverages Sei’s custom EVM implementation to maintain scalability (high throughput, low latency, and low transaction fees) while minimizing state bloat and read/write overhead challenges. Specifically, the key features of SeiDB are efficient state storage and parallel read/write access.

SeiDB makes the storage process more efficient by recording historical data as raw key-value pairs, vastly reducing overhead and disk usage. It also aggressively prunes redundant or outdated state data, reducing state bloat and storage requirements for validators.
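A toy sketch of the pattern described above: store state as plain key-value pairs, keep only a bounded number of recent versions per key, and prune the rest. This is illustrative only and does not reflect SeiDB's actual data layout or pruning policy.

```python
class PrunedKV:
    """Toy versioned key-value store that keeps only the most recent versions."""

    def __init__(self, keep_versions: int = 2):
        self.keep = keep_versions
        self.history = {}  # key -> list of (block_height, value), oldest first

    def write(self, key: str, value, height: int):
        versions = self.history.setdefault(key, [])
        versions.append((height, value))
        del versions[:-self.keep]  # prune: drop everything but the newest versions

    def read(self, key: str):
        versions = self.history.get(key)
        return versions[-1][1] if versions else None

db = PrunedKV(keep_versions=2)
for height in range(1, 6):
    db.write("balance/alice", 100 + height, height)
print(db.read("balance/alice"), db.history["balance/alice"])  # latest value + kept versions
```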

Additionally, SeiDB uses the network’s optimistic parallelization engine to support parallel read/write access. This effectively parallelizes the data retrieval process by allowing rapid detection and resolution of state conflicts post-execution.

While Sei’s current architecture is impressive on its own, the coming Sei Giga upgrade brings several improvements to all 3 components.

Coming Upgrades

While Sei's current architecture already delivers impressive performance, the upcoming Sei Giga upgrade represents a complete reimagining of EVM capabilities. Announced in December 2024, Giga targets 5 gigagas per second, a 50x improvement over current EVM chains, with sub-400ms finality by Q1 2026.

The centerpiece is Autobahn BFT, a revolutionary consensus protocol that fundamentally decouples transaction ordering from state computation. Unlike traditional BFT systems where validators must execute transactions before voting, Autobahn allows consensus on transaction ordering first, with deterministic execution happening asynchronously. This architectural shift enables potential block production speed improvements of up to 70x.

Autobahn also introduces a multi-proposer architecture where validators continuously disseminate data proposals in parallel "lanes" rather than waiting for a single leader. The consensus layer periodically commits a "tip cut" (a compact snapshot aggregating the latest proposals from every lane) allowing multiple blocks' worth of data to be ordered in a single consensus instance. Combined with Proofs of Availability (PoA) that certify data accessibility without requiring immediate downloads, and a reduction in voting rounds from three to just 1.5, this dramatically reduces latency while maintaining security.

On the execution front, Sei Giga rebuilds the EVM from scratch with automatic parallelization. The new engine dynamically analyzes transaction dependencies in real-time, allowing non-conflicting transactions within a block to execute in parallel while maintaining sequential block execution. This intelligent parallelization, combined with ahead-of-time compilation and custom binary serialization, pushes theoretical throughput to 5 gigagas per second.

The storage layer receives equally significant upgrades. Enhanced SeiDB optimizes for high-volume activity through asynchronous state root generation and disk writes, utilizing io_uring for optimized I/O operations. Data is intelligently tiered: recent data remains on fast SSDs while older data migrates to slower, cost-effective storage, keeping all information accessible while minimizing overhead and maintaining exceptionally low fees (~$0.0001 per transaction).

These improvements position Sei to finally bridge the gap between blockchain's promise and web2's proven scalability, enabling applications that require 100,000+ complex transactions per second, the level needed for truly global financial infrastructure.

Monaco

Building on Sei's performance foundation is @MonacoOnSei, the industry's first microsecond-grade trading layer targeting 100 microsecond (0.1 millisecond) execution with final settlement in under 400 milliseconds. This represents a return to Sei's original vision from its V1 whitepaper of becoming a blockchain purpose-built for trading, now realized through specialized infrastructure rather than protocol-level features.

Monaco employs a hybrid architecture that makes strategic trade-offs on the decentralization spectrum. The offchain Rust-based matching engine handles computationally intensive order matching with backend benchmarks showing 5-25 microsecond cancel/replace execution, comparable to Nasdaq's ~50 microsecond standard. This translates to sub-millisecond median execution (p50) with 99% of trades executing within 10-20ms even during peak volatility, a 1000x improvement over current onchain CLOBs like Hyperliquid's 200ms median latency.

Crucially, this performance doesn't compromise custody or settlement security. After offchain matching, the Merkle root of the new state is committed and verified onchain by Sei's validator set, with final settlement on Sei's L1 in under 400ms, a 200,000x improvement over traditional finance's T+1 standard. Users maintain self-custody throughout, and the elimination of gas fees for non-settlement actions (order placement, cancellation, modification) enables market makers to provide tighter spreads without being penalized for active risk management.
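To illustrate the settlement step, the sketch below computes a Merkle root over a set of serialized account balances, the kind of compact commitment an offchain engine could post for onchain verification. The leaf encoding here is hypothetical, not Monaco's actual state format.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root over pre-serialized leaves (odd levels duplicate the last node)."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Post-match state: serialized (account, balance) pairs become Merkle leaves.
state = [b"alice:1000", b"bob:250", b"carol:4000"]
print(merkle_root(state).hex())
```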

Monaco's most significant innovation may be its solution to DeFi's liquidity fragmentation. Rather than forcing each application to bootstrap its own orderbook, Monaco provides a unified liquidity layer seeded from day one with Tier 1 and Tier 2 market makers. Any application, whether a professional trading UI, RWA platform, or in-game exchange, can immediately access deep, competitive pricing through Monaco's shared infrastructure.

This ecosystem approach is codified through PitPass, Monaco's permissionless revenue-sharing model. Unlike traditional Payment for Order Flow (PFOF) where brokers route orders to the highest bidder often at traders' expense, PitPass programmatically distributes trading fees back to applications that bring order flow. Developers can generate a unique onchain identifier, integrate Monaco's SDK, and immediately begin earning protocol revenue share, no negotiations or business development required. Applications can also implement custom frontend fees, giving them full control over their business model while tapping into shared liquidity.

By operating purely as infrastructure with no competing frontend, Monaco creates a level playing field for an ecosystem of specialized platforms: professional trading UIs for complex strategies, dedicated RWA markets for tokenized stocks and commodities, short-duration options with minute-level settlement, institutional prediction markets, and high-performance trading layers for web3 gaming economies.

This positions Monaco not just as another DEX, but as the foundational trading layer for Sei's vision of a full-stack decentralized Wall Street: diverse applications at the top, Monaco providing microsecond execution in the middle, and Sei delivering sub-second global settlement at the base, finally bringing institutional-grade performance to decentralized finance. If you want to dive deeper, stay tuned for our Monaco article dropping tomorrow!

Monad

@monad Chain is an upcoming L1 blockchain which, like Sei, uses a custom implementation of the EVM to optimize performance for high-activity applications. However, unlike the L1 chains we’ve already discussed, Monad is currently in its testnet phase. While no mainnet launch date has been confirmed, Monad is generally expected to launch on mainnet in either Q4 2025 or Q1 2026.

By reimagining the EVM, Monad has developed novel consensus, execution, and data storage mechanisms while maintaining bytecode-level compatibility with EVM-based chains. The project targets 10,000 transactions per second with sub-second finality, a significant leap from Ethereum's ~15 TPS baseline.

Fig 9. Monad Summary

MonadBFT Consensus

MonadBFT's most prominent feature is its use of speculative execution with single-slot finality, achieving consensus on transaction ordering in one round while execution happens asynchronously. This is made possible by pipelining; transaction ordering and transaction execution are separated, allowing validators to agree on the order of transactions in a block before executing them.

Essentially, this means validators must assume each block is valid before knowing for certain. Similar to Aptos and Sei, this enables parallel execution of non-conflicting transactions. While consensus on ordering is achieved in a single slot, the actual execution and state computation happen asynchronously in the background.

By using pipelining to allow different stages of the block proposal process (proposals, votes, commits) to occur simultaneously, MonadBFT allows the network to continuously produce blocks without waiting for full validation of prior blocks, further reducing latency.

Overall, MonadBFT enables confirmation in less than 1 second, much faster than networks which rely on multiple confirmation rounds. For example, Ethereum produces blocks every 12 seconds, and its multi-slot finality mechanism takes several minutes to fully finalize transactions.

Parallel Execution Engine

Monad’s parallel execution engine is designed to scale Ethereum’s execution environment natively rather than relying on an L2 solution such as rollups. It aims to achieve this by using optimistic execution to execute transactions in parallel while previous transactions are still being processed. If the network detects that a transaction has been incorrectly executed, it is re-executed and the state is updated accordingly.

Additionally, Monad’s execution engine pipelines consensus and execution operations via asynchronous execution. This allows validators to establish consensus on a block without requiring that its transactions are executed first. Once the block is confirmed, its transactions can be executed to produce a consensus state.

MonadDB

Rather than rely on an existing general-purpose storage solution, the Monad team built MonadDB from the ground up to simultaneously optimize the performance of computation and state retrieval, overcoming performance limitations faced by many L1s.

For example, MonadDB natively implements a Patricia Trie data structure that’s inherently designed for blockchain-specific data. This makes processes such as state retrieval more efficient than general-purpose key-value stores such as LevelDB and RocksDB, which are commonly used by Ethereum clients. It also maintains full compatibility with Ethereum’s state structure (the Merkle Patricia Trie), ensuring a seamless onboarding process for Ethereum-based apps.

Another key feature of MonadDB is asynchronous I/O.

In order to reap the performance benefits of parallel execution, state access must be highly efficient as well. So, just as asynchronous execution pipelines consensus and execution operations, asynchronous I/O adds pipelining to MonadDB’s read/write operations.

In other words, asynchronous I/O allows the network to handle read/write operations (e.g. fetching wallet balances, updating contract storage) in a way that doesn’t affect the performance of Monad’s consensus or parallel execution processes. Essentially, this enables another dimension of multi-tasking for Monad’s operations, further optimizing overall performance.

Notably, Monad achieves this performance while maintaining relatively modest hardware requirements. Its full node RAM requirement of 32GB is significantly lower than other high-performance networks (e.g., 256GB for Solana validators, 128GB for Sui), making it more accessible for node operators.

Monad’s mainnet launch, expected in late 2025, is one of the most highly anticipated events of the year. Its popularity is evident in its testnet stats, which include over 2.7 billion transactions from 311 million unique addresses. Despite being pre-mainnet, Monad has already attracted an impressive ecosystem of builders. Two CLOBs leveraging Monad's high-performance architecture stand out: Kuru and Perpl, both demonstrating how Monad's infrastructure enables fully onchain orderbook trading without offchain compromises. Let’s dive in.

Kuru

As Monad’s first fully onchain CLOB, @KuruExchange leverages Monad’s powerful infrastructure to achieve rapid throughput and sub-second settlement without having to rely on any offchain components. Currently, Kuru offers trading for spot assets and perpetual futures, while their Kuru Flow aggregator aims to find the most efficient price opportunities across the Monad ecosystem.

Kuru also brings innovative features to their platform, such as flip orders and hybrid liquidity infrastructure.

By harnessing the programmable nature of onchain products, Kuru enables a special type of limit order called flip orders. Flip orders enable traders to place continuous orders which mimic the design of providing concentrated liquidity. For example, if a trader places a buy order at $98 and sets a flip price of $100, once the $98 buy order is filled, there’s an automatic limit sell order placed at $100. Then, once the $100 sell order is filled, the $98 buy order is back in effect.
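The flip-order logic can be sketched as a simple state machine: whenever one leg fills, the opposite leg is placed at the flip price. This is an illustration of the concept described above (using the $98/$100 example), not Kuru's implementation.

```python
def on_fill(order: dict) -> dict:
    """When one leg of a flip order fills, return the opposite resting leg."""
    flipped_side = "sell" if order["side"] == "buy" else "buy"
    return {"side": flipped_side, "price": order["flip_price"],
            "flip_price": order["price"], "size": order["size"]}

leg = {"side": "buy", "price": 98.0, "flip_price": 100.0, "size": 1.0}
for _ in range(4):          # buy@98 -> sell@100 -> buy@98 -> sell@100 -> ...
    print(leg["side"], "@", leg["price"])
    leg = on_fill(leg)      # simulate the fill and flip to the other leg
```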

Kuru also brings an interesting approach to handling potential liquidity issues by combining CLOB and AMM infrastructure. While professional market makers typically handle liquidity for larger and more liquid assets, there’s much less demand to manage smaller and more speculative tokens; in many cases, this is because the substantial costs associated with hiring a market maker to handle liquidity are unaffordable for startup teams. So, to increase the potential for deep liquidity in smaller markets, Kuru has created vaults.

Vaults essentially bring AMM infrastructure (similar to Uniswap v2) to CLOBs by decomposing the AMM's price curve into discrete limit orders, and submitting them to the CLOB. Since vaults are open for all markets on Kuru by default, anyone can provide liquidity for any asset, in addition to the liquidity supplied by market makers. Ultimately, vaults bring a solution to liquidity inefficiency that also utilizes the efficiency of CLOB infrastructure.
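A rough sketch of the decomposition idea, assuming a plain constant-product (x*y=k) curve: for each price tick, compute the amount the curve would trade in moving to that tick and rest it as a limit order. This is a simplification for illustration; Kuru's actual vault quoting logic is not public in this form.

```python
import math

def curve_to_orders(x: float, y: float, ticks: list[float]) -> list[dict]:
    """Decompose a constant-product (x*y=k) position into discrete limit orders.

    For each tick above the current price, place an ask sized to the base the
    curve would sell moving up to that tick; below, place the equivalent bid.
    """
    k, price = x * y, y / x
    orders, prev_x = [], x
    for tick in sorted(t for t in ticks if t > price):  # asks, ascending
        new_x = math.sqrt(k / tick)
        orders.append({"side": "sell", "price": tick, "size": round(prev_x - new_x, 6)})
        prev_x = new_x
    prev_x = x
    for tick in sorted((t for t in ticks if t < price), reverse=True):  # bids, descending
        new_x = math.sqrt(k / tick)
        orders.append({"side": "buy", "price": tick, "size": round(new_x - prev_x, 6)})
        prev_x = new_x
    return orders

# A vault holding 1,000 base / 100,000 quote (price = 100) quoted onto the book:
for order in curve_to_orders(1_000, 100_000, ticks=[95, 98, 102, 105]):
    print(order)
```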

Perpl

Like Kuru, @perpltrade is building an onchain CLOB which fully harnesses Monad’s powerful performance rather than seeking offchain solutions. Perpl's core features include its onchain clearinghouse, transparent risk engine, and independent asset-specific vaults run by professional market makers (with an emphasis on providing liquidity for new/small assets to enable fast listings).

Since the platform is not yet live on testnet, the amount of public information on its infrastructure is limited. However, with Monad expected to launch on mainnet within the next 3-6 months, it’s likely that more information regarding Perpl as well as their testnet launch will be available in a matter of months, if not weeks.

MegaETH

Similar to Sei and Monad, @megaeth_labs aims to transform the EVM’s capabilities by building its own optimized implementation. However, unlike Monad and Sei, MegaETH is building a rollup on top of the Ethereum L1.

While rollups have addressed some of Ethereum’s scalability issues, a critical bottleneck remains in the path to finality. MegaETH aims to solve this by becoming the first "real-time Ethereum," targeting latency of 1-10 milliseconds, throughput of 100k transactions per second, and up to 10 gigagas/second.

The MegaETH team has not only built a custom implementation of the EVM to eliminate performance bottlenecks, but they’ve also reimagined the standard L2 framework to maximize execution performance and minimize dependency on consensus.

Since its testnet launch on March 21, 2025, MegaETH has delivered promising performance, processing over 6.15 billion transactions from more than 511,000 addresses.

Now, let’s dive into some of MegaETH’s most important modifications to the EVM.

Fig 10. MegaETH Summary

State Management

One of the MegaETH team’s key findings when looking for ways to optimize the EVM was that read/write opcodes were a major bottleneck, accounting for ~50% of computation time. To address this, they came up with two approaches.

The first was to increase the efficiency of state value updates by using a Verkle Tree variant rather than the standard Merkle Patricia Trie. Specifically, Verkle Trees require fewer operations during state updates due to a more compact structure; this reduction of necessary operations makes them a faster alternative, while their smaller proof size reduces cost.

The second was to completely transform the way that sequencers operate. Specifically, MegaETH sequencers are provisioned with 100 GB of RAM – far more than nodes on other high-performance blockchains. This allows the entire state to be stored in memory, eliminating the comparatively large delays caused by SSD access and making state reads roughly 1000x faster, greatly improving latency.
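
The sketch below illustrates the general idea of serving state reads straight from memory; the class and the quoted latency figures are illustrative assumptions, not MegaETH’s actual storage engine.

```python
# Hedged sketch: keeping all state in RAM removes the per-access disk penalty.
# Rough, illustrative latency assumptions: NVMe SSD random read ~100 microseconds,
# DRAM access ~0.1 microseconds -> roughly a ~1000x difference per state read.

class InMemoryState:
    """Toy EVM-style state held entirely in memory: (address, slot) -> value."""

    def __init__(self):
        self.storage: dict[tuple[str, int], int] = {}

    def sload(self, address: str, slot: int) -> int:
        # Plain dict lookup: no trie traversal, no SSD round trip.
        return self.storage.get((address, slot), 0)

    def sstore(self, address: str, slot: int, value: int) -> None:
        self.storage[(address, slot)] = value

state = InMemoryState()
state.sstore("0xabc", 0, 42)
print(state.sload("0xabc", 0))  # 42, served straight from RAM
```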

State Value Communication

In addition to solving the inefficiencies created by suboptimal storage of state, MegaETH improves the state synchronization process between sequencers and full nodes. This isn’t only an EVM issue, but one that’s faced by other high-performance blockchains as well.

To make intra-network communication more efficient, MegaETH’s block propagation protocol economizes on network bandwidth, resulting in faster distribution of the blocks created by the sequencer.

Sequencer Operations

Many L2s use sequencers for several operations, such as bundling transactions, submitting blocks of bundled transactions, and creating proofs that all transactions were executed accurately. However, these operations are typically performed sequentially, so a slowdown in any one step degrades overall performance.

To increase efficiency and minimize potential bottlenecks, MegaETH splits up these operations between the following components:

  • Sequencer node: receives transactions, generates blocks, and stores EVM state in RAM

  • Prover node: generates proofs to verify transactions

  • Full node: stores MegaETH blockchain history, verifies state by re-executing all transactions

  • Replica node: receives state from sequencer node, updates it in local environments, verifies it via proofs generated by prover node

  • Light client: stores latest state received by full and replica nodes, provides it to users

Since replica nodes can provide a verified copy of the latest state without re-executing every transaction, they can serve this data to applications in real time, providing a much smoother experience for apps with the strictest performance requirements.
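
A minimal sketch of that replica-node flow, assuming a simplified state-diff format and a placeholder proof check; none of these types reflect MegaETH’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class StateDiff:
    block_number: int
    changes: dict            # key -> new value (simplified)
    post_state_hash: str     # commitment claimed by the sequencer

def verify_proof(post_state_hash: str, proof: bytes) -> bool:
    """Placeholder for verifying the prover node's validity proof."""
    return True  # assumed valid in this sketch

class ReplicaNode:
    def __init__(self):
        self.state: dict = {}
        self.head = 0

    def apply(self, diff: StateDiff, proof: bytes) -> None:
        # No transaction re-execution: just apply the diff...
        self.state.update(diff.changes)
        # ...and rely on the prover node's proof for validity.
        if not verify_proof(diff.post_state_hash, proof):
            raise ValueError("invalid state diff, reverting")
        self.head = diff.block_number

replica = ReplicaNode()
replica.apply(StateDiff(1, {"0xabc:balance": 100}, "0xhash"), proof=b"")
print(replica.head, replica.state)
```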

Additionally, MegaETH has plans to continue to optimize their sequencer infrastructure. Future testnet iterations are expected to feature:

  • Multiple sequencers

  • Permissionless full nodes and replica nodes

  • Permissionless prover nodes running in optimistic mode

Block Creation

MegaETH’s unique architecture also reforms the block creation process.

In addition to generating standard EVM blocks at 1-second intervals, MegaETH also produces “mini blocks” every 10 milliseconds. Mini blocks eliminate overhead by only containing transaction results, rather than the full metadata load of regular blocks, making it possible to update the network’s state at a “real-time” rate.
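
The sketch below illustrates this two-tier structure, with lightweight mini blocks folding into a full EVM block; the field names and the 100:1 ratio are illustrative assumptions, not MegaETH’s actual data layout.

```python
from dataclasses import dataclass, field

@dataclass
class MiniBlock:
    """Emitted every ~10 ms: only ordered transaction results, minimal overhead."""
    number: int
    tx_results: list  # e.g. [{"tx_hash": ..., "status": ..., "logs": ...}, ...]

@dataclass
class EvmBlock:
    """Emitted every ~1 s: full EVM block with the usual header metadata."""
    number: int
    parent_hash: str
    state_root: str
    receipts_root: str
    mini_blocks: list = field(default_factory=list)

# Roughly 100 mini blocks (10 ms each) fold into one 1-second EVM block.
minis = [MiniBlock(i, tx_results=[]) for i in range(100)]
block = EvmBlock(number=1, parent_hash="0x...", state_root="0x...",
                 receipts_root="0x...", mini_blocks=minis)
print(len(block.mini_blocks))  # 100
```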

Throughput Improvements

For many rollups on Ethereum, a major bottleneck lies within data availability throughput.

Specifically, all rollups that use Ethereum for data storage compete for the same space: Ethereum offers only ~400 TPS of data availability throughput in total, shared across every rollup that posts data to it. This is a major disadvantage for builders who want to optimize performance.

To put this limit in perspective, MegaETH alone requires 20 MB/s of DA throughput – roughly 300x the amount offered to all Ethereum rollups combined.
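
As a rough sanity check on that ratio, the arithmetic below assumes Ethereum blob capacity of about 6 blobs of ~128 KB per 12-second slot; the exact blob parameters are assumptions for illustration, not precise chain constants.

```python
# Illustrative back-of-the-envelope arithmetic only.
blob_size_bytes = 128 * 1024          # ~128 KB per blob (assumption)
blobs_per_slot = 6                    # assumed blobs per 12-second slot
slot_seconds = 12

eth_da_throughput = blobs_per_slot * blob_size_bytes / slot_seconds   # bytes/s
megaeth_requirement = 20 * 1024 * 1024                                # 20 MB/s

print(f"Ethereum DA: ~{eth_da_throughput / 1024:.0f} KB/s")
print(f"Ratio: ~{megaeth_requirement / eth_da_throughput:.0f}x")      # roughly 300x
```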

So, instead of using Ethereum for data availability, MegaETH has opted to use @eigen_da, which has achieved throughput of 100 MB/s since its V2 mainnet launch on July 30. Unlike Ethereum, EigenDA’s throughput capacity grows roughly linearly with the size of its network, creating potential for significant future improvements.

MegaMafia Accelerator

Since its testnet launch earlier this year, MegaETH has already accrued an exciting ecosystem of partners and applications.

A major force driving builders to MegaETH is the combination of its performance capabilities, which enable use cases impossible on other EVMs, and the @0xMegaMafia accelerator system, which aims to help teams building in the ecosystem optimize their products, receive access to capital, and more.

The first cohort of MegaMafia projects serves as a testament to the program’s success. Collectively, the 15 “MegaMafia 1.0” projects have raised over $40M in funding from leading venture firms around the world.

MegaMafia 2.0, which began in April and is currently in progress, aims to bring a second cohort of 15 teams into the ecosystem. Among those 15 projects are 3 CLOBs – Valhalla, Avon, and World Capital Markets (WCM) – each of which brings a unique product to MegaETH.

Valhalla

@valhalla_defi offers native CLOB-based trading for spot assets and perpetual futures, as well as lending markets on MegaETH. The team behind Valhalla has two primary objectives: maximize execution speed (targeting sub-100 milliseconds) and maximize cross-market composability.

To achieve the first objective, Valhalla uses hybrid infrastructure with onchain settlement and offchain matching. The platform leverages co-located sequencers positioned near MegaETH's infrastructure to minimize network latency, enabling near-instant order updates and cancellations without gas costs. As for composability, Valhalla features shared liquidity which enables collateral to be used across spot, perps, and lending markets.

The resulting combination of performance and composability opens new possibilities for capital efficiency, such as performing multiple actions (e.g. covering a short position, rolling margin into a yield vault, and rebalancing collateral) within a single atomic transaction.
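
A hedged sketch of what such an atomic bundle could look like from a client’s perspective; the contract names, function names, and batching helper are hypothetical, not Valhalla’s actual API.

```python
# All contract and function names below are hypothetical placeholders.
actions = [
    ("PerpMarket",   "closePosition", {"market": "ETH-PERP", "size": "all"}),
    ("YieldVault",   "deposit",       {"asset": "USDC", "amount": 50_000}),
    ("MarginEngine", "rebalance",     {"account": "0xabc", "target_ltv": 0.5}),
]

def execute_atomically(actions):
    """Either every action succeeds, or the whole bundle reverts (illustrative)."""
    journal = []
    try:
        for contract, fn, args in actions:
            journal.append(f"{contract}.{fn}({args})")  # pretend-execute each call
        return journal                                   # commit the whole bundle
    except Exception:
        return []                                        # revert everything

for call in execute_atomically(actions):
    print(call)
```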

WCM

@wcm_inc offers the same services as Valhalla – spot trading, perps trading, and lending – and its platform optimizes for maximum composability across the three. At the core of WCM is the universal margin account, which is powered by the ATLAS risk engine. ATLAS is built to maximize traders’ ability to use leverage, enabling features such as undercollateralized lending and the reuse of collateral across spot, perps, and lending markets. While this allows for broad customization of trading and yield strategies, it’s particularly useful for strategies such as basis trades and funding rate arbitrage.

While WCM is not yet live on public testnet, it plans to keep all infrastructure onchain, fully leveraging MegaETH’s high-performance architecture. The ATLAS risk engine continuously recalculates portfolio margin requirements in real time, adjusting for market movements and cross-asset correlations; these computations would be prohibitively expensive on traditional EVMs but are feasible on MegaETH. This speaks to the power of MegaETH, as WCM will require real-time updates to accurately track cross-margin positions, as well as substantial throughput to handle simultaneous matching and execution of spot, perps, and lending orders.
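
To illustrate the kind of computation involved, the sketch below nets offsetting exposures before charging margin; the haircuts, offset factor, and netting rule are invented for illustration and do not describe the ATLAS risk engine.

```python
# Illustrative portfolio-margin sketch; all parameters are assumptions.
positions = [
    {"kind": "spot", "symbol": "ETH",      "notional":  100_000, "haircut": 0.10},
    {"kind": "perp", "symbol": "ETH-PERP", "notional": -100_000, "haircut": 0.05},
    {"kind": "loan", "symbol": "USDC",     "notional":  -40_000, "haircut": 0.02},
]

def portfolio_margin(positions, offset: float = 0.8) -> float:
    """Charge gross margin, then rebate a share of it for hedged ETH exposure."""
    gross = sum(abs(p["notional"]) * p["haircut"] for p in positions)
    # Cross-asset offset: long spot ETH vs short ETH perp hedges most of the risk.
    eth_net = sum(p["notional"] for p in positions if p["symbol"].startswith("ETH"))
    eth_gross = sum(abs(p["notional"]) for p in positions if p["symbol"].startswith("ETH"))
    hedged_fraction = 1 - abs(eth_net) / eth_gross if eth_gross else 0
    return gross * (1 - offset * hedged_fraction)

print(f"Required margin: ${portfolio_margin(positions):,.0f}")
```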

Avon

@avon_xyz brings a different approach to CLOBs by using its architecture to facilitate fully onchain CLOB-based credit markets, enabling interest rates to be truly driven by the market. Since the handling of credit markets via CLOB brings additional complexities, Avon uses a hybrid engine consisting of its orderbook as well as lending pools.

Instead of matching buyers and sellers on price alone like a traditional CLOB, Avon’s orderbook acts as a three-dimensional matching engine that considers interest rate, loan-to-value (LTV) ratio, and borrowing period simultaneously. This multi-parameter matching requires complex state updates with every order, demanding the sub-10ms latency that MegaETH provides to ensure real-time rate discovery. To complement its orderbook, Avon also offers lending pools that give passive lenders an easier way to interact with the matching engine.
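
A minimal sketch of matching on the three parameters at once; the data shapes, compatibility rule, and rate-priority ordering are simplifying assumptions, not Avon’s actual engine.

```python
from dataclasses import dataclass

@dataclass
class LendOffer:
    rate: float      # minimum annualized rate the lender will accept
    max_ltv: float   # maximum LTV the lender tolerates
    max_days: int    # longest duration the lender will fund
    size: float

@dataclass
class BorrowOrder:
    rate: float      # maximum rate the borrower will pay
    ltv: float       # requested loan-to-value
    days: int        # requested duration
    size: float

def match(borrow: BorrowOrder, offers: list) -> list:
    """Return compatible offers, cheapest rate first (illustrative priority rule)."""
    compatible = [o for o in offers
                  if o.rate <= borrow.rate
                  and o.max_ltv >= borrow.ltv
                  and o.max_days >= borrow.days]
    return sorted(compatible, key=lambda o: o.rate)

offers = [LendOffer(0.06, 0.70, 90, 10_000),
          LendOffer(0.05, 0.60, 30, 25_000),
          LendOffer(0.04, 0.80, 60, 5_000)]
borrow = BorrowOrder(rate=0.055, ltv=0.65, days=45, size=8_000)
for offer in match(borrow, offers):
    print(offer)
```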

Avon's markets will be centered around MegaETH's native stablecoin, USDm, built using Ethena's stablecoin-as-a-service infrastructure. USDm maintains its peg through staked ETH yields and basis trading strategies, providing native yield while serving as collateral across the ecosystem. As the first L2-native stablecoin and a significant use case for Ethena's nascent stack, this integration enhances USDm's liquidity and composability, borrowed USDm can seamlessly flow across the broader MegaETH ecosystem.

This innovative structure for credit markets highlights the potential for onchain customization, enables lenders and borrowers to get the best possible rates and customize their loan terms, and is only possible on a high-performance network like MegaETH.

Rise Chain

Known as “the Gigagas Layer 2,” @rise_chain is an up-and-coming Ethereum L2 rollup bringing massive performance upgrades to the EVM-based ecosystem.

Like MegaETH, Rise has built its own custom, optimized version of the EVM to maximize throughput and minimize latency. Rise is currently live on testnet, and its team aims to achieve unprecedented performance: 10 gigagas per second, sub-5 millisecond latency, and over 100,000 transactions per second.

In order to maximize performance, Rise’s infrastructure brings five primary improvements to the standard EVM: a parallel EVM (PEVM), a continuous block pipeline (CBP), partial blocks called “shreds,” based sequencing, and more efficient data management.

Fig 11. Rise Chain Summary

Parallel EVM (PEVM)

The Rise team built an optimized execution engine known as the Parallel EVM, or PEVM, to overcome the sequential-execution limits of the standard EVM.

Specifically, PEVM is an adaptation of Aptos’ Block-STM engine, enabling parallel execution while maintaining deterministic outcomes. This ensures that Rise’s execution results match those of sequential EVM execution, guaranteeing consensus compatibility.
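
A heavily simplified sketch of the optimistic pattern Block-STM is built on: execute transactions in parallel against a snapshot, record read sets, then validate in the original order and re-execute anything whose reads were invalidated. The real engine uses a multi-version data structure and a cooperative scheduler that this sketch omits.

```python
# Simplified optimistic-concurrency sketch (not the real Block-STM scheduler).
# Each "transaction" is a function taking a state snapshot and returning
# (read_keys, writes). The optimistic pass stands in for parallel workers.

def run_block(state: dict, txs) -> dict:
    results = [tx(dict(state)) for tx in txs]     # optimistic pass on a snapshot
    committed = dict(state)
    for i, tx in enumerate(txs):
        reads, writes = results[i]
        # A tx is invalid if any key it read was changed by an earlier committed tx.
        if any(committed.get(k) != state.get(k) for k in reads):
            reads, writes = tx(dict(committed))   # re-execute against latest state
        committed.update(writes)
    return committed

# Two transfers touching the same account: tx2's read of "alice" is invalidated
# by tx1, so tx2 is re-executed deterministically in the original order.
def tx1(s): return ({"alice"}, {"alice": s["alice"] - 10, "bob": s.get("bob", 0) + 10})
def tx2(s): return ({"alice"}, {"alice": s["alice"] - 5, "carol": s.get("carol", 0) + 5})

print(run_block({"alice": 100}, [tx1, tx2]))
# {'alice': 85, 'bob': 10, 'carol': 5} -- same result as sequential execution
```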

Continuous Block Pipeline (CBP)

Many rollups face performance bottlenecks because a large share of the block-building process is spent on consensus operations. To minimize this latency, Rise pipelines its processes, similar to many other high-performance chains.

Rise’s CBP essentially enables the network to multi-task, allowing for two major improvements:

  • Transactions can be executed while still residing in the mempool

  • Transactions can be executed before consensus is reached

Shreds

In order to make the block-building process even more efficient, Rise also breaks blocks down into “shreds”: mini-blocks that aren’t required to contain a state root (a cryptographic commitment to the blockchain’s full state).

Because shreds bypass state root Merkleization, they can be constructed and validated much faster than traditional blocks (as fast as 1 millisecond). Additionally, each shred is propagated as soon as it is ready, rather than waiting for the full block to be verified. This dramatically improves overall latency without sacrificing security.
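
A minimal sketch of the shred idea, assuming a simplified structure in which executed transactions are streamed immediately and the state root is computed only when the full block is sealed; this is not Rise’s actual wire format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Shred:
    """~1 ms partial block: executed transactions, but no state root yet."""
    index: int
    txs: list

@dataclass
class Block:
    number: int
    shreds: list = field(default_factory=list)
    state_root: Optional[str] = None   # filled in once, when the block is sealed

def propagate(shred: Shred) -> None:
    # Broadcast immediately instead of waiting for the full block.
    print(f"broadcast shred {shred.index} with {len(shred.txs)} txs")

block = Block(number=1)
for i in range(3):
    shred = Shred(index=i, txs=[f"tx{i}"])
    block.shreds.append(shred)
    propagate(shred)                   # peers see results without waiting for the root
block.state_root = "0x..."             # Merkleization deferred to block sealing
```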

Based Sequencing

Another innovation in the Rise stack is their use of “based sequencing” as an alternative to traditional rollup architecture – rollups that adopt this form of architecture are therefore typically referred to as based rollups.

While many rollups use a single sequencer, Rise avoids this approach due to its inherent centralization risks (e.g. a central point of failure, potential censorship). Instead, Rise opts for based sequencing, which uses a subset of the underlying L1’s (in this case, Ethereum’s) block proposers to build blocks. Specifically, transactions made on Rise are submitted to L1 proposers for ordering and block generation, while simultaneously being processed by Rise’s own execution nodes.

This not only takes advantage of Rise’s performance-optimized infrastructure, but also inherits the decentralization, security, and reliability of Ethereum’s decentralized validator set.

Data Management

To improve data availability performance and state management, Rise integrates Celestia’s DA layer – which optimizes data availability operations – alongside its own custom-built database, RiseDB.

Due to the bottlenecks associated with using Ethereum for data availability (explained in the MegaETH overview), Rise opts for an alternative DA layer. Unlike MegaETH, however, Rise has chosen Celestia rather than EigenDA to store its data.

Currently, Celestia offers throughput of ~1.33 MB/s – roughly a 21x improvement over Ethereum’s DA throughput. Speeds of 27 MB/s have already been achieved on its Mamo testnet, and the long-term target is 1 GB/s.

Additionally, Rise uses its own database, RiseDB, to optimize state management. Unlike Ethereum’s Merkle Patricia Trie, RiseDB is designed for parallel execution: it leverages Rise’s pipelining architecture to update state concurrently, operating at full capacity without hindering sequencer and execution activity.

Coming Upgrades + NitroDEX

Looking ahead, the Rise team is working on several features that could increase performance by an additional 3-5x, moving towards their ultimate goal of achieving over 10 Gigagas per second. Rise’s ongoing developments include:

  • Optimizing concurrent data structures

  • Implementing more granular memory locations

  • Adding pre-provided metadata from statically analyzed mempools

  • Writing custom memory allocators

  • Supporting multiple EVM executors

With Rise’s mainnet coming soon (estimated arrival in late 2025), its performance-maximized infrastructure will be a force to be reckoned with for high-activity apps such as CLOBs.

Currently, one of the most popular apps on Rise’s testnet is @nitro_dex, a fully onchain CLOB. Rather than relying on native liquidity, NitroDEX operates as an “aggregator of aggregators,” finding the most efficient route for trades across multiple ecosystems while leveraging Rise’s inherent low-cost, high-speed infrastructure.

  2. Comparative Analysis

Decentralization Trade-Offs

A notable trend within the CLOB space is the increasing shift to fully onchain infrastructure. Today, however, all three of the Solana-based CLOBs covered in this report (Drift V2, Pacifica, and Bullet) still use a hybrid architecture:

  • Drift V2: offchain orderbook with Keeper bots for matching, onchain settlement

  • Pacifica: offchain orderbook/matching, onchain settlement and self-custody

  • Bullet: offchain orderbook with a custom-built sequencer for matching, onchain settlement

The rest of the CLOBs we’ve discussed, by contrast, run their orderbooks, matching engines, and settlement mechanisms fully onchain.

There are clear trade-offs for each. Offchain matching engines may provide lower latency and lower fees for traders, as well as lower infrastructure costs and better UX for market makers. However, keeping core operations offchain is unlikely to succeed in the long term, for two main reasons:

  • Centralized infrastructure reduces transparency and increases vulnerabilities by relying on central points of failure

  • While offchain operations are potentially more efficient in the short term, their performance advantages are at risk of being matched, if not outpaced, by the rapid rate of onchain innovation

Looking ahead, this trend toward fully onchain CLOBs is likely to persist, and many CLOBs that currently rely on offchain infrastructure will probably shift all operations onchain as the performance advantages of onchain alternatives continue to grow.

Developer Experience

As mentioned earlier, the emergence of real-time blockchains initially resulted in an industry-wide shift away from EVM into AltVMs such as SVM and MoveVM due to the EVM’s perceived performance limitations. However, newer blockchains have demonstrated that the EVM can be modified to overcome these issues and deliver AltVM-comparable performance.

Now that the performance gap between the EVM and AltVMs is shrinking, developers who want to build performance-optimized products have more flexibility in where they build. As a result, developer experience is quickly replacing raw performance as the defining moat.

For existing blockchain developers, EVM-based ecosystems tend to have an advantage; since Ethereum remains by far the biggest ecosystem in web3, many developers are already familiar with Solidity-based smart contracts and EVM tooling. In fact, there are currently over 10,000 developers building on EVM-compatible blockchains.

The EVM stack also provides advantages for non-native “web2” developers to easily onboard into web3 for two main reasons:

  • Solidity is the most comparable to widely-used traditional programming languages, such as JavaScript and C++

  • Since Solidity is the oldest and most battle-tested smart contract programming language, it has the widest array of tools to optimize overall developer experience

However, there are important nuances to consider. For example, the more changes that are made to the EVM and related SDKs by each blockchain, the larger the learning curve for new developers to begin building.

At the same time, the AltVM-based real-time blockchains (Solana, Aptos, and Sui) are all relatively large ecosystems with robust tooling and ecosystem support. While newer blockchains such as MegaETH and Rise Chain may have better theoretical performance benchmarks, they have relatively limited resources for onboarding developers simply because of their age and size. However, as we’ve seen with MegaETH’s MegaMafia program, it’s still very much possible to successfully onboard new projects, even if there’s a steeper learning curve for developers.

Ultimately, as AltVM ecosystems face more competition from EVM-based ecosystems, we’re likely to see a trend in which significant effort is placed on creating an easy onboarding experience for web3 and web2 developers alike.

Ecosystem Maturity

From an ecosystem standpoint, Solana has a major advantage over the other real-time blockchains due to its first-mover position. In fact, Solana’s mainnet launch occurred before development of the other real-time blockchains even began. However, since this space is still in its infancy, success is far from guaranteed: Hyperliquid’s rapid ascent from newcomer to dominant player shows that the CLOB space is only just taking shape.

This means that pre-mainnet ecosystems, such as Monad, MegaETH, and Rise Chain, are not at as much of a disadvantage as it may seem.

  3. The Endgame

Keeping Web3 In Web3

Right now, DeFi is experiencing a significant breakthrough as CLOBs begin to move past their infancy. As the space continues to mature, new features, architectures, cross-platform capabilities, and many unforeseen developments will undoubtedly be realized.

Two of the most important trends in web3 that are currently shaping the future of CLOBs (which are subject to change, of course) are:

  • The increasing interest and presence of traditional financial institutions within the DeFi space

  • The acceleration of performance breakthroughs, such as Gigagas capabilities, more efficient pipelining, partial-block execution, and many more

While the proliferation of institutional activity in web3 is encouraging, teams building real-time web3 infrastructure need to be active in onboarding institutional users in order to achieve sustainable success. That’s because there exists a potential scenario where many large financial organizations simply choose to build their own high-performance network using the model of an existing real-time blockchain.

An advantage currently held by web3 developers is the rapid pace of innovation and the network effects of existing DeFi liquidity and composability – elements difficult for institutions to replicate in isolation. For example, Monad was seen as the gold standard for high-performance blockchains just two years ago. While Monad’s technology is certainly impressive, emerging blockchains like MegaETH and Rise Chain are already targeting 1000x improvements on Monad’s latency and 10x improvements on throughput. What makes this truly remarkable is that all of this has happened before Monad’s mainnet launch.

This rate of change presents a powerful moat, but likely one that only lasts in the short term. Once the pace of acceleration slows, it will be easy for institutions to simply “build their own MegaETH.”

Ultimately, if web3-native teams can convince institutions to adopt not only their infrastructure, but their expertise, CLOBs have a clear path to establishing dominance in the coming global evolution of market infrastructure.

Potential Effects Of Consolidation

An interesting trend that’s recently become more prevalent within the CLOB space is the ascent of projects prioritizing CLOB infrastructure over front-ends, such as Decibel, DeepBook, and Monaco.

This could lead to the CLOB wars splitting into different battles: a larger battle to dominate the infrastructure layer, and smaller battles over features such as UX, leverage limits, order type offerings, asset listings, collateral types, and much more. This scenario would be comparable to what is currently developing within the Hyperliquid ecosystem. Additionally, we may also see the rise of community-based frontends – which could direct most or all protocol fees towards community-centric initiatives – as an alternative to the traditional referral-link fee accrual approach.

Ultimately, many of the smaller battles can be boiled down to targeting retail vs professional users. And while there are countless versions of the future that this competition could create, two things are almost certain:

  • Competition will get much stronger, resulting in consolidation

  • The number of web3 users will become much larger

One possibility is an era dominated by consolidation. In this scenario, a majority of users stay on a disproportionately small selection of blockchains. For example, if during the next 5 years, 1 billion users enter the space, and 95% of their activity takes place on just 10 blockchains, we would see a mass migration of the most useful applications onto those 10 blockchains, starving the rest not only of liquidity, but also of users. While this is an extreme example, some degree of it remains well within the realm of possibility. If a selected few blockchains emerge as first-movers to onboard users into web3, and especially if those blockchains have sufficient scalability, appealing UX, and engaging products, it’s unlikely that those first-time users will expand their activity across multiple blockchains.

Another scenario, and perhaps a more realistic one, is that users are distributed between ecosystems at a more moderate rate (i.e. following a Pareto distribution). While this would still mean the eventual failure of many currently existing blockchains (a likely scenario in any future), consolidation would be less extreme, creating healthier competition. Specifically, there would be more room for emerging CLOB platforms to take market share, forcing the leaders to keep up a healthy pace of innovation or risk being abandoned for a more promising alternative.

While the number of possible scenarios is endless, a likely outcome is that a majority of surviving CLOBs will use their infrastructure to support the development of individual DeFi products. This can create a win-win dynamic, where users and builders favor the infrastructure providers offering the most appealing incentives (e.g. direct revenue pass-through, subsidized fees, liquidity for builders), as well as the apps within each CLOB ecosystem that provide the best products (e.g. clean UX, best yields, fastest listings).

Consolidation itself is, of course, another likely outcome. However, barring an extreme scenario where just a few infrastructure providers hold enough liquidity to power global trading for retail and institutions alike, composability and interoperability will need to be used extensively.

The Importance Of Composability

Ultimately, in order for increased competition to have a positive impact on the space, it’s essential that developers use web3’s inherent composability – one of web3’s largest moats – to its fullest extent. This will result in better products, better opportunities (for yield, trading, AI-driven asset management, etc.), and further expand the capabilities that decentralized, onchain networks have over their centralized predecessors.

Whether web3 experiences significant consolidation or not, it’s also essential that cross-chain solutions are built to address the inevitable fragmentation created by a multi-chain world. Interoperability serves as a powerful feature of composability, and to support the consistent inflow of (potentially) trillions of dollars from global markets, there will need to be secure and scalable mechanisms in place for liquidity to flow between major infrastructure providers.

———————————————————————————————————————————————————

The content provided in this article is for educational and informational purposes only and should not be construed as financial, investment, or trading advice. Digital assets are highly volatile and involve substantial risk. Past performance is not indicative of future results. Always conduct your own research and consult with qualified financial advisors before making any investment decisions. A1 Research is not responsible for any losses incurred based on the information provided in this article. This campaign contains sponsored content. A1 Research and its affiliates may hold positions in the projects and protocols mentioned in this article.

