The Cross-Domain Thesis Part 1: Setting The Stage

Words by Maven 11 Venture


Moving towards a world of many domains

As the blockchain ecosystem expands with new rollups and app-chains each month, liquidity becomes increasingly fragmented. Yields are scattered, users dispersed. Because of this, one thing has become evident: aggregation is, and will increasingly become, imperative. Many projects are striving to improve these issues, but they encounter various challenges that can be addressed through better infrastructure, trust-minimisation, interoperability, and language expression. These challenges encompass persistent oracle issues, liquidity fragmentation, compatibility with different virtual machines (and building for them), collateral management, non-conditional transactions and the need for simplified yet customisable transaction mechanisms.

The past few months have showcased an increase in announcements of app-chains and rollups, often utilising shared or derived security from existing protocols. The rollup part is specifically important to note here, as the increase in rollups plays into the cross-domain thesis – especially as long as there's no inherently chosen global state, transport layer or in-protocol bridging/messaging system yet. This means that the rollups, while able to utilise liquidity and derive part of their censorship resistance from the underlying layer (depending on the security and trust assumptions on the rollup/smart contract side), are still quite limited in cross-rollup communication. Essentially, they act like siloed expansions of Ethereum liquidity, and are more like singular chains that run into the same communication problems as others before them. Bar shared sequencing setups, which will likely alleviate some of these issues for sets of rollups, most existing and many future rollups will run into these communication concerns. The underlying layer's finality, congestion, and throughput are also limiting factors here.

For example, if I as a user of rollup A want to make a "trustless" transfer to rollup B (without going through a trusted third-party bridge that brings additional risk), I would have to either wait for a ZKP to be verified and for probabilistic finality after some set of supermajority epochs, or wait for the challenge period (in the case of an optimistic rollup) to have ended before I can move my funds to the bridge contract of rollup B. These issues are easier to solve in the case of deterministic finality, as long as the two rollups share a transport layer and can verify each other's state machine – either via a light node or a full node verifying block headers or state that proves inclusion on the underlying layer. In the former example, utilising the underlying layer's economic security to guarantee inclusion (if all data is available) to the rollups is also a relatively "strong" guarantee that a bridge transfer can happen.

Cross-Rollup Transfers using Cross-Domain sync and restaked collateral

Many of these scenarios also assume that the token standard of the connected chains and rollups is universal (e.g. IBC w/ CW20 from the bank module of chain A to the bank module of chain B, or fungible ERC-20 tokens moving from deposit contract to deposit contract on the same underlying settlement layer).

Overview of Cosmos SDK Modules


It is important to note, though, that in IBC's current form, token routes are path-dependent. This means that tokens which have taken different paths to a certain end state are non-fungible and, for example, can't be used in the same pool. The path the tokens have taken via IBC is encoded in their denomination (denoms, read more here), which uses SHA256 to produce a fixed-length output. It does mean that the ICS-20 module in IBC has to keep a mapping of all IBC denoms, since it can't compute the input from the hash, in order to look up the path and denom. The reason for this is entirely security-based (since tokens with different paths have different security guarantees); however, it does make it rather inconvenient for end-users and applications that are trying to do various cross-domain interactions – since tokens (at least in a world with DEXs beyond Osmosis in the interchain) are likely to have moved around considerably. These would be unable to be deposited into the same pool (different denom, "same" token), and the user (perhaps unaware of this) would be perplexed – not a great user experience.
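To make the path-dependence concrete, here is a minimal sketch of how an IBC voucher denom is derived from its transfer path: the port/channel trace plus the base denom is hashed with SHA256, so the same base token arriving via two different routes ends up with two different, non-fungible denoms.

```python
import hashlib

def ibc_denom(trace_path: str, base_denom: str) -> str:
    """Derive an ICS-20 style voucher denom from its transfer path."""
    trace = f"{trace_path}/{base_denom}"        # e.g. "transfer/channel-0/uatom"
    digest = hashlib.sha256(trace.encode()).hexdigest().upper()
    return f"ibc/{digest}"

# Same base token, two different routes, two different denoms:
print(ibc_denom("transfer/channel-0", "uatom"))
print(ibc_denom("transfer/channel-141/transfer/channel-0", "uatom"))
```

Because the hash can't be inverted, the chain has to keep a lookup table from each `ibc/...` denom back to its full trace – which is exactly the mapping mentioned above.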

There’s an upcoming change for this, called path unwinding, which will help alleviate some of these issues. Path unwinding is essentially just a way to solve the non-native denom issues, by having them be sent back to their native chains before going to the final destination – this does add some inherent latency into the system, though. However, in the case of several hops from various chains to get to the tokens end-state, it does significantly lower hops, by bridging straight to its native chain, and then to its end destinations (in case of a situation where you’d previously need the same winding path for fungibility).

Path Dependent (now) vs Path Unwinding (future)

Another way this is being solved concurrently is via Packet Forward Middleware, a middleware module built for IBC. A chain can utilise this module to route incoming IBC packets from different chains through the source chain first, so that they get the same denomination and all tokens remain fungible. This becomes a bigger issue as the number of chains increases by the month, meaning an increase in the number of archival nodes and relayers that must be run for each counterparty chain. The forwarding middleware also helps with, and ties into, multi-hop IBC transactions (e.g. hopping through numerous chains to end up at the destination with the same denom as tokens of the same type). Essentially, by routing through the same chain, we can uphold the same denominations. It also means that what previously involved multiple chains now becomes a single transaction, instead of a user going from interface A, to B, to C and so on.

Concurrent middleware solution to solve Path Dependency

It also mirrors the way data packets are traditionally routed across the internet to reach destinations, in this case by having the native source chain as the router that glues together the rest of the connections (and as such deriving security from the source chain path). The Packet-Forward-Middleware acts as the gateway out of a network (router interface connected to a local network in traditional terms).

If the destination is directly connected to the same router, the packet is forwarded directly to that. If the destination network is not directly connected, the packet is forwarded on to a second router that is the next-hop router.

If you go to Osmosis today, a good example of how this currently works is that if you want to reach dYdX with your axlUSDC (that's on Osmosis), you'd have to transfer from Osmosis to Axelar, and then off to dYdX (unless all flows through Osmosis). However, with packet-forward-middleware, you can now do this in a single transaction. This is supported by the new memo field in ICS-20 token transfers, which adds extra information to transfers (denom, ticker, action etc.). This plays into the middleware design, since it can now route packets correctly:

Source: IBC-Go
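For illustration, here is a minimal sketch of what such a forwarding memo could look like. The field names follow the packet-forward-middleware convention, but the addresses and channel are placeholders, so treat the exact schema as an assumption and check the module's docs.

```python
import json

# Hypothetical ICS-20 transfer memo instructing the intermediate (source) chain
# to forward the tokens onwards over another channel instead of stopping there.
memo = {
    "forward": {
        "receiver": "dydx1...",       # placeholder final-destination address
        "port": "transfer",
        "channel": "channel-42",      # placeholder channel from the hop chain to the destination
        "timeout": "10m",
        "retries": 2,
    }
}

print(json.dumps(memo, indent=2))
```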

This, of course, adds extra strain on already heavily utilised IBC relayers, which is also why relayer incentivisation and fees are an important topic (not just in the Cosmos ecosystem, but in the broader modular and cross-domain world as well). Currently, relayers rely on compensation from delegations to their validator nodes (relayers are primarily run by validator providers). ICS-29 is a way to help solve this. However, with incentivisation also comes the importance of censorship resistance, and of preventing profit-seekers from working against the protocol to extract rent. This means that relayers must adhere to certain rules through incentivisation, such as;

  1. Timely delivery of packets
  2. Relaying acknowledgements
  3. Timeouts, if packets have expired before delivery, without needing a restart of channels
  4. Exposed relayer addresses to IBC modules, so they can be incentivized.

There would likely be three fees involved in packet relaying, distributed to the forward and reverse relayers (receive and acknowledge) – a sketch of how these could be settled follows the list;

  1. ReceiveFee
  2. AcknowledgmentFee
  3. TimeoutFee (in the case of timeout of non-delivered packet)
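As a rough, illustrative sketch of that settlement under ICS-29: the receive fee goes to the relayer that delivers the packet, the acknowledgement fee to the relayer that returns the acknowledgement, and the timeout fee to whoever relays a timeout, with unused fees refunded to the original payer. The actual escrow and refund logic lives in the fee middleware; the names and structure below are simplified assumptions.

```python
from dataclasses import dataclass

@dataclass
class PacketFees:
    receive_fee: int   # paid to the forward relayer that delivers the packet
    ack_fee: int       # paid to the reverse relayer that relays the acknowledgement
    timeout_fee: int   # paid to the relayer that relays a timeout, if the packet expires

def settle(fees: PacketFees, *, forward_relayer: str, reverse_relayer: str,
           timeout_relayer: str | None, payer: str) -> dict[str, int]:
    """Illustrative settlement: fees for paths not taken are refunded to the payer."""
    if timeout_relayer is not None:  # packet expired before delivery
        return {timeout_relayer: fees.timeout_fee,
                payer: fees.receive_fee + fees.ack_fee}
    return {forward_relayer: fees.receive_fee,
            reverse_relayer: fees.ack_fee,
            payer: fees.timeout_fee}
```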

Generally, what relayer incentivisation enables is the ability to get more market participants involved in cross-domain message passing. However, another view is that less popular routes would need to be subsidised (or possibly shut down), since you're creating a market that incentivises popular routes. Although, you could also view this as the user base expressing their preferences in the form of fees – a form of intent, if you will.

This would hopefully increase decentralization and censorship resistance while providing a more efficient market for cross-chain message passing – a much-needed change in how relaying is currently handled (while great, it is not very efficient for the existing market participants, nor is it sustainable for most) so that IBC can scale beyond its current size. Furthermore, it also brings transparency to the relayer market. However, what incentivisation and revenue likely mean is that the fastest, most efficient (and geographically best located) relayers accrue the most value from fees. Some potential solutions to this have been proposed, such as voting mechanisms in which a small subset of relayers is selected for specified time intervals on certain channels, giving more relayers a chance to participate. If you're interested in reading more about relayer incentivisation related to IBC, have a look at this link.

A current list of large, active IBC relayers can be seen here:

Source: IOBScan

If you're an active cross-domain user, it has probably been clear that, largely (bar a few newer applications), the UX has been less than stellar. The new developments happening here are arguably going to get us closer to the seamless experience so many are working towards.

Taking a step back and looking a year back at our large modular-focused piece, The Modular World, it is interesting to see just how far the modular thesis has developed. The original piece had a diagram that showed our idea of how a modular ecosystem could eventually look. It's safe to say that we've already gone far beyond that.

Source: The Modular World

Here's an updated version that also shows how the Ethereum ecosystem has developed, and how many rollup SDKs and as-a-service providers have popped up since then.

Non-Comprehensive Overview of the Modular World

While the modular and cross-domain thesis remains clearer than ever, many issues still loom. Questions remain unanswered regarding the profitability of many of the rollup service providers, even more so than for SDKs. One issue is that the cost of hardware and DevOps might outweigh the profits from less transaction-heavy and less utilised rollups. If the developer(s) utilising the rollup aren't paying for bandwidth, hardware costs and overhead, the service provider might quickly run into issues. Otherwise, you're forced to incentivise by giving away tokens, which, if they have to be sold for the project to keep running, leads to a poorer security budget and unhappy token holders. For a more in-depth discussion on the security budget, we refer to our modular MEV pieces – part 1, part 2.

However, in the case of providers running key infrastructure within their stacks, and those rollups seeing a lot of traction – there's clearly a lot of value to be captured. This potential value might also see well-established infrastructure providers that are already running validators and nodes in other ecosystems start to want a piece of the pie. This could come from them running sequencers, relayers and more within these stacks – which could help decentralize parts of the hardware and, more importantly, provide liveness and censorship resistance, instead of relying on a single team to run everything. Beyond that, it's also clear that getting the applications that are extremely valuable to an infrastructure or service provider to onboard is going to be largely a BD game. Although we've seen decentralization and technology win out before, in this case the customers are unlikely to care as much about the underlying infrastructure, as long as they feel properly serviced. What this all means is that we're likely to see a ton of new rollups, both singular and as (app-specific) L3s.

Carving out a specific niche, or targeting a specific sector, might also be a way for RaaS providers to gain significant traction. It is less clear how SDKs can be monetised, although they're very popular for building out homogenous ecosystems – as we've seen before. If you don't run key infrastructure within those stacks, monetisation can be difficult to attain. This is the same reason that MEV software is hard to monetise unless you run key parts of the infrastructure of a protocol using the software. Selling software is, at this nascent stage, hardly scalable and is also a limiting factor for research, testing and experimentation. A good example of this is the loss of Nvidia's CUDA (machine learning software development framework) monopoly, and TensorFlow's loss to PyTorch (ex-Meta) and OpenAI's Triton; the disruptive nature of open-source software (not limited to specific underlying infrastructure or hardware) and its ability to iterate faster and gain network effects much quicker than closed-source software is a key differentiating factor. The general usability and flexibility (or preference expression, nudge nudge) of a language far outweighs exclusivity. If you're developing a domain-specific language (DSL), there are good reasons to open-source it and make it available for research, usage and optimisation by fellow researchers and developers – it arguably increases the chances of it taking off. However, monetisation becomes increasingly difficult unless you either own the stack on which the language is used, or run key parts of the infrastructure. If you're interested in reading the entire saga, I would highly recommend this article (Thanks, MJ!).

There's also a high likelihood that, to attract the first rollup customers, RaaS providers might have to bear the initial cost or even "pay" to attract the applications we view as "killer apps", i.e. those that can bring in a large number of users or liquidity. This can be a daunting task for providers with a smaller war chest than others. This "race to the bottom" becomes clearer once you see the considerable number of providers that have already popped up; there's a real chance the competition gets very tough in attracting the applications they all want. There's also a chance that, for some, the cost of DevOps and hardware may outweigh the profit margins of less popular applications (application-specific versus general); a good post on this was recently made public by the Chorus One team.

A likely future in the coming year is a Cambrian explosion of application-specific rollups and general smart contract chains, but with "easier access" to liquidity – just as we've had numerous more "fragmented" application-specific chains in the Cosmos ecosystem. That ecosystem, despite how incredible IBC and its standards are, still feels quite fragmented from a UX perspective – although there are many projects working on solutions here, such as Squid, Catalyst, Socket, Li.Fi, Connext and many others. As we covered earlier, the interchain ecosystem itself is also working on in-protocol features to solve many of the UX problems currently faced. Some applications are starting to look like actual native cross-chain applications that require no synchronisation between operations and allow for asynchronous access. Underneath is an example of Catalyst's architecture:

Catalyst Architecture

In Catalyst, the Unit of Liquidity serves as the intermediary between market makers (pools), facilitating asset matching. Here, the trade intermediate is redefined not as an asset itself, but as the trade cost (Units). To simplify implementation and enable cross-chain swaps, Units can be abstracted as a numerical value, enabling existing message relayers to facilitate the process.

To leverage existing cross-chain messaging layers and streamline cross-chain swaps, a function can be devised. In a pool of assets, each asset is linked to a decreasing, non-negative marginal price function, describing how the price of the asset changes as a certain variable changes. Decreasing means that as the variable increases, the price of the asset tends to decrease; non-negative means that prices are either zero or positive. The trade cost, represented in Units, is calculated from the integral (consider it to be an area) of the price function. The integral represents the area under the curve of the price function, and it gives us a measure of the total value or cost of trading that asset. From this, you can calculate how the price changes with respect to the variable.
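As a sketch of the relationship described above (the notation is ours, introduced for illustration rather than taken from Catalyst): if a pool holds a balance \(w_i\) of asset \(i\) with marginal price function \(P_i\), depositing \(\Delta_i\) of asset \(i\) yields Units equal to the area under the price curve, and those Units are redeemed on the destination chain by solving the mirrored integral for the output amount \(\Delta_j\):

\[
U = \int_{w_i}^{w_i + \Delta_i} P_i(x)\,dx,
\qquad
\int_{w_j - \Delta_j}^{w_j} P_j(x)\,dx = U .
\]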

By utilizing Units as an intermediate between pools on various chains, swaps can be computed solely based on a chain’s local state, eliminating the need for state synchronisation. As such, you can ensure that liquidity can be accessed asynchronously on each chain, guaranteeing liquidity availability regardless of the timing or sequence of interactions with the involved chains.

Some applications likely to make use of the various native cross-chain protocols are what have been dubbed "frontends of blockchains" over the past years, such as Dora, FortyTwo and Zapper. These aim to provide a single front-end that interacts with various protocols, applications and middleware through a single UX. These types of products are likely to see increases in users as the amount of cross-domain interaction increases. Intents will also play a prominent role here, as you'll find out later on in the series.

Regardless of the aforementioned issues, it's been refreshing to see so many new announcements from old and new applications, as well as infrastructure projects building out new and exciting use-cases in modular fashion – everything from application-specific rollups, app-chains, general L3s and sovereign rollups to RaaS providers, SDKs and interpreters.

Many of the projects mentioned are also interesting in the sense that they’re servicing the entirety of the modular ecosystem, by offering various customisation options and solutions. You want a rollup using Celestia for DA and Ethereum for Settlement? Go for it. Do you want an Ethereum rollup? Go for it. Do you want a Sovereign Rollup on Celestia? Go for it. Do you want a rollup on a sovereign settlement layer? Go for it. This showcases a unique selling point of modularism, in that everything is possible if you can build it. Build whatever.

As we mentioned at the start, one of the issues that continues to plague the rollup and cross-domain world is scalability. This is a result of the demand from rollups for blockspace on the underlying layer. The Ethereum community has been at the forefront of rollup development and is consequently also developing solutions specifically catered to rollup developers and users. The most notable effort is EIP-4844, which focuses on lowering the cost (and increasing the amount) of transaction data rollups can post to Ethereum. This is an indicator that the largest crypto ecosystem, Ethereum, is moving towards a modular, cross-domain world. Tons of devs are working tirelessly every day to ensure that EIP-4844 ships in a timely manner. However, there are still issues that need to be ironed out, as well as trade-offs and centralization concerns that need to be addressed and considered. As such, we feel it warrants covering EIP-4844, both its positives and negatives for Ethereum as the stalwart of decentralization. The rollup-centric roadmap further exemplifies the move to a modular cross-domain world, as seen below:

Source: Vitalik Buterin, EF

You can also check out Vitalik’s post from 2020 that showcases the move towards accommodating rollups within the core infrastructure of Ethereum itself.

EIP-4844 (or proto-danksharding) is an upcoming change to the Ethereum protocol that aims to create a separate gas market and gas type (blob_gas) for the data rollups post on-chain to Ethereum, which today goes in as calldata. We also refer to this data as "blobs"; it will be gossiped around the consensus layer nodes and persist slightly longer than it takes for rollups to achieve "finality".
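For context, here is a simplified sketch of the fields a blob-carrying transaction adds on top of a regular EIP-1559 transaction. This follows the EIP-4844 spec in spirit but is abridged, so treat it as illustrative rather than the exact transaction type.

```python
from dataclasses import dataclass, field

@dataclass
class BlobTransaction:
    # Regular EIP-1559 style fields (abridged)
    to: str
    value: int
    max_fee_per_gas: int
    max_priority_fee_per_gas: int
    data: bytes
    # New in EIP-4844
    max_fee_per_blob_gas: int   # bid in the separate blob gas market
    blob_versioned_hashes: list[bytes] = field(default_factory=list)  # 32-byte references to the blobs
    # The blobs themselves (plus KZG commitments and proofs) travel in the
    # network wrapper / consensus-layer sidecar, not in the execution payload.
```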

It is set to bring lower fees to rollups (and their users) by increasing the amount of transaction data that rollups can post by a considerable margin. This means lowering the cost of accessing Ethereum liquidity on rollup blockspace, and making new and exciting use-cases possible. This is highly likely to result in an increase in rollup users and applications that might see this as the perfect time to move (or expand) to a rollup.

Beyond that, it is also built in a manner that makes it future-proof (something that’s critical with how many upgrades Ethereum goes through and has gone through). Blobs persist in beacon nodes (consensus layer) rather than the execution layer.

For example, blobs are stored in say Prysm/Lighthouse and not Geth/Besu. This separation allows the execution layer to focus on other initiatives in parallel, while future danksharding work can be done by making changes specifically to the consensus nodes. It’s a unique modular approach that allows for fine-tuning over the years, which is very cool.


Overview of popular Consensus/Execution clients. Not listed are others such as Reth and planned Elixir (DSL) built nodes. Source: ClientDiversity

If you've paid attention to the trusted setup ceremony that has been going on over the past few months, that is also related to EIP-4844 and data blobs. You might even have participated in the ceremony (if you were able to get through the queue ;)). This ceremony provides a trusted setup for the KZG commitments (we'll cover these in a bit) by generating a data set that can be utilised each time a KZG commitment is executed. The generation of this data set involves using certain secret information. The "trust" aspect arises from the fact that a person or group of individuals is responsible for generating these secrets, using them to create the data set, publishing the data set, and then ensuring that the secrets are forgotten.

The purpose of a trusted setup ceremony is to establish a foundation of trust in the generated data set (an essential part). This has been done by involving, so far, 113,299 contributions to the recursive secret, which means that not a single participant possesses all the secret information required to compromise the security. The number of contributors to this trusted setup makes it the largest in history.

This distributed approach means that the protocol remains secure even if some participants turn out to be malicious or compromised, because you can arguably never get all 113K to become malicious – plus they might have forgotten their secret, or even randomized it completely (there were some very cool contributions; Anoma has also done something similar with their trusted setup for Namada).
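To make the "recursive secret" idea concrete, here is a deliberately toy sketch: integers modulo a prime stand in for elliptic-curve points, and the constants and names are purely illustrative. Each contributor folds their own fresh secret into the accumulated setup, so the final secret is unrecoverable as long as at least one contributor discards theirs.

```python
import secrets

P = 2**255 - 19   # toy prime modulus, purely for illustration
G = 5             # toy generator

def contribute(powers: list[int]) -> list[int]:
    """Multiply a fresh secret s into the accumulated tau: each published power
    g^(tau^i) becomes (g^(tau^i))^(s^i) = g^((tau*s)^i). The contributor then forgets s."""
    s = secrets.randbelow(P - 2) + 1
    return [pow(p_i, pow(s, i, P - 1), P) for i, p_i in enumerate(powers)]

# Initial transcript with tau = 1, i.e. every power g^(1^i) is just g.
powers = [G] * 4
for _ in range(3):
    powers = contribute(powers)
# Recovering the final tau now requires knowing *every* contributor's secret.
```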

Kate-Zaverucha-Goldberg (KZG) commitments are a way to commit succinctly to data and prove values at specific points. They are a great way to prove values for data blobs, since the prover cannot change the polynomial they are working with, and can only provide one valid proof for each polynomial, which will be accepted (or rejected, if untrue) by the verifier. While the commitments take some time to compute, they can be verified in batches, which makes verification faster and cheaper. The reasoning behind using KZG instead of just hashing is to future-proof the design and keep the proof size constant. It also fits into the eventual data-recovery part of DAS (for danksharding). Avail uses a similar approach, whereas Celestia instead uses a fraud-proof model.

As mentioned before, the consensus layer nodes of Ethereum have to verify the new blob data, in the form of a sidecar (read how Prysm plans to accommodate this here). A sidecar, in the blockchain sense, is an extra piece of software run in addition to the normal validator software (e.g. MEV-Boost). This sidecar batch-verifies signed blobs and stores them (not in normal block bodies, but separately from a normal block). These blobs are quite large (the target for each blob is around 125 kb). This data is not accessible to EVM execution; the EVM can only reference blobs via their hashes, not read or run their contents.

For rollups, the way blob transactions fit in doesn't change much. Instead of putting L2 block data into transaction calldata, they put the data in blobs handled by the consensus layer sidecars. The availability guarantees stay the same, but they now come from the consensus layer of Ethereum rather than the EVM itself. For optimistic rollups, in the case of fraud (and a needed fraud proof), the proof submission will require the full contents of the fraudulent blob to be submitted as part of calldata. It will then use the blob verification function to verify the data against the previously submitted hash, and then perform the fraud proof verification (IVG or similar) on that data, as is done today. This happens on the execution layer side with hashes (instead of KZG) to ensure future-proofing; the hashes act as references to the consensus layer blobs. The blob versioned hashes in the execution layer and the blob KZGs in the consensus layer can be cross-referenced with each other.
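A minimal sketch of how such a versioned hash is derived from a KZG commitment, following the scheme EIP-4844 describes (the 48-byte commitment below is a placeholder):

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """EIP-4844 style versioned hash: a 1-byte version prefix followed by the
    trailing 31 bytes of the SHA-256 of the KZG commitment."""
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

# The execution layer only ever sees these 32-byte hashes; the blob itself
# (and its KZG commitment) lives with the consensus-layer sidecar.
example_commitment = bytes(48)  # placeholder 48-byte commitment
print(kzg_to_versioned_hash(example_commitment).hex())
```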

For ZK (validity) rollups, this works slightly differently. Scroll has an excellent post on how things change post-EIP-4844, but we’ll do our best to summarize it here as well:

In Scroll’s case, they post 3 separate commitments to Ethereum itself;

  1. The list (block) of executed transactions on the L2 – This is posted as a data blob
  2. State root
  3. A proof of the state root being valid

Posting the list of executed transactions as a data blob makes sure the data is available (data availability), so the verifier contract has access to the commitment (or hash) of it. The state root is derived from that list, since executing it is what causes the state root to change. The validity proof that is posted also contains its own commitment showing that it represents the correct list of executed transactions and that the computation has been done correctly off-chain. The two separate commitments are checked against each other to prove that they represent the same thing – called a Proof of Equivalence. Another route to prove that the data blob is correct is using a proof that points to the correct data along with a proof of that data, but doing so for all blobs would be needlessly expensive.
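Roughly, a proof of equivalence works as follows (this is our sketch of the general technique, not necessarily Scroll's exact construction): derive a random evaluation point from both commitments, open both at that point, and check that the openings agree:

\[
z = H\!\left(C_{\text{KZG}},\, C_{\text{proof}}\right), \qquad
P_{\text{blob}}(z) = y \ \text{(opened via KZG)}, \qquad
P_{\text{circuit}}(z) = y \ \text{(proven in-circuit)} .
\]

If both polynomials agree at a randomly chosen point, the Schwartz–Zippel lemma implies they encode the same data except with negligible probability.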

Clients have previously needed to download (on average, lately in 2023) around 0.1 MB of block data per block. This would obviously increase if rollup traction increases (and more calldata is sent). With EIP-4844, the average block size would increase to around 1 MB (quite a significant increase) to allow for blobs to be downloaded and propagated, although keep in mind that most of this is temporary data that will eventually be pruned (as we'll get into later).

Source: Etherscan

The unique part of blob gas (blob_gas) is that it doesn't compete with the gas usage of native Ethereum transactions, but rather creates its own gas market. As such, the supply and demand of normal gas does not affect blobs. There are also a few changes that have to be made on the block building side (primarily off-chain with MEV-Boost) to be compatible with EIP-4844. Block builders need to be able to accept KZGs and put them into consensus layer blocks, in the sidecar (however, the building logic is not required for all clients).

The amount of data that will be posted is a large increase over what is currently being posted, which also means that with EIP-4844 must come EIP-4444 and data pruning on the consensus layer.

If we consider the roughly 3 or 6 × 128 KB per block (3-4x) increase in size per slot, then the already rather high total storage capacity required for node chain syncs (and archival nodes) becomes quite significant. As such, data pruning is needed to ensure that Ethereum stays decentralized, and that most people can run a node (if they so desire). What EIP-4444 brings to the table is a proposal that clients can prune (e.g. "delete") data that is "old". The exact timing isn't set in stone, but is likely at least a month. Do note that this is for the execution clients. For the consensus layer, because of the way rollups currently work, consensus clients could delete the blob data after a set amount of time has passed – this should, of course, be longer than the challenge period of optimistic rollups (likely 18 days). The data blob deletion is needed from the moment EIP-4844 comes online; otherwise data storage is quickly going to become a problem and could significantly increase hardware/DevOps costs and hurt the decentralization of the network. For the future, state expiry is also needed for other changes to Ethereum, such as danksharding and so on. So building this into the protocol early, and preparing for the future, is a good idea regardless (Celestia is also exploring similar pruning).
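As a back-of-the-envelope illustration (assuming the 3-blob target, 128 KiB blobs and 12-second slots), the blob data alone adds up quickly, which is why a bounded retention window matters:

\[
3 \times 128\,\text{KiB} \times \frac{86{,}400}{12}\ \text{slots/day} \approx 2.8\,\text{GB/day},
\qquad
2.8\,\text{GB/day} \times 18\ \text{days} \approx 50\,\text{GB of retained blob data}.
\]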

One worry that this does bring is the possibility of an increase in archival node centralization, since running these (archival nodes are already storing north of 10 TB of data) becomes increasingly hard with such a large increase in posted data. Archival nodes and indexers are still needed for various applications that utilise Ethereum and its rollups. One worry here is that the number of applications that utilise rollup data is likely to be smaller than the number using base Ethereum data, while the amount of data that needs to be stored increases tenfold for rollups. This could lead to a situation where, for most, running an API service that rollup application developers can pay to access old rollup transaction data might not cover the costs of running such a service. It might then only be worth it for large RPC and API companies (and rollups themselves) to offer such a service, making the entire historical data of the chain (beyond block headers) extremely centralized. The same worries also apply to indexers, who, because of the now large increase in data, might be forced to run only the fastest node software (leading to a loss of client diversity for indexing). Archival and indexer centralization is a worry that could definitely use some attention, as could how it changes post-EIP-4844. One interesting paper that looks into possible trust-minimized and scalable storage solutions for rollups is Information Dispersal with Provable Retrievability for Rollups by Kamilla Nazirkhanova, Joachim Neu and David Tse.

Source: Information Dispersal with Provable Retrievability for Rollups by Kamilla Nazirkhanova, Joachim Neu, David Tse. arXiv:2111.12323

Essentially, it is a storage and communication protocol using linear erasure-correcting codes and vector commitments with random sampling ("Data Availability" Sampling) to ensure provable retrievability of rollup data in a trust-minimised and scalable manner. What is important to note, though, is that historical storage has a lower trust model, since you only need a subset of the data storers to be honest actors. If you need a primer on the difference between data availability and data retrievability, then this post by the great Alex Beckett is perfect.

Something that is also interesting to consider is the way Verkle trees might help alleviate some of the state bloat that Ethereum and many others are plagued by. We've previously covered Verkle trees in Beyond IBC and in our blockchain commitments article; essentially, one of the properties that Verkle trees allow for is the idea of statelessness (to varying degrees). Statelessness is the concept of allowing nodes to disregard state they have no interest in (read more). One instance could be that state not updated for over a year could be disregarded by a node. However, this state remains revivable at a later time via polynomial commitments.

This would vastly reduce the resource requirements for nodes. Verkle trees are a critical component of this because they enable smaller proof sizes, which makes statelessness (either weak or full) feasible. These advantages are a result of parent nodes being vector commitments to their children rather than hashes of them. However, this comes at the cost of compute efficiency, as mentioned in one of our previous articles.

Another significant thing to consider is that the consensus layer nodes now need to gossip and propagate a larger amount of block data via the sidecar, which will put additional strain on the p2p network. There are various ideas about how best to approach this, such as having less hardware-intensive nodes not download the entire data set; if you want to stay up to date, I recommend reading the Eth R&D Discord.

The current target for the number of blobs in each Ethereum slot (12s) is 3, with the max being 6 (up from 2/4 previously). If there are more than 3 blobs in a block, the blob_gas price will increase; if there are fewer than the target, it will decrease. To make the calculations more efficient, this depends on the previous header and its excess blob_gas – avoiding having to download all the transactions in that block. Empty blobs are "banned", since they would still need to be gossiped (to avoid breaking the protocol), and also to avoid DDoS. Blobs are priced dynamically, similar to EIP-1559: roughly a +12.5% price increase at max blobs and -12.5% at 0 blobs.
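A minimal sketch of that pricing mechanism, following the spec's approach of tracking excess blob gas and deriving the blob gas price from it (the constants mirror the EIP-4844 draft at the time of writing, so treat them as indicative):

```python
GAS_PER_BLOB = 2**17                      # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BLOB_GASPRICE = 1
BLOB_GASPRICE_UPDATE_FRACTION = 3338477   # tuned so max-vs-target deviation moves the price ~12.5%

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Carry forward how far above the target previous blocks have been."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i, output, numerator_accum = 1, 0, factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_gasprice(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_GASPRICE, excess_blob_gas, BLOB_GASPRICE_UPDATE_FRACTION)

# Sustained full blocks (6 blobs vs a 3-blob target) push excess blob gas up, so
# the price rises exponentially; emptier blocks let it decay back to the minimum.
```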

Another worry is that stakers with bandwidth or hardware limitations (likely to be home stakers, the "bastion" of decentralisation) will face issues. The upload and download requirements might leave some unable to participate in consensus effectively if they can't upload/download quickly enough (or store that data).

In terms of the fee market, we now have two separate fee markets for Ethereum blocks – the normal gas fee with block gas limits, and blob transactions (with their own limits). Block builders will now have to avoid hitting both limits. Block building is generally moving towards "super-builders" that are highly specialized and can run very complex computations, so not much changes here. Regardless, the most significant driver of block-builder specialization is MEV. We would again recommend Toni Wahrstätter's Mevboost.pics for an in-depth look at MEV in Ethereum (and MEV-Boost).

Some "smaller" rollups are likely not going to be able to fill up a single blob, which might mean a waste of possible "blob space". Aggregating blobs across rollups is difficult, as it becomes an oracle problem (unless the power to aggregate is given to the Ethereum validators). One thing that is certain is that there's going to be a lot of added complexity. Despite that, the verdict is apparent – it's arguably going to be a positive force for a rollup-centric future on Ethereum. The added complexity is likely to be addressed as we go, and also emphasizes that the validator set size is likely to be lowered over time via different routes – something we would also need for single slot finality, among other things. This will eventually make Ethereum much more efficient as a settlement layer, and lead to better experiences for cross-domain rollup activities.

While EIP-4844 might improve the efficiency of DA finality for rollups on Ethereum, it is unlikely to have any real effect, for now, on perceived finality from the view of the verifying bridge. Ethereum blockspace is still highly contested and, therefore, valuable. Verifying validity proofs is still expensive, and playing an interactive fraud proof game or posting a large fraud proof and rolling back is still expensive on-chain. If you're interested in rollup finality, I recommend watching or reading my presentation on The Semantics of Rollup Finality.

There are still some open questions being worked on regarding EIP-4844, such as;

  • Archival nodes/indexer specs (indexers may have to run reth to get state difference in a timely manner?)
  • Treating state bloat
  • Network stability for different clients and blob gossiping
  • Fee market with an increase in rollups
  • Decentralized sequencing networks fighting over fees for blobs (blob auctions?)
  • Async off-chain communication between rollups about when to post blobs, given per-block limits?

*If you want an overview of progression and readiness, we recommend the checklist found here.*

*A fee market analysis can be found here. The exact specification for the blob transaction pool is yet to be decided, but there's some great writing on the considerations here.*

EIP-4844 will likely arrive with the Cancun upgrade. A testnet explorer for blobs can be found here.

As we’ve covered previously, there are still significant centralization concerns and trust issues within the way rollups operate today. Two of the issues are that of multisigs and smart contract risks.

What if we forgo the smart contracts, and instead trust social consensus and nodes in the system? Keep in mind, though, that proper incentivisation and mechanism design is still incredibly important.

Instead of having smart contracts operate as light clients, you could instead run light nodes on the underlying layer and utilise Namespaced Merkle Trees (NMTs), which we covered briefly in our Data Structures in Blockchain Commitments article back in January. However, we feel it warrants a closer look, to explain exactly why it is important.

Namespaced Merkle Trees (NMTs) are a data structure used to organize and verify large collections of data on a blockchain. One of the key benefits of NMTs is their ability to organize data into a hierarchical structure, wherein you can efficiently verify individual or separate pieces of data that an application requests. As a user, you only need to retrieve the corresponding leaf node and its parent nodes, rather than the entire dataset. Every chain ("rollup") on top of a DA layer using NMTs has its own namespace, under which its data is merkleized and sorted. The final result is one short Merkle root, where each rollup has its own subtree (namespace) that it can pull and watch data from. This is especially relevant for blockchains with potentially more blockspace per block (while enabling light clients to be first-class citizens through DAS), as it allows you to pull data without having to pull the entire block.

Nodes in the NMT are ordered by namespace IDs, and the hash function is modified so that every node in the tree includes the range of namespaces of all its child nodes. If an application or rollup requests data for a specific namespace, the DA layer then supplies the datasets relevant to that application, as identified by its unique namespace ID and the provided nodes in the tree. This allows us to check that the provided data has been made available in DA layer blocks, hence the name Data Availability (DA). NMTs can be seen as a way to allow efficient, verifiable queries of blockchain data for specific applications. If you're interested in learning the background of NMTs, we highly recommend this article.
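A simplified sketch of that namespaced hashing idea, loosely modelled on Celestia's NMT construction (namespace sizes, domain-separation bytes and the example data here are illustrative assumptions):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Node:
    min_ns: bytes   # smallest namespace ID covered by this subtree
    max_ns: bytes   # largest namespace ID covered by this subtree
    digest: bytes

def leaf(namespace: bytes, data: bytes) -> Node:
    # A leaf covers exactly its own namespace.
    h = hashlib.sha256(b"\x00" + namespace + data).digest()
    return Node(namespace, namespace, h)

def parent(left: Node, right: Node) -> Node:
    # An inner node commits to its children's digests *and* namespace ranges,
    # so a proof for one namespace can show that none of its data was omitted.
    h = hashlib.sha256(
        b"\x01"
        + left.min_ns + left.max_ns + left.digest
        + right.min_ns + right.max_ns + right.digest
    ).digest()
    return Node(min(left.min_ns, right.min_ns), max(left.max_ns, right.max_ns), h)

# Leaves must be sorted by namespace; a rollup then only fetches the subtree
# whose namespace range matches its own ID.
leaves = [leaf(b"\x01", b"rollup-A block 1"), leaf(b"\x01", b"rollup-A block 2"),
          leaf(b"\x02", b"rollup-B block 1"), leaf(b"\x03", b"rollup-C block 1")]
root = parent(parent(leaves[0], leaves[1]), parent(leaves[2], leaves[3]))
print(root.min_ns.hex(), root.max_ns.hex(), root.digest.hex())
```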

Rollup nodes can then have a fork-choice rule embedded in them that includes transactions sent to the rollup's namespace on the DA layer every N blocks – enabling force inclusion (a forced inbox). Ethereum rollup smart contracts essentially work like a light client; in this setup, that role is instead played by a light node that can verify data from NMTs.

Whichever road is chosen, it's clear that many ecosystems are trying to solve similar problems in different ways – most of them leading to a cross-domain world. There are others trying to solve everything within a single network, which is also a valid approach – but it does constrain you in some ways.

Even though the blockchain world is slowly moving towards a cross-domain one, there are still many issues to be solved. Some of these issues are easier to solve than others; some are alleviated by shared DA, others need a succinct global state base layer. In part two of the cross-domain thesis, we aim to cover some of the innovations that will help create a more trust-minimised world, and others that will help alleviate many of the user experience concerns of our time.

We have intent to go deep.

Part two will cover; state/storage proofs, VM extensions, intents, optimal cross-domain trading and lending and much more.


Thanks to Mathijs van Esch (Maven11), Pim Swart (Maven11), Dougie De Luca (Figment Cap), Jim Parillo (Figment Cap), Dmitriy (1kx), William (Volt) and Walt for review or discussions leading to the release of this article. Writer can be found at: 0xrainandcoffee