# Technical Diff Overview

Supporting fraud proofs and secure batch posting required modifications across the stack. The following in-depth summary describes the key changes made to core Arbitrum to enable compatibility with EigenDA.
## Batch Submission & Derivation
### Nitro
- Extended the batch poster to take an `eigenDAWriter` struct that writes blobs to DA via eigenda-proxy
  - Embeds ABI calldata for tx submissions to `SequencerInbox`
- Extended inbox message derivation to support type processing for an EigenDA batch type (i.e., the `0xed` prefix)
- Compute `batchHeaderHash` locally using `batchHeader` fields when querying blobs from the eigenda-proxy
- Updated the batch poster config to support 2 MB batches when using EigenDA
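The derivation-side type dispatch can be sketched as follows. This is a minimal illustration of the `0xed` prefix convention described above; `eigenDAMessageHeaderFlag`, `buildEigenDABatchMessage`, and `isEigenDABatch` are illustrative names, not identifiers from the nitro fork:

```go
package main

import (
	"bytes"
	"fmt"
)

// eigenDAMessageHeaderFlag marks a sequencer batch whose payload is an
// EigenDA certificate rather than raw calldata or a 4844 blob reference.
const eigenDAMessageHeaderFlag byte = 0xed

// buildEigenDABatchMessage prepends the 0xed type byte to a serialized
// EigenDA certificate so inbox derivation can dispatch on the prefix.
func buildEigenDABatchMessage(serializedCert []byte) []byte {
	msg := make([]byte, 0, 1+len(serializedCert))
	msg = append(msg, eigenDAMessageHeaderFlag)
	return append(msg, serializedCert...)
}

// isEigenDABatch mirrors the derivation-side check: type processing keys
// off the first byte of the sequencer message payload.
func isEigenDABatch(msg []byte) bool {
	return len(msg) > 0 && msg[0] == eigenDAMessageHeaderFlag
}

func main() {
	cert := []byte{0x01, 0x02, 0x03} // stand-in for an ABI-encoded certificate
	msg := buildEigenDABatchMessage(cert)
	fmt.Println(isEigenDABatch(msg), bytes.Equal(msg[1:], cert)) // true true
}
```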
### Nitro Contracts
- Extended `SequencerInbox.sol` to support a new entry-point function for processing EigenDA batch types (i.e., `addSequencerL2BatchFromEigenDA`)
  - Verifies certificates against a stateful dependency, the `RollupManager.sol` contract, which handles communication with `EigenDAServiceManager.sol`
  - Updated the data hash computation, where `hash = keccak256(msgHeader, bytePrefixFlag, abi.Pack(commitment.X, commitment.Y, blob.len()))`
- Updated forge tests to verify the inbox submission flow
- Updated deployment scripts to deploy a `RollupManager` contract, which lives as part of the `RollupDeployer` contract parameters and is set in `SequencerInbox` storage after deployment
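A minimal sketch of the packed data-hash layout above, with sha256 standing in for keccak256 to keep the example dependency-free; the function name, the 32-byte big-endian coordinate encoding, and the 4-byte length type are assumptions:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// certDataHash sketches the inbox data-hash layout:
// hash = keccak256(msgHeader, bytePrefixFlag, abi.Pack(commitment.X, commitment.Y, blob.len())).
// sha256 stands in for keccak256 here; the point is the packed layout
// (header bytes, one flag byte, two 32-byte coordinates, blob length).
func certDataHash(msgHeader []byte, x, y [32]byte, blobLen uint32) [32]byte {
	packed := make([]byte, 0, len(msgHeader)+1+32+32+4)
	packed = append(packed, msgHeader...)
	packed = append(packed, 0xed) // byte prefix flag marking an EigenDA batch
	packed = append(packed, x[:]...)
	packed = append(packed, y[:]...)
	packed = binary.BigEndian.AppendUint32(packed, blobLen)
	return sha256.Sum256(packed)
}

func main() {
	var x, y [32]byte
	x[31], y[31] = 1, 2
	fmt.Printf("%x\n", certDataHash([]byte{0x00}, x, y, 1024))
}
```

Binding the blob length into the hash matters for the same reason it does in the preimage oracle: two blobs sharing a commitment prefix but differing in size must produce distinct data hashes.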
### Nitro TestNode
- Updated `config.ts` to enable the EigenDA system flow
- Updated `docker-compose.yml` to use the eigenda-proxy dependency with mem-store
- Updated the core bash script to deploy and tear down the eigenda-proxy resource
### Nitro Go-Ethereum
- Updated system configs to use the eigenda field
## Fraud Proofs & Stateless Block Execution
### Nitro
- Default-encode blobs (i.e., modulo encoding, length-prefix encoding, padding to the nearest power of 2) before pre-image injection to ensure data is in the proper format for generating KZG commitments and witness proofs
- Decode blobs to their raw binary or nitro-compressed batch representation when reading
- Generate pre-image hashes using the length and commitment fields provided by the EigenDA certificate, which is persisted in the sequencer inbox
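The encoding steps above can be sketched as follows, under the assumption that "modulo encode" means packing 31 payload bytes per 32-byte BN254 field element with a zeroed high byte (so every element stays below the field modulus); the function names and the 4-byte length prefix are illustrative:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// nextPowerOf2 rounds n up to the nearest power of two, matching the
// padding step applied before KZG commitment generation.
func nextPowerOf2(n int) int {
	p := 1
	for p < n {
		p <<= 1
	}
	return p
}

// encodeBlob sketches the default encoding applied before pre-image
// injection: a length prefix, then the payload split into 31-byte chunks,
// each placed in a 32-byte slot with a zero high byte so every field
// element is a valid BN254 scalar, then zero-padded so the element count
// is a power of two.
func encodeBlob(data []byte) []byte {
	// 4-byte big-endian length prefix so decoding can strip the padding.
	prefixed := make([]byte, 4, 4+len(data))
	binary.BigEndian.PutUint32(prefixed, uint32(len(data)))
	prefixed = append(prefixed, data...)

	// Pack 31 payload bytes per 32-byte field element, high byte zero.
	var encoded []byte
	for i := 0; i < len(prefixed); i += 31 {
		end := i + 31
		if end > len(prefixed) {
			end = len(prefixed)
		}
		slot := make([]byte, 32)
		copy(slot[1:], prefixed[i:end])
		encoded = append(encoded, slot...)
	}

	// Zero-pad the element count up to a power of two.
	target := nextPowerOf2(len(encoded)/32) * 32
	return append(encoded, make([]byte, target-len(encoded))...)
}

func main() {
	enc := encodeBlob([]byte("hello"))
	fmt.Println(len(enc), enc[0] == 0) // 32 true
}
```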
### Arbitrator
- Extended the arbitrator to use an EigenDA preimage type which is targeted during transpilation from host Go code (i.e., `WavmReadEigenDAHashPreimage`)
- Embed mainnet SRS values into the test-files subdirectory (i.e., `g1.point`, `g2.point`, `g2.point.powerOf2`)
- Updated machine proof serialization logic to target `prove_kzg_preimage_bn254` when `preimage.type() == PreimageType::EigenDAHash`
- Added custom proof generation logic for the `READPREIMAGE` opcode with the `EigenDAHash` type, which computes a machine state proof containing a KZG proof using a point opening at the 32-byte offset and serializes it into the machine state proof buffer
- Extended E2E proof equivalence tests to serialize machine state proofs using EigenDA preimage types and to ensure that post-states, when one-step proven on-chain, match the post-state machine hashes generated by the off-chain arbitrator opcode test
- Built a kzg-bn254 library for performing KZG operations over the BN254 curve in Rust
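The point-opening step can be illustrated with a small sketch, assuming `READPREIMAGE` reads at 32-byte-aligned offsets and the blob is viewed as a vector of 32-byte field elements; the helper name is hypothetical:

```go
package main

import "fmt"

// openingIndex maps a READPREIMAGE offset to the index of the field
// element the KZG proof must open. Because each field element occupies a
// 32-byte slot, the opening index is simply offset / 32; unaligned or
// out-of-range offsets are rejected.
func openingIndex(offset, numFieldElements uint64) (uint64, bool) {
	if offset%32 != 0 {
		return 0, false // READPREIMAGE offsets are 32-byte aligned
	}
	idx := offset / 32
	return idx, idx < numFieldElements
}

func main() {
	idx, ok := openingIndex(96, 128)
	fmt.Println(idx, ok) // 3 true
}
```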
### Validator
Updated the replay script (`replay/main.go`) to use an `eigenDAReader` when populating the pre-image oracle for stateless block execution. EigenDA preimage hashes are computed as:

`keccak256(commitment.X, commitment.Y, preimage.len())`

Computing the length as part of the preimage hash is necessary to remove a trust assumption on the one-step-proof challenger: unlike 4844 blobs, EigenDA preimages are variadic in size.
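Why the length must be bound into the hash can be seen with a small sketch; sha256 stands in for keccak256 to keep it dependency-free, and the 32-byte big-endian coordinate encoding and 8-byte length encoding are assumptions:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// preimageHash sketches keccak256(commitment.X, commitment.Y, preimage.len()).
// Because EigenDA preimages are variadic in size, the claimed length is
// bound into the hash; otherwise a challenger could assert a different
// length for the same commitment.
func preimageHash(x, y [32]byte, length uint64) [32]byte {
	buf := make([]byte, 0, 72)
	buf = append(buf, x[:]...)
	buf = append(buf, y[:]...)
	buf = binary.BigEndian.AppendUint64(buf, length)
	return sha256.Sum256(buf)
}

func main() {
	var x, y [32]byte
	x[31], y[31] = 7, 11
	// Same commitment, different claimed lengths -> different preimage hashes.
	fmt.Println(preimageHash(x, y, 1024) != preimageHash(x, y, 2048)) // true
}
```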