Lighthouse

BeaconChain2

From 200+ methods to zero. 7,317 lines to 125.
Our leanest BeaconChain, by far.*

*The top-level BeaconChain<T> struct went from 7,317 to 125 lines. Total crate line count increased by ~5,800 lines (51.8k → 57.6k) — the god object's logic now lives in focused, independently testable components. No methods were harmed in the making of this refactor. Several were rehomed.

The Problem

One struct. 7,317 lines. 40+ fields. 200+ methods. Everyone gets Arc<BeaconChain> and can reach everything — which sounds convenient until you try to test, review, or safely change any of it.

Low confidence in generated code

No unit-testable components. Every change requires understanding 7,000 lines to assess impact.

Slow manual review process

No type-enforced boundaries. Reviewers trace the god object to understand what a change actually touches.

Hard to iterate without risk

Attestations, block production, fork choice all share one type and its lock scopes. Isolated testing is impossible.

Before — God Object

BeaconChain<T>
op_pool · fork_choice · store · observed_* · slot_clock · exec_layer · event_handler · kzg · ... 40+ fields

Everyone gets access to everything

After — Components + Composition

Focused components own their state. Callers hold typed refs, no logic. Testable in isolation.

See the Structure diagram for the full component map.
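In toy form, assuming nothing about Lighthouse's real definitions, the pattern looks like this: a component owns its state and its logic, and an orchestrator composes components behind Arc while adding no state of its own. `OperationsPool` and `Importer` here are invented stand-ins, not actual crate types.

```rust
use std::sync::{Arc, Mutex};

// Toy component (not Lighthouse's actual OperationsManager): it owns its
// state (a pool of observed exits) and its logic (dedup-and-insert).
pub struct OperationsPool {
    exits: Mutex<Vec<u64>>, // validator indices, toy payload
}

impl OperationsPool {
    pub fn new() -> Self {
        Self { exits: Mutex::new(Vec::new()) }
    }

    // Business logic lives on the component, not on a god object.
    // Returns false for duplicates, mirroring observed_* dedup.
    pub fn insert_exit(&self, validator_index: u64) -> bool {
        let mut exits = self.exits.lock().unwrap();
        if exits.contains(&validator_index) {
            return false;
        }
        exits.push(validator_index);
        true
    }

    pub fn len(&self) -> usize {
        self.exits.lock().unwrap().len()
    }
}

// Toy orchestrator: composes components via Arc refs and delegates.
pub struct Importer {
    pub ops: Arc<OperationsPool>,
}

impl Importer {
    pub fn import_exit(&self, validator_index: u64) -> bool {
        self.ops.insert_exit(validator_index)
    }
}
```

The orchestrator never reaches around the component into its `Mutex`; the lock scope stays private to the component that owns it.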

What This Unlocks

Not a workflow prescription. A foundation for new ones.

Delegatable implementation

Scope a task to one component. No need to understand the rest of the chain.

Unit-testable without the harness

Most components: construct directly, no harness, no store, no fork choice. Fast tests that validate before integration. Orchestrators still use the harness.
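As a sketch with invented types (not the real component APIs), a harness-free unit test reduces to construct, feed, assert:

```rust
use std::collections::HashSet;

// Toy stand-in for a focused component: sync-committee-style dedup of
// (validator, slot) observations. No store, no fork choice, no harness.
pub struct ObservedContributors {
    seen: HashSet<(u64, u64)>, // (validator_index, slot)
}

impl ObservedContributors {
    pub fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true the first time a (validator, slot) pair is observed.
    pub fn observe(&mut self, validator_index: u64, slot: u64) -> bool {
        self.seen.insert((validator_index, slot))
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn duplicate_observation_is_rejected() {
        // Direct construction -- no BeaconChainHarness required.
        let mut observed = ObservedContributors::new();
        assert!(observed.observe(3, 100));
        assert!(!observed.observe(3, 100)); // same pair, rejected
        assert!(observed.observe(3, 101)); // new slot, accepted
    }
}
```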

Faster review

Smaller components mean finding things is faster and scope is clearer. Once spec-aligned test cases are human-approved, implementation details can be validated by tests rather than line-by-line review.

Safe iteration

Refactor internals without breaking unrelated subsystems. Type-enforced boundaries replace implicit ones.

Structure

Same name. Zero methods. All logic lives in components and orchestrators.

COMPONENTS (own state + logic)
  OperationsManager · AttestationManager · SyncCommitteeMgr · DataAvailabilityMgr · CanonicalHead · ExecutionManager · ValidatorQuerySvc

ORCHESTRATORS (compose components, &self methods)
  BlockImporter<T> · BlockProducer<T>

CALLERS (target: hold refs, no logic)
  HTTP API Context · NetworkBeaconProcessor · Sync Manager

TESTING
  Unit tests — construct component, pass test state, assert (most components)
  Acceptance tests — event-based via sync manager TestRig
  Integration tests — existing BeaconChainHarness (still works)

Results

7,317 lines to 125. 200+ methods to zero. 7 components + 2 orchestrators, each unit-testable.

Key metrics

beacon_chain.rs lines: 7,317 → 125
Fields: 40+ → 21 (9 component Arcs · 5 infra · 5 genesis · 1 kzg · 1 migrator)
Methods: 200+ → 0 (on the top-level type)
Components: 7 + 2 orchestrators
New tests: +145 (component tests · most harness-free)

Verification

CI
All existing tests pass. +145 new component tests enable targeted testing that wasn't possible with the monolith.
Stability
4-node Lighthouse + Geth Kurtosis testnet. 50+ epochs finalized, 0 errors, chain healthy throughout.
Performance
Comparable to unstable. Detailed benchmarking on dedicated hardware pending — local testnet measurements show no clear regression.

Coverage: monolith vs components

Same test suite (ef_tests + fork_from_env, Fulu fork), same cargo llvm-cov. One blob of coverage becomes per-component visibility.

Module                                 Functions   Lines         Coverage
unstable — beacon_chain.rs             262/363     3,458/4,273   80.9%

Branch — broken down by component:

AttestationManager                     24/25       416/461       90.2%
OperationsManager                      13/14       126/136       92.7%
SyncCommitteeManager                   8/8         100/112       89.3%
BlockProducer                          47/62       999/1,136     88.0%
ExecutionManager                       11/12       69/81         85.2%
CanonicalHead                          73/79       749/896       83.6%
BlockImporter                          69/97       1,006/1,208   83.3%
DataAvailabilityMgr                    25/31       214/261       82.0%
StateQuery (utility, not a component)  60/67       619/675       91.7%
beacon_chain.rs (after)                4/5         12/13         92.3%
Browse the branch on GitHub · Coverage report · Unstable baseline

Known Limitations

NetworkBeaconProcessor holds direct component refs for most accesses but retains Arc<BeaconChain<T>> for ~25 external function calls. HTTP API and sync callers are next.

Design Principles

Born from specific BeaconChain<T> pain points.

1. Favour composition over god objects

Pass what you need, not what has what you need.

produce_block_on_state uses 4 domain deps out of 40+ fields. See the block production example.
2. Separate business logic from infrastructure

Components own verification logic. Infrastructure (chain state, slot, events) comes from the caller.

Today, verification methods reach into CanonicalHead and SlotClock internally, a coupling that makes isolated testing impossible.
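One way to express the target shape, using illustrative types rather than Lighthouse's actual signatures (the context fields and the toy timing rule are invented for the example): the caller reads the clock and the head, and hands plain values to a pure verification function.

```rust
// Infrastructure the caller supplies; the component never reads a clock
// or a head cache itself. Toy types for illustration only.
pub struct VerificationContext {
    pub current_slot: u64,   // read from SlotClock by the caller
    pub finalized_slot: u64, // read from CanonicalHead by the caller
}

#[derive(Debug, PartialEq)]
pub enum ExitError {
    TooEarly { earliest: u64 },
    BeforeFinalization,
}

/// Pure business logic: deterministic given its inputs, so a unit test
/// can pass any slot/head values without spinning up a chain. The rule
/// itself is a made-up placeholder, not the spec's exit conditions.
pub fn verify_exit_timing(
    exit_slot: u64,
    ctx: &VerificationContext,
) -> Result<(), ExitError> {
    if ctx.current_slot < exit_slot {
        return Err(ExitError::TooEarly { earliest: exit_slot });
    }
    if exit_slot < ctx.finalized_slot {
        return Err(ExitError::BeforeFinalization);
    }
    Ok(())
}
```

Because the context is a plain value, a test can cover every timing branch with three constructor calls instead of three chain setups.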
3. Components are testable in isolation

Construct, pass state, assert. Most components need no harness, no store, no fork choice. Orchestrators (BlockImporter, BlockProducer) still use the harness for integration-level validation due to cross-component dependencies.

Today you spin up the full chain just to test verification logic. Deterministic components with parameterized context fix that for 5 of 7 components.
4. Fork choice write locks are concentrated

Only block import, head recomputation, attestation application, and EL callbacks acquire write locks.

canonical_head.rs lock ordering is where deadlocks happen. Today any Arc<BeaconChain> holder can reach it. Now that boundary is enforced.
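A minimal sketch of that boundary, with toy types (not the real canonical_head.rs): the RwLock stays private, reads are open to all, and the write lock is taken only inside named operations.

```rust
use std::sync::RwLock;

// Toy CanonicalHead: the RwLock is a private field, so only methods
// defined here can acquire the write lock. Arbitrary holders of a
// reference get read-only access -- the type enforces the boundary.
pub struct CanonicalHead {
    head_slot: RwLock<u64>,
}

impl CanonicalHead {
    pub fn new(slot: u64) -> Self {
        Self { head_slot: RwLock::new(slot) }
    }

    /// Anyone may read the head.
    pub fn head_slot(&self) -> u64 {
        *self.head_slot.read().unwrap()
    }

    /// Write lock acquired only inside named operations (block import,
    /// head recompute, ...), never handed out to callers.
    pub fn on_block_imported(&self, slot: u64) {
        let mut head = self.head_slot.write().unwrap();
        if slot > *head {
            *head = slot;
        }
    }
}
```

With the lock private, auditing deadlock risk means reading this one file's methods rather than every Arc<BeaconChain> call site.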

How It Works

Block production as a worked example. Component reference below.

What block production actually needs

Looks like it needs the whole BeaconChain. It doesn't.

Domain
  &OperationPool: pull attestations, exits, slashings, sync aggregate, BLS changes
  CanonicalHead: head slot, finalized checkpoint, forkchoice params (read-only)
  ExecutionLayer: get payload from EL, fee recipient, gas limit
  AttestationManager: early attester cache, attestation packing

Infrastructure
  &ChainSpec: spec constants (everything needs this)
  BlockProductionConfig: ~5 flags: paranoid mode, size limits, builder fallback
  SlotClock: current slot
  TaskExecutor: spawn blocking work

4 domain deps vs 40+ fields. Infrastructure deps (spec, clock, executor) are Rust making implicit globals explicit.

BlockProducer<T> — constructor injection with &Arc<Self>
block_production/mod.rs (simplified) — struct owns its Arcs, methods use &Arc<Self>
/// Owns the subsystems required to produce beacon blocks.
/// Constructed once by the builder; methods use &Arc<Self>.
pub struct BlockProducer<T: BeaconChainTypes> {
    spec: Arc<ChainSpec>,
    store: BeaconStore<T>,
    config: Arc<ChainConfig>,
    op_pool: Arc<OperationPool<T::EthSpec>>,
    canonical_head: Arc<CanonicalHead<T>>,
    execution_manager: Arc<ExecutionManager<T>>,
    attestation_manager: Arc<AttestationManager<T::EthSpec>>,
    // ... (20 fields total — key domain deps shown)
}

impl<T: BeaconChainTypes> BlockProducer<T> {
    pub async fn produce_block_on_state(
        self: &Arc<Self>,
        state: BeaconState<T::EthSpec>,
        produce_at_slot: Slot,
        randao_reveal: Signature,
        // ...
    ) -> Result<BeaconBlockResponseWrapper<T::EthSpec>> {
        // All deps accessed via self.* -- no god object needed
        let attestations = self.op_pool.get_attestations(&state, &self.spec)?;
        let health = is_healthy(&self.canonical_head, /* ... */)?;
        // ...
    }
}
Components — what each one owns and holds

Owned state and shared references for each component.

OperationsManager

Voluntary exits, slashings, BLS changes. Verification, dedup, pool insertion.

observed_voluntary_exits · observed_proposer_slashings · observed_attester_slashings · observed_bls_to_execution_changes · spec · op_pool

AttestationManager

Attestation production, verification, aggregation, pool management.

naive_aggregation_pool · observed_attestations · observed_aggregators · early_attester_cache · shuffling_cache · spec · genesis_block_root

SyncCommitteeManager

Sync committee message/contribution verification, aggregation pool.

naive_sync_aggregation_pool · observed_sync_contributions · observed_sync_contributors · observed_sync_aggregators · spec · op_pool

DataAvailabilityManager

Blob/data column processing, custody, DA boundary calculations.

data_availability_checker · kzg · spec · store

ExecutionManager

Execution layer integration, proposer preparation, forkchoice updates.

beacon_proposer_cache · spec · execution_layer

ValidatorQueryService

Validator pubkey lookups, committee cache access.

validator_pubkey_cache

CanonicalHead

Fork choice, cached head block/state, head recomputation lock, fork choice persistence.

fork_choice · cached_head · recompute_head_lock · fork_choice_signal_tx · store

BlockImporter<T>

Orchestrator for block, blob, and data-column import. Owns import caches and observation tracking.

observed_block_producers · observed_slashable · observed_blob_sidecars · observed_column_sidecars · event_handler · validator_monitor · canonical_head · attestation_manager · data_availability_manager · validator_query · execution_manager · sync_committee_manager

BlockProducer<T>

Orchestrator for block production: state loading, partial block assembly, execution payload integration.

op_pool · canonical_head · execution_manager · attestation_manager
Dependency injection — target caller architecture

Target state. NetworkBeaconProcessor has been migrated to hold direct component refs. HTTP API and sync still hold Arc<BeaconChain<T>> — migration is incremental.

The following shows what each caller actually needs, not what it currently holds.

NetworkBeaconProcessor

Gossip handlers, sync

OperationsManager · AttestationManager · SyncCommitteeManager · DataAvailabilityMgr · BlockImporter · CanonicalHead

HTTP API — Pool Endpoints

/eth/v1/beacon/pool/*

OperationsManager · CanonicalHead

HTTP API — Validator Endpoints

/eth/v1/validator/*

ValidatorQueryService · ExecutionManager · CanonicalHead
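In sketch form, with stand-in component types (the method names here are invented), a migrated caller is just a bundle of typed refs to exactly the components its endpoints need, with no logic of its own and no Arc<BeaconChain<T>>.

```rust
use std::sync::Arc;

// Toy components standing in for OperationsManager and CanonicalHead.
pub struct OperationsManager;
impl OperationsManager {
    pub fn pool_size(&self) -> usize {
        0 // toy value; the real pool reports its contents
    }
}

pub struct CanonicalHead;
impl CanonicalHead {
    pub fn head_slot(&self) -> u64 {
        0 // toy value; the real head tracks fork choice
    }
}

// A caller in the target architecture: typed refs only. Adding a field
// here is a visible, reviewable change to the caller's capability set.
pub struct PoolEndpoints {
    pub operations: Arc<OperationsManager>,
    pub canonical_head: Arc<CanonicalHead>,
}

impl PoolEndpoints {
    /// Handlers delegate to components; the caller stays logic-free.
    pub fn status(&self) -> (usize, u64) {
        (self.operations.pool_size(), self.canonical_head.head_slot())
    }
}
```

Contrast with the god object: a `PoolEndpoints` handler cannot touch block import or the execution layer, because it simply does not hold them.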