
fix: pass prepareSlot to fork choice head for Gloas FULL vs EMPTY tie-breaker#9164

Open
twoeths wants to merge 2 commits into unstable from te/prepare_next_slot_fc_slot

Conversation


@twoeths twoeths commented Apr 3, 2026

Motivation

Starting from Gloas (ePBS), fork choice has FULL vs EMPTY block variants with tie-breaker logic that depends on currentSlot (see get_payload_status_tiebreaker and modified get_head). When prepareNextSlot runs at ~67% of slot N to prepare for slot N+1, it should pass prepareSlot (N+1) to recomputeForkChoiceHead() so the tie-breaker correctly evaluates for the next slot rather than the current wall-clock slot.

This is backward compatible: pre-Gloas blocks are always PayloadStatus.FULL, so the tie-breaker is a no-op. The slot param defaults to fcStore.currentSlot when not provided, preserving existing behavior for all other callers.

Description

Thread an optional slot parameter through recomputeForkChoiceHead() → updateAndGetHead() → updateHead(); applyScoreChanges falls back to fcStore.currentSlot when it is not provided. Only prepareNextSlot passes it (as prepareSlot).
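A minimal sketch of the fallback described above, using simplified stand-in types (`slotForTiebreaker` is a hypothetical helper for illustration, not Lodestar's actual API):

```typescript
type Slot = number;

interface FcStore {
  currentSlot: Slot;
}

// The optional slot overrides the wall-clock slot used by the Gloas
// FULL vs EMPTY tie-breaker; omitting it preserves existing behavior
// for all other callers.
function slotForTiebreaker(store: FcStore, slot?: Slot): Slot {
  return slot ?? store.currentSlot;
}

const store: FcStore = {currentSlot: 100};
console.log(slotForTiebreaker(store, 101)); // prepareNextSlot during slot 100 → 101
console.log(slotForTiebreaker(store)); // every other caller → 100
```

The `??` fallback is why the change is backward compatible: callers that never pass a slot see exactly the pre-Gloas behavior.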

AI Assistance Disclosure

Used Claude Code for research (other client implementations) and code generation.

…-breaker

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an optional slot parameter to the fork-choice head recomputation logic, allowing the prepareNextSlot scheduler to override the current slot for tie-breaker logic. While this enables correct proposer head prediction, feedback suggests that updating the global canonical head state with a future slot value may cause inconsistencies for other node functions, such as attestation production, which rely on the head corresponding to the current wall-clock slot.

Comment thread: packages/fork-choice/src/forkChoice/forkChoice.ts

github-actions bot commented Apr 3, 2026

⚠️ Performance Alert ⚠️

Possible performance regression was detected for some benchmarks.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold.

Benchmark suite Current: bda0662 Previous: d4d0b21 Ratio
Full columns - reconstruct all 20 blobs 1.8517 ms/op 560.51 us/op 3.30
phase0 getAttestationDeltas - 250000 worstcase 19.386 ms/op 5.4304 ms/op 3.57
Full benchmark results
Benchmark suite Current: bda0662 Previous: d4d0b21 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 890.67 us/op 912.39 us/op 0.98
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 39.591 us/op 39.413 us/op 1.00
BLS verify - blst 714.92 us/op 713.32 us/op 1.00
BLS verifyMultipleSignatures 3 - blst 1.3745 ms/op 1.3743 ms/op 1.00
BLS verifyMultipleSignatures 8 - blst 2.1827 ms/op 2.1981 ms/op 0.99
BLS verifyMultipleSignatures 32 - blst 7.0090 ms/op 6.9886 ms/op 1.00
BLS verifyMultipleSignatures 64 - blst 13.367 ms/op 13.545 ms/op 0.99
BLS verifyMultipleSignatures 128 - blst 26.064 ms/op 25.974 ms/op 1.00
BLS deserializing 10000 signatures 639.52 ms/op 641.47 ms/op 1.00
BLS deserializing 100000 signatures 6.3864 s/op 6.5157 s/op 0.98
BLS verifyMultipleSignatures - same message - 3 - blst 712.66 us/op 793.18 us/op 0.90
BLS verifyMultipleSignatures - same message - 8 - blst 950.09 us/op 906.26 us/op 1.05
BLS verifyMultipleSignatures - same message - 32 - blst 1.5691 ms/op 1.5957 ms/op 0.98
BLS verifyMultipleSignatures - same message - 64 - blst 2.3867 ms/op 2.4785 ms/op 0.96
BLS verifyMultipleSignatures - same message - 128 - blst 4.0825 ms/op 4.0819 ms/op 1.00
BLS aggregatePubkeys 32 - blst 17.734 us/op 17.763 us/op 1.00
BLS aggregatePubkeys 128 - blst 63.533 us/op 64.648 us/op 0.98
getSlashingsAndExits - default max 47.226 us/op 55.527 us/op 0.85
getSlashingsAndExits - 2k 329.27 us/op 331.73 us/op 0.99
proposeBlockBody type=full, size=empty 3.2757 ms/op 2.8211 ms/op 1.16
isKnown best case - 1 super set check 170.00 ns/op 612.00 ns/op 0.28
isKnown normal case - 2 super set checks 175.00 ns/op 184.00 ns/op 0.95
isKnown worse case - 16 super set checks 171.00 ns/op 172.00 ns/op 0.99
validate api signedAggregateAndProof - struct 1.5337 ms/op 1.5555 ms/op 0.99
validate gossip signedAggregateAndProof - struct 1.5332 ms/op 1.5494 ms/op 0.99
batch validate gossip attestation - vc 640000 - chunk 32 107.89 us/op 112.22 us/op 0.96
batch validate gossip attestation - vc 640000 - chunk 64 93.401 us/op 97.708 us/op 0.96
batch validate gossip attestation - vc 640000 - chunk 128 87.916 us/op 91.002 us/op 0.97
batch validate gossip attestation - vc 640000 - chunk 256 101.48 us/op 90.585 us/op 1.12
bytes32 toHexString 308.00 ns/op 293.00 ns/op 1.05
bytes32 Buffer.toString(hex) 181.00 ns/op 189.00 ns/op 0.96
bytes32 Buffer.toString(hex) from Uint8Array 267.00 ns/op 270.00 ns/op 0.99
bytes32 Buffer.toString(hex) + 0x 183.00 ns/op 188.00 ns/op 0.97
Return object 10000 times 0.21530 ns/op 0.21630 ns/op 1.00
Throw Error 10000 times 3.5074 us/op 3.3362 us/op 1.05
toHex 103.84 ns/op 100.56 ns/op 1.03
Buffer.from 93.902 ns/op 94.303 ns/op 1.00
shared Buffer 67.006 ns/op 65.692 ns/op 1.02
fastMsgIdFn sha256 / 200 bytes 1.5170 us/op 1.5180 us/op 1.00
fastMsgIdFn h32 xxhash / 200 bytes 162.00 ns/op 166.00 ns/op 0.98
fastMsgIdFn h64 xxhash / 200 bytes 207.00 ns/op 217.00 ns/op 0.95
fastMsgIdFn sha256 / 1000 bytes 4.8450 us/op 4.8600 us/op 1.00
fastMsgIdFn h32 xxhash / 1000 bytes 245.00 ns/op 256.00 ns/op 0.96
fastMsgIdFn h64 xxhash / 1000 bytes 259.00 ns/op 270.00 ns/op 0.96
fastMsgIdFn sha256 / 10000 bytes 42.575 us/op 42.851 us/op 0.99
fastMsgIdFn h32 xxhash / 10000 bytes 1.2810 us/op 1.2910 us/op 0.99
fastMsgIdFn h64 xxhash / 10000 bytes 826.00 ns/op 837.00 ns/op 0.99
send data - 1000 256B messages 4.2790 ms/op 4.2459 ms/op 1.01
send data - 1000 512B messages 4.3986 ms/op 4.4148 ms/op 1.00
send data - 1000 1024B messages 4.7349 ms/op 4.5355 ms/op 1.04
send data - 1000 1200B messages 4.5393 ms/op 5.0842 ms/op 0.89
send data - 1000 2048B messages 4.7424 ms/op 5.1691 ms/op 0.92
send data - 1000 4096B messages 5.8113 ms/op 6.0219 ms/op 0.97
send data - 1000 16384B messages 18.360 ms/op 39.225 ms/op 0.47
send data - 1000 65536B messages 200.00 ms/op 203.07 ms/op 0.98
enrSubnets - fastDeserialize 64 bits 776.00 ns/op 760.00 ns/op 1.02
enrSubnets - ssz BitVector 64 bits 278.00 ns/op 285.00 ns/op 0.98
enrSubnets - fastDeserialize 4 bits 105.00 ns/op 111.00 ns/op 0.95
enrSubnets - ssz BitVector 4 bits 270.00 ns/op 279.00 ns/op 0.97
prioritizePeers score -10:0 att 32-0.1 sync 2-0 209.56 us/op 207.17 us/op 1.01
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 239.64 us/op 238.14 us/op 1.01
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 347.22 us/op 351.72 us/op 0.99
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 614.25 us/op 605.29 us/op 1.01
prioritizePeers score 0:0 att 64-1 sync 4-1 711.89 us/op 708.23 us/op 1.01
array of 16000 items push then shift 1.3189 us/op 1.3208 us/op 1.00
LinkedList of 16000 items push then shift 7.0460 ns/op 6.8920 ns/op 1.02
array of 16000 items push then pop 66.108 ns/op 65.860 ns/op 1.00
LinkedList of 16000 items push then pop 6.0770 ns/op 5.9760 ns/op 1.02
array of 24000 items push then shift 1.9258 us/op 1.9477 us/op 0.99
LinkedList of 24000 items push then shift 6.5490 ns/op 6.4460 ns/op 1.02
array of 24000 items push then pop 95.580 ns/op 94.309 ns/op 1.01
LinkedList of 24000 items push then pop 6.2680 ns/op 5.9740 ns/op 1.05
intersect bitArray bitLen 8 4.7780 ns/op 4.7840 ns/op 1.00
intersect array and set length 8 29.829 ns/op 29.847 ns/op 1.00
intersect bitArray bitLen 128 23.879 ns/op 24.393 ns/op 0.98
intersect array and set length 128 503.68 ns/op 503.48 ns/op 1.00
bitArray.getTrueBitIndexes() bitLen 128 1.1470 us/op 1.0080 us/op 1.14
bitArray.getTrueBitIndexes() bitLen 248 1.8930 us/op 1.7550 us/op 1.08
bitArray.getTrueBitIndexes() bitLen 512 3.6860 us/op 3.5810 us/op 1.03
Full columns - reconstruct all 6 blobs 126.05 us/op 186.89 us/op 0.67
Full columns - reconstruct half of the blobs out of 6 97.653 us/op 105.47 us/op 0.93
Full columns - reconstruct single blob out of 6 31.984 us/op 35.001 us/op 0.91
Half columns - reconstruct all 6 blobs 397.70 ms/op 383.80 ms/op 1.04
Half columns - reconstruct half of the blobs out of 6 199.29 ms/op 191.22 ms/op 1.04
Half columns - reconstruct single blob out of 6 71.054 ms/op 67.634 ms/op 1.05
Full columns - reconstruct all 10 blobs 181.88 us/op 686.21 us/op 0.27
Full columns - reconstruct half of the blobs out of 10 101.49 us/op 106.63 us/op 0.95
Full columns - reconstruct single blob out of 10 35.151 us/op 32.184 us/op 1.09
Half columns - reconstruct all 10 blobs 660.99 ms/op 644.10 ms/op 1.03
Half columns - reconstruct half of the blobs out of 10 332.18 ms/op 323.94 ms/op 1.03
Half columns - reconstruct single blob out of 10 70.318 ms/op 67.492 ms/op 1.04
Full columns - reconstruct all 20 blobs 1.8517 ms/op 560.51 us/op 3.30
Full columns - reconstruct half of the blobs out of 20 258.02 us/op 176.01 us/op 1.47
Full columns - reconstruct single blob out of 20 32.265 us/op 32.861 us/op 0.98
Half columns - reconstruct all 20 blobs 1.3097 s/op 1.2682 s/op 1.03
Half columns - reconstruct half of the blobs out of 20 657.85 ms/op 637.62 ms/op 1.03
Half columns - reconstruct single blob out of 20 71.011 ms/op 67.081 ms/op 1.06
Set add up to 64 items then delete first 2.5906 us/op 2.1559 us/op 1.20
OrderedSet add up to 64 items then delete first 3.3959 us/op 3.3727 us/op 1.01
Set add up to 64 items then delete last 2.4441 us/op 2.4053 us/op 1.02
OrderedSet add up to 64 items then delete last 3.4861 us/op 3.2602 us/op 1.07
Set add up to 64 items then delete middle 2.2592 us/op 2.1581 us/op 1.05
OrderedSet add up to 64 items then delete middle 5.0096 us/op 4.8092 us/op 1.04
Set add up to 128 items then delete first 4.3122 us/op 4.3912 us/op 0.98
OrderedSet add up to 128 items then delete first 6.4300 us/op 6.7935 us/op 0.95
Set add up to 128 items then delete last 4.3087 us/op 4.2812 us/op 1.01
OrderedSet add up to 128 items then delete last 6.1934 us/op 6.1508 us/op 1.01
Set add up to 128 items then delete middle 4.0490 us/op 4.0470 us/op 1.00
OrderedSet add up to 128 items then delete middle 12.488 us/op 11.773 us/op 1.06
Set add up to 256 items then delete first 8.0051 us/op 8.2040 us/op 0.98
OrderedSet add up to 256 items then delete first 11.851 us/op 12.691 us/op 0.93
Set add up to 256 items then delete last 8.0902 us/op 8.2317 us/op 0.98
OrderedSet add up to 256 items then delete last 12.713 us/op 12.374 us/op 1.03
Set add up to 256 items then delete middle 8.1359 us/op 8.0575 us/op 1.01
OrderedSet add up to 256 items then delete middle 37.949 us/op 36.482 us/op 1.04
pass gossip attestations to forkchoice per slot 2.8562 ms/op 2.5782 ms/op 1.11
forkChoice updateHead vc 100000 bc 64 eq 0 459.12 us/op 426.24 us/op 1.08
forkChoice updateHead vc 600000 bc 64 eq 0 2.7322 ms/op 2.5981 ms/op 1.05
forkChoice updateHead vc 1000000 bc 64 eq 0 5.0060 ms/op 4.4096 ms/op 1.14
forkChoice updateHead vc 600000 bc 320 eq 0 2.8698 ms/op 2.6140 ms/op 1.10
forkChoice updateHead vc 600000 bc 1200 eq 0 2.6997 ms/op 2.6021 ms/op 1.04
forkChoice updateHead vc 600000 bc 7200 eq 0 3.1113 ms/op 2.9427 ms/op 1.06
forkChoice updateHead vc 600000 bc 64 eq 1000 3.3152 ms/op 3.1515 ms/op 1.05
forkChoice updateHead vc 600000 bc 64 eq 10000 3.3465 ms/op 3.1909 ms/op 1.05
forkChoice updateHead vc 600000 bc 64 eq 300000 7.3327 ms/op 7.1417 ms/op 1.03
computeDeltas 1400000 validators 0% inactive 13.517 ms/op 13.033 ms/op 1.04
computeDeltas 1400000 validators 10% inactive 12.956 ms/op 12.466 ms/op 1.04
computeDeltas 1400000 validators 20% inactive 12.263 ms/op 11.527 ms/op 1.06
computeDeltas 1400000 validators 50% inactive 8.9418 ms/op 8.7072 ms/op 1.03
computeDeltas 2100000 validators 0% inactive 20.435 ms/op 20.061 ms/op 1.02
computeDeltas 2100000 validators 10% inactive 22.153 ms/op 18.978 ms/op 1.17
computeDeltas 2100000 validators 20% inactive 18.106 ms/op 17.674 ms/op 1.02
computeDeltas 2100000 validators 50% inactive 13.746 ms/op 10.451 ms/op 1.32
altair processAttestation - 250000 vs - 7PWei normalcase 1.8949 ms/op 1.6932 ms/op 1.12
altair processAttestation - 250000 vs - 7PWei worstcase 2.8655 ms/op 2.4525 ms/op 1.17
altair processAttestation - setStatus - 1/6 committees join 103.86 us/op 105.92 us/op 0.98
altair processAttestation - setStatus - 1/3 committees join 212.75 us/op 198.72 us/op 1.07
altair processAttestation - setStatus - 1/2 committees join 290.76 us/op 281.57 us/op 1.03
altair processAttestation - setStatus - 2/3 committees join 376.41 us/op 374.96 us/op 1.00
altair processAttestation - setStatus - 4/5 committees join 516.19 us/op 507.68 us/op 1.02
altair processAttestation - setStatus - 100% committees join 607.80 us/op 599.06 us/op 1.01
altair processBlock - 250000 vs - 7PWei normalcase 4.3957 ms/op 2.9508 ms/op 1.49
altair processBlock - 250000 vs - 7PWei normalcase hashState 17.894 ms/op 14.191 ms/op 1.26
altair processBlock - 250000 vs - 7PWei worstcase 23.910 ms/op 20.076 ms/op 1.19
altair processBlock - 250000 vs - 7PWei worstcase hashState 43.711 ms/op 39.607 ms/op 1.10
phase0 processBlock - 250000 vs - 7PWei normalcase 1.3913 ms/op 1.3156 ms/op 1.06
phase0 processBlock - 250000 vs - 7PWei worstcase 17.954 ms/op 17.424 ms/op 1.03
altair processEth1Data - 250000 vs - 7PWei normalcase 293.02 us/op 299.84 us/op 0.98
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 3.2610 us/op 3.4010 us/op 0.96
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 20.347 us/op 21.603 us/op 0.94
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 5.6150 us/op 5.8660 us/op 0.96
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 3.5000 us/op 3.8070 us/op 0.92
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 91.027 us/op 98.067 us/op 0.93
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.4238 ms/op 1.3909 ms/op 1.02
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.8674 ms/op 1.8241 ms/op 1.02
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 1.8503 ms/op 1.8228 ms/op 1.02
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 3.7834 ms/op 3.8248 ms/op 0.99
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.0849 ms/op 2.0672 ms/op 1.01
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.0627 ms/op 4.1838 ms/op 0.97
Tree 40 250000 create 319.27 ms/op 314.07 ms/op 1.02
Tree 40 250000 get(125000) 100.34 ns/op 97.612 ns/op 1.03
Tree 40 250000 set(125000) 1.0651 us/op 1.0328 us/op 1.03
Tree 40 250000 toArray() 10.546 ms/op 9.3883 ms/op 1.12
Tree 40 250000 iterate all - toArray() + loop 15.754 ms/op 9.6914 ms/op 1.63
Tree 40 250000 iterate all - get(i) 40.276 ms/op 35.182 ms/op 1.14
Array 250000 create 2.1196 ms/op 2.0867 ms/op 1.02
Array 250000 clone - spread 673.38 us/op 674.19 us/op 1.00
Array 250000 get(125000) 0.29600 ns/op 0.30200 ns/op 0.98
Array 250000 set(125000) 0.29800 ns/op 0.30600 ns/op 0.97
Array 250000 iterate all - loop 57.528 us/op 57.491 us/op 1.00
phase0 afterProcessEpoch - 250000 vs - 7PWei 39.596 ms/op 39.730 ms/op 1.00
Array.fill - length 1000000 2.0588 ms/op 2.2538 ms/op 0.91
Array push - length 1000000 9.0964 ms/op 7.7584 ms/op 1.17
Array.get 0.20708 ns/op 0.20894 ns/op 0.99
Uint8Array.get 0.26307 ns/op 0.24656 ns/op 1.07
phase0 beforeProcessEpoch - 250000 vs - 7PWei 19.037 ms/op 13.121 ms/op 1.45
altair processEpoch - mainnet_e81889 251.89 ms/op 250.39 ms/op 1.01
mainnet_e81889 - altair beforeProcessEpoch 20.719 ms/op 13.979 ms/op 1.48
mainnet_e81889 - altair processJustificationAndFinalization 5.8310 us/op 5.3810 us/op 1.08
mainnet_e81889 - altair processInactivityUpdates 3.6870 ms/op 3.5392 ms/op 1.04
mainnet_e81889 - altair processRewardsAndPenalties 21.287 ms/op 16.581 ms/op 1.28
mainnet_e81889 - altair processRegistryUpdates 546.00 ns/op 565.00 ns/op 0.97
mainnet_e81889 - altair processSlashings 143.00 ns/op 145.00 ns/op 0.99
mainnet_e81889 - altair processEth1DataReset 133.00 ns/op 141.00 ns/op 0.94
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.7652 ms/op 1.5245 ms/op 1.16
mainnet_e81889 - altair processSlashingsReset 707.00 ns/op 711.00 ns/op 0.99
mainnet_e81889 - altair processRandaoMixesReset 1.2280 us/op 1.1980 us/op 1.03
mainnet_e81889 - altair processHistoricalRootsUpdate 137.00 ns/op 151.00 ns/op 0.91
mainnet_e81889 - altair processParticipationFlagUpdates 437.00 ns/op 457.00 ns/op 0.96
mainnet_e81889 - altair processSyncCommitteeUpdates 111.00 ns/op 125.00 ns/op 0.89
mainnet_e81889 - altair afterProcessEpoch 41.761 ms/op 41.648 ms/op 1.00
capella processEpoch - mainnet_e217614 863.93 ms/op 789.28 ms/op 1.09
mainnet_e217614 - capella beforeProcessEpoch 83.413 ms/op 55.593 ms/op 1.50
mainnet_e217614 - capella processJustificationAndFinalization 6.9710 us/op 5.5390 us/op 1.26
mainnet_e217614 - capella processInactivityUpdates 17.146 ms/op 11.742 ms/op 1.46
mainnet_e217614 - capella processRewardsAndPenalties 97.378 ms/op 90.332 ms/op 1.08
mainnet_e217614 - capella processRegistryUpdates 4.5160 us/op 4.5830 us/op 0.99
mainnet_e217614 - capella processSlashings 138.00 ns/op 136.00 ns/op 1.01
mainnet_e217614 - capella processEth1DataReset 139.00 ns/op 132.00 ns/op 1.05
mainnet_e217614 - capella processEffectiveBalanceUpdates 20.822 ms/op 5.6406 ms/op 3.69
mainnet_e217614 - capella processSlashingsReset 704.00 ns/op 703.00 ns/op 1.00
mainnet_e217614 - capella processRandaoMixesReset 1.4200 us/op 1.1330 us/op 1.25
mainnet_e217614 - capella processHistoricalRootsUpdate 135.00 ns/op 138.00 ns/op 0.98
mainnet_e217614 - capella processParticipationFlagUpdates 456.00 ns/op 429.00 ns/op 1.06
mainnet_e217614 - capella afterProcessEpoch 109.72 ms/op 109.15 ms/op 1.01
phase0 processEpoch - mainnet_e58758 298.25 ms/op 274.72 ms/op 1.09
mainnet_e58758 - phase0 beforeProcessEpoch 69.002 ms/op 51.000 ms/op 1.35
mainnet_e58758 - phase0 processJustificationAndFinalization 6.2180 us/op 5.8590 us/op 1.06
mainnet_e58758 - phase0 processRewardsAndPenalties 16.399 ms/op 14.896 ms/op 1.10
mainnet_e58758 - phase0 processRegistryUpdates 2.2680 us/op 2.2970 us/op 0.99
mainnet_e58758 - phase0 processSlashings 138.00 ns/op 146.00 ns/op 0.95
mainnet_e58758 - phase0 processEth1DataReset 219.00 ns/op 146.00 ns/op 1.50
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 849.26 us/op 1.1061 ms/op 0.77
mainnet_e58758 - phase0 processSlashingsReset 947.00 ns/op 844.00 ns/op 1.12
mainnet_e58758 - phase0 processRandaoMixesReset 1.3500 us/op 1.0860 us/op 1.24
mainnet_e58758 - phase0 processHistoricalRootsUpdate 137.00 ns/op 152.00 ns/op 0.90
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.2700 us/op 1.0490 us/op 1.21
mainnet_e58758 - phase0 afterProcessEpoch 33.891 ms/op 33.054 ms/op 1.03
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.0250 ms/op 1.0143 ms/op 1.01
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.5923 ms/op 1.6602 ms/op 0.96
altair processInactivityUpdates - 250000 normalcase 10.835 ms/op 10.696 ms/op 1.01
altair processInactivityUpdates - 250000 worstcase 10.796 ms/op 10.736 ms/op 1.01
phase0 processRegistryUpdates - 250000 normalcase 2.1890 us/op 2.3190 us/op 0.94
phase0 processRegistryUpdates - 250000 badcase_full_deposits 145.69 us/op 150.19 us/op 0.97
phase0 processRegistryUpdates - 250000 worstcase 0.5 65.727 ms/op 63.709 ms/op 1.03
altair processRewardsAndPenalties - 250000 normalcase 16.215 ms/op 15.058 ms/op 1.08
altair processRewardsAndPenalties - 250000 worstcase 15.205 ms/op 14.581 ms/op 1.04
phase0 getAttestationDeltas - 250000 normalcase 5.3972 ms/op 5.4224 ms/op 1.00
phase0 getAttestationDeltas - 250000 worstcase 19.386 ms/op 5.4304 ms/op 3.57
phase0 processSlashings - 250000 worstcase 59.602 us/op 60.892 us/op 0.98
altair processSyncCommitteeUpdates - 250000 10.131 ms/op 10.318 ms/op 0.98
BeaconState.hashTreeRoot - No change 175.00 ns/op 185.00 ns/op 0.95
BeaconState.hashTreeRoot - 1 full validator 82.434 us/op 59.312 us/op 1.39
BeaconState.hashTreeRoot - 32 full validator 853.16 us/op 671.32 us/op 1.27
BeaconState.hashTreeRoot - 512 full validator 8.4053 ms/op 6.1997 ms/op 1.36
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 92.533 us/op 73.403 us/op 1.26
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.4520 ms/op 1.1053 ms/op 1.31
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 18.236 ms/op 13.289 ms/op 1.37
BeaconState.hashTreeRoot - 1 balances 77.629 us/op 60.447 us/op 1.28
BeaconState.hashTreeRoot - 32 balances 757.00 us/op 568.15 us/op 1.33
BeaconState.hashTreeRoot - 512 balances 6.1304 ms/op 4.8226 ms/op 1.27
BeaconState.hashTreeRoot - 250000 balances 128.54 ms/op 116.87 ms/op 1.10
aggregationBits - 2048 els - zipIndexesInBitList 19.968 us/op 19.810 us/op 1.01
regular array get 100000 times 22.966 us/op 23.486 us/op 0.98
wrappedArray get 100000 times 22.985 us/op 23.329 us/op 0.99
arrayWithProxy get 100000 times 13.207 ms/op 10.487 ms/op 1.26
ssz.Root.equals 21.195 ns/op 21.961 ns/op 0.97
byteArrayEquals 21.017 ns/op 21.720 ns/op 0.97
Buffer.compare 9.2220 ns/op 9.0500 ns/op 1.02
processSlot - 1 slots 9.8560 us/op 8.5310 us/op 1.16
processSlot - 32 slots 2.0055 ms/op 1.5888 ms/op 1.26
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 3.8443 ms/op 2.7737 ms/op 1.39
getCommitteeAssignments - req 1 vs - 250000 vc 1.6693 ms/op 1.7001 ms/op 0.98
getCommitteeAssignments - req 100 vs - 250000 vc 3.4065 ms/op 3.4569 ms/op 0.99
getCommitteeAssignments - req 1000 vs - 250000 vc 3.6674 ms/op 3.6988 ms/op 0.99
findModifiedValidators - 10000 modified validators 779.61 ms/op 697.06 ms/op 1.12
findModifiedValidators - 1000 modified validators 439.91 ms/op 425.50 ms/op 1.03
findModifiedValidators - 100 modified validators 285.12 ms/op 266.83 ms/op 1.07
findModifiedValidators - 10 modified validators 240.84 ms/op 151.29 ms/op 1.59
findModifiedValidators - 1 modified validators 184.90 ms/op 158.11 ms/op 1.17
findModifiedValidators - no difference 186.47 ms/op 155.62 ms/op 1.20
migrate state 1500000 validators, 3400 modified, 2000 new 3.1181 s/op 2.5643 s/op 1.22
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 3.7700 ns/op 3.7900 ns/op 0.99
state getBlockRootAtSlot - 250000 vs - 7PWei 406.18 ns/op 289.43 ns/op 1.40
computeProposerIndex 100000 validators 1.3852 ms/op 1.3077 ms/op 1.06
getNextSyncCommitteeIndices 1000 validators 2.8797 ms/op 2.8446 ms/op 1.01
getNextSyncCommitteeIndices 10000 validators 26.319 ms/op 25.072 ms/op 1.05
getNextSyncCommitteeIndices 100000 validators 91.300 ms/op 86.260 ms/op 1.06
computeProposers - vc 250000 575.00 us/op 551.63 us/op 1.04
computeEpochShuffling - vc 250000 39.710 ms/op 38.163 ms/op 1.04
getNextSyncCommittee - vc 250000 9.5754 ms/op 9.3464 ms/op 1.02
nodejs block root to RootHex using toHex 111.04 ns/op 117.78 ns/op 0.94
nodejs block root to RootHex using toRootHex 72.527 ns/op 76.711 ns/op 0.95
nodejs fromHex(blob) 873.80 us/op 738.65 us/op 1.18
nodejs fromHexInto(blob) 659.35 us/op 649.73 us/op 1.01
nodejs block root to RootHex using the deprecated toHexString 491.93 ns/op 486.00 ns/op 1.01
nodejs byteArrayEquals 32 bytes (block root) 26.230 ns/op 26.047 ns/op 1.01
nodejs byteArrayEquals 48 bytes (pubkey) 37.939 ns/op 37.581 ns/op 1.01
nodejs byteArrayEquals 96 bytes (signature) 36.563 ns/op 38.402 ns/op 0.95
nodejs byteArrayEquals 1024 bytes 42.789 ns/op 42.093 ns/op 1.02
nodejs byteArrayEquals 131072 bytes (blob) 1.7807 us/op 1.7619 us/op 1.01
browser block root to RootHex using toHex 146.86 ns/op 146.56 ns/op 1.00
browser block root to RootHex using toRootHex 130.52 ns/op 130.38 ns/op 1.00
browser fromHex(blob) 1.6634 ms/op 1.5155 ms/op 1.10
browser fromHexInto(blob) 654.81 us/op 654.63 us/op 1.00
browser block root to RootHex using the deprecated toHexString 337.67 ns/op 342.92 ns/op 0.98
browser byteArrayEquals 32 bytes (block root) 30.459 ns/op 28.069 ns/op 1.09
browser byteArrayEquals 48 bytes (pubkey) 43.844 ns/op 39.599 ns/op 1.11
browser byteArrayEquals 96 bytes (signature) 83.357 ns/op 74.179 ns/op 1.12
browser byteArrayEquals 1024 bytes 816.25 ns/op 753.96 ns/op 1.08
browser byteArrayEquals 131072 bytes (blob) 95.958 us/op 95.144 us/op 1.01

by benchmarkbot/action

@twoeths twoeths marked this pull request as ready for review April 3, 2026 03:40
@twoeths twoeths requested a review from a team as a code owner April 3, 2026 03:40

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6ff9833288



 // calling updateHead() here before we produce a block to reduce reorg possibility
-const headBlock = this.chain.recomputeForkChoiceHead(ForkchoiceCaller.prepareNextSlot);
+const headBlock = this.chain.recomputeForkChoiceHead(ForkchoiceCaller.prepareNextSlot, prepareSlot);


P2: Avoid caching future-slot head during prepareNextSlot

Passing prepareSlot into recomputeForkChoiceHead makes fork choice recompute and cache this.head using next-slot semantics while the node is still in the current slot. After this runs (~67% into slot N), any code that reads the cached head via forkChoice.getHead() before the next slot tick (for example late validator/API flows) can observe a head selected with slot N+1 FULL-vs-EMPTY tie-breaker instead of slot N rules, which can produce premature head-dependent outputs.


Contributor Author


After 67% of slot N I don't see any flows that want to use the head.
If one is added in the future, it should recompute the head and cache it again, especially as more PTC messages are received.
So caching here is not an issue.

We could make an ephemeral head in this case, but that would make the current design more complex.
The mechanism above should work.


 // calling updateHead() here before we produce a block to reduce reorg possibility
-const headBlock = this.chain.recomputeForkChoiceHead(ForkchoiceCaller.prepareNextSlot);
+const headBlock = this.chain.recomputeForkChoiceHead(ForkchoiceCaller.prepareNextSlot, prepareSlot);
Member


doesn't this cause the head to be recomputed so any caller to forkChoice.getHead() in slot N would get the head of the next slot N+1?

Contributor Author


> doesn't this cause the head to be recomputed so any caller to forkChoice.getHead() in slot N would get the head of the next slot N+1?

Yes. From the moment we have both the EMPTY and FULL variants until the next slot, we need to fairly choose one of them, recompute the head, and cache it. My interpretation: when preparing for the next slot, if we have both an EMPTY and a FULL variant, we need to run the tie-breaker logic to pick the winner.

@@ -78,7 +78,7 @@ export class PrepareNextSlotScheduler {
await sleep(this.config.getSlotComponentDurationMs(PREPARE_NEXT_SLOT_BPS), this.signal);
Member


More generally, we run prepareNextSlot at 8 seconds into the slot, but with Gloas the payload timeliness deadline and the PTC vote deadline are at 9 seconds (PAYLOAD_ATTESTATION_DUE_BPS = 7500). Usually, if the payload is timely it will arrive earlier and the PTC should vote early, so in the happy case we likely have the payload plus all PTC votes by 8 seconds into the slot. But if the builder does not reveal and the block is EMPTY, the PTC will only cast their votes at 9 seconds into the slot, and we will very likely only receive them at ~10 seconds, or later.

We need to carefully review whether the timings here make sense for calling recomputeForkChoiceHead. The timing is also relevant if we are the next proposer and issue an fcU to the EL, so that we prepare the payload on the correct block hash.

Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Good call, that was already a TODO GLOAS.
Given this epoch transition metric on mainnet, I think it's safe to run this 10s into the slot. We should recheck when we're close to mainnet because Gloas could add some epoch transition time.

[Screenshot: mainnet epoch transition time metric, captured 2026-04-08]


-recomputeForkChoiceHead(caller: ForkchoiceCaller): ProtoBlock;
+/** @param slot - If provided, overrides fcStore.currentSlot for Gloas FULL vs EMPTY tie-breaker logic */
+recomputeForkChoiceHead(caller: ForkchoiceCaller, slot?: Slot): ProtoBlock;
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Discussed this offline with @nflaig

would prefer an interface like so:

recomputeForkChoiceHead(
  caller: ForkchoiceCaller,
  opts?: {prepareNextSlot?: boolean}
): ProtoBlock;

// Call site
const headBlock = this.chain.recomputeForkChoiceHead(
  ForkchoiceCaller.prepareNextSlot, {prepareNextSlot: true}
);

Pass the optional opts through to forkChoice.updateHead and do the const currentSlot = opts?.prepareNextSlot ? store.currentSlot + 1 : store.currentSlot; there
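A hedged sketch of the suggested opts-based variant, with simplified stand-in types (`slotForScoreChanges` is a hypothetical helper name, not Lodestar's actual API):

```typescript
type Slot = number;

interface FcStore {
  currentSlot: Slot;
}

interface UpdateHeadOpts {
  prepareNextSlot?: boolean;
}

// The boolean replaces threading an explicit slot value from the
// scheduler: fork choice derives "next slot" from its own store.
function slotForScoreChanges(store: FcStore, opts?: UpdateHeadOpts): Slot {
  return opts?.prepareNextSlot ? store.currentSlot + 1 : store.currentSlot;
}

const store: FcStore = {currentSlot: 42};
console.log(slotForScoreChanges(store, {prepareNextSlot: true})); // 43
console.log(slotForScoreChanges(store)); // 42
```

One upside of the boolean over a raw slot parameter is that callers cannot pass an arbitrary future slot; the override is constrained to currentSlot + 1.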

Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

just realized we already have ForkchoiceCaller.prepareNextSlot, so the boolean here seems redundant

* and preparing the execution payload for the next slot.
* TODO GLOAS: re-check before Gloas mainnet
*/
const PREPARE_NEXT_SLOT_BPS_GLOAS = 8333;
Member


After my spec changes we can keep the previous PREPARE_NEXT_SLOT_BPS, but the fcu() call needs to be delayed, or we need to reach the PTC threshold.

Something like this could work:

await Promise.race([
  sleep(SOME_BPS_VALUE - this.clock.msFromSlot(slot), signal),
  this.emitter.waitForPtcThreshold(slot),
]);
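The race pattern above can be sketched in a self-contained way with a plain EventEmitter. This is only an illustration of the idea (Lodestar's clock/emitter APIs differ; `waitForPtcThresholdOrDeadline` and the `"ptcThreshold"` event name are invented here): proceed as soon as either the fallback deadline passes or the PTC threshold event fires.

```typescript
import {EventEmitter, once} from "node:events";

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Resolves with whichever condition is met first: the deadline elapsing
// or the emitter firing the "ptcThreshold" event.
async function waitForPtcThresholdOrDeadline(
  emitter: EventEmitter,
  deadlineMs: number
): Promise<"deadline" | "ptcThreshold"> {
  return Promise.race([
    sleep(deadlineMs).then(() => "deadline" as const),
    once(emitter, "ptcThreshold").then(() => "ptcThreshold" as const),
  ]);
}

// If PTC votes arrive early, we proceed immediately instead of always
// sleeping until the fallback deadline.
const em = new EventEmitter();
setTimeout(() => em.emit("ptcThreshold"), 10);
waitForPtcThresholdOrDeadline(em, 1000).then(console.log); // prints "ptcThreshold"
```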
