
Effectiveness rating

Phase 0 effectiveness rating model

To arrive at a unified validator effectiveness score, we combine proposer and attester effectiveness in a weighted average. We attribute the following weights to each, guided by the longer-term expectation of how rewards are distributed between the two duties:
  • Proposer effectiveness: 1/8
  • Attester effectiveness: 7/8
Attributions of proposer slots are much rarer than attestation duties, but bear a significantly higher reward if performed correctly. On average, over a long time period, a validator’s ETH rewards will be split between proposer and attester rewards at a ratio of 1:7. We have selected the respective probability weights to reflect that distribution.
Given the above, we calculate validator effectiveness as:
validator_effectiveness = [1/8 * proposer_effectiveness] + [7/8 * attester_effectiveness]
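The weighted average above can be sketched in a few lines of Python. This is an illustrative helper, not part of the Rated API; the function and constant names are hypothetical.

```python
# Phase 0 weights, reflecting the expected long-run 1:7 split of
# proposer to attester rewards (illustrative constants).
PROPOSER_WEIGHT = 1 / 8
ATTESTER_WEIGHT = 7 / 8

def validator_effectiveness(proposer_eff: float, attester_eff: float) -> float:
    """Weighted average of the two duty-level scores (each in [0, 1])."""
    return PROPOSER_WEIGHT * proposer_eff + ATTESTER_WEIGHT * attester_eff

# A validator with perfect attestations but half of its proposals missed:
print(validator_effectiveness(0.5, 1.0))  # 0.9375
```

Because the proposer weight is small, even a large drop in proposer effectiveness moves the Phase 0 score only modestly.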

Pre- and post- Altair rewards distribution

Due to a bug in the implementation of Phase 0, the ratio of proposer-to-attester rewards ended up being 1:31 instead of 1:7. This has been corrected in the latest upgrade (Altair). Rated v0 is maintaining the original spec in computing validator effectiveness.

Post-Merge effectiveness rating model

As discussed in the proposer effectiveness post-Merge section of the documentation, proposal duties carry much more weight overall post-Merge: not only does a significant proportion of the overall yield now come from successful proposals, but missed proposals also delay transaction processing for real users and billions of USD in value on the execution layer (EL), in addition to not helping the chain progress on the consensus layer (CL).
What we have observed thus far is that, on balance, execution layer to consensus layer rewards come in at a 1:4 ratio. We expect that ratio to become even more balanced over time, for the following reasons:
  1. More active validators on the Beacon Chain crowd out CL APR%.
  2. More adoption of MEV relays and out-of-protocol PBS boosts overall EL APR%.
  3. We are going off 30 days of post-Merge data, from a period in which demand for blockspace on Ethereum is below long-term averages.
Given the above we propose the following amendment to the weights of the components of effectiveness rating post-Merge:
  • Proposer effectiveness: 1/8 → 3/8
  • Attester effectiveness: 7/8 → 5/8
Such that:
validator_effectiveness = [3/8 * proposer_effectiveness] + [5/8 * attester_effectiveness]
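The amended weighting can be sketched the same way; again, names are illustrative rather than part of any published API.

```python
# Post-Merge weights: proposer duties are upweighted from 1/8 to 3/8
# to reflect the EL rewards now attached to successful proposals.
POST_MERGE_PROPOSER_WEIGHT = 3 / 8
POST_MERGE_ATTESTER_WEIGHT = 5 / 8

def validator_effectiveness_post_merge(proposer_eff: float, attester_eff: float) -> float:
    """Post-Merge weighted average of the two duty-level scores."""
    return (POST_MERGE_PROPOSER_WEIGHT * proposer_eff
            + POST_MERGE_ATTESTER_WEIGHT * attester_eff)

# A validator with perfect attestations but half of its proposals missed
# now scores noticeably lower under the heavier proposer weight:
print(validator_effectiveness_post_merge(0.5, 1.0))  # 0.8125
```

Comparing the same inputs under both weightings makes the intended behavior visible: validators with clean proposal records are essentially unaffected, while persistent missed or empty blocks are penalized more sharply.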
In a sample of the first 75 days post-Merge, the updated overall effectiveness rating yielded a negligible change to the scores compared to the old methodology. The changes we introduced make the scores more sensitive to instances where the percentage of empty and missed blocks increases significantly.