# Effectiveness rating

**Phase 0 effectiveness rating model**

In order to come up with a unified validator effectiveness score, we combine proposer and attester effectiveness in a weighted average. We attribute the following weights to each, guided by the longer-term expectation of how rewards are distributed between the two duties:

* Proposer effectiveness: 1/8
* Attester effectiveness: 7/8

Proposer slot assignments are much rarer than attestation duties, but bear a significantly higher reward when performed correctly. On average, over a long time period, a validator's ETH rewards will be split between proposer and attester rewards at a ratio of 1:7. We have selected the respective weights to reflect that distribution.

Given the above, we calculate validator effectiveness as:
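In formula form (a restatement of the weights above; both component scores are assumed to be expressed on the same 0–1 or percentage scale):

$$
\text{validator\_effectiveness} = \tfrac{1}{8} \cdot \text{proposer\_effectiveness} + \tfrac{7}{8} \cdot \text{attester\_effectiveness}
$$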

### Pre- and post-Altair rewards distribution

Due to a bug in the implementation of Phase 0, the ratio of proposer-to-attester rewards ended up being 1:31 instead of 1:7. This was corrected in the Altair upgrade. Rated v0 maintains the original spec (a 1:7 split) in computing validator effectiveness.

### Post-Merge effectiveness rating model

What we have observed thus far is that, on balance, execution layer to consensus layer rewards come at a 1:4 ratio. We expect that ratio to become even more balanced over time, for the following reasons:

* More active validators on the Beacon Chain crowd out `CL` APR%
* More adoption of MEV Relays and out-of-protocol PBS boost overall `EL` APR%
* We are going off of 30 days of post-Merge data, in a period where demand for blockspace on Ethereum is below long-term averages.

Given the above, we propose the following amendment to the weights of the components of the effectiveness rating post-Merge:

* Proposer effectiveness: 1/8 → **3/8**
* Attester effectiveness: 7/8 → **5/8**

Such that:
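In formula form, with the amended weights (same component definitions and scale as the Phase 0 formula above):

$$
\text{validator\_effectiveness} = \tfrac{3}{8} \cdot \text{proposer\_effectiveness} + \tfrac{5}{8} \cdot \text{attester\_effectiveness}
$$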

**Note:** If a validator has not been assigned any proposer duties, we only take their attester effectiveness into consideration when calculating their overall effectiveness, such that `validator_effectiveness == attester_effectiveness`.

We do this to avoid artificially inflating the overall rating.
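To make the above concrete, here is a minimal illustrative sketch, not Rated's implementation: the function and parameter names are ours for this example, and both component scores are assumed to be on a 0–1 scale.

```python
from typing import Optional

# Phase 0 / pre-Merge weights (1:7 proposer-to-attester reward split)
PRE_MERGE_WEIGHTS = (1 / 8, 7 / 8)
# Proposed post-Merge weights (execution layer rewards accrue to proposers)
POST_MERGE_WEIGHTS = (3 / 8, 5 / 8)


def validator_effectiveness(
    attester_effectiveness: float,
    proposer_effectiveness: Optional[float] = None,
    post_merge: bool = True,
) -> float:
    """Combine attester and proposer effectiveness into a single rating.

    If the validator was never assigned a proposer duty in the period,
    pass proposer_effectiveness=None and the attester score is returned
    unchanged, so the overall rating is not artificially inflated.
    """
    if proposer_effectiveness is None:
        return attester_effectiveness

    w_proposer, w_attester = POST_MERGE_WEIGHTS if post_merge else PRE_MERGE_WEIGHTS
    return w_proposer * proposer_effectiveness + w_attester * attester_effectiveness


# Example: a validator with perfect proposals and 96% attester effectiveness
print(validator_effectiveness(0.96, 1.0))          # ≈ 0.975 under post-Merge weights
print(validator_effectiveness(0.96, 1.0, False))   # ≈ 0.965 under Phase 0 weights
print(validator_effectiveness(0.96))               # 0.96, no proposer duties assigned
```

The `post_merge` flag exists only to show both weight regimes side by side; passing `proposer_effectiveness=None` models the case where a validator drew no proposer duties in the period.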
