Execution missed rewards computation
This page discusses Rated's methodology for calculating missed execution layer rewards stemming from missed block proposals.
At a high enough level, all missed rewards at the EL stem from missing block proposals. This is clear enough to conceptualise.
Where things start getting complicated is when trying to ascertain the value of the block that would have been. Unlike CL rewards, EL rewards for validators vary a lot more, based on the following two factors:
Demand for blockspace (i.e. transaction volume) on the chain
Procuring blocks from block builders vs crafting your own blocks (as a validator)
Complexity increases further once you consider groups of validators with similar configurations under a given operator: does one look at “what could have been” from the validator index perspective, or does it pay to aggregate up to the operator level and project back down to the index level from there?
In the following sections we discuss the merits and drawbacks of different approaches before arriving at the “current best” approach.
We will soon be offering both Approaches 1 and 2 via our API. Get in touch to learn more!
Approach 1: Simple average of block value in an epoch
In the simplest form, missed EL rewards would be calculated similarly to CL missed rewards:
There are some clear advantages with this approach, namely that:
It is easy to understand.
It is easy enough to replicate.
It doesn’t really require information that does not live on-chain (so long as the calculation of validator rewards is performed correctly).
It works well at the atomic level (a pubkey).
But at the same time there are some pretty serious disadvantages:
It flattens validators and operators insofar as their adoption of mev-boost goes, with the potential to unfairly penalise (for example) a validator that doesn’t run mev-boost but missed a block in an epoch in which all produced blocks were mev-boost blocks.
It doesn’t capture the specifics of the circumstance as accurately as a “next block” or “average relay bid” reference does.
Please note that in this calculation, it’s important to be mindful of MEV value transfer patterns between builders and proposers that include an end_tx. If these are not accounted for, the value “missed” will be undercounted.
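As a rough illustration of the point above, here is a minimal sketch (our own, not Rated's production logic) of computing the EL value a proposer actually received from a block while accounting for a builder payment that arrives as a final transfer transaction; the node URL and helper name are assumptions.

```python
# Illustrative sketch: estimate a block's EL value to the proposer, treating a
# final transfer to the fee recipient as the builder's payment ("end_tx").
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed local node

def proposer_el_value(block_number: int) -> int:
    """Return the block's EL value to the proposer, in wei."""
    block = w3.eth.get_block(block_number, full_transactions=True)
    fee_recipient = block["miner"]

    # Priority fees (tips) paid over the whole block.
    tips = 0
    for tx in block["transactions"]:
        receipt = w3.eth.get_transaction_receipt(tx["hash"])
        tips += receipt["gasUsed"] * (receipt["effectiveGasPrice"] - block["baseFeePerGas"])

    # If the last transaction is a plain value transfer to the fee recipient,
    # it is likely the builder's payment to the proposer; in that case the
    # tips accrued to the builder rather than the proposer.
    if block["transactions"]:
        end_tx = block["transactions"][-1]
        if end_tx["to"] == fee_recipient and end_tx["value"] > 0:
            return end_tx["value"]

    return tips
```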
Approach 2: Referencing relay bids for opportunity cost
Another approach is to reference the global average of bids from the relay APIs for that particular block slot (addendum: bids whose parent_hash matches the previous block).
While we initially considered taking the max_bid from relays to measure up against, we quickly found that in the majority of cases winning_bid ≠ max_bid. This is most likely due to the fact that bids keep arriving after the winning_bid gets picked; naturally these later bids pack more transactions, and with more transactions the configurations for MEV multiply. The proposer could keep waiting for a later bid, but that would also increase the probability of missing the block. Overall, while this would be the purest form of opportunity cost, it is also highly unrealistic given our observations and penalises missed proposals unfairly.
There are some clear advantages with this approach, namely that:
It captures a pure version of opportunity cost, referencing the state of the world at t=n when said block would have been produced.
It offers realistic hard data about the state of the world at t=n.
But at the same time there are some pretty serious disadvantages:
It assumes that every validator is running mev-boost, unfairly penalising those that do not.
At any scale of downstream adoption, it creates incentives for running mev-boost, and is therefore more opinionated.
It is harder to replicate, as it assumes widespread access to the archive of relay bids; this archive is not on-chain, data is in parts ephemeral, and while Rated does have all this data, it also becomes a choke point.
Approach 3: Referencing the next block produced
The third approach available is to reference the value of the next block produced, such that:
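In our notation (the original expression does not carry over here), this amounts to assigning the missed slot the EL value of the next block that actually landed on chain:

```latex
% Notation (ours): s is the missed slot and b_{next}(s) the first block
% produced after s.
\mathrm{missed\_EL\_rewards}(s) = v_{EL}\big(b_{\mathrm{next}}(s)\big)
```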
The rationale behind this is that the validator who missed the proposal would have had access to the same transactions as the next proposer and could then have at least built the same block with the same transactions and corresponding fees.
There are some advantages with this approach, namely that:
It is more specific to the condition of said validator that missed the block.
The data is on-chain and therefore easy to replicate.
It is simple enough to calculate.
But at the same time there are some pretty serious disadvantages:
It is more stochastic than either Approach 1 or 2, as there is no smoothing and it is sensitive to spikes in demand for blockspace.
It does not distinguish between whether the validator is running mev-boost or not, and is therefore subject to the same class of disadvantages as Approach 2 is.
Approach 4: Abstracting to the operator level
The final possible approach is to combine elements of Approaches 1 and 2 with some probabilistic weighting of where the next block might have come from, guided by an operator-level distribution.
Say validator A belongs to a set of keys under operator B. Operator B has produced x% mev-boost blocks and y% vanilla blocks. We could therefore calculate the value of a missed block as:
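One plausible way to write this down, in our own notation and under the assumption that the mev-boost leg references relay bids (Approach 2) while the vanilla leg references the epoch average (Approach 1), is:

```latex
% Notation (ours): x and y are operator B's historical shares of mev-boost
% and vanilla blocks (x + y = 1), \bar{v}_{relay}(s) the average relay bid
% for the missed slot, and \bar{v}_{epoch}(s) the epoch-average block value.
\mathbb{E}\big[\mathrm{missed\_EL\_rewards}(s)\big] = x \cdot \bar{v}_{\mathrm{relay}}(s) + y \cdot \bar{v}_{\mathrm{epoch}}(s)
```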
There are some strong advantages with this approach, namely that:
It captures the probability space well and produces an expected value of opportunity cost.
It is specific to the context of each of the validators.
In a world of perfect information, it is probably the most accurate representation of missed value.
But at the same time there are some pretty serious disadvantages:
It assumes perfect knowledge of pubkey-to-operator mappings. In reality, these mappings are very fickle and only translate well to operators with corresponding on-chain registries (e.g. the Lido set). Hence this methodology does not scale well horizontally.
It comes with the same challenges as Approach 2 with respect to access to MEV relay data.