Framing the problem
The problem
There is currently no commonly agreed upon way to evaluate the performance of individual validators and the operators behind them. Looking at the problem purely through rewards lets randomness seep into the modelling. Focusing only on uptime or participation rates leaves useful information out of the calculation. And the absence of a shared approach creates coordination problems, which compound with every upgrade that changes the rules of the game.
Enter Rated
We believe the community would benefit from a commonly agreed upon approach to evaluating the performance of validators and validator operators. We also believe it would benefit from a dedicated forum for coordinating how such views should adjust to changes in the rules around consensus, so that the standard remains updatable and useful in perpetuity.
Some of the ideas we think a more standardized approach would help power:
Incentive schemes for staking pool operators
Better pricing for slashing and downtime insurance products
New financial products for validator operators (e.g. hedging instruments, credit products)
Our main goals are to arrive at (i) a more generally accepted way to rate validator and operator performance and (ii) an approach to updating that methodology when the rules of the game change via network upgrades.
Join the conversation here.