
Framing the problem

The problem

There is currently no commonly agreed-upon way to evaluate the performance of individual validators and the operators behind them. Looking at the problem from a pure rewards perspective lets randomness seep into the modelling: block proposals and sync committee duties are assigned by lottery, so over short windows two identically performing validators can earn very different rewards. Focusing only on uptime or participation rates leaves useful information out of the calculation. And the absence of a shared approach creates coordination problems, which compound with every upgrade that changes the rules of the game.
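
To make the randomness point concrete, here is a minimal sketch in Python. The reward constants and probabilities are illustrative assumptions, not protocol values or anything from the Rated methodology. It simulates validators with identical, perfect attestation performance; their realized rewards over a short window still spread apart, purely because proposals are assigned by lottery.

```python
import random

# Toy model of one week of duties for a perfectly performing validator.
# All constants are illustrative assumptions, not protocol values.
EPOCHS = 1575             # ~7 days at 225 epochs/day
ATTESTATION_REWARD = 1    # steady per-epoch reward, arbitrary units
PROPOSAL_REWARD = 40      # proposals are rare but outsized
PROPOSAL_PROB = 1 / 2000  # chance of being picked to propose in an epoch

def total_rewards(seed: int) -> int:
    """Total rewards for one validator; only the lottery draw differs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(EPOCHS):
        total += ATTESTATION_REWARD
        if rng.random() < PROPOSAL_PROB:
            total += PROPOSAL_REWARD  # won the proposal lottery
    return total

# 100 validators doing exactly the same job, ranked by realized rewards:
totals = sorted(total_rewards(seed) for seed in range(100))
print(f"worst: {totals[0]}  best: {totals[-1]}  # spread is pure luck")
```

A rewards-only ranking over a window like this would order identical validators arbitrarily; that is exactly the noise a performance rating needs to strip out.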

Enter Rated

We believe the community would benefit from a commonly agreed-upon approach to evaluating the performance of validators and validator operators. We also believe it would benefit from a dedicated forum to coordinate on how such views should adjust to changes in the consensus rules, so that the standard remains updatable and useful in perpetuity.

Some of the ideas we think a more standardized approach would help power:

  • Incentive schemes for staking pool operators

  • Better pricing for slashing and downtime insurance products

  • New financial products for validator operators (e.g. hedging instruments, credit products)

Our main goals are to arrive at (i) a more generally accepted way to rate validator and operator performance and (ii) an approach to updating that methodology when the rules of the game change via network upgrades.

Join the conversation here.