Governada measures governance quality for DReps, Stake Pool Operators, and Constitutional Committee members on the Cardano network. Every score is computed from on-chain data, calibrated through absolute scoring curves, and decayed over time to reflect current behavior. Scores measure process and engagement, not political positions.
Absolute calibration. Raw pillar scores are mapped through piecewise linear calibration curves to produce a 0-95 score. Your actions determine your score — independent of how other participants perform. This means every DRep and SPO has clear, actionable steps to improve, and if everyone improves, all scores go up. No zero-sum competition.
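For illustration, a piecewise linear calibration curve is just straight-line interpolation between breakpoints. The sketch below uses made-up breakpoints, not Governada's actual curve parameters:

```typescript
// Piecewise linear calibration: maps a raw pillar score to a calibrated
// 0-95 score by interpolating between (raw, calibrated) breakpoints.
// The breakpoints below are illustrative only.
type Breakpoint = { raw: number; calibrated: number };

const exampleCurve: Breakpoint[] = [
  { raw: 0, calibrated: 0 },
  { raw: 40, calibrated: 30 },
  { raw: 70, calibrated: 60 },
  { raw: 100, calibrated: 95 },
];

function calibrate(raw: number, curve: Breakpoint[]): number {
  if (raw <= curve[0].raw) return curve[0].calibrated;
  for (let i = 1; i < curve.length; i++) {
    const lo = curve[i - 1];
    const hi = curve[i];
    if (raw <= hi.raw) {
      const t = (raw - lo.raw) / (hi.raw - lo.raw); // position within segment
      return lo.calibrated + t * (hi.calibrated - lo.calibrated);
    }
  }
  return curve[curve.length - 1].calibrated;
}
```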
Temporal decay. Older governance activity decays exponentially with a 180-day half-life. A DRep who was active six months ago but silent now will see their score decline — governance is an ongoing commitment.
Importance weighting. Not all proposals are equal. Hard forks and constitutional changes carry 3x weight. Treasury withdrawals over 1M ADA and parameter changes carry 2x. Close-margin proposals (decided by less than 20% margin) receive a 1.5x bonus.
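Putting the decay and importance rules together, a single vote's effective weight might be computed as in the sketch below. The type and function names are ours; the half-life, multipliers, and thresholds mirror the figures above:

```typescript
// Effective weight of one vote: exponential decay (180-day half-life)
// times the proposal's importance multiplier, times a close-margin bonus.
const HALF_LIFE_DAYS = 180;

type ProposalInfo = {
  kind: "hard_fork" | "constitutional" | "treasury" | "param_change" | "other";
  adaRequested?: number; // in ADA, relevant for treasury withdrawals
  marginPct: number;     // winning margin in percentage points
};

function decayWeight(ageDays: number): number {
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS); // 1.0 today, 0.5 at 180 days
}

function importanceMultiplier(p: ProposalInfo): number {
  if (p.kind === "hard_fork" || p.kind === "constitutional") return 3;
  if (p.kind === "param_change") return 2;
  if (p.kind === "treasury" && (p.adaRequested ?? 0) > 1_000_000) return 2;
  return 1;
}

function voteWeight(ageDays: number, p: ProposalInfo): number {
  const closeMarginBonus = p.marginPct < 20 ? 1.5 : 1.0;
  return decayWeight(ageDays) * importanceMultiplier(p) * closeMarginBonus;
}
```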
Confidence gating. Entities with insufficient data have their tiers capped. DReps with fewer than 5 votes are capped at Emerging, 5-9 votes at Bronze, and 10-14 votes at Silver. Only those with 15+ votes can reach Gold and above.
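A minimal sketch of the vote-count cap (tier names are taken from the tier table later in this document):

```typescript
// Maximum attainable tier by vote count (confidence gating).
type Tier = "Emerging" | "Bronze" | "Silver" | "Gold" | "Diamond" | "Legendary";

function maxTierForVoteCount(votes: number): Tier {
  if (votes < 5) return "Emerging";
  if (votes < 10) return "Bronze";
  if (votes < 15) return "Silver";
  return "Legendary"; // 15+ votes: no cap below the top tier
}
```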
Momentum tracking. Linear regression over recent score history reveals whether a DRep or SPO is improving or declining. DRep momentum uses a 14-day window; SPO momentum uses a 30-day window.
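The momentum signal is an ordinary least-squares slope fitted to the score history within the window; a self-contained sketch:

```typescript
// Momentum: ordinary least-squares slope of score vs. day within the window.
// Positive slope = improving, negative = declining. Window length per the
// methodology: 14 days for DReps, 30 days for SPOs.
function momentum(history: { day: number; score: number }[]): number {
  const n = history.length;
  if (n < 2) return 0; // not enough points to fit a trend
  const meanX = history.reduce((s, p) => s + p.day, 0) / n;
  const meanY = history.reduce((s, p) => s + p.score, 0) / n;
  let num = 0;
  let den = 0;
  for (const p of history) {
    num += (p.day - meanX) * (p.score - meanY);
    den += (p.day - meanX) ** 2;
  }
  return den === 0 ? 0 : num / den; // score points per day
}
```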
Outcome-blind assessment. Rationale quality is assessed independently of vote direction. A well-reasoned “No” scores the same as a well-reasoned “Yes”. The score never rewards or penalizes political positions — only the quality of governance process.
Honest about limitations. AI-based rationale quality assessment is approximate. It evaluates reasoning structure, not political correctness. Edge cases exist — a technically excellent rationale referencing obscure domain knowledge may score lower than it deserves. We continuously calibrate and welcome community feedback on scoring accuracy.
Each vote rationale is independently assessed on three dimensions. The composite rationale quality score is the average of these three sub-scores, clamped to 0–100. Scoring is outcome-blind: a well-reasoned “No” scores the same as a well-reasoned “Yes.”
| Dimension | What it measures | High score (70+) | Low score (<30) |
|---|---|---|---|
| Specificity | References to specific proposal details, numbers, stakeholders, or technical parameters | “This proposal requests 500K ADA for the Midnight integration, which aligns with CIP-0094 requirements” | “I support this proposal because it seems good for the ecosystem” |
| Reasoning Depth | Cause-effect reasoning explaining WHY the voter reached their conclusion | “Voting No because the 12-month delivery timeline conflicts with the Chang+1 hard fork schedule, creating a dependency risk” | “I vote Yes” |
| Proposal Awareness | Evidence the voter read and understood the specific proposal | “The team’s track record on Fund 11 Project X (delivered on time, 4.2/5 community rating) gives confidence in execution” | Generic text that could apply to any proposal |
Important: Vote direction never affects quality assessment. A well-reasoned dissenting vote scores identically to a well-reasoned supporting vote.
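The composite described above reduces to a few lines. A sketch, assuming each sub-score is already on a 0-100 scale:

```typescript
// Composite rationale quality: mean of the three sub-scores, clamped to 0-100.
// Vote direction is deliberately absent from the inputs (outcome-blind).
function rationaleQuality(specificity: number, depth: number, awareness: number): number {
  const mean = (specificity + depth + awareness) / 3;
  return Math.min(100, Math.max(0, mean));
}
```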
Every DRep receives a composite score from 0 to 100, computed from four weighted pillars. Each pillar is calibrated through absolute scoring curves, meaning your actions directly determine your score. The composite formula:
Score = (Engagement Quality x 0.40) + (Effective Participation x 0.25) + (Reliability x 0.25) + (Governance Identity x 0.10)

Engagement Quality
Measures the depth of governance participation through three layers: rationale provision rate (40%), AI-assessed rationale quality with dissent-substance modifier (40%), and deliberation signal (20%) combining rationale diversity and coverage breadth.
Effective Participation
Evaluates voting coverage weighted by proposal importance and temporal decay. Close-margin proposals (decided by <20% margin) receive a 1.5x bonus, rewarding participation on contentious decisions.
Reliability
Tracks consistency and dependability of governance engagement across four sub-components, only counting epochs where proposals existed. Voting within the governance window is sufficient — speed of response is not measured.
Governance Identity
Rewards DReps who provide meaningful identity and intent information. Quality-tiered field scoring (not binary has/hasn't) across CIP-119 metadata fields, with staleness decay for outdated profiles, plus delegation health signals.
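The composite is then a straight weighted sum of the calibrated pillar scores. A sketch using the published weights (the interface name is ours); the SPO formula in a later section follows the same pattern with its own weights:

```typescript
// DRep composite score from the four calibrated pillar scores (each 0-100).
// Weights are taken from the published formula above.
interface DRepPillars {
  engagementQuality: number;
  effectiveParticipation: number;
  reliability: number;
  governanceIdentity: number;
}

function drepComposite(p: DRepPillars): number {
  return (
    p.engagementQuality * 0.40 +
    p.effectiveParticipation * 0.25 +
    p.reliability * 0.25 +
    p.governanceIdentity * 0.10
  );
}
```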
Any public scoring system creates incentives. We design against known gaming vectors so the score rewards genuine governance quality, not metric optimization.
What the score does NOT incentivize
Proactive governance is naturally rewarded
DReps who proactively review proposals before voting tend to write more informed, specific rationales. Our rationale quality assessment naturally rewards this behavior — not because we measure the review, but because better preparation produces better reasoning. The score measures the output (rationale quality), not the input (whether you used any particular tool).
Composite scores map to six tiers shared by both DReps and SPOs. Tiers create emotional weight, competitive pressure, and shareability. Low-confidence entities are capped at lower tiers regardless of score.
| Tier | Score range | Description |
|---|---|---|
| Emerging | 0–39 | New or inactive. Insufficient data to rank higher. |
| Bronze | 40–54 | Basic participation. Starting to engage with governance. |
| Silver | 55–69 | Consistent engagement. Reliable governance contributor. |
| Gold | 70–84 | Strong and sustained. Quality participation across pillars. |
| Diamond | 85–94 | Elite governance performance across all dimensions. |
| Legendary | 95–100 | Exceptional — by definition, very few entities reach this tier. |
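A sketch of how the band mapping and the confidence cap combine, reusing the Tier type and maxTierForVoteCount from the confidence-gating sketch above; the band thresholds come from the table:

```typescript
// Map a composite score to a tier band, then apply the confidence cap.
const TIER_ORDER: Tier[] = ["Emerging", "Bronze", "Silver", "Gold", "Diamond", "Legendary"];

function tierForScore(score: number): Tier {
  if (score >= 95) return "Legendary";
  if (score >= 85) return "Diamond";
  if (score >= 70) return "Gold";
  if (score >= 55) return "Silver";
  if (score >= 40) return "Bronze";
  return "Emerging";
}

function finalTier(score: number, voteCount: number): Tier {
  const uncapped = tierForScore(score);
  const cap = maxTierForVoteCount(voteCount); // from the confidence-gating sketch
  return TIER_ORDER.indexOf(uncapped) <= TIER_ORDER.indexOf(cap) ? uncapped : cap;
}
```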
Stake Pool Operators are scored on their governance participation using the same tier system as DReps. The four pillars are tailored to SPO governance behavior, with absolute calibration curves and a 30-day momentum window.
Score = (Participation x 0.35) + (Deliberation Quality x 0.25) + (Reliability x 0.25) + (Governance Identity x 0.15)

Participation
Importance-weighted vote coverage with temporal decay. Close-margin bonus is applied at the proposal level (not per-SPO) to ensure fair weighting across all pools.
Deliberation Quality
Voting behavior signals that reward thoughtful, independent governance participation. Penalizes rubber-stamping and abstain-farming.
Reliability
Proposal-aware reliability that only penalizes inactivity during epochs with active proposals. Includes engagement consistency (steady > bursty).
Governance Identity
Evaluates pool identity quality cross-validated against actual governance behavior. Metadata-only profiles score lower than profiles backed by voting activity. Pool size (delegator count) is excluded from governance scoring.
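The methodology does not publish the exact Deliberation Quality mechanics, but one plausible way to detect rubber-stamping is low entropy in the Yes/No/Abstain mix: an operator who casts the same vote every time has zero entropy, while independent positions score higher. Purely illustrative:

```typescript
// Illustrative only: rubber-stamping (or abstain-farming) shows up as low
// entropy in the vote distribution. Normalized to 0-1.
function voteEntropy(votes: ("Yes" | "No" | "Abstain")[]): number {
  if (votes.length === 0) return 0;
  const counts = { Yes: 0, No: 0, Abstain: 0 };
  for (const v of votes) counts[v]++;
  let h = 0;
  for (const c of Object.values(counts)) {
    if (c === 0) continue;
    const p = c / votes.length;
    h -= p * Math.log2(p);
  }
  return h / Math.log2(3); // max entropy over 3 options
}
```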
Three representative SPO profiles showing how V3.2 scoring works in practice. All numbers are illustrative — actual scores depend on the full proposal landscape.
The Thoughtful Operator
Votes on 80% of proposals with mixed Yes/No positions (25% dissent). Published governance statement backed by 15+ votes. Votes across all proposal types.
The Rubber-Stamper
Votes Yes on every proposal without variation. Full metadata profile but no independent judgment. Votes only on treasury proposals.
The Metadata Gamer
Perfect metadata (statement, links, description) but only 2 governance votes cast. Tier capped at Emerging due to low vote count.
Constitutional Committee members receive a Constitutional Fidelity Score from 0 to 100, measuring how faithfully they uphold their constitutional mandate. A CC member who votes against community sentiment but provides thorough constitutional reasoning scores well. The philosophy: do they vote in line with the constitution, and in ambiguous cases, do they justify their votes well enough to back them up?
Participation
Vote rate on eligible governance actions during their term. Non-participation is the most basic accountability failure for constitutional guardians.
Rationale Provision
Do they explain their votes? Measures whether CC members submit CIP-136 rationale documents — a binary signal independent from reasoning quality.
Reasoning Quality
AI-assessed deliberation substance. Scores rationality (evidence + logic), reciprocity (engagement with counterarguments), and clarity. Includes boilerplate detection to prevent gaming. The primary differentiator — hardest to fake.
Constitutional Engagement
Breadth and depth of constitutional article references across all votes. Credits any constitutional citation — does not penalize for citing different articles than expected.
Every CC rationale is scored by Claude (Anthropic) at a fixed low temperature (0.2) for near-deterministic output. Each rationale is scored independently, then averaged across all votes for the member. The AI produces three sub-scores:
Rationality
Evidence-based reasoning and logical soundness. Does the rationale cite specific constitutional articles, precedent, or on-chain data? Are conclusions logically supported?
Reciprocity
Engagement with counterarguments and alternative interpretations. Does the rationale acknowledge opposing views, address edge cases, or explain trade-offs?
Clarity
Prose quality and accessibility. Is the rationale readable by non-experts? Is it well-structured and free of jargon without sacrificing precision?
Boilerplate detection. Each rationale is compared against the member’s own prior submissions. Copy-paste rationales that repeat earlier text without substantive adaptation receive a quality penalty.
AI confidence. The model self-reports confidence in each score. Low-confidence scores are flagged so users can weigh them appropriately.
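A sketch of the aggregation step, assuming each sub-score is on a 0-100 scale; the field names and the boilerplate penalty value are illustrative, not Governada's actual parameters:

```typescript
// Aggregate AI sub-scores for one CC member: per-rationale mean of the three
// sub-scores, boilerplate penalty applied, then averaged across all votes.
interface RationaleScores {
  rationality: number;    // 0-100
  reciprocity: number;    // 0-100
  clarity: number;        // 0-100
  isBoilerplate: boolean; // flagged by similarity to the member's prior text
  confidence: number;     // model-reported, 0-1; surfaced to users, not averaged
}

function reasoningQuality(rationales: RationaleScores[]): number {
  if (rationales.length === 0) return 0;
  const perVote = rationales.map((r) => {
    const base = (r.rationality + r.reciprocity + r.clarity) / 3;
    return r.isBoilerplate ? base * 0.5 : base; // illustrative penalty
  });
  return perVote.reduce((a, b) => a + b, 0) / perVote.length;
}
```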
Governada maps every DRep onto six governance dimensions derived from their voting patterns. AI-classified proposal relevance scores determine which votes contribute to each dimension. Each dimension score ranges from 0 to 100, with 50 as neutral. Temporal decay and amount-weighting ensure recent, material votes carry more weight. The dominant dimension determines a DRep’s “personality archetype” (e.g., The Guardian, The Pioneer), with hysteresis to prevent flickering between labels.
Treasury Conservative
Preference for fiscal restraint. "No" votes on treasury proposals signal conservatism.
Treasury Growth
Preference for ecosystem investment. "Yes" votes on treasury proposals with quality rationale score highest.
Decentralization
Priority on distributing power. Factors in DRep size tier and voting breadth across proposal types.
Security
Priority on protocol safety. Measures caution rate on security-relevant proposals and rationale depth.
Innovation
Openness to protocol evolution. Support for innovation proposals (40%), InfoAction engagement (30%), and voting breadth (30%).
Transparency
Emphasis on governance accountability. AI rationale quality (60%), provision rate (20%), and metadata completeness (20%).
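The methodology mentions hysteresis but not its parameters. The sketch below shows one common form: keep the current archetype unless a challenger dimension wins by a margin (the 5-point margin is our assumption):

```typescript
// Hysteresis sketch: only switch archetypes when a new dominant dimension
// beats the current one by a margin, preventing label flicker.
const SWITCH_MARGIN = 5;

function nextArchetypeDimension(
  scores: Record<string, number>, // the six dimension scores, 0-100
  current: string | null          // dimension behind the current archetype
): string {
  const [topDim, topScore] = Object.entries(scores).reduce((best, e) =>
    e[1] > best[1] ? e : best
  );
  if (current === null || current === topDim) return topDim;
  // Keep the current label unless the challenger clearly dominates.
  return topScore >= (scores[current] ?? 0) + SWITCH_MARGIN ? topDim : current;
}
```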
The GHI measures the health of Cardano governance as a whole, not individual entities. It combines eight components across three categories into a single 0-100 score, tracked epoch-by-epoch. Raw metrics are calibrated through piecewise linear curves before weighting. Because individual scores use absolute calibration, when DReps, SPOs, and CC members collectively improve their governance behavior, the GHI rises — making Governada a tool that measurably improves Cardano governance quality.
Engagement (35%)
DRep Participation
Median effective participation score across all active DReps.
SPO Participation
SPO governance vote coverage weighted by importance and temporal decay.
Citizen Engagement
Delegation rate (62.5%) and delegation dynamism/churn (37.5%).
Quality (40%)
Deliberation Quality
Rationale quality (50%), debate diversity (30%), and voting independence (20%).
Governance Effectiveness
Proposal resolution rate (40%), decision velocity (30%), and throughput (30%).
CC Constitutional Fidelity
Aggregate CC participation, constitutional grounding, and reasoning quality.
Resilience (25%)
Power Distribution
Edinburgh Decentralization Index composite (Nakamoto, Gini, Shannon entropy, HHI, Theil, concentration, tau) plus DRep onboarding rate.
System Stability
DRep retention (50%), score volatility (30%), and infrastructure health (20%).
GHI bands: Strong (76+), Good (51-75), Fair (26-50), Critical (<26).
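A sketch of the composite using the category weights above; for brevity the intra-category component weights are simplified to equal shares here, which differs from the actual weights listed above:

```typescript
// GHI sketch: calibrated component scores combined by category weight
// (Engagement 35%, Quality 40%, Resilience 25%).
interface GhiComponents {
  engagement: number[]; // DRep participation, SPO participation, citizen engagement
  quality: number[];    // deliberation, effectiveness, CC fidelity
  resilience: number[]; // power distribution, system stability
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

function ghi(c: GhiComponents): number {
  return mean(c.engagement) * 0.35 + mean(c.quality) * 0.40 + mean(c.resilience) * 0.25;
}

function ghiBand(score: number): string {
  if (score >= 76) return "Strong";
  if (score >= 51) return "Good";
  if (score >= 26) return "Fair";
  return "Critical";
}
```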
All scoring data is sourced from the Cardano blockchain via the Koios API, a community-maintained, open-source query layer for Cardano. Governada does not run its own indexer — we consume the same public data available to every researcher.
Sync pipeline
Intermediate data is cached in Supabase (PostgreSQL) for query performance. The sync pipeline includes self-healing: failed syncs are retried with exponential backoff, and health is monitored via the System Stability GHI component.
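A minimal sketch of the retry-with-exponential-backoff behavior described above; the attempt limit and delays are illustrative, and syncEpochVotes is a hypothetical function:

```typescript
// Self-healing sync sketch: retry a failed task with exponential backoff.
async function withBackoff<T>(
  task: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 1_000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await task();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // give up, surface to monitoring
      const delay = baseDelayMs * 2 ** attempt;  // 1s, 2s, 4s, 8s, ...
      await new Promise((res) => setTimeout(res, delay));
    }
  }
}

// Usage (hypothetical sync function):
// await withBackoff(() => syncEpochVotes(epoch));
```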
V3.2 (March 2026) — Defensibility Rebuild
V3.1 (February 2026)
V3.0 (January 2026) — Four-Pillar Architecture
We believe accountability requires openness to scrutiny. If you are a CC member, DRep, or community member who believes our scoring methodology is unfair, incomplete, or incorrect, we want to hear from you.
All methodology changes are documented in our public repository. Scoring weights, AI prompts, and grade thresholds are open source.
Researchers, journalists, and governance participants are welcome to reference Governada scores in their work. We suggest the following formats:
Individual DRep score
“[DRep Name] holds a Governada DRep Score of [X]/100 ([Tier] tier) as of epoch [N]. Source: governada.io/drep/[drep_id]”

GHI reference

“Cardano governance health stands at [X]/100 ([Band]) per the Governada Governance Health Index, epoch [N]. Source: governada.io/governance/health”

Academic citation

Governada. (2026). Scoring Methodology: DRep Score, SPO Governance Score, CC Constitutional Fidelity Score, Governance Health Index. Retrieved from https://governada.io/methodology

All scores are point-in-time snapshots. Always include the epoch number or date for reproducibility. Score history is available via the Governada API for longitudinal analysis.
Scoring models are open, reproducible, and continuously refined. The source code for all scoring algorithms is available in the lib/scoring/ directory of our codebase.
Questions or feedback? Join the discussion on GitHub