Glass Lewis Pay-For-Performance Overhaul
Glass Lewis has announced a significant overhaul of its Pay-for-Performance (P4P) model, set to take effect for shareholder meetings beginning in 2026. First disclosed in early summer 2025, these changes mark a substantial shift in how executive compensation alignment is evaluated. In anticipation of the new framework, Zayla has prepared a summary of the key updates that have been publicly released to date, comparing Glass Lewis's historical approach with the approach expected next year:
Feature | Legacy Approach | New 2026 Approach |
---|---|---|
Scoring / Output Format | Letter grade (A through F) | Numerical score from 0 to 100, with a mapped concern level (e.g. “Low,” “High,” “Severe”) |
Evaluation Period (Time Horizon) | 3 years lookback | Extended to 5 years for key quantitative tests |
Number / Type of Tests | Fewer (legacy set) | Six tests (five quantitative, one qualitative) |
Newly Added / Revised Metrics | Focus more heavily on granted pay relative to performance | Inclusion of Compensation Actually Paid (CAP) vs TSR, analysis of short-term incentive (STI) payouts relative to TSR, expanded use of financial performance metrics (e.g. revenue growth) |
Peer Group Methodology | Heavily based on “peer-of-peers” and company self-disclosed peers | More robust and multi-dimensional peer construction: includes industry-based and country-based peers, and applies screening based on revenue, assets, market capitalization, strength-of-connection weighting, etc. |
Qualitative Assessment / Discretionary Review | Already existing but less formalized | A structured qualitative test covering design and governance features (e.g. one-off awards, upward discretion, fixed vs variable pay ratios, short vesting, non-disclosure) |
Scope / Geography | Primarily U.S. / Canada (for P4P) | Expanded to include UK, Europe, and Australia for pay-for-performance coverage |
Quantitative and Qualitative Testing
One of the key updates is a set of six specific tests: five quantitative and one qualitative. Each test will be weighted to produce a composite numerical score ranging from 0 to 100, which will correspond to a defined level of concern. While a higher concern level may increase the likelihood of a negative voting recommendation from Glass Lewis on say-on-pay proposals, it does not automatically trigger an “Against” vote. To help stakeholders better understand the implications, Zayla has outlined a detailed breakdown of each of the six tests in the revised framework:
- Granted CEO Pay vs. TSR
- Compares granted CEO compensation to total shareholder return over a 5-year weighted average (requires a minimum of three years of data to conduct the test).
- Granted CEO Pay vs. Financial Performance
- Benchmarks granted CEO pay against financial performance metrics over the 5-year period.
- Requires a minimum of four metrics for inclusion (TSR plus at least three financial metrics).
- CEO Short-term Incentive Payout vs. TSR
- Compares the CEO’s short-term incentive (STI) payouts to TSR over a 5-year weighted average.
- Total Granted NEO Pay vs. Financial Performance
- Similar to test two, this test compares the aggregate pay across all named executive officers (NEOs) to financial metrics over a 5-year weighted average.
- CEO “Compensation Actually Paid” vs. TSR
- For U.S. companies, compares the CEO’s Compensation Actually Paid (CAP), as disclosed in the pay-versus-performance table, to cumulative TSR over the same period.
- Qualitative / Governance Test
- Evaluates structural or discretionary pay practices — e.g. one-time special awards, use of upward discretion, instances where fixed pay is disproportionately high, uncapped incentives, short LTI vesting, non-disclosure of performance goals, etc.
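The composite mechanics described above can be illustrated with a short sketch. Note that Glass Lewis has not disclosed the actual test weights or the score-to-concern-level bands; the weights, band thresholds, and test names below are placeholders for illustration only.

```python
# Hypothetical illustration of the disclosed structure: six weighted tests
# produce a 0-100 composite score, which maps to a concern level.
# All weights and thresholds below are ASSUMED, not Glass Lewis's actual values.

HYPOTHETICAL_WEIGHTS = {
    "granted_ceo_pay_vs_tsr": 0.20,
    "granted_ceo_pay_vs_financials": 0.20,
    "ceo_sti_payout_vs_tsr": 0.15,
    "granted_neo_pay_vs_financials": 0.15,
    "ceo_cap_vs_tsr": 0.15,
    "qualitative_governance": 0.15,
}

# Placeholder bands using the concern labels Glass Lewis has cited
# ("Low," "High," "Severe"); the real cutoffs are undisclosed.
HYPOTHETICAL_BANDS = [(67, "Low"), (34, "High"), (0, "Severe")]

def composite_score(test_scores: dict) -> float:
    """Weighted average of per-test scores, each on a 0-100 scale."""
    return sum(HYPOTHETICAL_WEIGHTS[t] * s for t, s in test_scores.items())

def concern_level(score: float) -> str:
    """Map a 0-100 composite score to a concern label via the bands."""
    for floor, label in HYPOTHETICAL_BANDS:
        if score >= floor:
            return label
    return "Severe"

scores = {
    "granted_ceo_pay_vs_tsr": 80,
    "granted_ceo_pay_vs_financials": 70,
    "ceo_sti_payout_vs_tsr": 60,
    "granted_neo_pay_vs_financials": 65,
    "ceo_cap_vs_tsr": 55,
    "qualitative_governance": 50,
}
total = composite_score(scores)  # ~64.5 under these placeholder weights
```

The key takeaway the sketch encodes is that a single weak test does not dictate the outcome; the concern level emerges from the weighted blend, consistent with Glass Lewis’s statement that a high concern level does not automatically trigger an “Against” recommendation.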
Knowns and Unknowns
Other notable changes in the revised framework include stricter eligibility criteria: a minimum of three years of consistent data, comprehensive financial metrics, and a peer group of at least 10 valid companies. These thresholds may limit full applicability for some issuers, particularly those in niche or nascent industries or with few comparable entities. Companies that fail to meet them could see individual tests omitted or may not receive a complete P4P score at all.
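A minimal sketch of these eligibility thresholds, assuming the publicly stated minimums (three years of consistent data, at least 10 valid peers, TSR plus three financial metrics). The field names and the shape of the check are hypothetical; Glass Lewis has not published implementation details.

```python
# Hypothetical eligibility check against the disclosed P4P thresholds.
# Field names and structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IssuerData:
    years_of_data: int        # years of consistent compensation data
    valid_peers: int          # valid companies in the constructed peer group
    financial_metrics: int    # financial metrics available in addition to TSR

def p4p_eligibility(issuer: IssuerData) -> list[str]:
    """Return a list of threshold failures; empty means fully scorable."""
    failures = []
    if issuer.years_of_data < 3:
        failures.append("fewer than three years of consistent data")
    if issuer.valid_peers < 10:
        failures.append("fewer than 10 valid peer companies")
    if issuer.financial_metrics < 3:  # TSR plus three financial metrics
        failures.append("insufficient financial metrics (need TSR + 3)")
    return failures

# A young issuer with a thin peer set fails two of the three thresholds.
issues = p4p_eligibility(IssuerData(years_of_data=2, valid_peers=8,
                                    financial_metrics=3))
```

Under this reading, an issuer failing any threshold would see the affected tests dropped rather than scored, which is why newer or niche companies may receive an incomplete P4P result.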
While Glass Lewis’s upcoming P4P methodology represents a significant evolution in executive compensation analysis, important uncertainties remain. Particularly, several aspects of the methodology remain opaque, including the specific weighting of tests, how numeric scores map to concern levels, and how companies with incomplete data will be evaluated. It also remains to be seen how the model will be phased in, how it will accommodate local market practices, and whether it will lead to a measurable shift in voting recommendations.
Summary Conclusions
We question whether this wholesale change is an effort by Glass Lewis to gain market share and influence, particularly in the U.S. market, where its main competitor, Institutional Shareholder Services (ISS), is the dominant force in the proxy advisory business. From our perspective, Glass Lewis's prior model lacked the nuance of competing frameworks such as ISS's, and the firm appears to be positioning itself more competitively, perhaps to assert greater relevance in the proxy advisory space.
Historically, we have viewed Glass Lewis’ analyses as more grounded in common sense compared to its peers, but these methodological changes could alter that dynamic. While ISS’s recent survey results suggest minimal upcoming policy changes, the more material updates from Glass Lewis could lead to unexpected outcomes in the market.
As such, it would be wise for companies to reassess their engagement strategies ahead of proxy season, as the landscape may be shifting toward greater divergence between proxy advisors, potentially complicating alignment efforts for issuers. Given the numerous unknowns, Zayla recommends companies pay close attention, monitor for further guidance and consider proactively assessing their alignment under this evolving framework as the 2026 proxy season approaches.
Comparing Glass Lewis and ISS P4P Approaches
As the two most influential proxy advisory firms, ISS and Glass Lewis play a critical role in shaping how institutional investors evaluate executive compensation. Their respective P4P methodologies directly influence say-on-pay outcomes (to the tune of 34%, based on recent analyses), investor sentiment, and corporate governance “best practices.”
With Glass Lewis implementing a significant overhaul of its P4P model for the 2026 proxy season—and ISS expected to implement minimal changes to its framework—it is increasingly important for companies to both understand how these approaches differ and how their investors may view the policies of each firm. Zayla believes a detailed comparison will help issuers prepare for a landscape where the two firms may diverge in their assessments of pay alignment and governance rigor.
Glass Lewis vs. ISS – Pay-for-Performance Methodology
The following provides a high-level comparison of Glass Lewis’s new 2026 pay-for-performance methodology against ISS’s current and expected approach. It highlights core differences in structure, evaluation periods, metrics, outputs, and methodologies to help stakeholders understand how each proxy advisor assesses executive compensation alignment.
Feature | Glass Lewis (2026) | ISS |
---|---|---|
Effective Date | 2026 proxy season (announced mid-2025) | Ongoing; refinements for 2026 expected to be released in October in draft form and finalized in late November 2025 |
Evaluation Horizon | 5-year weighted average | 3-year lookback (plus 1-year and 3-year snapshots) |
Output Format | 0-100 numeric score, with concern levels (Low to Severe) | Low / Medium / High concern flags (concern levels correlated with a numeric QualityScore of 1-10) |
Tests Used | 6 tests: 5 quantitative, 1 qualitative | 3 quantitative screens, followed by qualitative review |
Quantitative Tests | – CEO Granted Pay vs. TSR (5-yr) – CEO Pay vs. Financial Perf – STI Payout vs. TSR – NEO Pay vs. Financial Perf – CEO CAP vs. TSR | – Relative Degree of Alignment (RDA) – Multiple of Median (MOM) – Pay-TSR Alignment (PTA) |
Financial Metrics Used | TSR, Revenue, EPS, ROA, ROE, Operating Cash Flow, etc. (minimum of 4, including TSR) | Primarily TSR (relative and absolute) |
CAP (Compensation Actually Paid) | Included as a distinct test (CAP vs. TSR) | Considered but not a formal screen |
Peer Group Construction | – Starts with company peers – Adds industry/country comparables – Weighted by strength of connection | – Financial size + Industry (GICS) – Typically 12-24 peers (0.4-2.5 times the company’s financial size) |
STI Evaluation | Explicit test: STI payout vs. TSR (5-year avg) | Not a distinct test; considered qualitatively |
NEO vs. CEO Focus | Includes both CEO and aggregate NEO tests | CEO-focused; NEOs considered qualitatively |
Qualitative Review | Yes – structured governance/plan design test (e.g. discretion, one-time awards, disclosure, vesting) | Yes – qualitative overlay after screens (e.g. rigor of goals, pay structure, shareholder engagement) |
Score Transparency | Score provided, but weightings are proprietary | QualityScore disclosed in connection with concern level |
Global Coverage | U.S., Canada, UK, Europe, Australia | U.S. and a range of markets globally |
Use of Concern Level | Maps to possible voting recommendations, but not determinative | “High concern” generally leads to negative say-on-pay recommendation unless mitigating factors exist |
Data Requirements | Requires 3-5 years of data, at least 10 valid peers | More flexible; minimum 1-3 years of data for analysis |
Summary of Key Differences
The following distills the most critical contrasts between the two models, offering a quick reference summary.
Key Differences | Glass Lewis (2026) | ISS |
---|---|---|
Approach | Expanded, multifactor model focused on both performance and governance design | More streamlined, TSR-centric with qualitative override |
Quantitative Depth | More detailed; six tests across various angles | Three core screens, with optional deeper dives |
Score Transparency | Numeric score (0-100), but black-box weights | No numeric score; concern levels only |
Use of CAP (Realized Pay) | Directly tested vs TSR | Considered in context only |
Global Alignment | U.S., Canada, UK, Europe, Australia | U.S. and a range of markets globally |
Implications for Companies
The following outlines the practical implications of each approach from an issuer’s perspective—focusing on design sensitivity, communication strategy, data risks, and potential challenges for specific company types.
Considerations | Glass Lewis (2026) | ISS |
---|---|---|
Design sensitivity | High – poor design can trigger qualitative red flags even with strong TSR | Moderate – design reviewed if flagged by screens |
Pay volatility | The longer 5-year averaging may create issues for firms with volatile pay | Focused on the most recent 3 years |
Communication needs | Higher – must explain both short- and long-term alignment, and pay design choices | Focus on aligning CEO pay with TSR and peer benchmarks |
Smaller / volatile firms | More likely to be impacted if data or peers are insufficient | Greater flexibility in peer/metric selection |