Initiatives
Elevating the S in ESG
When investors analyze impact data around the S in ESG, the metrics they rely on have proven to be unreliable indicators of a company's social impact. 'Fixing the S in ESG' aims to change that paradigm through standardization, quantification, and reporting.
- Standardization. One of the biggest challenges in measuring social impacts has been the absence of a reliable, quantitative measurement standard. The result is that every company (and NGO) defines, measures, and reports every social impact differently. For investors, this results in unreliable, incomparable, and low-value data that cannot be used in financial models. While there have been a few attempts to create frameworks for reporting social impacts, most have fallen short.
- Quantification. Once social impacts are standardized and classified, they must be properly quantified. In the E world, independent bodies like Verra define standards for measuring “units” of environmental impact such as greenhouse gas emissions. Verra refers to these standard units as Verified Carbon Units, or VCUs. Rigorous rules and methodologies are established to ensure consistency and reliability of data across heterogeneous projects.
- Reporting. In the traditional ESG paradigm, reporting is all about disclosure of “material” risks. But as many researchers have pointed out, there are both negative and positive aspects of materiality. Some activities create material risks that could negatively impact corporate performance and merit disclosure. At the same time, some corporate activities create material benefits that could positively impact corporate performance.
Evidence 2.0
In most fields, it is accepted common knowledge to 'follow the science'. But in policymaking, 'the science' has been elusive. The field has been scientific about the production of evidence (using randomized controlled trials, requiring rigorous data, etc.), but has not been scientific about the 'use' of evidence.
How do we determine...
- If a yet-to-be-evaluated job training program is worth investing in?
- The 'return' or expected impact of two different approaches to food security?
- The cost per outcome we should expect to pay to reduce recidivism?
This is where the Center's work with Evidence 2.0 comes in. Our current database of evidence – 'Evidence 1.0' – does not lend itself to producing the accurate, reliable, and standardized answers to these questions that can then be translated into policy.
The Center's work in Evidence 2.0 unlocks the following four capabilities:
- Situational Analysis: Questions related to better understanding the trends and geographic distributions of phenomena.
- Cause and Effect: Questions that help stakeholders better understand the key drivers and consequences of a situation, identifying which variables make the greatest difference for a given problem.
- Prediction: Questions that interrogate new predictive capabilities, allowing stakeholders to assess future risks, needs, and opportunities.
- Impact Assessment: Questions that seek to determine the results (positive or negative) of various interventions.
Evidence 2.0 tools are designed to give decision-makers the data they need to design, benchmark, and predict the most effective program intervention.
Explorations in Impact Science
The Center has partnered with the Stanford Social Innovation Review to publish an article series around the field of Impact Science. This work explores the frontiers of impact science: data standardization, decision science, probabilistic modeling, core components analysis, advances in meta-analysis, matching algorithms, and more.
Contributors will analyze cutting-edge and disruptive use cases for impact science in the form of new tools, systems, and technologies for achieving social impact. Our audience is anyone who operates, invests in, designs, or studies social programs, including practitioners, funders, investors, policymakers, researchers, consultants, and students. Read more.