Description
This issue considers social experiments in practice and how recent advances improve their value and potential applications. Although controversial, social experiments are clearly here to stay, and their use is in fact increasing.
As they have become more abundant, experimental evaluations have stretched to address more diverse policy questions: no longer simply providing a treatment–control contrast, they now incorporate multiarm, multistage, and multidimensional (factorial) designs, along with analytic extensions, to reveal more about what works best for whom. Social experiments are also putting programs under the microscope when they are most ready for testing, enhancing the policy value of their findings.
This volume presents new developments in all these areas from scholars who have been instrumental in recent scientific advances. In some instances, established ideas receive renewed attention, connecting them to new opportunities to learn and to inform policy. Above all, this issue aims to encourage stronger and more informative social experiments in the future.
This is the 152nd issue in the New Directions for Evaluation series from Jossey-Bass. It is an official publication of the American Evaluation Association.
Chapters
1 On the “Why” of Social Experiments: Some Lessons on Overcoming Barriers from 45 Years of Social Experiments
Contexts Shift: From Policy Research to Program Evaluation
Contexts Shift: From Individual to Place-Based Designs
Current and Future Evolution: Overcoming Further Barriers
2 On the “When” of Social Experiments: The Tension Between Program Refinement and Abandonment
“Implement Only Effective Programs”
From Theoretical Approach to Statute
Choosing Between “Refine” and “Abandon”
Goals of the Evaluation and Contextual Factors
3 On the “Where” of Social Experiments: The Nature and Extent of the Generalizability Problem
Background and Outcome Comparisons
Comparisons of Background Characteristics
Impact Finding Comparisons
Estimation of Bias Parameters
Future Research and Implications for Practice
4 On the “Where” of Social Experiments: Selecting More Representative Samples to Inform Policy
Contribution of the Chapter
Recommendation 1: Identify the Population of Policy Interest
Recommendation 2: Develop a Sampling Frame
Recommendation 3: Select Sites Randomly
Recommendation 4: Set Sample Sizes to Account for Random Site Selection
5 On the “How” of Social Experiments: Using Implementation Research to Get Inside the Black Box
The Implementation Problem
Performance Versus Impact
6 On the “How” of Social Experiments: Analytic Strategies for Getting Inside the Black Box
Possible Solutions: Experimentally Based Methods for Conducting Mediation Analyses
Instrumental Variables, Part 1: Random Assignment as an Instrument for Participation
Instrumental Variables, Part 2: Random Assignment Interacted with Site Indicators as Instruments for Endogenous Subgroup Traits
Principal Stratification: A Framework That Connects IV and ASPES
Analysis of Symmetrically Predicted Endogenous Subgroups for Program- and Person-Defined Mediators
Using Cluster Analysis to Identify Complex Subgroups
Illustration: Moving to Opportunity (MTO) Demonstration
7 On the “How” of Social Experiments: Experimental Designs for Getting Inside the Black Box
Competing Treatments Design
Enhanced Treatment Design
Role of the Control Group
Sample Allocation in Multiarm Designs
8 Program and Policy Evaluations in Practice: Highlights from the Federal Perspective
The Role of the Federal Government in Program and Policy Evaluation
Shortcomings of the Evaluation Enterprise
Efficiency in Resource Use
Challenges in Meeting Evidence Needs to Guide Policy Development and Monitoring
Access to Reliable Extant Evidence
Commissioning New Evaluations
Getting More out of Future Evaluations
Strategies for Improving the Pace and Utility of Evaluation Research
Invest in Understanding the Broad Context for Evaluation
Design Evaluations with a Neutral View of What the Outcome Might Be
Bend the Rules of Optimal Evaluation When Warranted to Balance Competing Priorities
Use the Ideal to Guide the Path to a Constrained Optimal Evaluation Design
Consider Integrating Opportunities for Midcourse Corrections into the Evaluation
Have a Strong Plan for Disseminating Evaluation Findings