Social Experiments in Practice: The What, Why, When, Where, and How of Experimental Design and Analysis

Publication subtitle: New Directions for Evaluation, Number 152

Author: Laura R. Peck  

Publisher: John Wiley & Sons, Inc.

Publication year: 2017

E-ISBN: 9781119348993

P-ISBN(Paperback): 9781119348887

Subject: G644 (scientific research work)

Language: ENG



Description

This issue considers social experiments in practice and how recent advances improve their value and potential applications. Although controversial, social experiments are clearly here to stay, and their use is increasing.

As they have grown more abundant, experimental evaluations have stretched to address more diverse policy questions, no longer simply providing a treatment–control contrast but adding multiarm, multistage, and multidimensional (factorial) designs and analytic extensions to reveal more about what works best for whom. Social experiments are also putting programs under the microscope when they are most ready for testing, enhancing the policy value of their findings.

This volume presents new developments in all these areas from scholars instrumental to recent scientific advances. In some instances, established ideas receive fresh attention, connecting them to new opportunities to learn and inform policy. Above all, this issue aims to encourage stronger and more informative social experiments in the future.

This is the 152nd issue in the New Directions for Evaluation series from Jossey-Bass. It is an official publication of the American Evaluation Association.

Chapter

Contextualization

References

1 On the “Why” of Social Experiments: Some Lessons on Overcoming Barriers from 45 Years of Social Experiments

Contexts Shift: From Policy Research to Program Evaluation

Contexts Shift: From Individual to Place-Based Designs

Current and Future Evolution: Overcoming Further Barriers

References

2 On the “When” of Social Experiments: The Tension Between Program Refinement and Abandonment

“Implement Only Effective Programs”

Falsifiable Logic Models

From Theoretical Approach to Statute

Choosing Between “Refine” and “Abandon”

Goals of the Evaluation and Contextual Factors

Costs and Benefits

An Example

Discussion

Disclaimer

References

3 On the “Where” of Social Experiments: The Nature and Extent of the Generalizability Problem

Background and Outcome Comparisons

Comparisons of Background Characteristics

Comparisons of Outcomes

Unobserved Factors

Impact Finding Comparisons

Estimation of Bias Parameters

Bias Simulations

Future Research and Implications for Practice

Acknowledgments

References

4 On the “Where” of Social Experiments: Selecting More Representative Samples to Inform Policy

Contribution of the Chapter

Recommendation 1: Identify the Population of Policy Interest

Recommendation 2: Develop a Sampling Frame

Recommendation 3: Select Sites Randomly

Recommendation 4: Set Sample Sizes to Account for Random Site Selection

Future Directions

Acknowledgments

References

5 On the “How” of Social Experiments: Using Implementation Research to Get Inside the Black Box

The Implementation Problem

Evaluability

Explanation

Performance Analysis

Performance Versus Impact

Practical Limitations

Exploration

Policy Learning

References

6 On the “How” of Social Experiments: Analytic Strategies for Getting Inside the Black Box

The Problem

Possible Solutions: Experimentally Based Methods for Conducting Mediation Analyses

Instrumental Variables, Part 1: Random Assignment as an Instrument for Participation

Instrumental Variables, Part 2: Random Assignment Interacted with Site Indicators as Instruments for Endogenous Subgroup Traits

Principal Stratification: A Framework that Connects IV and ASPES

Analysis of Symmetrically Predicted Endogenous Subgroups for Program- and Personally Defined Mediators

Using Cluster Analysis to Identify Complex Subgroups

Illustration: Moving to Opportunity (MTO) Demonstration

Conclusion

References

7 On the “How” of Social Experiments: Experimental Designs for Getting Inside the Black Box

Multiarm Randomization

Competing Treatments Design

Enhanced Treatment Design

Role of the Control Group

Sample Allocation in Multiarm Designs

Multistage Randomization

Factorial Designs

Conclusion

References

8 Program and Policy Evaluations in Practice: Highlights from the Federal Perspective

The Role of the Federal Government in Program and Policy Evaluation

Shortcomings of the Evaluation Enterprise

Responsible Oversight

Efficiency in Resource Use

Challenges Meeting Evidence Needs to Guide Policy Development and Monitoring

Timeliness of Evidence

Access to Reliable Extant Evidence

Commissioning New Evaluations

Getting More out of Future Evaluations

Strategies for Improving the Pace and Utility of Evaluation Research

Invest in Understanding the Broad Context for Evaluation

Design Evaluations with a Neutral View of What the Outcome Might Be

Bend the Rules of Optimal Evaluation When Warranted to Balance Competing Priorities

Use the Ideal to Guide the Path to a Constrained Optimal Evaluation Design

Consider Integrating Opportunities for Midcourse Corrections into the Evaluation

Have a Strong Plan for Disseminating Evaluation Findings

Conclusions

Disclaimer

References

INDEX

ORDER FORM

EULA
