Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations
The editors include lessons from their own applied research and evaluation, and suggest ways in which practitioners might address the key issues and challenges of collecting credible evidence. The Second Edition focuses on how to produce actionable evidence as well as credible evidence.
Stewart I. Donaldson is Professor and Chair of Psychology, Director of the Institute of Organizational and Program Evaluation Research, and Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. Dean Donaldson continues to develop and lead one of the most extensive and rigorous graduate programs specializing in applied psychological and evaluation science. He has taught numerous university courses and professional development workshops, and has mentored and coached more than 100 graduate students and working professionals during the past two decades. Dr. Donaldson has also provided organizational consulting, applied research, or program evaluation services to more than 100 different organizations. He has been Principal Investigator on more than 30 extramural grants and contracts to support research, evaluations, scholarship, and graduate students at Claremont Graduate University. Dr. Donaldson serves on the editorial boards of the American Journal of Evaluation, New Directions for Evaluation, and the Journal of Multidisciplinary Evaluation; is co-founder and leader of the Southern California Evaluation Association; and served as Co-Chair of the Theory-Driven Evaluation and Program Theory Topical Interest Group of the American Evaluation Association for eight years. He has authored or co-authored more than 200 evaluation reports, scientific journal articles, and chapters.

Christina A. Christie is Professor and Head of the Social Research Methodology Division in the Graduate School of Education and Information Studies at the University of California, Los Angeles. Christie specializes in educational and social policy and program evaluation. Her research focuses on the factors and conditions that influence evaluation practice, in an effort to strengthen our understanding of evaluation as a method for facilitating social change.
She has published extensively, and her work appears in journals such as the American Journal of Evaluation, Children and Youth Services Review, Evaluation and Program Planning, Studies in Educational Evaluation, and Teachers College Record. Christie has served on the board of the American Evaluation Association (AEA) and is the former Chair of the Theories of Evaluation Division and the Research on Evaluation Division of AEA. Currently, she is an Associate Editor for the American Journal of Evaluation.

Melvin M. Mark is Professor and Head of Psychology at Penn State University. A past president of the American Evaluation Association, he has also served as Editor of the American Journal of Evaluation, where he is now Editor Emeritus. Dr. Mark's interests include the theory, methodology, practice, and profession of program and policy evaluation. He has been involved in evaluations in a number of areas, including prevention programs, federal personnel policies, and various educational interventions, including STEM program evaluation. Among his books are Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (Jossey-Bass, 2000; with Gary Henry and George Julnes) and the recent SAGE Handbook of Evaluation (Sage, 2006; edited with Ian Shaw and Jennifer Greene), as well as the forthcoming books Evaluation in Action: Interviews With Expert Evaluators (Sage; with Jody Fitzpatrick and Tina Christie) and Social Psychology and Evaluation (Guilford; with Stewart Donaldson and Bernadette Campbell).
Introduction
1. EXAMINING THE BACKBONE OF CONTEMPORARY EVALUATION PRACTICE: CREDIBLE AND ACTIONABLE EVIDENCE
2. SOCIAL INQUIRY PARADIGMS AS A FRAME FOR THE DEBATE ON CREDIBLE EVIDENCE
3. HOW PEOPLE JUDGE THE CREDIBILITY OF INFORMATION: LESSONS FOR EVALUATION FROM COGNITIVE AND INFORMATION SCIENCES

Credible and Actionable Evidence: The Role of Randomized Experiments
4. WHEN GETTING IT RIGHT MATTERS: THE STRUGGLE FOR RIGOROUS EVIDENCE OF IMPACT AND TO INCREASE ITS INFLUENCE CONTINUES
5. RANDOMIZED CONTROLLED TRIALS: A GOLD STANDARD OR GOLD PLATED?
6. DEMYTHOLOGIZING CAUSATION AND EVIDENCE

Credible and Actionable Evidence: Perspectives from a Range of Evaluation Designs and Methods
7. WHEN AND HOW QUALITATIVE METHODS PROVIDE CREDIBLE AND ACTIONABLE EVIDENCE: REASONING WITH RIGOR, PROBITY, AND TRANSPARENCY
8. SEEING IS BELIEVING: USING IMAGES AS EVIDENCE IN EVALUATION
9. CREDIBILITY, POLICY USE, AND THE EVALUATION SYNTHESIS

Credible and Actionable Evidence: General Perspectives
10. HOW EVIDENCE EARNS CREDIBILITY IN EVALUATION
11. ACTIONABLE EVIDENCE IN CONTEXT: CONTEXTUAL INFLUENCES ON ADEQUACY AND APPROPRIATENESS OF METHOD CHOICE
12. CREDIBLE EVIDENCE OF EFFECTIVENESS: NECESSARY BUT NOT SUFFICIENT
13. CREDIBLE AND ACTIONABLE EVIDENCE: A FRAMEWORK, OVERVIEW, AND SUGGESTIONS FOR FUTURE PRACTICE AND RESEARCH