Evaluation Capacity Building for Informal STEM Education: Working for Success Across the Field

Marjorie Bequette, Christopher L.B. Cardiel, Sarah Cohn, Elizabeth Kunz Kollmann, Frances Lawrenz

Research output: Contribution to journal › Article › peer-review

Abstract

Informal STEM education (ISE) organizations, especially museums, have used evaluation productively but unevenly. We argue that advancing evaluation in ISE requires that evaluation capacity building (ECB) broadens to include not only professional evaluators but also other professionals such as educators, exhibit developers, activity facilitators, and institutional leaders. We identify four categories of evaluation capacity: evaluation skill and knowledge, use of evaluation, organizational systems related to conducting or integrating evaluation, and values related to evaluation. We studied a field-wide effort to build evaluation capacity across a network of organizations and found it important to address individuals’ evaluation capacities as well as capacities at the organizational level. Organizational factors that support ECB included redundancy of evaluation capacities across multiple people in an organization, institutional coherence around the value of evaluation, and recognition that ECB can be led from multiple levels of an organizational hierarchy. We argue that the increasing emphasis on evaluation in the ISE field represents an exciting opportunity and that, with targeted strategies and investments, ECB holds great promise for the future of ISE and ISE evaluation.

Original language: English (US)
Pages (from-to): 107-123
Number of pages: 17
Journal: New Directions for Evaluation
Volume: 2019
Issue number: 161
State: Published - Mar 1 2019

Bibliographical note

Funding Information:
Evaluation Requirements and Supports Increased at a National Level. Beginning in the early 2000s, the ISE field changed norms, expectations, and assumptions around evaluation. These changes, led mostly by the National Science Foundation (NSF) but supported by others, are described in greater depth by Allen and Peterman (this issue) and by Grack Nelson, Goeke, Auster, Peterman, and Lussenhop (this issue). The development of shared outcomes, the increasing attention to the quality of instruments, the supports for a community of professionals involved in evaluation in ISE, and the rising demand for ISE institutions to report on their outcomes have all changed the work being done. One key resource in this area is the Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects, which was developed to support non-evaluators working as Principal Investigators (PIs) on NSF grants and includes advice on how to collaborate with project evaluators for greatest impact (Bonney, Ellenbogen, Goodyear, & Hellenga, 2011).

Funding Information:
NISE Net began as a “national community of researchers and informal science educators dedicated to fostering public awareness, engagement, and understanding of nanoscale science, engineering, and technology” (www.nisenet.org). It was originally funded by the National Science Foundation through two consecutive grants that extended over 10 years and amounted to over $40 million. Continuing with new funding and new content to this day, the NISE Net is one of the largest ISE initiatives ever undertaken.

Publisher Copyright:
© 2019 Wiley Periodicals, Inc., and the American Evaluation Association
