Data-Informed Decision Making as a Framework for Changemaking
This content was previously published by Campus Labs, now part of Anthology. Product and/or solution names may have changed.
When the calendar changed to 2020 and New Year's resolutions were being made, no one in higher ed had any idea of the challenges that lay ahead for the spring semester. Decisions that typically take months or years to make were made in moments. Health and safety became the primary focus, and instructors and students adapted to new methods of teaching and learning.
Student learning data has always been the best measure of classroom success, and now it is more relevant than ever. We have an opportunity to look at the extent to which learning outcomes were achieved, given the disruption we endured during the spring. Where did we do a good job? Which courses need to be revamped to better meet their learning outcomes in a virtual environment?
As instructors and institutions at large plan for the fall term, they need to make sense of and learn from spring 2020. Analysis of student learning data should be at the center of this planning.
The long-standing narrative of data-driven decision making (DDDM) highlights the importance of using student learning assessment results to determine success in reaching goals or outcomes. But DDDM lacks a human element and often leads to views of assessment, and the resulting data, as something punitive rather than informative. In recent years, this narrative has shifted to Data-Informed Decision Making (DIDM), changing the focus from data as a static driver of change to data as a resource to be reviewed, analyzed, regrouped, and interpreted within context. Being data-informed encourages the creation of testable hypotheses that are either confirmed or refuted. These hypotheses can continue to be created and tested as different instructors or institutions try to home in on what worked and what didn't. This framework is engaging and empowering instead of punitive and feared, but it requires data that is easy to access, consume, and manipulate.
The DIDM framework isn't the scientific method. You don't need a sterile environment and a control group (and thank goodness for that, because classrooms will never meet those research standards), but the context for data collection does matter. Did moving online halfway through a semester impact learning? Are program curricula designed in a way to handle disruption? Instructors and institutions need data to identify how to support, extend, and modify learning experiences for current and future students. They need data that is viewable at multiple levels, from individual students to entire cohorts, and from various perspectives, like specific programs or pedagogical practices. Individual instructors should, and will, do this for their classes; faculty should collectively analyze the success of their programs; and institutions should conduct this same analysis to confirm (or not) student learning success.
To create an empowered community that makes data-informed decisions to improve student learning, the right people need the right data at the right time. Maybe this is just the right moment to ensure that you are:
- Sharing data soon after it's gathered, so it's still relevant
- Providing consumable data that doesn't require a degree in statistics to understand
- Offering multiple methods for securely sharing data with internal and external stakeholders: different perspectives create a richer understanding of the story
- Disaggregating data in ways that create an actionable "n": Can consumers make sense of what they are looking at and do something with it?
- Encouraging a mixed-methods approach to data analysis: The how and the why are often hidden in comments
- Monitoring progress over time to highlight trends in student learning
- Gathering stakeholders to talk about the data AND how they plan to act on it (a conversation that can be just as effective in a digital environment)
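To make the "actionable n" idea concrete, here is a minimal, hypothetical sketch (not a Campus Labs/Anthology tool; the records and function are invented for illustration) of disaggregating outcome results by term and delivery mode while suppressing groups too small to act on responsibly:

```python
# Hypothetical sketch: disaggregating student learning outcome results
# and suppressing groups whose "n" is too small to be actionable.
# All records and thresholds below are fabricated for illustration.
from collections import defaultdict

# One fabricated row per student per outcome attempt.
records = [
    {"term": "Fall 2019",   "mode": "in-person", "outcome_met": True},
    {"term": "Fall 2019",   "mode": "in-person", "outcome_met": True},
    {"term": "Fall 2019",   "mode": "in-person", "outcome_met": True},
    {"term": "Fall 2019",   "mode": "in-person", "outcome_met": False},
    {"term": "Spring 2020", "mode": "remote",    "outcome_met": True},
    {"term": "Spring 2020", "mode": "remote",    "outcome_met": True},
    {"term": "Spring 2020", "mode": "remote",    "outcome_met": True},
    {"term": "Spring 2020", "mode": "remote",    "outcome_met": True},
    {"term": "Spring 2020", "mode": "remote",    "outcome_met": False},
]

def disaggregate(records, keys, min_n=5):
    """Group records by the given keys and report the share of students
    who met the outcome, suppressing any group smaller than min_n."""
    groups = defaultdict(list)
    for row in records:
        groups[tuple(row[k] for k in keys)].append(row["outcome_met"])
    report = {}
    for group, results in groups.items():
        n = len(results)
        # Below the threshold, report the size but not a rate: the
        # group is too small for consumers to act on (or to share safely).
        rate = sum(results) / n if n >= min_n else None
        report[group] = {"n": n, "met_rate": rate}
    return report

report = disaggregate(records, ("term", "mode"))
```

With these fabricated rows, the Fall 2019 in-person group (n=4) is reported without a rate, while the Spring 2020 remote group (n=5) shows its met rate, mirroring the idea that disaggregation should stop at slices readers can actually interpret and act on.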
A data-informed decision-making framework empowers faculty to tell the story of what is happening in their classrooms. It encourages institutions to get real about whether students are learning or not and to continue to ask questions until they land confidently on the why. Student learning assessment data can be a powerful changemaker when the right people are given the right data at the right time.
Jessica Chafin, M. Ed.
Jessica Chafin, M. Ed. is an experienced consultant skilled in the areas of educational assessment, educational technology, public speaking, curriculum development, and higher education accreditation. Prior to joining Anthology (formerly Campus Labs) in 2016, Jessica was the assessment coordinator, instructor, and clinical supervisor for the Bagwell College of Education at Kennesaw State University. In this role, she supported programs and faculty as they implemented assessment for student learning and navigated state and national accreditation processes. Jessica began her teaching career in K-12 education, where she earned teaching credentials for grades 4-12 and endorsements in reading and English as a second language. Jessica earned a master's in education from Kennesaw State University and a bachelor's in sociology from Mary Washington College.