Harnessing Faculty Power with Better Outcomes Assessment
This content was previously published by Campus Labs, now part of Anthology. Product and/or solution names may have changed.
In today’s higher education environment, faculty regularly find themselves pulled in more directions than ever. No longer debating whether they should be a sage on the stage or a guide by their students’ side, faculty are now asked to think about their roles in admissions, retention, graduation rates, and student success. Beyond the long-standing faculty triad of teaching, research, and service, the all-hands-on-deck work of finding students, keeping them enrolled, and ensuring they become productive graduates occupies still more faculty time. Thus, any additional request from an administrator is liable to be met with skepticism.
Requests falling under the umbrella of institutional effectiveness seem especially ripe for this skepticism. These offices, after all, have a reputation for seeking artifacts to demonstrate student learning and teaching effectiveness, and for requesting volumes of data roughly every five years for accreditation purposes. From a faculty perspective, the job becomes supplying information someone else can use to check a box, with no value returned in exchange. In short, the process can feel cumbersome and episodic.
Even faculty who fully appreciate the role of institutional effectiveness can find it fruitless to submit information on student learning that is never packaged and returned in a way that improves learning. Administrators must therefore work diligently to ensure that any course-level outcomes data collection gives faculty a framework for making data-informed decisions in future iterations of their courses. Failing to do so leads faculty to put little effort into collecting and reporting the data, and halfhearted efforts defeat the purpose of monitoring course outcomes: haphazard data at the course level prevent institutions from making good decisions about courses or programs.
To help promote faculty buy-in to the outcomes assessment process, here are some recommendations for administrators:
1. Design a system that minimizes faculty effort
Let’s be honest: most faculty entered the academy long before anyone expected learning outcomes to be tracked and reported. Student learning outcomes have long been discussed within the academy, but the interest in reporting and using this data to improve student learning in a systematic, meaningful way has surfaced much more recently. Given the many directions instructors are pulled in today, the first key to a successful outcomes process is to minimize the effort required to submit information.
If you make the process as intuitive and straightforward as possible, your faculty are more likely to view reporting favorably rather than as one more item on a growing list of semester requirements. When aiming to streamline outcomes reporting, consider using tools your faculty already work with and a vernacular they are already familiar with. Further, if technology allows you to combine reporting and assessing into one process through an integration, your faculty can accomplish both tasks at once.
2. Ensure that outputs are valuable to faculty
If instructors are expected to take the time to submit outcomes data for their courses, they will expect some quid pro quo. Letting results and artifacts vanish into an assessment abyss stirs faculty resentment toward the process, as does treating outcomes reporting as nothing more than an accreditation requirement. Giving your faculty usable data, whether at the course or program level, will foster buy-in and help demonstrate why the effort to track and report on student outcomes can be valuable in improving student learning and even pedagogy.
For example, longitudinal data might show that student learning during an intersession appears lower than during traditional terms, suggesting that some outcomes require a full semester for students to make real progress. Or data comparing multiple sections of the same course might help determine whether student learning on particular outcomes is better served by shorter or longer class periods, or by different times of day. Likewise, faculty could examine how they assess various outcomes within the same course to determine which strategies lead to the greatest student learning.
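For administrators curious what such a comparison might look like in practice, here is a minimal sketch in Python using pandas. It assumes a hypothetical export of course-level results; the file name (course_outcomes.csv), the columns (outcome, term_type, mean_score), the term labels, and the five-point threshold are all invented for illustration, not taken from any particular assessment platform.

```python
import pandas as pd

# Hypothetical export of course-level outcomes results. The file and
# column names (outcome, term_type, mean_score) are illustrative only.
scores = pd.read_csv("course_outcomes.csv")

# Average attainment for each outcome, split by term type
# (e.g., "traditional" vs. "intersession").
by_term = (
    scores.groupby(["outcome", "term_type"])["mean_score"]
    .mean()
    .unstack("term_type")
)
print(by_term)

# Flag outcomes where intersession attainment trails traditional terms
# by more than an arbitrary five-point gap, as candidates for review.
gap = by_term["traditional"] - by_term["intersession"]
print(gap[gap > 5].sort_values(ascending=False))
```

The same grouping pattern extends naturally to comparing sections by class length or time of day.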
3. Carefully craft messaging
Being asked to report student learning outcomes at the course level can make some faculty anxious about whether tracking this data will actually benefit them. Sadly, at many institutions, teaching faculty have become accustomed to feeling like mere data providers rather than data beneficiaries. If instructors believe that this data is sought only to fulfill a requirement of some kind (whether regional accreditation, annual program reporting, or periodic program review), they may be less willing to participate fully.
Consequently, it is essential to present outcomes reporting not as a requirement but as a vital tool for improving student learning and faculty teaching. Further, this type of assessment should be geared toward determining what is working for a course or program, not for an individual faculty member. While it will be possible to compare sections of the same course, the intent at the administrative level should be to see what is (or is not) working in the course itself. Outcomes reporting should never be approached as a means of evaluating faculty effectiveness; doing so will only tempt instructors to misreport outcomes data, defeating the entire purpose of this meaningful exercise.
4. Find quick victories
To harness the power of faculty in course-level outcomes reporting and assessment, quick victories can go a long way toward creating positive energy around assessment efforts. Often, the areas that could turn into quick victories are already known around campus, if not in your own office. When it comes to assessment data at the course level, your faculty likely have preconceived notions of what is working and what is not.
Whether they are examining their own pedagogical tendencies, course scheduling, or curriculum mapping, your faculty think about assessment more than they let on, and perhaps more than they realize. If particular faculty have posited theories, look at the data as soon as it becomes available to see whether those theories hold up. And don’t simply shoot the faculty member an email to discuss what you found. Instead, invite them in to see the analysis and, at the same time, to help figure out what questions arise next. These quick wins build credibility and will lead faculty members to share their enthusiasm with colleagues.
5. Devote early energy to detractors
As much as early believers need to be identified and rewarded for their trust, it is equally important to recognize the faculty members likely to oppose new assessment efforts, whatever their reasons. By identifying these individuals early in the process, you can attempt to draw them in or minimize the damage they might do within the faculty community. Nothing helps other faculty see the potential value in new processes or technologies more than watching their colleagues support your work.
For example, if one of your faculty members is notorious for opposing standardized assessment or the use of rubrics, find a way to draw them in and recruit them as a champion or early adopter. This will set the tone for other interactions. Not only do these efforts increase the odds of gaining full participation, but they also offer an opportunity to learn more about faculty concerns with assessment at your institution. Anticipating how every decision, email, or guide may be interpreted by a naysayer gives you the best chance of success in implementing course-level outcomes reporting.
6. Get students on board
While getting faculty on board is crucial for any outcomes reporting, students are often treated as afterthoughts in the process, or ignored entirely. Despite all of this reporting being centered on student learning, outcomes are measured almost exclusively through direct assessments. Thus, faculty can report that students are successfully meeting outcomes even when they would tell their peers (or perhaps the students themselves) that they don’t believe the students made that much progress.
If you think about outcomes reporting at its most theoretical level, students should be able to clearly articulate what they have learned in a course or over the sequence of an academic program. If the pedagogical design and teaching have been effective, then after a semester in the classroom, students should know what their teachers want and why they want it. Only by including students in assessment efforts, whether directly or indirectly, can educators truly measure learning in a meaningful way. Whether through departmental surveys, course evaluations, or student-led focus groups, bringing students into the process provides a fuller picture and gives more meaning to any assessment efforts conducted on a campus.
Conducting course-level outcomes assessment simply to check an accreditation box is rarely worth the effort, and alienating faculty by filling USB flash drives with data that never returns any value only harms future iterations. If outcomes reporting is to happen in a meaningful way, faculty power must be harnessed to build the best assessment process possible. The keys are an easy-to-use, faculty-friendly system that delivers value back to the people providing the data, careful messaging, identified champions, and converted detractors. Only then can assessment professionals create a program that serves everyone at an institution, most notably its students.
Will Miller, Ph.D.
Will Miller, Ph.D., leverages data best practices to help campuses make strategic decisions. He joined the Campus Labs team in 2016, after serving as a faculty member and senior administrator at Flagler College in Florida. There, as Executive Director of Institutional Analytics, Effectiveness, and Planning, he helped transform the campus-wide outcomes assessment process. He also served as Accreditation Liaison to the Commission on Colleges of the Southern Association of Colleges and Schools (SACSCOC). Before joining Flagler, he held faculty positions at Southeast Missouri State University, Notre Dame College, and Ohio University. His courses have explored topics in political science, public policy, program evaluation, and organizational behavior. His scholarly pursuits focus on assessment, campaigns and elections, polling, political psychology, and the pedagogy of political science and public administration.