Adopt a Streamlined Approach to Learning Analysis Through Juried Assessment
There are many ways to approach learning assessment, each with its own merits. At any given institution you’ll find a variety in play, and the best institutional effectiveness practices allow programs and departments to choose the methods that work best for their learners and learning environments, while also bringing the results of those methods together into a meaningful, holistic understanding of learning.
Easier said than done! That’s what makes one particular approach, often referred to as juried assessment, such a valuable one to have in your playbook.
With this method, artifacts of learning are collected from multiple places (for instance, a set of courses in a program or department, or a set of workshops or student experiences). These artifacts are then provided to a set of evaluators (not the original instructor or facilitator), who each score the same artifacts using a common rubric. The result is a multi-scorer evaluation of an anonymous sampling of artifacts from a variety of learning experiences, and findings that help us view learning achievement holistically.
This technique can be used in a number of ways. For instance:
- General education assessment: Sampling writing assignments from gen ed courses across the curriculum.
- Program assessment: Collecting final projects in capstone courses, like business plans or art portfolios.
- Experiential education assessment: Reviewing lesson plans from student teacher placements.
- Co-curricular assessment: Reading reflection papers from multiple service-learning trips such as Alternative Spring Break.
There are many benefits to this assessment model:
- Holistic view of student learning: Seeing student learning collectively across courses or interventions shapes your understanding of learning as a whole, rather than in isolated pockets.
- Equity-centered practice: Incorporating techniques like anonymous review of artifacts, multiple evaluators bringing multiple perspectives, and disaggregation of results by student demographics allows you to minimize bias and identify inequities.
- Dual impact: A single student assignment can serve as an artifact for evaluation at both the course level and the program or institution level.
- Stakeholder engagement: The secondary evaluation step engages faculty and other stakeholders in assessment work outside their own classrooms or departments, which fosters collaboration and multidisciplinary dialogue, sharing, and buy-in.
It’s likely that this model is already being used at your institution, but you may find even more opportunities to leverage this technique. To maximize the impact of your existing juried assessment process (or to launch a new one), consider these tips:
- Streamline the process to be as efficient as possible. Look for artifacts that already exist and can be exported from a portfolio system, an LMS, or another piece of assessment technology on campus.
- Keep student artifacts anonymous to evaluators, but add an identifier behind the scenes so you can disaggregate results by student attributes (like demographics) or learning environment attributes (like hybrid vs. in-person courses, or site locations of field placements).
- Randomly assign artifacts to evaluators (ideally automatically!) to reduce bias; the first sketch after this list shows one way to automate this alongside the anonymous identifiers described above.
- Use a common rubric or achievement-level scale so you can compare results across time, groups, and outcomes.
- Provide guidance on appropriate assignments. (Check out the NILOA assignment library for examples.) Make sure students have access to the rubric in advance of assignment submission.
- Ensure evaluators are properly trained, including on recognizing evaluator bias. Keep equity-centered assessment practices in mind when you design your training.
- Allow for asynchronous evaluation. Invest in technology that allows your evaluators to log in and score artifacts on their own time, as opposed to requiring them to be present on a particular day.
- Conduct agreement checks, escalating any score discrepancies for discussion and resolution; a simple agreement-check sketch also follows this list. Use these discussions to further refine your rubric and training practices.
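If your campus doesn’t yet have a tool that automates these steps, the underlying logic is simple enough to prototype. Below is a minimal Python sketch of anonymous, randomized assignment; the field names, the eight-character anonymous IDs, and the default of two evaluators per artifact are illustrative assumptions, not a prescribed standard or a description of how any particular product works.

```python
import random
import uuid

def anonymize_and_assign(artifacts, evaluators, reviews_per_artifact=2, seed=None):
    """Pseudonymize artifacts and randomly assign each to multiple evaluators.

    artifacts: list of dicts, each with a 'student_id' key plus any demographic
               or learning-environment attributes needed later for disaggregation.
    evaluators: list of evaluator names or IDs.
    Returns (assignments, key): `assignments` maps each evaluator to the
    anonymous IDs they will score; `key` maps anonymous IDs back to the
    original records and stays behind the scenes with the coordinator.
    """
    rng = random.Random(seed)
    key = {}                                  # anonymous ID -> original record
    assignments = {e: [] for e in evaluators}

    for artifact in artifacts:
        anon_id = uuid.uuid4().hex[:8]        # hypothetical 8-char anonymous ID
        key[anon_id] = artifact
        # Send each artifact to several distinct, randomly chosen evaluators.
        for evaluator in rng.sample(evaluators, reviews_per_artifact):
            assignments[evaluator].append(anon_id)

    return assignments, key
```

Because the key never leaves the coordinator’s hands, evaluators see only anonymous IDs, while the assessment office can still join completed scores back to demographics or course modality for disaggregation.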
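Agreement checks can be automated in the same spirit. Assuming integer rubric levels and a tolerance of one level between scorers (both illustrative choices), this sketch computes the exact-agreement rate and flags artifacts whose scores diverge enough to warrant a reconciliation discussion:

```python
def agreement_report(scores, max_gap=1):
    """Summarize scorer agreement and flag discrepancies for discussion.

    scores: dict mapping anonymous artifact ID -> list of integer rubric
            scores, one per evaluator.
    Returns (exact_rate, flagged): the share of artifacts where all
    evaluators agreed exactly, and the artifacts needing reconciliation.
    """
    exact = 0
    flagged = []
    for anon_id, ratings in scores.items():
        if max(ratings) == min(ratings):
            exact += 1                           # all evaluators agreed exactly
        elif max(ratings) - min(ratings) > max_gap:
            flagged.append((anon_id, ratings))   # escalate for discussion
    return exact / len(scores), flagged

# Example: scores of 4 and 2 differ by more than one level, so that
# artifact is flagged; 3 and 3 count toward the exact-agreement rate.
rate, to_discuss = agreement_report({"a1b2c3d4": [4, 2], "e5f6a7b8": [3, 3]})
```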
Interested in launching your own juried assessment process? Hear how Molloy College did just that by leveraging Anthology Collective Review.
Anthology Collective Review simplifies the large-scale collection of student work, whether across a single program or your entire campus, and ensures data validity by integrating randomized sampling, anonymous review, automated agreement reports, and powerful reporting and data disaggregation features.
Annemieke Rice
A self-professed data geek, Annemieke has spent her time at Anthology (formerly Campus Labs) helping guide and educate member campuses in their journey to use data more effectively. In doing so, she has consulted with hundreds of higher education institutions seeking to accelerate practice in areas including student success, learning assessment, and institutional effectiveness. She arrived at Campus Labs via early member campus Northeastern University, where her responsibilities provided her with first-hand experience in strategic planning, retention initiatives, strategic enrollment management, educational technology strategy, and accreditation.
She earned a bachelor of arts in behavioral neuroscience and journalism from Lehigh University and a master of science in applied educational psychology from Northeastern University. A prolific and engaging speaker, she has presented at more than 100 national and regional forums.