Alternate Assessment Framework

One of the challenges of remote instruction is balancing assessment of student learning with academic integrity. The current pandemic has made this more difficult and may require instructors to be especially creative.

This document shares a potential framework for developing alternative assessment strategies intended to substitute for the in-class multiple-choice/short-answer exams common in the sciences (as well as the social sciences and large GE courses) and for exams involving questions with symbolic and/or numeric solutions. This is not intended as a solution for all courses; rather, it serves as an example of a novel exam format designed in response to particular assessment expectations that may initially appear difficult to meet in a remote setting. I will outline the assessment expectations being considered and then provide a specific example that may meet these goals.

Example Assessment Expectations

  1. Assessments that focus on demonstrating the higher-order skills of analysis, critical thinking, problem solving, and communication of thought processes and solutions, as opposed to factual recall.
  2. Assessments that expect students to consult many sources to answer questions, similar to how we do in the real world. Traditionally, academic integrity for written work has focused on avoiding plagiarism rather than on preventing cheating between peers.
  3. Courses where grading is NOT based on a traditional curve with set numbers of A’s, B’s, etc. allocated to the class. Grading on some form of straight scale encourages students to collaborate rather than compete.

Example Assessment Strategy to Meet These Expectations

Design an exam with two parts:

Part One is a standard multiple-choice/multiple-select/fill-in-the-blank exam that can be administered through Canvas (a minimal scripting sketch for setting up such a quiz follows the list below).

  • Designed for efficient grading.
  • A specific time window is allotted for the exam to minimize cheating.
  • Allows students to use relevant materials, including the Internet, textbooks, or even Zoom collaboration with others in the course.
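
For instructors comfortable with scripting, Part One can also be created programmatically through the Canvas REST API rather than in the Canvas web interface. The sketch below is a minimal, illustrative example only: the base URL, course ID, API token, quiz title, and 50-minute time limit are placeholder values you would replace with your own, and the same setup can be done entirely through the Canvas quiz settings page.

    # Illustrative sketch only: create a timed, graded Part One quiz via the
    # Canvas REST API. BASE_URL, COURSE_ID, and API_TOKEN are placeholders.
    import requests

    BASE_URL = "https://canvas.example.edu"      # your campus Canvas instance
    COURSE_ID = 12345                            # your course ID
    API_TOKEN = "YOUR_CANVAS_API_TOKEN"          # generated under Account > Settings

    quiz_settings = {
        "quiz": {
            "title": "Midterm Part One (Multiple Choice)",
            "quiz_type": "assignment",   # graded quiz that appears in the gradebook
            "time_limit": 50,            # minutes allotted, to minimize cheating
            "shuffle_answers": True,     # vary answer order between students
            "published": False,          # review in Canvas before publishing
        }
    }

    response = requests.post(
        f"{BASE_URL}/api/v1/courses/{COURSE_ID}/quizzes",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=quiz_settings,
    )
    response.raise_for_status()
    print("Created quiz with ID:", response.json()["id"])

Questions can then be added to the quiz through the Canvas quiz editor as usual.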

Part Two requires students to explain their answers to the first portion of the exam in a short-answer/essay format. These explanations are checked for plagiarism with software such as Turnitin.com.

  • Designed to ensure that students can explain their answers in their own words, which is unlikely if they merely copied outside sources to complete the first portion of the exam.

Depending on the specific goals of your assessment, students can complete either part first, and each part can be designated as individual or collaborative work.

Grading

Part One: Students earn the points allocated to each question they answer correctly, as with a traditional exam.

Part Two: Using the plagiarism software report, the instructor determines whether the explanations provided by the student are (1) relevant and (2) not plagiarized. If both conditions are met, the student earns the score awarded in Part One. If either condition is not met, the instructor applies a rubric that includes penalties for failing to demonstrate understanding independently.
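
To make the grading rule concrete, here is a small illustrative sketch of the logic described above. The function name and the default penalty fraction are hypothetical; in practice, the penalty comes from the instructor's own rubric, and the relevance and originality judgments come from the instructor's review of the Turnitin report.

    # Illustrative sketch of the Part Two grading rule (not official policy).
    def adjusted_score(part_one_score: float,
                       explanation_relevant: bool,
                       explanation_original: bool,
                       penalty_fraction: float = 0.5) -> float:
        """Return the points kept after reviewing the Part Two explanation.

        penalty_fraction is a placeholder; each instructor sets penalties
        through their own rubric.
        """
        if explanation_relevant and explanation_original:
            return part_one_score                       # full Part One credit carries over
        return part_one_score * (1 - penalty_fraction)  # rubric-based deduction

    # Example: a 20-point question whose explanation was flagged as plagiarized
    print(adjusted_score(20, explanation_relevant=True, explanation_original=False))  # 10.0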

Overall, this assessment scheme eliminates the need for proctoring software, allows for the inclusion of higher-level questions, and enables students to use outside sources.

Additional questions? Feel free to contact:

Pavan Kadandale (pavan.k@uci.edu)

Brian Sato (bsato@uci.edu)

Michael Dennin (mdennin@uci.edu)