Ethan Snow

Date of Award

January 2019

Document Type

Dissertation
Degree Name

Doctor of Philosophy (PhD)


Biomedical Sciences

First Advisor

Kenneth Ruit


Reliable measurement of student learning and delivery of comparable education across distributed campus sites are two significant challenges facing institutions across the country. Evidence-based practices for learning objective (LO) development and use can help overcome comparability challenges, but widely used correctness-only assessment methods contribute to these challenges because they can only interpret correct answers as displays of complete knowledge and incorrect answers as displays of absent knowledge. Assessment instruments that measure correctness alone cannot distinguish guesswork (i.e., when a student lacks knowledge but randomly chooses the correct answer), partial knowledge (i.e., when a student has learned some correct information but does not display complete knowledge), or flawed knowledge (i.e., when a student has learned incorrect information) – all of which are significantly different performances from complete or absent knowledge yet go undetected when examining correctness alone. Confidence-based assessments (CBAs) use a multi-dimensional method of assessing knowledge that measures student confidence in each answer choice in conjunction with answer correctness. As a result, CBAs can detect complete, partial, absent, and flawed knowledge levels and distinguish guesswork from other correct responses.

This dissertation presents a novel use of CBA principles in an individualized remediation strategy implemented in high-stakes examinations for three cohorts of professional-level students in OT 422 (Anatomy for Occupational Therapists), a course taught simultaneously across two University of North Dakota campus sites. The variables in this study included individualized (i.e., different for each student) vs. standardized (i.e., same for all students) remediation interventions, self-assessment vs. instructor-derived feedback, and general motivations and learning strategies. These variables were hypothesized to influence learning via remediation and final grades, both between individual students and between the two site populations. The following hypotheses were tested:

1. A confidence-based, individualized remediation strategy increases student learning.

2. Self-assessment of confidence-based academic performances increases student learning via remediation.

3. Student motivations, learning strategies, and academic performances are comparable across distributed campus site populations.

Student learning, measured as the difference in confidence-based performance levels (PLs) through remediation, was shown to increase by one knowledge level (1-2 PLs) following the individualized remediation intervention (p < 0.001) and resulted in achievement-level performances for 47 (65.3%) of the 72 LOs retested by each student (p < 0.001). Because the intervention can detect flawed knowledge and guesswork, regular positive remediation of these performances to improved but still incorrect confidence-based PLs caused student grades to decrease by an average of 1.2% (p < 0.001) and resulted in a lower final letter grade for 17.4% of students (p < 0.001). No significant differences in learning were found between self-assessment and instructor-derived feedback. Despite differences in two motivations (Self-Efficacy for Learning and Performance, and Test Anxiety) and three learning strategies (Rehearsal, Metacognitive Self-Regulation, and Peer Learning) across distributed campus site populations (p < 0.01), comparable final percentage and letter grades suggest the effectiveness of the evidence-based practices used to develop the course and to implement individualized assessments across distributed campus site populations.

In summary, the confidence-based, individualized remediation strategy we employed increases student learning by using CBA principles to more reliably assess student knowledge, and using evidence-based assessment practices to evaluate student learning helps ensure the delivery of comparable education across distributed campus sites. Outcomes from this study support educators’ ongoing efforts to overcome the challenges of reliably measuring student learning and of providing comparable yet individualized education to distributed populations.