What Makes an Online Exam an Exam? Student Perspectives on Assessment Practices at a Major Online University in the Pre-GenAI Era
DOI: https://doi.org/10.34190/ejel.23.4.4456

Keywords: Higher education, Online assessment, Remote online exams, Distance learning, Student experience

Abstract
Several universities are evaluating whether to adopt permanent online exam programmes, a decision that requires careful assessment and appropriate resourcing to develop sustainable exam systems. Existing literature on online exams has focused primarily on closed-ended exam formats with immediate feedback and on the perspectives of on-campus students, leaving a gap in understanding of other exam types and, importantly, of the experiences of distance-learning students. This study addresses that gap by examining distance-learning students' experiences as they transitioned from traditional in-person exams to uninvigilated remote online open-book/open-web (OBOW) exams, prior to the widespread adoption of generative artificial intelligence (GenAI). The study therefore captures student experiences and institutional practices in a pre-GenAI landscape, unaffected by AI-generated content. As part of a larger project involving 562 distance-learning students, we conducted semi-structured interviews with 30 participants three years after the outbreak of the Covid-19 pandemic. Thematic analysis focused on three main areas: (a) students' considerations regarding the place and time of taking remote online exams; (b) their understanding of the changing nature of exams in the online context; and (c) their reflections on controlling the exam environment, with a particular focus on invigilation, exam integrity, and the challenges students faced. Key findings indicate that students valued the greater flexibility, control, and accessibility of remote online exams but expressed anxiety about technology reliability. Unexpectedly, gender differences emerged in perceptions of cheating and exam integrity, with female students emphasising personal learning value and male students expressing greater concern about cheating opportunities. Additionally, students were often unclear about what qualifies as an exam under the new flexible formats. These findings contribute to the theoretical understanding of online assessment by highlighting the complex interplay of authenticity, trust, fairness, and learner autonomy. They also point to practical challenges and opportunities in designing equitable, flexible, and sustainable assessment models for higher education. The study's contributions include illuminating the underexplored perspectives of distance learners in OBOW and pre-GenAI contexts, informing policy and pedagogy as universities continue to adapt assessment in increasingly online and hybrid landscapes. These insights can guide the development of institutional practices that balance technology, pedagogy, and student-centred design to enhance fairness and student confidence.
License
Copyright (c) 2025 Maria Aristeidou, Simon Cross, Klaus-Dieter Rossade, Carlton Wood, Terri Rees, Patrizia Paci

This work is licensed under a Creative Commons Attribution 4.0 International License.
Open Access Publishing
The Electronic Journal of e-Learning operates an Open Access Policy. This means that users can read, download, copy, distribute, print, search, or link to the full texts of articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, is that authors control the integrity of their work, which should be properly acknowledged and cited.