What Makes an Online Exam an Exam? Student Perspectives on Assessment Practices at a Major Online University in the Pre-GenAI Era

Authors

  • Maria Aristeidou, Institute of Educational Technology, The Open University, Milton Keynes, UK — https://orcid.org/0000-0001-5877-7267
  • Simon Cross, Institute of Educational Technology, The Open University, Milton Keynes, UK
  • Klaus-Dieter Rossade, Faculty of Wellbeing, Education and Language Studies, The Open University, Milton Keynes, UK — https://orcid.org/0000-0003-4880-5011
  • Carlton Wood, Faculty of Science, Technology, Engineering and Mathematics, The Open University, Milton Keynes, UK — https://orcid.org/0000-0001-5567-1694
  • Terri Rees, Institute of Educational Technology, The Open University, Milton Keynes, UK — https://orcid.org/0009-0004-4138-5223
  • Patrizia Paci, Faculty of Science, Technology, Engineering and Mathematics, The Open University, Milton Keynes, UK

DOI:

https://doi.org/10.34190/ejel.23.4.4456

Keywords:

Higher education, Online assessment, Remote online exams, Distance learning, Student experience

Abstract

Several universities are evaluating the feasibility of adopting permanent online exam programmes, which makes it essential to evidence these decisions and to allocate appropriate resources to developing sustainable exam systems. Existing literature on online exams has focused primarily on closed-ended exam formats with immediate feedback, often overlooking other exam types, and most studies concentrate on the perspectives of on-campus students, leaving a gap in understanding the experiences of students studying remotely. This study addresses that gap by examining distance-learning students' experiences as they transitioned from traditional in-person exams to uninvigilated remote online open-book/open-web (OBOW) exams, prior to the widespread adoption of generative artificial intelligence (GenAI). The study therefore captures student experiences and institutional practices in a pre-GenAI landscape, unaffected by AI-generated content. As part of a larger project involving 562 distance-learning students, we conducted semi-structured interviews with 30 participants three years after the outbreak of the Covid-19 pandemic. Thematic analysis focused on three main areas: (a) students' considerations regarding the place and time of taking remote online exams; (b) their understanding of the changing nature of exams in the online context; and (c) their reflections on controlling the exam environment, with particular focus on invigilation, exam integrity, and the challenges students faced. Key findings indicate that students valued the greater flexibility, control, and accessibility of remote online exams but expressed anxiety about technology reliability. Unexpectedly, gender differences emerged in perceptions of cheating and exam integrity: female students emphasised the personal learning value of exams, while male students expressed greater concern about opportunities to cheat. Additionally, students were often unclear about what qualifies as an exam under the new, more flexible formats. These findings contribute to the theoretical understanding of online assessment by highlighting the complex interplay of authenticity, trust, fairness, and learner autonomy. They also point to practical challenges and opportunities in designing equitable, flexible, and sustainable assessment models for higher education. The study's contributions include illuminating the underexplored perspectives of distance learners in OBOW and pre-GenAI contexts, informing policy and pedagogy as universities continue to adapt assessment in increasingly online and hybrid landscapes. These insights can guide the development of institutional practices that balance technology, pedagogy, and student-centred design to enhance fairness and student confidence.

Published

3 Dec 2025
