Risks of AI Applications Used in Higher Education
DOI:
https://doi.org/10.34190/ejel.22.6.3457
Keywords:
Artificial intelligence (AI), Risk management framework (RMF), Higher education, Cybersecurity
Abstract
As artificial intelligence (AI) tools become more widely used in higher education, we must pay attention to the risks that can emerge. AI projects, whether applied in classroom learning or used for decision-making regarding admissions, financial aid allocation, or hiring, must address governance and compliance issues, regardless of the project’s scope and scale. Concerns highlighted in this work include transparency, user privacy, data confidentiality, data integrity, and system availability; however, we note that this is a non-exhaustive list of risks. In this paper, risk assessment is defined, and two examples of risk management frameworks are briefly described: the United States National Institute of Standards and Technology Artificial Intelligence Risk Management Framework, and the Independent Audit of AI, Algorithmic, and Autonomous Systems from the non-profit humanitarian effort ForHumanity. We identify characteristics of AI applications that need to be assessed for the vulnerabilities they may present, such as bias and discrimination. This paper aims to facilitate discussion among stakeholders about the risks that may be encountered when using AI in higher education, and to suggest ways developers, decision-makers, and users can mitigate these risks. Much of the discussion and published literature to date has focused on risk management frameworks designed for large organizations or enterprises, or on frameworks that do not consider risks specific to AI. We hope that decision-makers carefully consider the risks, perform due diligence when implementing AI applications, and create a plan for mitigating those risks. This research supports e-learning practice because students and faculty are embracing AI applications, and leaders and decision-makers in higher education need to be proactive in protecting their varied stakeholders.
The paper asks what risks institutions of higher education may encounter when using AI tools and products in the classroom and in various aspects of decision-making, and whether published frameworks can mitigate these risks.
License
Copyright (c) 2024 Donna Schaeffer, Lori Coombs, Jonathan Luckett, Marvin Marin, Patrick Olson

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Open Access Publishing
The Electronic Journal of e-Learning operates an Open Access Policy. This means that users can read, download, copy, distribute, print, search, or link to the full texts of articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, is that authors control the integrity of their work, which should be properly acknowledged and cited.