Machine Learning in Art Teacher Education: A Comparative Analysis and Student Perceptions
DOI: https://doi.org/10.34190/ejel.24.2.4313
Keywords: Pre-service primary teachers, Visual arts education, Teaching practices, Artificial intelligence, Machine learning
Abstract
Amid the global push for digital transformation in higher education, there is a critical need for objective, scalable assessment tools in subjective disciplines such as the visual arts. Modern teacher education increasingly integrates intelligent technologies, yet the application of machine learning (ML) to formative assessment in art education remains underexplored. While ML offers scalable feedback, its capacity to evaluate subjective creativity remains contested. This study examines the technical accuracy of a CNN-based model trained on a local dataset of 300 archived projects, comparing its scores with instructor evaluations, and analyzes how future teachers (N = 180) perceive algorithmic feedback in assessment contexts. A mixed-methods design was employed, combining a highly reliable survey instrument (Cronbach’s α = .925) with a comparative scoring analysis across four key dimensions: Technique, Composition, Color, and Creativity. Results indicate moderate alignment between the model and human assessments for Technique (r = .426, p < .001) and Composition (r = .430, p < .001), weaker alignment for Color (r = .327, p < .001), and a notably weaker correlation for Creativity (r = .181, p = .015), indicating persistent limitations in modeling abstract artistic intent. ANOVA results revealed that students’ digital literacy significantly predicts their trust in the system (F = 3.547, p = .031) and willingness to use it (F = 8.476, p < .001). Furthermore, discrepancy analysis indicated systematic divergence across proficiency levels, with the model increasingly underestimating highly proficient students, particularly in cases involving stylistic deviation or non-standard cultural expression. The findings suggest that while the algorithm provides consistent, transparent scoring that enhances assessment literacy, it lacks the sensitivity to evaluate high-level originality due to standardization bias.
This study contributes to the field by empirically demonstrating the "accuracy–creativity trade-off" in ML-based art assessment and by validating a hybrid assessment framework that balances algorithmic precision with pedagogical intuition. The study concludes that ML tools should function as "human-in-the-loop" support systems rather than autonomous graders, fostering critical reflection and digital competence in future educators.
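The statistical procedures named in the abstract (per-dimension Pearson correlations between model and instructor scores, and a one-way ANOVA across digital-literacy groups) can be sketched as follows. This is a minimal illustration using synthetic data; the variable names, group sizes, and score distributions are assumptions for demonstration, not the study's actual dataset or analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 180  # number of participants, matching the reported sample size

# Synthetic instructor scores and correlated model scores for one dimension
instructor_scores = rng.normal(70, 10, n)
model_scores = 0.4 * instructor_scores + rng.normal(42, 9, n)

# Pearson correlation between model and human assessments
r, p = stats.pearsonr(instructor_scores, model_scores)

# One-way ANOVA: synthetic trust ratings for three hypothetical
# digital-literacy groups (low / medium / high)
low = rng.normal(3.0, 0.8, 60)
medium = rng.normal(3.3, 0.8, 60)
high = rng.normal(3.7, 0.8, 60)
f_stat, p_anova = stats.f_oneway(low, medium, high)
```

In a real analysis this would be repeated for each of the four scoring dimensions, with the resulting r values interpreted against conventional effect-size thresholds.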
License
Copyright (c) 2026 Botagoz Kystaubayeva, Gulmira Mailybaeva, Kairat Dzhanabaev, Ainur Ansabayeva, Elmira Kydyrbekova, Aivar Sakhipov

This work is licensed under a Creative Commons Attribution 4.0 International License.
Open Access Publishing
The Electronic Journal of e-Learning operates an Open Access Policy. This means that users can read, download, copy, distribute, print, search, or link to the full texts of articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, is that authors control the integrity of their work, which should be properly acknowledged and cited.