Teaching and Governing AI Security in Academic Environments: A Review of Models, Gaps, and Educational Directions
Abstract
The accelerating digitalisation of higher education has produced an expansive and heterogeneous attack surface that traditional signature-based controls can no longer contain. Artificial intelligence (AI) has emerged as a central pillar of institutional cyber defence, yet the body of knowledge on how universities should simultaneously teach AI security, govern its deployment, and operationalise it within constrained academic environments remains fragmented across computer science, education research, and policy studies. This article presents a PRISMA-guided systematic review that synthesises 168 peer-reviewed studies published between 2019 and 2025, of which 42 were analysed in depth across technical, methodological, practical, ethical, and conceptual dimensions. A four-dimensional taxonomy is proposed — covering AI methodology, security application domain, deployment architecture, and evaluation rigour — and is applied to the full corpus to reveal systemic patterns. Quantitative analysis shows a compound annual growth rate of 24.1% in publication volume, the displacement of traditional machine learning by deep learning architectures (58% of 2025 studies), and a persistent misalignment between research emphasis on network-layer defence (41%) and the operational reality that phishing remains the dominant attack vector in academic environments. Performance benchmarking across methodology categories demonstrates an inverse correlation between technical sophistication and operational deployability (r = −0.61, p < 0.01), with deep learning architectures scoring lowest on edge feasibility (4.3/10). Gap analysis identifies adversarial vulnerability (66%), unrealistic evaluation datasets (61%), and legacy-system integration (57%) as the most prevalent deficiencies, while sustainability receives negligible attention (6%). The discussion translates these findings into concrete educational directions, including an interdisciplinary curricular model, a governance framework aligned with FERPA, GDPR, and regional regulations, and a five-stage institutional roadmap. The article argues that the defining research agenda for the next phase of the field is not further algorithmic novelty but the holistic, deployment-conscious, equity-aware integration of AI security into the academic mission.
