Explores how AI-enabled tools, analytics systems, and simulation environments can be designed to structure, support, and strengthen teacher professional learning.
This strand examines how different forms of artificial intelligence are used within teacher education and professional development to structure and support teacher learning. The research base spans a range of technologies, including intelligent tutoring systems, learning analytics tools, AI-enabled educational robotics, simulation environments, and, in more recent studies, generative AI systems.
Much of the evidence predates the widespread use of large language models such as ChatGPT. In this strand, “AI” refers broadly to systems that analyse data, adapt feedback, model scenarios, or generate structured prompts to support professional learning. It does not refer exclusively to generative AI tools.
Educational robotics is included where programmable or semi-autonomous systems are used within professional development to help teachers reason through pedagogical decisions and reflect on aspects of their own practice. The emphasis is on how robotics supports teacher learning, not on pupil-facing robotics activities.
Simulation platforms refer to virtual or extended reality environments that model classroom interactions. Some use AI to generate responsive avatars or structured feedback; others are technology-mediated rehearsal spaces without advanced generative capability. They are treated here as structured rehearsal tools within professional learning programmes, not as substitutes for placement or mentoring.
Across these forms, the common feature is not a specific type of AI technology but the use of AI-enabled or AI-informed systems to organise feedback, scaffold reflection, analyse patterns in professional activity, or create structured rehearsal opportunities within professional learning.
This strand does not address general digital teaching tools, subject-specific instructional strategies, or broader classroom technology integration. Its scope is limited to how AI-enabled systems shape the design and experience of professional learning for teachers and trainee teachers.
The evidence indicates that these AI-enabled approaches are most consistently linked to changes in teachers’ confidence and perceived readiness to handle professional decisions. Simulated rehearsal environments are associated with increased self-efficacy, as teachers practise responding to complex situations without the immediate consequences of a live classroom. However, these findings rely largely on self-report measures rather than observed behaviour. Professional learning that incorporates AI tools appears to support confidence when teachers receive guided human support alongside automated feedback, rather than working with AI outputs alone.
Several studies report perceived improvements in professional skills, but these are typically measured within the professional learning setting itself, for example through self-report or performance in simulations, rather than through observed or independently verified changes in classroom practice. Simulation-based work is associated with greater comfort in analysing classroom scenarios, identifying key features of interaction, and considering alternative responses. When teachers co-design or critically test AI-generated lesson or curriculum materials as part of professional learning, some studies report broader planning approaches, though evidence of sustained classroom change remains limited.
Knowledge development is also reported, primarily in relation to teachers’ technological and professional reasoning knowledge rather than subject content knowledge. Teachers describe clearer understanding of AI-related concepts or digital processes when tools structure information or provide analytic feedback. Learning analytics systems that visualise patterns in activity or reflection are associated with more deliberate professional reflection and greater awareness of how digital systems shape feedback and interpretation.
Where teachers gain direct experience with AI-based profiling tools, tutoring systems, or generative AI within structured professional learning, studies report clearer understanding of both the strengths and limits of these technologies.
Direct evidence of pupil impact in this strand is extremely limited. The large majority of studies, well over three quarters of those reviewed, focus solely on teacher learning and do not report measurable changes in pupils’ attainment, engagement, or experience. A small number, primarily within educational robotics research, describe perceived pupil benefits, but these are typically based on teacher report or short-term descriptive data rather than independent outcome measures. No study within this synthesis demonstrates sustained or longitudinal change in pupil outcomes attributable to these approaches.
Pupil benefit is therefore best understood as a possible downstream effect of teacher learning, not as an outcome demonstrated within the current research base.
The evidence suggests that this strand offers some promising but uneven benefits. The most consistent findings relate to teachers’ immediate learning, particularly increased confidence, clearer insight into their own strengths and gaps, and improved understanding of professional concepts.
Some AI-based profiling and analytics tools can reliably identify patterns in teachers’ responses or activity data, for example highlighting recurring misconceptions in subject knowledge quizzes, patterns in reflective writing, or repeated gaps in planning decisions that may indicate areas for further support. Simulation environments often lead trainee teachers to report feeling more prepared to respond to challenging classroom situations.
The evidence is less secure when asking whether these early gains lead to sustained changes in practice.
Evidence relating to pupil outcomes is weakest; direct measurement is largely absent. Where pupil benefit is suggested, it is typically inferred from changes in teacher learning rather than demonstrated through independent attainment or engagement data, and no long-term pupil impact is evidenced.
These approaches appear more consistently effective for short-term teacher learning than for sustained behaviour change or pupil impact. Their effectiveness is shaped by context, design quality, and the still-developing nature of the research base.
Evidence suggests these approaches are most effective when leaders attend to the quality of the professional learning environment rather than to the technology itself. Three areas appear particularly influential: how the work is framed, how teachers are supported to interpret unfamiliar tools, and whether the setting feels psychologically safe for experimentation and honest reflection.
These offer a strategic lens for leaders who want technology to strengthen professional learning rather than add unnecessary complexity.
Behaviours
A small number of behaviours by those leading or facilitating professional learning (for example, professional learning leads, mentors, subject leaders, or external trainers) appear to influence how productively teachers engage with AI-enabled approaches. These behaviours affect how new tools are framed, how their outputs are interpreted, and whether teachers feel confident to explore them.
These behaviours keep professional learning at the centre, with technology acting as support rather than the driver.
Contextual factors
A small number of contextual conditions appear to influence how AI-enabled, analytics-supported, and simulation-based professional learning is received. These sit around the design rather than within it and help explain why similar approaches gain traction in some settings more readily than others.
These conditions do not determine success, but they help explain variation in how consistently these approaches support professional learning across settings.
Structured but flexible
Professional learning in this strand combines clear organising structures with room for adaptation. While this balance is not unique to AI-enabled approaches, the technologies involved introduce distinctive forms of structure that require careful interpretation.
The evidence suggests that what differentiates this strand is not the presence of structure alone, but the interaction between algorithmic structure and professional judgement. Stable learning architecture remains important, but it must be paired with deliberate human mediation.
The evidence identifies a limited number of constraints that can affect how securely AI-supported, analytics-informed, and simulation-based professional learning takes hold. These tend to relate to beliefs about the tools, features of programme design, and wider organisational conditions.
Belief and perception barriers
Design and feature barriers
System and organisational barriers
Evidence and governance barriers
These constraints illustrate how belief, design, capacity, and governance factors can intersect, shaping the pace and depth of adoption.
A small number of wider themes may influence how leaders position professional learning that draws on AI-enabled systems, data-informed analytics tools, educational robotics, or simulation environments.
This strand examines how AI-enabled tools, analytics systems, and simulation environments are used to structure professional learning rather than classroom instruction. The evidence suggests these approaches create structured opportunities for teachers and trainee teachers to rehearse decisions, reflect on practice, and engage with professional knowledge in controlled settings.
The most consistent effects relate to short-term teacher learning: increased confidence and perceived readiness to handle professional decisions, clearer insight into teachers’ own strengths and gaps, and improved understanding of professional and technological concepts.
Evidence is less secure when considering sustained behaviour change. Many studies rely on self-report, small samples, or higher education contexts that do not fully mirror school-based professional development. Pupil effects are largely inferred rather than measured and remain speculative.
The research base also offers limited insight into longer-term reliability questions, such as how consistently teachers can evaluate AI-generated outputs, how often system errors occur in practice, or how these approaches compare in cost-effectiveness with other professional learning models.
For leaders of professional development, the message is one of measured caution. These approaches may offer structured ways for teachers to rehearse and reflect on aspects of practice that are difficult to simulate in live settings. However, their contribution remains partial and uneven, with little direct evidence of impact on pupils and limited insight into longer-term or unintended consequences.
The evidence base in this strand is smaller and less mature than in many other areas of professional learning, and it is dominated by short-term and higher education studies. As a result, claims about sustained behaviour change, system-level impact, or comparative advantage over other approaches cannot yet be made with confidence.
These methods are therefore best understood as emerging components within a wider professional learning strategy, rather than established or self-sufficient solutions.
When citing this strand, please use the following reference:
National Institute of Teaching (2026). NIoT Evidence Toolkit: Artificial intelligence enabled professional learning strand.
We share practical ways teacher educators have used the evidence to inform the training and development of others, and a range of recent relevant research and resources.
This strand is based on 6 references.