Artificial intelligence-enabled professional learning

Explores how AI-enabled tools, analytics systems, and simulation environments can be designed to structure, support, and strengthen teacher professional learning.

Impact on pupils: Promising
Impact on teachers: Mixed
Strength of evidence: Weak

What is it?

This strand examines how different forms of artificial intelligence are used within teacher education and professional development to structure and support teacher learning. The research base spans a range of technologies, including intelligent tutoring systems, learning analytics tools, AI-enabled educational robotics, simulation environments, and, in more recent studies, generative AI systems.

Importantly, much of the evidence predates the widespread use of large language models such as ChatGPT. In this strand, “AI” refers broadly to systems that analyse data, adapt feedback, model scenarios, or generate structured prompts to support professional learning. It does not refer exclusively to generative AI tools.

Educational robotics is included where programmable or semi-autonomous systems are used within professional development to help teachers reason through pedagogical decisions and reflect on aspects of their own practice. The emphasis is on how robotics supports teacher learning, not on pupil-facing robotics activities.

Simulation platforms refer to virtual or extended reality environments that model classroom interactions. Some use AI to generate responsive avatars or structured feedback; others are technology-mediated rehearsal spaces without advanced generative capability. They are treated here as structured rehearsal tools within professional learning programmes, not as substitutes for placement or mentoring.

Across these forms, the common feature is not a specific type of AI technology but the use of AI-enabled or AI-informed systems to organise feedback, scaffold reflection, analyse patterns in professional activity, or create structured rehearsal opportunities within professional learning.

This strand does not address general digital teaching tools, subject-specific instructional strategies, or broader classroom technology integration. Its scope is limited to how AI-enabled systems shape the design and experience of professional learning for teachers and trainee teachers.

Key findings

Impact on teachers

The evidence indicates that these AI-enabled approaches are most consistently linked to changes in teachers’ confidence and perceived readiness to handle professional decisions. Simulated rehearsal environments are associated with increased self-efficacy, as teachers practise responding to complex situations without the immediate consequences of a live classroom. However, these findings rely largely on self-report measures rather than observed behaviour. Professional learning that incorporates AI tools appears to support confidence when teachers receive guided human support alongside automated feedback, rather than working with AI outputs alone.

Several studies report perceived improvements in professional skills, but these are typically measured within the professional learning setting itself, for example through self-report or performance in simulations, rather than through observed or independently verified changes in classroom practice. Simulation-based work is associated with greater comfort in analysing classroom scenarios, identifying key features of interaction, and considering alternative responses. When teachers co-design or critically test AI-generated lesson or curriculum materials as part of professional learning, some studies report broader planning approaches, though evidence of sustained classroom change remains limited.

Knowledge development is also reported, primarily in relation to teachers’ technological and professional reasoning knowledge rather than subject content knowledge. Teachers describe clearer understanding of AI-related concepts or digital processes when tools structure information or provide analytic feedback. Learning analytics systems that visualise patterns in activity or reflection are associated with more deliberate professional reflection and greater awareness of how digital systems shape feedback and interpretation.

Where teachers gain direct experience with AI-based profiling tools, tutoring systems, or generative AI within structured professional learning, studies report clearer understanding of both the strengths and limits of these technologies.

Impact on pupils

Direct evidence of pupil impact in this strand is extremely limited. The large majority of studies, well over three quarters of those reviewed, focus solely on teacher learning and do not report measurable changes in pupils’ attainment, engagement, or experience. A small number, primarily within educational robotics research, describe perceived pupil benefits, but these are typically based on teacher report or short-term descriptive data rather than independent outcome measures. No study within this synthesis demonstrates sustained or longitudinal change in pupil outcomes attributable to these approaches.

Pupil benefit is therefore best understood as a possible downstream effect of teacher learning, not as an outcome demonstrated within the current research base.

How effective is the approach?

The evidence suggests that this strand offers some promising but uneven benefits. The most consistent findings relate to teachers’ immediate learning, particularly increased confidence, clearer insight into their own strengths and gaps, and improved understanding of professional concepts.

Some AI-based profiling and analytics tools can reliably identify patterns in teachers’ responses or activity data, for example highlighting recurring misconceptions in subject knowledge quizzes, patterns in reflective writing, or repeated gaps in planning decisions that may indicate areas for further support. Simulation environments often lead trainee teachers to report feeling more prepared to respond to challenging classroom situations.
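
To make the kind of pattern-spotting described above concrete, the short Python sketch below shows how a profiling tool might flag recurring misconceptions across a cohort’s quiz responses. It is a minimal, hypothetical illustration: the data, misconception labels, and threshold are invented for this example, and no tool in the reviewed studies is claimed to work this way.

```python
from collections import Counter

# Hypothetical cohort data: for each teacher, the misconception tags of the
# quiz items they answered incorrectly. Labels are invented for illustration.
wrong_answers = {
    "teacher_01": ["place_value", "fraction_equivalence"],
    "teacher_02": ["fraction_equivalence"],
    "teacher_03": ["place_value", "fraction_equivalence", "negative_numbers"],
    "teacher_04": ["fraction_equivalence", "negative_numbers"],
}

def flag_recurring_misconceptions(responses, threshold=0.5):
    """Return misconceptions missed by at least `threshold` of the cohort."""
    cohort_size = len(responses)
    counts = Counter(tag for tags in responses.values() for tag in set(tags))
    return {
        tag: count / cohort_size
        for tag, count in counts.most_common()
        if count / cohort_size >= threshold
    }

# Anything missed by half or more of the cohort is flagged so a facilitator
# can prioritise it in follow-up sessions.
print(flag_recurring_misconceptions(wrong_answers))
# e.g. {'fraction_equivalence': 1.0, 'place_value': 0.5, 'negative_numbers': 0.5}
```

The sketch also illustrates the division of labour the evidence points towards: the system surfaces a pattern, but judging whether it reflects a genuine gap, and what support should follow, remains with the facilitator.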

The evidence is less secure when asking whether these early gains lead to sustained changes in practice.

  • AI-supported profiling tools have not yet been shown to predict or improve later classroom performance.
  • Intelligent Tutoring Systems (meaning AI-based programmes that provide step-by-step tasks with automated feedback tailored to a teacher’s responses) can support understanding of specific concepts or activities. However, these gains do not consistently translate into broader professional growth. A simplified sketch of this feedback pattern follows the list.
  • Reported changes in practice following AI-related professional learning are limited and usually self-reported.
  • Simulation studies suggest improved awareness of classroom dynamics, but there is little independent evidence that they strengthen more complex interpersonal skills over time.
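
As a rough illustration of the intelligent-tutoring pattern described in the second bullet, the sketch below shows a rule-based loop that tailors feedback and a next task to a teacher’s written response. Real systems in the reviewed studies are far more sophisticated; the task, rules, and messages here are entirely hypothetical.

```python
# A deliberately simple stand-in for one intelligent-tutoring step: compare
# a written response against expected features and pick tailored feedback
# plus a follow-up task. Rules and wording are hypothetical.

FEEDBACK_RULES = [
    # (feature expected in the response, feedback if it is missing, next task)
    ("wait time",
     "Consider pausing after the question before taking answers.",
     "Rewrite the exchange with a deliberate pause built in."),
    ("follow-up",
     "A follow-up probe would show how secure the pupil's understanding is.",
     "Draft one follow-up question for the same pupil."),
]

def next_step(response: str) -> tuple[str, str]:
    """Return (feedback, next_task) keyed to the first missing feature."""
    text = response.lower()
    for feature, feedback, task in FEEDBACK_RULES:
        if feature not in text:
            return feedback, task
    return "That covers the key features.", "Move on to the next scenario."

feedback, task = next_step("I would ask the question and allow wait time.")
print(feedback)  # flags the missing follow-up probe
print(task)
```

Even at this toy scale, the loop makes the limitation visible: the system can only respond to features its designers anticipated, which may help explain why reported gains tend to stay concept-specific.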

Evidence relating to pupil outcomes is weakest: direct measurement is largely absent. Where pupil benefit is suggested, it is typically inferred from changes in teacher learning rather than demonstrated through independent attainment or engagement data, and no long-term pupil impact is evidenced.

These approaches appear more consistently effective for short-term teacher learning than for sustained behaviour change or pupil impact. Their effectiveness is shaped by context, design quality, and the still-developing nature of the research base.

How to implement it well

Evidence suggests these approaches are most effective when leaders attend to the quality of the professional learning environment, rather than to the technology itself. What appears to matter is how the work is framed, how teachers are supported to interpret unfamiliar tools, and whether the setting feels psychologically safe for experimentation and honest reflection.

Three areas appear particularly influential:

  • The behaviours of those leading and facilitating the learning (explored in the next section)
  • The wider organisational context in which the programme sits
  • The balance between structure (clear routines and shared frameworks) and flexibility (space for interpretation and adaptation) in programme design

These offer a strategic lens for leaders who want technology to strengthen professional learning rather than add unnecessary complexity.

Behaviours

A small number of behaviours by those leading or facilitating professional learning, such as professional learning leads, mentors, subject leaders, or external trainers, appear to influence how productively teachers engage with AI-enabled approaches. These behaviours affect how new tools are framed, how their outputs are interpreted, and whether teachers feel confident to explore them.

  • Set a low-stakes tone, positioning AI-enabled tools (such as analytics systems, tutoring platforms, and simulations) as spaces to test ideas, explore decisions, and discuss uncertainty without performance pressure.
  • Make the logic of AI visible, clarifying what tools draw on, what they can and cannot infer, and how outputs should sit alongside professional judgement.
  • Build structured reflection into the sequence, helping teachers connect their experiences with professional frameworks and their developing identity.
  • Maintain visible human facilitation, ensuring dialogue and interpretation accompany automated systems rather than being replaced by them.

These behaviours keep professional learning at the centre, with technology acting as support rather than the driver.

Contextual factors 

A small number of contextual conditions appear to influence how AI-enabled, analytics-supported, and simulation-based professional learning is received. These sit around the design rather than within it and help explain why similar approaches gain traction in some settings more readily than others.

  • Policy and organisational priorities can shape uptake, particularly where digital competence or AI literacy is already positioned as a strategic aim.
  • Technical reliability and access, including stable connectivity and appropriate platforms, influence how confidently teachers can participate.
  • Programme culture and framing matter. Approaches tend to sit more comfortably where rehearsal of practice, digital capability, or professional identity work are already normalised within teacher development.
  • Career stage and prior experience with digital systems appear to shape early engagement. Studies focusing on trainee teachers suggest that those earlier in training, or with lower digital confidence, may require more structured support to engage confidently with simulations or analytics tools.

These conditions do not determine success, but they help explain variation in how consistently these approaches support professional learning across settings.

Structured but flexible

Professional learning in this strand combines clear organising structures with room for adaptation. While this balance is not unique to AI-enabled approaches, the technologies involved introduce distinctive forms of structure that require careful interpretation.

  • Shared structures, such as AI-driven profiling frameworks, predefined simulation scenarios, or analytics cycles, provide consistent reference points for rehearsal and discussion. Unlike traditional professional learning tools, these structures are often algorithmically generated or data-informed.
  • Flexibility arises when teachers interpret AI outputs, select scenarios, or adapt data tasks to suit phase, subject, or experience level. This interpretive step is particularly important where outputs are probabilistic or modelled rather than fixed.
  • Blended or modular formats support pacing flexibility, but in AI-enabled systems they often combine automated elements with facilitated dialogue, requiring deliberate integration rather than simple sequencing.
  • Opportunities for repeat practice with variation are more easily engineered in simulation or analytics environments, allowing structured rehearsal without identical repetition.
  • Human facilitation plays a distinct role in contextualising automated feedback and surfacing ethical considerations that may not be visible within the system itself.

The evidence suggests that what differentiates this strand is not the presence of structure alone, but the interaction between algorithmic structure and professional judgement. Stable learning architecture remains important, but it must be paired with deliberate human mediation.

Barriers to effective implementation

The evidence identifies a limited number of constraints that can affect how securely AI-supported, analytics-informed, and simulation-based professional learning takes hold. These tend to relate to beliefs about the tools, features of programme design, and wider organisational conditions.

Belief and perception barriers

  • Limited trust in AI-generated outputs, particularly where processes are not transparent or appear to overlook professional judgement.
  • Ethical reservations about generative AI and analytics systems, including concerns about bias in training data, intellectual property, environmental impact, data privacy, and accountability.
  • Questions about the credibility of virtual simulations, which may reduce engagement if they are seen as artificial or peripheral.
  • Unease with unfamiliar technologies, especially in early stages of participation.

Design and feature barriers

  • Activities that feel detached from teachers’ contexts can weaken perceived relevance.
  • Insufficient time for reflection or debrief can limit consolidation of learning.
  • Technical instability or low-fidelity simulation environments can divert attention from professional focus.

System and organisational barriers

  • Variability in infrastructure and access to required platforms can affect delivery reliability.
  • Institutional caution, particularly where the evidence base is seen as emerging, may limit sustained commitment.
  • Competing priorities and limited protected time can restrict depth of engagement.

Evidence and governance barriers

  • Small-scale and short-duration research makes it difficult to isolate which design features drive outcomes.
  • Uncertainty around data privacy, consent, and bias may inhibit confidence in analytics-based approaches.

These constraints illustrate how belief, design, capacity, and governance factors can intersect, shaping the pace and depth of adoption.

Other considerations

A small number of wider themes may influence how leaders position professional learning that draws on AI-enabled systems, data-informed analytics tools, educational robotics, or simulation environments.

  • Transparency increasingly functions as a design expectation, with explainable AI helping teachers interpret outputs critically rather than accept them at face value.
  • Effective facilitation may require both technical understanding and pedagogical insight, raising questions about who leads this work and how specialist knowledge is translated into accessible professional learning.
  • These approaches can shape teachers’ developing professional identities and may introduce uncertainty, making emotional climate and psychological safety relevant design considerations.
  • Long-term viability, including platform stability, licensing models, and data governance, influences whether approaches can be sustained with confidence.

Strand summary

This strand examines how AI-enabled tools, analytics systems, and simulation environments are used to structure professional learning rather than classroom instruction. The evidence suggests these approaches create structured opportunities for teachers and trainee teachers to rehearse decisions, reflect on practice, and engage with professional knowledge in controlled settings.

The most consistent effects relate to short-term teacher learning, particularly:

  • Increased confidence
  • Sharper diagnostic awareness
  • Greater insight into developing professional identity

Evidence is less secure when considering sustained behaviour change. Many studies rely on self-report, small samples, or higher education contexts that do not fully mirror school-based professional development. Pupil effects are largely inferred rather than measured and remain speculative.

The research base also offers limited insight into longer-term reliability questions, such as how consistently teachers can evaluate AI-generated outputs, how often system errors occur in practice, or how these approaches compare in cost-effectiveness with other professional learning models.

For leaders of professional development, the message is one of measured caution. These approaches may offer structured ways for teachers to rehearse and reflect on aspects of practice that are difficult to simulate in live settings. However, their contribution remains partial and uneven, with little direct evidence of impact on pupils and limited insight into longer-term or unintended consequences.

The evidence base in this strand is smaller and less mature than in many other areas of professional learning, and it is dominated by short-term and higher education studies. As a result, claims about sustained behaviour change, system-level impact, or comparative advantage over other approaches cannot yet be made with confidence.

These methods are therefore best understood as emerging components within a wider professional learning strategy, rather than established or self-sufficient solutions.

When citing this strand, please use the following reference:

National Institute of Teaching (2026). NIoT Evidence Toolkit: Artificial intelligence-enabled professional learning strand.

In practice

We share practical ways teacher educators have used the evidence to inform the training and development of others, along with a range of recent, relevant research and resources.

References

This strand is based on 6 references.


Reference 1
Bond et al. (2025). AI applications in Initial Teacher Education: A systematic mapping review.
Years included: 2014-2024
Focus: ITE only
Number of studies: 138
Countries: Multiple countries
Impact on pupils: Not reported
Impact on teachers: Mixed
Reporting quality: Excellent

Reference 2
Dogan et al. (2025). Artificial intelligence professional development: a systematic review of TPACK, designs, and effects for teacher learning.
Years included: 2003-2024
Focus: CPD only
Number of studies: 13
Countries: Multiple countries
Impact on pupils: Positive
Impact on teachers: Positive
Reporting quality: High

Reference 3
Giannandrea et al. (2021). Teacher training on Educational Robotics: a systematic review.
Years included: 2016-2020
Focus: ITE & CPD
Number of studies: 16
Countries: Multiple countries
Impact on pupils: Positive
Impact on teachers: Mixed
Reporting quality: Medium

Reference 4
Ledger et al. (2022). Simulation platforms in initial teacher education: Past practice informing future potentiality.
Years included: 2009-2020
Focus: ITE only
Number of studies: 24
Countries: Not reported
Impact on pupils: Not reported
Impact on teachers: Mixed
Reporting quality: High

Reference 5
Salas-Pilco et al. (2022). Artificial intelligence and learning analytics in teacher education: A systematic review.
Years included: 2017-2021
Focus: ITE & CPD
Number of studies: 30
Countries: Multiple countries
Impact on pupils: Not reported
Impact on teachers: Mixed
Reporting quality: Medium

Reference 6
Theelen et al. (2019). Classroom simulations in teacher education to support preservice teachers’ interpersonal competence: A systematic literature review.
Years included: 2000-2016
Focus: ITE only
Number of studies: 15
Countries: Multiple countries
Impact on pupils: Not reported
Impact on teachers: Mixed
Reporting quality: Medium