Research • 2026

Supporting Teacher Learning with Large Language Model-Simulated Student Dialogues

Exploring the potential of LLM-simulated student dialogues for teacher learning in science education.

Duration 6 months
Role Lead Researcher
Tools Used GPT-5, Process Mining

Project Overview

Providing students with opportunities to engage in peer discussion is crucial in science education. However, facilitating classroom discussion can be challenging for teachers, as they are not always able to anticipate and respond to students' ideas in the moment.

This research investigates Large Language Models (LLMs) as an accessible way to simulate students' science understanding in peer discussion. These simulated dialogues serve as a tool for teachers to anticipate potential student misconceptions and everyday reasoning, ultimately helping them improve their lesson plans.

Problem Statement & Context

Various teacher training tools aim to provide teachers with opportunities to analyze and build on students' ideas. However, these tools have significant limitations.

Key Challenge

Traditional teaching rehearsals and video analyses are costly because they rely on experienced teachers or teacher educators to enact the rehearsal. Furthermore, the content might not align with the specific lessons teachers are preparing.

Approach & Methodology

The research was structured into four distinct steps:

  • Dataset Creation: Selected discussion questions from 45 publicly available OpenSciEd lesson plans.
  • LLM Generation: Developed prompts for OpenAI's GPT-5 to generate four distinct dialogue types.
  • Analysis: Analyzed dialogues for the diversity and sequence of talk moves using process maps.
  • Teacher Interviews: Interviewed middle and high school teachers to articulate potential instructional insights.
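The LLM generation step above can be sketched as a small Python helper using OpenAI's Chat Completions API. The dialogue-type labels, prompt wording, and the `build_prompt` / `generate_dialogue` helpers below are illustrative assumptions, not the study's actual prompts:

```python
# Hypothetical sketch of the dialogue-generation step. The four dialogue-type
# labels and the prompt text are assumptions for illustration only.
DIALOGUE_TYPES = [
    "scientifically accurate",
    "common misconception",
    "everyday reasoning",
    "mixed understanding",
]  # assumed labels; the study's actual four types may differ

def build_prompt(question: str, dialogue_type: str, n_students: int = 3) -> str:
    """Assemble a prompt asking the model to simulate a peer discussion."""
    if dialogue_type not in DIALOGUE_TYPES:
        raise ValueError(f"unknown dialogue type: {dialogue_type}")
    return (
        f"Simulate a science class peer discussion among {n_students} students.\n"
        f"Discussion question: {question}\n"
        f"The students' ideas should reflect {dialogue_type} science understanding.\n"
        "Format each turn as 'Student <n>: <utterance>'."
    )

def generate_dialogue(client, question: str, dialogue_type: str) -> str:
    """Generate one simulated dialogue; `client` is an openai.OpenAI instance."""
    response = client.chat.completions.create(
        model="gpt-5",  # model name taken from the project description
        messages=[{"role": "user", "content": build_prompt(question, dialogue_type)}],
    )
    return response.choices[0].message.content
```

Running `generate_dialogue` once per discussion question and dialogue type would yield the corpus that the process-mining analysis (step 3) then examines for talk-move sequences.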