Close-up of a businessman using a laptop in the office at night.

AI teacher tools display racial bias when generating student behavior plans, study finds

September 5, 2025
Pheelings media // Shutterstock

AI teacher tools display racial bias when generating student behavior plans, study finds

Asked to generate intervention plans for struggling students, AI teacher assistants recommended more punitive measures for hypothetical students with Black-coded names and more supportive approaches for students the platforms perceived as white, a new study shows.

These findings come from . Researchers specifically sought to evaluate the quality of AI teacher assistants 鈥 such as MagicSchool, Khanmingo, Curipod, and Google Gemini for Education 鈥 that are designed to support classroom planning, lesson differentiation, and administrative tasks, reports.

Common Sense Media found that while these tools could help teachers save time and streamline routine paperwork, AI-generated content could also promote bias in lesson planning and classroom management recommendations.

Robbie Torney, senior director of AI programs at Common Sense Media, said the problems identified in the study are serious enough that ed tech companies should consider removing tools for behavior intervention plans until they can improve them. That鈥檚 significant because writing intervention plans of various sorts is a relatively common way teachers use AI.

After Chalkbeat asked about Common Sense Media鈥檚 findings, a Google spokesperson said Tuesday that Google Classroom has turned off the shortcut to Gemini that prompts teachers to 鈥淕enerate behavior intervention strategies鈥 to do additional testing.

However, both MagicSchool and Google, the two platforms where Common Sense Media identified racial bias in AI-generated behavior intervention plans, said they could not replicate Common Sense Media鈥檚 findings. They also said they take bias seriously and are working to improve their models.

across the country have been working to implement comprehensive AI policies to encourage informed use of these tools. with the American Federation of Teachers to provide free training in using AI platforms. The Trump Administration has also . However, released by the U.S. Department of Education have not directly addressed concerns about bias within these systems.

About a third of teachers report using AI at least weekly, according to a national survey conducted by the . A separate found teachers specifically report using these tools to help develop goals for Individualized Education Program 鈥 or IEP 鈥 plans. They also say they use these tools to shape lessons or assessments around those goals, and to brainstorm ways to accommodate students with disabilities.

Torney said Common Sense Media isn鈥檛 trying to discourage teachers from using AI in general. The goal of the report is to encourage more awareness of potential uses of AI teacher assistants that might have greater risks in the classroom.

鈥淲e really just want people to go in eyes wide open and say, 鈥楬ey, these are some of the things that they鈥檙e best at and these are some of the things you probably want to be a little bit more careful with,鈥欌 he said.

Common Sense Media identified AI tools that can generate IEPs and behavior intervention plans as high risk due to their biased treatment of students in the classroom. Using MagicSchool鈥檚 Behavior Intervention Suggestions tool and the Google Gemini 鈥淕enerate behavior intervention strategies tool,鈥 Common Sense Media鈥檚 research team ran the same prompt about a student who struggled with reading and showed aggressive behavior 50 times using white-coded names and 50 times using Black-coded names, evenly split between male- and female-coded names.

The AI-generated plans for the students with Black-coded names didn鈥檛 all appear negative in isolation. But clear differences emerged when those plans from MagicSchool and Gemini were compared with plans for students with white-coded names.

For example, when prompted to provide a behavior intervention plan for Annie, Gemini emphasized addressing aggressive behavior with 鈥渃onsistent non-escalating responses鈥 and 鈥渃onsistent positive reinforcement.鈥 Lakeesha, on the other hand, should receive 鈥渋mmediate鈥 responses to her aggressive behaviors and positive reinforcement for 鈥渄esired behaviors,鈥 the tool said. For Kareem, Gemini simply said, 鈥淐learly define expectations and teach replacement behaviors,鈥 with no mention of positive reinforcement or responses to aggressive behavior.

Torney noted that the problems in these AI-generated reports only became apparent across a large sample, which can make it hard for teachers to identify. The report warns that novice teachers may be more likely to rely on AI-generated content without the experience to catch inaccuracies or biases. Torney said these underlying biases in intervention plans 鈥渃ould have really large impacts on student progression or student outcomes as they move across their educational trajectory.鈥

of suspension than their white counterparts in schools and more likely to receive harsher disciplinary consequences for subjective reasons, like 鈥渄isruptive behavior.鈥 Machine learning algorithms replicate the decision-making patterns of the training data that they are provided, which can perpetuate existing inequalities. A separate study found that AI tools replicate , assigning lower scores to Black students than to Asian students.

The Common Sense Media report also identified instances when AI teacher assistants generated lesson plans that relied on stereotypes, repeated misinformation, and sanitized controversial aspects of history.

A Google spokesperson said the company has invested in using diverse and representative training data to minimize bias and overgeneralizations.

鈥淲e use rigorous testing and monitoring to identify and stop potential bias in our AI models,鈥 the Google spokesperson said in an email to Chalkbeat. 鈥淲e鈥檝e made good progress, but we鈥檙e always aiming to make improvements with our training techniques and data.鈥

On its website, its AI teaching assistant as 鈥渁n unbiased tool to aid in decision-making for restorative practices.鈥 In an email to Chalkbeat, MagicSchool said it has not been able to reproduce the issues that Common Sense Media identified.

MagicSchool said its platform includes bias warnings and instructs users not to include student names or other identifying information when using AI features. In light of the study, it is working with Common Sense to improve its bias detection systems and design tools in ways that encourage educators to review AI-generated content more closely.

鈥淎s noted in the study, AI tools like ours hold tremendous promise 鈥 but also carry real risks if not designed, deployed, and used responsibly,鈥 MagicSchool told Chalkbeat. 鈥淲e are grateful to Common Sense Media for helping hold the field accountable.鈥

was produced by and reviewed and distributed by 麻豆原创.


Trending Now