

INNOVATION INSIGHTS

Teaching machines to detect student emotions isn’t enough, research shows

10-04-2026 | UNF staff

Artificial intelligence can now detect whether a student is bored or engaged. It can detect if they’re anxious or uninterested and flag if they’re zoned out or locked in. But knowing how a student feels means nothing if it doesn’t change how they’re taught. New research from University of Niagara Falls Canada argues that emotion-sensing artificial intelligence in higher education has advanced faster than the ethical safeguards and pedagogical grounding needed to make it educationally meaningful.

Emotional artificial intelligence (E-AI), as described by research co-author Professor Mohammad Sedighi, refers to systems that use signals such as facial expressions, voice, text, or physiological data to estimate how a student may be feeling during learning.

“In classrooms, the goal is usually to support teaching through adaptive feedback, early warnings, or more responsive digital learning environments,” explained the Master of Management professor.

When teaching in person, professors can often rely on visible cues to detect emotion and adjust their instruction accordingly. As learning shifts toward hybrid and online formats, those cues can become obscured and harder to detect. That shift has motivated growing interest in E-AI, which can detect emotional states through multimodal sensing and machine learning. Sedighi teamed up with Sathy Srithar, Associate Dean of the Master of Management program, to conduct a systematic review of research around E-AI and its use in higher education.

Looking at empirical and grey literature spanning 2010-2025 across six major databases, they identified three historical research waves and an interpretable map of the field organized around four dominant research lenses: technical, pedagogical, theoretical, and ethical.

“What surprised me most was the imbalance in the field,” said Sedighi. “The technical side has advanced quickly, but theory integration and ethical safeguards still lag behind. We also found that many studies still treat recognition accuracy as if it automatically means educational value, which is a major gap.”

Current research, they found, tends to treat facial expressions as universal indicators of emotion and underreports cultural bias. Describing the field as technically advanced but conceptually fragmented, Sedighi explained that a system might detect confusion or disengagement from webcams or chat logs yet offer no clear theory of what that emotion means or what instructors should do next.

“In other words, the technology may be sophisticated but the educational logic behind it is often weak or inconsistent,” he said, adding that bias auditing shows some student populations may face disproportionate risks. “Especially those whose emotional expression may be interpreted differently across cultural or demographic contexts. That can include international students and equity-deserving groups, because the literature shows uneven performance across populations and underreporting of subgroup fairness.”

Sedighi and Srithar’s findings appear in the article “Emotional Artificial Intelligence in Higher Education: A Systematic Review”, set to be published in an upcoming volume of The Internet and Higher Education. In it, they propose a maturity framework positioning E-AI as a socio-technical system that must be theory-anchored, pedagogy-first, bias-aware, and ethically governed.

“A responsibly governed system would need privacy-by-design, explicit and understandable consent, bias audits, transparency documents such as model cards, human oversight, and a cross-functional ethics review structure before adoption. Universities should not move forward with deployment unless these governance and accountability mechanisms are already in place,” said Sedighi, noting most Canadian universities are still closer to isolated prototypes or coordinated stages rather than fully embedded or adaptive ecosystems.

“The field is moving forward, but many initiatives remain pilot-based and have not yet achieved institution-wide, theory-grounded, and ethically governed maturity.”

Srithar said working with Sedighi on this research allowed them to explore how educators can bridge the gap between digital instruction and the “unseen” emotional needs of students.

“I wanted to help redefine E-AI not just as a tracking tool, but as a pedagogy-first system that prioritizes cultural awareness and ethical safeguards.”

At a time when most discussions of AI in education focus on chatbots, plagiarism detection, or personalized learning, this research tackles something far more intimate. Looking at the technically sophisticated but conceptually fractured field of AI systems that read students’ emotional states in real time, Sedighi and Srithar offer a structured solution and a map for a responsible path forward.