The AI Dilemma: When Innovation Outpaces Integrity

Monday, October 20, 2025
AI-enabled educational systems can undermine judgment, blur incentives, and erode learning—unless ethical reasoning is built into their design.
  • Two case studies illustrate how AI-powered tools can make it easier for students to cheat and harder for instructors to assess student comprehension.
  • When using algorithmic tools and alternative assessments, administrators must ask whether they’re focusing on student learning or prioritizing information that’s easy to track.
  • AI-powered metrics are useful, but not definitive. Human insights and faculty judgment remain essential for interpreting and validating student learning.

In 2025, two very different academic committees at one institution found themselves confronting the same unsettling realization. Promising digital innovations powered by artificial intelligence (AI) were quietly producing ethical blind spots. One committee was overseeing an AI tutor platform designed to boost instructional efficiency. The other was wrestling with a spike in integrity violations linked to AI-assisted assessments.

Although the school has chosen to remain anonymous, both incidents—one operational, one policy-based—are real. And together they illustrate the risk institutions take when they chase performance metrics without building ethical reasoning into decision structures: The very tools they are using to advance learning can compromise it instead.

Case One: AI Tutors and the Illusion of Progress

The first incident occurred as the school was rolling out a flagship innovation—an AI tutor that would be used across postgraduate programs. Designed to scaffold pre-class learning, the platform introduced students to concepts before they were covered in class; by engaging with the material ahead of each session, students earned a numeric readiness score, their Pre-Learning Index (PLI). The platform also included a faculty dashboard and auto-generated quizzes to support Socratic-style discussion.

But within weeks, cracks appeared.


Faculty noticed that high PLI scores didn’t align with actual student understanding. Students quickly learned to game the system—copying generic answers and clicking through prompts—because their PLIs didn’t count toward their grades. What looked like engagement was often strategic compliance. Instructors entered class sessions expecting students to be familiar with course concepts; instead, they had to reteach basics. Time for discussion shrank. Frustration grew.

Most concerning: Administrative decisions about curriculum pacing, student support, and teaching strategy were increasingly shaped by AI dashboards, not human dialogue. A metric designed to signal learning was functioning as a surrogate for it. The AI tool had drifted from acting as a reflective support system to being a performative signal.

Case Two: Exams, Ethics, and the AI Arms Race

Meanwhile, another academic review committee was facing a different kind of AI dilemma. Over the past year, non-invigilated assessment formats—such as case submissions, peer reviews, and video pitches—had become the norm. So had student misconduct. These unsupervised formats, often powered by generative AI, accounted for nearly 73 percent of reported ethical violations.

The data the committee gathered raised deeper questions. Why were some programs reporting no integrity violations at all? Were they model departments—or black boxes with no oversight? Why were grade-change requests disproportionately clustered in one division?

And then came the memo.

A student whistleblower revealed that a peer had used AI tools to generate answer templates—initially for one class, but reportedly expanding to others. This student had begun selling the answers through encrypted messaging platforms. No detection system flagged the activity. Faculty were unaware. The assessments had already been submitted, graded, and archived. The incident only surfaced because a small group of students, disturbed by the behavior, anonymously reported it. They viewed it not just as a case of cheating, but as a betrayal of the trust that students place in one another to uphold shared standards—and as a signal that the integrity of the learning environment was under threat.

The committee came to see this case not as an isolated event, but as the most visible symptom of a deeper institutional vulnerability. AI-powered assessments, when introduced without adequate oversight or ethical scaffolding, could be quietly exploited in ways that no existing detection tools were equipped to catch.

The Common Thread: Institutional Design

These aren’t stories about rogue students or ineffective software. They’re stories about how institutional choices—about what to measure, how to assess, and when to intervene—can shape ethical climates and erode principled reasoning.

In both cases, surface-level signals crowded out substance. Students responded to the incentives in front of them. Faculty tried to recalibrate, but were operating inside systems optimized for efficiency, not integrity. Leadership, eager to show innovation, accepted positive metrics without probing their meaning.


The result? AI-enhanced systems that looked responsible on paper—but hollowed out ethical agency in practice.

The core issue here is not AI. It’s the absence of ethical scaffolding around its use. As business schools adopt algorithmic tools, alternative assessments, and data dashboards, they must design for complexity—not just convenience. This means they must ask hard questions:

  • Are we prioritizing what students should learn or what metrics are easiest to track?
  • Are faculty empowered to challenge misleading signals, or are they nudged to comply?
  • Are students being asked to navigate ambiguity, or simply to follow rules they don’t understand?
  • Most importantly, are we modeling the kind of judgment we claim to teach?

Lessons Learned About AI and Education

These two case studies yield four lessons for schools that want to embed ethics into their curricular and operational innovations.

Don’t confuse metrics with meaning. Readiness scores, quiz completions, and feedback ratings are useful—but not definitive. Ethical systems must allow for reflection, not just reporting.

Design for behavioral realism. Students act on incentives. If AI tools are ungraded, optional, or superficial, students will treat them as such. System design should reflect the behavioral truths we teach.

Make faculty judgment visible. When tools automate insight, faculty must reassert their role as interpreters and validators. No dashboard can replace the human ability to read context and nuance.

Elevate ethical reasoning in policy. Integrity modules, AI-disclosure policies, and real-time reflection points must be integrated into—not just appended to—the curriculum. Ethics isn’t a compliance box; it’s a capacity.

Responsible Innovation Requires Friction

The greatest risk business schools face in the age of AI isn’t that students will cheat. It’s that institutions will outsource ethical reasoning to systems not designed for it. By embedding signals without scrutiny, we teach students to perform trustworthiness, not to practice it.

Both of these case studies suggest that institutional ethics is not a value; it’s a design consideration. And design, like leadership, is a responsibility—one that cannot be left to automation.

As schools build toward AI-enhanced futures, we cannot simply innovate. We must question what our innovations ask of people—and whether those innovations encourage students to build judgment or bypass it.

Because in a world of smart systems and invisible shortcuts, the real test of ethics is not what we say—it’s what we scale.

Author
Muniza Askari
Assistant Professor of Economics, SP Jain School of Global Management
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.