The Academic Integrity Crisis Nobody Knows How to Solve
I need to be careful writing this. I'm a TA for a class of over 200 students at Northeastern, and I've spent the last semester watching the collision between generative AI and academic integrity play out in real time. I don't have clean answers. Nobody does. But I have a front-row seat, and I think the conversation deserves more nuance than it's getting.
What I've Seen
Let me describe a few situations without identifying anyone.
A student submits a homework assignment on data cleaning in pandas. The code is correct, well-commented, and uses a method we didn't cover in class. The variable naming style is different from their previous submissions. When I ask about it in office hours, they can explain what the code does but stumble when I ask why they chose that approach. They probably used ChatGPT. But "probably" isn't proof, and I'm not a detective.
Another student comes to office hours every week. They're struggling with the material, clearly working hard, and making incremental progress. One week, their assignment is suddenly flawless. Not just correct, but elegant. The jump is jarring. I don't say anything because I genuinely don't know what happened. Maybe they had a breakthrough. Maybe they got help. Maybe they used an AI tool as a learning aid and actually understood the output. I can't tell, and that uncertainty is the whole problem.
A third student tells me directly that they used ChatGPT to help debug their code. They want to know if that's okay. I appreciate the honesty, but I don't have a clear answer because the department's policy is still evolving and the line between "AI as a learning tool" and "AI doing your homework" is genuinely blurry.
The Detection Problem
Let me be blunt about something. AI-generated text detectors do not work reliably. I've tested several, including GPTZero and OpenAI's own (now discontinued) classifier. They produce false positives on non-native English speakers at alarming rates. In a class where a significant share of the students are international, as I was until recently, that's not just inaccurate. It's discriminatory.
I watched a professor at another university publicly accuse a student of using AI based on a detector score. The student had written the assignment themselves. The detector flagged it because their writing was "too uniform," which, for someone writing in a second language with carefully learned grammar, is entirely normal.
You cannot build an academic integrity system on tools that punish students for writing carefully in their non-native language. Full stop.
The Real Crisis
Here's what I keep coming back to. The problem isn't that students are cheating; students have always cheated. The problem is that we are testing students on tasks that AI can now do trivially: write a function to sort a list, explain the difference between supervised and unsupervised learning, clean this dataset and produce summary statistics.
If ChatGPT can do your homework in 30 seconds, maybe the homework is testing the wrong thing.
This isn't an excuse for academic dishonesty. It's an observation about how the value of certain skills has shifted. Writing boilerplate code is no longer a meaningful test of understanding. Explaining a concept in paragraph form is no longer proof that someone understands it, because an LLM can generate that explanation too.
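To make that concrete, here is roughly what a "clean this dataset and produce summary statistics" homework reduces to in pandas. The toy DataFrame and its values are made up for illustration; a real assignment would load a CSV, but the core logic is still just a handful of method calls that any LLM can emit instantly.

```python
import pandas as pd

# Toy stand-in for a homework dataset (values are invented for illustration).
df = pd.DataFrame({
    "age": [25, None, 31, 47, 31],
    "income": [52000, 61000, None, 88000, 61000],
})

# "Clean this dataset": drop rows with missing values, then remove duplicates.
clean = df.dropna().drop_duplicates()

# "Produce summary statistics": a single method call.
summary = clean.describe()
print(summary)
```

The entire assignment is three lines of logic. That's the point: the task measures whether you can type `dropna` and `describe`, not whether you understand missing data or distributions.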
What I'd Change
If I were designing a course from scratch, here's what I'd do differently.
More project-based assessment. Instead of weekly problem sets, give students a semester-long project where they build something real. It's much harder to fake a project you have to present and defend. The process matters as much as the output.
More oral components. A five-minute conversation about someone's code tells you more about their understanding than any written submission. I've started doing informal code walkthroughs in my office hours, and the difference in signal is enormous. You can't ChatGPT a live conversation.
Emphasize process over output. Require students to submit their git history, their debugging logs, their iteration notes. Make the journey part of the grade. This teaches good engineering habits and makes it much harder to simply paste in a generated solution.
Teach students to use AI tools well. This might be the most controversial one. Rather than banning ChatGPT, teach students how to use it as a learning accelerator. How to prompt it for explanations. How to verify its output. How to use it for debugging without letting it do the thinking for you. These are skills they'll need in industry regardless.
The Uncomfortable Truth
The uncomfortable truth is that universities are slow and AI is fast. Policies take semesters to draft and approve. AI capabilities improve monthly. By the time a university finalizes its ChatGPT policy, the technology will have moved on to something the policy doesn't cover.
I don't think there's a clean solution. What I do think is that the current approach of trying to detect and punish AI use is a losing game. The energy would be better spent redesigning assessments so that understanding, not output, is what gets measured.
I'm finishing up my TA appointment this semester. I've learned more about teaching from grappling with this problem than from any pedagogy guide. The students who genuinely engaged with the material, who came to office hours, asked questions, made mistakes, and learned from them, are going to be fine regardless of what tools exist. It's the system that needs to catch up.