AI in Education Is a Wicked Problem
~ And Why We Can’t Figure It Out

The misuse debate misses the point.
Schools can’t control AI misuse because everyone depends on the machine.
Teachers use it to save time.
Students use it to save effort.
Tech companies keep building faster tools for both.
Every attempt at regulation fixes one issue and creates another.
That’s what makes this a wicked problem ~ something that can’t be solved because the act of solving it changes what it is.
AI misuse in education isn’t a passing issue ~ it’s unmanageable by design.
AI didn’t break the system.
It revealed how automated it already was.
Why AI Misuse in Education Can’t Be Solved Like Other School Problems
Most school problems can be managed with clear rules or metrics ~ test scores, attendance, or budgets.
AI doesn’t fit the mold.
It moves too fast and cuts across every layer of the system.
Understanding ‘Wicked Problems’ ~ and Why AI Fits the Definition
The term, coined by Horst Rittel and Melvin Webber (1973), describes a problem that ~
1. Has no clear definition or endpoint
2. Can’t be measured objectively
3. Evolves faster than systems can adapt
4. Produces new problems when you try to solve it
Poverty, climate change, social media addiction ~ and now, AI in the classroom.

Why No One Owns the AI Problem in Education
Education is a network of competing goals.
Students want grades.
Teachers want fairness and less grading.
Parents want strong transcripts.
Administrators want compliance.
Tech companies want more users.
No one defines the problem, but everyone contributes to it.
The answer to “what counts as learning?” depends on who’s answering.
Teachers Use AI Too ~ And That’s the Real Story
Everyone’s using AI ~ just not for the same reasons.
Students write essays with it.
Teachers draft assignments and lesson plans.
Administrators write memos warning against AI misuse.
One Florida high school recently banned ChatGPT for student essays.
A week later, teachers there used it to write lesson plans.
Who broke the rules?
The machine hasn’t replaced education ~ it’s becoming part of it.
Teachers often rely on the same tools they warn students against ~ for lesson plans, rubrics, and feedback.
A teacher uses ChatGPT to summarize essays ~
outsourcing judgment in the same way a student outsources authorship.
That overlap isn’t hypocrisy ~ it’s survival.
AI offers relief, and both sides take it.
Why Detection and Governance Fail for the Same Reason

Every attempt at control creates a new loophole.
A Stanford study found that AI detectors frequently flag human writing as AI-generated, especially writing by non-native English speakers (Liang et al., 2023). These errors can unfairly damage reputations and institutional trust.
The line between “using AI” and “cheating with AI” shifts constantly. A student who brainstorms with ChatGPT isn’t necessarily breaking rules.
But when does help become substitution?
When does efficiency replace understanding?
The balance changes with every update, making enforcement impossible.
You can’t regulate what you can’t clearly define or observe.
Policy can’t keep up.
Even if Congress passed a law tomorrow, rapid development would render it a symbolic gesture. Both detection tools and governance structures fail for the same reason ~
Responsibility diffuses until it disappears.
How AI Is Replacing Judgment ~ Not Just Labor

When a student asks AI to “write an essay in my voice,” they’re not just skipping work. They’re giving away the decision of what counts as “good.”
Teachers do something similar.
Using AI for grading or feedback saves time but shifts judgment to a system that can’t read nuance or intent.
Everyone ends up relying on the machine to define quality.
What’s being lost isn’t morality ~
it’s the habit of thinking through a task yourself.
Education already rewards output over process ~ AI is just the next logical iteration.
Why the AI Misuse Problem Can’t Be Solved ~ Only Redefined

Rules always trail behavior.
Detection tools always lag the newest models.
The only sustainable response is to redefine what learning looks like when anyone can produce polished text instantly.
AI is a wicked problem, but education doesn’t have to treat it as a threat.
The focus could shift toward practical assessments ~ proving understanding through discussion, collaboration, or in-person demonstrations instead of text.
Until then, schools will keep looping through the same pattern ~
Students automate writing, teachers automate grading, and institutions automate enforcement.
AI didn’t break education ~ it’s holding up a mirror.
Incorporating AI can improve accessibility, especially for students with learning disabilities, but it can’t replace critical thinking, reasoning, or collaboration.
If we don’t like what we see, maybe the issue isn’t the machine ~
maybe it’s that we stopped thinking for ourselves.