
Resisting AI in Schools

It’s Time for Mass Refusal

May 14, 2025

[Image: Generative AI being used in schools]

A few months ago, I overheard my daughter, a seventh grader, on FaceTime imploring her friends not to use ChatGPT to write their history essays.

When I asked the district how they justified making Generative AI (widely acknowledged to be highly problematic) so readily available on school-issued devices, I was told that students “need access to this transformative technology.” My follow-up questions have yet to be answered.

A few weeks later, one of my daughter’s teachers told me she was overwhelmed by her large class sizes (36 students per class) and falling behind in grading student essays. When she asked the district for support, they recommended she use AI to assess student writing.

When I asked the district about this recommendation, I was told they are “unaware” of such guidance. I am continuing to ask for more information.

This is ridiculous. There are plenty of reasons to be concerned about the safety of Generative AI tools for children, of course, but we also have to ask why computer scientists working for powerful and wealthy technology companies get to have any input into how teachers should teach and children should learn. 

When did we abdicate this responsibility? When did we consent to treating children as widgets and teachers as automatons? When did we decide that this time around, the snake oil will be different?

It is time for us to refuse the use of Generative AI tools in schools, especially by children. 

I have been quoting Ian Malcolm, Jeff Goldblum’s character in Jurassic Park, a lot these days: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

In 2025 we are battling monsters of a different sort: Generative AI tools shoved hurriedly into K-12 education, with scant evidence of safety or efficacy, or even a clear understanding of what problem they are attempting to solve. (When teachers asked for support in reducing class sizes, for example, I hardly imagine they were seeking more technology-based solutions.)

First, a clarifying point: when we talk about “AI,” we are really talking about “Generative AI,” and that is itself a difficult concept to pin down. “Generative AI” includes products like ChatGPT (made by OpenAI, with heavy investment from Microsoft); Gemini (Google); and Claude (Anthropic).

So… What Is Generative AI, and What Does It Do?

I give full credit to Benjamin Riley of Cognitive Resonance for helping me attempt to explain this:

  • Generative AI works like autocomplete on steroids: it predicts the most likely next word in a sentence, based on complicated mathematical equations derived from how frequently words follow one another in its training data. Think of it as a tool that uses probability to predict what comes next, or math to predict language. (A toy illustration follows this list.)

  • Generative AI predicts text based on what it has been “trained” on. The trained systems are called LLMs, or “large language models,” and yes, their training data includes copyrighted material, intellectual property, and likely your own LinkedIn posts.

  • “Hallucinations” occur when GenAI “makes up” an answer when it can’t “find” it in its training set. Ben Riley thinks we should call these “confabulations”: when Generative AI provides an “answer” to you based on a “false memory.” 

  • When we enter a “prompt” into Generative AI, it searches through all the data it was trained on. Often, the predictions GenAI makes are untrue, but they are delivered with great confidence (a problem especially for a young child using the tool). For this specific task, GenAI actually works pretty well at spitting out fluent statements, regardless of their veracity.

  • However, what Generative AI can do is very, very different from what you and I do when we are thinking. When humans think, we don’t scan past knowledge to predict what thoughts will come next. We use context, previous experiences, feelings, and abstract thought to come up with ideas and new thoughts. As a result, GenAI is not “thinking.” GenAI is a tool of “cognitive automation”: it can indeed be used to automate certain tasks, but it can never operate as a thinking human.

  • Generative AI is a tool, and much like a car is a tool, we should understand how it works before we operate it. Currently, that is not what is happening in schools. Children are given access to ChatGPT and told to “use it responsibly.”
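
To make the “math to predict language” idea concrete, here is a minimal sketch of a toy bigram model in Python. Everything in it (the tiny training text, the predict_next helper) is invented for illustration, and real LLMs learn billions of numerical weights over sub-word tokens rather than counting whole words in a table; but the core move is the same: emit whatever is statistically most likely to come next.

    # A toy "autocomplete" (bigram) model. Illustration only; real LLMs use
    # learned neural-network weights over tokens, not a raw frequency table.
    from collections import Counter, defaultdict

    # A hypothetical, tiny "training set."
    training_text = "the cat sat on the mat the cat chased the dog the cat sat on the rug"

    # Count how often each word follows each other word in the training set.
    follow_counts = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent follower of `word` and its probability."""
        counts = follow_counts[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    word, prob = predict_next("the")
    print(f"After 'the', the model predicts '{word}' (probability {prob:.2f})")
    # Prints: After 'the', the model predicts 'cat' (probability 0.50)
    # The model answers with the same mechanical confidence whether or not
    # the prediction corresponds to anything true: it predicts, it does not know.

Scale this counting trick up to trillions of words of training text, swap the frequency table for learned weights, and you have the essence of the autocomplete on steroids that children are being handed.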

Why Should We Be Concerned About GenAI Tool Use in Schools?

There are numerous reasons to be concerned about Generative AI use in K-12 education.

  1. The goal of education is to engage students in a cognitive process. Generative AI displaces that cognitive process; it does the “thinking” for students. (As Ben Riley describes it, this would be like going to the gym to work out and then letting a forklift move the heavy weights around for you. ChatGPT is a “cognitive forklift.”)

  2. Our brains are not fully developed until we are well into our twenties or even thirties, so the temptation for students to use ChatGPT to do their work for them is significant, and their executive function skills are not developed enough to resist it. Students will use it to do their homework, write their essays, and cheat on their tests. Telling them to “use it responsibly” is irresponsible and completely misunderstands brain development.

  3. The use of Generative AI tools in school is an extremely slippery slope. Using Generative AI embedded in other EdTech tools is problematic enough. When those tools are offered to students as “tutors” or “buddies” (or worse, “therapists”), any adult concerned about the welfare of children should recognize the peril this presents. Adults are regularly duped by AI companions; children are far more vulnerable.

  4. As Ben Riley points out, if the world’s best computer scientists haven’t been able to solve the hallucination problem in tools like ChatGPT, then why on earth would we expect the far weaker Generative AI tools marketed to teachers not to hallucinate? Schools, as usual, get the rejected dregs of technology’s loser products (see: Chromebooks).

  5. We cannot keep attempting to solve every problem with technology. AI is a hammer in search of a nail. What problem is it really attempting to solve? Is this what teachers are asking for? Previously, “Bring Your Own Device” and “Flipped Classrooms” were attempts to “help” teachers via technological solutions, and those failed miserably. Why should we expect Generative AI tools to be any different, especially if they’re made by the same people?

  6. If we are increasingly recognizing that students’ personal devices (like smartphones and smartwatches) distract from the learning experience, then the next step is recognizing that EdTech tools like GenAI and 1:1 devices are equally distracting to the learning process.

  7. Personalized learning has done more harm than good. Schools may claim that GenAI allows for more personalized learning, but this isn’t a benefit. Learning happens in a communal environment, with other people and in the context of relationships with teachers. GenAI and “personalized instruction” siphon students into individual silos of thinking and prevent the cognitive conflicts and social struggles necessary to become thinking adults.

The way Generative AI is currently being deployed in schools is neither thoughtful nor intentional. If we truly want to explore the opportunities Generative AI tools might offer, then we need teachers who not only understand the tools for what they are, but who think about thinking and want to cultivate critical thinking in their students. GenAI is a woefully incomplete tool for teaching and dangerous for use by children.

At the end of the day, the goal of education is to engage learners in the cognitive process. GenAI displaces the cognitive process. Teachers are the experts of their students. Learning is collaborative, full of struggle, and not something that can be standardized and measured.

Questions to Ask Schools About Generative AI

A mass refusal of Generative AI is the quickest way to compel change, given how rapidly technology changes (and how slowly law and policy follow). We only need to look at where the Tech Elite send their own children to school (low-tech, nature-based schools), or at countries like Sweden and Finland that are rolling back their use of EdTech tools in the classroom, to see the writing on the wall: unless your school is moving away from internet-connected, Generative AI-enabled devices and tools now, it is moving in the wrong direction.

While mass refusal is a needed tool in this fight, I do believe strongly in the value of building relationships and working together to make change. School administrators who are open to conversations about the role of technology and Generative AI tools in school offer a starting point for change.

If your school administrators are open, here are some questions to pose:

  1. What problem is our school trying to solve by using Generative AI?

  2. Why is there such a sense of urgency to implement and use these tools? What is the risk if we decide to move slower?

  3. What evidence-based research did you use to make a decision to provide young children with such powerful and potentially dangerous tools?

  4. How is the use of Generative AI tools in our district or school in alignment with our school mission statement and goals? 

  5. What measures are in place to identify hallucinations and ensure that when they occur they will be countered with factual information, without further increasing teacher burden?

  6. How are teachers encouraged to use Generative AI tools? Are they using them to assess student writing?

What Gives Me Hope

It seems there are suddenly a lot of computer scientists who are now also experts on teaching and learning and who want us all to use Generative AI in our classrooms. I liken this to designing a surgical tool and then telling the surgeon that, because I designed the tool, she should let me perform the surgery.

That would be completely ridiculous, and that is how I feel about the sudden influx of “educational consultants” from Generative AI and EdTech companies who can suddenly fix education with their tools by turning me into a robot and my students into widgets.

That’s not how teaching works, and the implication that someone who may be brilliant in his own field (computer science) can step into mine (teaching) and tell me how to do it better makes my blood boil. I am the expert of my students, and each year, with each subsequent group of students, my teaching shifts and adapts to the needs and personalities of each new class. Remember, that computer scientist likely attended school in an era of physical books and paper and pencils, with a teacher helping him think about thinking itself and apply those ideas to the world around him, so that he could grow up and become a brilliant computer scientist.

Today’s college students likely spent the first five or ten years of their lives in a relatively low-screen world, where they experienced play and social interaction in the real world and tactile, three-dimensional experiences at school. When college students today use ChatGPT to write their essays, we are seeing only the beginning of a wave of children who encountered digital technologies earlier and earlier in childhood. Today’s “iPad kids” won’t hit higher education for another ten years. With declining functional literacy and critical thinking skills, how will this upcoming generation invent and innovate like the computer scientist who created the “education software” he is now trying to foist upon our children?

In spite of how dark the world can seem right now, I do have hope for a better future. Here are three areas that give me hope:

  1. Children are starting to rebel against all this technology in schools, for example by sticking pencils or paperclips into the USB ports of their Chromebooks. I don’t condone dangerous behavior, but Chromebooks are just internet browsers, not learning tools. If this is how children express their frustration with these products, then I take it as a signal that they want change too.

  2. My own university students are shocked by the things I tell them about modern-day education. They cannot believe kindergarteners have iPads or that middle schoolers are given access to ChatGPT by their own school administrators. They want things to be different for their own future children someday, and that speaks volumes.

  3. We don’t know what the future holds. It’s possible that the Generative AI enterprise will crumble and fall, but even if that’s the case, today’s students shouldn’t serve as collateral damage in the meantime. So we can fight back by refusing to allow our children to use any Generative AI tools at school. If enough parents say “No,” then schools will be forced to reconsider.

At the end of the day, we are fighting for teachers, teaching, and an educational system that raises critical thinkers, who will then grow up to be active participants in a democracy. 

That is worth fighting for.

Interested in an email template to opt your child out of any Generative AI products or tools at school? In our Tech-Intentional Movement for Education (T.I.M.E.) Collective, our members have access to downloadables of exactly that, plus many other useful resources. 

To watch my full interview with Ben Riley of Cognitive Resonance, you can view the webinar recording here.


UPDATED 5/15/25: Ben Riley read this essay and offered these clarifications:
“Hallucinations” occur when GenAI “makes up” an answer when it can’t “find” it in its training set. Ben Riley thinks we should call these “confabulations”: when Generative AI provides an “answer” to you based on a “false memory.”

Not exactly. We use "hallucinations" to describe when AI makes up an answer that we humans do not consider true. It's not trying to "find" anything, really -- these tools don't function like search engines. So there's nothing different happening when it hallucinates versus when it produces something that we consider true -- the process is the same.

When we enter a “prompt” into Generative AI, it searches through all the data it was trained on. Often, the predictions GenAI makes are untrue, but they are delivered with great confidence (a problem especially for a young child using the tool). For this specific task, GenAI actually works pretty well at spitting out fluent statements, regardless of their veracity.

Mostly correct, but again, generative AI doesn't search through the data it's been trained on. The training is used to assign the statistical weights to words (or more accurately, components of words called "tokens"). When we prompt an LLM, it uses the numerical weights assigned to the words we enter and then makes a prediction about what words to produce. No searching, just predicting.
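
To make “no searching, just predicting” concrete, here is a minimal sketch in the same spirit as the earlier one. The prompt, the candidate tokens, and the scores below are all invented for illustration (real models score tens of thousands of tokens using billions of learned weights), but the mechanism shown, converting learned scores into probabilities and emitting the likeliest token, is the one Ben describes.

    # Hypothetical scores ("logits") a model might assign to candidate next
    # tokens after the prompt "The capital of Australia is". Numbers invented.
    import math

    logits = {"Sydney": 2.0, "Canberra": 1.5, "Melbourne": 0.5}

    # Softmax converts the scores into probabilities; the model then emits
    # the likeliest token (or samples one). Nothing is looked up anywhere;
    # the learned numbers are all the "knowledge" there is.
    total = sum(math.exp(score) for score in logits.values())
    probabilities = {token: math.exp(score) / total for token, score in logits.items()}

    for token, p in sorted(probabilities.items(), key=lambda item: -item[1]):
        print(f"{token}: {p:.2f}")
    # Prints: Sydney: 0.55, then Canberra: 0.33, then Melbourne: 0.12

Notice that the wrong answer (“Sydney”; the capital is Canberra) comes out of exactly the same arithmetic that produces right answers. That is Ben’s point about hallucination: it is not a separate malfunction to be patched out, just prediction doing what prediction does.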