By the time I post this essay, the situation around AI will have changed again. That’s why it can feel futile to chime in too specifically on “what’s happening with AI” in schools (or in our personal inboxes– ahem, Gemini) right now.
So instead I would like to think more broadly about AI and technology use in education, especially in K-12 settings. We’re hearing a lot about what we “can” do with AI and why we should all buy into the techno-optimistic view of the future, but as Ben Riley writes in his excellent essay, current (American) policy around the topic is an act of “national intellectual suicide.”
Let me explain. Last week I posted on LinkedIn that children using ChatGPT to write their middle school essays should be cause for grave concern about the future of democracy, and wow did the techno-optimists come out of the woodwork to tell me all the ways I’m wrong and misguided about this. (That post, by the way, has over 76K impressions and nearly 500 reactions at the time I’m writing these words, and has resulted in a podcast interview request and a newspaper interview, so…I’m not complaining.) But one of the more stunning objections– that children simply need “prompts” to use AI effectively in school– included this proposed scenario (bold mine):
“If there is no guidance, then when they [students] pick up a phone, they go to [the] camera and games, but if there are ‘prompts’ like ‘collect pictures of all your garden flowers and use Google lenses to learn their names, then add them to slides and use ChatGPT to create an image that combines flowers with their origin geographical location, prepare a 10 min presentation about it using Canva.’ That sort of thing.”
You know, like you did when you were in kindergarten <sarcasm>.
All joking aside (and I’m actually deadly serious), this is a dangerous line of thinking.
I’ve written about AI in education previously and I’ve been keeping tabs on the latest news, but I find myself growing increasingly frustrated by a few things about how these conversations get framed.
Let me try to unpack a few of those frustrations here:
Show Me the (Non-Industry Funded) Data. It is heavily assumed and argued in the tech world that AI in education is a net benefit to children and learning. Any objection to this line of thinking gets me labeled a “Luddite” (which isn’t actually an insult– read Blood in the Machine: The Origins of the Rebellion Against Big Tech for the history of that word) and told that I, personally, should somehow have to defend why it is “bad” instead of demanding independent research, or simply reasoned logic, for why it is “good.” Simply put, I’d love for someone, without a financial stake in AI or EdTech, to please answer this question: “What does AI actually do for children and learning?” (And to me, saving me twenty bucks because an A.I. can “analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6” isn’t really that compelling of an argument for “benefits to learning.”)
Educators Are Experts Too. Secondly, as is often the case with so-called experts who think they can revolutionize an industry on the strength of their own personal training and expertise, the expertise of actual educators– rooted in decades of excellent research, hands-on practice, and deep knowledge of child development and learning– is yet again being dismissed. Ignoring the wisdom and knowledge of those who know teaching and learning is akin to designing a surgical tool and then demanding that a surgeon let me perform the surgery, simply because I made the tool. Wrong. That’s not how this works.
One recent example out of the UK is the suggestion that AI can be used to help teachers with marking (aka “grading” in the U.S.). As a former teacher, I find this appalling. When I graded 100+ 7th grade essays during my teaching days (all by hand, of course), there was no way I could have standardized my methods, because the “grades” I gave– even with a rubric and letters and points– were still first and foremost rooted in what I knew about each individual student and their individual capabilities, and reflected the progress they made from one point in time to another. That was perhaps the biggest shock to me about becoming a teacher– the subjectivity of grading. For those who find this a surprise, I am sorry to tell you– grading essays is not an objective process, even if there are components that can be objectively assessed. If the goal of assessment is to help a student improve their skill set, then I will always be evaluating their progress in the context of their previous work (which, not shockingly, is how learning works– making new meaning in the context of previously understood knowledge). Why do tech-bros and AI-vangelists think that they can standardize this process and get a better outcome? If I were a student, I can’t imagine I’d appreciate this type of assessment. As a former teacher, I hate it.
Why Are We Taking Arsenic? I’ve often quoted my colleague Dr. Jared Cooney Horvath, who has said about EdTech and AI in education, “Why are we asking, ‘What is the best way to take arsenic?’ rather than ‘Why are we taking arsenic in the first place?’” This is exactly it. AI in education (and much of EdTech more broadly) is a solution in search of a problem. We don’t give kindergarten students iPads because they learn to read better on an app; we give them iPads because the pressure from the tech industry, some parents, and the financial commitments made by districts mean that a tool must get used, whether or not it’s actually beneficial. (ChatGPT isn’t banned in our district, for example, so students are given access to “this transformative technology” on their school computers but expected to use it “thoughtfully.”) I hear over and over again that AI can help “overworked teachers” reduce their prep time or lesson development or grading, but never do I hear AI enthusiasts asking how reducing class sizes might benefit those teachers more. (As I wrote about recently, my daughter’s 7th grade English class has 36 students in it. If all those students have 1:1 internet-connected computers, then how can a teacher possibly monitor what’s going on on all those screens? She can’t, so the district purchased GoGuardian to monitor them, which comes with a whole host of new and different problems. Using tech to solve a problem that tech created is not a sustainable or effective solution.) We should stop looking for better ways to take arsenic and ask instead why we’re taking it in the first place.
You Can’t Care About the Climate and Promote AI in the Same Breath. This isn’t talked about enough– the environmental impact of AI technologies, in terms of energy consumption, is vastly greater than that of a “simple” Google search. Many children are deeply concerned about the changing climate, and have been told from a young age that devastating change is coming (and has arrived). Additionally, generative AI can deliver mis- and dis-information that promotes climate change denialism. The World Economic Forum’s recent Global Risks Report (2025) ranks the spread of mis- and dis-information at the top of its short-term risks. So how can we adults in one breath tell young people to care about the planet while handing them products and tools that contribute to the ongoing decimation of that same planet? If you are at all concerned about your community and environment in the coming years, you cannot also be pro-AI.
This IS About Money. It’s not a new story, but a familiar one: this is, as with so many things in a capitalist economy, about money. Big Tech has known for years that growth (at all costs, no pun intended) comes from clicks and engagement, which means ads (the high cost of “free” products for users). It may not be our first thought about the company, but Google is actually one of the largest advertising platforms in the world. Persuasive design (aka manipulative technology) is built into the social media apps, games, and platforms we use to keep us clicking, scrolling, and engaging for longer. Big Tech companies hire developmental psychologists to make their products more compelling to children; their executives, meanwhile, send their own children to low- and no-tech schools. If Big Tech and EdTech companies wanted to make their products safer for children (and users in general), they could; it is a choice not to, and that choice has everything to do with profit margins and growth. With China putting up newer, faster, and more economical options, the U.S. is only going to continue its push for better, faster, smarter, cheaper…even at the expense of our future citizenry (or result in “national intellectual suicide,” per Ben Riley’s essay). For all those techno-optimists commenting on my LinkedIn post, it’s very, very hard to take you seriously when I can look at your bio and see that you have a vested financial interest in the success of these products. Sorry, but I’d like a second opinion.
So what can we do? It often feels like things are moving so quickly it’s hard to know where to jump on the merry-go-round.
In my advocacy work around technology in education, AI included, we start with three important questions that have everything to do with the HOW and not so much with the WHAT (because the WHAT constantly changes):
Is it safe?
Is it effective?
Is it legal?
Let’s unpack these a little:
Is it safe? “Digital safety” is an industry in and of itself. But safety for whom? From what? Often parents use digital tools as a way to track their children in the real world. But when it comes to EdTech platforms or AI tools, “safety” must include conversations beyond just porn-blocking and address a few critically important things– the data collected by the platform or tool (and where it goes and how it is stored); the hallucination rate (a huge issue for AI currently– “hallucinations,” i.e. instances where AI models produce false or misleading information, can occur at rates anywhere from 3-30%!); and the displacement of human and real-world skills in favor of digital ones. Even if a child knows an AI tutor is a bot and not a person, the neurochemicals in the brain react as though that child were interacting with a real human– and at what cost?
Is it effective? As I stated above, I’m tired of having to prove that AI and EdTech are problematic from a learning and development standpoint. Are there a few transformative ways technology can be incorporated into education? Sure, but that’s true of welding and woodworking and driver’s education too, and we’re not handing kids blowtorches and table saws and bus keys in the name of “exposure” to these other technologies. Here and here are some examples of why it’s not all it’s cracked up to be. Can you honestly claim that a 7th grader having ChatGPT write her essay for her is “learning” how to be a better writer?
Is it legal? I often joke that I’m an accidental activist. I didn’t mean to file a lawsuit against my school district. But I see addressing this complex problem as something like dismantling a Jenga tower, piece by piece. It’s going to take parents’ efforts to manage screentime on the homefront; schools implementing bell-to-bell phone bans and rethinking 1:1 internet-connected device programs; and policy and litigation to bring about changes that make the experience of being a child in school a meaningful one. Right now, it’s debatable– at best– whether the current use of AI and EdTech in schools is legal, let alone safe and effective.
Currently, the state of EdTech and AI in K-12 schools is neither safe, nor effective, nor legal.
Until the answer to all three of these questions is a resounding YES, we are left with only two options: accept or refuse.