Mar 4, 2024
As a former teacher and The Screentime Consultant, I get asked a lot: “Won’t the presence of AI in schools lead to students cheating all the time?”
In short: This is the wrong question.
When I was a middle school English teacher, one student copied and pasted a summary from the author’s website for a book report on his summer reading assignment. Even in 2005, it was easy to identify the source. The 12-year-old’s use of the phrase “requisite sacrifice” in a flawless first paragraph, followed by a second paragraph riddled with typos and grammatical errors, also raised red flags.
We could go back further. In my own sophomore English class in 1993, I watched a classmate copy the Shakespearean soliloquy we were supposed to have memorized onto her test, verbatim, from a piece of paper in her hand.
Students have been cheating in school for a very long time. AI can make cheating easier and, at first, harder for a teacher to spot. But just as I caught that seventh grader’s obvious plagiarism nearly 20 years ago, an engaged teacher who knows their students and their students’ writing abilities well will know a ChatGPT-written paper when it shows up.
Worrying about cheating is not what parents and teachers should focus on when it comes to AI. Of course, “students have always cheated” isn’t an excuse to do nothing about AI either, because there are real harms. Just not the ones we’re thinking of. In my book, The Screentime Solution: A Judgment-Free Guide to Becoming a Tech-Intentional Family, I call this “knowing the difference between scary and dangerous.”
Knowing the difference between “scary” and “dangerous” is critical to becoming Tech-Intentional™. It means focusing on what causes true harm, not on perceived risk.
As an example, kidnapping is the third biggest fear parents have in America today. Yet kidnapping by a stranger is so rare as to be statistically insignificant. We perceive it as a true harm because of our own consumption of click-bait media that highlights the rare occurrences.
However, as a result of this fear, parents often dole out smartphones or smartwatches to track or stay in close contact with their children, without realizing that the #1 and #2 parental concerns in America are youth mental health and bullying – two very real dangers that become much, much more likely once a child has access to the internet or social media.
What a paradox!
How does “scary vs. dangerous” apply to cheating and AI?
When we fret about cheating, we’re focusing on the scary. Rampant cheating because kids learn to use ChatGPT is scary, but it’s not particularly dangerous.
The real danger of AI is in the way it promotes and accelerates the risks posed by social media, EdTech, and excessive screentime to our kids’ mental health and, in turn, our society’s ability to function.
The real risks of AI are not that different from the risks that have been with us since technology clawed its way into our classrooms and homes, but the stakes with AI are much higher because of its ability to mimic real-life interactions.
AI allows anyone with a computer and an internet connection to generate deepfake videos, AI-voiced robocalls, fake pornographic images, and misinformation (both intentional and unintentional) that can spread rapidly through the finely tuned algorithms that have only grown stronger in the past several years.
And while AI builds its knowledge base on existing data sets, it doesn’t know the difference between real and fake, truth and fiction. So, as we’ve seen with examples spat out by ChatGPT or, very recently, Google’s Gemini, it will “hallucinate” – make stuff up – but that stuff will look and sound very plausible. Even a discerning adult might be fooled; a still-developing child will have no reason to think it is anything but true and valid.
AI is also dangerous because of its power to erode our trust in institutions and our confidence in democracy – not because it can help a 9th grader cheat on an English paper. AI doesn’t create new risks; it accelerates risks we’ve known about and faced for years.
The good news is that the antidote to artificial intelligence’s harms is the same one we needed before ChatGPT entered the, um… chat: tech-intentionality.
The tools we need to address the threats presented by AI form the foundation of a Tech-Intentional™ movement:
Less is more. Less reliance on EdTech. Less time on learning management systems. Less emphasis on learning via computer (instead of just learning).
Later is better. Young children do not need a computer or tablet to learn how to read or count. They do need ample opportunities to play with peers in the real world, to build the skills that will later allow them to differentiate the fake from the real (guarding against one actual harm of AI).
Relationships first. Learning happens in the context of relationships. Tech-Intentional™ parenting and teaching come from a strong connection between adults and children. Trust in democratic institutions comes from being able to listen to, discuss, and debate ideas with our peers – even, and especially, when we don’t agree with them.
It’s not specifically AI in the classroom that we need to worry about; it’s the digital platforms and technology companies that fill our spaces and hours with content that does not prioritize critical skills or meaningful learning; the well-funded industry-based marketing machines that prey on our fears and push click-bait to generate profits; and the Silicon Valley voices and politicians who blame others and provoke further anxiety about the world, instead of providing and supporting meaningful and effective solutions.