Jan 26, 2023
When trains first came out, critics worried that women’s uteruses would fly out of their bodies at speed. Obviously, this did not happen, and trains are only faster today (uteruses intact).
It’s common with new forms of technology to panic about potential negative impacts. Today’s latest technological panic is about ChatGPT, a generative artificial intelligence (AI) tool designed to interact with humans.
I decided to try it out. You enter a prompt or a question, and ChatGPT, a large language model that predicts text, spits back an answer. I asked the bot to write a short essay about Seinfeld in the style of Lord of the Rings, and it obliged convincingly.
As a former teacher, my first reaction was, “Oh no. This is going to be bad for teaching.” Kids are going to cheat, creativity is going to fly out the window, and worse, as I later learned, ChatGPT doesn’t distinguish between fact and fiction. If it doesn’t know something, it makes it up. That seemed troublesome.
But to practice what I preach (aka “tech-intentionality”), I realized I needed to frame this differently, so I applied my framework of “scary” vs. “dangerous” to ChatGPT.
Scary vs. Dangerous: A Framework
Something that is scary (and rare) makes the headlines. Scary risks are frightening things that could happen but probably won’t. We hear about them constantly anyway, and because of that constant exposure (they are the most click-baity of click-baits, after all), our perception of these scary-but-rare things is skewed.
We end up thinking the scary things are actually dangerous.
But they are not. Something that is dangerous (but common) rarely makes the headlines. Yet data show that dangerous harms (like slipping in our bathtubs) are genuinely detrimental to our health and well-being. Because we don’t see headlines about things like bathtub deaths in our news feeds (common things, even if dangerous, are not very click-baity), we minimize their seriousness. We don’t worry about them the way we worry about the scary things.
Here is what is SCARY about ChatGPT:
- It is scary to think that ChatGPT could replace writing assignments in K-12 education.
- It is scary to think kids will use a tool like ChatGPT to cheat.
- It is scary to think that your kids will have to understand AI in order to succeed in school.
My colleague Dr. Jared Cooney Horvath created a brilliant video about ChatGPT and why our fears might be displaced. He makes two key points about why he isn’t too worried about ChatGPT in K-12 education:
1. Teachers of K-12 students know their students well, so teachers will know when a writing assignment is “off.” (As an example, I had a 12-year-old student 15 years ago use the phrase “requisite sacrifice” in the first flawless paragraph of an English homework assignment, followed by a typo-riddled second paragraph. I knew immediately something wasn’t right.)
2. Writing is a process: we start with brainstorming, then outlines, then rough and final drafts. Rarely are K-12 students expected to generate an essay without a process attached.
(It is true, as Dr. Horvath points out, that higher education is faced with slightly different challenges, such as 500-student Intro classes where a tool like ChatGPT could be used to cheat. However, once students advance to higher-level, smaller-sized classes, it will be harder for them to keep it up. Hopefully.)
Here is what I see as truly dangerous about tools like ChatGPT:
- We get whipped into a frenzy about a new technology without realizing that existing ones are already doing harm (data mining, social media’s effects on mental health, etc.). A “Vegas lights” phenomenon, as one former colleague of mine used to say.
- A lot of K-12 education relies heavily on educational technology, some of which includes scripted teaching. It’s no wonder kids don’t find school interesting or engaging if they’re taught in this rote manner, devoid of passion and purpose. This is a far greater risk to critical thinking than a chatbot. If we want kids to think critically about their tools, we have to teach, model, and engage with those tools critically ourselves!
- Kids have always cheated. The antidote to cheating is strong relationships with adults who care. A kid who cheats is struggling; they need something they aren’t getting. If we outsource too much of education to EdTech platforms (not just chatbots), we move kids and teachers further away from building relationships with one another.
With trains and uteruses, we certainly got a few things wrong. It’s possible that our panic about ChatGPT will eventually die down (that’s what Dr. Horvath predicts), just in time for a new technology to come out. My colleague Blythe of EverySchool.org pointed out to me that ChatGPT might also be good for those of us who fight for decreased tech in schools: it might encourage more teachers to bring back paper and pen for essay writing.
In the meantime, let’s focus our worries on the dangerous, not the scary, and continue to fight for our kids’ future cognitive and emotional health by setting clear and consistent limits, modeling tech-intentional screen use, and advocating for intentional screen use in schools.