There is No Such Thing as “Safe AI” for Children.
It has no place in education. Do not be fooled. Do not consent.

Just as there is no such thing as a “safe cigarette,” there is no such thing as “safe AI,” particularly for children.
As we head back to school this fall, I want to give parents, teachers, and administrators the courage to refuse to participate in this AI madness, because that’s what it is. “AI” is just a marketing term. As authors Emily M. Bender and Alex Hanna write in their new book, “The AI Con”:
“...the phrase ‘artificial intelligence’ is deployed when the people building or selling a particular set of technologies will profit from getting others to believe that their technology is similar to humans.”
So it is deeply concerning that such technologies are rapidly being deployed in schools: they are not there to improve learning; they exist to increase profit. That said, I’m going to use the marketing term in this essay, though when I say “AI” in this piece, I am mainly referring to generative AI and LLM tools like ChatGPT, which are readily available to children on their school-issued devices.
As an educator myself and a longtime activist for intentional technology use in schools, I fundamentally reject the notion that a tech company knows what’s better for teaching and learning than I (or my educator colleagues) know. That arrogance dovetails nicely with Meta’s longtime proclaimed goal to “move fast and break things.”
Except we are talking about literal CHILDREN here. Moving fast and breaking CHILDREN.
While I can’t wait for the AI bubble, and the hubris inflating it, to pop and shrivel, I see the child sacrifices being made right now. Our children are spending more time with tablets than teachers; more time online than on playgrounds; more time with the virtual world than the real one. Adults are no better; just look at everyone’s posture for a physical indicator of how these products are literally changing us.
The deskilling effect that EdTech has wrought over the past decade will only accelerate with the forced inclusion of AI tools in EdTech products, coming to a classroom near you. Note that these are not separate, distinct, or “additional” tools; AI products are being incorporated into the existing EdTech platforms that schools have already signed multi-year, multimillion-dollar contracts for. I hear over and over from employers hiring new graduates of top universities who may have technical skills but who, and I am quoting one engineer here, “fundamentally cannot function as human beings.”
Remember, folks: all these tech execs claiming your children will need AI experience and exposure now to be successful in the future did not use AI tools themselves as children, and many restrict their own children’s use of technology. They seem to have turned out just fine (or at least “fine”).
Let me make this easy and list three reasons why AI has no business being forced onto children or into classrooms in the name of education, and why we, as students, parents, teachers, and administrators, must fundamentally reject the AI assault, NOW.
Reason #1: The end goal of AI companies is to replace teachers, not help them.
Teachers who are trained to use AI are not future-proofing their employment; they are actively undermining it. The companies behind those professional development sessions that teach you how to “incorporate” AI into your lesson planning are learning how you lesson plan so they can eventually displace your job with “smarter” and “more personalized” AI “tutors.” Skilled humans are expensive, bots are cheap, and the education system is fraught with structural problems and overworked teachers. The AI industry, from a strategic business perspective, sees these problems as dollar signs and opportunities: classrooms full of users provide extremely valuable data to further its profit margins. None of this is about making school systems better for children or teachers.
Reason #2: Teachers (and parents) are role models. Children will do what we do. If we use AI, they will use AI.
Don’t provide students with AI tools and then tell them to “be responsible” and “not use it to write” their homework. That is laughable. You literally gave them homework machines; of course they are going to use them. Even worse, teachers are being told to use AI to assess student assignments to address “large class sizes” or “teacher burnout,” yet schools are not seeking ACTUAL solutions that reduce class size or support teachers. Children do what the adults around them do: if teachers use AI to assess student work, then students will use AI to create it. What did we think would happen? Using AI to displace skills, even as adults, makes our cognition worse; there is actual research now indicating that when we use tools like ChatGPT, we are functionally stupider for it. Do we want this in schools?
Reason #3: AI tools, none of them, are safe for use by children, because the business model behind them is fundamentally at odds with what children need. Period.
Here are three reasons why:
First, most AI products and tools are built on problematic foundation models (such as those from OpenAI or Anthropic) trained on copyrighted material, violent content, mis- and disinformation, and child sexual abuse material (CSAM). If your child or student is using an AI tool in or for school, it will have been trained on problematic content, and it can deliver that content to children when they use it. AI is making the problems EdTech already presents worse.
Second, AI chatbots, which are increasingly being used to “tutor” students in the name of “personalized learning,” can manipulate human emotions and experiences. Adults are experiencing AI-induced psychosis and mental health breakdowns because chatbots are trained to constantly validate and encourage. It should be obvious why this is frighteningly problematic for children, whose brains are still developing and extremely vulnerable. One lawsuit already filed alleges that a chatbot drove a child to suicide. This will happen on school-issued devices, on tools provided for “educational” purposes.
Third, the terms of use for the main foundational AI platforms used in education (OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini) are in direct conflict with their actual users. For the millions of younger students with access to ChatGPT on their school-issued devices, it should be noted that OpenAI, the company behind ChatGPT, states that to use its product users must “be at least 13 years old.”
If a school is using MagicSchoolAI (which is built on Anthropic’s models), as one example, Anthropic’s own consumer terms of service state that “You must be at least 18 years old” to use the service. Most K-12 students are under 18.
While I have tremendous empathy for the many teachers who are fighting to resist the AI hype and protect their profession and the learning experience (I hear you! I see you! Thank you for writing to me! Don’t give up!), there are far too many school officials and administrators (and yes, some teachers) who do not seem to understand the threat the AI assault poses to education, and who may not realize it until the pendulum has swung so far to one side that it has snapped off completely. Many such officials are also getting advice from industry-funded sources. In my own district, I was shocked to learn that the guidelines our state provides to our school districts about AI use were written by AI itself (see page 51).
Our children absolutely need technology education, today more than ever. But “tech ed” is not “EdTech.” We must keep educating our school boards and leaders about the risks posed by EdTech and AI products and tools, and that means reading the fine print, following the funding sources, and asking the tough questions.
At the end of the day, just like Big Tech and EdTech, AI relies on users spending time and contributing their data without informed consent. There is a reason the tobacco industry never successfully created a “safe” cigarette. Just like Big Tobacco, the business model of EdTech and Big Tech is fundamentally at odds with child development, and no parent, teacher, administrator, or politician should accept this.
It’s time to say no.
To learn more about how you can advocate for tech-intentional schools and for additional support and resources, please join our T.I.M.E. Collective.

T.I.M.E. Collective
Join Our Tech-Intentional Community Today!
Ready to connect with other tech-intentional parents, teachers, school leaders, and advocates in your area? Become part of a supportive network dedicated to fostering healthier digital environments. Share experiences, gain insights, and collaborate on initiatives that make a real difference in your local community.

Copyright © 2025 The Screentime Consultant, LLC | All Rights Reserved. | Tech-Intentional™ and The Screentime Consultant, LLC™ are registered trademarks.