Just as there is no such thing as a safe cigarette, I believe there is no such thing as safe EdTech.
At the end of the day, the business model underlying Big Tech and its sweater-vested cousin, EdTech, is extractive, manipulative, and harmful to children and their development.
Period.
It isn’t (just) that too much time on school iPads is bad for learning, vision, or focus (all true); that “learning” apps are gamified to keep children on a platform longer (see here); or that kids can access ChatGPT and porn on their school-issued devices (they can). It is that the only way these products will be profitable for their makers is if they collect their users’ data, and that these companies deliberately choose profits over people.
Big Tech and EdTech are identical in the way they extract user data for profit.
When it comes to EdTech, however, it is easy to fall into the trap of believing that if a tech tool is for “educational purposes,” it must be safe and good, right?
Unfortunately, no, not at all, or at least, not in the way that EdTech tools are currently being deployed.
How Does EdTech Harm Children?*
EdTech invades privacy. Everything a child does on a school-issued device is subject to surveillance by private companies with a vested interest in their data.
Surveillance erodes trust, especially for children, who are at a vulnerable developmental stage where they are learning to discern who and what are safe from who and what are not.
EdTech is Big Brother. EdTech isn’t just operating at school; EdTech platforms can monitor student devices even while they are at home.
EdTech shares children’s data. EdTech companies partner with local law enforcement, providing behavioral records to help “predict” future criminal activity.
EdTech is highly commercialized. Even within “educational” apps, children are exposed to advertisements and marketing strategies designed to increase engagement.
EdTech apps, games, and curricula are embedded with manipulative design techniques (sometimes called “persuasive design”) whose sole goal is to keep users on the screen longer to serve them more ads. This is fundamentally at odds with what children need for healthy learning and development.
EdTech increases screen use. Too much screentime is harmful to children’s health, whether physical, mental, or cognitive. We know that outside of school hours, children ages 8-18 already average 7.5 hours per day on screens.
EdTech exposes children to harmful content. Children are viewing inappropriate content at school: porn, violence, and self-harm videos, to name a few examples, as well as engaging with ChatGPT or other forms of AI, which has led at least one child to die by suicide.
EdTech isn’t monitorable. Even the most competent IT department cannot monitor every app, student device, or website. New sites pop up daily, children always find workarounds, and as long as devices are internet-connected, there will be problems.
EdTech blames parents. Tech companies in general, and EdTech in particular, love to place the burden of “safe technology use” on parents (or teachers) rather than on the leaders who deployed these harmful tools or the technology companies that intentionally built them this way in the first place. Children as young as 5 years old are asked to sign “Responsible Use Agreements” when they are issued school tech (and parents are told they are responsible when their child misuses it); in the current landscape of “compulsory” EdTech in schools, meaningful consent isn’t even legally possible, even when it is a parent theoretically “consenting” on behalf of a child.
EdTech requires engagement to thrive. Children do not. Parents and schools may want to reduce “screentime,” but the goal of EdTech platforms is to increase engagement. Those two goals are diametrically opposed.
EdTech feeds on student data. Students’ data is highly valuable and highly vulnerable, and EdTech platforms are ripe for leaks and hacks. In Vermont, the new Attorney General announced that there had been 31 data breaches in the first 30 days of 2025. Thirty-one breaches in one month.
EdTech platforms can be discriminatory, whether intentionally or not. Bias is baked into AI, for example, and student demographic information shared with recruiters and colleges can allow filtering by characteristics such as race or family socioeconomic status.
There are thousands of EdTech tools on the market. Most parents have no idea how many unique platforms their child is using for school. Internet Safety Labs found that on average, schools use 125 distinct EdTech platforms. Some districts’ lists number in the thousands, and those are only the tools considered “approved” or “vetted”; they do not include apps or websites that individual teachers may use on their own.
FERPA, the federal law that gives parents the right to access their child’s education records, is not enforced. Even when parents attempt to obtain the data that EdTech companies have collected about their child, they are met with silence, resistance, minimal support, or confusion, even though FERPA requires schools to provide this information to parents.
EdTech is all about the business model: profit over people. It is one thing for a school to collect information about its students; it’s entirely another for a corporation to take that same information and sell it for profit, sometimes even back to the school. As long as EdTech tools must connect to the internet to function (so data can easily be collected), children’s data is not protected.
There are more reasons, but let’s stop there. There is much to be concerned about. And of course, the fundamental problem of EdTech remains a business model at odds with children.
Additionally, Tech Ed is not EdTech. You do not need internet-connected devices to teach children to read, 1:1 devices to learn about mis- or dis-information, or online curricula to teach math or writing. One certainly can have or use those things, and how to do that in alignment with child development is a very important conversation to have.
However, any conversation we have about EdTech begins with understanding that the business model itself is the underlying problem, and that until these companies are forced, or elect, to change, EdTech products as they currently exist will never be safe for children.
*Adapted from this resource written by my colleagues at EdTech Law Center.