Sep 12, 2024
While there are many areas of concern about the digital world we inhabit, nothing is less sexy, clickbait-worthy, or eye-catching than the words “data privacy.” Peripherally, we all know we should care about our privacy online, but the most common response I get to anything I write or share about data privacy is this sentiment, said with a sheepish shrug: “Well, Google/Meta/Amazon/Apple/Microsoft knows everything about me already anyway, so…”
Look, I get it. Like millions of other people, I, too, am guilty of clicking “Accept Terms and Conditions” without reading the nitty-gritty details when I download a new app. I, too, find it hard to really “dig in and engage” with policy reports about privacy or data. And that’s despite the fact that my activism for the past six years has focused primarily on student data privacy; I even co-founded The Student Data Privacy Project.
So if it’s hard for me to dive in with enthusiasm, no wonder most people just shrug it off.
But as a real estate agent once told me, when you’re looking for a house – even though those first impressions of a beautifully staged living room are going to have an impact on your feelings about a potential home – you actually want the “non-sexy stuff” (electrical, foundation, plumbing) to be in good condition. The fun stuff (paint color, furniture, flooring) is much easier to change or improve on later– and often less expensive. New electrical and pipes? Not so much.
So it’s easy as parents and advocates to get fixated on the catchier stuff of digital technologies– the parental controls, quality of apps, time limits, delaying access– and lose sight of the foundational stuff that actually really matters.
Data Privacy and School EdTech
Another key part of this data privacy puzzle is that screen-based tech consumption now extends well beyond the good ol’ days of computers in the workplace. In particular, our children are spending increasing amounts of time on school-issued laptops, computers, and platforms for “learning,” colloquially referred to as “EdTech.” “EdTech” can mean software, hardware, applications, digital curricula, surveillance and monitoring tools, learning management systems, security systems, and more. The term is quite all-encompassing, but it isn’t really separate from Big Tech.
At the end of the day, the business models of Big Tech and EdTech are identical, and that’s the problem.
We send our kids to school and assume the tools they use are designed for children, with attention to development and pedagogy. But that’s not, for the most part, what’s happening with technologies used at or for school.
What Parents Should Know About What YouTube Knows About Your Kid
Take, for example, YouTube, which is technically a social media app, but which is almost always available on school-issued devices because, as I’ve been told, “teachers use it for lessons.” Are there some cool things on YouTube? Sure, but here are just a few of the things that YouTube (owned by Google) knows about your child when they interact with the website:
What videos they watch
What search terms they use
What ads they interact with
What IP address they use
When they watch videos
What type of device they use
What content they interact with
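To make this concrete, every one of those data points can be bundled into a single structured “event” each time a child watches a video. The sketch below is a hypothetical illustration only; the field names are invented for this essay and are not YouTube’s actual schema, but each field corresponds to one item on the list above.

```python
# Hypothetical sketch of a single video "watch event" record.
# Field names are invented for illustration; this is NOT YouTube's real
# schema, but every field matches a data point from the list above.
watch_event = {
    "video_id": "abc123",                 # what videos they watch
    "search_query": "volcano facts",      # what search terms they use
    "ad_clicked": "toy_store_promo",      # what ads they interact with
    "ip_address": "203.0.113.7",          # what IP address they use
    "timestamp": "2024-09-12T15:42:00Z",  # when they watch videos
    "device_type": "school_chromebook",   # what type of device they use
    "liked": True,                        # what content they interact with
}

# Multiplied across every video, every day, events like this one
# accumulate into a detailed behavioral profile.
print(len(watch_event), "data points captured from one video view")
```

One view, seven data points; now multiply that by every video a child watches in a 7.5-hour screen day.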
Google’s suite of products also includes the very popular Google Classroom tools, used by millions of students around the world, which means an awful lot of data flows between platforms, especially platforms owned by the same technology monolith. Of course, Google/YouTube uses all this information (which you’ve given it permission to use) to train its algorithm to push you, or your child, content it thinks you’ll be more likely to interact with, whether that tech is being used at or for school, or at home for fun. This matters because technology companies make money off advertising (Google is really just a giant ad company), so they need to know a lot about how you and your child interact with YouTube in order to better target ads while you consume content.
This might seem convenient. After all, Amazon does something similar when we search for a book (“People who read this book also enjoyed…”) and recommends what to read next. But this “convenience” comes at a cost. Children with depression seeking information about mental health are often fed pro-suicide content; teens searching for fitness or wellness videos get pushed eating-disorder content; children can easily stumble into pro-self-harm content even when they aren’t seeking it out. All of this happens millions of times a day around the world, and given that children average 7.5 hours per day on screens, that’s a lot of opportunities to curate specific content.
Even worse is when this curated content is generated because of data mined about children via school-based technology…or EdTech.
Parents Have to Care About the Boring Stuff
My goal in this essay is, perhaps for the first time, to get us to care a little more about the non-sexy stuff: data privacy, particularly as it relates to our children and their use of technology, especially via EdTech platforms they are using in schools. Because even though it’s boring and tedious and out of sight and behind the scenes, it really, really matters and causes very real harm. (And it should go without saying that many of these issues are absolutely applicable to personal devices too).
Here are four things parents should care about when it comes to their children’s privacy:
Privacy is at Odds with EdTech
Because BigTech and EdTech business models are dependent on data, the notion of “privacy” is in direct conflict with what technology companies need to build profits. If your data is private, it isn’t profitable.
Even so, EdTech (and Big Tech) companies go out of their way to assure you that your data is secure. But if you keep reading those Terms and Conditions (ha), you’ll notice a lot of exceptions, along with claims of using “de-identifiers” to “protect” students. It’s well known, though, that de-identification is fragile: take bits of data from one place, overlap them with data from another, and you can rebuild a profile.
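That overlap trick is known in the privacy literature as a linkage (or re-identification) attack, and it is simple enough to sketch. The records below are entirely fabricated for illustration, but the mechanism is real: two “de-identified” datasets that share a few mundane fields can be joined to put names back on “anonymous” activity.

```python
# Minimal sketch of a linkage (re-identification) attack.
# All records are fabricated for illustration.

# Dataset A: an EdTech vendor's "anonymized" usage log (no names).
edtech_log = [
    {"zip": "02139", "birth_year": 2012, "videos_watched": ["fractions", "dinosaurs"]},
    {"zip": "94110", "birth_year": 2011, "videos_watched": ["algebra"]},
]

# Dataset B: a separately leaked roster that DOES contain names.
roster = [
    {"name": "Alex P.", "zip": "02139", "birth_year": 2012},
    {"name": "Sam Q.",  "zip": "94110", "birth_year": 2011},
]

# Joining on the shared "quasi-identifiers" (ZIP code + birth year)
# re-attaches names to the supposedly anonymous activity.
profiles = [
    {**person, **log}
    for person in roster
    for log in edtech_log
    if (person["zip"], person["birth_year"]) == (log["zip"], log["birth_year"])
]

for p in profiles:
    print(p["name"], "watched", p["videos_watched"])
```

With real datasets the quasi-identifiers are richer (device IDs, timestamps, location), which makes the join easier, not harder.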
The other problem is that the law is woefully behind in protecting student information. Remember way back in elementary school when your teacher would pass around a class roster with names and phone numbers on it? They had to get permission to do that, to protect families who didn’t wish to share that information. Trouble is, the law that requires schools to get consent to share that information, FERPA (the Family Educational Rights and Privacy Act), was passed in 1974 and has not been meaningfully updated since. I don’t need to point out that technology has changed quite a bit since 1974.
To see how schools handle things like family phone numbers today, just read any of the “Technology Use Agreements” sent around at the start of a school year. They typically compel students to “not misuse” technology or require parents to pay for damaged or lost devices, but conveniently say nothing about how schools will protect the data they are collecting. Our school district even admits students should have “no expectation of privacy” on school-issued devices, and that the district has the “right” to “monitor, inspect, copy, review, and store without prior notice” the content of students’ devices and digital tools.
How Privacy Violations Harm Your Child:
It’s true that nothing is private on the internet. Adults may intuitively know this, but expecting children to understand the concept is completely out of alignment with child brain development. Children simply cannot grasp that sharing personal information in a chat with a friend could lead to bigger problems, and expecting them to anticipate and avoid these (very innocent, very childlike) behaviors is completely unrealistic. Privacy violations can harm students in several ways:
Decreasing trust through increased surveillance. Teachers didn’t go into teaching to spy on what students are doing online. And of course, surveillance erodes trust, and trust is at the core of a healthy student-teacher relationship.
Students are often aware that Big Tech/EdTech is spying on them. This can inhibit their speech or make them fearful about what they search for or look at online, and that fear undermines their ability to become critical thinkers and autonomous decision-makers, since any mistake may come back to haunt them.
When school computers come home, platforms and devices can still be accessible via a teacher’s surveillance tool, sometimes allowing access to webcams and microphones in student homes. Access to this type of private family time is deeply concerning, and can be misused to discipline students. One study found that nearly 90% of EdTech products are able to surveil students outside of school time.
Unfortunately, some EdTech platforms can interface with law enforcement tools, increasing the chance that students who engage in some behaviors get flagged by police departments. When law enforcement has access to student behavior data, they can make faulty assumptions about who might be more likely to commit crimes.
How do we avoid this?
It can be tempting to see the use of surveillance tools as a way to “protect” society. But because surveillance erodes trust and increases our own anxieties about the world, it actually has a deteriorating effect. Some ways to avoid this might include:
Focusing on what is truly dangerous, not just “scary”. I write more about that here and in my book.
Reaching out to local and national politicians to advocate for updates to FERPA.
Asking your school district or administration to provide clarity about which EdTech tools are used and how each one protects student data (and requesting the research showing they are better for learning in the first place)
Opting your child out of EdTech platforms and tools (see my UnPlug EdTech Toolkit here).
Data Insecurity– Or, Hackers Love EdTech.
In the technology industry, data drives profits. This means that allllll the information we put into apps, search engines, in DMs, on social media, in Google docs, in emails, etc. is filling some out-of-sight information bank that companies mine for information to train their algorithms. We should already know all this because it’s in fine print in the Terms and Conditions we’ve agreed to, of course.
But just in case you don’t remember, let me highlight how this is particularly risky for our children…
The same “Terms and Conditions” we check on new app downloads are embedded in the EdTech platforms our children use at and for school. But there, schools are asking the kids themselves to consent, and parents are uninformed, or ill-informed at best, about what schools and EdTech companies will do with all the information they collect.
And EdTech companies are collecting a lot of information about children, including (but not limited to):
Where your child lives
Your child’s age
Your child’s birthdate
Your family’s religious affiliations
Your child’s sexual orientation
Your child’s mental health status
Your family economic status
Your child’s grades and test scores
Your child’s behavior records
According to the National Center for Education Statistics, in the fall of 2022 there were nearly 50 million school-aged students enrolled in U.S. K-12 public schools. And depending on whose estimate you use, schools are using anywhere from dozens to thousands of unique EdTech platforms and tools (the EdTech industry boasts over 2,000 per school district, while the independently funded Internet Safety Labs estimates around 150 per school).
It’s not hard to do the math: That’s a lot of data times a lot of kids.
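Here is the back-of-the-envelope version of that math, using the figures above (50 million students and the lower, 150-tools-per-school estimate):

```python
# Back-of-the-envelope math using the figures cited above.
students = 50_000_000     # U.S. K-12 public school students (NCES, fall 2022)
tools_per_school = 150    # Internet Safety Labs' lower estimate

# Each student-tool pairing is, in effect, a separate stream of
# collected data about a child.
data_streams = students * tools_per_school
print(f"{data_streams:,} potential student-to-tool data streams")

# At the industry's boast of 2,000+ tools per district, the figure
# would be more than an order of magnitude larger.
```

Even the conservative estimate works out to billions of student-to-tool data relationships.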
So it should come as no surprise that hackers love educational technology data: there are troves of it there for the taking, and, for a variety of reasons (underfunding, understaffing, lack of awareness, or plain negligence), school IT departments are unable or unwilling to properly vet and protect students’ data.
How Data Insecurity Harms Your Child:
Unfortunately, the fallout from student data hacks may not be known for years. Here are a few potential harms:
Identity theft often isn’t discovered until a minor child applies for a credit card or student loan, only to find that their credit is already ruined.
Students whose disciplinary records have been hacked or made public may find it hard to get a job as an adult. Children make mistakes; having those mistakes limit your future employment prospects is bleak.
How do we avoid this?
The easiest way to avoid these harms is to not collect all that data in the first place. Of course, this is much easier said than done, as schools aren’t going to walk away that easily. In the meantime, some strategies might include:
Working with school leaders to seek better funding and infrastructure around student data privacy (but being wary about the “wolf in sheep’s clothing” companies who want to “help” schools but are in the pockets of Big Tech and EdTech– in other words: vet carefully)
Ensuring that if data is collected, it is for a specific purpose and there is a “delete by” date to ensure that future hacks don’t provide old student records that can still cause harms
Opting out of all EdTech platforms and tools (see my UnPlug EdTech Toolkit here).
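The “delete by” idea above is simple enough to sketch: every stored record carries an expiry date, and anything past it is purged so a future breach can’t expose it. This is a hypothetical illustration, not any real school system’s code.

```python
# Hypothetical sketch of a "delete by" retention policy.
from datetime import date

records = [
    {"student": "A", "record": "reading scores", "delete_by": date(2023, 6, 30)},
    {"student": "B", "record": "behavior note",  "delete_by": date(2030, 6, 30)},
]

def purge_expired(records, today):
    """Keep only records whose delete-by date has not yet passed."""
    return [r for r in records if r["delete_by"] >= today]

# After the purge, a future hack can only expose still-current records;
# old records are simply no longer there to steal.
kept = purge_expired(records, today=date(2024, 9, 12))
print(len(kept), "record(s) retained")
```

The point isn’t the code; it’s that data which is deleted on schedule can’t show up in next year’s breach.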
EdTech is AdTech: Kids (and their Data) are the Product
I’ve said it before and I’ll say it again: EdTech is just Big Tech in a sweater vest. Companies sell the information they collect about users (in EdTech’s case, our children) to third-party companies, many of which are AdTech companies, who then use that valuable information to better target their advertising campaigns. It’s all one big, vicious cycle. For more on this topic, I recommend reading Susan Linn’s excellent book, Who’s Raising the Kids? Big Tech, Big Business, and the Lives of Children.
As just one example of the perniciousness of EdTech’s marketing arm, the extremely popular math game Prodigy has a “free” version but constantly pushes upgrades to kids in-app and to parents via marketing emails. You can learn more about this and other ways Prodigy isn’t all it’s cracked up to be in Fairplay’s report here.
How Commercial Manipulation Harms Your Child:
As stated, targeted advertising is a lucrative business. Collecting data about children and teens through the devices and platforms they use continues this cycle of collect, pitch, sell. Examples of this include sponsored content, influencers (PowerSchool has a teacher influencer program), persuasive design, and other marketing based on user behaviors.
We often think of screentime harms in terms of focus and mental health, but they deeply influence physical health and nutrition too: marketers use student profiles to pitch junk food, vape products, energy drinks, alcoholic beverages, and more. And as with many problematic industries, children in vulnerable communities are disproportionately exposed to harmful content.
We all think we’re using our social media platforms for “free.” But the reality is that we are giving our personal information to technology companies for the privilege of using their product. Yet it is we (and our children) who are the products.
How do we avoid this?
Groups like Fairplay (formerly The Campaign for a Commercial-Free Childhood) have been fighting the commercialization of childhood for decades. Unfortunately, Big Tech and EdTech have made marketing to kids easier than ever.
Support a well-established, well-respected advocacy group like Fairplay: join, donate, or see how you can support local efforts by starting here.
Teach your children, even at a very young age, about marketing. Explain that when Elsa from Frozen appears on a cereal box, that’s because two companies are working together to get kids to ask their parents to buy something. Talk about the messages in advertising or product placement. Point out when things are funded by companies who have a vested interest in the outcomes.
EdTech Perpetuates Discrimination
It’s no secret that youth mental health is suffering in the United States. Many schools are attempting to address the causes and treat the symptoms, often using “mental health assessments” or surveys that solicit information from students. The intentions are often good– to help children. But there is a paradox– asking a child about their mental health in a digital survey can put their privacy at risk. If you say that a survey or assessment is “private” or “confidential”, what happens when that child confesses suicidal ideation? How can you help them if you don’t know who they are? To be clear, much more support and outreach are needed to support children and teens in today’s world. But using tech-based mental health assessments presents numerous risks.
As parents of a high school junior, we’re starting to hear about “college planning” for our son. Things are pretty different than they were thirty years ago when we went through this process, and much of that has to do with how much information has been digitized. Unfortunately, this also comes with problematic data use, which can lead to discriminatory practices.
How Discrimination Harms Your Child:
Here are a few ways in which EdTech platforms and tools (which include mental health assessments and college recruiting platforms) can harm children:
Shared student data can enable colleges or future employers to target, or filter out, specific groups of students. Using race, gender, or socioeconomic status to screen out certain demographics can mean fewer opportunities for children from disadvantaged groups.
There are some forms of EdTech that rely on surveillance– security cameras, facial recognition, monitoring software. Unfortunately, surveillance erodes trust. A healthy teaching and learning dynamic is rooted in a trusting relationship.
There are concerns that monitoring and surveillance tools can disproportionately target certain marginalized groups.
Monitoring of student online behavior may well infringe on students’ civil rights.
Artificial Intelligence (AI), already making headway into classrooms and teaching materials, has yet to prove itself safe, better for learning, or, quite simply, accurate. AI is being used to “help” teachers write individualized learning plans or behavioral reports, which can cause students to be misidentified, poorly supported, or discriminated against. AI for students should be avoided at all costs.
How do we avoid this?
As with the above examples, the best rule of thumb on all this is “less is more.” This may look like:
Refusing online mental health assessments or recruitment tools.
Avoiding the use of AI in the classroom or for teaching purposes. It will be necessary to ask school leaders about how AI is deployed. PowerSchool, the learning management system, is building “PowerBuddy,” an artificial intelligence “chatbot” to “help” students learn. Parents should refuse their child’s participation with such platforms.
Opting out of all EdTech platforms and tools (see my UnPlug EdTech Toolkit here).
If a lot of this sounds dystopian…I apologize. Some of it truly is, and because EdTech’s foray into schools has been full steam ahead/at any cost/move fast and break things, we as parents have to pay closer attention, because our children are the targets and their personal information is the product.
Many of this essay’s examples were taken from the excellent resources at the EdTech Law Center, a law firm specializing in protecting children from the exploitative practices of EdTech companies.