December 23, 2024

On Sept. 15, California Gov. Gavin Newsom (D) signed into law the Age-Appropriate Design Code Act, which unanimously passed in the state senate at the end of August despite protest from the tech industry.
Modeled after the U.K. Children’s Code that went into effect last year, the California law protects children’s privacy and well-being online by requiring companies to assess the impact of any product or service either designed for children or “likely to be accessed by children.”
The law will go into effect on July 1, 2024, after which time companies found in violation of the law may have to pay penalties of up to $7,500 per affected child. While that might sound like a small sum, similar legislation in the European Union has allowed Ireland’s Data Protection Commission to fine Meta $400 million for the way Instagram treated children’s data. (In the case of the new law, the California attorney general would impose fines.)
California’s Age-Appropriate Design Code Act defines a child as any person under the age of 18, compared with 1998’s Children’s Online Privacy Protection Act (COPPA), for which 13 is the cutoff age.
COPPA codified protections for children’s data, prohibiting “unfair or deceptive acts or practices in connection with the collection, use, and/or disclosure of personal information from and about children on the Internet.”
The new California law goes further. It requires that the highest privacy settings be the default for young users, and that companies “provide an obvious signal” to let children know when their location is being tracked.
Jim Steyer, founder and CEO of Common Sense Media, one of the bill’s lead sponsors, told HuffPost: “This is a very significant victory for kids and families.”
The law comes down firmly on the side of children’s safety over profit, stating: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.”
In a 2019 interview with The New York Times, Baroness Beeban Kidron, chief architect of the U.K. Children’s Code, elaborated on her meetings with tech executives.
“The main thing they are asking me is: ‘Are you really expecting companies to give up profits by restricting the data they collect on children?’” Her response? “Of course I am! Of course, everyone should.”
The dangers of the internet for kids go beyond children being contacted by strangers online (though by making high privacy settings the default, the California act does try to prevent such interactions).
Increasingly, parents worry about the excessive time that children spend online, the lure of platforms with autoplay and other addictive features, and kids’ exposure to content that promotes dangerous behaviors like self-harm and eating disorders.
The Age-Appropriate Design Code Act requires companies to write a “Data Protection Impact Assessment” for every new product or service, detailing how children’s data may be used and whether any harm could result from this use.
“Basically, [companies] have to look at whether their product design [exposes] children and teens to harmful content, or allows harmful contact by others, or uses harmful algorithms,” said Steyer.
Under the law, Steyer explained, YouTube, for example, would still be able to make video recommendations. The difference is that it would have less data to pull from when making those recommendations. Companies would also be responsible for assessing whether their algorithms are amplifying harmful content, and for taking action if that is the case.
Haley Hinkle, Policy Counsel at Fairplay, an organization “dedicated to ending marketing to children,” told HuffPost that by mandating an impact assessment, “big tech companies will be responsible for assessing the impact their algorithms will have on kids before they offer a product or new design feature to the public.”
Hinkle continued, “This is critical in shifting responsibility for the safety of digital platforms onto the platforms themselves, and away from families who do not have the time or resources to decode endless pages of privacy policies and settings options.”
Under the law, a company may not “collect, sell, share or retain” a young person’s information unless it is necessary to do so in order for the app or platform to provide its service. The law instructs businesses to “estimate the age of child users with a reasonable level of certainty,” or simply to grant data protections to all users.
“You cannot profile a child or a teenager by default, unless the business has appropriate safeguards in place,” Steyer said. “And you cannot collect precise geolocation information by default.”
Hinkle explained the motivation for companies to collect such data. “Online platforms are designed to capture as much of kids’ time and attention as possible. The more data a platform collects on a child or teen, the more effectively it can target them with content and design features to keep them online.”
While the law’s scope is limited to California, there is hope that it could instigate further-reaching reform, as some companies changed their practices worldwide before the enactment of the Children’s Code in the U.K. Instagram, for example, made teens’ accounts private by default, disabling direct messages between children and adults they do not follow. However, the definition of “adult” varies by country: it’s 18 in the U.K. and “certain countries,” but 16 elsewhere in the world, according to Instagram’s statement announcing the changes.
While it’s uncertain if Instagram will raise this age cutoff to 18 in California now, the Age-Appropriate Design Code Act does require companies to take into account “the unique needs of different age ranges” and developmental stages, defined by the law as follows: “0 to 5 years of age or ‘preliterate and early literacy,’ 6 to 9 years of age or ‘core primary school years,’ 10 to 12 years of age or ‘transition years,’ 13 to 15 years of age or ‘early teens,’ and 16 to 17 years of age or ‘approaching adulthood.’”
What are the biggest threats to kids online?
Some threats to kids come from big, impersonal corporations that collect data in order to subject them to targeted advertising, or to profile them with targeted content that may promote dangerous behaviors, like disordered eating.
Other threats come from people your child knows in real life, or even from children themselves.
Devorah Heitner, author of “Screenwise: Helping Kids Survive (And Thrive) In Their Digital World,” told HuffPost that in addition to “interpersonal harm from people they know,” like cyberbullying, “there are ways that they can compromise their own reputations.”
“What you share when you’re 12 could live with you for a really long time,” Heitner explained.
While no law can prevent a child from posting something that they probably shouldn’t, the Age-Appropriate Design Code Act does require that businesses “take into account the unique needs of different age ranges,” establishing the precedent that children and teens are developmentally different from adults and require different protections.
“Kid development and social media are not optimally aligned,” noted Heitner.
What can parents do now to protect their children’s privacy and safety?
Parents don’t have to wait for California’s new law to take effect, or for big tech companies to change their practices. There are things you can do now to increase your child’s online privacy and safety.
Hinkle suggests keeping kids away from social media until at least age 13. To do so, she says, it can be helpful to communicate with the parents of your child’s friends, as the presence of their peers is the biggest draw to social media for most kids.
Once they do have social media accounts, Hinkle suggests that you “review the settings with your child, and explain why you want the most protective settings on.” These include turning off location data, opting for private accounts and disabling contact with strangers.
Heitner advocates for an approach that she calls “mentoring over monitoring.” Because safety settings can only do so much, and because kids are so good at finding workarounds, she maintains that your best defense is an ongoing conversation with your child about their online habits, and the impact their actions may have, both on themselves and others.
Your kids will come across harmful content during their online hours. You want them to feel comfortable telling you about it, or, when appropriate, reporting it.
When it comes to examining their own behavior, kids need to know that you are open to discussion and won’t be quick to judge. Heitner suggests using phrases such as, “I know you’re a good friend, but if you post that it might not sound that way.”
Kids should understand how what they post may be misconstrued, and why they should always think before posting, especially when they’re feeling angry.
It’s a delicate balance: respecting how important your child’s online life is to them, while at the same time teaching them that social media “can make you feel terrible, and that [companies] are profiting from your time spent there,” said Heitner.
Parents’ goal should be making kids aware of these issues, and “getting kids to buy into a healthy skepticism” of big tech, said Heitner.
In addition to the resources available at Common Sense Media, Steyer recommends that parents take advantage of Apple’s privacy settings, which Common Sense Media helped to develop.
He also suggested that parents be role models in their own media consumption.
“If you’re spending all your time [there] yourself, what message is that sending to your kid?”