Meta loses trial after arguing child exploitation was “inevitable” on its apps

Nathan

Meta has lost the first of three child safety trials it’s facing this year after a jury in a New Mexico state court found that the social media giant’s platforms do not effectively protect kids from child exploitation.

On Tuesday, the jury deliberated for only one day before agreeing that Meta should pay $375 million in civil damages for violating state consumer protections and misleading parents about the safety of its apps.

The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raúl Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez’s office then conducted an undercover investigation codenamed “Operation MetaPhile,” in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were “simply inundated with images and targeted solicitations” from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta’s social networks to prey on children.

At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that “harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases,” The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico’s AG successfully argued. Full article
 
I don't understand any kind of tech stuff, but it seems to me if the owners of these companies are tech-savvy enough to build these sites, they should come up with some way to implement some type of safety precautions on sites that are commonly used by kids.
 
In some respects Meta was right in that, with a massive number of users, criminal activity was bound to happen. That doesn't make it right. Now there is AI and other tech that can help these social media platforms/websites monitor for potential abuse.

Gaming platforms have had this issue and are getting sued for the same thing: minors being exploited. The game platform/website Roblox is notorious for it. They have known about it for years and have done very little to correct it.

Los Angeles County sues Roblox, alleges platform makes it easy for adults to target children

Both social media and games use dopamine hits to reward the user. Sites like Facebook use 'likes' as positive feedback/a dopamine hit for the user's posts, whether text or photos. Games use scoring or progression to reward the player, which helps produce dopamine.

The chatrooms where one can have a running conversation seem to be where a lot of the exploitation starts, in which the predator will eventually ask the other person for personal information, pictures, etc. With kids, the predators frequently pose as other kids.

This is why parents must educate their kids about social media and its dangers, and monitor their use, because kids will get access somehow.
 
I don't understand any kind of tech stuff, but it seems to me if the owners of these companies are tech-savvy enough to build these sites, they should come up with some way to implement some type of safety precautions on sites that are commonly used by kids.
It's really difficult, if not impossible, to monitor these sites. It would take thousands of individuals reading every post that comes onto the site (and there are thousands coming in every second). Some 'intelligence' (AI) could be built in, but you can't easily monitor someone 'grooming' a child. There is no such thing as internet security (you can use software, but it won't catch everything) and no way to monitor the millions of postings. As WhatinThe posted, it's up to parents, and we know from all the issues now, that ain't workin' too good.
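To put a concrete picture on that point: a toy sketch (hypothetical phrases, not any platform's actual moderation system) of why a simple keyword filter can catch overtly explicit messages but misses grooming, which reads like ordinary friendly chat:

```python
# Toy illustration only: a naive blocklist filter. Real moderation
# systems are far more complex, but the limitation shown here is real.
FLAGGED_TERMS = {"send pics", "home alone", "our secret"}  # hypothetical blocklist

def flag_message(text: str) -> bool:
    """Return True if the message contains a blocklisted phrase."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

messages = [
    "hey, send pics",                   # caught: exact blocklisted phrase
    "you're so mature for your age",    # missed: classic grooming language
    "don't tell your parents we talk",  # missed: no exact phrase match
]
for msg in messages:
    print(flag_message(msg), "-", msg)
```

The grooming-style lines sail through because nothing in them matches the list, which is the commenter's point: the dangerous conversations often contain no obviously flaggable words at all.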
 