In 2021, a former Facebook product manager named Frances Haugen walked out of the company's Menlo Park headquarters with tens of thousands of internal documents. She handed them to the SEC, Congress, and journalists. The documents contained, among other things, an internal 2019 research project titled "Teen Mental Health Deep Dive."

The finding: 13.5% of teen girls on Instagram said the platform made their thoughts about suicide worse. 17% said it worsened eating disorders. Instagram's internal researchers concluded that the app was causally connected to body image issues and depression in adolescent girls. The researchers proposed design changes. Some were implemented. Most were not, because they were projected to reduce engagement metrics.

That 2019 report is now Exhibit A in a federal courtroom.


What the Case Actually Alleges

The consolidated litigation — originally filed by dozens of state attorneys general and thousands of individual plaintiffs, later combined in the Northern District of California — does not argue that social media is generally bad for teenagers. That would be difficult to prove and easy for Meta to contest.

The case argues something more specific and legally significant: that Meta knowingly designed addictive features targeting minors; that it knew those features caused psychological harm; that it misrepresented the safety of its platforms to the public and to parents; and that it violated the Children's Online Privacy Protection Act (COPPA) by knowingly collecting data from users under 13, despite a nominal age gate that the company's own researchers described as ineffective.

This distinction matters. Strict product liability does not require proving a manufacturer's state of mind, but the claims at the center of this case (failure to warn and misrepresentation) require showing that a manufacturer knew about a defect and concealed it. The Haugen documents — and additional materials obtained through discovery — provide the "knew about it" evidence. The question before the jury is whether Meta's design choices constitute a defective product under that framework, and whether the harms are compensable.


The Documents: What Meta Knew and When

The internal research disclosed through the Haugen leak and subsequent discovery is extensive and detailed in ways that are difficult for the defense to explain away.

In 2020, an internal Meta team analyzed the "infinite scroll" feature — the mechanism by which Instagram and Facebook continue loading new content without a natural stopping point. The research found that the feature significantly increased both session length (a business metric Meta optimized for) and the percentage of users who described their usage as "compulsive" or "out of control." The feature was retained.

In 2018, a research project examined the "social comparison" dynamics of Instagram's visual feed format. Researchers found that the platform's presentation of curated, aspirational images produced measurable negative affect in users compared to other content formats, with the effect strongest in users aged 13-17 and particularly pronounced in girls. The research recommended algorithmic changes to reduce comparison-driving content. The changes were not implemented.

In multiple internal emails disclosed through discovery, senior employees discussed the tension between "integrity" (the team responsible for monitoring harms) and "growth" (the team responsible for engagement metrics). In one widely cited exchange, a product executive wrote that integrity changes that reduced engagement were "not something we're willing to do."

Meta has argued that these documents are being taken out of context, that the company did implement significant safety improvements over the years, and that correlation between social media use and mental health outcomes does not establish causation.


The Legal Barrier: Section 230

For years, legal scholars and Meta's lawyers argued this case could never get to a jury because of Section 230 of the Communications Decency Act of 1996. Section 230 provides broad immunity to internet platforms for content posted by their users — meaning Meta cannot be sued for what users post on Instagram.

The plaintiffs' argument, which the presiding judge accepted at the motion to dismiss phase, is that this case isn't about user content. It's about product design: the infinite scroll, the notification systems, the algorithmic recommendation engines, the age-verification failures. Those are Meta's own choices, not user-generated content. Section 230 doesn't cover them.

This distinction — between platform liability for user content (shielded by 230) and platform liability for product design choices (potentially not shielded) — is the case's most important legal contribution regardless of how it ends. If the court upholds this framework, it opens every social media platform to similar design-defect litigation for the first time since 230 was enacted thirty years ago.


The Public Health Data the Plaintiffs Are Leaning On

The state attorneys general are not relying solely on Meta's internal documents. They are presenting a substantial body of external public health research as supporting context.

The CDC's Youth Risk Behavior Survey, conducted biennially since 1991, shows a sharp inflection in adolescent mental health metrics beginning around 2012 — the year Instagram launched on Android and reached mass adoption among teenagers. The percentage of high school students who reported persistent feelings of sadness or hopelessness rose from 28% in 2011 to approximately 40% in 2023. The increase was sharpest among girls.

Psychologist Jean Twenge's longitudinal research, published across multiple peer-reviewed journals, documents some of the field's strongest correlations between smartphone and social media adoption and the rise in adolescent anxiety, depression, and loneliness. Her 2017 book "iGen" laid out the data before teen social media use became a mainstream policy conversation.

Meta's defense experts argue that these population-level trends have multiple causes — including the 2008 financial crisis's effects on family stability, the opioid epidemic's effects on communities, and the general increase in academic pressure — and that social media is one variable among many.


What a Verdict Could Mean

The stakes are enormous in both directions.

If the plaintiffs win, the most immediate consequence would be financial: damages in cases of this scope, involving thousands of individual plaintiffs whose psychological treatment costs and lost earnings are being documented, could reach into the tens of billions of dollars. For scale, Meta's 2024 annual revenue was approximately $165 billion. A large verdict would be survivable but painful.

The larger consequence would be legal precedent. A finding that social media platforms can be held liable for product design choices that harm minors would accelerate existing legislative efforts — federal age verification requirements, algorithmic transparency mandates, a potential revision of Section 230 — and would expose TikTok, YouTube, Snapchat, and every other platform that operates recommendation engines to identical litigation.

If Meta wins, the inverse applies: the verdict would signal that existing legal frameworks cannot effectively reach social media design choices, and it would shift the burden of action entirely to Congress — which has debated social media regulation for years without passing significant legislation.

A settlement — which remains possible at any point — would likely involve a large payment and a commitment to specific design changes, potentially including defaults that restrict infinite scroll and notification intensity for users under 18. It would not create precedent, but it would create a template.


The Historical Parallel

The litigation follows a template that has reshaped industries before. The tobacco trials of the 1990s began with individual product liability suits that the tobacco industry successfully defended for decades by arguing causation could not be established. The eventual breakthrough came when internal documents — obtained through discovery — showed that tobacco companies had known about addiction and cancer risks since the 1950s and had actively concealed that knowledge.

The resulting 1998 Master Settlement Agreement required tobacco companies to pay $206 billion to 46 states over 25 years (roughly $246 billion counting the four states that had settled separately), prohibited certain marketing practices targeting youth, and funded anti-smoking campaigns. It did not destroy the tobacco industry, but it permanently altered how the industry was allowed to operate and how it was publicly perceived.

Whether social media follows the tobacco arc depends on whether a jury concludes that Meta's internal documents constitute the same kind of knowing concealment. That is precisely what jurors are being asked to decide right now.

The "daunting evidence" AP described in its headline refers to volume — thousands of documents, dozens of witnesses, technical expert testimony. What it really amounts to is a simple question: Did Meta know it was hurting kids and keep doing it anyway?