Meta Platforms CEO Mark Zuckerberg arrives outside court to take the stand at trial in a key test case accusing Meta and Google's YouTube of harming children's mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026.
Mike Blake | Reuters
For the last three decades, internet giants have been able to avoid legal exposure for content on their platforms, thanks to a law that differentiates the companies from online publishers. But those safeguards appear to be weakening.
Meta and Google, which dominate the U.S. digital ad market, find themselves as defendants in a group of lawsuits that collectively serve to undermine the long-held notion that they have legal protection for what surfaces on their sites, apps and services. Companies like TikTok and Snap are in the same predicament.
The unifying aspect of the recent cases is that they are crafted to bypass Section 230 of the Communications Decency Act, which Congress passed in 1996 and President Bill Clinton signed into law. Established in the early days of the internet, the law protects websites from being sued over content posted by their users, and allows them to act as moderators without being held liable for what stays up.
Last week, a jury in New Mexico found Meta liable in a case involving child safety, while jurors in Los Angeles held the Facebook parent and Google's YouTube negligent in a personal injury trial. Days after those verdicts were revealed, victims of the notorious sex offender Jeffrey Epstein filed a class action lawsuit against Google and the Trump administration over allegations related to the wrongful disclosure of personal information.
In that complaint, the plaintiffs argue that Google's AI Mode, which serves up AI-powered summaries and links, is "not a neutral search index," a clear effort to make the case that Google is not just a platform sitting between users and the information they seek.
"The plaintiffs' bar is winning the fight against Section 230 through systematic, relentless litigation that's causing there to be divots and chinks in its protection," said Eric Goldman, a law professor at Santa Clara University School of Law, in an interview.
The stakes are massive as the technology sector exits the era of traditional online search and social networking and enters a world defined by artificial intelligence, where models designed by the owners of the largest platforms are serving up conversational chats, pictures and videos that can range from controversial to potentially illegal. The financial penalties so far have been minimal (less than $400 million in damages between the two verdicts last week), but the cases establish a troubling precedent for tech giants that are betting their future on AI.
"For so long, tech companies have used Section 230 as an excuse to avoid taking meaningful action to protect users, but especially children, from egregious harms, harassment and abuse, frauds and scams," Sen. Brian Schatz (D-Hawaii) said in March during a U.S. Senate Commerce Committee hearing tied to the 30th anniversary of Section 230. "It's not that they don't know what's happening or even why it's happening. It's that to do something about it would be to hurt their bottom line. And as long as federal law provides a shield, why even bother?"
Meta declined to comment for this story. Google did not respond to a request for comment. Both companies said they plan to appeal last week's verdicts.
'Complicated questions'
Politicians on both sides of the aisle have proposed all sorts of reforms to Section 230 over the years, and company executives have faced public grilling in congressional hearings over the alleged harms caused by their platforms.
President Donald Trump, during his first term in office, supported greater restrictions on social media companies for what he viewed as their bias against him. And Joe Biden, when he was a presidential hopeful in 2020, told The New York Times editorial board that Section 230 "should be revoked" for tech platforms including Facebook, which he said was "propagating falsehoods they know to be false."
Nadine Farid Johnson, policy director of the Knight First Amendment Institute at Columbia University, said about legislative efforts that "none of those things have fully come to fruition, in part because they're such complicated questions."
But while the issue has stagnated in Washington, D.C., plaintiff attorneys are finding other routes toward holding big tech companies accountable.
Meta Platforms CEO Mark Zuckerberg testifies before Los Angeles Superior Court Judge Carolyn Kuhl at a trial in a key test case accusing Meta and Google's YouTube of harming children's mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026, in a courtroom sketch.
Mona Edwards | Reuters
The verdict last week against Meta and YouTube was the first time a jury found social media platforms liable for what plaintiff attorneys alleged was intentionally engineering addiction in minors with their products. The case went after how the platforms were designed, not just what content they carried.
Plaintiffs argued that the combination of features like autoplay, recommendation algorithms, notifications and certain filters acted like "digital casinos," leading to serious mental health problems for a young woman who claimed she couldn't stop using the apps.
The class action suit against Google, filed last week by a plaintiff with the pseudonym Jane Doe, alleged that the company's AI Mode created its own summaries and links, exposing Epstein victims' personal identifying information (PII), including names, phone numbers and email addresses.
Kevin Osborne, the plaintiff's attorney in the case, told CNBC in an interview that the suit was filed after Google declined a request to take down the victims' contact information from AI Mode. Osborne said the case has to move quickly because of how fast the information is spreading.
"We filed when we filed because we needed to act as soon as possible to get this stuff taken down," said Osborne, a partner at Erickson Kramer Osborne in San Francisco. "People are getting calls from complete strangers and death threats. It's a nightmare."
Osborne added that the timing was "serendipitous" given Meta's court defeats last week, but he said there's overlap in that they all involve efforts by the plaintiffs to skirt Section 230. Osborne said that in his case, "this is AI Mode coming up with its own content and that's something that's not been explored very thoroughly by the courts."
Matthew Bergman, one of the lawyers representing the plaintiffs in the Los Angeles case, testified before a Senate committee in March and said the tech industry has relied on overly broad interpretations of Section 230 in order "to evade all potential legal accountability simply because third-party content is found somewhere in the causal chain of their misconduct."
Bergman said he looked closely at a 2021 ruling by an appeals court involving allegations about the role a Snapchat feature played in a fatal car crash. The court reversed an earlier decision to dismiss the case under Section 230, citing the plaintiff's allegations that Snap's negligent design incentivized young people to drive recklessly.
"I charted a very narrow legal theory that would legally permit certain cases brought by parents to proceed despite Section 230," Bergman told lawmakers.
The evidence presented in Los Angeles bolstered the plaintiff's arguments that Meta and YouTube executives knew of their products' design harms and failed to adequately address them. At a press briefing about the case on Monday, Bergman said "the best way to prove our case is through their own documents."
In the Google AI Mode suit, the plaintiff also pointed to design flaws related to the public display of personal information.
"Google is intentionally furnishing that PII in a manner designed, or at least substantially certain, to fuel harassment and fear," the suit says.
Osborne expanded on that idea.
"Google didn't just show our client's email address," he said. "They created a hyperlink, so when you're reading the content, AI Mode, all you have to do is click a button and you've generated an email directly to the [Epstein] survivor."

It's not the first time Google has been sued over how its AI interacted with users, an issue that's also created legal challenges for ChatGPT creator OpenAI.
Earlier in March, the father of Jonathan Gavalas filed a lawsuit against Google, accusing the Gemini chatbot of convincing his son to carry out a series of missions, including staging a "catastrophic accident." The younger Gavalas then died by suicide at the instruction of Gemini, the lawsuit alleges.
And in January, Google settled with families who sued the company and Character.AI, alleging their technology caused harm to minors, including suicides. Last year, OpenAI was sued by a family who blamed ChatGPT for their teenage son's death by suicide.
Supreme Court?
Legal experts said appeals in the latest cases could find their way to the Supreme Court, which could determine whether the companies should be protected by law against the claims.
David Greene, senior counsel at the Electronic Frontier Foundation, called the verdicts "very preliminary decisions," and said there remains a lack of consensus over whether certain product features are protected by Section 230, or even the First Amendment.
"Just labeling something as a design feature means nothing," Greene said. "If it's speech, it's speech and it gets both First Amendment protection and potentially Section 230 protection as well."
Farid Johnson of the Knight Institute said she's pushing Congress to enact a more measured approach that would let tech companies obtain Section 230 protections as long as they meet certain conditions related to data privacy, platform transparency and other stipulations.
"These questions are only becoming more and more complicated, as the platforms continue to expand their use of generative artificial intelligence, as they're kind of upping their algorithm game," Farid Johnson said. "Our concern is that this becomes a game of essentially whack-a-mole with every new iteration, with every new piece of technological progress that affects the platforms and the people engaging on the platforms."
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
WATCH: More litigation to come following Meta ruling, says Harvard Law professor.