Section 230, the bedrock law protecting internet platforms from liability for user-generated content, faced a sustained assault in Congress Wednesday as lawmakers grappled with two converging crises: a wave of product liability lawsuits targeting social media companies and growing bipartisan alarm over government censorship. At a Senate Commerce Committee hearing, Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC) defended their bill to sunset the 30-year-old law entirely, while other lawmakers debated narrower reforms as jurors in Los Angeles deliberate whether Instagram and YouTube's design decisions fall outside Section 230's protections.
The debate unfolded against an unprecedented backdrop of legal challenges that are already testing Section 230's limits in real time. Matthew Bergman, whose Social Media Victims Law Center has pioneered a new approach to suing platforms, testified before senators as grieving parents, holding photos of children who allegedly died from online harms, sat directly behind him.
"Section 230 is not one of the Ten Commandments," Sen. Brian Schatz (D-HI) declared in his opening remarks, dismissing arguments that the law is untouchable. "This idea that we can't touch it, otherwise internet freedom incinerates, is preposterous."
- Los Angeles jury currently deliberating whether Instagram and YouTube's design decisions caused harm to a young plaintiff
- Product liability model targeting platform design rather than content moderation gaining traction in courts
- Supreme Court's recent jawboning ruling making it harder for censorship victims to seek relief
Bergman's litigation strategy represents a sharp break from traditional approaches to challenging Big Tech's liability shield. Rather than attacking platforms' content moderation decisions—which Section 230 clearly protects—his cases target design features like algorithmic recommendations and user interface choices that allegedly hook young users.
The approach has already survived initial legal challenges and reached jury trials, suggesting courts may be receptive to arguments that Section 230 was never intended to protect product design decisions. "If they wait for the courts to decide, more kids are going to die," Bergman told senators, urging legislative clarification that the law doesn't shield design choices.
But the hearing's second major theme—government censorship—complicated traditional partisan lines on Section 230 reform. Committee chair Ted Cruz (R-TX) opposed full repeal, arguing it would incentivize platforms "to engage in more censorship to protect themselves from litigation."
The concern reflects a dramatic shift in Republican thinking about Section 230. While conservatives have long criticized the law for enabling alleged anti-conservative bias, many now worry that removing liability protections would worsen censorship by making platforms more risk-averse.
The hearing also revealed a bipartisan awakening about government pressure on social media companies, known as "jawboning." Schatz praised Cruz's criticism of Biden administration jawboning while also condemning FCC Chair Brendan Carr's recent threats to broadcasters.
The jawboning discussion produced the hearing's most heated exchange when Sen. Eric Schmitt (R-MO) confronted Stanford Law School's Daphne Keller about her institution's now-disbanded Internet Observatory. Schmitt, who as Missouri's attorney general unsuccessfully sued the Biden administration over alleged platform pressure, accused Stanford researchers of flagging content that "didn't line up with the Biden administration's view."
Keller pushed back, saying her colleagues were "exercising their First Amendment rights to go talk to the government and say what they thought should happen." When Schmitt referenced his Supreme Court case, Keller shot back: "The one you lost?"
The hearing also touched on emerging challenges from artificial intelligence. Americans for Responsible Innovation President Brad Carson argued that Section 230 should not protect AI outputs, while Cruz highlighted the Take It Down Act—which requires platforms to remove reported nonconsensual intimate images—as an example of "targeted legislation" that addresses specific harms without amending Section 230.
Some witnesses proposed alternatives to changing Section 230 entirely. Knight First Amendment Institute policy director Nadine Farid Johnson suggested focusing on privacy protections, interoperability requirements, and expanded researcher access to platforms as ways to address underlying problems without touching the liability shield.
The complexity of balancing free speech, child safety, and platform accountability emerged clearly in Cruz's personal anecdote about his then-14-year-old daughter. After confiscating her phone as punishment, Cruz discovered she had removed the SIM card before handing it over and used it in a burner phone.
"I was both annoyed and really proud at the same time," Cruz said. "It does show just how completely outmatched parents are with trying to keep up with teenagers with these issues."
That story encapsulated the central challenge facing lawmakers: how to protect children online without breaking the legal framework that enables digital communication. As legal challenges proliferate in courts and political pressure builds in Washington, Section 230's three-decade run as settled law appears increasingly precarious.
The Los Angeles jury's verdict, expected in the coming weeks, could provide crucial guidance on whether courts will narrow Section 230's scope without congressional action. If Bergman's product liability approach succeeds, it might offer a path to holding platforms accountable while preserving the law's core speech protections.