Massive social networks such as Facebook, Twitter, and Instagram have thrived for decades by adhering to two fundamental principles.
One is that platforms decide for themselves, free of government control, what content to keep online and what to remove.
The other is that sites cannot be held legally liable for the vast majority of what their users post, which shields them from lawsuits over libelous statements, extremist content, and real-world harm traced back to their platforms.
A Revisit
However, the Supreme Court is prepared to revisit these norms, a review that could produce the most substantial overhaul of the principles governing online expression since the 1990s, when U.S. officials and courts agreed to apply only minimal controls to the internet.
On Friday, the Supreme Court was set to decide whether to hear two cases challenging Texas and Florida statutes that bar web platforms from removing certain political content.
The month after next, the court is slated to hear a lawsuit challenging Section 230, the 1996 provision that shields platforms from liability for content uploaded by their users.
The Supreme Court on Friday announced it would hear a challenge over what is considered a "true threat" in the age of the internet, weighing a case where a man is seeking to overturn his criminal conviction for stalking due to his social media messages. https://t.co/fF5lQG0Osb
— Robert W Malone, MD (@RWMaloneMD) January 15, 2023
The lawsuits could ultimately upend the largely hands-off legal stance the U.S. has taken toward online expression, threatening the operations of Twitter, YouTube, Snapchat, and even Meta.
The proceedings are part of an intensifying worldwide struggle over how to address harmful online speech.
In recent years, as Facebook and other sites attracted billions of users and became powerful channels of communication, their influence drew heightened scrutiny. Concerns grew over the potential influence of social media on elections, mass murders, wars, and political disputes.
In various regions of the world, legislators have sought to limit the effect of platforms on speech. In the United States, where the First Amendment guarantees freedom of expression, there has been less regulatory change.
In the last three years, politicians in Washington have questioned the chief executives of tech firms about the content they remove. However, plans to restrict harmful content have failed to gain momentum.
Partisanship has exacerbated the impasse. Republican politicians, many of whom have charged Twitter, Facebook, and other platforms with censorship, have pressured the networks to allow more content to remain.
By contrast, Democrats have argued that the platforms should remove more content, such as health disinformation.
The lawsuit before the Supreme Court challenging Section 230 of the Communications Decency Act could have widespread repercussions.
While publications can be sued for what they print, Section 230 protects internet platforms from being sued for most user-posted information. It also shields platforms from legal action when they remove content.
Lawsuits
In 2020, the court declined to hear a case filed by relatives of terrorism victims who alleged that Facebook was liable for disseminating extremist content.
In 2019, the court refused to hear the case of a man who claimed his ex-boyfriend had used the dating app Grindr to stalk him; the man had sued the app's maker, claiming its product was defective.
However, on February 21, the court intends to hear Gonzalez v. Google, a suit filed by the family of an American killed in an Islamic State attack in Paris.
The family argues that Section 230 should not protect YouTube from claims that it promoted terrorism by recommending Islamic State videos to viewers.
The lawsuit contends that such recommendations amount to content created by the platform itself and therefore fall outside Section 230's protection.
The court will consider a second case, Twitter v. Taamneh, the following day. It addresses when, under federal antiterrorism law, platforms can be held liable for aiding terrorism.
A set of US Supreme Court cases could transform the legal landscape for social media companies, with potentially wide-reaching implications for political discourse and the 2024 elections. https://t.co/3uBcr4xHXJ
— Bloomberg Law (@BLaw) January 17, 2023
In Florida, a federal judge sided with the industry plaintiffs and ruled that the law violated the platforms' First Amendment rights; the U.S. Court of Appeals for the 11th Circuit largely upheld that ruling.
However, the U.S. Court of Appeals for the Fifth Circuit upheld Texas' law, rejecting the notion that companies have an unfettered First Amendment right to censor the speech of individuals.
The conflicting rulings put pressure on the Supreme Court to intervene.
Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy, said that when federal courts give contradictory answers to the same question, the Supreme Court often steps in to settle the dispute.
In the state's petition to the Supreme Court, Ashley Moody, the Florida attorney general, contends that the ruling blocking the statute strips states of the ability to protect individuals' access to information.
If the justices agree to hear the challenges, they could take up the cases in the court's current term, which ends in June, or in the next term, which runs from October 2023 to the summer of 2024.
This article appeared in NewsHouse and has been published here with permission.