Supreme Court vs. Google

Supreme Court case could fundamentally change the internet

Gonzalez v. Google is a high-stakes case about what we actually see when we go online.

A pedestrian walks past Google in New York City in July. John Smith/VIEWpress via Getty Images

Gonzalez v. Google, an extraordinarily high-stakes tech policy case that the Supreme Court announced on Monday it will hear, emerged from a horrible act of mass murder.

Nohemi Gonzalez, a 23-year-old American studying in Paris, was killed when individuals affiliated with the terrorist group ISIS opened fire on a café where she and her friends were eating dinner. According to her family’s lawyers, she was one of 129 people killed during a November 2015 wave of violence in Paris that ISIS claimed responsibility for.

In the wake of Gonzalez’s murder, her estate and several of her relatives sued an unlikely defendant: Google. Their theory is that ISIS posted “hundreds of radicalizing videos inciting violence and recruiting potential supporters” to YouTube, which is owned by Google. Significantly, the Gonzalez family’s lawyers also argue that YouTube’s algorithms promoted this content to “users whose characteristics indicated that they would be interested in ISIS videos.”

The question of whether federal law permits a major tech company like Google to be sued over which content its algorithms served up to certain users divides some of the brightest minds in the federal judiciary. Although at least two federal appeals courts determined that these companies cannot be sued over their algorithms, both cases produced dissents. And it’s now up to the Supreme Court to resolve this disagreement in the Gonzalez case.

At stake are fundamental questions about how the internet works, and what kind of content we will all see online. Currently, algorithms and similar behind-the-scenes automation determine everything from what content we see on social media to which websites we find on search engines to which ads are displayed when we surf the web. In the worst-case scenario for the tech giants, a loss in Gonzalez could impose an intolerable amount of legal risk on companies like Google or Facebook that rely on algorithms to sort through content.

At the same time, there is also very real evidence that these algorithms impose significant harm on society. In 2018, the sociologist Zeynep Tufekci warned that YouTube “may be one of the most powerful radicalizing instruments of the 21st century” because of its algorithms’ propensity to serve up more and more extreme versions of the content its users decide to watch. Someone who starts off watching videos about jogging may be directed to videos about ultramarathons. Someone watching Trump rallies may be pointed to “white supremacist rants.”
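To make Tufekci’s point concrete, here is a minimal, purely illustrative sketch in Python of how an engagement-optimizing feedback loop can escalate. Nothing here reflects YouTube’s actual system; the video titles, “intensity” scores, and escalation step are all invented for illustration.

```python
# Toy sketch of an engagement-driven recommender (hypothetical, not
# YouTube's real system). It assumes viewers tend to click content
# slightly more intense than what they just watched, so each
# recommendation escalates a notch.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    intensity: float  # 0.0 = mild, 1.0 = extreme (invented label)

CATALOG = [
    Video("Beginner jogging tips", 0.1),
    Video("Training for a 10K", 0.3),
    Video("Marathon documentary", 0.6),
    Video("100-mile ultramarathon suffering", 0.9),
]

def recommend(last_watched: Video) -> Video:
    # Score candidates by closeness to a slightly escalated target --
    # a crude stand-in for maximizing predicted watch time.
    target = last_watched.intensity + 0.2
    return min(CATALOG, key=lambda v: abs(v.intensity - target))

current = CATALOG[0]
for _ in range(3):
    current = recommend(current)
    print(current.title)
# Prints the 10K video, then the marathon documentary, then the
# ultramarathon video -- each pick a step more extreme than the last.
```

The jogging-to-ultramarathon drift in the output mirrors the progression Tufekci describes; swap in political content and the same loop produces the radicalization pipeline at issue in Gonzalez.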

If the United States had a more dynamic Congress, lawmakers could study the question of how to maintain the economic and social benefits of online algorithms, while preventing them from serving up ISIS recruitment videos and racist conspiracies, and potentially write a law that strikes the appropriate balance. But litigants go to court with the laws we have, not the laws we might want. And the outcome of the Gonzalez lawsuit turns on a law written more than a quarter-century ago, when the internet looked very different from how it does today.

That means that the potential for a disruptive decision is high.

Section 230 of the Communications Decency Act, briefly explained

There are many reasons to be skeptical that the Gonzalez family will ultimately prevail in this lawsuit. Even if their lawyers can prove that the individuals who murdered Nohemi watched ISIS videos on YouTube, it’s unclear how they could show that these videos caused Nohemi’s death. And the First Amendment typically protects video content, even videos that advocate violence or terrorism, unless the video is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

But the Gonzalez litigation never got that far. A federal appeals court dismissed the case, holding that Google is immune from the lawsuit thanks to one of the most consequential tech policy statutes ever enacted: Section 230 of the Communications Decency Act of 1996.

Briefly, Section 230 offers two protections to websites that host third-party content online.

First, it shields those websites from civil lawsuits arising out of illegal content posted by the website’s users. If I send a tweet falsely accusing, say, singer Harry Styles of leading a secretive, Illuminati-like cartel that seeks to overthrow the government of Ecuador, Styles can sue me for defamation. But, under Section 230, he cannot sue Twitter simply because it owns the website where I published my defamatory tweet.

Additionally, Section 230 states that websites retain this lawsuit immunity even if they engage in content moderation that removes or “restrict[s] access to or availability of material” posted on their site. So Twitter would still be immune from Styles’s hypothetical lawsuit if it bans other users, but not me, even after I commit defamation on its website.

These twin safeguards fundamentally shaped the internet’s development. It’s unlikely that social media sites would be financially viable, for example, if their owners could be sued every time a user posts a defamatory claim. Nor is it likely that we would have sites like Yelp, or the user reviews section of Amazon, if a restaurant owner or product maker could sue the website itself over negative reviews they believe to be defamatory.

But, while Section 230 protects websites that remove content they find objectionable, it is far from clear that it protects websites that promote illegal content. If I publish a defamatory tweet about Harry Styles, and Twitter sends a promotional email to its users telling them to check out my tweet, Styles would have a fairly strong argument that he can sue Twitter for this email promoting my false claim — even though Section 230 prevents him from suing Twitter over the tweet itself.

The Gonzalez family argues that YouTube’s algorithm should be treated the same way as Twitter would be treated if it sent mass emails promoting defamatory tweets. That is, while Google cannot be sued because ISIS posts a video to one of its websites, the Gonzalez family claims that Google can be sued because one of its websites uses an algorithm that shows ISIS content to users who otherwise most likely would not have seen it.

And this is an entirely plausible reading of Section 230, which, again, was enacted long before tech companies started using the sophisticated, data-informed algorithms that form the backbone of so much of today’s internet. Although several well-regarded judges have determined that Section 230 does protect tech companies from these sorts of suits, other highly respected judges urge a more limited reading of this landmark law.

Why is Section 230 written the way that it is?

Section 230 sought to undo a 1995 court decision that threatened to snuff out online conversations just as the internet was becoming widely available to most Americans. And the broader (now largely defunct) law that it was attached to, the Communications Decency Act, was primarily concerned with things like internet pornography.

Ordinarily, a company that enables people to communicate with each other is not liable for the things those people say to one another. If I write a letter or email to my brother that includes a defamatory conspiracy theory about Harry Styles, Styles can’t sue the Postal Service or Gmail.

But the rule is typically different for newspapers, magazines, or other publications that carefully curate which content they publish. They can often be sued over any content — or, at least, any curated content — that appears in their publication.

Harry Styles signing autographs on a city street. Just in case there is any doubt, I am emphatically not accusing this man of leading a secretive, Illuminati-like cartel that seeks to overthrow the government of Ecuador. Wesley Lapointe/Los Angeles Times via Getty Images

Much of the internet falls into a gray zone between a telephone company — which does not screen the content of people’s calls, and therefore is not liable for anything said on those calls — and curated media such as a magazine. Twitter, for example, routinely deletes tweets it deems offensive. And it sometimes bans individuals, including former President Donald Trump. But Twitter doesn’t exercise anywhere near the level of editorial control that a magazine (or an online publication like Vox) exercises over its content.

Which brings us to a New York state trial court’s 1995 decision in Stratton Oakmont v. Prodigy Services Company.

Prodigy was a popular online service in the 1990s, which hosted several “bulletin boards” where users could discuss topics of mutual interest. An unidentified Prodigy user posted several statements to one of these bulletin boards, which allegedly defamed a brokerage company by falsely accusing it of committing “criminal and fraudulent acts.” The question in Stratton Oakmont was whether Prodigy could be held liable for these statements by one of its users.

Like Twitter, Prodigy fell into the gray zone between a telephone company and a magazine. It did not curate every piece of content that appeared on its website. But it did use an “automatic screening program” to remove some offensive content. And it did have content guidelines that were enforced by designated bulletin board leaders. This level of editorial control, according to Stratton Oakmont, was enough to make Prodigy liable for its users’ statements.

One purpose of Section 230 was to overturn Stratton Oakmont, and to ensure that companies like Prodigy could operate discussion forums without being held liable for the content of those forums. This is why federal law stipulates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In effect, the law established that online forums shall not be treated as though they were publications like magazines or newspapers, which is why Section 230 says that they won’t be treated as the “publisher” of content produced by their users.

And per a separate provision of Section 230, online forums keep their lawsuit immunity even if they “restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

Recall that Stratton Oakmont held that Prodigy was liable for illegal content published on its bulletin boards because it took some steps to remove content it deemed objectionable. If removing objectionable content stripped websites of their lawsuit immunity, then those websites could face crippling consequences. Users could potentially bombard online forums with pornography, and the website would either have to leave those pictures up — lest they face a wave of lawsuits that could shut down their company — or subject every word published on online forums to the sort of advance editorial review typically associated with newspapers.

And so Congress decided to give online forums broad authority to remove content from their websites without endangering their liability shield.

It is genuinely unclear whether Section 230 applies to websites’ choices to promote, instead of remove, content

The Gonzalez plaintiffs effectively argue that a website is not protected by Section 230 when it “affirmatively recommends other party materials,” regardless of whether those recommendations are made by a human or by a computer algorithm.

Although the primary purpose of Section 230 was to allow online forums to operate without having to host pornographic or otherwise offensive content, the federal law is written in expansive terms. It provides that no such forum will be subject to liability as if it were the “publisher or speaker” behind “any information provided by another information content provider.”

Given this broad language, a divided panel of the United States Court of Appeals for the Ninth Circuit concluded that YouTube’s algorithms are protected by Section 230. Among other things, the Ninth Circuit argued that websites necessarily must make decisions that elevate some content while rendering other content less visible. Quoting from a similar Second Circuit case, the court explained that “websites ‘have always decided … where on their sites … particular third-party content should reside and to whom it should be shown.’”

Prodigy, for example, didn’t simply host an open, Twitter-style forum where anyone could post about anything at all. It organized its website into bulletin boards that focused on particular subject matters. The allegedly defamatory statements that triggered the Stratton Oakmont lawsuit were posted on a bulletin board called “Money Talk” — a subject matter that was likely to attract users who would be unusually sensitive to an allegation that a brokerage was engaged in fraud or criminal activity. Nevertheless, Section 230 sought to immunize sites like Prodigy from liability.

A strong rebuttal to the Ninth Circuit’s argument was offered by Judge Robert Katzmann’s dissent in Force v. Facebook (2019), a lawsuit very similar to Gonzalez, which claimed that Facebook’s algorithms helped promote content from the militant Palestinian organization Hamas.

Recall that Section 230 prohibits courts from treating an online forum “as the publisher” of illegal content posted by one of its users. But Katzmann argued that Facebook’s algorithms do “more than just publishing content.” Their function is “proactively creating networks of people” by suggesting individuals and groups that the user should attend to or follow.

Whether that’s a good thing or a bad thing, Katzmann claimed, it goes beyond publishing. And therefore this activity is not shielded by a statute that prevents Facebook from being treated as a “publisher.”

Again, the question of whether Section 230 applies to algorithms and promotional choices is a difficult legal question that’s divided lower court judges, and not along partisan or ideological lines.

Katzmann was a center-left Clinton appointee to the Second Circuit. His dissent in Force disagreed with a majority opinion by former Judge Christopher Droney, an Obama appointee. Similarly, the majority opinion in Gonzalez was authored by Judge Morgan Christen, an Obama appointee. Although that opinion was “reluctantly” joined by Judge Marsha Berzon, a liberal lion who was one of the nation’s leading union-side labor lawyers before she became a judge, Berzon wrote a separate opinion urging the full Ninth Circuit to “reconsider” binding precedents that read Section 230 broadly.

A Supreme Court decision that embraced Katzmann and Berzon’s reading of Section 230 could, as Berzon wrote in her Gonzalez opinion, prevent online algorithms from promoting content that “can radicalize users into extremist behavior.” But such a decision could also have tremendous implications for some of the internet’s most banal features.

If Google can be held liable because its algorithms point a particular user to a particular piece of harmful content, then what happens if someone googles the word “ISIS” and finds their way to a pro-ISIS webpage that leads them down the road to radicalization?

Or, if I can drag poor Harry Styles into this conversation one last time, what happens if Vox’s editorial safeguards somehow break down and we publish an article falsely defaming him? Perhaps Vox should suffer financial consequences for such an error. But should Google pay the price if someone searches for “Harry Styles” and is directed to our erroneous article?

If Google loses the Gonzalez case, it needs to fear the possibility that it could be held liable for illegal content published by others — at least if that content is surfaced by an algorithm. And it’s unclear how a search engine can even operate without some kind of algorithm that determines which websites are listed in which order whenever someone conducts a search.

In an ideal world, Congress would step in to write a new law that strikes a sensible balance between ensuring that important websites continue to function, while also maybe including some safeguards against the promotion of illegal content. But the likelihood that Congress will successfully thread this needle, especially at a time when many Republicans would like to rewrite Section 230 to include ham-handed safeguards for political conservatives, probably isn’t very high.

And so we must wait and see if the Supreme Court hands down a decision that could smother many emerging forms of communication — and not because the justices necessarily have a particular axe to grind. Congress simply did not write Section 230 with this issue in mind back in 1996.