Posted By: Kyle Sol Johnson
April 5, 2015
Google is the world’s leading search engine, with over 65% market share in the U.S. and over 90% in Europe. An integral part of Google’s search feature is ‘autocomplete’, which shows users an algorithmically derived list of search-completion suggestions based on what millions of other users have previously searched for. The suggestions are generated without direct human intervention, though Google has tweaked the algorithm to remove some illicit suggestions.
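Google’s actual system is proprietary and far more sophisticated, but the core idea the article describes — ranking completions by how often other users have searched them — can be sketched in a few lines. Everything here (the query log, the names, the ranking rule) is an illustrative assumption, not Google’s implementation:

```python
from collections import Counter

def build_index(query_log):
    """Count how often each full query appears in a log of past searches."""
    return Counter(query_log)

def autocomplete(index, prefix, k=3):
    """Return the k most frequent past queries that start with the prefix."""
    matches = [(q, n) for q, n in index.items() if q.startswith(prefix)]
    # Most popular first; ties broken alphabetically.
    matches.sort(key=lambda item: (-item[1], item[0]))
    return [q for q, _ in matches[:k]]

# Hypothetical query log: if enough users append a damaging word,
# that completion rises to the top of the suggestion list.
log = ["john doe", "john doe fraud", "john doe fraud", "john smith"]
index = build_index(log)
print(autocomplete(index, "john doe"))  # → ['john doe fraud', 'john doe']
```

Under this toy model, no one at the search engine typed the word “fraud”; it surfaces purely because prior users searched for it — which is exactly the causal chain at issue in the litigation described below.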
Nevertheless, Google has come under legal fire in recent years for autocomplete results that link individuals and corporations to illegal or otherwise unsavory activities. In 2011, a court in Milan, Italy, issued an order requiring Google to filter libelous autocomplete suggestions. Autocomplete returned the words “truffatore” (con man) and “truffa” (fraud) when users queried the plaintiff’s first and last names, allegedly damaging the plaintiff’s reputation and public image as an entrepreneur.
In Germany in 2013, the Federal Court of Justice held Google responsible for returning the terms “Scientology” and “fraud” in association with the name of the owner of a nutritional supplements company. The court ordered Google to remove the ‘defamatory’ autocomplete entries, imposing a new duty on Google to monitor autocomplete suggestions in Germany.
In Hong Kong in 2014, a judge permitted a libel suit against Google to proceed despite the search giant’s objections. Autocomplete returned the term ‘triad’ in connection with searches for the name of a Chinese business tycoon who had been convicted of several crimes in the past. Google has also lost cases concerning autocomplete in France, Japan, and Australia.
In response to the Italian verdict, Google released a statement saying:
“We believe that Google should not be held liable for terms that appear in autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself.”
This has been Google’s default response to each of the cases brought against it because of its autocomplete terms. Is it then just a matter of time before the U.S. begins to see similar litigation brought against the company? Perhaps not.
United States defamation law differs in key respects from that of foreign jurisdictions, due primarily to the stringent protections that the U.S. has afforded to free speech. To support a cause of action for defamation, a plaintiff in the U.S. must show four elements:
- A false statement purported to be a fact;
- That is published or communicated to a third person;
- Fault; and
- Damages or some harm caused to the individual (or entity) that is the subject of the statement.
However, the U.S. Supreme Court in New York Times v. Sullivan created an additional burden for plaintiffs who are public figures or public officials. Government officials and famous persons must show that the statements were made with actual malice, meaning that the speaker acted with knowledge that the statement was false or with reckless disregard for whether it was false. Further, this element must be proved by clear and convincing evidence rather than the usual civil preponderance-of-the-evidence standard.
It is unlikely that a U.S. court would find actual malice on Google’s part, because the autocomplete suggestions are generated largely from user queries. There is also a question of whether Google is actually ‘publishing’ the allegedly defamatory terms at all.
Section 230 of the Communications Decency Act provides that online services cannot be treated as the publishers of defamatory content created by their users. It is unlikely that a court would find that Google’s algorithm so transforms users’ searches that Google itself becomes the content creator.
In conclusion, while Google is unlikely to be found liable for libel in U.S. jurisdictions, the rulings against it in other parts of the world could substantially threaten its entire business model. Google has stood astride the search engine field for years because of the efficiency of its search apparatus: autocomplete offers suggestions to users who may not yet know what they are looking for, Google Instant returns results even as the user types, and every Google search comes with a list of other suggested searches. Holding Google liable for defamatory words surfaced by whatever its algorithm finds most popular could force it and other search engines to actively censor their own essential functions, reducing the immense social utility these services provide.