Dealing with online harms: Impact of "safe harbor" laws

By Kati Suominen, Founder and CEO, Nextrade Group and Technical Director, eTrade Alliance

Internet intermediaries such as internet service providers and ecommerce, payment, and social media platforms fuel the flow of information online by helping individuals and companies find, share, and access content and interact and transact with each other. Anyone using the Internet to do research for blogs like this one will see that online platforms reduce search costs by enabling immediate access to an ocean of content generated by a massive range of third parties. Some 3.5 billion Google searches are conducted worldwide per day. Intermediaries like Google generate a great deal of value: one study found that Internet intermediaries increased the EU’s GDP by €430 billion in 2012, or about 3.3 percent of EU GDP; of this, €220 billion were gains from investment, private consumption, and exports, and €210 billion was the indirect effect of productivity increases in firms serviced by intermediaries. An additional €640 billion came from consumer benefits from free services, increases in online advertising, and B2B platform revenues.

In recent years there have been growing concerns about “online harms” such as fake news, hate speech, and counterfeits sold on ecommerce platforms. Platforms have come under growing pressure to identify and remove illegal (and, in some cases, legal but objectionable) content. For policymakers, the key question is: who is liable for malicious or misleading content (for example, apartment listings on Airbnb, posts on Facebook, profiles on dating sites, or content on YouTube that may violate a third party’s copyright) – the user, the platform, or someone else? And what are the economic implications of different liability models, for example for startup investment in new platforms, the efficiency of online intermediation, and the growth of developing countries’ digital ecosystems and MSME ecommerce?

While many developing countries either have yet to adopt a legal liability regime for internet services or are debating existing laws, in the United States such rules were established more than 20 years ago, in Section 512 of the 1998 Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act of 1996. These laws cover Internet service providers (ISPs) and Internet application providers (IAPs), such as social media websites and search engines, with respect to third-party content. The law treats Internet intermediaries largely as conduits of information, not its generators, and thus provides them with certain immunities, a so-called “safe harbor,” from liability for the content their users post. Canada has a similar copyright law: ISPs are “exempt from liability when they act strictly as intermediaries in communication, caching and hosting activities.”

Under U.S. law, intermediaries’ liability has been limited to cases where they fail to remove infringing material or damaging content in a timely manner after a judicial order or, in cases of sexual content or nudity, after the injured party makes a takedown request. Platforms remain liable under trademark, unfair competition, privacy, and defamation laws, as well as for their own infringing activities and for collusion with third parties to create infringing material.
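
To make the structure of this rule concrete, here is a deliberately simplified sketch (in Python) of the notice-and-takedown logic: immunity turns on whether the platform acted on a notice, and how quickly. The `Notice` record and the fixed 14-day deadline are hypothetical illustrations; the actual statutes speak of “expeditious” removal rather than a set number of days, and the full eligibility conditions are considerably richer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notice:
    """A takedown notice (or judicial order) received by a platform."""
    received_day: int           # day the notice arrived
    removed_day: Optional[int]  # day the content came down, or None if it never did

def retains_safe_harbor(notice: Notice, deadline_days: int = 14) -> bool:
    """Toy rule: the platform keeps its immunity only if it removed the
    flagged content within deadline_days of receiving the notice.
    The fixed deadline is illustrative, not statutory."""
    if notice.removed_day is None:
        return False  # never acted on the notice: immunity lost
    return notice.removed_day - notice.received_day <= deadline_days

# A platform that acts promptly keeps the safe harbor; one that never acts loses it.
print(retains_safe_harbor(Notice(received_day=1, removed_day=3)))     # True
print(retains_safe_harbor(Notice(received_day=1, removed_day=None)))  # False
```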

A number of emerging markets and developing nations have already adopted safe harbor laws. In its Marco Civil Internet law of 2014, Brazil established a safe harbor that limits the responsibility of providers for hosting or transferring third-party content. Chile’s copyright law of 2010 likewise specifies that internet intermediaries are not liable for user content on their sites if they take appropriate action in response to official notices. Safe harbors are also making their way to Africa. For example, Ghana’s Electronic Transactions Act of 2008 limits the liability of intermediaries for “hosting, caching, linking, or mere conduits” on condition that they do not “have actual knowledge that the information or an activity relating to the information is infringing the rights of a third party (or person or the state).”

Safe harbors have also been adopted by parties to recent U.S. trade agreements. The U.S.-Mexico-Canada Agreement (USMCA) that replaced the North American Free Trade Agreement includes safe harbor provisions for Internet providers in North America, and the CAFTA-DR agreement between the United States, Central America, and the Dominican Republic requires parties to follow DMCA-style safe harbor protections.

Over the years, U.S. courts have by and large upheld safe harbors and sided with internet service providers. In a landmark case in 2008, a court ruled that eBay was not liable when one of its sellers listed infringing products on the site, because eBay did not have specific knowledge that the products were infringing. Courts in Latin America have reached similar conclusions.

Of course, platforms also have their own community guidelines and policies for content removal, for example for videos or posts containing terrorist propaganda, violence, and the like. Facebook eliminates some 15,000 posts a month in Germany alone; YouTube removed an average of 2.7 million videos per month globally in 2017.

Europe has taken a different path. European courts have applied Europe’s safe harbor regime less consistently than U.S. courts have applied theirs. The EU’s safe harbor is enshrined in the E-Commerce Directive, which frees service providers from an obligation to monitor for illegal activity by the users of their service. However, in 2018 the European Parliament passed a copyright law that makes website operators liable for copyright infringements in the content their users upload. The law requires platforms like Facebook, Google, YouTube, and Twitter to sign licensing agreements with musicians, authors, and news publishers before posting their content; otherwise they are legally responsible for copyright infringements by these users. Platforms are also expected to use upload filters to prevent users from uploading copyrighted content and are required to compensate copyright holders (such as journalists or musicians) for the use of their content, even in snippets. On the positive side, small and micro platforms and startups are exempt from the law.
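
To make the “upload filter” idea concrete, below is a minimal sketch of how such a filter might work in principle: each upload is fingerprinted and checked against a database of fingerprints registered by rights holders. Everything here, including the exact-hash fingerprinting and the registered-fingerprint set, is a hypothetical simplification; production systems rely on perceptual fingerprinting that survives re-encoding, cropping, and editing, which is precisely where the hard judgment calls arise.

```python
import hashlib

# Hypothetical database of fingerprints registered by rights holders. A real
# filter needs perceptual fingerprints: an exact hash misses any re-encoded
# or lightly edited copy of the same work.
REGISTERED_FINGERPRINTS = {
    hashlib.sha256(b"copyrighted song master file").hexdigest(),
}

def fingerprint(upload: bytes) -> str:
    """Exact-match fingerprint: the SHA-256 digest of the raw bytes."""
    return hashlib.sha256(upload).hexdigest()

def filter_upload(upload: bytes) -> str:
    """Block the upload if its fingerprint matches registered content."""
    if fingerprint(upload) in REGISTERED_FINGERPRINTS:
        return "blocked: matches registered copyrighted content"
    return "accepted"

print(filter_upload(b"copyrighted song master file"))  # blocked
print(filter_upload(b"original home video"))           # accepted
```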

China, too, has tightened rules on platforms as well as their users. Its January 2019 ecommerce regulations require online sellers to register their businesses and acquire all necessary licenses, and hold liable both counterfeiters and ecommerce platforms that fail to “take necessary measures” to stop infringing sellers.

What, then, are the economic impacts of these various laws to date?

  • Safe harbors are found to give confidence to investors in online platforms. Section 230 is widely hailed as key to the growth of American online platforms, as it and its interpretations in American courts have given legal certainty to platforms and to their investors. Surveys suggest that unclear and restrictive liability and copyright regulations deter investors. In one survey, regulations holding internet services liable for user-generated content would have reduced the pool of investors interested in such services by 81 percent. Meanwhile, clarifying copyright regulations to allow websites to resolve legal disputes quickly would expand the pool of interested investors by 111 percent, and limiting penalties for websites acting in good faith would expand it by 118 percent.

  • Safe harbors can be especially helpful for small platforms that lack the staff and capacity to remove content from their sites. For example, China’s January 2019 regulations are seen as favoring large ecommerce platforms such as Alibaba and JD.com, which already adhere to these types of rigorous practices, and hurting small businesses that sell online as well as small platforms that have fewer resources to implement these regulations. Upload filtering requirements can also harm small platforms in particular: content removal technologies are not yet at a point where they suffice for making judgments about acceptable and unacceptable content – for example, computers do not yet reliably distinguish “hate speech” from benign, non-hateful speech or speech covered by freedom of speech laws. The broad consensus among technology leaders is that human beings will for the foreseeable future need to be in the content moderation loop, and that AI should merely be an assistive technology for scaling and sharpening the decisions made by humans (see the sketch after this list).

  • Safe harbors can protect freedom of speech. Critics argue that safe harbors were crafted for another era, when the problems of online copyright infringement, hate speech, and other unsavory content were much more limited. Safe harbor proponents counter that making platforms police their users could limit innovation and freedom of expression, as platforms would likely err on the side of caution and remove content that could be deemed offensive or infringing. For example, the European Parliament law seeks to safeguard freedom of expression by exempting hyperlinks to articles and the individual words that describe them, but critics argue that legitimate content can still easily get censored. Courts in the United States have also recognized intermediaries’ own free speech rights in their handling of user-generated content.

  • Revoking safe harbors may hurt original and small content providers in particular. A number of analysts share the concern that erosion of safe harbors, as in Europe, will limit innovation and competition. For example, some analysts worry that removing safe harbors would encourage copyright holders to overreach and request that platforms take down content that is not necessarily infringing. Critics of the EU’s 2018 copyright law argue that it makes platforms much more wary of copyright infringements, and that platforms would therefore refuse content from smaller, less-known content creators and instead be incentivized to accept content from larger, better-known companies that are unlikely to post infringing material. This would in essence amount to censorship of content dissemination and threaten the livelihoods of the very people the law seeks to protect, a realization that incited protests in Europe against the mandatory upload filters the law requires.
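
As referenced in the list above, here is a minimal sketch of what AI-assisted, human-in-the-loop moderation might look like: an automated classifier scores each post, only very confident scores are auto-actioned, and the ambiguous middle band is routed to a human reviewer. The keyword-based `classifier_score` and the thresholds are hypothetical placeholders standing in for a real machine-learning model.

```python
def classifier_score(post: str) -> float:
    """Stand-in for an ML classifier returning the probability that a post
    violates policy. Here: a naive keyword heuristic, which is exactly the
    kind of model that cannot tell hateful from benign uses of a word."""
    flagged_terms = {"terror", "attack"}
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, 0.45 * hits)

def route(post: str, auto_remove_at: float = 0.9, auto_allow_at: float = 0.1) -> str:
    """AI assists but does not decide: only very confident scores are
    auto-actioned; the ambiguous middle band goes to a human reviewer."""
    score = classifier_score(post)
    if score >= auto_remove_at:
        return "auto-removed"
    if score <= auto_allow_at:
        return "auto-allowed"
    return "sent to human review"

for post in ["lovely day at the park",
             "terror attack footage",
             "their attack won the match"]:
    # The sports sense of "attack" lands in the ambiguous band: a human decides.
    print(post, "->", route(post))
```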

To be sure, courts have recently taken a more stringent approach to platform liability in the United States. For example, in 2019 a judge ordered the business-rating platform Yelp to remove reviews that were found to be defamatory; another judge ruled that Twitter could not use a Section 230 defense in a lawsuit over unwanted texts; and a panel of federal judges allowed a $10 million lawsuit against the dating site Match.com brought by a woman who was stabbed by a man she had met on the site. Safe harbors are not perfect, and they are fiercely debated and contested among interest groups and the legal teams arguing cases in courts. For policymakers considering regulations on online platforms in their own economies, however, research to date shows that making platforms “police the Internet” can have significant negative implications for the growth of domestic platforms and digital ecosystems, for small local firms seeking to post content on large global platforms, and for citizens’ freedom of speech.
