Defamation Reform – Briefing for response to Consultation
This is a short briefing on the proposed reforms to Defamation law in Scotland.
In 1996, publishing a statement with the potential to harm another person's reputation was arguably something only those with access to prominent publishers – authors, journalists and academics – could do. The subject of defamation would not have concerned much of the population of Scotland bar a few free speech activists.
Now, with the rise of social media, everyone should be alert to the power – and the risk – we have when we write something online. As users of Twitter, Facebook, review sites like TripAdvisor and platforms like WordPress, we have all potentially become authors. Through organising on these platforms and setting up different community groups we could – depending on how laws are defined – be editors.
Freedom of expression is a cherished right, one that we exercise every day when we comment, tweet, review or publish. Our right to freedom of expression has been given a powerful boost by the Internet.
The last time defamation law was reformed in Scotland was in 1996. At the time, AOL had only just decided to shift its charging model from hourly to monthly; public internet use could still be measured and charged by the hour. We are in a vastly different space now, always connected to networks and capable of publishing and communicating to millions at the touch of a screen.
This reform in Scotland provides us with an opportunity to ensure:
- That Freedom of Expression is respected;
- That individuals can remove defamatory material, and challenge the removal of that material through due process.
To achieve this we must:
- Make sure that the threshold for bringing a defamation action is proportionately high.
- Restrict the liability of internet intermediaries to prevent immediate and pre-emptive takedowns.
- Dis-incentivise unjustified threats of action.
- Limit the time for which a statement can be claimed to be defamatory.
Serious harm threshold
Facts
- 8,429 tweets are sent per second.
- 1,508 Tumblr posts are published per second.
- 400 hours of content are uploaded to YouTube every minute.
As more and more of us can publish information online, it is important to make sure that actions for defamation are reserved for situations in which serious harm to the individual occurs.
Currently, Scots law does not provide any statutory threshold test that a party must meet to bring an action. If an individual can be threatened with an action over a statement that is merely harmful or inconvenient to a person’s reputation, rather than one causing serious harm, there is a risk of creating a chilling effect on freedom of expression, where a person takes a statement down rather than risk a drawn-out court case.
In an online environment more people are communicating publicly, whether in large groups or on public social media. This means there is greater opportunity for someone to bring a speculative defamation claim to try to silence speech that is inconvenient, but not seriously harmful. There needs to be a recognition that defamation should only be available for statements that are seriously harmful to a person’s reputation, not merely inconvenient.
Open Rights Group supports establishing a threshold of ‘serious harm’ in statute.
Clear definitions
It is very important to provide clear, unambiguous definitions of those who may be liable for defamatory content. Traditionally, the roles that attract liability for defamation are:
- Author
- Editor
- Publisher
Previously, these individuals were easy to identify: authors signed their articles, editors’ details were attached to their publications, and commercial publishers were often prominently displayed on their products.
The draft Defamation and Malicious Publications (Scotland) Bill provides clear definitions of author and publisher, but the definition of editor needs greater clarity. Editor is defined as:
“A person with editorial or equivalent responsibility for the content of the statement or the decision to publish it.”
Is a moderator of a group or forum to be considered an editor? A moderator of a group on Facebook has a number of different powers, including the ability to:
- Approve or deny posts in the group
- Remove posts and comments on posts
- Pin or unpin a post
- Remove and block people from the group
A moderator on Reddit can “sticky” a post so that it stays at the top of the subreddit (the community’s page), and can remove items.
These powers could arguably be considered editorial or equivalent responsibility. The Scottish Government does provide a series of examples of secondary publishers who should not be considered liable:
(a) printing, producing, distributing or selling printed material containing the statement,
(b) processing, making copies of, distributing, exhibiting or selling a film or sound recording (as defined in Part 1 of the Copyright, Designs and Patents Act 1988) containing the statement,
(c) processing, making copies of, distributing or selling any electronic medium in or on which the statement is recorded,
(d) operating or providing any equipment, system or service by means of which the statement is retrieved, copied, distributed or made available in electronic form,
(e) broadcasting a live programme containing the statement in circumstances in which the person has no effective control over the maker of the statement,
(f) operating or providing access to a communications system by means of which another person over whom the person has no effective control transmits the statement or makes it available,
(g) moderating the statement (for example, by removing obscene language or correcting typographical errors without altering the substance of the statement).
Open Rights Group thinks a clearer definition of editor and ‘editorial responsibility’ should be developed.
Secondary publishers
Facts
- Google’s search index contains hundreds of billions of web pages and is over 100,000,000 gigabytes in size.
- Facebook has 2.3 billion monthly active users.
- There are 456 million blogs on Tumblr.
- In 2016, nearly 118 billion words were published on WordPress.
Examples (d) and (f) from the block quote above perform the task of protecting internet service providers and internet intermediaries, such as platforms, from liability. This is an important restriction to include and one Open Rights Group welcomes.
Open Rights Group is a signatory to the Manila Principles on Intermediary Liability, a set of principles backed by civil society organisations around the world that sets out proportionate limits to intermediary liability for third-party content. The Manila Principles state: “Intermediaries should be shielded by law from liability for third-party content”.
Open Rights Group welcomes the restriction of defamation actions against secondary publishers.
It is vital that intermediaries are shielded from liability in order to protect freedom of expression. If they were exposed to liability for third-party content, the risks include:
- Intermediaries such as Internet Service Providers and social media platforms installing systems to monitor all content that is uploaded to their platforms. This would violate European law prohibiting general monitoring obligations on service providers.
- Pre-emptive takedowns by intermediaries concerned about their legal liability when receiving complaints. This would bypass due process and tip the balance in favour of the complainer, undermining freedom of expression.
Create an unjustified threats mechanism
Manila Principle 3 states: “Requests for restrictions of content must be clear, be unambiguous, and follow due process.”
In line with this principle, which continues the effort to protect intermediaries and support freedom of expression, an unjustified threats mechanism would restrict requests for content restriction to those that are legitimate and follow due process. As online reputation management becomes a lucrative business, the need to ensure that individuals use the law of defamation in good faith grows.
As stated previously, with the growth of social media and user-generated content, ordinary citizens are increasingly likely to find themselves involved in defamation actions. In this environment, a mechanism that dissuades empty threats of litigation – threats brought solely to stifle and silence public debate – becomes necessary.
Such a mechanism already exists in other areas relevant to the digital environment: the Intellectual Property (Unjustified Threats) Act 2017 was introduced to combat the growth in “patent trolls”. A similar risk presents itself in defamation proceedings, and a similar mechanism would provide a useful counterweight.
Open Rights Group supports introducing an unjustified threats mechanism.
Limit the time for which a statement can be claimed to be defamatory
Facts
- The Guardian has over 300,000 subscribers in the US.
- Facebook has:
- 400 million users in India
- 210 million users in the United States
- 130 million in Indonesia
- 40 million in the United Kingdom
One of the decisive changes the Internet has made to information is to make it always available. While back issues of newspapers were previously rarely collected and circulated, articles are now permanently available in multiple places at any one time. This makes a single publication rule necessary to prevent perpetual liability for a potentially defamatory statement.
Previously, the rules governing when and where actions for defamation could be brought were too permissive. They were so permissive that they allowed a Russian oligarch to take a US publication to court in the UK, despite the publication’s UK circulation being a fraction of that in its home jurisdiction, the US.
Alongside this permissiveness over location, multiple publication rules also allowed each individual access of an article to give rise to a new claim for defamation. This allowed another case, involving a different Russian oligarch, to claim that articles published in 1999 were still defamatory in 2002 because people were still able to access them.
Currently the rule in Scotland is that an action must be brought within three years from the point the pursuer becomes aware of the statement, and this period restarts when the statement in question is republished. These rules sit in stark contrast to the nature of accessing content and communications online.
The ease and automatic nature of indexing and archiving stories online means that any multiple publication rule would risk creating perpetual liability. The best approach that would safeguard freedom of expression while still allowing for individuals to bring legitimate claims against defamatory material is a single publication rule that begins on the date of publication.