iCloud hacks: time to make search engines and social media companies liable for publishing illegal content online
Over recent months yet more women from the UK (actresses, TV personalities, models and sportswomen) have had their iCloud accounts hacked. Self-taken, intimate photographs, most of which had been sent only to partners and some not sent to anyone at all, have been stolen and published on multiple pornographic websites for the world at large. This has been hugely distressing for those involved, who have had to endure intense embarrassment as private photographs spread like wildfire across the internet. The tech giants are not responsible for these hacks or for the initial publications (albeit Apple may soon face negligence and data protection claims over its apparent inability to protect its customers’ data). Nevertheless, these companies have much to answer for: they provide the platforms through which users can easily access the unlawful material, and their efforts to deal with the issue have been lamentable. Their delayed responses, or outright inaction, when notified of this illegal activity have been a source of immense frustration for those affected. It is now over three years since the hacks of Jennifer Lawrence and others back in 2014, yet the latest round of hacks demonstrates that the systems these companies have in place for dealing with such issues are still not fit for purpose.
The case for online platforms to answer
These photographs have been stolen, and their publication is unlawful. They are invariably of a sexual nature and invariably published without the woman’s consent (the photographs are almost exclusively of women). Their publication constitutes a breach of the woman’s privacy rights and of her rights under European data protection law. Given that the photographs are self-taken, unauthorised publication also amounts to infringement of the woman’s copyright.
Copyright complaints are usually seen as the best method of securing swift removal of material from the web, at least from companies based in the US. If those companies wish to avail themselves of the safe harbour provisions of the Digital Millennium Copyright Act, they must act expeditiously to remove infringing material once placed on notice of it. If they fail to do so, they are exposed to liability of up to $150,000 for each instance of infringement (i.e. for the publication of each photograph).
There has been much legal debate about what being placed on notice means (notice of the issue as a whole, or of each specific instance?) and what expeditiously means (i.e. what is a reasonable length of time within which to remove or block the material?). The online platforms require specific notification of each infringing URL before they will take action (Google makes complainants use its hopelessly inadequate “removals tools”). This usually forces the victim of the illegal activity to incur the cost of hiring professionals to police the internet and make individual complaints. The problem is not just one of inconvenience and cost; it is that modern technology makes a mockery of such action. The sites publishing the illegal material use software to auto-generate pages, each with a slight variation in the URL, so that as soon as one page is removed or blocked, another from the same site with a tiny variation in the URL takes its place in the search or image results. A woman who has become a victim of one of these hacks is highly unlikely to be able to secure complete removal of the photographs from the web, given their digital nature and the speed with which they can be disseminated, downloaded and later uploaded elsewhere. The best she can hope to do at present is to significantly reduce circulation by limiting access to them. To do even this, however, she will need much greater assistance from the online platforms than they are currently providing.
The sites and blogs publishing the photographs are often repeat, indeed prolific, offenders. They do not seek to hide the fact that they are publishing hacked (i.e. stolen) photographs; most proudly proclaim it in their titles and on their home pages. These sites are known to the search engines, which have received thousands of complaints about them, yet the search engines continue to provide the lifeblood the sites need to survive. By continuing to include them in their indexes, they drive traffic to the sites and likely profit from the advertising revenue those sites generate.
Pressure for legislative reform
In the run-up to the 2017 general election, many politicians were subjected to vicious online abuse and harassment. This led a cross-party committee to recommend that the ‘Government should bring forward legislation to shift the liability of illegal content online towards social media companies’. The committee suggested that this be done upon the UK’s exit from the EU in March 2019.
Currently, social media companies bear no liability for the content on their sites, even where that content is illegal. The relevant legislation in the UK derives from the EU’s E-Commerce Directive, introduced 18 years ago, before the main social media companies were even formed. The directive exempts internet service providers and social media companies from criminal or civil liability when their services are used to commit an offence, for example publishing or transmitting illegal content. This is because they can avail themselves of the ‘hosting’ exemption, under which the provider’s relationship to the content is considered merely ‘technical, automatic or passive’. The hosting exemption requires that the company does not have knowledge of the illegal activity or information, and that it removes or disables access to it ‘expeditiously’ upon becoming aware of it. This has formed the basis of what is called the ‘notice and takedown’ model.
Member states are prohibited from imposing a general monitoring duty on service providers (Article 15 of the directive). The result is that the platforms take a passive, rather than proactive, approach to identifying and removing illegal content.
In recent years, member states have diverged significantly in their legislative treatment of online platforms. Last year Germany became the first EU member state to pass legislation creating time-specific takedown provisions for platforms (removal within 24 hours of being notified by a user) and introducing significant sanctions (up to €50m) for contravention.
While such reform would be an improvement on the status quo, it does not go far enough. Significant legislative change is needed. The online platforms must take more responsibility for the content listed, posted and shared on them. After all, it is they who profit from that content.
The time has come for legislators to recognise that the platforms are not merely hosts: they use complex algorithms to analyse, rank and select content based on a range of factors. Nor are these companies lacking in resources. They have the technology (machine learning and automation techniques) and the ability to change their algorithms to stop such sites appearing in their search results or on their platforms.
Revising the legal framework will incentivise the prompt, automated identification of illegal content. It will also remove the current perverse incentives for online platforms to avoid any form of active moderation.
While they would not wish it to be known, these platforms are likely benefiting financially from the victimisation and harassment of women. They could easily introduce measures to immediately remove access to such illegal material and prevent it reappearing on their platforms. They have failed to do so because it does not suit their self-interest and because the current legal framework discourages them from taking such action. The law is out of date. It must be changed.