Social media firms should face fines for hate speech failures, urge UK MPs

Social media giants Facebook, YouTube and Twitter have once again been accused of taking a laissez-faire approach to moderating hate speech content on their platforms.

This follows a stepping up of political rhetoric against social media platforms in the UK in recent months, after a terror attack in London in March prompted Home Secretary Amber Rudd to call for tech firms to do more to help block the spread of terrorist content online.

In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government look at imposing fines on social media firms for content moderation failures.

It's also calling for a review of existing legislation to ensure clarity about how the law applies in this area.

"Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe," the committee writes in the report.

Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours of a complaint being made.

A European Union-wide Code of Conduct on swiftly removing hate speech, which was agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure, but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.

The UK Home Affairs committee report describes it as "shockingly easy" to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.

It urges social media companies to introduce clear and well-funded arrangements for proactively identifying and removing illegal content, particularly dangerous terrorist content or material related to online child abuse, and calls for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.

The committee's investigation, which started in July last year following the murder of a UK MP by a far-right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online, having taken evidence for this from Facebook, Google and Twitter.

"It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe," it writes.

The committee flags multiple examples where it says extremist content was reported to the tech giants but the reports were not acted on adequately, calling out Google especially for weakness and delays in responding to reports it made of illegal neo-Nazi propaganda on YouTube.

It also notes the three companies refused to tell it exactly how many people they employ to moderate content, and exactly how much they spend on content moderation.

The report makes especially uncomfortable reading for Google, with the committee directly accusing it of profiting from hatred, arguing it has allowed YouTube to be a platform from which extremists have generated revenue, and pointing to the recent spate of advertisers pulling their marketing content from the platform after it was shown being displayed alongside extremist videos. Google responded to the high-profile backlash from advertisers by pulling ads from certain types of content.

"Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves," the committee writes.

"If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area."

The committee suggests social media firms should have to contribute to the cost to the taxpayer of policing their platforms, pointing to how football teams are required to pay for policing in their stadiums and the immediate surrounding areas under UK law as an equivalent model.

It is also calling for social media firms to publish quarterly reports on their safeguarding efforts, including:

  • analysis of the number of reports received on prohibited content
  • how the companies responded to reports
  • what action is being taken to eliminate such content in the future

"It is in everyone's interest, including the social media companies themselves, to find ways to reduce pernicious and illegal material," the committee writes. "Transparent performance reports, published regularly, would be an effective method to drive up standards radically and we hope it would also encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do so, we recommend that the government consult on requiring them to do so."

The report, which is replete with pointed adjectives like "shocking", "shameful", "irresponsible" and "unacceptable", follows several critical media reports in the UK which highlighted examples of moderation failures on social media platforms, and showed extremist and paedophilic content continuing to be spread.

Responding to the committee's report, a YouTube spokesperson told us: "We take this issue very seriously. We've recently tightened our advertising policies and enforcement; made algorithmic updates; and are expanding our partnerships with specialist organisations working in this field. We'll continue to work hard to tackle these challenging and complex problems."

In a statement, Simon Milner, director of policy at Facebook, added: "Nothing is more important to us than people's safety on Facebook. That is why we have quick and easy ways for people to report content, so that we can review, and if necessary remove, it from our platform. We agree with the Committee that there is more we can do to disrupt people wanting to spread hate and extremism online. That's why we are working closely with partners, including experts at King's College, London, and at the Institute for Strategic Dialogue, to help us improve the effectiveness of our approach. We look forward to engaging with the new Government and parliament on these important issues after the election."

Nick Pickles, Twitter's UK head of public policy, provided this statement: "Our Rules clearly stipulate that we do not tolerate hateful conduct and abuse on Twitter. As well as taking action on accounts when they're reported to us by users, we've significantly expanded the scale of our efforts across a number of key areas. From introducing a range of brand new tools to combat abuse, to expanding and retraining our support teams, we're moving at pace and tracking our progress in real-time. We're also investing heavily in our technology in order to remove accounts who deliberately misuse our platform for the sole purpose of abusing or harassing others. It's important to note this is an ongoing process as we listen to the direct feedback of our users and move quickly in the pursuit of our mission to improve Twitter for everyone."

The committee says it hopes the report will inform the early decisions of the next government, with the UK general election due to take place on June 8, and feed into immediate work by the three social platforms to be more proactive about tackling extremist content.

Commenting on the publication of the report yesterday, Home Secretary Amber Rudd told the BBC she expected to see early and effective action from the tech giants.

Read more: https://techcrunch.com/2017/05/02/social-media-firms-should-face-fines-for-hate-speech-failures-urge-uk-mps/