Communicators struggle to contain video of mass shooting in New Zealand
Attacks by gunmen on two mosques were streamed live, raising questions about tech companies’ responsibility in curbing violent videos. Experts urge users not to share the footage.
This article originally ran in 2019 and is part of our annual countdown of the most-viewed stories from PR Daily.
Tech companies seem to have few answers for censoring violent speech online.
The difficulty of moderating content on social media platforms was underscored by a tragic shooting in New Zealand, where gunmen attacked two mosques in Christchurch, killing dozens of people.
One of the attackers appears to have livestreamed his actions on Facebook, forcing the company to answer for its role in the tragedy.
“New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said in a statement.
Hours after the attack, however, copies of the gruesome video continued to appear on Facebook, YouTube and Twitter, raising new questions about the companies’ ability to manage harmful content on their platforms.
Facebook is “removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Garlick said.