Tech companies scramble to remove New Zealand shooting video
By KELVIN CHAN, Associated Press
Mar 15, 2019 3:53 PM CDT
In this frame from video that was livestreamed Friday, March 15, 2019, a gunman who used the name Brenton Tarrant on social media reaches for a gun in the back of his car before the mosque shootings in Christchurch, New Zealand. (Shooter's Video via AP)

LONDON (AP) — Internet companies scrambled Friday to remove graphic video filmed by a gunman in the New Zealand mosque shootings that was widely available on social media for hours after the horrific attack.

Facebook said it took down a livestream of the shootings and removed the shooter's Facebook and Instagram accounts after being alerted by police. At least 49 people were killed at two mosques in Christchurch, New Zealand's third-largest city.

Using what appeared to be a helmet-mounted camera, the gunman livestreamed in horrifying detail 17 minutes of the attack on worshippers at the Al Noor Mosque, where at least 41 people died. Several more worshippers were killed at a second mosque a short time later.

The shooter also left a 74-page manifesto that he posted on social media under the name Brenton Tarrant, identifying himself as a 28-year-old Australian and white nationalist who was out to avenge attacks in Europe perpetrated by Muslims.

"Our hearts go out to the victims, their families and the community affected by this horrendous act," Facebook New Zealand spokeswoman Mia Garlick said in a statement.

Facebook is "removing any praise or support for the crime and the shooter or shooters as soon as we're aware," she said. "We will continue working directly with New Zealand Police as their response and investigation continues."

Twitter, YouTube owner Google and Reddit also were working to remove the footage from their sites.

The furor highlights once again the speed at which graphic and disturbing content from a tragedy can spread around the world and how Silicon Valley tech giants are still grappling with how to prevent that from happening.

British tabloid newspapers such as The Daily Mail and The Sun posted screenshots and video snippets on their websites.

One journalist tweeted that several people sent her the video via the Facebook-owned WhatsApp messaging app.

New Zealand police urged people not to share the footage, and many internet users called for tech companies and news sites to take the material down.

Some people expressed outrage on Twitter that the videos were still circulating hours after the attack.

"Google is actively inciting violence," tweeted British journalist Carole Cadwalladr with a screen grab of search results of the video.

The video's spread underscores the challenge for Facebook even after stepping up efforts to keep inappropriate and violent content off its platform. In 2017 it said it would hire 3,000 people to review videos and other posts, on top of the 4,500 people Facebook already tasks with identifying criminal and other questionable material for removal.

But that's just a drop in the bucket of what is needed to police the social media platform, said Siva Vaidhyanathan, author of "Antisocial Media: How Facebook Disconnects Us and Undermines Democracy."

If Facebook wanted to monitor every livestream to prevent disturbing content from making it out in the first place, "they would have to hire millions of people," something it's not willing to do, said Vaidhyanathan, who teaches media studies at the University of Virginia.

"We have certain companies that have built systems that have inadvertently served the cause of violent hatred around the world," Vaidhyanathan said.

Facebook and YouTube were designed to share pictures of babies, puppies and other wholesome things, he said, "but they were expanded at such a scale and built with no safeguards such that they were easy to hijack by the worst elements of humanity."

With billions of users, Facebook and YouTube are "ungovernable" at this point, said Vaidhyanathan, who called Facebook's livestreaming service a "profoundly stupid idea."

In footage that at times resembled scenes from a first-person shooter video game, the mosque shooter was seen spraying terrified worshippers with bullets, sometimes firing again at people he had already cut down.

He then walked outside, shooting at people on a sidewalk. Children's screams could be heard in the distance as he strode to his car to get another rifle, then returned to the mosque, where at least two dozen people could be seen lying in pools of blood.

He walked back outside, shot a woman, got back in his car, and drove away.

The livestream video was reminiscent of violent first-person shooter video games such as "Counter-Strike" or "Doom" as the gunman went around corners and calmly entered rooms firing at helpless victims. Many shooting games allow players to toggle between close-range and long-range weapons, and the gunman switched from a shotgun to a rifle during the video, reloading as he moved around.

At one point, the shooter even paused to give a shout-out to one of YouTube's top personalities, known as PewDiePie, who has tens of millions of followers and has made jokes criticized as anti-Semitic and posted Nazi imagery in his videos.

"Remember, lads, subscribe to PewDiePie," the gunman said.

The seemingly incongruous reference to the Swedish vlogger known for his video game commentaries as well as his racist references was instantly recognizable to many of his 86 million followers.

The YouTube sensation has been engaged in an online battle over which channel has the most subscribers, and his followers have taken to posting messages encouraging others to "subscribe to PewDiePie."

PewDiePie, whose real name is Felix Kjellberg, said on Twitter he felt "absolutely sickened" that the alleged gunman referred to him during the livestream. "My heart and thoughts go out to the victims, families and everyone affected," he said.

The hours it took to take the violent video and manifesto down are "another major black eye" for social media platforms, said Dan Ives, managing director of Wedbush Securities.

The rampage's broadcast "highlights the urgent need for media platforms such as Facebook and Twitter to use more artificial intelligence as well as security teams to spot these events before it's too late," Ives said.

Hours after the shooting, Reddit took down two subreddits known for sharing video and pictures of people being killed or injured — R/WatchPeopleDie and R/Gore — apparently because users were sharing the mosque attack video.

"We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit," it said in a statement. "Subreddits that fail to adhere to those site-wide rules will be banned."

Videos and posts that glorify violence are against Facebook's rules, but Facebook has drawn criticism for responding slowly to such items, including video of a slaying in Cleveland and a live-streamed killing of a baby in Thailand. The latter was up for 24 hours before it was removed.

In most cases, such material gets reviewed for possible removal only if users complain. News reports and posts that condemn violence are allowed. This makes for a tricky balancing act for the company. Facebook says it does not want to act as a censor, as videos of violence, such as those documenting police brutality or the horrors of war, can serve an important purpose.

___

Associated Press writers Danica Kirka in London, Nick Perry in Wellington, Mark Baker in Christchurch and Mae Anderson in New York contributed to this report.
