By Michelle Castillo

A livestream of one of the most tragic events in New Zealand history is highlighting the difficulties Facebook and other tech companies face in monitoring content.

On Friday New Zealand local time, an unidentified man in his late 20s killed at least 49 people and seriously injured at least 20 others during two orchestrated shootings at Christchurch mosques. The alleged perpetrator livestreamed the massacre on Facebook for 17 minutes before the content was flagged by authorities.

“Our hearts go out to the victims, their families and the community affected by this horrendous act,” Facebook New Zealand spokeswoman Mia Garlick said in a statement. “Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We're also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continues.”

This issue, however, goes beyond stopping a livestream of a crime in action. Social media posts are published so rapidly and frequently that it’s currently impossible to monitor them all in real time. About 500 million tweets are sent per day. As of July 2015, 400 hours of video were being uploaded to YouTube every minute. It’s impossible to predict which questionable content is about to lead to actual violence.

Warning signs

The Christchurch shooter left a trail of warning signs online well before the event. Days before the attack, a person alleged to be the shooter tweeted photos of weapons inscribed with the names of victims of terrorist attacks.

Right before the shooting, the same individual posted on the message board 8chan: “Well lads, it’s time to stop sh--posting and time to make a real life effort post. I will carry out and attack against the invaders, and will even live stream (sic) the attack via facebook.” The post included a link to his Facebook page, as well as links to a detailed anti-Muslim, pro-nationalist manifesto and a call to spread his messages through memes and social posts online. (Cheddar is electing not to post links to the content in order to limit its dissemination.)

Even with detailed descriptions of what was about to occur, social platforms were unable to identify that an act of violence was imminent, locate the person orchestrating it, or alert authorities in time to stop the shooting. The shooter's livestream, if tracked in real time, would have made it easy for authorities to find him: in the video, a navigation device can be heard announcing his exact location.

After the shooting, social platforms scrambled to delete copies of the video as they quickly spread online.

While Facebook and Twitter were quick to identify and delete accounts connected to the suspect after the shooting started, they couldn’t stop people who reposted or re-uploaded the livestream and manifesto. Fake accounts using the alleged suspect's name have also sprung up. And screenshots and links made it easy to share the disturbing materials on other platforms like YouTube and Reddit.

The Facebook spokeswoman stressed that the company is taking urgent action to remove the content as quickly as it is found.

“Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement," Facebook's Garlick said in a statement. "We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again.”

Though the tech companies continue to actively search for this material on their platforms, video recognition tools are not advanced enough to identify the same video 100 percent of the time. For example, it is difficult for the technology to recognize a clip of a video that has been recorded off another screen. Platforms also can't remove videos that are hosted on separate sites. In short, their current technology cannot reliably block violent material that violates their policies from being uploaded.
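Matching systems of the kind Facebook describes generally work by fingerprinting known videos and comparing new uploads against that database. The short Python sketch below is purely illustrative, not Facebook's actual system, and its function names are hypothetical; it contrasts an exact hash with a toy perceptual hash to show why a mild change can still be caught while a heavily distorted copy, such as one filmed off another screen, can fall outside the matching threshold.

import hashlib

def exact_fingerprint(video_bytes: bytes) -> str:
    # Byte-level hash: any re-encode, crop, or screen recording changes it entirely.
    return hashlib.sha256(video_bytes).hexdigest()

def perceptual_hash(frame):
    # Toy "average hash" over an 8x8 grayscale frame: one bit per pixel,
    # set when that pixel is brighter than the frame's mean brightness.
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    # Number of differing bits between two hashes; a small distance suggests a copy.
    return bin(a ^ b).count("1")

# A known banned frame and a lightly altered copy (e.g., brightened on re-upload).
original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
altered = [[p + 40 for p in row] for row in original]

distance = hamming_distance(perceptual_hash(original), perceptual_hash(altered))
print("hamming distance:", distance)  # 0 here: a mild brightness shift is still matched
# Heavier distortion -- cropping, overlays, filming a screen at an angle --
# pushes the distance past any practical threshold, and the copy slips through.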

What’s left is to rely on human moderators and ordinary users to flag questionable content. Facebook announced it would add 20,000 people to its safety and security teams, but that is still not enough to watch the deluge of posts published every day across its family of apps. Google said it would employ more than 10,000 workers to review questionable content, but YouTube recently had to limit its moderators to four hours per day because of the psychological and emotional toll of screening disturbing videos.

Almost a day after the shooting, the video could still be found online.