
What’s especially challenging about the Christchurch video is that the attack wasn’t recorded and uploaded later, but livestreamed in real time as it unfolded. On platforms like Facebook Live, YouTube Live and the Twitter-owned Periscope, all of which give users the ability to go live anywhere, at any time, rapid content moderation is a nearly impossible task. And with current AI technology, it’s all but impossible to detect a violent scene as it is being live-streamed - and to quickly take down that stream while it’s still happening.

“We quickly removed both the shooter’s Facebook and Instagram accounts and the video,” a Facebook spokesperson said. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.” “Please know we are working vigilantly to remove any violent footage,” YouTube said in a statement.

Experts say the Christchurch video highlights a fatal flaw in social media companies’ approach to content moderation. “It’s very hard to prevent a newly-recorded violent video from being uploaded for the very first time,” Peng Dong, the co-founder of content-recognition company ACRCloud, tells TIME.

The way most content-recognition technology works, he explains, is based on a “fingerprinting” model. A social media company looking to prevent a video from being uploaded at all must first add a copy of that video to a database, allowing new uploads to be compared against that reference footage. Even when platforms have a reference point - the original offending video - users can manipulate their version of the footage to circumvent upload filters, for example by altering the image or audio quality. The better “fingerprinting” technology gets, the more variants of an offending piece of footage can be detected, but the imperfection of the current systems in part explains why copies of the video were still appearing on sites like YouTube several hours after the initial assault.
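To make that “fingerprinting” model concrete, here is a minimal sketch in Python of how a frame-level perceptual hash and database lookup might work. The average-hash scheme, the function names and the thresholds are illustrative assumptions for this sketch, not a description of how any particular platform actually matches footage; production systems are far more sophisticated.

```python
# A toy illustration of the fingerprinting model described above.
# Frames are represented as 2-D lists of grayscale pixel values; all
# names and thresholds here are assumptions made for this sketch.

def average_hash(frame, size=8):
    """Downscale a grayscale frame to size x size and compare each pixel
    to the mean brightness, yielding a compact 64-bit fingerprint."""
    h, w = len(frame), len(frame[0])
    small = [
        [frame[y * h // size][x * w // size] for x in range(size)]
        for y in range(size)
    ]
    mean = sum(sum(row) for row in small) / (size * size)
    bits = 0
    for row in small:
        for pixel in row:
            bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits

def hamming_distance(a, b):
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def matches_known_video(upload_frames, reference_prints, max_distance=6):
    """Compare an upload's frame fingerprints against a database of
    fingerprints taken from the original offending video. Allowing a
    small Hamming distance tolerates re-encoding or mild quality loss."""
    upload_prints = [average_hash(frame) for frame in upload_frames]
    hits = sum(
        1
        for fp in upload_prints
        if any(hamming_distance(fp, ref) <= max_distance
               for ref in reference_prints)
    )
    # Flag the upload if most of its sampled frames match known footage.
    return len(upload_prints) > 0 and hits >= 0.8 * len(upload_prints)
```

Under a scheme like this, re-encoding or mild compression usually leaves fingerprints close enough to match, but heavier edits to the image or audio push the distance past the threshold, which helps explain how manipulated copies can slip through upload filters.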

Social media companies are also experimenting with machine learning to detect violent footage the first time it is uploaded, the experts say, but the algorithms are not yet advanced enough to reliably take down such footage. One could easily imagine a situation, Lemieux says, where an algorithm confuses footage of a first-person-shooter video game with real-life violent footage, for example.

Facebook, YouTube and Twitter each employ thousands of content moderators around the world, and many have recently promised to take better care of those workers. The job is psychologically grueling, as a recent report from The Verge illustrates, with workers exposed to the most grotesque footage imaginable on a daily basis, for low pay and with minimal mental health support.
