In an apparent effort to ensure their heinous actions would “go viral,” a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus.

The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies’ Sisyphean struggle to police violent content posted on their platforms.

“It becomes essentially like a game of whack-a-mole,” says Tony Lemieux, professor of global studies and communication at Georgia State University.

Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there’s content recognition technology, which uses artificial intelligence to compare newly uploaded footage to known illicit material. “Once you know something is prohibited content, that’s where the technology kicks in,” says Lemieux.

Second, social media companies augment their AI technology with thousands of human moderators who manually check videos and other content.
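The content recognition approach described above can be sketched in its simplest form: fingerprint each upload and check it against a database of known prohibited material. This is a minimal illustration only — the function and database names are hypothetical, and real systems use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate re-encoding and cropping, not an exact cryptographic hash like the one shown here.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited content.
# In practice this would hold perceptual hashes shared across platforms.
known_hashes = {
    hashlib.sha256(b"known-prohibited-video-bytes").hexdigest(),
}

def is_known_prohibited(upload_bytes: bytes) -> bool:
    """Return True if the upload exactly matches known prohibited content.

    Note: an exact hash only catches byte-identical copies -- this is why
    re-encoded or edited re-uploads slip through and the process becomes,
    as Lemieux puts it, a game of whack-a-mole.
    """
    return hashlib.sha256(upload_bytes).hexdigest() in known_hashes

print(is_known_prohibited(b"known-prohibited-video-bytes"))  # True
print(is_known_prohibited(b"re-encoded copy of the video"))  # False
```

The second `False` result shows the core weakness: any modification to the file defeats exact matching, which is why platforms layer perceptual hashing and human moderators on top.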