Facebook's chief artificial intelligence scientist says the platform is years away from using AI video regulation tech to screen for violent content.
After receiving harsh criticism for how it handled the Christchurch shooter — who live-streamed the attack on Facebook — Facebook has been doubling down on content regulation. Just recently, Facebook revealed its one-strike policy for live streams that violate its policies. Regardless, according to an article from Bloomberg, it seems Facebook will fall behind the likes of YouTube in AI video regulation.
“This problem is very far from being solved,” Yann LeCun said Friday at Facebook’s AI Research Lab in Paris.
Why it takes so long to develop an AI video regulation system
In the live stream, LeCun said the system needs to be trained on both picture and sound, and that information about the individuals posting the video and its content needs to be incorporated as well. Ultimately, LeCun said there isn’t enough data yet to train an AI to detect such videos effectively. “Thankfully, we don’t have a lot of examples of real people shooting other people,” LeCun said.
The issue lies in training the AI to recognize real life. LeCun did mention they could use movie violence to help train the software. However, that causes problems of its own: the AI may have trouble differentiating between real violence and movie violence, and could end up blocking movie clips that are allowed on the platform.
According to Facebook’s vice president of AI, Jerome Pesenti, Facebook plans to use both human reviewers and an AI review system as soon as possible. He went on to say that if the AI system sees a video it isn’t confident classifying as prohibited, a human reviewer would look at it and determine whether the video violates policy.
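The hybrid workflow Pesenti describes amounts to routing on the classifier's confidence. Here is a minimal sketch of that idea in Python — the thresholds, function name, and score scale are illustrative assumptions, not details of Facebook's actual system:

```python
# Hypothetical sketch of a hybrid AI + human review pipeline.
# Thresholds and naming are assumptions for illustration only.

BLOCK_THRESHOLD = 0.95   # assumed: above this, the AI acts on its own
REVIEW_THRESHOLD = 0.50  # assumed: below this, the video is treated as likely fine

def route_video(prohibited_score: float) -> str:
    """Decide what happens to a video given the AI's confidence
    (0.0 to 1.0) that it contains prohibited content."""
    if prohibited_score >= BLOCK_THRESHOLD:
        return "block"         # AI is confident: remove automatically
    if prohibited_score >= REVIEW_THRESHOLD:
        return "human_review"  # uncertain: escalate to a human reviewer
    return "allow"             # AI is confident the video is fine

print(route_video(0.99))  # block
print(route_video(0.70))  # human_review
print(route_video(0.10))  # allow
```

The middle band is the key design choice: rather than forcing a binary decision, uncertain cases are handed to humans, which is the behavior Pesenti outlines.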
Facebook reports it’s been able to automatically detect and block 99% of content linked to terrorist group al-Qaeda. However, LeCun said it’s a “very hard problem” to detect and block all extremist content from anywhere in the world.
Image courtesy Jenny Kane/AP