Facebook CEO Mark Zuckerberg said in his recent testimony before Congress that artificial intelligence is "5 to 10" years away from being able to prevent abuse on the company's platform.
"Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it," he said in testimony, according to The Washington Post.
"So, that's a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community."
He added, "Hate speech — I am optimistic that, over a 5 to 10-year period, we will have A.I. tools that can get into some of the nuances — the linguistic nuances of different types of content to be more accurate in flagging things for our systems. But, today, we're just not there on that."
W2O Group chief innovation officer Bob Pearson told Fox News that A.I. can help flag hate speech and other forms of abuse online, but that it must first learn to recognize bias in language and what harms people.
"All human beings follow patterns online," Pearson said. "You can see what language, content, channel and people matter to them. You can see which words trigger information seeking, which language is most associated with hate topics or sites, which people are the most important influencers and you can see a range of behavioral characteristics."
He added that an even bigger problem than training A.I. has been Facebook's failure to make the fight against hate speech a top priority.
"A media platform can identify bias, hate and extremist speech just as easily as it can identify your needs for advertisers. It is just a matter of focus," he said.
© 2025 Newsmax. All rights reserved.