Several Palestinian advocacy groups are demanding action from Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp, over long-standing content moderation practices that they argue unfairly curtail Palestinian speech, The Hill reported.
The call to action comes in the aftermath of the outbreak of the Israel-Hamas war on Oct. 7.
The petition, "Meta: Let Palestine Speak," alleges that the technology giant unjustly removes content, imposes suspensions, or "shadow bans" Palestinian accounts, while failing to address what the groups describe as "incendiary Hebrew-language content."
Nadim Nashif, executive director and co-founder of the Palestinian digital rights group 7amleh — The Arab Center for the Advancement of Social Media, stated that complaints about Meta's content moderation policies date back several years. 7amleh, in collaboration with the digital rights advocacy group Fight for the Future, is spearheading the Meta petition.
After similar accusations emerged following an earlier outbreak of violence in May 2021, Meta commissioned an independent due diligence report, which found that the company's actions during that period had an adverse human rights impact on Palestinian users. Meta agreed to implement the report's recommendations, including developing "classifiers" to detect Hebrew-language "hostile speech."
However, following the Oct. 7 attack by Hamas and Israel's subsequent actions, Nashif claims that Meta's new Hebrew-language classifiers are falling short. Meta has internally acknowledged these shortcomings, stating that the classifiers were not applied to Instagram comments because of insufficient data, The Wall Street Journal reported.
In response to a surge in hateful content, Meta reportedly lowered the threshold at which its automated system hides potentially policy-violating comments. Nashif contends that this amounts to a "very aggressive content moderation approach" that produces numerous "false positives" and removes content that should remain.
Jillian York, director of international freedom of expression at the Electronic Frontier Foundation (EFF), emphasized the need for social media companies to be more transparent about content moderation. She called for culturally specific moderation, noting that Meta has not ensured cultural relevance in its moderation practices.
"Basically, tell us what you're taking down and tell us why you're taking it down, who asked you to take it down and why," York said.
Recent incidents involving AI-powered tools on Meta's platforms have further highlighted concerns about bias. Nashif pointed to instances in which Instagram's auto-translation inserted the word "terrorist" into Palestinian users' bios, and WhatsApp's AI image generator produced images containing guns in response to prompts such as "Palestinian."
Meta responded to the concerns, pointing to an October statement reaffirming its commitment to keeping people safe on its apps while denying deliberate suppression of voices.
"Our policies are designed to keep people safe on our apps while giving everyone a voice," the company said. "We apply these policies equally around the world, and there is no truth to the suggestion that we are deliberately suppressing voice."
Jim Thomas
Jim Thomas is a writer based in Indiana. He holds a bachelor's degree in political science and a law degree from U.I.C. Law School, and he has practiced law for more than 20 years.