Last week, China became the first country to mandate that material manipulated with deepfake technology, software that lets people swap faces, voices, and other characteristics to create digital forgeries, be produced only with the subject's consent and carry digital signatures or watermarks.
However, according to The New York Times, enforcing such rules could be difficult for China, or for any other country seeking to ban the creation of artificial intelligence deepfakes, since "the worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms."
Tech experts speculate that Beijing could influence how other governments deal with the machine learning and AI that powers deepfake technology.
"The AI scene is an interesting place for global politics, because countries are competing with one another on who's going to set the tone," Ravit Dotan, a postdoctoral researcher who runs the Collaborative AI Responsibility Lab at the University of Pittsburgh, told the Times. "We know that laws are coming, but we don't know what they are yet, so there's a lot of unpredictability."
From a legal perspective, deepfake opponents fear the technology could be misused to erode trust in surveillance video and body camera footage. Digital forgeries could also be used to discredit police officers or to incite violence against them. The Department of Homeland Security has also "identified risks including cyberbullying, blackmail, stock manipulation and political instability," the Times writes.
The steady proliferation of deepfake videos could lead to a situation where "citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as 'information apocalypse' or 'reality apathy,'" the European law enforcement agency Europol wrote in a report last year.
In 2019, and again in 2021, Rep. Yvette D. Clarke, D-N.Y., proposed a piece of legislation bearing the lengthy title of the "Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act."
Clarke's bill — which would require deepfakes to include watermarks or identifying labels — has yet to reach the stage of a floor vote, but she believes that may change this year.
The watermarks and identifying labels are a "protective measure," Clarke noted. "Many of the sophisticated civil societies recognize how this can be weaponized and destructive."
Clarke also said the U.S. should set higher standards for regulating deepfakes.
"We don't want the Chinese eating our lunch in the tech space at all," Clarke reportedly said. "We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space."