In an effort to strengthen its digital technology and AI capabilities, the Pentagon has hired Craig Martell, who previously served as head of machine learning at Lyft and Dropbox and led artificial intelligence (AI) initiatives at LinkedIn.
"It's not my goal to come in here and change the entire culture of the DOD. It's my goal to demonstrate that with the right cultural changes, we can have really big impact," Martell, who will serve as the Pentagon's chief digital and artificial intelligence officer, told Bloomberg News.
"Advances in AI and machine learning are critical to delivering the capabilities we need to address key challenges both today and into the future," said Deputy Secretary of Defense Dr. Kathleen H. Hicks. "With Craig's appointment, we hope to see the department increase the speed at which we develop and field advances in AI, data analytics, and machine-learning technology. He brings cutting-edge industry experience to apply to our unique mission set."
In April, the Brookings Institution stated: "The United States and China are increasingly engaged in a competition over who will dominate the strategic technologies of tomorrow. No technology is as important in that competition as artificial intelligence: Both the United States and China view global leadership in AI as a vital national interest, with China pledging to be the world leader by 2030. As a result, both Beijing and Washington have encouraged massive investment in AI research and development."
Recently, some Pentagon officials who were working on technological modernization resigned amid concerns that the U.S. was falling behind China. Resignations included Nicolas Chaillan, chief software officer for the Air Force; David Spirk, the chief data officer; Preston Dunlap, the Air Force's chief architect; and Jason Weiss, the Defense Department's chief software officer.
Martell's responsibilities will include work on "algorithmic warfare," an under-defined concept that seeks to apply artificial intelligence to combat, Bloomberg noted. Martell said he was attracted to the role because his remit was "responsible AI."
The Pentagon's potential use of AI has previously raised ethical concerns. Some Google employees refused to work on military applications of artificial intelligence over fears the technology could eventually be used for autonomous lethal weapons and targeting.
"For me, whenever there are lives on the line, humans should be in the loop," Martell told Bloomberg, adding that the Pentagon needed robust ethical guidelines for the use of artificial intelligence in warfare and should ensure that machines would be 99.999% correct before any were deployed.
© 2022 Newsmax. All rights reserved.