AI hits trust hurdles with U.S. military

iancmceachern | 2 points | 15 days ago

"When they tested LLMs from OpenAI, Anthropic and Meta in situations like simulated war games, the pair found the AIs suggested escalation, arms races, conflict — and even use of nuclear weapons — over alternatives."

"It is practically impossible for an LLM to be taught solely on vetted high-quality data," Schneider and Lamparth write.