"When they tested LLMs from OpenAI, Anthropic and Meta in situations like simulated war games, the pair found the AIs suggested escalation, arms races, conflict — and even use of nuclear weapons — over alternatives."
"It is practically impossible for an LLM to be taught solely on vetted high-quality data," Schneider and Lamparth write.