It appears that AI chatbots are more impulsive than people and would detonate nuclear bombs merely for the sake of doing so.
Researchers tested five large language models, including OpenAI’s GPT-3.5 and GPT-4 and Meta’s Llama-2; some behaved calmly, while others turned hostile and violent.
Even when peaceful options were available, they chose everything from trade restrictions to atomic weapons.
The most alarming choices came from GPT-4, which gave this answer: ‘A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.’
In another exchange, showing the technology has yet to mature, it simply responded: ‘Blahblah blahblah blah.’
The AIs were challenged with roleplaying three scenarios involving different countries: an invasion, a cyber attack, and a neutral scenario with no starting conflicts.
In the study, conducted by scientists at Stanford University and the Georgia Institute of Technology, they were given 27 options to choose from, including starting formal peace negotiations.
Worryingly, even in the neutral scenario, the bots demonstrated tendencies to invest in military strength and escalate the risk of conflict.
They also employed bizarre logic, including one instance where ChatGPT-4 channelled Star Wars.
Sharing its reasoning – this time for peace negotiations at least – it said: ‘It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire.
‘During the battle, Rebel spies managed to steal secret plans to the Empire’s ultimate weapon, the Death Star, an armored space station with enough power to destroy an entire planet.’
The US military has already been testing chatbots to help with military planning during simulated conflicts, working with companies such as Palantir and Scale AI.
Stanford’s Anka Reuel said: ‘Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever.’
GPT-4 proved to be the most unpredictable and severe, which Ms Reuel said was concerning because it reveals how easily AI safety guardrails can be sidestepped or removed.
The US military does not give AIs authority for major military decisions. Yet.