As there is no proof that the technology can be managed, a specialist in artificial intelligence (AI) has cautioned against its development.
After an extensive review of the field, Dr Roman Yampolskiy concluded that an AI we cannot control will not always act in our best interests as it reshapes society.
Dr. Yampolskiy stated, “We are facing an almost certain event with the potential to cause an existential catastrophe.”
It makes sense that many people regard this as the most important problem civilization has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.
He said humans’ ability to produce intelligent software far exceeds our ability to control AI – and that no advanced intelligent systems can ever be fully controlled.
‘Why do so many researchers assume that the AI control problem is solvable?’ he said. ‘To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.
‘This, combined with statistics showing that the development of AI superintelligence is an almost guaranteed event, suggests we should be supporting a significant AI safety effort.’
One problem put forward by Dr Yampolskiy is that, as AI becomes more intelligent, there will be an infinite number of safety issues. This will make it impossible to predict them all, and existing guard rails may not be enough.
He added that AI cannot always explain why it has decided something – or humans may not always be able to understand its reasoning – which may make it harder to foresee and prevent future issues.
‘If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,’ said Dr Yampolskiy, who conducted the review for his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
However, one of the most concerning elements of AI is its increasing autonomy. As AI’s ability to think for itself increases, humans’ control over it decreases. So too does safety.
‘Less intelligent agents – people – can’t permanently control more intelligent agents (ASIs),’ said Dr Yampolskiy. ‘This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs; it is because no such design is possible. It doesn’t exist.
‘Superintelligence is not rebelling, it is uncontrollable to begin with.’
To minimise the risks from AI, Dr Yampolskiy said users will need to accept reduced capability, and AI must have built-in ‘undo’ options in easy-to-understand human language.
‘Humanity is facing a choice,’ he said. ‘Do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free?’