There is no proof that AI can be controlled, researcher warns
There is no current evidence that AI can be controlled safely, according to an extensive review—and without such proof, AI should not be developed, a researcher warns.
Despite the recognition that the problem of AI control may be one of the most important problems facing humanity, it remains poorly understood, poorly defined, and poorly researched, Dr. Roman V. Yampolskiy explains.
In his book, AI: Unexplainable, Unpredictable, Uncontrollable, AI Safety expert Dr. Yampolskiy looks at the ways that AI has the potential to dramatically reshape society, not always to our advantage.
He explains, “We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”
Dr. Yampolskiy has carried out an extensive review of AI scientific literature and states he has found no proof that AI can be safely controlled—and even if there are some partial controls, they would not be enough.
He explains, “Why do so many researchers assume that the AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.
“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, shows we should be supporting a significant AI safety effort.”
He argues our ability to produce intelligent software far outstrips our ability to control or even verify it. After a comprehensive literature review, he suggests advanced intelligent systems can never be fully controllable and so will always present a certain level of risk regardless of the benefit they provide. He believes it should be the goal of the AI community to minimize such risk while maximizing potential benefit.
What are the obstacles?
AI (and superintelligence) differs from other programs in its ability to learn new behaviors, adjust its performance, and act semi-autonomously in novel situations.
One issue with making AI ‘safe’ is that the possible decisions and failures of a superintelligent being, as it becomes more capable, are infinite, so there are an infinite number of safety issues. Simply predicting the issues may not be possible, and mitigating them with security patches may not be enough.
At the same time, Yampolskiy explains, AI cannot explain what it has decided, and/or we cannot understand the explanation given, as humans are not smart enough to understand the concepts implemented. If we do not understand AI’s decisions and have only a “black box,” we cannot understand the problem or reduce the likelihood of future accidents.
For example, AI systems are already being tasked with making decisions in health care, investing, employment, banking and security, to name a few. Such systems should be able to explain how they arrived at their decisions, particularly to show that they are bias-free.
Yampolskiy says, “If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.”