
AI Safety Expert Warns That It Can’t Be Controlled, Could Lead To Humanity’s ‘Extinction’


Numerous very smart people with a deep knowledge of how dangerous artificial intelligence (AI) can be have issued warnings about its use over the past several years.

The latest to do so is Dr. Roman V. Yampolskiy, AI safety expert and associate professor at the University of Louisville.

Yampolskiy states in a study recently published in the book AI: Unexplainable, Unpredictable, Uncontrollable that there is currently no evidence that artificial intelligence can be controlled safely.

“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” Dr. Yampolskiy said in a statement. “No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

He believes that until we have concrete proof that AI can be controlled, it should not be developed.

“Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable,” said Dr. Yampolskiy.

“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort.”

He argues that our ability to produce intelligent software far outstrips our ability to control or even verify it. After a comprehensive literature review, he suggests that advanced intelligent systems can never be fully controllable and so will always present a certain level of risk regardless of the benefit they provide. He believes it should be the goal of the AI community to minimize such risk while maximizing potential benefit.

Dr. Yampolskiy warns, “If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.”

As the capability of AI increases, its autonomy also increases but our control over it decreases, Yampolskiy explains, and increased autonomy is synonymous with decreased safety.

“Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs),” he said. “This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist. Superintelligence is not rebelling, it is uncontrollable to begin with,” he explains.

“Humanity is facing a choice, do we become like babies, taken care of but not in control or do we reject having a helpful guardian but remain in charge and free.”

