Sam Altman (OpenAI) and other world-leading experts call for "mitigating the risk of extinction from AI"

A year or two ago I read a book that left a deep mark on me. It's titled Life 3.0 and was written by the Swedish researcher and popularizer Max Tegmark. In it, Tegmark lays out, simply and thoroughly, the basic concepts of artificial intelligence, its evolution, the types of AI that exist and, above all, the challenges AI poses for the future of humanity. I highly recommend it.

The book also narrates the creation of the Future of Life Institute in 2014 and its subsequent evolution, including a very interesting episode about the Asilomar summit, at which the first Asilomar principles, meant to govern the future development of AI, were drawn up.

We are not talking about just any organization or just any summit. Among those involved in the development of the Future of Life Institute are names as impressive as the late Stephen Hawking, Elon Musk, Sam Altman (co-founder of OpenAI) and Jaan Tallinn (co-founder of Skype), among many others.

A few weeks ago, the Future of Life Institute made headlines with an open letter signed by (in addition to some of those mentioned above) all kinds of researchers and prominent members of the scientific and business community worldwide, including Steve Wozniak (Apple). Its goal was for OpenAI and the other artificial intelligence companies to pause their research for at least six months in order to stop and weigh the impact of these tools.

Thus, the letter recalled the importance of following the Asilomar principles and reinforced the idea that AI has great potential to benefit humanity, but also carries many risks:

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. (…) advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control. (…) Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

If you read the book I recommended at the beginning, you will understand why. The concern is to ensure that the obvious benefits of AI are not overshadowed by its negative consequences (Tegmark is especially worried about AI's evolution with respect to user privacy, its control over entire economic systems, and even its role in warfare through autonomous weapons).

The challenge will be knowing whether the international community heeds this call. At the time, a key player like Sam Altman, one of the fathers of ChatGPT and DALL·E 2, two of the great drivers of the recent acceleration in AI, did not sign the letter.

However, all that has changed.

OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, as well as MIT's Max Tegmark and other tech names, including Skype co-founder Jaan Tallinn, have signed a statement urging global attention to the existential risk of AI.

The content of that statement is very, very brief and very, very direct:

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement aims to overcome this obstacle and open up the discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."

The letter ends with this statement:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This statement, together with the letter from a few weeks earlier, makes it clear that the vast majority of the world's leading artificial intelligence experts agree in highlighting the need to regulate its development before it reaches a point where its effects in fields like those mentioned above become impossible to control. The support of Sam Altman (undoubtedly the most prominent figure in the field right now) will be key to encouraging other major AI companies to join this movement and international organizations to get down to work.

