Concern about advances in artificial intelligence has grown in recent months, driven by the technology's rapid development and its application across a wide range of fields of work.
Against this backdrop, Elon Musk, CEO of Tesla and SpaceX and owner of Twitter, along with Apple co-founder Steve Wozniak and other prominent names in the digital sector, have voiced their concern about the rapid development of AI language models such as GPT-4, and how this could have negative consequences for society, since many jobs could disappear and the level of misinformation could increase.
They are requesting a pause of at least six months to refine and establish safety protocols.
These figures made their case in an open letter published through the Future of Life Institute, a non-profit organization that advocates for regulation of AI. Its goal is for OpenAI and other artificial intelligence companies to pause their research for at least six months in order to step back and consider the impact these tools are having.
The request stems from concern about AI systems with human-competitive abilities: the possible risks they could pose to society and humanity, and the belief that they are being created without proper planning or oversight.
“We call on all AI labs to immediately pause training on AI systems more powerful than GPT-4 for at least 6 months. This pause must be public and verifiable, and include all key stakeholders. If such a pause cannot be quickly enacted, governments should step in and institute a moratorium,” the petition reads.
The petition states that AI labs should use the pause to develop and implement new safety protocols, designing AI with greater oversight. Such protocols should ensure that the systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause in AI development in general, only a slowdown in the development of ever more powerful capabilities.
Regulation of artificial intelligence
The petition, which already has almost 1,400 signatures, states that AI developers should work with lawmakers to accelerate the development of governance systems to regulate and monitor artificial intelligence systems with high computational capacity.
The letter suggests that such governance should include dedicated regulatory authorities, provenance and watermarking systems, an auditing and certification ecosystem, liability for AI-caused harm, public funding for technical AI safety research, and institutions to cope with the economic and political disruption AI may cause.
“At some point, it may be important to get independent review before starting to train future systems, and for more advanced efforts to agree to limit the growth rate of the computation used to create new models,” the open letter states.
Who has supported the letter
The open letter has the support of several influential figures in the technology field. Among them are Apple co-founder Steve Wozniak; Elon Musk, CEO of Tesla and SpaceX and owner of Twitter; Skype co-founder Jaan Tallinn; Stability AI CEO Emad Mostaque; Pinterest co-founder Evan Sharp; and Getty Images CEO Craig Peters.