Elon Musk and other tech leaders call for pause on 'dangerous race' to make A.I. as advanced as humans
Published Wed, Mar 29 2023 8:23 AM EDT | Updated Wed, Mar 29 2023 11:30 AM EDT
Ryan Browne @RYAN_BROWNE_
KEY POINTS
- Artificial intelligence labs have been urged by Elon Musk and numerous other tech industry figures to stop training AI systems more powerful than GPT-4, OpenAI's latest large language model.
- In an open letter signed by Musk and Apple co-founder Steve Wozniak, technology leaders called for a six-month pause in the development of such advanced AI, saying it poses a risk to society.
- Musk, who is one of OpenAI's co-founders, has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.
Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence.
In an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak and 2020 presidential candidate Andrew Yang, AI labs were urged to cease training models more powerful than GPT-4, the latest version of the large language model software developed by U.S. startup OpenAI.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" the letter read.
"Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
The letter added, "Such decisions must not be delegated to unelected tech leaders."
The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn.
The organization has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems.
The institute said it was calling on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
GPT-4, which was released earlier this month, is thought to be far more advanced than its predecessor GPT-3.
ChatGPT, the viral AI chatbot, has stunned researchers with its ability to produce humanlike responses to user prompts. By January, only two months after its launch, ChatGPT had amassed 100 million monthly active users, making it the fastest-growing consumer application in history.
The technology is trained on huge amounts of data from the internet, and has been used for everything from writing poetry in the style of William Shakespeare to drafting legal opinions on court cases.
But AI ethicists have also raised concerns with potential abuses of the technology, such as plagiarism and misinformation.
In the Future of Life Institute letter, technology leaders and academics said AI systems with human-competitive intelligence pose "profound risks to society and humanity."
"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," they said.
OpenAI was not immediately available for comment when contacted by CNBC.
OpenAI reportedly received a $10 billion investment from Microsoft, the Redmond, Washington-based technology giant. Microsoft has also integrated the company's GPT natural language processing technology into its Bing search engine to make it more conversational.
Google subsequently announced its own competing conversational AI product for consumers, called Google Bard.
Musk has previously said he thinks AI represents one of the "biggest risks" to civilization.
The Tesla and SpaceX CEO co-founded OpenAI in 2015 with Sam Altman and others, though he left OpenAI's board in 2018 and no longer holds a stake in the company.
He has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.
Regulators are also racing to get a handle on AI tools as the technology is advancing at a rapid pace. On Wednesday, the U.K. government published a white paper on AI, deferring to different regulators to supervise the use of AI tools in their respective sectors by applying existing laws.