The "Father of ChatGPT" was suddenly dismissed, and the "earthquake in the AI industry" attracted global attention!
[Global Times Special Correspondent Zhen Xiang Global Times Special Correspondents in the United States and Canada Xiao Da Tao Danfang] On the 18th local time, Ott, founder and chief executive officer (CEO) of OpenAI, an artificial intelligence company known as the "father of ChatGPT" Just one day after Mann was fired by the board of directors, news suddenly came out that the company's board of directors was discussing with Altman and planned to ask him to come back as CEO. At present, the final outcome of this "AI earthquake" that has attracted global attention is still unknown, but many media and industry insiders say that behind this fierce battle, it reflects the fierceness of human beings towards different development concepts of artificial intelligence (AI). Conflict and collision.
Is Altman coming back?
On the afternoon of the 18th local time, the American technology news website The Verge reported that the OpenAI board of directors was in talks with Altman about returning to the company as CEO. At about 9 o'clock that night, Altman posted on a social platform, and supporters said they would back him in whatever he does next.
On the 17th local time, Altman had suddenly received a notice from the company to join an online meeting, at which the board informed him that he had been fired by a board vote. The company also issued a statement saying Altman was not consistently candid in his communications with the board and hindered the board's ability to carry out its duties, and that the board no longer had confidence in his ability to continue leading the company.
According to reports, Altman learned only roughly what the meeting would cover about half an hour beforehand. Major investors such as Microsoft were notified only shortly before the online meeting, and some learned the news only after public reports were released. After Altman was fired, chairman Greg Brockman, who supported him, also announced his resignation. The New York Times quoted sources as saying that Altman and Brockman planned to launch a new artificial intelligence venture, and that many investors were already prepared to back it.
CNN said Altman's firing was related to intensifying disagreements within OpenAI over the future development of artificial intelligence. Altman and Brockman belong to the more aggressive camp, advocating all-out efforts to advance AI research, development, and commercialization, whereas co-founder and chief scientist Ilya Sutskever and chief technology officer Mira Murati take a more cautious attitude toward AI development. Sutskever's views are supported by most members of the company's board of directors other than Brockman.
According to reports, major OpenAI investors, including Microsoft, are dissatisfied with the company's dismissal of Altman and hope he will return. For now, however, it remains unknown whether Altman and Brockman will come back, and what roles they would play if they did.
Will artificial intelligence threaten humanity in the future?
"Artificial intelligence may lead to the extinction of mankind, which is no less dangerous than large-scale epidemics and nuclear wars." On May 30 this year, more than 350 international artificial intelligence industry leaders and experts issued a joint statement, saying that the artificial intelligence crisis should be regarded as a global Priorities. Subsequently, Yale University conducted a survey on the future of artificial intelligence among participating business leaders at the National CEO Summit held by Yale University. 58% of people believed that the statement that artificial intelligence could cause disasters was not an exaggeration. Judging from the decision of the OpenAI board of directors to fire Altman, most board members are highly concerned about the possible negative impacts of artificial intelligence. According to CNBC in the United States, the OpenAI board of directors has six members. Except for Chairman Brockman, the other five members are famous experts and scholars. They are the company’s chief scientist Ilya, current Quora CEO D’Angelo, and the RAND Corporation. Management expert Macaulay and AI governance expert Tone.
OpenAI has an unusual governance structure. The company is registered as a non-profit organization, and its board of directors can make decisions independently of investors. The company states clearly on its website that decision-making authority over "artificial general intelligence" (AGI) technology belongs to the OpenAI non-profit and to all humanity. AGI usually refers to artificial intelligence whose capabilities equal or exceed those of humans.
The Washington Post said the battle comes down to a split between two camps: Altman wants to push the rapid development and commercialization of AI technology, while others are increasingly worried about potential safety issues. According to reports, most OpenAI board members prefer to prioritize risk control over rushing to expand the business, but impatient investors have bet on Altman's forthcoming AI projects in order to stay at the forefront of the AI race and profit from it.
On Reddit, a user claiming inside knowledge wrote, "People are worried that in the race to hype ChatGPT, the technology is being rushed to market without sufficient safety review... Altman is charging ahead. His focus seems to be more and more about fame and profit, drifting away from our mission."
Artificial intelligence regulation has a long road ahead
Artificial intelligence regulation is at the frontier of today's technology governance. On November 1, the world's first AI Safety Summit opened in the UK, where 28 participating countries and the European Union jointly signed the "Bletchley Declaration." The declaration holds that deliberate misuse of, or unintended loss of control over, cutting-edge AI technology could create enormous risks, especially in areas such as cybersecurity and biotechnology and in accelerating the spread of disinformation. It stresses that AI risks are international in nature and are "best resolved through international cooperation."
It is worth noting that developed countries are also coordinating AI development and regulatory policies among themselves. At the end of October, the Group of Seven issued the "International Code of Conduct for Organizations Developing Advanced Artificial Intelligence Systems," proposing an AI development framework that includes an 11-point code of conduct.
"Toronto Life" recently interviewed scientist Jeffrey Hinton, known as the "Godfather of Artificial Intelligence." Hinton has recently been calling attention to the risks of artificial intelligence and discussing regulatory issues with leaders of many countries. Some people think that artificial intelligence created by humans cannot "rebel." In this regard, Hinton believes that the information provided to artificial intelligence may evolve unexpected results. "Toronto Life" stated that many people do not agree with Hinton's view and believe that the so-called artificial intelligence will exterminate mankind is unfounded. According to the report, even so, the urgent problems brought about by artificial intelligence require immediate response. For example, the United States plans to develop artificial intelligence weapons by 2030, which will lead to a global artificial intelligence arms race. In addition, how to solve the problem of large-scale population unemployment after the promotion of artificial intelligence technology also requires careful consideration by humans.