
OpenAI Palace Intrigue, Act Two: Core Safety Team Disbanded, Its Leader Reveals Why He Quit

cls.cn ·  May 19 12:34

① The head of the core safety team revealed that his team was not allocated enough resources and that safety is not Altman's core priority; ② Altman says he will publish a longer post within the next few days; ③ Altman has also been criticized for "gagging" departing employees.

"Science and Technology Innovation Board Daily", May 19 (Editor Song Ziqiao). Act two of the OpenAI palace-intrigue drama has opened.

When Ilya Sutskever left, the farewell was polite. Sutskever wrote, "I believe that under the leadership of Altman and others, OpenAI will build AGI that is both safe and beneficial," and Altman responded warmly: "It makes me sad... He has something personally meaningful to work on, and I am forever grateful for everything he has done here."

Not so with Jan Leike, head of the Superalignment team and the latest OpenAI executive to depart. He openly broke ranks and laid out the inside story behind the departure of co-founder and chief scientist Sutskever: Altman withheld resources and pushed commercialization with little regard for safety.

Leike, who co-led the Superalignment team with Sutskever, belongs to the "conservative" faction. Just a few hours after Sutskever announced his departure, he posted: "I resigned."

The day after his departure was officially announced, Leike published more than a dozen posts publicly explaining why he left:

The team was not allocated enough resources: for the past few months the Superalignment team has been "sailing against the wind," struggling to get compute and finding it harder and harder to get its research done;

Safety is not Altman's core priority: over the past few years, safety culture and processes have taken a back seat to shiny products. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

This is the first time an OpenAI executive-level figure has publicly acknowledged that the company values product development over safety.

Altman's response remained gracious. He said he was very grateful for Jan Leike's contributions to OpenAI's alignment research and safety culture and was sorry to see him go. "He's right, we still have a lot of work to do, and we are committed to doing it. I'll be posting a longer piece in the next few days."

▍ Investing in the future? No, Altman chose to seize the present

OpenAI's Superalignment team was formed in July 2023 and was led by Leike and Sutskever. Its members included scientists and engineers from OpenAI's previous alignment team as well as researchers from other institutions, a thoroughly technical lineup. The team's mission was to keep AI aligned with the goals of its creators and to solve the core technical challenges of controlling superintelligent AI in the future, "to solve the different kinds of safety problems that will actually arise if the company succeeds in building AGI."

In other words, the Superalignment team was building a safety shield for future, more powerful AI models, not for today's models.

As Leike said, "I believe we should put far more effort into preparing for the next generations of models, focusing on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are hard to get right, and I am concerned we are not on a trajectory to get there."

Leike also pointed out that although superintelligent AI seems far off today, it could arrive within this decade. Managing these risks will require new governance institutions and alignment techniques to ensure that superintelligent AI systems follow human intent.

From Altman's standpoint, maintaining the Superalignment team was a huge expense. OpenAI had promised the team 20% of the company's computing resources, which would inevitably eat into the resources available for new product development; investing in the Superalignment team was, in effect, investing in the future.

According to a source on the OpenAI Superalignment team, the promised 20% of compute was never fully delivered, and requests for even a small fraction of it were often denied, which hampered the team's work.

▍ What happens now that the Superalignment team has disbanded?

According to Wired and other media reports, OpenAI's Superalignment team was disbanded after Sutskever and Leike left in quick succession. It is no longer a dedicated team but a loose research group spread across departments throughout the company. An OpenAI spokesperson described it as being "integrated more deeply" into the organization.

There are reports that John Schulman, another OpenAI co-founder, may take over research on the risks associated with more powerful models.

This inevitably raises a question: Schulman's current job is ensuring the safety of OpenAI's existing products; will he have the bandwidth to lead a future-oriented safety effort as well?

Another point of criticism against Altman is the "gagging" of departing employees.

Since November of last year, at least seven safety-conscious OpenAI employees have resigned or been fired, yet most of these former employees are unwilling to talk about it publicly.

Part of the reason, according to Vox, is that OpenAI has departing employees sign agreements containing non-disparagement clauses. Those who refuse to sign lose their previously granted OpenAI equity, which means that employees who speak out could forfeit an enormous amount of money.

A former OpenAI employee revealed that the company's onboarding documents include a clause: "Within 60 days of leaving the company, you must sign a separation document containing a 'general release.' If you do not complete it within 60 days, your equity benefits will be forfeited."

Altman indirectly acknowledged the matter. He responded that the company's separation documents did contain a clause about potential cancellation of former employees' equity, but that it had never been enforced: "we have never clawed back anyone's vested equity." He also said the company is revising the terms of its separation agreement, and that "if any former employee who signed one of these old agreements is worried about it, they can contact me."

This brings us back to the root of the dispute among OpenAI's senior executives, which is also the core conflict in this protracted war: the "conservatives" insist that safety must come before products, while the "commercialization camp" wants products monetized as soon as possible.

How will OpenAI balance product safety against commercialization? And how will it repair an increasingly battered public image?

All that remains is to wait for the "little essay" Altman has promised in the next few days.


