
Hot Review | "Silicon Valley's New Ideology," Effective Altruism! Analyzing the Real "Invisible Hand" Behind OpenAI's Boardroom Drama

cls.cn ·  Nov 25, 2023 14:00

Source: Cailian Press

① Over the past few years, the social-ideological movement known as "effective altruism" has sparked intense disagreement among employees and executives at Silicon Valley's artificial intelligence companies; ② The tit-for-tat confrontation between followers of this ideology and non-followers also appears to be the real "invisible hand" behind OpenAI's "boardroom drama."

Over the past few years, the social-ideological movement known as "effective altruism" has sparked intense disagreement among employees and executives at Silicon Valley's AI companies.

And the tit-for-tat confrontation between followers of this ideology and non-believers also appears to be the real "invisible hand" behind the OpenAI "boardroom drama" that shocked the world over the past week.

How is effective altruism affecting the field of AI?

Effective altruism began in the early 21st century as a set of philosophical ideas about using evidence and reasoning to choose the most effective ways to benefit others, that is, to do the greatest possible good. For effective altruists, the overriding goal is to improve the world, and the more efficiently the better. In practice, this means rationally analyzing and measuring each situation to ensure that the resources at their disposal do the greatest good and, ultimately, improve the world.
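
The "rational analysis and measurement" the movement prescribes usually boils down to comparing options by expected impact per dollar. As a minimal sketch of that reasoning (not from the article; all intervention names and numbers below are hypothetical):

```python
# A toy expected-value comparison in the effective-altruism style:
# estimate the impact bought per dollar for each intervention and
# direct the whole budget to the most cost-effective one.
# All names and numbers here are hypothetical.

interventions = {
    "bed_nets":      {"cost_per_unit": 5.0,   "impact_per_unit": 0.6},
    "deworming":     {"cost_per_unit": 1.5,   "impact_per_unit": 0.1},
    "cash_transfer": {"cost_per_unit": 100.0, "impact_per_unit": 8.0},
}

def impact_per_dollar(option: dict) -> float:
    """Expected impact purchased by one dollar."""
    return option["impact_per_unit"] / option["cost_per_unit"]

budget = 10_000.0
best_name, best = max(interventions.items(),
                      key=lambda kv: impact_per_dollar(kv[1]))

print(f"Most cost-effective: {best_name} "
      f"({impact_per_dollar(best):.3f} impact/dollar; "
      f"{budget * impact_per_dollar(best):.0f} expected impact "
      f"for a ${budget:,.0f} budget)")
```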

The idea has been embraced by people working on animal rights and climate change, and, drawing on the work of philosophers, mathematicians, and futurists, it has come to influence the artificial intelligence research community in Silicon Valley and beyond.

Under this view, in the field of internet technology, "open source," "free," "decentralization," and the like are all conducive to doing the greatest possible good, while the corporate form organized around profit runs counter to it. In AI, effective altruists believe that a carefully crafted artificial intelligence system, instilled with the right human values, will usher in a golden age, but that failure to do so could have apocalyptic consequences.

OpenAI launched ChatGPT a year ago, and its creation was in fact based, in part, on the principles of effective altruism.

Altman and Musk began building OpenAI in 2015 with a vision of achieving artificial general intelligence (AGI), a system capable of reasoning at or above the human level. They said at the time that they hoped to reach this goal in a way that benefits humanity, not merely for commercial profit.

To some extent, OpenAI's turmoil over the past week is itself a contest between these ideas: on one side, a camp that believes in the laws of the market; on the other, effective altruists who believe that machines endowed with morality, reason, mathematics, and careful tuning can steer humanity's future more safely.

Are three of the four directors effective altruists?

Effective altruists believe that the headlong pursuit of artificial intelligence could destroy humanity, and when it comes to AI development they favor safety over speed. Many participants in the movement have themselves been leaders in shaping the AI boom of the past year.

According to people familiar with OpenAI's internal dispute, Altman, who was fired by OpenAI's board of directors on Friday, had clashed fiercely over AI safety with Ilya Sutskever, the company's chief scientist and a board member. Sutskever's past positions in fact reflect effective altruists' concerns about the current breakneck pace of AI development.

As many people have learned since OpenAI's "boardroom drama," Ilya Sutskever is a student of Geoffrey Hinton, the "father of deep learning," who resigned from Google earlier this year so that he could speak more freely about the dangers of artificial intelligence. As Hinton's student, Sutskever has harbored concerns about AI for a long time.

In July of this year, OpenAI officially announced a new research team, "Superalignment," one of the efforts led by Sutskever. It plans to devote 20% of the company's computing power over the next four years to using AI to supervise AI, in order to solve the "alignment" problem for superintelligence. "AI alignment" means ensuring that an AI system's goals are consistent with human values and interests.
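
The article does not describe how the Superalignment team's "AI monitoring AI" would work in practice; the sketch below is only a hypothetical illustration of the general pattern, with generator() and safety_critic() as stand-ins for two separate models:

```python
# Hypothetical sketch of "use AI to monitor AI": a second model scores
# the first model's outputs against a policy, and only outputs judged
# safe are released. Both functions are stand-ins, not OpenAI's method.

def generator(prompt: str) -> str:
    """Stand-in for the model being supervised."""
    return f"Answer to: {prompt}"

def safety_critic(prompt: str, answer: str) -> float:
    """Stand-in for a supervising model; returns a safety score in [0, 1]."""
    banned = ("weapon", "exploit")
    return 0.0 if any(word in answer.lower() for word in banned) else 0.9

def supervised_generate(prompt: str, threshold: float = 0.5) -> str:
    """Release an answer only if the supervising model approves it."""
    answer = generator(prompt)
    if safety_critic(prompt, answer) < threshold:
        return "[withheld: flagged by the supervising model]"
    return answer

print(supervised_generate("How do plants make energy?"))
```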

Sharing similar views with Sutskever were the other two directors who initially voted to oust Altman: Tasha McCauley and Helen Toner.

Tasha McCauley is a technology executive and a board member of Effective Ventures, an effective-altruism charity, while Helen Toner is an executive at Georgetown University's Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to the effective-altruism cause. People familiar with the matter said they held three of the four board votes needed to remove Altman.

OpenAI announced on Wednesday that Altman would return as the company's CEO, yet the three people above, Ilya Sutskever, Tasha McCauley, and Helen Toner, no longer appear on the latest list of board members.

It is fair to say that although the effective altruist camp ousted Altman with lightning speed in the first half of this "boardroom drama," it was ultimately routed in the endgame. But is what has happened at OpenAI really the end of the story?

Obviously, not necessarily!

The battle for ideas is expected to accompany the development of AI

In fact, since around 2014, effective altruists have come to believe that advanced artificial intelligence systems pose a risk of human extinction, a risk they see as comparable in scale to the climate crisis. That realization coincided with the publication of "Superintelligence" by the Swedish philosopher Nick Bostrom, which used the "paperclip" as a symbol of the dangers of artificial intelligence.

The thought experiment is simple yet frightening: if a machine's only goal is to maximize the production of paperclips, it may invent outlandish technologies solely to convert all the resources available in the universe into paperclips, eventually destroying humanity.
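
A few lines of code make the logic of the thought experiment concrete. This toy agent is my own construction, not Bostrom's; its objective counts nothing but paperclips, so its greedy policy never has a reason to stop consuming:

```python
# Toy paperclip maximizer: the objective values only the paperclip
# count, so the agent converts every reachable resource and stops
# only when nothing is left. Units are hypothetical.

resources = {"iron": 100, "forests": 50, "cities": 10}
paperclips = 0

while any(amount > 0 for amount in resources.values()):
    target = max(resources, key=resources.get)  # grab the largest stock
    paperclips += resources[target] * 1000      # convert all of it
    resources[target] = 0

print(f"Paperclips: {paperclips}; resources left: {resources}")
# -> Paperclips: 160000; resources left: {'iron': 0, 'forests': 0, 'cities': 0}
```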

One day last fall, thousands of paperclips in the shape of the OpenAI logo were delivered to the company's San Francisco office. No one seemed to know where they came from, but everyone knew what they meant...

The prank is said to have been carried out by an employee of Anthropic, an OpenAI competitor in the same city. Anthropic was founded by a group of former OpenAI employees, whose decision to strike out on their own is likewise said to have stemmed from disagreements over AI safety.

Interestingly, after Ilya Sutskever and the others first forced Altman out last weekend, news quickly broke that they had actually wanted to merge OpenAI with Anthropic.

To many who are not believers in effective altruism, these ideas seem hard to accept. Venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece in The Information: "OpenAI board members' belief in 'effective altruism' and its misapplication may hinder the world's path to the huge benefits of artificial intelligence."

When Altman toured the world this spring, even as he warned that artificial intelligence could cause serious harm, he also called effective altruism a "flawed movement" exhibiting "very strange and unexpected behavior."

Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz, and Garry Tan, CEO of startup incubator Y Combinator, have also criticized the movement. Tan called it a "virtue-signaling philosophy" with no real substance, one that should be abandoned in favor of solving real problems and creating human prosperity.

Shazeda Ahmed, a member of a Princeton University research team studying the movement, pointed out that effective altruists' acute fear that artificial intelligence will destroy humanity "masks their ability to take in outside criticism of this culture, which isn't good for any group trying to solve any difficult problem."

By now, however, it is clear that the influence of effective altruism in Silicon Valley, and in the AI field in particular, cannot be ignored. The movement's backers include Facebook co-founder Dustin Moskovitz and Skype co-founder Jaan Tallinn, who have pledged billions of dollars to effective-altruism research. Before FTX collapsed last year, its founder Sam Bankman-Fried had also pledged billions.

Finally, there is another important name that is clearly even harder to ignore: Tesla CEO Elon Musk.

Musk once said that William MacAskill, founder of the effective altruism movement and a philosophy professor in England, "fits my philosophy very well." Over the past year, Musk has repeatedly expressed concern that AI may destroy humanity.

Editor/Corrine


