
OpenAI CEO: AI Agents are like senior colleagues, priced by the computation required rather than by the number of agents

wallstreetcn ·  14:32

Altman says an agent can handle a two-day or two-week task well, ask questions when necessary, and then deliver an excellent finished product. Pricing will be based not on the number of agents but on the computation a problem requires. And once something has been shown to work, as with GPT-4, it is easy to replicate even without knowing how it was achieved.

Recently, OpenAI CEO Sam Altman shared his views on AI Agents in an interview, covering their potential, pricing, and business models. He believes an Agent can work with users like an intelligent colleague to complete a project. He also emphasized that reasoning is the most important area of focus for current OpenAI models and the key to unlocking greater value.

The key points of the interview are as follows:

  • An Agent can handle a two-day or two-week task well, ask you questions when necessary, and then deliver an excellent finished product.

  • AI Agents are not priced per Agent, but based on the computational needs of the problem.

  • We can generate income through our models, which I believe shows the investment is justified.

  • There may be too many people training very similar models. If you fall slightly behind, or if you don't have a product with natural stickiness and value, it may be difficult to earn a return on the investment. We are very fortunate to have ChatGPT and thousands of people using our models.

  • A peculiarity of our field is that it is easy to replicate what has already been shown to work. Once a research lab has done something, it can be replicated even without knowing how it was done.

  • The culture that truly challenges me and makes me proud is constantly doing new, completely unverified things.

The following is the original interview text, with some content slightly trimmed:

An AI Agent is like a senior colleague who can collaborate on a project and deliver excellent results.

Question:

What misconceptions do you think people have about AI Agents?

Sam Altman:

I don't think anyone can predict what this will become. As you said, we are all describing things that seem important. Maybe I can give an example: when people talk about AI Agents acting on their behalf, the main examples they give are quite consistent.

For example, you can have the Agent make restaurant reservations for you, whether through interaction with OpenAI GPT or by calling the restaurant directly. Of course, that is a tedious thing to do, but it has to get done, and it is perhaps similar to certain kinds of work.

What I find more interesting is the category of things you otherwise wouldn't, or couldn't, do. For example, instead of my agent calling one restaurant to make a reservation, what if it called 300 to find out which one has the food best suited to me, or what makes each of them special?

You might say it would be annoying if agents kept calling restaurants, but if an automated system handles those 300 calls as well, there is no problem. It becomes a massively parallel operation that humans can't achieve. This is just a simple example.
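The massively parallel pattern described here can be sketched in a few lines. Note that `query_restaurant` and its availability rule are hypothetical stand-ins for whatever call an agent would actually make; this is an illustration of fan-out, not any real agent API:

```python
from concurrent.futures import ThreadPoolExecutor

def query_restaurant(name: str) -> dict:
    # Hypothetical stand-in for an agent calling or messaging one restaurant.
    # Here we simulate an availability answer deterministically from the name.
    return {"name": name, "available": len(name) % 3 == 0}

def query_all(restaurants: list[str], max_workers: int = 50) -> list[dict]:
    # Fan out one query per restaurant in parallel -- the kind of
    # scale a single human caller could never reach.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(query_restaurant, restaurants))

results = query_all([f"restaurant_{i}" for i in range(300)])
matches = [r["name"] for r in results if r["available"]]
```

All 300 queries run concurrently and the agent only surfaces the matches, which is why the per-call annoyance cost stops being the bottleneck.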

But it also reveals the limits of human bandwidth, limits that an agent may break. I think the more interesting category is not what people usually talk about, but having something that serves you more like a truly intelligent senior colleague, someone you can collaborate with on a project. An agent can handle a two-day or two-week task well, ask you questions when necessary, and then deliver a great finished product.

AI Agents are priced by the amount of computation they use.

Question:

Does this fundamentally change the way pricing works, and how do you see the future of pricing? How will it be priced?

Sam Altman:

I used to make a living as a venture investor, so we expect to reap rewards over time. Imagine a world where you can say: I want one GPU, or 10 GPUs, or 100 GPUs, to work on my problem. Pricing is not per agent or per unit, but based on the computation the problem requires.
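A compute-based scheme of the kind sketched above might look like the following. The metering unit (GPU-hours) and the rate are illustrative assumptions, not OpenAI's actual pricing:

```python
# Illustrative sketch: bill for the GPU-hours a task consumes,
# not for the number of agents. The rate below is a made-up assumption.
GPU_HOUR_RATE_USD = 2.50

def task_cost(gpus: int, hours: float, rate: float = GPU_HOUR_RATE_USD) -> float:
    """Price a task by the compute it consumed (GPUs x hours x rate)."""
    return gpus * hours * rate

# The same agent produces very different bills, because the problems
# differ in how much computation they need:
small = task_cost(gpus=1, hours=0.5)   # quick question: 1 GPU for 30 minutes
large = task_cost(gpus=100, hours=48)  # two-day project: 100 GPUs
```

The design point is that price tracks the difficulty of the problem rather than a per-seat or per-agent count.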

Question:

Do we need to build dedicated models for specific purposes, or not? How do you see this issue?

Sam Altman:

There is indeed a lot of infrastructure to build, but I think o1 points toward a single model that can do an excellent job. That is something worth thinking about further.

Models are depreciating assets, but investing in training them makes sense.

Question:

Regarding models: everyone says models are depreciating assets, and model commoditization is widespread. How do you respond to that? And given the increasing capital intensity of training models, are we actually seeing the trend reverse, with so much funding required that very few can do it?

Sam Altman:

Models are indeed depreciating assets, but to say their value is less than their training cost seems completely wrong. Moreover, as you learn to train these models, you get better at training the next one; there is a positive feedback loop. We can generate income through our models, which I believe shows the investment is justified.

But to be fair, I don't think this applies to everyone. There may be too many people training very similar models, and if you fall slightly behind, or if you don't have a product with natural stickiness and value, it may be difficult to get a return on the investment. We are very fortunate to have ChatGPT and thousands of people using our models.

Reasoning is the most important area of focus for current OpenAI models.

Question:

How do you view the continued differentiation of OpenAI models over time, and which aspect of differentiation are you most interested in focusing on?

Sam Altman:

Reasoning is our current primary area of focus. I believe it is the key to unlocking the next major advances and the next level of value. So we will improve the models in many ways: we will do multimodal work and build in other features we consider very important, to match how people will use these tools.

Question:

How do you view reasoning in multimodal work, the challenges it faces, and the goals you hope to achieve?

Sam Altman:

In multimodal work, reasoning clearly requires some effort. But consider human infants learning to walk: even before their language ability is strong, they can perform quite complex visual reasoning, so this is clearly possible.

Question:

How will visual capabilities expand under the test-time reasoning paradigm that o1 established?

Sam Altman:

Without revealing too many details, we will make rapid progress.

Question:

GPT's output is generative, yet I don't think we see a distinctly British style in much of it; the output is usually American in spelling and tone. How do you view the internationalization of models across different cultures and languages, and how important is that?

Sam Altman:

I don't use British English and haven't tried it, but I would guess the model performs well in British English. We can look into it; it is something we can do.

It is easy to replicate what has already been shown to work.

Question:

How does OpenAI make breakthroughs in reasoning ability? Do we need to start advancing reinforcement learning as a path, or new technologies? Are there other methods besides Transformer?

Sam Altman:

It is a peculiarity of our field that things which have already succeeded are easy to replicate. Once a research lab has done something, even if you don't know how they did it, replicating it is feasible. You saw this in the replication of GPT-4, and I believe you will see it in the replication of o1.

The culture that truly challenges me, and that I am most proud of, is one that continually does new and completely unproven things. Many organizations, in every field, claim to have this ability, but very few actually do. In a sense, I think this is one of the most important contributions to human progress.

So I have dreamed of writing a book after I retire, summarizing everything I have learned about how to build an organization and a culture that does such things, rather than just imitating what others have done.

I believe the world could have more organizations like this. We are limited by human talent, yet a great deal of talent is wasted, because this kind of organization and culture, whatever you want to call it, is not something we are good at building. So I hope there can be more of them; I think this is what makes us special.

Editor/ping

The translation is provided by third-party software.

