
Weekend Reading | Google is “Making a Comeback”! CEO Sundar Pichai’s Latest Dialogue: “Full-Stack Transformation” Has Moved Past the Preparation Phase, with All Teams Accelerating Forward

Smart Investor ·  Nov 30, 2025 11:24

Source: Smart Investors

Entering the fourth quarter, $Alphabet-A (GOOGL.US)$ is undoubtedly the most formidable among the 'Magnificent Seven'.

Especially since November, as bellwether NVIDIA has slid steadily, Alphabet's relentless climb to new highs has stood out all the more.

With its market capitalization now above $3.8 trillion, Alphabet has overtaken $Microsoft (MSFT.US)$ and ranks among the top three, behind only $NVIDIA (NVDA.US)$ and $Apple (AAPL.US)$.

On November 26, Google and Alphabet CEO Sundar Pichai was interviewed by Logan Kilpatrick of the DeepMind team, sharing highlights from an eventful week. Of course, the focus wasn't on stock prices, but rather the enthusiastic response following the launch of Gemini 3.

Google is making a powerful comeback, a sentiment most strongly felt over the past six months.

This reflects the convergence of three underlying trends: the commercialization of AI in its core businesses, accelerating growth in cloud computing, and the return, to varying degrees, of founders Sergey Brin and Larry Page.

Once feared to be a cyclical company overly reliant on advertising, Google faced the risk of its core search business being eroded by AI. However, almost imperceptibly, Google has transformed into an efficient, vertically integrated AI infrastructure giant with strong competitive advantages.

In response to the "red alert" posed by ChatGPT, Google swiftly embedded its most powerful AI model, Gemini, into its core products. Data shows that AI-driven search summaries and other new features have not only preserved traditional query volumes but also driven incremental growth in user queries.

More crucially, Google Cloud Platform (GCP) has become a cornerstone of computing power in the AI race. Amid the shortage of NVIDIA GPUs, Google's self-developed TPUs, with their cost and efficiency advantages, have attracted numerous enterprise clients that need large-scale model training and deployment, including a recent large order from $Meta Platforms (META.US)$.

In this conversation, Sundar Pichai made this point clearer: from declaring AI First in 2016, to the merger of Google Brain and DeepMind into Google DeepMind, to the deployment of transformer and BERT across products such as search and photos, Google has actually been on a long journey of full-stack commitment—first building up layers of models, computing power, and toolchains, then integrating them into search, YouTube, Docs, Maps, cloud services, and an open ecosystem for developers.

"From the outside, you might have thought that Google was quiet at the time, seemingly inactive and perhaps falling behind. But in reality, we were steadily laying a solid foundation, arranging each piece in the right order, and then beginning to truly exert ourselves on top of that."

Beyond strategic transformation, Google also achieved its 'coming of age' in financial discipline. In April 2024, it announced its first quarterly dividend and paired it with a $70 billion stock buyback, signaling to the market that it is not just a growth stock but also a cash flow machine.

While heavily investing in AI infrastructure, Google maintained robust operating margins by streamlining its organization and improving efficiency. This provides long-term investors with a quantifiable and reassuring anchor in the capital-intensive AI era.

Seen in that light, $Berkshire Hathaway-B (BRK.B.US)$'s investment of over $4 billion in Google during the third quarter of this year, making it Berkshire's tenth-largest holding, looks like a shrewd move.

A subtler and perhaps more important thread is the return of culture and the founders.

Sergey Brin stepped back from his 'semi-retirement' to closely oversee the Gemini project, helping Google make an urgent recalibration in its AI direction. He bridged the gap from research to product and then to commercialization, reigniting the entrepreneurial spirit and sense of urgency within the company. Meanwhile, Larry Page mostly stayed behind the scenes, quietly playing the role of a long-term architect by focusing on the Alphabet structure, AI infrastructure CapEx, and resource allocation for 'moonshot projects' like DeepMind.

In the interview, Sundar mentioned a corner he particularly likes: the pantry in the Gradient Canopy building, where you often see Sergey Brin personally making coffee... The dense feeling of the 'company shrinking' is reminiscent of the early entrepreneurial atmosphere at Google.

The interesting aspect of this conversation lies in how it serves as both a technical roadmap and an expression of how a manager views cycles and timeframes.

On one hand, Sundar discussed how Nano Banana Pro transitioned from being 'fun' to 'enhancing productivity,' using new graphics and interactions to condense complex analyses into more readable infographics. On the other hand, he talked about vibe coding, which turns 'writing code' into the 'YouTube moment' of the programming world, encouraging non-technical people to start creating things hands-on.

Looking further ahead, he spoke about Waymo, quantum computing, and Project SunCatcher, which aims to send data centers into space—those seemingly sci-fi projects are reverse-engineered with a decade-long perspective, broken down into more than 20 milestones for execution.

When we piece these fragments together, we realize that what Google got right over the past year was reconstructing its growth engine with AI, while firmly anchoring this new engine onto the corporate body through financial discipline and the directional clarity brought by the founders' return.

There is both speed and stability.

In any case, investors who bought in, added to their stakes, or held significant positions in Google through the third quarter must be thrilled.

Finally, Smart Investor has compiled the highlights of Sundar's conversation below.

01. Building a 'full-stack capability' system

Logan: We are currently in Mountain View. Both Gemini 3 and Nano Banana Pro have been launched, and the response from the outside world has been extremely enthusiastic. Could you help us outline the significance of this phase from an overall perspective?

Sundar: First of all, I'm very glad to be here. This week can indeed be described as 'spectacular.'

When we were heads-down developing products internally, we always imagined what it would be like if one day we could truly release all our results. For product developers, there’s nothing more exciting than that. And this week, it's precisely such a moment.

But this outcome didn’t happen overnight; it’s built on years of accumulation and continuous investment. We’ve been pushing forward at a rapid pace. However, seeing all the results converge and materialize at the same time feels truly extraordinary.

Over the past few weeks, I've been reflecting, and almost every day we’ve had something new being launched. The rhythm and atmosphere are incredibly invigorating.

Logan: The key reason we’ve reached this milestone today is the long-term perspective — truly integrating Gemini into various Google products, advancing the model to the forefront of the industry, and completing the infrastructure setup in parallel.

In such a highly competitive environment, I’m particularly curious about how you maintain that long-term mindset?

Sundar: I constantly push myself to think in that direction.

Of course, you can get caught up in the immediate pace. Our industry moves very fast, and everyone wants quick results. Personally, I also enjoy this fast pace.

But at the same time, you must have the ability to step back and evaluate from a longer-term perspective: assessing which directions are truly worth betting on, and then staying focused on them over an extended period. I think this is critically important.

In 2016, I spearheaded the company’s overall transition to AI-First, which was in fact built upon an earlier phase of accumulation.

For instance, in 2012 there was Google Brain, with its famous 'cat paper' breakthrough in image recognition; in 2014, we acquired DeepMind; and early 2016 was a pivotal moment for AlphaGo. What many may not have noticed is that in May of that year, we also released the first generation of TPU.

Therefore, at that point in 2016, I was already very clear: we were standing at the threshold of the next platform shift. The choice we made was an “all-in bet” to transform Google into a company truly driven by AI at its core.

In the following years, we continuously advanced the implementation of AI technologies, with many breakthroughs originating from Google, such as the transformer.

We integrated these capabilities into our products, using BERT and MUM to enhance search experiences, and also launched products like Google Photos.

However, when the wave of generative AI truly arrived, I realized this window was much larger than we had originally anticipated—whether users, developers, or the entire world, everyone was ready to adopt these technologies on a much greater scale.

So how should we respond?

For us, it was about initiating the Gemini project, driven through cross-team collaboration between Google Brain and DeepMind. Simultaneously, we decided to integrate these teams, forming today’s Google DeepMind.

We also ramped up investments in infrastructure: whether data centers, TPUs, or GPUs. Next, the goal is to accelerate the pace of work across the entire company.

With the core technology in place, the GDM team has continued to roll out various versions of Gemini, and you’ve been involved in many of them. It’s great to see that together, we’ve witnessed these milestones.

The next step is how to truly integrate these technologies into all of our products, which reach billions of users every day.

Particularly for core products like search, how do we continuously iterate through the capabilities of these models? This is the journey we are currently advancing.

And when you step back and look at the big picture, it becomes very clear: once we finally establish a full-stack capability system, technological innovation at every layer can be transmitted downward, ultimately reflecting in the user experience.

This is what excites me the most.

02. The simultaneous release of Gemini accelerates progress

Logan: This is also the analogy I like to use when explaining 'pre-training' to others. Pre-training works so well on DeepMind's Gemini model, and the subsequent post-training and reinforcement learning built upon it are like accelerating the model's inherent capabilities. I think our infrastructure story follows a similar logic.

Sundar: Exactly. Once you solidify the foundational infrastructure, every phase of model training—whether pre-training, post-training, or inference—becomes more efficient and powerful.

The next question is: how do we turn these capabilities into tangible products that users can experience? For example, how do we implement capabilities like Nano Banana into specific products?

We have launched generative interfaces in search, such as AI mode, which is one manifestation. You can see these models’ capabilities being activated across different levels and products.

Moreover, not only are we using them ourselves, but we are also packaging and opening up these capabilities to developers, enabling them to continue innovating on top of them.

This cascading amplification effect, from foundational capabilities to product experience and then to the developer ecosystem, can indeed yield multiplicative results.

This feeling always gets the blood pumping.

Of course, this kind of full-stack transformation does not happen overnight; many components require time to be gradually built up. When we prepared to respond to the wave of generative AI, initially we faced a shortage of computing power.

Therefore, during that period, we had to make significant investments, aggressively scaling up data centers, TPUs, and GPUs, to raise the overall 'ceiling' of computational capacity. The fixed costs involved in this process were extremely high.

From the outside, it might have seemed as though Google was quiet at the time, appearing inactive or even falling behind. But in reality, we were steadily laying a solid foundation, arranging each building block in the right order, before beginning to truly exert our efforts on top of it.

Now we have moved past that preparatory phase and officially entered the execution phase. You can clearly feel that every team is now accelerating forward with great momentum.

Logan: Indeed, the overall pace has noticeably quickened.

You mentioned earlier that Gemini appears across all our products, presenting a new challenge: how to synchronize releases—not only aligning the timing of product launches but also addressing issues like computational resource allocation and optimizing model performance differences across various product scenarios to ensure a consistent user experience.

In the past, aside from infrastructure systems like Gaia or Google Accounts, there were very few instances where something truly spanned so many of Google's products—from Cloud, Waymo, Search, Gmail, to Maps, Docs, and more.

But now, Gemini serves as a central thread connecting everything. This level of integration is unprecedented and sometimes even feels magical. What are your thoughts on this change?

Sundar: Your observation is particularly insightful.

For me, Gemini represents the clearest and most concrete embodiment of our AI-First strategy. We've been advocating this strategy for many years, which used to feel more like a sense of direction, but now Gemini is something you can actually see, touch, and experience.

As you mentioned, Gemini not only enhances search but also improves the experiences on YouTube, Google Cloud, and Waymo. This broad impact is something no previous technology has achieved.

I am particularly excited about the release of Gemini 3.

The simultaneous release you mentioned—deployed across many of our products—is impressive, but what excites me even more is that if you look at X, companies like Copilot, Replit, and Figma are also launching their own Gemini-powered applications at the same time.

To me, this is true innovation at scale: it's not just Google accelerating, but companies all over the world collectively making strides at the same moment.

This synchronized progress is truly awe-inspiring.

03. The world’s latent creativity is being unleashed

Logan: Returning to Nano Banana Pro. You must have spent quite some time playing with this model yourself, right? Everyone is going crazy over it now, which also speaks to its strength. Here's a question: are we enhancing the productivity of the entire world?

Sundar: We are gradually transitioning from the fun phase to a truly practical phase.

I just saw a post by Ben Gerin on X, where he shared an infographic from his core analysis of ViiV. I initially glanced at it casually, but the graphic was so compelling that it drew me in to read through the entire analysis carefully to understand what he was conveying.

When PowerPoint first emerged, it triggered a wave of content proliferation: people could create slideshows, and as a result, the amount of content grew rapidly, becoming increasingly complex.

However, with the advent of Nano Banana Pro, we may be able to enter the next phase—leveraging the capabilities of models to recompress and reorganize complex information, then presenting it in a more intuitive and comprehensible manner, truly enhancing people's understanding.

Logan: To be honest, I’ve always had some doubts: can many generative media models really bring significant value to the world? Of course, they’re useful for entertainment, but they’ve always seemed somewhat distant from being productivity tools.

This time, I believe Nano Banana Pro has truly bridged that gap, especially in the area of infographics combined with Google search integration.

This also serves as a reminder: new applications like this will continue to emerge in the future.

Sundar: This actually reflects a broader reality—the world holds far more creativity than we ever imagined.

One particularly exciting change we’re seeing now is that people are becoming increasingly willing to express themselves. And what we’re doing is providing them with a new set of tools that allow them to turn their ideas into reality.

Today, not only are we making these tools more powerful and expressive, but we’re also striving to make them simpler and more accessible so that more people can use them.

Witnessing this creativity being gradually unleashed is truly awe-inspiring.

04. How to Measure the Success of a Product Launch

Logan: For Google, a large-scale launch like Gemini 3 is considered a milestone event. What is your personal standard for measuring 'success'? Is it based on external reactions? Or the usage data from the first day? Or how do you determine whether a launch has truly pushed Google forward by a significant margin?

Sundar: Actually, on the day of the launch, I was online throughout, closely monitoring the product's performance and feedback from everyone.

I look at information from various channels. For example, I go on X to observe how people, especially regular users, are actually using our products. I even reply directly to some comments and forward valuable feedback to the team: 'Look, this point is valid, we need to find a way to improve it.'

Internally, we also have a comprehensive system in place for this purpose. Many teams are using Gemini to collect and organize feedback. We have top-tier data dashboards, so I aggregate this information from various sources.

But I’m still someone who needs to personally experience things to feel reassured.

Of course, I review reports, but I prefer to walk around, see firsthand how people are using the product, what they’re discussing, and observe their genuine reactions.

I also visit some core teams whose offices typically have large screens displaying metrics such as QPS (queries per second), usage trends, and computational load… These details give you a very real sense of what’s happening out there.

Therefore, my approach to measuring the success of a launch is through a combination of 'online sentiment + offline communication + real-time data + user observation.'

Especially on the day of the launch, this intuitive feeling is very helpful for quickly identifying what went well and what areas still need improvement.

Logan: I remember Gemini 2.5 Pro was announced at Google I/O, right? You could already tell back then that it was a very pivotal upgrade.

Sundar: Exactly. What makes me most proud now is that Cora and the Google DeepMind team have settled into a very healthy rhythm. We are basically able to push the frontier of model development just a little further every six months.

Of course, this also means the challenges are becoming greater.

The 2.5 Pro itself is already incredibly powerful, so taking a 'meaningful major step' forward from it is undoubtedly difficult. But precisely because it’s challenging, this path becomes all the more exciting.

I know you’ve always been particularly excited about Flash (laughs), and developers are too. Because Flash will allow us to serve more users, and it will play a significant role in the overall model family.

I’m highly confident about Gemini 3.0 Flash—it might just be the strongest model we’ve developed to date.

More importantly, the internal team has already started preparing for the next generation, and our pre-training team is already conceptualizing subsequent versions.

This culture of continuous innovation and regular releases is exactly what I find particularly precious at this stage.

05. The Revival of Internal Entrepreneurial Culture

Logan: I’d like to share an observation that has also been mentioned internally: In the Gradient Canopy office building, there’s a tea station where many important discussions within DeepMind are now happening.

We all know that Google is a vast, global company with countless things happening every day. But when you walk into that pantry, you suddenly feel like the company has become small, intimate, and very much like a team in its startup phase. What do you think of this kind of atmosphere?

Sundar: I really understand what you're saying. That place truly reminds me of the early days of Google.

I often go there, and sometimes when you walk in, you can see Sergey Brin making coffee. People like Demis Hassabis, Jeff Dean, and Sanjay Ghemawat are also there, meticulously preparing their espresso shots (laugh).

To be honest, if there were one image to encapsulate Google's culture, it might just be those people making coffee in the pantry.

I actually know quite a bit about how to make a good espresso, but in such an environment, I wouldn't dare to try because it makes me a little nervous.

But being there, you can sense an extremely high density of talent and intensive communication. Many colleagues from out of town come to visit, and ideas keep colliding. The atmosphere is genuinely reminiscent of our early startup days.

Some of our service teams, like Emma's group, sit right over there. When I mentioned earlier that I would check the QPS changes, I was literally standing next to their big screen, closely monitoring the data to see what was happening at the moment.

So for me, that area is truly one of my favorite corners and also a microcosm of how this company operates.

Logan: If I could submit a 'product request' to Google, it would be whether we could replicate this model across the office spaces of all product lines? I don’t know how to do it, but intuitively, I think it’s worth doing.

Sundar: In fact, many teams have already DIYed similar versions (laugh). And I think it helps encourage people to return to the office. It makes a significant difference.

Because when you are actually sitting in the office, you can truly feel the value of face-to-face communication.

Of course, everyone can still choose to work remotely from home when they need to focus. But before that, having a brief but highly intensive moment of interaction is truly very meaningful.

06. Make long-term bets and reverse-engineer each milestone along the way

Logan: Looking ahead from this point, what direction do you think we should bet on for the next decade? Infrastructure? Or, since we are already quite certain that AI is the core, should we bet everything on it?

Sundar: I think being able to think about issues with a ten-year horizon is itself very important.

Looking back ten years, our bet was on AI, and not just a superficial attempt but a comprehensive, top-to-bottom full-stack commitment.

We were also simultaneously driving several large new initiatives to make the company’s structure healthier and more resilient, such as YouTube and cloud services.

In fact, Google has been a Cloud Native company since its inception; internally, we adopted advanced cloud architecture very early on. However, at that time, we did not fully externalize this entire capability, so later on, we made a very significant bet on the cloud.

Waymo is another example of a long-term bet. These projects often require a sufficiently long accumulation period, and looking back now, Waymo has reached a critical turning point.

We continue to make similar future-oriented bets. For example, with quantum computing, I still believe that in five years, we will be talking about quantum with the same excitement as we discuss AI today.

So I often think about issues on this kind of time scale in my daily life.

For example, two weeks ago we just announced Project SunCatcher, which aims to build data centers in space. It sounds like a 'moonshot' project and may seem a bit crazy now, but take a step back and think: how immense will the future demand for computing power be?

This idea will soon become reasonable; it's only a matter of time.

To make real progress on these ultra-long-term projects, our approach is 'backcasting,' meaning we start from the long-term goal and work backward to identify each milestone along the way.

For instance, SunCatcher has been broken down into 27 critical milestones, and then we move forward step by step.

By 2027, we hope to actually send TPUs into space, and there might even be a chance to 'accidentally encounter' that floating Tesla Roadster (laughs).

But that’s just one example. Projects like AlphaFold, Wing’s drone delivery, and the robotics initiatives we are advancing all follow a similar line of thinking.

As long as you adopt a sufficiently long-term perspective and keep pushing forward, results will eventually emerge.

07. Changes Brought About by Lowering Barriers with Tools

Logan: The point you just mentioned resonates deeply with me. The continuous improvement in model capabilities essentially raises the 'floor of creation.' Many people who previously couldn’t write code or design are now able to start creating things.

You also engage in vibe coding in your free time, right? I’m curious about how you view this moment. It’s not just traditional software engineers anymore—more 'AI builders' are now able to take action. What does this mean for you?

Sundar: This is indeed an especially exciting phase.

It reminds me of the early days of the internet. At that time, blogs suddenly exploded, resulting in countless new writers overnight; later, YouTube lowered the barrier for video creation, giving rise to a new generation of content creators.

I think what we’re seeing with vibe coding is akin to the 'YouTube moment' for programming.

You can see this trend within Google too: there's been a noticeable increase in the number of people submitting their first code change (CL). This is the kind of shift that happens when tools lower the barriers to entry.

In the past, if you worked in marketing and had a good idea, at most you would write a document to describe it; now, you can use vibe coding to create a prototype demo yourself and show it directly to your team.

You can clearly see that this change is happening every day.

Just the other day, I was chatting with a colleague from our communications team. She doesn’t normally write code, but to help her son learn Spanish verb conjugations, she wrote a prompt in Gemini 3 to generate an animated webpage.

Something like that would have been unimaginable in the past. Now we're seeing tools bringing people who were previously left out into the ranks of creators, which is really promising.

I don’t have much time to play around with these things myself, but every time I do, I get the sense that 'this is so much fun.' Even if it’s not specifically vibe coding, just using a new IDE makes programming itself more enjoyable.

Of course, I wouldn’t dare touch large codebases with extremely high security requirements; that work must be left to professional engineers.

But I genuinely feel that the barriers are truly lowering, and coding is becoming fun again.

The key point is, this is only the most rudimentary stage of these tools’ performance.

I’ve often said: What you’re seeing now is Waymo’s worst-ever performance—it will only get better from here. Similarly, what we’re seeing with Gemini, AI Studio, and vibe coding is still in a comparable early phase.

That’s why I’m particularly excited about what’s coming next—it’s truly an exhilarating time.

I’m also very curious to see what completely unexpected things people around the world will create using these tools.

Logan: What can we expect next? I know there are definitely many new developments in the pipeline—what excites you the most right now?

Sundar: First, I’d say we all need to get some good rest (laughs). It’s been such an intense period; I hope both you and I, as well as the entire team, can take a breather and adjust our pace a bit.

From a product perspective, what excites me the most is, of course, the entire Gemini roadmap. Not only are our models getting stronger, but they are also deeply integrating into all of our core products.

At the same time, we continue to roll out entirely new offerings. For instance, I’m particularly fond of Flow—I’ve been experimenting with it lately—and notebook-style products, which have already attracted a highly enthusiastic and rapidly growing user community.

I’ve seen reporters use it to create interview outlines, and doctoral students complete their entire thesis research process within it. These application scenarios truly resonate with me.

So, there is still much to be done moving forward, and I am very optimistic about future developments.

Editor /rice


