On the other side of the ocean in Silicon Valley, a 'covert battle' over the foundational computing power for AI is quietly intensifying.
Recently, Gemini 3, trained using Google's self-developed Tensor Processing Unit (TPU) chips, outperformed ChatGPT 5, which was trained using NVIDIA's Graphics Processing Units (GPUs). Following this, market rumors suggested that Meta might adopt Google’s self-developed TPUs on a large scale to replace some of NVIDIA’s GPUs. These developments, like boulders dropped into a lake, have created ripples across the capital markets.
Since November, the two AI giants have shown diverging performance: NVIDIA (NVDA.US) has fallen 12.59%, while Alphabet-A (GOOGL.US) has risen 12.85% against the trend. Will NVIDIA's trillion-dollar AI empire show cracks as Google's TPU rises? And in this contest between tech giants, what opportunities and challenges will Chinese companies, already deeply embedded in the global computing power supply chain, face? With these questions in mind, reporters from the Securities Times and Broker China interviewed several fund managers and industry veterans specializing in technology, seeking the real logic of the capital markets through the fog of competition over technical routes.
The Battle Between Customized and General-Purpose Chips
To outsiders, the contest between Google’s TPU and NVIDIA’s GPU appears to be a fierce battle for supremacy. However, in the eyes of professional investors, it represents more of a rational return to considerations of efficiency and cost.
“Google's (GOOGL.US) TPU is a customized chip, while NVIDIA's (NVDA.US) GPU is a general-purpose chip. What we are really discussing, therefore, is the competition between customized and general-purpose chips, rather than the competition between Google and NVIDIA,” Cao Xuchen, fund manager of the Hua Bao Hong Kong Stock Information Technology ETF, told the Securities Times - Broker China reporter.
History often repeats itself. Cao Xuchen believes the split between customized and general-purpose chips has shifted not only in servers but also in traditional fields such as consumer electronics and automobiles, and in every case the eventual outcome was coexistence at different shares. In mobile phones, for instance, general-purpose chips from Qualcomm and MediaTek coexist with custom silicon from Huawei and Apple. “In terms of underlying chip manufacturing technology, there is no difference between TPU and GPU. The core motivation for custom chips like the TPU is cost reduction.”
A public fund manager from North China further analyzed the differences between the two from a technical-architecture perspective. Google's TPU, he said, is a typical ASIC (Application-Specific Integrated Circuit), designed from the outset solely to accelerate neural network computations, which maximizes performance while reducing energy consumption. NVIDIA, by contrast, has used its CUDA ecosystem to turn the GPU into an extremely flexible general-purpose parallel computing platform, capable of supporting the evolution of a wide variety of large AI models.
“Generally speaking, Google's TPU beats NVIDIA's GPU on performance and cost but lags behind in ecosystem openness and compatibility,” the fund manager said. “With large models still iterating rapidly and technical routes not yet settled, NVIDIA's general-purpose GPU, backed by CUDA's strong compatibility, remains the optimal choice for most manufacturers, and its near-monopoly may persist for quite some time. Once the technical route of large models stabilizes, however, ASICs such as Google's TPU may gradually become the mainstream compute-acceleration chips.”
One direct trigger for the market's heightened attention to the TPU is that Gemini 3, trained by Google on its TPU v6 chips, is considered superior in performance to ChatGPT 5, which OpenAI trained on NVIDIA GPUs.
In this regard, Cao Xuchen provided a calm perspective: “This does not mean that NVIDIA’s chips are inferior. If we roughly calculate using the method of ‘total compute power / price,’ the cost-performance ratio of v6 per unit of compute power is still weaker than NVIDIA’s B200/B300. Next year, the cost-performance ratio of Google’s v7p per unit of compute power is expected to be on par with NVIDIA’s Rubin chip.” He believes that this is a natural trend resulting from the constant competition among chip manufacturers and the narrowing technological gap, rather than a new disruptive threat.
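The "total compute power / price" comparison Cao Xuchen describes can be sketched as a simple ratio. The chip specs and prices below are placeholder assumptions for illustration only, not actual figures for TPU v6 or the B200/B300:

```python
# Hypothetical illustration of the "total compute power / price" screen
# described above. All specs and prices are placeholder assumptions,
# NOT real figures for any Google or NVIDIA chip.

def perf_per_dollar(tflops: float, price_usd: float) -> float:
    """Compute power delivered per dollar spent (TFLOPS / USD)."""
    return tflops / price_usd

# Assumed example values for two hypothetical chips.
chips = {
    "custom_asic_example": {"tflops": 900.0,   "price_usd": 20_000.0},
    "general_gpu_example": {"tflops": 2_250.0, "price_usd": 35_000.0},
}

for name, spec in chips.items():
    ratio = perf_per_dollar(spec["tflops"], spec["price_usd"])
    print(f"{name}: {ratio:.4f} TFLOPS per USD")
```

Under these assumed numbers the general-purpose chip still wins on compute per dollar despite a higher sticker price, which is the shape of the argument that v6 trails the B200/B300 on cost-performance.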
A technology fund manager in Shanghai likened the competition to a “relay race.” Both the self-developed chips of major companies and NVIDIA's own iterations are accelerating, he pointed out. “Although TPU v7 is currently ahead, NVIDIA will mass-produce Rubin soon after v7's release and retake the lead. Going forward the two will run in parallel: NVIDIA will serve the highest-end general-purpose computing needs, while self-developed chips from companies like Google, Amazon, and Meta will mainly be used in specific low-cost inference scenarios.”
Cao Xuchen predicted that the increase in customized chip market share is an established trend. Market expectations indicate that by 2029–2030, the global market share of customized computing power chips and GPUs will reach a 'fifty-fifty' split. However, before 2026, NVIDIA's dominant position remains unchanged, and it is not until around 2027, as computing power performance becomes increasingly comparable, that the market may once again enter into fierce competition for market share.
A-Share Optical Module and PCB Sectors May See Unexpected Incremental Growth
Whether GPUs or TPUs gain the upper hand, the competition behind computing power chips reflects higher requirements for data transmission efficiency. For hardware supply chains, often referred to as 'water carriers,' this is not a negative factor but rather a structural benefit. The recent fluctuations in A-share optical module and PCB (Printed Circuit Board) sectors seem to be pricing in these expectations in advance.
Caitong Fund stated that from the demand side, although different chip architectures have some variations in their requirements for hardware components such as optical modules and PCBs, this primarily reflects diversity in technical pathways rather than fundamental demand divergence. From the supply chain perspective, in segments like PCBs and optical modules, some leading domestic suppliers are already at the forefront globally. They possess advantages in customer responsiveness, product mass-production stability, and cost-efficiency. Therefore, despite the emergence of new participants among downstream chip manufacturers, the cooperation foundation between core domestic supply chain companies and major global clients remains solid, providing them with a competitive edge.
It is worth noting that if the share of Alphabet's (GOOGL.US) TPUs increases, it may bring unexpected incremental growth to the optical module and PCB markets.
A public fund manager from North China told Securities Times - Broker China reporters: “We ran an estimate on optical modules. Assuming Google's and NVIDIA's compute cards deliver equal theoretical computing power, the optical-module usage of TPU v7 is 3.3 times that of NVIDIA's Rubin (2-die version). This implies that if Google's TPUs partially replace NVIDIA's market share, growth in the optical module sector will accelerate, and its share of overall capital expenditure (capex) will rise further.”
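The quoted 3.3x figure translates into incremental module demand in a straightforward way. A back-of-envelope sketch, where only the 3.3 multiple comes from the quote and the fleet size and substitution share are hypothetical placeholders:

```python
# Back-of-envelope sketch of the optical-module estimate quoted above.
# The 3.3x multiple is from the fund manager's quote; the fleet size
# and TPU substitution share are hypothetical placeholders.

MODULES_PER_GPU = 1.0      # normalized baseline per Rubin-class card (assumed)
MODULES_PER_TPU = 3.3      # per the quoted estimate, at equal compute

def optical_module_demand(total_cards: float, tpu_share: float) -> float:
    """Total optical modules (normalized units) for a mixed fleet of
    equal-compute cards, where tpu_share of cards are TPUs."""
    tpu_cards = total_cards * tpu_share
    gpu_cards = total_cards * (1 - tpu_share)
    return tpu_cards * MODULES_PER_TPU + gpu_cards * MODULES_PER_GPU

baseline = optical_module_demand(1_000_000, 0.0)  # all-GPU fleet
mixed    = optical_module_demand(1_000_000, 0.3)  # 30% replaced by TPUs (assumed)
print(f"demand uplift: {mixed / baseline:.0%}")
```

Under these assumptions, replacing 30% of an equal-compute fleet with TPUs lifts total optical-module demand to roughly 1.69x the all-GPU baseline, which is why partial substitution would accelerate rather than shrink the sector.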
In the PCB sector, technological iteration has also led to significant value enhancement. According to the fund manager, Google’s next-generation TPU may adopt Taiflex's more advanced M9 (HVLP4+Q glass) copper-clad laminate material, which will directly drive up the price and profit margins of high-end PCBs.
However, amid optimistic expectations, some institutional investors have maintained cautious and sober reflections.
Cao Xuchen proposed a unique risk perspective: if the TPU model achieves complete success, it may signify a shift in the logic of the computing power industry from 'continuous inflation' to 'relative deflation.'
"I don’t believe that if TPU succeeds while GPU fails, the valuation of the computing power industrial chain can be sustained," Cao Xuchen analyzed. "A good growth industry is accompanied by triple upward revisions in volume, price, and profit margins, which was exactly the case for the computing power industrial chain over the past two years. Once low-cost TPUs become mainstream, while this will benefit the application side, the hardware industrial chain may face valuation suppression. Therefore, looking slightly longer term, GPUs cannot 'fall,' because if they do, there may be a collapse in valuations across the entire industrial chain."
Cost Reductions in Computing Power Cannot Mask the Absence of a 'Blockbuster Product'
At the same time, some market voices suggest that if Alphabet's (GOOGL.US) TPUs can provide powerful computing at lower cost, they may greatly accelerate an explosion in AI applications, shifting market focus from infrastructure hardware to application software and services.
In response, the interviewed fund managers expressed a stance of 'long-term optimism, short-term caution.'
"The second half of AI investment lies in applications, but that doesn't mean the second half has already arrived," a public fund manager from North China said frankly. "Whether applications can explode now depends on whether large models are sufficiently 'intelligent,' not just on cheap computing power. With computing power still in short supply, I believe we remain in the 'computing power reigns supreme' phase."
Cao Xuchen further pointed out that breakthroughs in technology sectors often begin with a blockbuster product, just as ChatGPT kicked off the era of computing power. Even setting aside how large models are 'eroding' parts of the traditional software application market, the core problem with AI applications today remains the absence of such a blockbuster product.
"From a medium- to long-term perspective, the cost-reduction effect of TPUs will definitely benefit large model companies and AI application companies, as it lowers the threshold for companies' AI transformation. But from an investment perspective, whether AI applications can have significant upside potential like AI computing power remains questionable, especially in terms of AI application software," Cao Xuchen stated.
Feng Ludan, a fund manager at China Europe Fund, believes that AI is not only relevant to the TMT industry but also serves as a general productivity tool reshaping traditional sectors. At this stage, she is paying closer attention to the following areas: first, humanoid robots and high-end manufacturing (embodied intelligence), where AI “brains” have enabled mechanical “bodies” to understand complex instructions; second, intelligent driving, with large models driving breakthroughs at the L3/L4 levels; third, AI-powered pharmaceutical R&D, where AI applications in protein structure prediction and molecule screening are significantly compressing the new drug development cycle.
The Practical Implementation of Applications May Be the Key Test of an AI Bubble
It is worth noting that, alongside the fluctuations in NVIDIA's (NVDA.US) stock price, Wall Street's debate over an “AI bubble” has gradually gained momentum, with many investors drawing parallels between the current AI boom and the dot-com bubble of 2000.
From the perspective of Caitong Fund, the two waves do share similarities: both originated from technological breakthroughs, triggered capital frenzies, and initially saw revenues unable to cover investments. However, fundamentally, this time is “different.”
“In terms of technological implementation, text models of this round of AI have already been widely adopted, with programming scenarios beginning to generate revenue and cloud providers seeing accelerated revenue growth; regarding the health of the industrial chain, GPU idle rates are low, and leading firms’ inventory, cash flow, and other metrics remain healthy, with extremely high order visibility; in terms of valuation rationality, based on the high growth expectations for the industry over the next three years, we believe the overall valuation levels of companies within the industrial chain are still relatively reasonable, without widespread signs of overheating.” Caitong Fund stated.
Regarding valuation concerns, a public fund manager in North China made a detailed data comparison: “At the peak of the dot-com bubble in 2000, leading companies traded at P/E ratios as high as 150x, severely overdrawing their future prospects. In contrast, leading AI companies in 2025 trade at less than 40x earnings, supported by strong financial performance.” He also cautioned, however, that some domestic AI unicorns carry inflated valuations and lack practical application scenarios, warning investors against speculation driven by “FOMO” (fear of missing out). He suggested focusing on two metrics, P/E (price-to-earnings ratio) and ROI (return on investment), to assess the “payback period.”
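The payback-period reading of P/E and ROI mentioned above can be made concrete: a P/E ratio is the number of years of current earnings needed to recoup the share price, and ROI gives the analogous figure for a capital project. A minimal sketch with hypothetical inputs:

```python
# Minimal sketch of the "payback period" screen suggested above.
# All input figures are hypothetical examples, not real company data.

def payback_years_from_pe(price: float, eps: float) -> float:
    """P/E ratio read as years of constant earnings to recoup the price."""
    return price / eps

def payback_years_from_roi(annual_return: float, investment: float) -> float:
    """Years to recoup an investment at a constant annual return."""
    return investment / annual_return

# A hypothetical stock at $120 with $3 EPS trades at 40x earnings,
# i.e. a 40-year payback at constant earnings.
print(payback_years_from_pe(120.0, 3.0))
# A hypothetical project returning $25/year on $100 invested pays back in 4 years.
print(payback_years_from_roi(25.0, 100.0))
```

Both functions ignore earnings growth and discounting, which is exactly why the fund manager pairs the static P/E with forward ROI: a 40x multiple is only defensible if returns are expected to grow into it.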
Feng Ludan also noted that the AI sector had indeed accumulated significant gains in the earlier period, reflecting strong market consensus on this track. However, the rise in stock prices was supported by earnings realization rather than purely speculative concept trading. Considering the high growth potential for the future, and from the perspective of dynamic valuation, the sector as a whole has not shown obvious signs of bubble formation.
“For an industrial revolution like AI, which is in its explosive growth phase, we cannot simply apply the static mindset of 'low valuation equals a value trough' traditionally used for cyclical industries. What we call a 'value trough' refers more to 'growth space that the market has not yet fully priced in' or 'sub-sectors with significant expectation gaps.' Current investment opportunities coexist with risks: AI technology is at the dawn of accelerated iteration and commercial implementation, and the industry's ceiling is extremely high,” Feng Ludan added.
Cao Xuchen believes that the biggest difference between the Internet and AI revolutions lies in the threshold. “The barrier to entry for the Internet industry is low, whereas for the AI industry, it is extremely high.” He predicts that cloud vendors are highly likely to complete their first round of financing for computational power expansion by 2026, with relatively controllable bubble risks. The real divergence may occur at the second round of financing in 2027.
“Whether or not a bubble bursts hinges on whether the pace at which industrial applications are put into practice can sustain the high stock prices,” Cao Xuchen said. “If blockbuster AI applications emerge between 2026 and 2027, then the current AI sector is not only free of a bubble but may even be undervalued.”
Editor/melody