
Microsoft's reduction of Datacenter has caused panic, but Goldman Sachs reassures the market: the demand for AI computing power remains strong, with no changes to the $80 billion expenditure.

Zhitong Finance ·  Feb 26 15:55

Recently, news that US tech giant $Microsoft (MSFT.US)$ canceled leases for two large data centers sparked heated discussion in global stock markets, pouring cold water on the artificial intelligence (AI) investment boom that has run hot worldwide since 2023 and sharply dampening investors' bullish sentiment toward AI infrastructure buildout, especially toward AI chips, high-performance networking hardware, and power equipment.

But after the news broke, Wall Street giant Goldman Sachs rushed out a research report to steady market sentiment, stressing that Microsoft's capital expenditure of as much as $80 billion will not be cut to any degree, and reiterating its "Buy" rating on Microsoft stock along with a $500 price target.

Despite recent reports that Microsoft is adjusting its AI data center leasing plans, which triggered market panic about an imminent retreat of the AI boom, Goldman Sachs believes the move actually reflects a cautious approach to AI investment: prioritizing AI infrastructure projects with high returns on investment while shifting capital expenditure toward shorter-lifecycle, relatively lower-cost assets (such as edge-computing equipment). This helps balance short-term costs against long-term gains.

Goldman Sachs stated in its research report that Microsoft is strong across every layer of cloud computing (application, platform, and infrastructure), spanning applications such as Copilot, GitHub Copilot, and Microsoft 365 Copilot down to the underlying Azure AI infrastructure. This breadth positions it to benefit fully as generative AI shifts from the infrastructure layer to the platform and application layers, which bodes well for Microsoft's long-term profitability.

Goldman Sachs indicated that if Microsoft continues to effectively integrate its generative AI technologies across product lines and delivers strong business revenue performance, any delays, cancellations, or changes to datacenter leases are unlikely to negatively impact Microsoft's AI-related business revenue. The Goldman Sachs analysis team also believes that by optimizing the structure of capital expenditures (CapEx) and shifting toward high-margin AI inference layer businesses, Microsoft's EPS growth rate is expected to accelerate again in the future.

Goldman Sachs remains bullish on Microsoft, emphasizing that Microsoft is a big winner benefiting from the AI boom.

As the "low-cost computing power" paradigm led by DeepSeek sweeps the globe, the costs of AI training and inference continue to fall. This in turn accelerates the penetration of AI application software, especially generative AI software and AI agents, into industries of all kinds, fundamentally reshaping the efficiency of business workflows and lifting sales. Cloud computing and software giants such as Microsoft, Amazon, Oracle, Alibaba, and Tencent may see exponential growth in AI revenue and profits, making them the biggest winners of DeepSeek's global rise and the "DeepSeek low-cost computing shockwave."

If killer AI applications and AI agents begin to emerge at scale from 2025, it would be a major bullish factor for Microsoft, Amazon, and other global cloud giants. Whether at the early stage, as AI software developer ecosystem platforms, or at the later stage, as immense cloud AI inference computing systems, these AI software systems rely heavily on the powerful computing platforms those cloud giants provide.

These cloud giants focus on building developer ecosystems around generative AI for both business-facing (B-end) and consumer-facing (C-end) AI applications, aiming to comprehensively lower the technical barrier for non-IT professionals across industries to build AI applications.

Unlike OpenAI, xAI, and other leaders that offer comparable AI chatbots and AI agents, software companies such as Microsoft, Amazon, ServiceNow, and Datadog have their own one-stop "AI application platforms" within their software ecosystems, and stand to benefit substantially from the unprecedented global wave of enterprise AI deployment.

They also provide cloud AI training and inference computing resources based on NVIDIA AI GPUs or self-developed AI chips, aiming to greatly simplify one-stop deployment of applications built on large AI models and to supply the inference computing resources that AI workloads require.

Goldman Sachs stated that due to the wave of data center construction or expansion sweeping the globe in 2023 and 2024, the growth rate of capital expenditure related to AI is inevitably slowing down because of the high base effect. Generative AI is shifting from the infrastructure layer to the SaaS platform and the application layer covering various industries. As one of the few super cloud computing service providers globally with extensive commercial applications, Microsoft can fully leverage this transition, which will undoubtedly be positive for Microsoft's long-term profitability.

Goldman Sachs' analysis team added that Microsoft has a strong market share across various areas in the cloud layer, including applications, platforms, and underlying infrastructure. Therefore, Goldman Sachs believes Microsoft is capable of capturing some long-term and sustained growth trends, such as generative AI software, AI agents sweeping across various industries globally, public cloud consumption and SaaS, digital transformation, AI/ML, business intelligence/analytics, and DevOps.

In addition, the Goldman Sachs team emphasized in its research report that Microsoft, with its large user base (more than 400 million commercial users), full-stack cloud services (IaaS/PaaS/SaaS), and generative AI tools (the Copilot series), holds a unique advantage at the AI application layer. Microsoft's AI technology stack runs from underlying compute infrastructure (Azure AI infrastructure) to upper-layer AI applications (Copilot, GitHub Copilot); combined with its enterprise customer base (for example, more than $300 billion in remaining performance obligations, or RPO), this forms a "data-model-application" closed loop in the Microsoft ecosystem that is extremely difficult to replicate.

Goldman Sachs stated that as Microsoft's cloud computing business reaches an annualized revenue scale of roughly $100 billion, operating leverage will continue to drive significant EPS growth, with EPS expected to double by fiscal year 2028. Goldman Sachs reiterated its "Buy" rating on Microsoft in the report, with a 12-month price target of $500. By comparison, Microsoft's stock closed at $397.90 in US trading on Tuesday.

Microsoft reaffirms its 80 billion dollar spending plan to stabilize market confidence in its AI business.

According to recent reports from several media outlets, Microsoft has canceled multiple lease agreements with private data center operators, involving a total power capacity of hundreds of megawatts. Among them, media tracking data indicate Microsoft pulled back on two large data centers, in Kenosha, Wisconsin and Atlanta, Georgia.

The former reportedly stemmed from Microsoft's withdrawal from the Stargate project, with the Kenosha, Wisconsin capacity transferred to Stargate, while the latter reflected Microsoft's data center team overestimating AI computing demand around Atlanta. In addition, Microsoft has suspended converting negotiated and signed statements of qualifications (SOQs) into lease agreements.

However, Microsoft quickly issued a statement indicating that the canceled or delayed data center leases would not affect the company's capital expenditure of up to $80 billion for the current fiscal year, reaffirming that target while acknowledging that strategic adjustments or a slowdown in infrastructure construction may occur in certain regions.

In response to market skepticism, Microsoft promptly reacted. A company spokesperson stated via email on Monday local time: "Our plan to invest over 80 billion USD in AI-related infrastructure this fiscal year is still on track, and we will continue to grow at a record pace to meet customer demand for AI computing power."

Despite reaffirming its investment commitment, Microsoft also acknowledged that strategic adjustments might take place. A company spokesperson said in a statement: "Thanks to the significant investments we have made so far, we are well positioned to meet current and continuously growing customer demand. In just the past year we added more capacity than in any prior year in our history. While we may strategically adjust the pace or scale of infrastructure in certain areas, we will continue to grow strongly in all regions. This allows us to invest and allocate resources toward areas of future growth."

Analysts note that Microsoft's reaffirmation of the $80 billion spending plan underscores its confidence in AI cloud computing demand. At the same time, with DeepSeek leading a new "low cost + high performance" paradigm for AI models, future AI training costs and training compute demand will likely fall well below current levels, while demand for AI inference computing is expected to grow substantially. This also helps explain why Microsoft is trimming data center leases: as training compute demand declines, OpenAI, one of Microsoft's largest customers, may no longer need such a vast pool of training resources.

Gavin Baker, managing partner and chief investment officer of Atreides Management, recently pointed out that OpenAI's first-mover advantage is waning, with Google's Gemini, xAI's Grok-3, and DeepSeek's latest models all reaching technical levels comparable to GPT-4's. Baker said that in the future only two or three giant data centers will be needed for pre-training, with compute resources allocated roughly 5/95 between pre-training and inference.

In his latest article, Baker emphasized that inference for large AI models is extremely compute-intensive and requires powerful AI computing infrastructure for models to complete inference tasks efficiently. Unlike the earlier split of computing resources, roughly half for pre-training and half for inference, AI computing infrastructure is expected to shift soon toward "5% pre-training, 95% inference," making excellent inference infrastructure crucial.

Editor: ping


