Source: Semiconductor Watch
Last year brought both good news and bad news for the Korean semiconductor industry.
The bad news is that the memory and storage industry entered a downward cycle: inventories soared and prices plummeted. As a result, South Korean semiconductor manufacturers suffered heavy losses, reporting red ink for several consecutive quarters.
The good news is that even in the downturn, HBM rose to prominence on the back of Nvidia's AI accelerator cards, becoming the only memory product able to buck the market and grow dramatically. Korean manufacturers hold roughly 90% of the HBM market, which offset the losses in the traditional memory business to some extent.
To support the development of HBM, the Korean government recently designated HBM a national strategic technology and will provide tax benefits to HBM suppliers such as Samsung Electronics and SK Hynix. The decision is part of the enforcement decree for the amendment to Korea's tax law. Compared with general R&D activities, national strategic technologies enjoy higher tax relief: small and medium-sized enterprises can receive credits of 40% to 50%, while large enterprises can receive 30% to 40%.
HBM remains hot in 2024. Nvidia's H100 and H200 are still the most sought-after GPUs on the market, so demand for HBM naturally keeps soaring. For Samsung and SK Hynix, with policy support and market momentum behind them, continuing to expand HBM production seems a no-brainer.
Expansion, and more expansion
The first is SK Hynix, the leader of HBM.
SK Hynix's third-quarter results last year were the best among all memory manufacturers, and on January 25 it released its results for the fourth quarter and full year of 2023:
Fourth-quarter revenue increased 47% year over year to 11.3055 trillion won, above analysts' expectations of 10.4 trillion won. Gross profit surged 9,404% year over year, for a gross margin of 20% (roughly 2.26 trillion won), the third consecutive quarter of recovery. Operating profit was 346 billion won (approximately RMB 1.85 billion), beating analysts' expected loss of 169.91 billion won, for an operating margin of 3%. Net loss was 1.3795 trillion won (approximately RMB 7.39 billion), wider than analysts' expected loss of 0.41 trillion won but sharply narrower than the previous quarter, for a net loss rate of 12%. EBITDA (earnings before interest, taxes, depreciation and amortization) was 3.58 trillion won, up 99% from the previous year.
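The margin figures in the report can be sanity-checked against the headline numbers. A quick sketch (amounts in trillions of won, as reported; the rounding is mine):

```python
# Sanity-check SK Hynix's Q4 2023 margins from the reported headline figures.
# Amounts are in trillions of won, as stated in the earnings release.
revenue = 11.3055
operating_profit = 0.346   # 346 billion won
net_loss = 1.3795

operating_margin = operating_profit / revenue * 100
net_loss_rate = net_loss / revenue * 100

print(f"operating margin: {operating_margin:.0f}%")  # 3%, matching the report
print(f"net loss rate: {net_loss_rate:.0f}%")        # 12%, matching the report
```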
SK Hynix attributed the performance mainly to sharply higher fourth-quarter sales of flagship products such as the AI memory chip HBM3 and high-capacity mobile DRAM, which grew 4x and 5x year over year respectively. Rising demand from AI servers and mobile applications improved the overall memory market in the final quarter of 2023.
HBM is the brightest spot in the report. SK Hynix said it plans to increase capital expenditure in 2024 and focus production on high-end storage products such as HBM, more than doubling HBM capacity compared with last year. Hynix has previously projected that its HBM shipments would reach 100 million units per year by 2030, and decided to set aside about 10 trillion won (about 7.6 billion US dollars) for facility capital expenditure in 2024, an increase of 43% to 67% over the estimated 6 to 7 trillion won invested in 2023.
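The year-over-year capex increase follows directly from the figures quoted: 10 trillion won in 2024 against an estimated 6 to 7 trillion won in 2023 works out to roughly 43% to 67% growth. A minimal check:

```python
# Capex growth implied by SK Hynix's reported figures (trillions of won).
capex_2024 = 10.0
capex_2023_low, capex_2023_high = 6.0, 7.0  # 2023 estimate range

low_end = (capex_2024 / capex_2023_high - 1) * 100   # vs the high 2023 estimate
high_end = (capex_2024 / capex_2023_low - 1) * 100   # vs the low 2023 estimate
print(f"increase: {low_end:.0f}% to {high_end:.0f}%")  # increase: 43% to 67%
```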
The focus of the expansion is building and enlarging plants. In June of last year, Korean media reported that SK Hynix was preparing to invest in back-end process equipment to expand the Icheon plant that packages HBM3; by the end of this year, the plant's back-end equipment capacity is expected to nearly double.
In addition, SK Hynix will also build a state-of-the-art manufacturing plant in Indiana, USA. According to two sources interviewed by the Financial Times, the South Korean chipmaker will produce HBM stacks at this factory. These stacks will be used for Nvidia GPUs produced by TSMC. The chairman of SK Group said that the plant is expected to cost 22 billion US dollars.
In contrast, Samsung seems somewhat passive on HBM. Samsung Electronics began expanding supply of fourth-generation HBM (HBM3) in the fourth quarter of last year and is currently in a transition period.
On January 31, during the fourth quarter and annual earnings conference call, Samsung Electronics stated that the memory business is expected to return to normal in the first quarter of this year. Kim Jae-joon, vice president of Samsung Electronics' memory business division, said, “We plan to actively respond to demand for HBM servers and SSDs related to generative AI, focusing on improving profitability. It is expected that the memory business will return to profit in the first quarter of this year.”
The key to recovering profits in the memory business is high-value products such as HBM and server memory. Notably, Samsung's HBM sales in the fourth quarter of last year increased 3.5 times year-on-year, and Samsung Electronics plans to concentrate the capabilities of its entire semiconductor division (including foundry and system LSI business units) to provide customized HBM to meet customer needs.
A representative from Samsung commented, “HBM sales are breaking records every quarter. In the fourth quarter of last year, sales increased by more than 40% month-on-month, and increased more than 3.5 times year-on-year. In the fourth quarter in particular, we targeted major GPU manufacturers as our customers.” The representative further predicted, “We have provided customers with 8-layer stacked samples of the next generation HBM3E and plan to begin mass production in the first half of this year. By the second half of the year, its share is expected to reach around 90%.”
Han Jin-man, executive vice president of Samsung's semiconductor business in the US, said in January of this year that the company has high expectations for high-capacity memory chips, including the HBM series, and hopes they will lead the rapidly growing field of artificial intelligence chips. "Our production of HBM chips this year will be 2.5 times that of last year, and will double again next year," he told reporters at the CES 2024 media conference.
Samsung has also officially revealed plans to raise maximum HBM production to 150,000 to 170,000 units per month by the fourth quarter of this year to compete for the HBM market in 2024. Earlier, Samsung Electronics spent 10.5 billion won to acquire Samsung Display's factory and equipment in Cheonan, South Korea to expand HBM production capacity, and also plans to invest 700 billion to 1 trillion won to build a new packaging line.
The battle for HBM4
In addition to expanding production, the two are also vying over the next generation of HBM standards.
HBM3E samples were provided at the end of last year, with qualification and mass production expected to be completed in the first quarter of this year. The next step is HBM4: its stack is expected to grow from the current maximum of 12 layers to 16, and it may adopt a 2048-bit memory interface. But the HBM4 standard has not yet been finalized, and the two Korean manufacturers have proposed different routes.
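The significance of a 2048-bit interface is easy to see from the bandwidth arithmetic: per-stack bandwidth is interface width times per-pin transfer rate. A rough sketch, using HBM3's JEDEC pin rate of 6.4 GT/s as a stand-in (HBM4's actual pin rate is not yet finalized, so the absolute figures are illustrative only):

```python
def stack_bandwidth_gbs(interface_bits: int, pin_rate_gts: float) -> float:
    """Peak per-stack bandwidth in GB/s: width (bits) * rate (GT/s) / 8 bits per byte."""
    return interface_bits * pin_rate_gts / 8

hbm3 = stack_bandwidth_gbs(1024, 6.4)  # HBM3: 1024-bit @ 6.4 GT/s -> 819.2 GB/s
hbm4 = stack_bandwidth_gbs(2048, 6.4)  # hypothetical HBM4 at the same pin rate
print(hbm3, hbm4)  # doubling the interface doubles bandwidth at equal pin speed
```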
According to Business Korea, SK Hynix is preparing to use a “2.5D fan-out” package for the next generation of HBM technology. The move is aimed at improving performance and reducing packaging costs. This technology has not been used in the memory industry before, but it is very common in the advanced semiconductor manufacturing industry, and is thought to have the potential to “completely change the semiconductor and foundry industry”. SK Hynix plans to announce research results using this packaging method as early as next year.
Specifically, 2.5D fan-out packaging arranges two DRAM dies horizontally and assembles them into a structure similar to an ordinary chip. With no substrate under the chip, the package is thinner, greatly reducing thickness when installed in IT equipment. At the same time, the technology bypasses the through-silicon via (TSV) process, provides more input/output (I/O) options, and reduces costs.
Today's HBM stacks sit next to the GPU and connect to it through an interposer; SK Hynix's new goal is to eliminate that middle layer entirely, placing HBM4 directly on GPUs from companies such as Nvidia and AMD, with TSMC preferred as the foundry.
According to the roadmap, SK Hynix will mass-produce the sixth-generation HBM (HBM4) as early as 2026. Hynix is also actively researching hybrid bonding technology, which is likely to be applied to HBM4 products.
Samsung, on the other hand, has taken a different path from Hynix, studying the application of photonics in the HBM interposer with the aim of solving challenges related to heat and transistor density.
The chief engineer of Samsung's advanced packaging team shared his views at the OCP Global Summit in October 2023. He said the industry has made significant progress integrating photonics with HBM through two main approaches. The first places a photonic interposer between the bottom package layer and the top layer containing the GPU and HBM to act as a communication layer; this approach is expensive, as it requires an interposer and photonic I/O for both the logic chip and the HBM.
The second approach separates the HBM memory module from the package and connects it directly to the processor using photonics. Rather than wrestling with complex packaging, separating HBM from the chip itself and linking it to the logic IC optically simplifies manufacturing and packaging costs for both, and requires no digital-to-optical conversion inside the circuit; only heat dissipation still needs attention.
A Samsung executive said in a blog post that the company aims to launch the sixth-generation HBM (HBM4) in 2025, featuring non-conductive film (NCF) assembly technology optimized for high-temperature thermal characteristics as well as hybrid copper bonding (HCB), in a bid to win dominance in the rapidly growing field of artificial intelligence chips.
As can be seen, the two Korean manufacturers are already in a fierce battle for the next generation of HBM standards.
Micron, sneak attack?
Compared with the two Korean manufacturers above, Micron is in a clearly weaker position: its HBM market share in 2023 was only around 5%, ranking third.
To close the gap, Micron is betting heavily on its next product, HBM3E. Micron CEO Sanjay Mehrotra said, "We are in the final stages of qualifying HBM3E for Nvidia's next-generation AI accelerator." Micron plans to begin shipping HBM3E memory in volume in early 2024, while stressing that the new product has drawn great interest across the industry, suggesting that Nvidia may not end up being the only customer for Micron's HBM3E.
In terms of specifications, Micron's 24GB HBM3E module stacks eight 24Gbit memory dies and is manufactured on the company's 1-beta (1β) process. Its data rate reaches 9.2 GT/s, and peak bandwidth per stack reaches 1.2 TB/s, 44% higher than the fastest existing HBM3 modules.
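Micron's headline numbers hang together arithmetically: eight 24Gbit dies give 24GB per stack, and 9.2 GT/s over HBM's 1024-bit interface gives close to 1.2 TB/s. A quick check:

```python
# Verify Micron's 24GB HBM3E figures: stack capacity and peak bandwidth.
dies = 8
die_gbit = 24
capacity_gb = dies * die_gbit / 8          # 8 bits per byte -> 24.0 GB per stack
bandwidth_tbs = 9.2 * 1024 / 8 / 1000      # GT/s * 1024-bit bus -> GB/s -> TB/s
print(capacity_gb, round(bandwidth_tbs, 2))  # 24.0 GB and ~1.18 TB/s per stack
```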
Looking further ahead, Micron has revealed its next generation of HBM memory, tentatively named HBM Next. HBM Next is expected to offer 36GB and 64GB capacities in various configurations, such as a 12-Hi stack of 24Gb dies (36GB) or a 16-Hi stack of 32Gb dies (64GB). Per-stack bandwidth would be 1.5 TB/s to over 2 TB/s, implying a total per-pin data rate above 11.5 GT/s.
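The quoted HBM Next capacities follow the same stack arithmetic: number of layers times per-die density. A minimal sketch:

```python
def stack_capacity_gb(layers: int, die_gbit: int) -> float:
    """Stack capacity in GB: stacked die count times die density in Gbit, over 8 bits/byte."""
    return layers * die_gbit / 8

print(stack_capacity_gb(12, 24))  # 12-Hi 24Gb stack -> 36.0 GB
print(stack_capacity_gb(16, 32))  # 16-Hi 32Gb stack -> 64.0 GB
```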
Unlike Samsung and SK Hynix, Micron does not plan to integrate HBM and logic into a single chip. On next-generation HBM, the Korean and American memory makers have clearly diverged. Micron's message to AMD, Intel, and Nvidia may be that combined chips such as HBM-on-GPU do deliver faster memory access, but relying on a single chip means greater risk.
US media observed that as machine-learning training models grow and training times lengthen, pressure will mount to cut run times by speeding up memory access and increasing per-GPU memory capacity. Giving up the competitive supply advantage of standardized DRAM in exchange for a locked-in HBM-GPU combined-chip design, even one with better speed and capacity, may not be the right way forward.
On HBM4, which has yet to be determined, Micron seems to want a “sneak attack.”
Final thoughts
Needless to say, HBM is an opportunity for every memory manufacturer. As long as the AI boom persists and Nvidia's GPUs remain in demand, everyone can keep selling highly profitable HBM and delivering strong financial results.
The two Korean manufacturers are not only competing in the market and racing each other to expand production, but are also contesting technology routes to win a say in the next-generation standard. We may see more moves from Samsung and SK Hynix on HBM this year.
Micron, unwilling to fall behind, has invested heavily in HBM. Compared with the Korean makers, Micron, which is close to Nvidia, has advantages of its own, and given its accumulated technology it may well become their biggest competitor.
For now, though, more than 90% of HBM production capacity is in the hands of SK Hynix and Samsung, and an all-Korean "civil war" is unavoidable.