Global Hybrid Memory Cube (HMC) and High-bandwidth Memory (HBM) Industry

  • September 2020
  • 263 pages
  • Report ID: 5798790
  • Format: PDF
The global Hybrid Memory Cube (HMC) and High-Bandwidth Memory (HBM) market is projected to reach US$4.7 billion by 2025, driven by the blistering pace of growth of AI-assisted technologies, the increase in AI workloads, and the ensuing need for more memory in AI servers. Making data readily available is key to successful AI initiatives, and this requires storing data closer to processing tasks to speed up data processing and deliver business value through timely, actionable insights. On average, an AI server requires over eight times the DRAM capacity and over three times the SSD capacity of a traditional server.

This need for memory will only grow bigger and more urgent with the growth of deep learning and machine learning, the expanding size of neural networks, and the emergence of newer and more complex architectures such as feedforward networks, radial basis function networks, Kohonen self-organizing networks, recurrent neural networks (RNNs), convolutional neural networks (CNNs), and modular neural networks. Machine learning (ML), for instance, involves continuously running algorithms against historical data, forming a hypothesis, and analyzing new data in real time as it is generated and fed through an IoT system. Similarly, in deep learning, incoming processed data sets are used to train multi-layered neural networks that continuously learn to interpret data with greater speed and accuracy. To achieve all of this efficiently and effectively, algorithms need dynamic, on-the-go access to cold data (old historical data), warm data (recently generated data), and hot data (current sensor-generated data).
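
The cold/warm/hot distinction maps naturally onto a tiered store that promotes data toward memory as it is accessed and demotes it toward storage as it ages. The Python sketch below illustrates the idea; the class name, tier capacities, and LRU-style demotion policy are assumptions made for illustration, not details from the report.

```python
# Illustrative sketch: a toy tiered data store for an ML pipeline.
# "Hot" stands in for in-memory data, "warm" for recently generated
# data, and "cold" for old historical data parked on SSDs.
from collections import OrderedDict

class TieredStore:
    def __init__(self, hot_capacity=4, warm_capacity=16):
        self.hot = OrderedDict()   # current, frequently accessed data (RAM)
        self.warm = OrderedDict()  # recently generated data
        self.cold = {}             # old historical data (e.g., on SSD)
        self.hot_capacity = hot_capacity
        self.warm_capacity = warm_capacity

    def put(self, key, value):
        """New sensor data lands in the hot tier."""
        self.hot[key] = value
        self._evict()

    def get(self, key):
        """Reads promote data back to the hot tier on access."""
        for tier in (self.hot, self.warm, self.cold):
            if key in tier:
                value = tier.pop(key)
                self.hot[key] = value  # promote to hot
                self._evict()
                return value
        raise KeyError(key)

    def _evict(self):
        # Demote least-recently-touched items: hot -> warm -> cold.
        while len(self.hot) > self.hot_capacity:
            k, v = self.hot.popitem(last=False)
            self.warm[k] = v
        while len(self.warm) > self.warm_capacity:
            k, v = self.warm.popitem(last=False)
            self.cold[k] = v

store = TieredStore()
for i in range(30):
    store.put(f"reading-{i}", i)       # fresh sensor data enters hot
print(len(store.hot), len(store.warm), len(store.cold))  # 4 16 10
store.get("reading-0")                 # old record promoted back to hot
```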

AI and machine learning have changed the computing paradigm: the execution time of a program now depends on memory transfers rather than on processors, creating the need for greater memory bandwidth. This scenario is priming the in-memory computing paradigm. In other words, the lines between memory and compute are rapidly blurring, with AI and machine learning requiring memory-rich processing and compute-capable memory. Gaining new interest is the Hybrid Memory Cube (HMC), a next-generation, high-performance RAM interface for TSV-based (through-silicon via) stacked DRAM. Because AI workloads frequently need cold data buried in SSDs, the high memory density of HMC makes that cold data readily usable by transferring it into RAM as hot data. Benefits of HMC include higher bandwidth (up to 400 GB/s), increased power efficiency, lower system latency, lower energy consumption, a higher request rate for multiple cores, and greater memory packing density.

The United States and Europe represent large markets worldwide, with a combined share of 73.4%. China ranks as the fastest-growing market, with a 36.2% CAGR over the analysis period, supported by the country's herculean efforts to challenge the world, and especially the U.S., in the AI race. The National Development and Reform Commission (NDRC) remains committed to encouraging R&D in AI and machine learning, and giants such as Baidu, Alibaba, Tencent, and Huawei are actively involved in and committed to AI R&D. Against this backdrop, as AI ecosystems evolve and proliferate, enabling hardware such as memory chips and processors will witness robust growth in the country.
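
The claim above that execution time is increasingly set by memory transfers can be made concrete with a roofline-style back-of-the-envelope calculation. The sketch below uses the 400 GB/s HMC bandwidth figure cited in this section; the 10 TFLOP/s compute peak and the matrix-vector example kernel are illustrative assumptions, not figures from the report.

```python
# Roofline-style estimate: a kernel is bounded by the slower of its
# compute time and its memory-transfer time. Only the 400 GB/s HMC
# bandwidth figure comes from the text; the rest is assumed.

PEAK_FLOPS = 10e12          # assumed accelerator peak: 10 TFLOP/s
HMC_BANDWIDTH = 400e9       # HMC peak bandwidth from the text: 400 GB/s

def execution_time(flops, bytes_moved):
    """Return (time in seconds, which resource bounds the kernel)."""
    compute_time = flops / PEAK_FLOPS
    memory_time = bytes_moved / HMC_BANDWIDTH
    bound = "memory" if memory_time > compute_time else "compute"
    return max(compute_time, memory_time), bound

# Example: a large fp32 matrix-vector multiply, a low-arithmetic-intensity
# pattern typical of inference (about 2 FLOPs per 4-byte element read).
n = 16384
flops = 2 * n * n                  # one multiply-add per matrix element
bytes_moved = 4 * n * n            # read the fp32 matrix once
t, bound = execution_time(flops, bytes_moved)
print(f"{bound}-bound, {t * 1e6:.1f} microseconds")
```

Under these assumptions the kernel spends roughly fifty times longer moving data than computing on it, which is precisely why memory bandwidth, rather than processor throughput, gates such AI workloads.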