Showing posts with label AI. Show all posts

Saturday, August 26, 2023

SK Hynix Leads DRAM Industry's Rebound in Q2 with Revenue Surge, Reclaims No. 2 Position

South Korea's SK Hynix Inc. staged a substantial resurgence in the DRAM chip sector during Q2, reclaiming the second-largest global position ahead of Micron Technology Inc., which now stands third. The chipmaker's DRAM shipments surged nearly 50%, lifting its revenue to $3.44 billion in the April-June period. Notably, SK Hynix excelled in shipments of DDR5 and HBM chips, products with higher average selling prices (ASPs) than standard commodity DRAM, boosting its ASP by 7-9% over the previous quarter. In contrast, market leader Samsung Electronics saw its ASP drop 7-9% while retaining the top position, and third-place Micron kept its ASP relatively stable on DDR5 shipments. Overall, the DRAM industry posted a 20.4% quarter-over-quarter revenue increase in Q2, signaling a potential turnaround in the sector.

SK Hynix leads DRAM industry’s Q2 revenue rebound, retakes No. 2 spot - KED Global

Friday, August 25, 2023

AI Chip Market Poised to Soar: Gartner Predicts Revenue to Reach $53 Billion in 2023, Double by 2027

Gartner forecasts that worldwide AI chips revenue will reach $53.4 billion in 2023, an increase of 20.9% from 2022. The growth is driven by developments in generative AI and the increasing use of a wide range of AI-based applications, such as natural language processing, computer vision, speech recognition and machine learning.

The AI semiconductor industry is on the brink of a remarkable surge, as outlined by Gartner's latest forecast. Predicting an impressive revenue increase of 20.9%, the industry is set to reach a staggering $53.4 billion in 2023. This upward trajectory shows no signs of slowing down, with anticipated growth rates of 25.6% in 2024, culminating in an AI chips revenue forecast of $67.1 billion. However, the real eye-opener lies in Gartner's projection for 2027, where the AI chips market is poised to more than double, reaching an astonishing $119.4 billion. 
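As a back-of-the-envelope sanity check (not part of Gartner's report), the quoted figures are mutually consistent; a short sketch makes the implied 2024 revenue and the 2023-2027 growth rate explicit:

```python
# Back-of-the-envelope check of the Gartner figures quoted above.
# Dollar values are in billions; only the forecast numbers are taken as given.
rev_2023 = 53.4
rev_2024 = rev_2023 * 1.256          # 25.6% growth -> ~$67.1B, matching the forecast
rev_2027 = 119.4

# Implied compound annual growth rate (CAGR) from 2023 to 2027
cagr = (rev_2027 / rev_2023) ** (1 / 4) - 1

print(f"2024 forecast:   ${rev_2024:.1f}B")
print(f"2023-2027 CAGR:  {cagr:.1%}")
```

The 2027 figure implies the market would need to grow at roughly 22% per year for four straight years, which is why "more than double" is the headline claim.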

This meteoric rise is attributed to the expanding landscape of AI-based applications in data centers, edge devices, and more, necessitating the deployment of high-performance graphics processing units (GPUs) and tailored semiconductor devices. Notably, custom-designed AI chips are expected to become a staple, replacing prevalent architectures and accommodating the growing demand for optimized AI workloads. The consumer electronics sector is also embracing this transformation, with the value of AI-enabled application processors predicted to surpass $1.2 billion by the close of 2023. The future shines brightly for AI chips, as generative AI techniques and hyperscalers' interests drive innovation and efficiency in deploying AI applications. Gartner's insights underscore the imminent revolution in the semiconductor industry, ushering in an era of unprecedented growth and potential.

Tuesday, May 7, 2019

Applied Materials - The AI Era is Driving Innovations in Memory

[Applied Materials Blog] Industries from transportation and healthcare to retail and entertainment will be transformed by the Internet of Things, Big Data and Artificial Intelligence (AI), which Applied Materials collectively calls the AI Era of Computing.

The previous computing eras—Mainframe/Minicomputer, PC/Server and Smartphone/Tablet—all benefitted from advances in Moore’s Law whereby 2D scaling was accompanied by simultaneous improvements in performance, power and area/cost—also called “PPAC.”

While AI Era applications are booming, Moore’s Law is slowing; as a result, the industry needs breakthroughs beyond 2D scaling to drive PPAC in new ways. Specifically, we need new computing architectures, new materials, new structures—especially area-saving 3D structures—and advanced packaging for die stacking and heterogeneous designs.
 

The AI Era is Driving a Renaissance in Semiconductor Innovation (Applied Materials Blog)
 
AI Era architectural changes are influencing both logic and memory. Machine learning algorithms make heavy use of matrix multiplication operations that are cumbersome in general-purpose logic, and this is driving a move to accelerators and their memories. AI compute includes two distinct memory tasks: first, storing the intermediate results of calculations; and second, storing the weights associated with trained models.
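To make the two memory tasks concrete, here is a minimal NumPy sketch (illustrative only, not from the article): the forward pass of a small two-layer network is dominated by matrix multiplications, the weight matrices are the persistent trained-model state, and the intermediate activations are the transient results an accelerator must buffer between steps.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 512))       # a batch of inputs
W1 = rng.standard_normal((512, 1024))    # layer-1 weights (persistent model state)
W2 = rng.standard_normal((1024, 256))    # layer-2 weights (persistent model state)

h = np.maximum(x @ W1, 0.0)              # matmul + ReLU: intermediate result to store
y = h @ W2                               # second matmul consumes that intermediate

# Rough multiply-accumulate (MAC) count for the two matmuls
macs = x.shape[0] * (512 * 1024 + 1024 * 256)
print(y.shape, f"{macs:,} MACs")
```

Even at this toy scale, the batch incurs roughly 50 million multiply-accumulates, which is why general-purpose logic gives way to accelerators with dedicated memories for weights and activations.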

Performance and power are important in the cloud and at the edge, and innovations in memory can help. One approach using existing memory technologies is "near memory," whereby large amounts of working memory are condensed, placed in close physical proximity to logic, and connected via high-speed interfaces; 3D stacking and through-silicon vias, for example, are gaining traction here. One major drawback of SRAM and DRAM as working memories in these applications is that they are volatile: they need a constant supply of power to retain data, such as model weights.

To reduce power in the cloud and edge, designers are evaluating new memories that combine high performance with non-volatility so that power is only needed during active read and write operations. Three of the leading new memory candidates are magnetic random-access memory (MRAM), phase-change RAM (PCRAM) and resistive RAM (ReRAM). 

Full article: Applied Materials Blog LINK
 
Additional read: Manufacturing Requirements of New Memories LINK