Chiplets for the future of AI
By Arvind Kumar, Principal Research Staff Member at the IBM T.J. Watson Research Center, USA
The explosive growth of AI is accompanied by ever-increasing demands for compute, memory, and bandwidth. Meeting these demands sustainably, at a time when logic scaling offers diminishing returns, aligns well with the industry's move toward chiplets. In this talk I will discuss the opportunities for heterogeneous integration to meet the challenging demands of AI, examining both the architecture requirements and the advanced packaging technologies needed to keep system performance on an upward trajectory.
Biography:
Dr. Arvind Kumar is a Principal Research Staff Member at the IBM Thomas J. Watson Research Center, where he manages a team focused on next-generation AI hardware, including heterogeneous integration. He has given several invited talks and served as a panelist and short-course instructor on this topic at major conferences. He holds over 50 patents and is an IBM Master Inventor. Dr. Kumar earned his SB, SM, and PhD degrees in Electrical Engineering and Computer Science from MIT and held an SRC graduate fellowship during his doctoral studies.
Related Videos
- The future of AI: Chiplets & Lasers
- Connectivity for AI Everywhere: The Role of Chiplets
- Podcast: AI, Chiplets, and the Future of Semiconductors
- Embedded World, Nuremberg: Arm’s Suraj Gajendra on AI, Chiplets, and the Future of Automotive Compute