One of the many areas Intel focused on during its Architecture Day 2020 event was its ongoing work to improve interconnect technology in both 2.5D and stacked 3D chip configurations. We've discussed these technologies in the past; they range from EMIB (used for Kaby Lake-G) and Foveros (Lakefield) to upcoming concepts like Intel's Omni-Directional Interconnect (ODI), which blends the previous two approaches.
At the same time that Intel has been talking up its interconnect technology, however, the company has been struggling on the CPU front. Intel first discussed Foveros in late 2018, and in the 18 months since, it has faced down a major 10nm delay and the recent news about its 7nm product lines. Given the difficulties the company has faced, you'd be forgiven for thinking its sudden interest in interconnects and packaging reflected a need to find a positive topic of conversation.
In this case, Intel's focus on how we connect chips together isn't an attempt to avoid talking about its manufacturing problems. The only thing that makes chiplets an advantageous strategy compared with existing methods of die integration is that manufacturers are deploying new technologies to minimize the power and latency impact of moving various components farther apart.
One way chip designers can continue to improve transistor density in the face of weaker process node scaling is by stacking more chips on top of one another. 3D NAND has been on the market for several years, but it took longer to develop a method of stacking logic chips on top of one another that didn't result in a metaphorical heap of melted wire and scorched silicon as soon as you ran a serious workload through it. Cost, TSV (through-silicon via) routing issues, and manufacturing integration have all been serious challenges to the adoption of high-end packaging technologies.
For a different example of this trend at work, consider HBM. AMD used High Bandwidth Memory for its Fury family of GPUs just over five years ago. If HBM had followed the adoption curve of earlier memory technologies, it would now be ubiquitous across both AMD and Nvidia's product lines. Instead, HBM has received a follow-up enhancement in the form of HBM2 without ever really going mainstream, and neither AMD nor Nvidia is expected to use HBM2 in their upcoming consumer refreshes. HBM2 still offers a power and performance advantage compared with GDDR6, but it's expensive and difficult enough to implement that both companies reserve it for their professional and enterprise GPUs.
As for why companies have pivoted from focusing on transistors to packaging, it's out of necessity. As the improvements offered by each new process node decline, companies are looking to optimize other aspects of their designs. 3D chip stacking could allow a CPU designer to minimize internal latency by positioning functional blocks on top of one another rather than simply placing them side by side. Pursuing more cost-effective methods of interconnection and aggregation is how we'll drive down the cost of mounting memory closer to the CPU and improve overall performance characteristics. The work Intel is discussing on the interconnect front is critical to long-term performance improvements and better power efficiency.
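A quick back-of-envelope calculation shows why the distance data travels matters so much. The energy cost of moving a bit is often quoted in picojoules per bit, and it rises sharply as a signal moves from on-die wires to an in-package interposer to a PCB trace. The figures below are rough, order-of-magnitude illustrations I've assumed for this sketch; they are not vendor-published numbers for any specific Intel, AMD, or Nvidia product.

```python
# Illustrative sketch: power spent purely on data movement at a fixed
# bandwidth, for three interconnect distances. All pJ/bit values are
# assumed ballpark figures, not measured or published numbers.

ENERGY_PJ_PER_BIT = {
    "on-die wire (stacked blocks)": 0.5,   # assumed: block-to-block, same package stack
    "2.5D interposer (HBM-style)": 4.0,    # assumed: short in-package link
    "off-package (GDDR-style)": 8.0,       # assumed: long PCB trace to discrete DRAM
}

def link_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Watts consumed just moving data at the given bandwidth."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12  # pJ -> J

if __name__ == "__main__":
    bw = 500.0  # GB/s, a GPU-class memory bandwidth
    for link, pj in ENERGY_PJ_PER_BIT.items():
        print(f"{link}: {link_power_watts(bw, pj):.1f} W at {bw:.0f} GB/s")
```

Even with these made-up constants, the shape of the result is the point: at the same bandwidth, the shorter the hop, the less power is burned simply shuttling bits around, which is why packaging that brings memory and logic closer together pays off.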