GPUs poised for major performance enhancements under preliminary HBM4 specs

HBM4 is designed to further enhance data processing rates, offering higher bandwidth and increased capacity per die and/or stack compared to its predecessor, HBM3. It also aims to maintain lower power consumption, which is crucial for large-scale computing operations.
Technical advancements include a doubled channel count per stack compared to HBM3, a larger physical footprint, support for both HBM3 and HBM4 from a single controller, and support for 24 Gb and 32 Gb layers. There is also an initial agreement on speed bins up to 6.4 Gbps, with discussions about higher frequencies ongoing. Absent from the preliminary specs, however, is direct integration of HBM4 memory onto processors, which Tom's Hardware calls perhaps the most intriguing prospect for the new memory type.
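The doubled channel count translates directly into per-stack bandwidth. As a rough back-of-the-envelope sketch (the interface widths are assumptions, not figures from the article: HBM3 uses a 1024-bit interface, so a doubled channel count would imply roughly 2048 bits for HBM4):

```python
def stack_bandwidth_gbs(pin_speed_gbps: float, interface_bits: int) -> float:
    """Theoretical per-stack bandwidth in GB/s: pin speed times
    interface width, divided by 8 to convert bits to bytes."""
    return pin_speed_gbps * interface_bits / 8

# Comparison at the 6.4 Gbps speed bin mentioned in the preliminary specs.
# Interface widths below are assumed, not taken from the article.
hbm3 = stack_bandwidth_gbs(6.4, 1024)  # ~819 GB/s per stack
hbm4 = stack_bandwidth_gbs(6.4, 2048)  # ~1638 GB/s per stack
print(f"HBM3 @ 6.4 Gbps: {hbm3:.1f} GB/s")
print(f"HBM4 @ 6.4 Gbps: {hbm4:.1f} GB/s")
```

Under these assumptions, HBM4 would double per-stack bandwidth at the same pin speed, with any higher speed bins pushing that figure further still.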
HBM4 is particularly important for generative artificial intelligence, high-performance computing, high-end graphics cards, and servers. AI workloads in particular stand to benefit from the standard's bandwidth and capacity gains, which will let them handle larger datasets and perform complex calculations more quickly.