IBM’s New POWER9 Chip Was Built for AI and Machine Learning
In a world that requires ever more compute power to handle resource-intensive workloads like artificial intelligence and machine learning, IBM enters the fray with its latest-generation Power chip, the Power9.
The company intends to sell the chips to third-party manufacturers and to cloud vendors, including Google. Meanwhile, it’s releasing a new computer powered by the Power9 chip, the AC922, and it intends to offer the chips as a service on the IBM Cloud. “We generally take our technology to market as a complete solution,” explained Brad McCredie, IBM fellow and vice president of cognitive systems.
The company designed the new chip specifically to improve performance on common AI frameworks like Chainer, TensorFlow and Caffe, and claims speedups of nearly 4x for workloads running on those frameworks.
If it works as described, data scientists building and running models on a Power9-powered machine should see their training jobs run faster and their models come together more quickly.
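To make that claim concrete, here is a minimal sketch of how a data scientist might check that a framework actually sees the GPUs on such a system and measure the CPU-versus-GPU gap for a single operation. It assumes a TensorFlow 2.x build with GPU support; the device names, matrix size and timing approach are illustrative, not an IBM-provided benchmark.

```python
# Minimal sketch: confirm TensorFlow sees the attached GPUs and time one
# large matrix multiply on CPU vs. GPU. Assumes a GPU-enabled TensorFlow
# build; device names and sizes are illustrative only.
import time
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {len(gpus)}")

def timed_matmul(device: str, n: int = 4096) -> float:
    """Run one n x n matmul on the given device and return elapsed seconds."""
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        start = time.perf_counter()
        c = tf.linalg.matmul(a, b)
        _ = c.numpy()  # force execution before stopping the clock
    return time.perf_counter() - start

cpu_s = timed_matmul("/CPU:0")
print(f"CPU matmul: {cpu_s:.3f}s")
if gpus:
    gpu_s = timed_matmul("/GPU:0")
    print(f"GPU matmul: {gpu_s:.3f}s (speedup ~{cpu_s / gpu_s:.1f}x)")
```

Any real evaluation of the advertised gains would of course use full training runs on these frameworks rather than a single kernel, but a quick check like this is a common first step on new hardware.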
Patrick Moorhead, principal analyst at Moor Insights & Strategy, believes IBM has really differentiated itself from the competition with this chip. “Power9 is a chip which has a new systems architecture that is optimized for accelerators used in machine learning. Intel makes Xeon CPUs and Nervana accelerators and NVIDIA makes Tesla accelerators. IBM’s Power9 is literally the Swiss Army knife of ML acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today,” Moorhead said.
If you’re thinking that Nvidia has grabbed a good deal of the AI and machine learning workloads, that hasn’t escaped IBM’s notice either, and the company has been working closely with the GPU maker. In fact, McCredie says IBM built a system bus that moves data between the two chip types much faster than competing systems.
“Modern workloads are becoming accelerated and the Nvidia GPU is a common accelerator. We have seen this trend coming. We built a deep relationship with them and a partnership between the Power system and the GPU. We have a unique bus that runs between the processor and the GPU and has 10x peak bandwidth over competitive systems,” McCredie explained.
The new chips will also power supercomputers being built for the Lawrence Livermore and Oak Ridge national laboratories, including one called Summit. McCredie says those machines will be built from thousands of Power9-based servers at a cost of $325 million, a nice little burst of business for the new chip right out of the gate.
Chirag Dekate, research director for HPC, machine learning and emerging compute technologies at Gartner, says this release continues IBM’s aggressive push to capture high-growth market segments like artificial intelligence. “By aligning their strategy across segments like AI (specifically machine learning and deep learning), it enables IBM to better compete in hyperscale datacenter and broader market datacenter initiatives. This has a potential to drive direct revenue impact for IBM and enable new larger scale datacenter deployments,” Dekate explained.
The Power9 chip is generally available starting today.
Full story from TechCrunch HERE