Google announced its new CPU, named Axion, on Tuesday. It’s following Amazon Web Services and Microsoft Azure in creating custom Arm-based processors.
The move is another step forward in Google’s development of new computing resources. Axion is set to be available to Google Cloud customers later this year. Google says Axion will improve performance for “general-purpose workloads,” such as open-source databases, web and app servers, in-memory caches, data-analytics engines, media processing, and artificial-intelligence training.
Google says the new Axion chips will deliver 30% better performance than the fastest general-purpose Arm-based chips currently available in the cloud, as well as up to 50% better performance and 60% better energy efficiency than comparable current-generation x86-based chips.
CPUs, or central processing units, such as Axion are crucial to the AI arms race. Training complex AI models involves processing large data sets, work that CPUs help complete more quickly.
The computing power needed to train AI models is increasingly important. Major tech companies are in a race to scale up in AI, and Google is among them. It has been developing AI for more than a decade, incorporating it into its search engine, ad products, and YouTube recommendations. But its AI chatbot Gemini has trailed behind OpenAI’s ChatGPT.
Google’s decision to make its own chips may put it in competition with partners including Nvidia and Intel as it moves away from outside vendors. It could, however, save Google some money: buying AI chips is notoriously costly. Nvidia’s Blackwell chip, for example, is estimated to cost between $30,000 and $40,000.
Arm CEO Rene Haas said in a statement that the announcement “marks a significant milestone in delivering custom silicon that is optimized for Google’s infrastructure.”
In addition to the Arm-based CPU, Google also announced the general availability of TPU v5p, which it says is its most powerful, scalable, and flexible AI accelerator.