At its recent Data-Centric Innovation Summit, a media and analyst day held at its Santa Clara headquarters, Intel showed how it has grown its market from PC and server sales to new data center sales following the rise of cloud computing. Intel CPUs, FPGAs, and specialized processors are targeted at a range of use cases, closely tracking the rise of cloud-native computing and artificial intelligence (AI), among other applications. These sales underpin current IT trends and provide an independent view of the direction that computing and data centers are taking.
Navin Shenoy, Intel executive VP and GM of the Data Center Group, talked about the decreasing cost of technology (between 2012 and 2017 there was a 56% reduction in the cost of compute and a 77% decrease in the cost of storage), while performance increased by over 41 times between 2006 and 2017. Intel has revised its data-centric silicon total addressable market (TAM) opportunity upwards to over $200bn by 2022, spanning its four major markets: data center (network, memory, connectivity, and AI); non-volatile memory technology; IoT and advanced driver-assistance systems (ADAS); and FPGAs. While one-third of this business is enterprise conversion due to digital transformation, two-thirds is due to the cloud expanding the TAM. Also interesting is the rise in custom CPUs from 18% of CPU sales in 2013 to 50% in 2017, with these custom designs sold mostly to major cloud providers. These buyers usually design their own cloud-native infrastructure and have proprietary requirements for the CPUs. Another interesting statistic (it was a day of many statistics) is that nearly 75% of all data center traffic is internal traffic.
For the enterprise, Intel had detected a 4% decline in IT infrastructure spending over the period between 2014 and 2017, but it sees 2018 as a turnaround year, with enterprise spend increasing by 6% to date as this sector adopts hybrid cloud strategies and private clouds (private cloud adoption has doubled over the last five years to 12%). In line with this analysis, we have seen the private cloud market receive keener attention from vendors such as IBM, Red Hat, and Pivotal as they address the need for enterprise-grade cloud-native development platforms.
Intel Xeon has grasped the AI opportunity by supporting machine learning applications in inference mode. Its latest Xeon Scalable processor has significant performance improvements over previous-generation Xeon processors. Intel sees AI applications as a major driver of sales of its high-end CPUs and FPGAs. With a $1bn AI business in 2017, the company estimates an AI-related data center silicon TAM of between $8bn and $10bn by 2022. This is just one-third of the opportunity, because there are also edge servers and appliances, as well as endpoint IoT devices, vehicles, and mobile devices.
Compared with July 2017 CPU performance, the latest Intel Xeon Scalable processors show a 1.4x improvement in AI training and a 5.4x improvement in AI inference mode (using INT8 arithmetic), with next-generation Cascade Lake CPUs expected to achieve an 11x inference-mode improvement. In 2019 the first Intel Nervana NNP L-1000 processors, designed for AI training workloads, will become available. Intel recognizes AI as a huge opportunity and is making strides to secure its place in the market. GPUs (important for many AI training tasks) need CPUs, and while Intel also faces competition in CPUs, its latest and next-generation CPUs are competing well.
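The INT8 inference gains mentioned above come from quantization: replacing 32-bit floating-point weights and activations with 8-bit integers, which lets the hardware process more values per instruction at the cost of a small accuracy loss. The following is a minimal sketch of symmetric post-training quantization using NumPy; the function names and per-tensor scaling scheme are illustrative assumptions, not Intel's actual implementation.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values to int8 using a single per-tensor scale
    (symmetric quantization, an illustrative scheme)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, b_q, b_scale):
    """Multiply int8 operands in int32 accumulators (as INT8 inference
    hardware does), then rescale the result back to float32."""
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * b_scale)

# Compare the quantized matmul against a float32 reference.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)   # stand-in activations
b = rng.standard_normal((8, 3)).astype(np.float32)   # stand-in weights

a_q, a_s = quantize_int8(a)
b_q, b_s = quantize_int8(b)
approx = int8_matmul(a_q, a_s, b_q, b_s)
exact = a @ b
print(np.max(np.abs(approx - exact)))  # small quantization error
```

The int8 operands occupy a quarter of the memory of float32 and can be processed four at a time in the same vector-register width, which is the source of the inference speedups quoted above; the trade-off is the small rounding error visible in the comparison.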
Michael Azoff, Principal Analyst, Information Management