Large enterprises must decide which microprocessors to deploy across multiple edge-to-cloud scenarios while giving their workforce enough compute to run a new generation of applications that make heavier use of artificial intelligence (AI) and demanding analytics. With the latest microprocessors offering features for AI workloads, from general-purpose CPUs to dedicated AI hardware accelerators, the choice is difficult given the wide range of options available. Omdia has produced "Implications for investing in a new microprocessor: essential checklist," a research note to help navigate the decision-making.
The choice is doubly difficult for AI-specific workloads because these have many complex dimensions: is the microprocessor designed for training or inference; does it operate under power constraints; is there a software stack that makes the microprocessor easy for developers to use; and so on. The report offers recommendations for large enterprise evaluators, who should consider the applications likely to run over the lifespan of the new infrastructure, planning for anticipated near-term needs as well as current ones.
When enterprises evaluate whether to adopt a new chip, they often face considerable investment decisions. For example, a cloud-based product or service with a global audience may require millions of data processing operations per hour, and providing the right processor in the data center is a major undertaking in terms of cost, resources, and effort. The research note uses a set of eleven criteria to support the evaluation process, going beyond basic performance statistics. These criteria are combined into a total cost of ownership (TCO) that helps evaluate the real-world cost of using a particular microprocessor. Applying them helps avoid an expensive mistake, or a capability gap in which applications lack the right microprocessor.
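To make the idea of rolling up multiple evaluation criteria into a single TCO figure concrete, here is a minimal sketch. The criterion names, dollar values, and the simple additive model are illustrative assumptions for this example only; they are not taken from the Omdia research note, which defines its own eleven criteria.

```python
# Hypothetical TCO roll-up: sum per-criterion annualized costs into one figure.
# All criteria and values below are illustrative assumptions, not Omdia's.

def tco(costs: dict[str, float]) -> float:
    """Combine per-criterion annualized costs (USD/year) into a single TCO."""
    return sum(costs.values())

criteria = {
    "hardware_acquisition": 120_000.0,  # amortized purchase price per year
    "power_and_cooling":     35_000.0,  # energy at expected utilization
    "software_stack":        20_000.0,  # licenses, porting, developer tooling
    "operations_staffing":   40_000.0,  # administration and maintenance
    "capability_gap_risk":   10_000.0,  # expected cost of unsupported workloads
}

annual_tco = tco(criteria)
print(f"Annual TCO: ${annual_tco:,.0f}")
```

Even this toy version shows why TCO matters: a chip with the lowest acquisition cost can still lose once power, software effort, and the risk of a capability gap are priced in.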