The challenges of 5G New Radio (NR) beamforming are being met with advanced machine learning optimization algorithms running on field-programmable gate array (FPGA) devices. This was highlighted by Nokia when it was invited onto the keynote stage by Xilinx CEO Victor Peng at the recent Xilinx Developer Forum (XDF) 2018 in San Jose. Nokia's Dr Tero Rissa said that optimizing the tens of millions of parameters involved in beamforming is only feasible with machine learning technology. Beamforming is a technique for creating narrow radio beams that link end users with base stations; when end users are moving, such as in vehicles, it requires sophisticated beam-sweeping techniques to keep connections live. Beamforming is an essential component of 5G NR, reducing power consumption at base stations and improving efficiency. Xilinx has been working with tier-1 telcos on this challenge, and the specialized task promises a ready market for its new adaptive compute acceleration platform, Versal.
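To make the beamforming technique concrete, the sketch below shows the textbook phased-array version: a uniform linear array forms a narrow beam toward a chosen angle by applying per-antenna phase weights. This is purely illustrative of the underlying linear algebra and is not Xilinx's or Nokia's implementation; the array size, element spacing, and function names are assumptions for the example.

```python
import numpy as np

# Illustrative phased-array beam steering (not vendor code): an
# N-element uniform linear array points its main lobe at a target
# angle by applying a per-antenna phase shift.

def steering_weights(n_elements, angle_deg, spacing_wavelengths=0.5):
    """Phase weights that steer the array's main lobe to angle_deg."""
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(angle_deg))
    return np.exp(-1j * phase) / np.sqrt(n_elements)

def array_gain(weights, angle_deg, spacing_wavelengths=0.5):
    """Magnitude of the array response in the direction angle_deg."""
    n = np.arange(len(weights))
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(angle_deg))
    response = np.exp(1j * phase)
    return np.abs(weights @ response)

w = steering_weights(64, angle_deg=20.0)
print(array_gain(w, 20.0))   # peak gain at the steered angle: 8.0 (sqrt of 64)
print(array_gain(w, -40.0))  # far smaller gain off the beam
```

A real 5G NR system must re-solve this steering problem continuously for many users and antennas at once, which is where the parameter counts Nokia describes come from.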
Telcos have been developing complex domain-specific architectures comprising multiple processors and FPGAs to solve the beamforming challenge, and have been working with Xilinx on a dedicated processor that would simplify the compute task with lower power consumption and higher efficiency. The result is the Xilinx Zynq UltraScale+ RFSoC, which entered production in June 2018; Xilinx says it is the first and only solution to meet the power and integration levels required by 5G NR beamforming. Xilinx has also announced Versal, its next-generation compute platform, which includes an RFSoC on board, and telcos are expected to be major customers of the new platform to satisfy the high compute demands of 5G NR base stations. Beamforming creates a massive optimization requirement, with tens to hundreds of millions of parameters that, according to Nokia, can only realistically be solved using machine learning, and Versal is expected to find a ready 5G market in beamforming and other applications.
The new artificial intelligence (AI) engine in Versal comprises vector processing cores with local memory to minimize data movement. This engine performs the type of linear algebra calculations required to train today's deep learning neural networks and to run them in inference mode in production. Xilinx released its own benchmarks of Versal's performance, including sub-2ms latency for machine learning inference, showing multi-fold improvements over rival approaches (CPU alone, and CPU plus GPU). The MLPerf independent benchmarking initiative launched this year is a useful way to validate such performance claims, and it is hoped that Xilinx will cite MLPerf results when Versal launches in 2019.
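The inference workload the AI engine targets can be sketched in a few lines: running an input through a deep network is dominated by dense matrix-vector products, one per layer, followed by a nonlinearity. The toy network below (layer sizes, layer count, and timing harness are assumptions for illustration, not a Versal benchmark) shows the class of computation behind latency figures like the sub-2ms claim.

```python
import time
import numpy as np

# Toy illustration (not Versal code): inference through a stack of
# dense layers is a sequence of matrix-vector products plus a ReLU.

rng = np.random.default_rng(0)
layers = [rng.standard_normal((512, 512)).astype(np.float32) for _ in range(4)]

def infer(x):
    """Run one input vector through the stack of dense layers."""
    for w in layers:
        x = np.maximum(w @ x, 0.0)  # linear algebra step + ReLU activation
    return x

x = rng.standard_normal(512).astype(np.float32)
start = time.perf_counter()
y = infer(x)
latency_ms = (time.perf_counter() - start) * 1e3
print(f"inference latency: {latency_ms:.3f} ms")
```

Accelerators like Versal's AI engine aim to keep the weight matrices in local memory next to the vector cores, so each of these products avoids round-trips to external memory.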
AI applications with demanding inference-mode compute requirements look highly promising for Xilinx Versal, in a market where new entrants are emerging with competing offerings and more competition is in the pipeline over the next few years. Carving out a position in high-value inference applications, and working closely with customers that have real-world problems to solve, is a sound strategy for Xilinx.
"Xilinx unveils next-generation AI compute platform," INT003-000258 (October 2018)
Choosing the Appropriate Hardware Acceleration for AI Systems, INT002-000149 (August 2018)
Michael Azoff, Distinguished Analyst, Information Management