
Straight Talk Service Provider

Ovum view

Network virtualization is a powerful tool for network operators, but the industry needs more tools. At Red Hat Summit 2019, held in Boston, MA, the company's telco executives discussed its industry successes, positioning, and future direction. Optus and Turkcell were among the Summit's speakers and attendees. Red Hat today has more than 100 instances installed with telcos worldwide, with deployments as large as 15,000 nodes. These operators use cloud environments on Red Hat Enterprise Linux for mobility, IoT, and private networking. The platforms supply a range of conventional network and operational functions; they also host more specialized tasks for service providers, such as in-network video transcoding.

In networking, stock virtualization is still the standard for operators, with virtual machine instances hosted on an OpenStack cloud, supported by a management platform, an operating system, and the underlying hardware. The stack is well established, with plenty of tools and a proven track record of stability. It is a flexible, but not especially resource-efficient, approach.

Containerization is high on the hype scale because it is a lightweight alternative to virtualization that can deliver performance gains with reduced opex. But containerization still needs refinement to reach full feature maturity for telco-grade environments: it is just starting to appear in large telco production environments. Red Hat in May 2019 closed a logistics gap with its Universal Base Image (UBI). UBI is a stripped-down, freely redistributable Red Hat Enterprise Linux container base image that lets third parties such as software vendors assemble and ship containerized packages without licensing worries.
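For example, a software vendor could build a redistributable container image on UBI. A minimal sketch of such a Dockerfile, assuming a hypothetical Python-based agent (the package set and file names are illustrative, not a Red Hat-prescribed layout):

```dockerfile
# Base on Red Hat's freely redistributable Universal Base Image (UBI 8, minimal variant)
FROM registry.access.redhat.com/ubi8/ubi-minimal

# Install runtime dependencies from the UBI repositories (package set is illustrative)
RUN microdnf install -y python3 && microdnf clean all

# Copy and run a hypothetical vendor agent
COPY agent.py /opt/agent/agent.py
CMD ["python3", "/opt/agent/agent.py"]
```

Because UBI is freely redistributable, the resulting image can be published to any registry without requiring a Red Hat subscription from the consumer.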

There is also room for performance improvement in the compute stack. Platform vendors have taken on responsibility through projects such as the Data Plane Development Kit (DPDK) and the Fast Data Project (FD.io). These libraries and drivers turn the compute stack into a better-performing packet processor.
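The performance idea behind these kits can be shown in miniature: instead of taking an interrupt per packet, a poll-mode driver busy-polls the NIC queue and pulls packets in bursts, amortizing per-call overhead. A simplified Python sketch of that loop structure (the queue class and burst size are illustrative; real DPDK exposes this pattern in C via `rte_eth_rx_burst`):

```python
from collections import deque

BURST_SIZE = 32  # pull packets in bursts to amortize per-call overhead


class MockRxQueue:
    """Stand-in for a NIC receive queue (hypothetical, for illustration only)."""

    def __init__(self, packets):
        self._q = deque(packets)

    def rx_burst(self, max_pkts):
        # Poll: return up to max_pkts packets without blocking; empty list if none
        burst = []
        while self._q and len(burst) < max_pkts:
            burst.append(self._q.popleft())
        return burst


def poll_loop(queue, handle):
    """Busy-poll the queue in bursts, processing each packet with handle()."""
    processed = 0
    while True:
        burst = queue.rx_burst(BURST_SIZE)
        if not burst:
            break  # a real poll-mode driver would keep spinning; we stop when drained
        for pkt in burst:
            handle(pkt)
        processed += len(burst)
    return processed


queue = MockRxQueue(range(100))
print(poll_loop(queue, lambda pkt: None))  # → 100
```

The burst structure is the key design choice: one poll call fetches up to 32 packets, so fixed per-call costs are spread across the whole batch rather than paid once per packet.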

Finally, responsibility for optimization lies with network function virtualization (NFV) application vendors. Telcos rated the first generation of premises-based NFV hardware conservatively, guaranteeing port speeds only up to 100 Mbps. Those limits suffice for dedicated fiber links to branch offices but not for services with more demanding peak speeds.

That is why Comcast Business, for example, has pushed NFV vendors to optimize their applications for gigabit broadband services. In late 2018, Comcast Business noted that its NFV partners had succeeded in increasing application throughput to 500–600 Mbps on premises-based NFV hardware. Peak performance varies with the type of NFV application, the configuration features and options enabled (features such as unified threat management are resource intensive), and the number of processors in the compute box.

Ultimately, operators will use a spectrum of compute options for network functions, mirroring the software-defined everything (SDx) concept as "network functions on everything," or NFx. Some network functions are best suited to bare metal with a thin control layer, some can take advantage of containers, and others will need the full virtualization of a private or public cloud. All these options should be possible and manageable under a common umbrella, mixed as needed, with each function optimized to run at its full potential.

Straight Talk is a weekly briefing from the desk of the Chief Research Officer. To receive this newsletter by email, please contact us.