
The application landscape is changing rapidly, with cloud-native applications, such as containerized workloads, being developed and deployed to take advantage of the benefits a distributed cloud environment provides. Omdia's 2018–23 infrastructure software forecast for container management platforms shows a CAGR of nearly 34% through 2023. This increase in the use of management platforms is a response to the growth in applications expected over the period. However, managing these new cloud-native environments requires different tools, because inter-service communication and policy management grow to an order of magnitude beyond what is typical of VM-based workloads.

What is a service mesh, and why is it needed?

In a microservices architecture, each application, or service, needs to communicate with the others. This can be handled through a central control point, but as the size and number of microservices grow, that central point becomes a performance and administrative bottleneck. A service mesh is instead a separate infrastructure layer that sits alongside each service, and because it has visibility of all communications it is the ideal layer at which to optimize the environment.
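To make the pattern concrete, here is a minimal sketch, assuming a hypothetical sidecar proxy listening on localhost:15001: instead of calling other services directly, the application hands every request to its local proxy, which forwards it and can observe, secure, and route the traffic without any change to the business logic. The address, port, and routing-by-Host-header convention are illustrative assumptions, not the API of any particular mesh product.

```python
import urllib.request

# In a service mesh, every service instance runs a local "sidecar" proxy and
# all of the application's traffic is sent through it. The address and port
# below are purely illustrative.
SIDECAR_PROXY = "http://localhost:15001"

def call_service(service_name: str, path: str) -> bytes:
    """Call another microservice via the local sidecar proxy.

    The application only knows the logical service name; the proxy resolves
    it, load-balances, retries, and records telemetry on its behalf.
    """
    request = urllib.request.Request(
        SIDECAR_PROXY + path,
        headers={"Host": service_name},  # the proxy routes on the logical name
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        return response.read()

# Business code stays free of networking policy, e.g.:
# departures = call_service("timetable", "/departures?route=LDN-EDI")
```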

The basic concept of microservices is that each application, or service, relies on other services to deliver an outcome. For example, if a user requests a train ticket online, they need to know the times of the trains, the routes, and the cost of the tickets. The request service behind the webpage therefore has to communicate with the route service, which in turn communicates with the timetable and pricing services (with special offers also needing to be looked up), and the result finally populates the user's online order cart. To make the website more user friendly, the service may also suggest options based on the user's history. All of these are separate microservices that must locate each other and deliver the required function based on the context of the request.
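As a rough sketch of that request chain (the service names and endpoints are hypothetical, chosen only to mirror the example above), the booking front end fans out to the route, timetable, pricing, and special-offers services and assembles a single response:

```python
import json
from urllib.request import urlopen

# Hypothetical internal endpoints for the ticket-booking example; the names
# are illustrative, not a real API.
ROUTE_SVC = "http://route-service/routes"
TIMETABLE_SVC = "http://timetable-service/departures"
PRICING_SVC = "http://pricing-service/fares"
OFFERS_SVC = "http://offers-service/specials"

def fetch_json(url: str, **params) -> dict:
    """Call a service over HTTP and decode its JSON response."""
    query = "&".join(f"{key}={value}" for key, value in params.items())
    with urlopen(f"{url}?{query}", timeout=2) as response:
        return json.load(response)

def build_ticket_options(origin: str, destination: str) -> dict:
    """One user request fans out into a chain of service-to-service calls."""
    route = fetch_json(ROUTE_SVC, frm=origin, to=destination)
    times = fetch_json(TIMETABLE_SVC, route=route["id"])
    fares = fetch_json(PRICING_SVC, route=route["id"])
    offers = fetch_json(OFFERS_SVC, route=route["id"])
    # The combined result is what ultimately populates the user's order cart.
    return {"route": route, "times": times, "fares": fares, "offers": offers}
```

Each of those calls crosses the network, which is exactly the traffic a service mesh is positioned to observe and manage.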

The performance and scalability challenge with this networked approach, where each service performs a specific function, is dealing with request overload. Each service might need to request data from several other services, and if one service, such as the timetable service above, is central to most requests, then managing the traffic flows becomes an issue. The service mesh is designed to route and optimize traffic in exactly these scenarios.
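The sketch below illustrates, under simplifying assumptions, the kind of work that routing layer takes off the application: given several hypothetical replicas of the overloaded timetable service, a mesh proxy picks an instance, applies a timeout, and retries transient failures so that no single replica becomes the bottleneck. The instance addresses and retry policy are illustrative only.

```python
import itertools
import urllib.request

# Hypothetical replicas of the heavily used timetable service.
TIMETABLE_INSTANCES = [
    "http://10.0.1.11:8080",
    "http://10.0.1.12:8080",
    "http://10.0.1.13:8080",
]
_round_robin = itertools.cycle(TIMETABLE_INSTANCES)

def routed_request(path: str, retries: int = 3) -> bytes:
    """What a mesh proxy does on every call: balance load, time out, retry."""
    last_error = None
    for _ in range(retries):
        instance = next(_round_robin)  # simple round-robin load balancing
        try:
            with urllib.request.urlopen(instance + path, timeout=1) as response:
                return response.read()
        except OSError as error:  # connection failure or timeout: try the next replica
            last_error = error
    raise RuntimeError(f"all retries failed: {last_error}")
```

In a real mesh this logic lives in the proxy layer, configured by policy rather than written into each service.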

Questions organizations should ask themselves about the most appropriate way to adopt a service mesh

The first and most obvious question when starting to adopt a service mesh is what level of control and management the organization wants to be responsible for. Answering it requires the organization to address two challenges: first, dealing with the complexity of what is still an emerging technology, and second, establishing whether the skills needed to successfully deploy and manage a service mesh are available.

These two fundamental hurdles are forcing many organizations to ask a key question: do they want and need to manage the service mesh themselves, or should they adopt a managed service approach? Vendors such as AWS, HashiCorp, and VMware are building packaged solutions that come with a managed service capability, while the open source technologies remain more do-it-yourself. Smaller start-ups are focused on simplifying the technology and making it better suited to hybrid environments.