Author: James Oakley
Contributors: Ben Tremblay, Josh Hicks
The increasing popularity of central office edge clouds is putting new and complex demands on networking solutions. 5G network deployment requires a denser configuration of antennas and associated networking functions, and projected use cases will require central office functionality to have a smaller footprint, be software-driven and rely less on specialized equipment. The use of SD-WAN containers can help meet these changing network needs.
In November 2019, Turnium’s technology was used in the Virtual Central Office (VCO) 3.0 proof-of-concept demonstration at Kubecon. Working together with other technology industry leaders, the project demonstrated how a central office’s call-switching functions could be deployed in Kubernetes (an open source container orchestration system) to solve the unique challenges of deploying 5G networks.
The challenge that 5G networks present is that antennas and associated networking functions must be in a denser configuration than previous wireless technologies. Increased density is required to deliver higher bandwidth and accommodate 5G’s reduced range due to its use of millimeter-wave spectrum.
In addition, to support the forecast use case of using 5G to connect Internet of Things (IoT) deployments, providers will need to deploy more central office functionality in their networks. All of this requires that traditional central office functionality have a smaller footprint, be software-driven to support easy deployment and reconfiguration, and use less specialized equipment.
The demonstration at Kubecon showed the value of SD-WAN containers on commodity “edge” compute solutions in addressing these new networking challenges. Using these technologies, service providers will be able to light up central offices in much shorter timelines to meet demand and network growth.
Connecting centralized compute resources with these new central office edge compute clouds raised some technical challenges. The goal of the VCO 3.0 project was to deliver a complete architecture providing small-footprint, containerized central office switching and a secure, scalable multi-cloud network. In this post, we’ll detail how we achieved that goal and some of the difficulties we encountered.
VCO 3.0 Technical Architecture
The main components of the VCO 3.0 demonstration were as follows:
- Orchestration and federation of the OpenShift/Kubernetes/OpenStack installs
- UPF (User Plane Function) and Layer 2 fabric
- Layer 3 private wide-area networking (SD-WAN)
- Media server to handle the video call
- 5G radio
- Private and Public cloud instances
By integrating these components into a single rack, the project team successfully completed an international 5G video call between two 5G handsets over containerized, cloud-native (hybrid cloud) infrastructure, meeting all the network conditions required to bring together the various components listed above.
For our part of the project, we containerized our SD-WAN solution so that it could be deployed as a cloud-native application, with edge clouds connecting to public clouds over private, highly resilient networks built on commodity provider lines. This eliminated the need for traditional private networking solutions such as MPLS, as our SD-WAN provided comparable resiliency, security and quality without the expense, coordination or long lead times.
Containers, Hybrid Clouds, Scale
Containers allow compute and data processing to be quickly deployed throughout a network, which is critical to the performance required for many use-cases. Big data applications, Internet of Things deployments and Artificial Intelligence are driving the need for computing closer to the edge of the network.
Network congestion and latency are reduced by moving processing to the edge. Containers simplify the process of deploying and reconfiguring networks dynamically to accommodate changing volumes of data and scale applications across multiple clouds.
But moving compute and processing to the edge is only part of the story. For quick scaling, networks must scale up as easily as compute resources – ideally using the same or similar set of tools (much like the SD-WAN containers used in the demo at Kubecon).
Easy and quick scaling requires edge devices to be non-proprietary and capable of supporting multiple containerized applications. There also needs to be a way to manage network performance so that each edge device can communicate with multiple core applications reliably and securely, at an acceptable cost.
Implications for Containerizing SD-WAN
Despite the VCO project’s success, the demonstration at Kubecon was not without its challenges. Networking was never a core focus in the design of Kubernetes, since it’s often seen as low priority by application developers, as evidenced by the common refrain, “the network doesn’t matter!”
Call us biased, but we think the network does matter.
Challenge One: Direct Access
The first challenge in deploying our SD-WAN software in containers was getting direct access to multiple networks from inside the container. Kubernetes is designed to give each container a single interface with private networking and to have a front-end web proxy forward requests from a public IP to the private pod IP.
Our SD-WAN solution is a router, with one or more WAN connections and one or more LAN connections. Obviously, the private networking behind a proxy does not work for an IP router. Specifically, we needed the SD-WAN to build a network to the other clouds and provide reliable cloud and public access to the Kubernetes cluster itself.
Our solution to the single-interface issue relied on networking add-ons known as CNI (Container Network Interface) plugins that allow multiple interfaces. The most popular is Intel’s Multus, which we leveraged to obtain the WAN and LAN networks we needed. (Intel was one of the companies we worked with on this demo!)
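As a concrete illustration, the snippet below builds the kind of NetworkAttachmentDefinition manifest that Multus consumes to attach an extra (for example, WAN) interface to a pod. This is a minimal sketch rather than our production configuration: the choice of the macvlan plugin and all names, interfaces and addresses are illustrative assumptions.

```python
import json

def wan_attachment(name, master_iface, cidr, gateway):
    """Build a Multus NetworkAttachmentDefinition (as a dict) that exposes
    a host NIC to a pod via the macvlan CNI plugin.
    All names and addresses here are illustrative."""
    cni_config = {
        "cniVersion": "0.3.1",
        "type": "macvlan",       # bridge or ipvlan would look similar
        "master": master_iface,  # host NIC carrying the WAN circuit
        "ipam": {
            "type": "static",
            "addresses": [{"address": cidr, "gateway": gateway}],
        },
    }
    return {
        "apiVersion": "k8s.cni.cncf.io/v1",
        "kind": "NetworkAttachmentDefinition",
        "metadata": {"name": name},
        # Multus stores the embedded CNI config as a JSON string
        "spec": {"config": json.dumps(cni_config)},
    }

if __name__ == "__main__":
    print(json.dumps(
        wan_attachment("wan0", "eth1", "203.0.113.10/24", "203.0.113.1"),
        indent=2))
```

A pod then requests the extra interface through the `k8s.v1.cni.cncf.io/networks` annotation, and Multus wires it up alongside the default cluster network.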
Challenge Two: Routing
The next challenge was to ensure that routing was in place for the edge hosts and their container networks to access resources on the other clouds as well as public networks.
In the Kubernetes world, this is typically done using VXLAN or IPIP tunnels over the hosts’ public interfaces. Often, clouds are built with the same set of private IP addresses. Working around this required some creative use of Kubernetes host containers and custom configuration on the individual clouds.
In the networking world, routing policy is typically determined via dynamic routing protocols like BGP. Each gateway advertises the networks it’s responsible for to other gateways, which in turn advertise the networks to other gateways. All public global routing on the Internet is advertised in this way.
In Kubernetes, the pod networks are typically private ranges, and often, multiple clusters will reuse the same networks. This is not typically a problem with incoming proxies, since external hosts only need to access the global public addresses, which are translated to the private addresses.
Since we needed pods in multiple clusters to talk directly to each other, we first had to ensure there was no reuse or overlap among the pod networks.
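Checking for such collisions is straightforward. The sketch below, using purely illustrative CIDRs, flags any pair of clusters whose pod networks overlap, relying on Python’s standard ipaddress module:

```python
import ipaddress

def find_overlaps(cluster_cidrs):
    """Given {cluster_name: pod_cidr}, return the pairs of clusters whose
    pod networks overlap and would collide when routed directly."""
    nets = {name: ipaddress.ip_network(cidr)
            for name, cidr in cluster_cidrs.items()}
    names = sorted(nets)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if nets[a].overlaps(nets[b])
    ]

# Illustrative pod CIDRs for three clusters
clusters = {
    "edge-cluster": "10.244.0.0/16",
    "core-cluster": "10.245.0.0/16",
    "public-cloud": "10.244.128.0/17",  # collides with edge-cluster
}
print(find_overlaps(clusters))  # → [('edge-cluster', 'public-cloud')]
```

Any pair this reports must be renumbered before the clusters can exchange routes directly.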
Where pods need to talk to other pods within a cluster, they are typically connected via a layer-2 tunnel, such as VXLAN. This allows hosts to talk directly to each other without gateways. The downside is that larger networks generate a lot of broadcast traffic (such as ARP), which reaches every host whenever any pod tries to connect to another.
Creating a single layer-2 network with multiple clusters of intercommunicating pods is simply not feasible. There are CNIs (such as Calico) that solve this problem using layer-3 routing on the nodes with BGP dynamic routing instead of VXLAN, but it isn’t always practical to swap the CNI on an existing cluster.
To solve this problem, we used Multus to create virtual interfaces for our SD-WAN gateway container and added a daemonset to distribute the routes for the pod networks of the other clusters into our gateway via whatever host the container was running on, which in turn routed the networks into the Multus virtual interface. For VCO 3.0, these were static routes, but we recommend running dynamic routing in the daemonsets instead in future deployments.
Are SD-WAN Containers Right for You?
At the end of the demo, we collectively asked the Kubernetes community to consider the requirements of telcos and other network providers as new capabilities are added to Kubernetes.
Moving forward, we are examining ways to deploy containerized versions of our SD-WAN product, both independently and stacked with other cloud-native technologies, to make offloading workloads from centralized clouds to edge environments a realistic option.
Our goal is to reduce network congestion, move compute power closer to edge devices and minimize latency, which is particularly important for IoT deployments.
If you’d like to discuss how to deploy reliable, private SD-WAN containers in your environment, we would love to chat about your requirements and how we might be able to support you. Please contact us.