Kubernetes deployment models for edge applications

Distributed Kubernetes cluster networking supports reliable low-latency access across wide geographical areas for 5G and similar services.

Kubernetes is a well-established platform for hosting microservices. It facilitates a cloud-native approach to application development. Coupled with DevOps and GitOps tooling, it has essentially become a standard platform for containerized services across multiple industries.

However, Kubernetes alone isn't likely to address your complete needs for application development and the post-deployment operational tasks that enable the mature, reliable, and predictable execution of these applications.


Complementary solutions in the market fill gaps and ameliorate weaknesses on platforms where Kubernetes is the underlying engine. They come in the form of Kubernetes-native solution packages known as Kubernetes Operators, available on the open source Operator Hub. These packages include GitOps and DevOps pipelines, service mesh, performance-monitoring tools, and multicluster management.

An end-to-end technology stack is a good starting point. But your goal is to design a deployment model that reaches the consumers and supports the backend systems wherever they are, offering high-performance and cost-effective outcomes. This is how you convert a technology stack into a successful business solution. When the service targets sit at the edge, edge computing becomes crucial for media and communication services and offerings.

This article examines Kubernetes deployment models for edge applications. It addresses enabling north-south (external consumers) and east-west (backend systems) communication between different infrastructure types hosting the same application platform for developer and operational consistency.

The need and possible solutions

Placing certain services in close proximity to the consumers has great benefits, including low-latency response, bandwidth consumption savings, and data locality. However, there are also multiple challenges. One of the key challenges with the Kubernetes deployment model is the placement of the Kubernetes control plane that manages the workers that comprise the resource pools consumed by the applications and services. The two main options for control plane placement are:

  • Deploying full-fledged cluster(s), complete with control nodes and worker nodes, everywhere you need your applications to be accessible
  • Deploying worker nodes at the edge and connecting them to the central location hosting the control plane
Option 1: Full cluster model
(Rimma Iontel and Fatih Nar, CC BY-SA 4.0)

You can simplify option one (the full-cluster model shown above) with innovative deployment models:

  • A compact high availability (HA) cluster with a minimum of three nodes accommodating both control plane and worker node roles
  • An all-in-one, single-node standalone cluster

Note that both compact deployment models still require a dedicated control plane at each location.


Option two (the remote worker approach shown below) eliminates the overhead of a dedicated control plane at each location. Still, it may not be feasible if significant latency, intermittent connectivity, or insufficient bandwidth between the Kubernetes control plane and the worker locations prevents the cluster's internal services and operations from functioning correctly.

Option 2: Remote worker approach
(Rimma Iontel and Fatih Nar, CC BY-SA 4.0)

Suppose the network connectivity between the core cluster hosting the Kubernetes control plane and the remote worker nodes meets performance requirements (for example, the latency stays below the kubelet's node-status-update-frequency). In that case, you can use remote worker nodes (RWNs) to cost-optimize the distributed application platform solution. We refer to this approach as "grid-platform": the central site performs control and management tasks while the remote sites deliver a platform with consumable resources.
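For context, the kubelet's reporting interval lives in its configuration file. Here is a minimal KubeletConfiguration sketch showing the relevant field with its upstream default value (illustrative, not tuning advice):

# Minimal kubelet configuration sketch (kubelet.config.k8s.io/v1beta1).
# nodeStatusUpdateFrequency is shown at its upstream default; round-trip
# latency between sites must stay comfortably inside this window and the
# kube-controller-manager's node-monitor-grace-period.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 10s  # how often the kubelet posts node status to the control plane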


What is grid-platform?

While you are making the application platform available wherever necessary, you also need to secure the traffic between applications hosted on a Kubernetes cluster and ensure the breakout traffic to and from consumers is optimally placed for performance, low cost, and a secure communication path. The diagram below shows a high-level view of RWNs.

High-level view of remote worker nodes
(Rimma Iontel and Fatih Nar, CC BY-SA 4.0)

Central Kubernetes clusters get deployed in selected geolocations to serve nearby consumers. The remote workers expand the reach of the cluster to remote sites without affecting the integrity of the cluster control plane, maintaining its high availability and scalability. The diagram below shows a solution topology using a central cluster expanded with RWN.

Solution topology: Central cluster expanded with RWN
(Rimma Iontel and Fatih Nar, CC BY-SA 4.0)

In the distributed deployment model, remote workers need access to the relevant cluster-internal communications so that the cluster control plane can monitor and manage them and make them available for workload scheduling through the cluster scheduler. The remote workers also need to participate in the cluster domain name service (cluster-dns) hosted by the control plane nodes, which enables the service discovery feature in service mesh solutions across the whole cluster.
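As a quick sanity check, service discovery on a remote worker behaves exactly as it does on a central node: a pod scheduled there resolves services through the cluster DNS. The service and namespace names below are hypothetical:

# Run from inside a pod scheduled on a remote worker node
$ nslookup my-service.my-namespace.svc.cluster.local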

Networking under the hood

Networking is the key functionality in every distributed computing solution, so it is a critical part of Kubernetes clusters. Central cluster nodes share similar primary networking configurations, including network interface definitions, network bridges, routes, and DNS server configurations, because they run on the same infrastructure. Remote workers, however, are expected to be deployed on different infrastructures, so they normally need site-specific networking configurations, as sketched below.
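One way to express that per-site configuration declaratively is the kubernetes-nmstate operator's NodeNetworkConfigurationPolicy, scoped to the remote workers by node label. This is a minimal sketch; the policy name, node label, interface, and addresses are hypothetical:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: rwn-site-a-network       # hypothetical policy name
spec:
  nodeSelector:
    site: remote-a               # label carried by this site's remote workers
  desiredState:
    interfaces:
    - name: eth1                 # site-specific uplink interface
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.50.10      # site-specific address
          prefix-length: 24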

The Kubernetes community is increasingly adopting the Open Virtual Network (OVN) fabric with Internet Protocol Security (IPSec) as its cluster networking solution. It enables egress IPs to be assigned to tenant namespaces on desired worker nodes through node labeling, breaking traffic out on premises through RWNs.
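On OpenShift, for example, IPSec encryption for the OVN-Kubernetes pod network can be enabled through the cluster network operator. This is a sketch of the documented merge patch; verify it against your distribution's documentation:

$ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{}}}}}'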


You should consider the RWN approach mainly with a long-living control plane implementation, where a short-term loss of the control plane connection would not cause critical service outages. The distance between remote workers and the control plane nodes must stay within a latency range where keepalive timers do not time out, so that RWNs don't get marked unhealthy or unreachable by the control plane. With those constraints met, you can steer tenant traffic out through specific remote workers with configuration like this:

$ cat egress-ip.yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: edge-test-egressip
spec:
  # Source IPs for traffic leaving the cluster from matching namespaces
  egressIPs:
  - 172.27.200.5
  - 172.27.200.6
  # Apply this egress policy only to namespaces labeled env=prod
  namespaceSelector:
    matchLabels:
      env: prod
# Mark the node as eligible to host the egress IPs
$ oc label nodes ip-172-27-201-49.ec2.internal k8s.ovn.org/egress-assignable=""

OVN-IPSec cluster networking allows cluster traffic (north-south and east-west) to exit the cluster at the desired location through remote worker nodes that perform the networking breakout. You can achieve this for individual tenants using tenant namespace label selectors while controlling exactly which remote worker node the traffic exits through.
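Continuing the EgressIP example above, a tenant opts in simply by carrying the matching namespace label (the namespace name here is hypothetical):

$ oc label namespace tenant-a env=prod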

Allowing network breakouts on remote worker nodes enables low-latency access to consumers and backend systems with secure access.

Expand Kubernetes clusters

Telecommunications and media solutions use widely distributed systems over multiple geolocations, allowing them to reach a greater consumer base, be it human subscribers or machine-to-machine systems.

Kubernetes, with its origins in an enterprise datacenter, was not intended for deployment across distributed locations. But that doesn't mean it can't grow and adjust. This article offers some possible solutions to expand the scale of a Kubernetes cluster while constraining the failure domain.

This article showed the details of distributed Kubernetes cluster networking and discussed how it allows Kubernetes clusters to provide reliable low-latency access across wide geographical areas. This could be of significant value for many modern services, including 5G.


This article is adapted from Episode-II The Grid on Medium and is republished with permission.


Rimma Iontel

Rimma Iontel is a chief architect in Red Hat's Telecommunications, Entertainment, and Media (TME) Technology, Strategy, and Execution office. She is responsible for supporting Red Hat's global ecosystem of customers and partners in the telecommunications industry.


Fatih Nar

Fatih (aka The Cloudified Turk) has been involved for several years in the Linux, OpenStack, and Kubernetes communities, influencing development and ecosystem cultivation, including for workloads specific to telecom, media, and entertainment.
