What Is the Kubernetes Network Model and How Does It Work?

Kubernetes, commonly known as k8s, is open-source container orchestration software. In other words, Kubernetes helps you eliminate manual processes when deploying and scaling containerized applications.

Kubernetes uses a network model to automate functions such as storage allocation, load balancing, and service discovery, saving you significant time, cost, and effort.

You can think of the Kubernetes network model as the system that lets pods, containers, and services communicate with one another. Containers hold the logic, dependencies, tools, and configurations your application needs to run, while a pod determines how its containers share networking and consume resources.

In this article, we will dive deep into the Kubernetes network model to understand how it works.

The Kubernetes network model

A Kubernetes cluster is created whenever you deploy your containerized application on Kubernetes. The cluster hosts several nodes or worker machines responsible for running the application.

The nodes are managed using the control plane. Generally, all these Kubernetes components must work together to make sure your application runs well in a cloud environment. This is where the Kubernetes network model comes in.

Cluster networking ensures that containerized applications sharing the same machines and resources don't have to compete for host ports. A Kubernetes cluster also runs a DNS service that translates readable names into IP addresses, allowing pods and services to discover one another.

Each pod in a Kubernetes cluster is assigned a unique IP address, which lets it communicate directly with other pods and containers. The kube-proxy component, running on each node, maintains the network rules that route this traffic.
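
A quick illustration of service discovery: the cluster DNS service resolves a name of the form <service>.<namespace>.svc.cluster.local to the service's virtual IP. The command below is a minimal sketch that assumes a hypothetical service named api-server in the default namespace and the default cluster domain, cluster.local:

--CODE language-markup line-numbers--
# Run from a shell inside any pod (nslookup is available in many container
# images, such as busybox). The cluster DNS service answers with the
# service's virtual IP address.
nslookup api-server.default.svc.cluster.local

Shorter forms such as api-server or api-server.default also resolve from inside the cluster, because pods are configured with the appropriate DNS search domains.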

The Kubernetes network model is concerned with the following use cases:

  • Pod-to-pod communication. Pods communicate with one another using their unique pod IP addresses. This process is covered in more detail later in this article.
  • Container-to-container communication. Containers hosted by the same pod communicate with one another freely. They share the pod's network namespace, so they can reach each other over localhost, and they can exchange data through a shared Kubernetes volume. That data is available as long as the associated pod is running and is deleted when the pod exits. Each pod also gets its own network interfaces and routing table, keeping its traffic isolated from other pods.
  • External-to-cluster communication. A containerized application running on Kubernetes needs a way to send and receive data over the internet. Kubernetes makes this possible through ingress and egress. An ingress controller lets you specify which traffic types and sources are allowed to connect to your services (see the sketch after this list); requests from sources that haven't been allowed are blocked. Egress, on the other hand, lets your workloads reach outside services, typically through an internet gateway.
  • Pod-to-service networking. Pods scale up and down with user demand, and their IP addresses change as they are created and destroyed. To deal with this, Kubernetes uses services to provide a stable virtual IP. As back-end pods change, kube-proxy updates iptables rules so that traffic sent to the service is directed to its current back-end pods.
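
To make the external-to-cluster case concrete, here is a minimal sketch of an Ingress resource. It assumes an ingress controller (such as ingress-nginx) is already installed in the cluster; the hostname and back-end service name are hypothetical:

--CODE language-markup line-numbers--
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  rules:
    - host: app.example.com        # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # hypothetical in-cluster service
                port:
                  number: 80

The ingress controller watches for resources like this and configures its proxy to route matching external requests to the named service.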

The Kubernetes network model is implemented on each node by the container runtime, which relies on Container Network Interface (CNI) plugins to integrate with the orchestration system and to add and delete network interfaces as pods come and go. CNI plugins are offered by cloud service providers such as Azure, AWS, and Google, as well as by open-source projects such as Calico.

Kubernetes pod network basics: What it is and how it works

In Kubernetes, a pod is the smallest unit you can create and deploy. A pod usually hosts one or more tightly coupled containers and provides the necessary network resources and shared storage.

Each Kubernetes node is assigned a CIDR range of IP addresses, and every pod scheduled on that node receives a unique IP from that range. Even if a pod hosts multiple containers, the pod still has a single IP address. Containers inside a pod share the pod's IP address, port space, and network namespace, which means they can find one another using localhost since they run in a shared network environment.
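
The sketch below shows that shared environment in practice: a single pod with two containers, where a sidecar reaches the main container over localhost. The pod, container, and image names are hypothetical examples:

--CODE language-markup line-numbers--
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar           # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx:1.25            # example image serving on port 80
      ports:
        - containerPort: 80
    - name: log-agent
      image: busybox:1.36          # example sidecar image
      # Both containers share the pod's network namespace, so the sidecar
      # can reach the web server on localhost without knowing the pod IP.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]

Because the containers share a network namespace, localhost is all the sidecar needs; no pod IP or service name is involved.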

Though pods can reach each other directly by IP address, doing so isn't recommended because pod IP addresses change as pods are rescheduled or replaced. This becomes a problem if, for example, your application's front end talks to the back end through a hard-coded IP address: the connection may fail at some point and cause your app to break.

The preferred way for pods to communicate is through a service. For instance, if your service is named api-server, the link would look like https://api-server:8000 instead of https://192.8.8.89:8000. Using the service name provides an abstraction layer that keeps working even as the underlying pod IPs change, reducing the likelihood of your application crashing.
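
A minimal Service manifest for that example might look like the following. It assumes the back-end pods carry the label app: api-server and listen on port 8000; both values are hypothetical and chosen to match the URL above:

--CODE language-markup line-numbers--
apiVersion: v1
kind: Service
metadata:
  name: api-server                 # this name becomes the in-cluster DNS name
spec:
  selector:
    app: api-server                # hypothetical label on the back-end pods
  ports:
    - port: 8000                   # port the service exposes (api-server:8000)
      targetPort: 8000             # port the containers actually listen on

Kubernetes keeps the set of pods matching the selector up to date, so https://api-server:8000 continues to work even as individual pods are replaced.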

Kubernetes services

Kubernetes services provide an abstraction layer that allows you to expose your containerized application running inside a pod as a network service.

A Kubernetes service assigns a name and virtual IP address to a group of pods that perform the same function. That name and IP address remain stable for the lifetime of the service, even as the underlying pods come and go.

Here are the main types of Kubernetes services:

  • NodePort. This service type exposes your application on each node's IP address at a static port, making the service accessible from outside the Kubernetes cluster. Traffic arriving at that port on any node is forwarded to the service's backing pods. The control plane allocates the port from a default range of 30000 to 32767 (see the sketch after this list).
  • LoadBalancer. This service type exposes your service externally using a load balancer provisioned by your cloud provider. The exact behavior varies depending on the cloud provider's capabilities.
  • ClusterIP. This default service type makes your service accessible only from within the Kubernetes cluster. The cluster IP does not change even when the pods backing a particular service are destroyed.
  • ExternalName. This service type maps your service to the DNS name specified in its externalName field and returns that value as a CNAME record.
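
As an illustration of the NodePort type described above, here is a minimal sketch; the service name, pod label, and ports are hypothetical:

--CODE language-markup line-numbers--
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport               # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web                       # hypothetical label on the backing pods
  ports:
    - port: 80                     # cluster-internal service port
      targetPort: 8080             # port the containers listen on
      nodePort: 30080              # must fall within the default 30000-32767 range

With this manifest applied, a request to any node's IP address on port 30080 is forwarded to port 8080 of a pod labeled app: web.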

Kubernetes networking policies

Pods can communicate with one another and receive traffic from different sources. However, accepting all traffic is quite risky. Hackers can easily intercept and modify network requests and then deliver dangerous payloads to your application. To counter this problem, Kubernetes introduced network policies that allow you to control and manage traffic.

Kubernetes networking policies help you implement firewall rules at different application stages. For instance, you can have a policy that prevents your website’s front end from communicating with the back end when it's compromised or specific criteria are not met.

The top Kubernetes networking policies are:

  • Deny all incoming traffic by default. You can set a network policy that rejects all ingress (incoming) traffic to particular pods.
  • Allow all incoming connections. You can write a network policy to allow all the ingress traffic.
  • Deny all egress traffic. With this policy, you can reject all outgoing connections from particular pods.
  • Allow egress traffic. You use this network policy to allow all outgoing communication from pods in a specific namespace.
  • Deny all egress and ingress traffic. You can specify a network policy that rejects all incoming and outgoing traffic.

Kubernetes network policies are written in YAML, a human-readable data format.

A network policy specifies the podSelector (the pods the policy applies to) and policyTypes (ingress, egress, or both) fields. For instance, a policy that denies all incoming traffic looks like this:

--CODE language-markup line-numbers--
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Code sourced from the official Kubernetes tutorial.
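
Policies can also be more targeted. Returning to the front-end/back-end scenario described earlier, the following sketch allows ingress to back-end pods only from front-end pods; the labels and port are hypothetical:

--CODE language-markup line-numbers--
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend     # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: backend                 # hypothetical label on the back-end pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend        # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8000               # hypothetical back-end port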

Once you have written the network policy, you can create it using the command below:

--CODE language-markup line-numbers--
kubectl create -f policy.yaml

Monitoring network policies helps you make sure that your pods and application work as required. You can also identify new security gaps that may impact your cluster. Modifying your security policy can help prevent such risks.

You can monitor network policies and configurations using the following Kubernetes command:

--CODE language-markup line-numbers--
kubectl describe networkpolicy <NETWORK_POLICY_NAME>

You will also need to install a network plugin such as Calico to enforce network policies, since Kubernetes does not do this by default.

Hire Docker developers or work as one yourself

The Kubernetes network model ensures that your cloud-native application works well by facilitating pod-to-service communication, container-to-container communication, and communication between the cluster and external services.

By default, Kubernetes allows all incoming and outgoing traffic from or to pods. This is risky since it exposes your application to external threats. Fortunately, you can use network policies to set rules and manage traffic.

The demand for Kubernetes and Docker specialists is on the rise. Organizations need these specialists to help write network policies, manage continuous integration pipelines, monitor containerized applications, and perform different tests.

If you are a Docker or Kubernetes specialist, you can sell your services, find work, and meet potential clients on Upwork, the world’s work marketplace.

Upwork is not affiliated with and does not sponsor or endorse any of the tools or services discussed in this article. These tools and services are provided only as potential options, and each reader and company should take the time needed to adequately analyze and determine the tools or services that would best fit their specific needs and situation.
