In previous posts we covered storage and security in a Kubernetes cluster. Now it's time to talk about networking. Networking underpins both cluster communication and service communication, so we need to understand how Kubernetes addresses the networking challenge.
Container Network Interface (CNI)
Generally, a network solution involves the following steps:
- Create a network namespace
- Create a bridge network/interface
- Create veth pairs (virtual cables/pipes)
- Attach one veth end to the namespace
- Attach the other veth end to the bridge
- Assign IP addresses
- Bring the interfaces up
- Enable NAT (IP masquerade)
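As a sketch, the manual steps above might look like the following on a Linux host (the namespace, interface, and subnet names here are illustrative, and the commands require root privileges):

```shell
# Step 1: create a network namespace for the "container"
ip netns add red

# Step 2: create a bridge interface and bring it up
ip link add v-net-0 type bridge
ip link set dev v-net-0 up

# Step 3: create a veth pair (a virtual cable with two ends)
ip link add veth-red type veth peer name veth-red-br

# Steps 4-5: attach one end to the namespace, the other to the bridge
ip link set veth-red netns red
ip link set veth-red-br master v-net-0
ip link set veth-red-br up

# Steps 6-7: assign IP addresses and bring the interfaces up
ip -n red addr add 10.244.1.2/24 dev veth-red
ip -n red link set veth-red up
ip addr add 10.244.1.1/24 dev v-net-0

# Step 8: enable NAT so traffic from the bridge subnet can leave the node
iptables -t nat -A POSTROUTING -s 10.244.1.0/24 -j MASQUERADE
```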
Since almost all network solutions follow this pattern, steps 2 through 8 can be bundled into a single bridge program that works as a standard across container runtimes and orchestrators such as Docker and Kubernetes. This is where the Container Network Interface comes in.
To work with CNI, the container runtime side has the following responsibilities:
- Must create a network namespace
- Must identify the network the container should attach to
- Must invoke the network plugin (e.g. bridge) when a container is added
- Must invoke the network plugin (e.g. bridge) when a container is deleted
- Must provide the network configuration in JSON format
On the plugin side, the following must be taken care of:
- Must support the command line arguments ADD/DEL/CHECK
- Must support parameters such as container ID and network namespace
- Must manage IP address assignment to pods
- Must return results in a specific format
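The JSON network configuration the runtime hands to the plugin might look like this sketch for the standard bridge plugin (the network name, bridge name, and subnet are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The `ipam` section delegates IP address assignment to a second plugin, here `host-local`, which hands out addresses from the given subnet.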
Pod Networking
At the pod level, we need a solution for how pods communicate within the cluster and how they are reached from outside. Kubernetes defines a network model for pods that any solution must follow:
- Every pod should have an IP address
- Every pod should be able to communicate with every other pod on the same node
- Every pod should be able to communicate with every other pod on other nodes without NAT
Let's take an example to demonstrate how to solve pod networking by hand. Suppose we have two nodes with IP addresses `192.168.1.11` and `192.168.1.12`, both part of the external network. We create pods on these nodes, each with its own network namespace, and attach the namespaces to a bridge network on each node. With the bridge network, pods on the same node can communicate with each other. Each bridge network gets its own subnet, e.g. `10.244.1.0/24` for node1 and `10.244.2.0/24` for node2.

We then assign the bridge interface an IP address, so pod communication works inside each node. To let pods reach pods on other nodes, we add routes: for example, on node1 we run `ip route add 10.244.2.2 via 192.168.1.12` if a pod with IP address `10.244.2.2` runs on node2. Doing this setup manually does not scale to thousands of pods, and this is where CNI comes in. The example above closely mirrors the steps we listed in the previous section, and CNI wraps those steps together, which makes our life easier.
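The per-node routing from the example can be sketched as follows, routing each node's traffic for the other node's whole pod subnet rather than one pod at a time (addresses taken from the example above):

```shell
# On node1 (192.168.1.11): send traffic for node2's pod subnet via node2
ip route add 10.244.2.0/24 via 192.168.1.12

# On node2 (192.168.1.12): send traffic for node1's pod subnet via node1
ip route add 10.244.1.0/24 via 192.168.1.11
```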
As routes multiply and applications span a large network, it becomes hard to track each communication end to end. Solutions like Weave were created to handle this complexity. They place an agent, a doorman of sorts, on each node; the agents know about the pods on other nodes and take care of routing between them. With this, we can get rid of the manual network setup: we simply deploy Weave in the cluster and let it take care of network communication. Weave runs as a daemonset in Kubernetes, and as we learned before, a daemonset schedules a pod on every node in the cluster.
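At the time of writing, Weave Net could be deployed with a single `kubectl apply` of its daemonset manifest (the URL below is from the Weave Net docs of that era; check the current documentation for an up-to-date manifest):

```shell
# Deploy Weave Net as a daemonset (manifest URL per the Weave Net docs)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Verify that one weave-net pod is running on every node
kubectl get daemonset weave-net -n kube-system
kubectl get pods -n kube-system -l name=weave-net -o wide
```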
Service Networking
Having covered pod networking in the previous section, let's turn to service networking. In practice we seldom have a single pod communicate directly with another pod; communication usually happens at the service level. Services are deployed across the cluster, so the service network is not bound to any node. A service used only internally is of type `ClusterIP`; a service exposed externally, like a web application, can be of type `NodePort`.
Service networking relies on a component called `kube-proxy`, which runs on every node. `kube-proxy` programs iptables rules that map a service's `IP:port` to a `Forward to` target, meaning traffic sent to the service IP is forwarded to a pod IP.
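We can peek at the rules `kube-proxy` creates; this is a sketch assuming a service named `web-service` (chain names and log locations vary by cluster setup):

```shell
# List the NAT rules kube-proxy programmed for services
iptables -t nat -L -n | grep web-service

# kube-proxy also logs the rules it creates; the log path depends on the setup
cat /var/log/kube-proxy.log
```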
DNS
Remembering raw IP addresses is hard for clients, so we can use DNS to address services easily in Kubernetes. Kubernetes follows its own naming convention; an example makes it intuitive:
| Hostname | Namespace | Type | Root | IP Address |
|---|---|---|---|---|
| web-service | apps | svc | cluster.local | 10.107.37.188 |
| 10-244-2-5 | apps | pod | cluster.local | 10.244.2.5 |
Per this convention, for service DNS we can reach `10.107.37.188` via `web-service.apps.svc.cluster.local`. For pod DNS, the dots in the pod IP are replaced with dashes to form the hostname, so we can reach `10.244.2.5` via `10-244-2-5.apps.pod.cluster.local`.
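The naming convention can be expressed as a small helper. This is only a sketch; the function names `build_fqdn` and `pod_hostname` are ours for illustration, not part of any Kubernetes API:

```python
def build_fqdn(hostname: str, namespace: str, kind: str, root: str = "cluster.local") -> str:
    """Build a Kubernetes DNS name: <hostname>.<namespace>.<svc|pod>.<root>."""
    return f"{hostname}.{namespace}.{kind}.{root}"


def pod_hostname(pod_ip: str) -> str:
    """Pods are named after their IP address with dots replaced by dashes."""
    return pod_ip.replace(".", "-")


# Service record from the table above
print(build_fqdn("web-service", "apps", "svc"))
# Pod record from the table above
print(build_fqdn(pod_hostname("10.244.2.5"), "apps", "pod"))
```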
The cluster DNS is served by the CoreDNS server, deployed in the kube-system namespace when we set up a cluster (e.g. with kubeadm). The kubelet points each pod's `/etc/resolv.conf` at the CoreDNS service, so name resolution works out of the box.
Ingress
The relationship between services and ingress is that ingress lets users access applications through a single URL, which we can configure to route to different services within the cluster based on URL paths or hostnames. We can think of ingress as an object acting like a load balancer within Kubernetes: the ingress controller can perform load balancing, authentication, and SSL termination. To use ingress, we need to configure an ingress controller and ingress resources.
Ingress Controller
There are many proxy solutions we can use as an ingress controller. We will take nginx as an example.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration # say we have a configmap to config nginx
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```
Then we need to expose the ingress controller to the external world with a service definition.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress
```
We also need a service account for the above configuration, with the corresponding Roles, ClusterRoles, and RoleBindings attached to it.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
...
```
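A binding that grants the service account its permissions might look like this sketch (the role name `nginx-ingress-role` is illustrative; the real RBAC manifests ship alongside the controller):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
```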
Ingress Resource
In the ingress resource definition, we can specify different rules to route the traffic.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  rules:
    - http:
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80
```
If different services are served under different hostnames, we can specify multiple rules, one per host, to route the traffic.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  rules:
    - host: wear.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: wear-service
              servicePort: 80
    - host: watch.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: watch-service
              servicePort: 80
```
Summary
So this is the last learning note of this Kubernetes series, and I am happy to have built a basic understanding of Kubernetes concepts. The most important thing I can summarize from this learning is that tools share similar concepts. Comparing different tools and their specific use cases while learning helps us understand the motivation and standpoint behind a design and its implementation. For example, while learning Kubernetes, Docker and the service implementations from my daily work came to mind as well and helped me understand Kubernetes more easily.
Stay tuned for my lessons learned and best practices on Kubernetes as my hands-on experience grows!