After learning about cluster components in the last note, I plan to cover some of the core concepts in Kubernetes and how deployment works.
Core Concepts
Pod
Pod is a core concept in Kubernetes. Intuitively, we may think deployment is just putting an application on a machine (or, as we say, a node) to host it so that other people can access it through a link. But putting an application randomly somewhere on a node makes it hard to manage, and even harder to integrate with other applications later. As we mentioned in the last note, Kubernetes is a tool for deployment that takes communication, resource allocation, access, migration and so on among various services or applications into account. To achieve this functionality easily, Kubernetes introduces the concept of the Pod.
A Pod is a single instance holding an application, and it is the smallest object we can create in Kubernetes. If more and more users try to access an application, we probably need to spin up a new instance of the application to share the load. In Kubernetes, we actually spin up a new pod encapsulating the same application instead of spinning up another copy of the application inside the same pod. Pods usually have a one-to-one relationship with the container running the application, but a pod can also hold multiple containers. For example, if we containerize an application with Docker Compose, the application consists of multiple containers; if we deploy such an application in a pod, the pod holds multiple containers.
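As a minimal sketch of the multi-container case (the pod name and the helper container here are made up for illustration, not part of any real app):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: web          # main application container
      image: nginx
    - name: log-agent    # hypothetical helper container sharing the pod
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]  # keep the helper alive
```
Both containers share the pod's network and can reach each other over localhost, which is the point of putting them in one pod.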
ReplicaSets
To prevent users from losing access to the application, we want to have replicas of the application in the node. The replication controller in Kubernetes helps create replicas of the application inside the node to achieve high availability. Another reason we need replication is to share the load of an application and make sure the application scales. Instead of only creating replicas inside one node, we can create replicas across different nodes to balance the load when demand increases.
One remark about replicas in Kubernetes: if we define the replica count as 4, the application will always have 4 replicas running. Even if one of the replicas goes down, Kubernetes will re-create a new replica to make sure the replica count stays at 4.
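We can watch this self-healing behavior directly; a quick sketch, assuming the ReplicaSet defined later in this note (the pod name suffix is a placeholder you would copy from `kubectl get pods`):
```bash
kubectl get pods                              # list the running replicas
kubectl delete pod myapp-replicaset-<suffix>  # kill one replica by hand
kubectl get pods                              # a fresh pod appears to restore the count
```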
Deployment
Before jumping into the deployment config, we introduce deployment itself a little. Considering the CI/CD we mentioned in another blog, deployment is really about making sure the application gets updated continuously, such as adding new functionality or rolling back to a previous version, without downtime. Kubernetes updates the pods and replicas related to an application one by one so that the application gets updated without downtime. To make this happen, we have to create a Deployment object for the application.
Service
After we have applications running as pods or replicas on some node, the problem becomes how a user can access those pods or replicas. This is where Services in Kubernetes come in. We wrap the related pods or replicas up as a service so that users can access them through a single entry point. A service maps a port on a node to a port on a pod.
There are three types of services:
- NodePort (accessed externally, from outside the cluster)
- ClusterIP (accessed internally, within the cluster)
- LoadBalancer (exposes the service through an external load balancer, typically provisioned by a cloud provider)
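Only a NodePort example appears later in this note, so here is a minimal ClusterIP sketch for comparison (the name, port, and selector label are assumptions for illustration):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # hypothetical internal service
spec:
  type: ClusterIP          # reachable only from inside the cluster
  ports:
    - targetPort: 80       # port on the backend pods
      port: 80             # port exposed by the service
  selector:
    app: backend           # matches pods labeled app: backend
```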
YAML Config
The normal pattern of a YAML config is as follows:
```yaml
apiVersion:
kind:
metadata:
spec:
```
In Kubernetes, we have 4 kinds of objects, as follows:

| kind       | apiVersion |
|------------|------------|
| Pod        | v1         |
| Service    | v1         |
| ReplicaSet | apps/v1    |
| Deployment | apps/v1    |
This is basically how to fill in the first two lines of the configuration.
The metadata field is a dictionary, usually including name, labels, and so on. The spec section is basically where we add the containers inside the object. Let's go through the configuration of these 4 kinds of objects in Kubernetes one by one.
Pod Config
This is a `pod-definition.yaml` file encapsulating a Redis application, and it is easy to understand given the general introduction to the YAML configuration above.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: myapp
    tier: db # tier here can be considered as the type
spec:
  containers: # the containers inside the pod
    - name: redis
      image: redis:alpine
```
If we have `kubectl` installed, we can run `kubectl create -f pod-definition.yaml` to get the pod created. We can use `kubectl get pods` to view the new pod that we created.
ReplicaSets Config
The main difference between the pod and ReplicaSet config is the `spec` definition. In a ReplicaSet config, the `spec` contains a pod definition, as we saw in the previous section, under a `template` field. Under `spec`, we also have `replicas` to define the size. Furthermore, the selector is a compulsory element in a ReplicaSet config. The `matchLabels` has to exactly match the labels we defined in the pod template's metadata. The purpose of labels and selectors is to help the ReplicaSet monitor its replica pods among the large number of pods inside a node.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
```
If we have `kubectl` installed, we can run `kubectl create -f replicaset-definition.yaml` to get the ReplicaSet created. We can use `kubectl get replicaset` to view the new ReplicaSet that we created.
To scale the ReplicaSet, we can update the `replicas` definition with the number we want and apply it with `kubectl replace -f replicaset-definition.yaml`. We can also run `kubectl scale --replicas=6 -f replicaset-definition.yaml` or `kubectl scale --replicas=6 replicaset myapp-replicaset`.
Deployment Config
The definition of a Deployment is similar to the ReplicaSet config, except that the `kind` is now `Deployment`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  template:
    metadata:
      name: webapp
      labels:
        type: frontend
    spec:
      containers:
        - name: web-container
          image: kodekloud/webapp-color
  replicas: 3
  selector:
    matchLabels:
      type: frontend
```
If we have `kubectl` installed, we can run `kubectl create -f deployment-definition.yaml` to get the deployment created. We can use `kubectl get deployment` to view the new deployment that we created.
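Since the point of a Deployment is rolling updates and rollbacks, here is a minimal sketch of that workflow for the `webapp` Deployment above (the `v2` image tag is an assumption for illustration):
```bash
# update the container image; the Deployment rolls pods over one by one
kubectl set image deployment/webapp web-container=kodekloud/webapp-color:v2

# watch the rolling update progress
kubectl rollout status deployment/webapp

# roll back to the previous version if something goes wrong
kubectl rollout undo deployment/webapp
```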
Service Config
NodePort Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort # type of service
  ports:
    - targetPort: 8080 # port on the pods or replicas
      port: 8080       # port on the service object
      nodePort: 30080  # port on the node
  selector:
    app: simple-webapp # when there are multiple pods, we pick them out by the app label
```
If we have `kubectl` installed, we can run `kubectl create -f service-definition.yaml` to get the service created. We can use `kubectl get service` to view the new service that we created.
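To check that the NodePort service actually answers, a quick sketch (the node IP is a placeholder for the IP of any cluster node; the last command only applies if you run on minikube):
```bash
kubectl get service webapp-service      # confirm the service and its node port
curl http://<node-ip>:30080             # hit the application through the node port
minikube service webapp-service --url   # on minikube, print the reachable URL
```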