Kubernetes Series Learning Note 3: Scheduling

Recently I learned how scheduling works in Kubernetes. Scheduling, in Kubernetes terms, is how pods are assigned to nodes. In this blog, I will cover some concepts related to Kubernetes scheduling.

Labels and Selectors

In a real industrial environment, a Kubernetes cluster may contain many different objects such as pods, services, deployments, nodes, etc. Labels and selectors give us an easy way to view or filter these objects. With a simple key-value pair, we can select objects according to their type, app name, or app functionality.
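
For example, assuming some pods carry the labels app: myapp and tier: frontend in their metadata (placeholder values just for illustration), we could filter them like this,

kubectl get pods --selector app=myapp
kubectl get pods --selector app=myapp,tier=frontend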

Taints and Tolerations

Taints and tolerations behave much like their literal meanings suggest. Their purpose is to help the scheduler place pods onto nodes in the way we want.

If we taint a node with the value blue, then only pods with a matching blue toleration can be scheduled on that node. However, pods with the blue toleration can still land on other nodes that do not carry the blue taint. A taint only keeps out pods that lack the corresponding toleration; it does not attract tolerating pods to the node.

We can use kubectl taint nodes node-name key=value:taint-effect to taint a node, where taint-effect is one of NoSchedule, PreferNoSchedule, or NoExecute. To add a toleration to a pod, we modify the pod definition yaml file with the tolerations key under the spec section.

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: myapp
    tier: db
spec:
  containers:
    - name: redis
      image: redis:alpine
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
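
For the toleration above to take effect, the node itself needs a matching taint. Assuming a node named node01 (a hypothetical name), the commands would look like this; the trailing minus sign removes the taint again,

kubectl taint nodes node01 app=blue:NoSchedule
kubectl taint nodes node01 app=blue:NoSchedule-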

Apart from tainting nodes, we can also label nodes via kubectl label nodes <node-name> <label-key>=<label-value>. A pod can then target a labelled node through the nodeSelector field in its spec. Combining this with taints and tolerations, we can schedule pods or applications onto the desired nodes easily.
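
As a minimal sketch, assuming we labelled a node with size=Large, a pod definition using nodeSelector could look like this,

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: data-processor
      image: data-processor
  nodeSelector:
    size: Large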

Node Affinity

As mentioned above, nodeSelector only supports simple one-to-one key-value matching. Node affinity provides a more expressive way to match node labels, for example accepting several values under the same key, by defining the pod yaml file like this,

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: data-processor
      image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium

There are three types of node affinity here.

  • requiredDuringSchedulingIgnoredDuringExecution
  • preferredDuringSchedulingIgnoredDuringExecution
  • requiredDuringSchedulingRequiredDuringExecution

DuringScheduling refers to when a pod is first created and placed on a node. requiredDuringScheduling means the pod must satisfy the match expression in order to be scheduled; if no node satisfies the expression, the pod will not get scheduled at all. preferredDuringScheduling means the scheduler will first try to find a matching node, but if none fits, the pod will still be scheduled on one of the other available nodes. DuringExecution refers to pods that are already running on a node. IgnoredDuringExecution means running pods are not affected even if the label is later removed from the node, while RequiredDuringExecution means running pods would be evicted if the label is removed from the node.
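
As a rough sketch of the preferred type, the affinity section of the pod above could be rewritten with a weight (the weight of 1 here is just an illustrative choice); the scheduler favors matching nodes but falls back to any other node if none match,

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium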

Daemon Sets

A daemon set is similar to a replicaset, but it runs exactly one replica of a pod on every worker node. Each time a new node comes up, a new daemon set pod is created on that node. Daemon sets are mainly used for monitoring solutions and log collectors. Apart from those purposes, networking is another use case: kube-proxy, a core Kubernetes component, is typically deployed as a daemon set.

Here is an example of the definition file,

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
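
Assuming the definition above is saved as daemon-set-definition.yaml (an assumed file name), we can create and inspect it like this,

kubectl create -f daemon-set-definition.yaml
kubectl get daemonsets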