Kubernetes Series Learning Note 5: Security

This blog covers the security configuration of Kubernetes.

Certificate

Certificates are a standard way to secure the communication between a server and its clients. I have a blog discussing SSL/TLS certificates here. In Kubernetes, there are 3 types of certificates: root certificates (the CA), client certificates, and server certificates. Client certificates are required by admin, Kube Scheduler, Kube Controller Manager, Kube Proxy, Kube API Server, and Kubelet Server. Server certificates are required by ETCD, Kube API Server, and Kubelet Server.

Certificate Creation

Here is an example of how to generate a certificate. Suppose we already have the CA certificate and key, and we would like to generate an admin certificate.

First we generate the admin.key,

openssl genrsa -out admin.key 2048

Then, we generate the certificate signing request,

openssl req -new -key admin.key -subj "/CN=kube-admin" -out admin.csr
# we can add /O= as group info for the subj content

Finally, we sign the certificate,

openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt
# -CAcreateserial generates the CA serial number file if it does not exist yet

To view the certificate, we can run

openssl x509 -in admin.crt -text -noout
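Putting the steps above together, here is a runnable sketch that also creates a throwaway CA so the whole flow can be tried end to end (the CA subject and the system:masters group are illustrative):

```shell
# work in a scratch directory so no real files are touched
cd "$(mktemp -d)"

# a throwaway CA, for illustration only
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=KUBERNETES-CA" -days 365 -out ca.crt

# the admin certificate flow from above (with /O= carrying the group info)
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=kube-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt

# confirm the signed certificate chains back to the CA
openssl verify -CAfile ca.crt admin.crt
```

The final command should report that admin.crt verifies OK against the CA.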

Remark

  • The ETCD server can be deployed as a cluster across multiple servers for high availability, so we may generate peer certificates to secure the communication between the members of the ETCD cluster and declare them with the --peer-cert-file and --peer-key-file arguments in the ETCD config yaml file.
  • Since the Kube API Server acts as both client and server, we have apiserver.crt, apiserver-kubelet-client.crt and apiserver-etcd-client.crt. They are specified under the arguments --tls-cert-file, --kubelet-client-certificate and --etcd-certfile respectively in the config yaml file.
  • For the Kubelet, a certificate and key are generated for each node, and the certificate is named after the node.
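For example, the TLS-related flags on the etcd command line look roughly like this (the file paths are illustrative; they follow the default kubeadm layout):

```shell
etcd \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --key-file=/etc/kubernetes/pki/etcd/server.key \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
```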

Certificate API

When a user requests a certificate to be signed, the steps are as follows.

First, create a CertificateSigningRequest object as follows

apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
    name:
spec:
    groups:
    - system:authenticated # for example
    usages:
    - digital signature
    - key encipherment
    - server auth
    request:
      # paste the base64-encoded CSR content here, as a single line

For the request field, we paste the output of cat user.csr | base64 | tr -d '\n' (the value must be a single line). Then we can run kubectl get csr to review pending signing requests and kubectl certificate approve <csr-name> to approve one. Finally, we can view the signed certificate by running kubectl get csr <csr-name> -o yaml and share the base64-decoded certificate with the user. Note that certificates.k8s.io/v1beta1 has since been replaced by certificates.k8s.io/v1, which additionally requires a signerName field.
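Putting this together, here is a sketch that builds the manifest for a hypothetical user jane, using the current certificates.k8s.io/v1 API (with the v1 API, a client certificate request must use the kubernetes.io/kube-apiserver-client signer and the client auth usage):

```shell
# generate a key and CSR for a hypothetical user "jane"
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr

# base64-encode the CSR on a single line for the request field
REQUEST=$(base64 < jane.csr | tr -d '\n')

# build the CertificateSigningRequest manifest
cat > jane-csr.yaml <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
  request: $REQUEST
EOF
```

The manifest can then be submitted with kubectl apply -f jane-csr.yaml and approved as described above.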

KubeConfig

Specifying the certificates as command-line arguments every time we interact with the cluster is tedious. Instead, we can define a KubeConfig file that holds the security configuration for servers, clients, and the relationships between them.

Here is an example of KubeConfig yaml.

apiVersion: v1
kind: Config
current-context:
clusters:
- name: 
  cluster:
    certificate-authority:
    server:
contexts:
- name:
  context:
    cluster:
    user:
    namespace:
users:
- name:
  user:
    client-certificate:
    client-key:

A context defines which user account will be used to access which cluster.
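A filled-in version might look like this (the cluster name, user name, and file paths are all illustrative):

```yaml
apiVersion: v1
kind: Config
current-context: admin@my-cluster
clusters:
- name: my-cluster
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://my-cluster:6443
contexts:
- name: admin@my-cluster
  context:
    cluster: my-cluster
    user: admin
    namespace: default
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/users/admin.crt
    client-key: /etc/kubernetes/pki/users/admin.key
```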

Role Based Access Controls

Kubernetes has role based access control similar to other systems such as databases or AWS. We first create a role, which defines a set of permissions, with a config file.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
    name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]
  resourceNames: ["blue", "orange"]
# can have multiple rules

To link a user to the role, we define a RoleBinding object.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
    name: user-developer-binding
subjects:
- kind: User
  name: user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

After these are created, we can check our access by running, for example, kubectl auth can-i <operation> --as <user>. A Role is scoped to a namespace; to grant access to cluster-scoped resources such as nodes, we define a ClusterRole instead.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
    name: cluster-administrator
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "get", "create", "delete"]
# can have multiple rules

We need a ClusterRoleBinding now.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
    name: cluster-admin-role-binding
subjects:
- kind: User
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-administrator
  apiGroup: rbac.authorization.k8s.io
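To verify the bindings above, kubectl auth can-i supports impersonation and namespace scoping (the dev-user name is illustrative):

```shell
# can the current user create deployments?
kubectl auth can-i create deployments

# impersonate another user to test that user's bindings
kubectl auth can-i delete nodes --as cluster-admin

# scope the check to a namespace
kubectl auth can-i get pods --as dev-user --namespace test
```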