Wednesday, 31 May 2017

Fun With JSONPath and Kubernetes

So, I'm trying to get my head around this Kubernetes stuff, and since the kubectl tool can output everything as JSON, I'm going to use JSONPath.sh to learn k8s!

If you've not done so already, read about JSONPath.sh at http://jsonpath.obdi.io/ or https://github.com/mclarkson/JSONPath.sh and then install it using the instructions there.

You'll need a Bash shell to use JSONPath.sh and kubectl, so fire up your terminal and let's explore...
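
If JSONPath.sh installed correctly, a quick smoke test should work. This is just a sketch to check the plumbing - JSONPath.sh reads JSON on stdin and prints any matches in JSON.sh format (the path, then the value):

$ echo '{"store":{"bicycle":{"color":"red","price":19.95}}}' | \
  JSONPath.sh '$..color'
["store","bicycle","color"]	"red"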

Exploration

I have a working Kubernetes installation running on three VirtualBox machines. I got Kubernetes installed using the davidkbainbridge/k8s-playground GitHub project, chosen because it uses kubeadm to bootstrap the cluster.

Once the 3-node cluster is up we can see that user pods will be scheduled to run on the 'worker' nodes only:

$ kubectl get nodes -o json | JSONPath.sh '..spec' -j -u
[
    {
        "spec":
        {
            "externalID":"k8s1",
            "taints":
            [
                {
                    "effect":"NoSchedule",
                    "key":"node-role.kubernetes.io/master",
                    "timeAdded":null
                }
            ]
        }
    },
    {
        "spec":
        {
            "externalID":"k8s2"
        }
    },
    {
        "spec":
        {
            "externalID":"k8s3"
        }
    }
]
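
That 'NoSchedule' taint on k8s1 is what keeps user pods off the master. As an aside, if you ever did want user pods on the master too, the taint can be removed - a sketch (k8s1 is my master node; the trailing '-' tells kubectl to delete the taint):

$ kubectl taint nodes k8s1 node-role.kubernetes.io/master-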

I started messing around with Pods, Deployments, ReplicaSets etc. and installed the Kubernetes Dashboard. First I used the yaml files that came with the current stable Kubernetes (v1.6.4). That worked, but the dashboard could not authenticate. Next I tried the yaml file from https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml, which uses RBAC, and that worked! How? (More on that later...)

So, I've already looked at some of the resources, but there's a whole bunch I have been ignoring. From the kubernetes-dashboard.yaml I see that it uses additional items, ServiceAccount and ClusterRoleBinding.

I want a good overview of this running system, so maybe I can save the entire Kubernetes configuration in one file, grep through it to see the relationships, and take the file with me to go through on any host... maybe... anyway, let's try.

Save the entire kubernetes config using the following:

# Get the list of all resource names and skip the first one ('all')
resources=`kubectl help get | grep '\*' | tail -n +2 | cut -d " " -f 4`

# Empty the file
>AllResources

# Merge all resources into one JSON.sh format file
for i in $resources; do kubectl get --all-namespaces $i -o json | \
  JSONPath.sh | sed 's/^\[/["'"$i"'",/' >>AllResources; done

# Create a .json file from the merged resources file
JSONPath.sh -p <AllResources >AllResources.json

Now there's a JSON file called AllResources.json that contains everything. Mine is over half a megabyte in size and took about a minute to create.
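
As a quick sanity check that the merge worked, the node names can be pulled back out of it - a sketch against my cluster (your node names will differ):

$ JSONPath.sh -f AllResources.json '$.nodes.items[*].metadata.name' -b
k8s1
k8s2
k8s3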

Let's start searching...

Get an overview of everything by stepping through the JSON.sh format output. The first column is the resource name.

less AllResources

We can see all the 'kind' values that were actually used with the following (I'd cross out Group, List and User here - they were found as 'kind' values inside other resources but aren't actually top-level resources):

$ JSONPath.sh -f AllResources.json '..kind' -b | sort -u
CertificateSigningRequest
ClusterRole
ClusterRoleBinding
ComponentStatus
ConfigMap
DaemonSet
Deployment
Endpoints
Group
List
Namespace
Node
Pod
ReplicaSet
Role
RoleBinding
Secret
Service
ServiceAccount
User

Type 'kubectl get' and compare the list of all available resources with the kinds above - fewer than half are actually used.
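
A rough way to put numbers on that - a sketch only, since kinds and resource names don't map one-to-one, so the counts are approximate:

# Distinct kinds actually present in the dump
$ JSONPath.sh -f AllResources.json '..kind' -b | sort -u | wc -l
20

# Resource types kubectl knows about, reusing the earlier help-scraping trick
$ kubectl help get | grep '\*' | tail -n +2 | wc -l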

I spent a while, until I got tired (about 40 minutes!), going through the AllResources file and trying different JSONPaths, but it didn't reveal to me how the Dashboard magically worked.

I found that there was a mount point magically added:

$ JSONPath.sh -f AllResources.json -j -i -u \
  '$.pods.items[?(@.metadata.name==".*dash.*")]..[".*volume.*"]'
{
    "containers":
    [
        {
            "volumeMounts":
            [
                {
                    "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount",
                    "name":"kubernetes-dashboard-token-w54jx",
                    "readOnly":true
                }
            ]
        }
    ],
    "volumes":
    [
        {
            "name":"kubernetes-dashboard-token-w54jx",
            "secret":
            {
                "defaultMode":420,
                "secretName":"kubernetes-dashboard-token-w54jx"
            }
        }
    ]
}
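
Incidentally, kubectl has a built-in JSONPath output format of its own. Its dialect is more limited than JSONPath.sh (no regex filters, for instance), but a rough equivalent listing of the volume names would look something like:

$ kubectl -n kube-system get pods -o \
  jsonpath='{.items[*].spec.volumes[*].name}'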

And using grep on the JSON.sh format file:

$ grep "kubernetes-dashboard-token" AllResources
["pods","items",11,"spec","containers",0,"volumeMounts",0,"name"]       "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"name"]   "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"secret","secretName"]    "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","name"]        "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","selfLink"]    "/api/v1/namespaces/kube-system/secrets/kubernetes-dashboard-token-w54jx"
["serviceaccounts","items",16,"secrets",0,"name"]       "kubernetes-dashboard-token-w54jx"

This shows that the ServiceAccount was created, as instructed by the yaml file, but the Secret resource was also magically created along with a reference to it.

Looking at the output from the following command:

JSONPath.sh -f AllResources.json \
  '$.secrets.items[?(@.metadata.name==".*dash.*")]' -j

There is quite a bit of information for this automatically created Secret.
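
The same Secret can also be fetched straight from the API instead of from the dump - for example (the token name comes from my cluster and the grep output above; yours will have a different random suffix):

$ kubectl -n kube-system describe secret kubernetes-dashboard-token-w54jx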

So, there's a lot of magic going on here and the only way to find out what's going on is by reading the source code or the documentation. Hopefully the docs will tell me what's going on, so off to the docs I go...

The Kubernetes documentation helped (and is great!) but did not answer all my questions. Here's what I found:

Magically Added Mount Point

On the Master node all the mounts can be seen by using 'docker inspect':

$ sudo docker ps | \
  awk '{ print $NF; }' | \
  while read a; do echo "$a"; sudo docker inspect $a | \
  JSONPath.sh '..Mounts[?(@.Destination==".*serviceaccount")]' -j -u | \
  grep -v "^$"; done
Error: No such image, container or task: NAMES
k8s_kube-apiserver_kube-apiserver-k8s1_kube-system_ec32b5b6e23058ce97a4d8bdc3628e81_1
k8s_kubedns_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_sidecar_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_dnsmasq_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_POD_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
k8s_weave-npc_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_weave_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_POD_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
k8s_kube-proxy_kube-proxy-rsdbd_kube-system_09c2216f-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09c2216f-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-proxy-token-m9b6j",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
 
The output shows those magical serviceaccount entries are mounted from directories under '/var/lib/kubelet', and the output of 'mount' matches. Those directories hold files whose filenames match the key names in the data part of the Secret, and each file's contents are the decoded value of the corresponding key.
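
Those files are also visible from inside any running pod. A quick check (the pod and container names are taken from the docker output above - substitute your own):

$ kubectl -n kube-system exec kube-dns-3913472980-mbxbg -c kubedns -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token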

The Kubernetes documentation also describes the ServiceAccount:
"A service account provides an identity for processes that run in a Pod."
So now I think I know how the magic was created, but what about that ClusterRoleBinding?

$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[*].metadata.name' -b
cluster-admin
kubeadm:kubelet-bootstrap
kubeadm:node-proxier
kubernetes-dashboard
system:basic-user
system:controller:attachdetach-controller
system:controller:certificate-controller
system:controller:cronjob-controller
system:controller:daemon-set-controller
system:controller:deployment-controller
system:controller:disruption-controller
system:controller:endpoint-controller
system:controller:generic-garbage-collector
system:controller:horizontal-pod-autoscaler
system:controller:job-controller
system:controller:namespace-controller
system:controller:node-controller
system:controller:persistent-volume-binder
system:controller:pod-garbage-collector
system:controller:replicaset-controller
system:controller:replication-controller
system:controller:resourcequota-controller
system:controller:route-controller
system:controller:service-account-controller
system:controller:service-controller
system:controller:statefulset-controller
system:controller:ttl-controller
system:discovery
system:kube-controller-manager
system:kube-dns
system:kube-scheduler
system:node
system:node-proxier
weave-net


Looking at cluster-admin and kubernetes-dashboard:

$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[?(@.metadata.name==cluster-admin)]' -j
{
    "clusterrolebindings":
    {
        "items":
        [
            {
                "apiVersion":"rbac.authorization.k8s.io/v1beta1",
                "kind":"ClusterRoleBinding",
                "metadata":
                {
                    "annotations":
                    {
                        "rbac.authorization.kubernetes.io/autoupdate":"true"
                    },
                    "creationTimestamp":"2017-05-11T12:26:11Z",
                    "labels":
                    {
                        "kubernetes.io/bootstrapping":"rbac-defaults"
                    },
                    "name":"cluster-admin",
                    "resourceVersion":"50",
                    "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingscluster-admin",
                    "uid":"04d54c20-3645-11e7-877b-02193b8aadb3"
                },
                "roleRef":
                {
                    "apiGroup":"rbac.authorization.k8s.io",
                    "kind":"ClusterRole",
                    "name":"cluster-admin"
                },
                "subjects":
                [
                    {
                        "apiGroup":"rbac.authorization.k8s.io",
                        "kind":"Group",
                        "name":"system:masters"
                    }
                ]
            }
        ]
    }
}


$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[?(@.metadata.name==kubernetes-dashboard)]' -j
{
    "clusterrolebindings":
    {
        "items":
        [
            {
                "apiVersion":"rbac.authorization.k8s.io/v1beta1",
                "kind":"ClusterRoleBinding",
                "metadata":
                {
                    "creationTimestamp":"2017-05-25T08:08:08Z",
                    "labels":
                    {
                        "k8s-app":"kubernetes-dashboard"
                    },
                    "name":"kubernetes-dashboard",
                    "resourceVersion":"1597763",
                    "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingskubernetes-dashboard",
                    "uid":"497ccb3a-4121-11e7-a9e3-02193b8aadb3"
                },
                "roleRef":
                {
                    "apiGroup":"rbac.authorization.k8s.io",
                    "kind":"ClusterRole",
                    "name":"cluster-admin"
                },
                "subjects":
                [
                    {
                        "kind":"ServiceAccount",
                        "name":"kubernetes-dashboard",
                        "namespace":"kube-system"
                    }
                ]
            }
        ]
    }
}
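
Incidentally, the same binding could have been created imperatively rather than from the YAML file - a sketch (don't run it on a cluster where the binding already exists):

$ kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard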


So the kubernetes-dashboard ClusterRoleBinding references the cluster-admin ClusterRole, as does the cluster-admin ClusterRoleBinding, and that referenced cluster-admin ClusterRole looks like:

$ JSONPath.sh -f AllResources.json \
  '$.clusterroles.items[?(@.metadata.name==cluster-admin)]' -j -u
{
    "apiVersion":"rbac.authorization.k8s.io/v1beta1",
    "kind":"ClusterRole",
    "metadata":
    {
        "annotations":
        {
            "rbac.authorization.kubernetes.io/autoupdate":"true"
        },
        "creationTimestamp":"2017-05-11T12:26:11Z",
        "labels":
        {
            "kubernetes.io/bootstrapping":"rbac-defaults"
        },
        "name":"cluster-admin",
        "resourceVersion":"6",
        "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolescluster-admin",
        "uid":"04b75ff2-3645-11e7-877b-02193b8aadb3"
    },
    "rules":
    [
        {
            "apiGroups":
            [
                "*"
            ],
            "resources":
            [
                "*"
            ],
            "verbs":
            [
                "*"
            ]
        },
        {
            "nonResourceURLs":
            [
                "*"
            ],
            "verbs":
            [
                "*"
            ]
        }
    ]
}


Whatever that means! (Roughly: every verb on every resource in every API group, plus all non-resource URLs - full cluster admin.) There's a lot to read about here: https://kubernetes.io/docs/admin/authorization/

So, in answer to the question, "How did kubernetes-dashboard just work?", the answer is below.

How did the kubernetes-dashboard just work?


The kubernetes-dashboard YAML file contains 4 resources:
    1. ServiceAccount
    2. ClusterRoleBinding
    3. Deployment
    4. Service
When the cluster administrator created the Dashboard, with 'kubectl create -f https://git.io/kube-dashboard', the resources were applied with the following results:
  • The ServiceAccount was created, which automatically created the associated Secret resource. The Secret contains a 'data' section with three keys: ca.crt, namespace, and token, whose values are all base64 encoded (a decoding sketch follows this list). The name of this Secret is saved in the ServiceAccount's 'secrets' array.
  • The ClusterRoleBinding was created, which binds the cluster-admin ClusterRole to the kubernetes-dashboard ServiceAccount (via a subject; subjects can be groups, users or service accounts).
  • The Deployment was created in the usual way (not investigated here). This creates the ReplicaSet, which creates the Pod.
  • The Service was created in the usual way (not investigated here). This defines how to access the Pod.
The Pod's containers are started by the kubelet on the scheduled node. Before the actual kubernetes-dashboard container is started, the kubelet creates a directory under '/var/lib/kubelet/...' containing one file per key in the Secret's data section. This directory is passed as a volume definition to docker so that it is mounted inside the container. The Secret files are created on-demand on tmpfs filesystems, so the contents are lost when the Pod is deleted - it cleans up after itself.
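
As promised in the first bullet above, here's a sketch of pulling the token out of that auto-created Secret and decoding it (the Secret name is from my cluster - find yours with 'kubectl -n kube-system get secrets'):

$ kubectl -n kube-system get secret kubernetes-dashboard-token-w54jx -o json | \
  JSONPath.sh '$..data.token' -b | base64 -d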

And finally after doing all this I searched for '/var/run/secrets/kubernetes.io/serviceaccount' in the Github kubernetes/dashboard project and found a document that explains the 'magic', here: https://github.com/kubernetes/dashboard/blob/ac938b5ad86d17188a27002d5854ef2edc9d7ce5/docs/user-guide/troubleshooting.md

Now I'm happy - wasn't that fun? 

