Monday, 16 October 2017
How to clone all your GitHub projects
I have over 60 projects in GitHub, including forks, and thought that I really should back them up locally. I started doing it manually and after a few minutes realised that this is a job for the computer to do.
So, I wrote a quick script to do it for me.
To use the script, first install JSONPath.sh.
Then copy the script into a new empty directory. The script is available as a GitHub Gist from https://gist.github.com/mclarkson/5f53e0ca46e1f3989dc0b69b6818b410
Then you just type:
./clone-my-project.sh github_user_id
and it will clone them all, lovely!
NOTE: Look at the top of the source code to see how to change from cloning using ssh to cloning using https.
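If you just want to see the shape of such a script, here's a minimal sketch (an assumption, not the actual Gist): it pages through the GitHub API with curl, pulls each repository's ssh_url out with JSONPath.sh, and clones anything not already present. Swap ssh_url for clone_url to clone over https.
#!/bin/bash
# Minimal sketch - clone all public repos for a user (not the actual Gist).
user="$1"
[[ -z $user ]] && { echo "Usage: $0 github_user_id"; exit 1; }
page=1
while :; do
    # Up to 100 repos per page; stop when a page comes back empty
    repos=$(curl -s "https://api.github.com/users/$user/repos?per_page=100&page=$page" |
        JSONPath.sh '..ssh_url' -b)
    [[ -z $repos ]] && break
    for r in $repos; do
        d=${r##*/}; d=${d%.git}
        [[ -d $d ]] || git clone "$r"
    done
    page=$((page+1))
done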
Sunday, 18 June 2017
Sportsworkout.com/chart.htm
I was reading 'The Ultimate Guide to Weight Training for Gymnastics', and went to download the training cards from sportsworkout.com/chart.htm but the site has gone and I couldn't find the training cards anywhere!
I created a set of training cards and I'm sharing them here to save others the hassle! I'll upload a PDF version if anyone asks for it (I couldn't upload it here unfortunately, so it would be hosted somewhere else), but a JPEG version is below.
Note that Blogger reduces the size/quality of the JPEG so if anyone wants the originals just say 'post the originals' in the comments and I'll upload them to GitHub.
Enjoy!
Friday, 16 June 2017
How to include a subdirectory of an excluded directory
This post is for the Obdi Rsync Backup plugin.
So you have a backup configuration for a server with a bunch of excludes and now you need to explicitly include just one directory, but a parent of that directory is explicitly excluded. What do you do?
Add a new Host entry.
The graphic at the top of this post shows how. In that graphic the top line has '/opt/work/**' excluded. The next line has a new host (which is actually the same host) but with the full path, 'backup/opt/work/perfci_result_trends', shown for the Include.
That's all :)
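For anyone curious what this looks like in plain rsync filter terms, the effect is roughly the ordering below (a sketch only; the source and destination paths are placeholders and this is not literally what the plugin generates). The include is matched before the later exclude, and '***' matches the directory plus everything inside it:
rsync -a \
  --include='/opt/work/perfci_result_trends/***' \
  --exclude='/opt/work/**' \
  /source/ /destination/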
Wednesday, 7 June 2017
JSONPath Cheat Sheet
I'll keep adding to this post. JSONPath.sh is required.
The following "one-liner" will change 'spec.replicas' to 2 for hello-deployment:
Set the bash function:
- List all containers in a Kubernetes system Pod
- Get the Internal IP Address of all Kubernetes Nodes
- Show all the Volume Mounts for a Pod
- Modify the Number of Replicas in a Kubernetes Deployment
List all containers in a Kubernetes system Pod
Get the list of System Pods
$ kubectl get -n kube-system pods
NAME READY STATUS RESTARTS AGE
etcd-k8s1 1/1 Running 0 26d
kube-apiserver-k8s1 1/1 Running 1 26d
kube-controller-manager-k8s1 1/1 Running 0 26d
kube-dns-3913472980-mbxbg 3/3 Running 0 26d
kube-proxy-7r5p1 1/1 Running 0 26d
kube-proxy-l6dq4 1/1 Running 0 26d
kube-proxy-rsdbd 1/1 Running 0 26d
kube-scheduler-k8s1 1/1 Running 0 26d
kubernetes-dashboard-2039414953-bb1jx 1/1 Running 0 13d
weave-net-pfv92 2/2 Running 0 22d
weave-net-tnwjt 2/2 Running 35 22d
weave-net-xnfk9 2/2 Running 34 22d
Find the names of all the containers in the weave-net-pfv92 Pod
$ kubectl get -n kube-system pods weave-net-pfv92 -o json \
| JSONPath.sh '..containers[*].name' -b
weave
weave-npc
Get the Internal IP Address of all Kubernetes Nodes
$ kubectl get nodes -o json | JSONPath.sh -b \
'..addresses[?(@.type=="InternalIP")].address'
172.42.42.1
172.42.42.2
172.42.42.3
Show all the Volume Mounts for a Pod
$ kubectl get -n kube-system pods etcd-k8s1 -o json \
| JSONPath.sh -j '..[volumeMounts,volumes]'
{
"spec":
{
"containers":
[
{
"volumeMounts":
[
{
"mountPath":"/etc/ssl/certs",
"name":"certs"
},
{
"mountPath":"/var/lib/etcd",
"name":"etcd"
},
{
"mountPath":"/etc/kubernetes/",
"name":"k8s",
"readOnly":true
}
]
}
],
"volumes":
[
{
"hostPath":
{
"path":"/etc/ssl/certs"
},
"name":"certs"
},
{
"hostPath":
{
"path":"/var/lib/etcd"
},
"name":"etcd"
},
{
"hostPath":
{
"path":"/etc/kubernetes"
},
"name":"k8s"
}
]
}
}
Modify the Number of Replicas in a Kubernetes Deployment
The following "one-liner" will change 'spec.replicas' to 2 for hello-deployment:
$ kubectl get deployment hello-deployment -o json | \
JSONPath.sh | \
sed 's/\["spec","replicas"\].*/["spec","replicas"]\t2/' | \
JSONPath.sh -p | \
kubectl replace deployment hello-deployment -f -
deployment "hello-deployment" replaced
Or you could use a bash function and reuse it at will.
Set the bash function:
$ change_replicas() { kubectl get deployment $1 -o json | \
JSONPath.sh | \
sed 's/\["spec","replicas"\].*/["spec","replicas"]\t'"$2"'/' | \
JSONPath.sh -p | \
kubectl replace deployment $1 -f - ;}
Then use it:
$ change_replicas hello-deployment 4
deployment "hello-deployment" replaced
$ change_replicas hello-deployment 1
deployment "hello-deployment" replaced
Wednesday, 31 May 2017
Fun With JSONPath and Kubernetes
So, I'm trying to get my head around this Kubernetes stuff, and since the kubectl tool can output everything in JSON I'm going to try to use JSONPath.sh to learn k8s!
If you've not done so already, read about JSONPath.sh at http://jsonpath.obdi.io/ or https://github.com/mclarkson/JSONPath.sh and then install it using the instructions there.
You'll need a Bash shell to use JSONPath.sh and kubectl so fire up your terminal and let's explore...
Exploration
I have a working Kubernetes installation running on three VirtualBox machines. I installed Kubernetes using the davidkbainbridge/k8s-playground GitHub project, chosen because it uses kubeadm to bootstrap the cluster.
Once the 3-node cluster is up we can see that user pods will be scheduled to run on the 'worker' nodes only:
$ kubectl get nodes -o json | JSONPath.sh '..spec' -j -u
[
{
"spec":
{
"externalID":"k8s1",
"taints":
[
{
"effect":"NoSchedule",
"key":"node-role.kubernetes.io/master",
"timeAdded":null
}
]
}
},
{
"spec":
{
"externalID":"k8s2"
}
},
{
"spec":
{
"externalID":"k8s3"
}
}
]
I started messing around with Pods, Deployments, ReplicaSets etc. and installed the Kubernetes Dashboard. First I used the yaml files that came with the current stable kubernetes (v1.6.4). That worked but the dashboard could not authenticate. Next I tried the yaml files from https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml, which uses RBAC, and that worked! How? (more on that later...)
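For the record, applying that file is a one-liner (a sketch; the raw.githubusercontent.com URL is just the raw-content form of the blob link above):
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml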
So, I've already looked at some resources but there's a whole bunch I have been ignoring. From kubernetes-dashboard.yaml I see that it uses additional resource kinds: ServiceAccount and ClusterRoleBinding.
I want a good overview of this running system, so maybe I can save the entire Kubernetes configuration in one file, grep through it to see the relationships, and go through the file on any host... maybe... anyway, let's try.
Save the entire kubernetes config using the following:
# Get the list of all resource names and skip the first one ('all')
resources=`kubectl help get | grep '\*' | tail -n +2 | cut -d " " -f 4`
# Empty the file
>AllResources
# Merge all resources into one JSON.sh format file
for i in $resources; do kubectl get --all-namespaces $i -o json | \
JSONPath.sh | sed 's/^\[/["'"$i"'",/' >>AllResources; done
# Create a .json file from the merged resources file
JSONPath.sh -p <AllResources >AllResources.json
Now there's a JSON file called AllResources.json that contains everything. Mine is over half a megabyte in size and took about a minute to create.
Let's start searching...
Get an overview of everything by stepping through the JSON.sh format output. The first column is the resource name.
less AllResources
We can see all the resource kinds that were actually used with the following (Group, List and User also appear in the output but aren't actually top-level resources, so ignore those):
$ JSONPath.sh -f AllResources.json '..kind' -b | sort -u
CertificateSigningRequest
ClusterRole
ClusterRoleBinding
ComponentStatus
ConfigMap
DaemonSet
Deployment
Endpoints
Group
List
Namespace
Node
Pod
ReplicaSet
Role
RoleBinding
Secret
Service
ServiceAccount
User
Type 'kubectl get' (with no arguments) to see all the available resource types and compare - less than half of them are actually used.
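To put rough numbers on that, reusing the same 'kubectl help get' parsing as earlier:
# Resource types kubectl knows about
$ kubectl help get | grep '\*' | tail -n +2 | wc -l
# Distinct kinds actually present in the cluster dump
$ JSONPath.sh -f AllResources.json '..kind' -b | sort -u | wc -l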
I spent a while, until I got tired (about 40 minutes!), going through the AllResources file and trying different JSONPaths, but it did not reveal how the Dashboard magically worked.
I found that there was a mount point magically added:
$ JSONPath.sh -f AllResources.json -j -i -u \
'$.pods.items[?(@.metadata.name==".*dash.*")]..[".*volume.*"]'
{
"containers":
[
{
"volumeMounts":
[
{
"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount",
"name":"kubernetes-dashboard-token-w54jx",
"readOnly":true
}
]
}
],
"volumes":
[
{
"name":"kubernetes-dashboard-token-w54jx",
"secret":
{
"defaultMode":420,
"secretName":"kubernetes-dashboard-token-w54jx"
}
}
]
}
"containers":
[
{
"volumeMounts":
[
{
"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount",
"name":"kubernetes-dashboard-token-w54jx",
"readOnly":true
}
]
}
],
"volumes":
[
{
"name":"kubernetes-dashboard-token-w54jx",
"secret":
{
"defaultMode":420,
"secretName":"kubernetes-dashboard-token-w54jx"
}
}
]
}
And using grep on the JSON.sh format file:
$ grep "kubernetes-dashboard-token" AllResources
["pods","items",11,"spec","containers",0,"volumeMounts",0,"name"] "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"name"] "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"secret","secretName"] "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","name"] "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","selfLink"] "/api/v1/namespaces/kube-system/secrets/kubernetes-dashboard-token-w54jx"
["serviceaccounts","items",16,"secrets",0,"name"] "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","containers",0,"volumeMounts",0,"name"] "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"name"] "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"secret","secretName"] "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","name"] "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","selfLink"] "/api/v1/namespaces/kube-system/secrets/kubernetes-dashboard-token-w54jx"
["serviceaccounts","items",16,"secrets",0,"name"] "kubernetes-dashboard-token-w54jx"
This shows that the ServiceAccount was created, as instructed by the yaml file, but the Secret resource was also magically created along with a reference to it.
Looking at the output from the following command:
JSONPath.sh -f AllResources.json \
'$.secrets..items[?(@.metadata.name==".*dash.*")]' -j
There is quite a bit of information for this automatically created Secret.
So, there's a lot of magic going on here and the only way to find out what's going on is by reading the source code or the documentation. Hopefully the docs will tell me what's going on, so off to the docs I go...
The Kubernetes documentation helped (and it's great!) but did not answer all my questions; here's what I found:
Magically Added Mount Point
On the Master node all the mounts can be seen by using 'docker inspect':
$ sudo docker ps | \
awk '{ print $NF; }' | \
while read a; do echo "$a"; sudo docker inspect $a | \
JSONPath.sh '..Mounts[?(@.Destination==".*serviceaccount")]' -j -u | \
grep -v "^$"; done
Error: No such image, container or task: NAMES
Error: No such image, container or task: NAMES
k8s_kube-apiserver_kube-apiserver-k8s1_kube-system_ec32b5b6e23058ce97a4d8bdc3628e81_1
k8s_kubedns_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
k8s_sidecar_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
k8s_dnsmasq_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
k8s_POD_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
k8s_weave-npc_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
k8s_weave_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
k8s_POD_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
k8s_kube-proxy_kube-proxy-rsdbd_kube-system_09c2216f-3645-11e7-877b-02193b8aadb3_0
{
"Source":"/var/lib/kubelet/pods/09c2216f-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-proxy-token-m9b6j",
"Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
"Mode":"ro",
"RW":false,
"Propagation":"rprivate"
}
The output shows that those magical serviceaccount entries are mounted from directories under '/var/lib/kubelet', and the output of 'mount' matches. Those directories hold files whose filenames match the key names in the data part of the Secret, and each file's contents are the value of that key.
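To see what is actually in there, the Secret's data values can be pulled out and decoded (a sketch: the secret name is the one found earlier, and the '..data.namespace' path assumes JSONPath.sh accepts a dotted path here):
# The Secret's data values are base64 encoded - decode one as an example
$ kubectl get secret -n kube-system kubernetes-dashboard-token-w54jx -o json | \
JSONPath.sh '..data.namespace' -b | base64 -d
# should print: kube-system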
The kubernetes documentation also describes the ServiceAccount,
"A service account provides an identity for processes that run in a Pod."So now I think I know how the magic was created but what about that ClusterRoleBinding?
$ JSONPath.sh -f AllResources.json \
'$.clusterrolebindings.items[*].metadata.name' -b
cluster-admin
kubeadm:kubelet-bootstrap
kubeadm:node-proxier
kubernetes-dashboard
system:basic-user
system:controller:attachdetach-controller
system:controller:certificate-controller
system:controller:cronjob-controller
system:controller:daemon-set-controller
system:controller:deployment-controller
system:controller:disruption-controller
system:controller:endpoint-controller
system:controller:generic-garbage-collector
system:controller:horizontal-pod-autoscaler
system:controller:job-controller
system:controller:namespace-controller
system:controller:node-controller
system:controller:persistent-volume-binder
system:controller:pod-garbage-collector
system:controller:replicaset-controller
system:controller:replication-controller
system:controller:resourcequota-controller
system:controller:route-controller
system:controller:service-account-controller
system:controller:service-controller
system:controller:statefulset-controller
system:controller:ttl-controller
system:discovery
system:kube-controller-manager
system:kube-dns
system:kube-scheduler
system:node
system:node-proxier
weave-net
Looking at cluster-admin and kubernetes-dashboard:
$ JSONPath.sh -f AllResources.json \
'$.clusterrolebindings.items[?(@.metadata.name==cluster-admin)]' -j
{
"clusterrolebindings":
{
"items":
[
{
"apiVersion":"rbac.authorization.k8s.io/v1beta1",
"kind":"ClusterRoleBinding",
"metadata":
{
"annotations":
{
"rbac.authorization.kubernetes.io/autoupdate":"true"
},
"creationTimestamp":"2017-05-11T12:26:11Z",
"labels":
{
"kubernetes.io/bootstrapping":"rbac-defaults"
},
"name":"cluster-admin",
"resourceVersion":"50",
"selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingscluster-admin",
"uid":"04d54c20-3645-11e7-877b-02193b8aadb3"
},
"roleRef":
{
"apiGroup":"rbac.authorization.k8s.io",
"kind":"ClusterRole",
"name":"cluster-admin"
},
"subjects":
[
{
"apiGroup":"rbac.authorization.k8s.io",
"kind":"Group",
"name":"system:masters"
}
]
}
]
}
}
$ JSONPath.sh -f AllResources.json \
'$.clusterrolebindings.items[?(@.metadata.name==kubernetes-dashboard)]' -j
{
"clusterrolebindings":
{
"items":
[
{
"apiVersion":"rbac.authorization.k8s.io/v1beta1",
"kind":"ClusterRoleBinding",
"metadata":
{
"creationTimestamp":"2017-05-25T08:08:08Z",
"labels":
{
"k8s-app":"kubernetes-dashboard"
},
"name":"kubernetes-dashboard",
"resourceVersion":"1597763",
"selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingskubernetes-dashboard",
"uid":"497ccb3a-4121-11e7-a9e3-02193b8aadb3"
},
"roleRef":
{
"apiGroup":"rbac.authorization.k8s.io",
"kind":"ClusterRole",
"name":"cluster-admin"
},
"subjects":
[
{
"kind":"ServiceAccount",
"name":"kubernetes-dashboard",
"namespace":"kube-system"
}
]
}
]
}
}
So the kubernetes-dashboard ClusterRoleBinding maps to the cluster-admin ClusterRole (as does the cluster-admin ClusterRoleBinding), and that referenced cluster-admin ClusterRole looks like this:
$ JSONPath.sh -f AllResources.json \
'$.clusterroles.items[?(@.metadata.name==cluster-admin)]' -j -u
{
"apiVersion":"rbac.authorization.k8s.io/v1beta1",
"kind":"ClusterRole",
"metadata":
{
"annotations":
{
"rbac.authorization.kubernetes.io/autoupdate":"true"
},
"creationTimestamp":"2017-05-11T12:26:11Z",
"labels":
{
"kubernetes.io/bootstrapping":"rbac-defaults"
},
"name":"cluster-admin",
"resourceVersion":"6",
"selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolescluster-admin",
"uid":"04b75ff2-3645-11e7-877b-02193b8aadb3"
},
"rules":
[
{
"apiGroups":
[
"*"
],
"resources":
[
"*"
],
"verbs":
[
"*"
]
},
{
"nonResourceURLs":
[
"*"
],
"verbs":
[
"*"
]
}
]
}
Whatever that means! There's a lot to read about here: https://kubernetes.io/docs/admin/authorization/
So, in answer to the question "How did kubernetes-dashboard just work?", here's what happens.
How did the kubernetes-dashboard just work?
The kubernetes-dashboard YAML file contains 4 resources:
- ServiceAccount
- ClusterRoleBinding
- Deployment
- Service
- The ServiceAccount was created, which automatically created the associated Secret resource. The Secret resource contains a 'data' section containing three variables: ca.crt, namespace, and token. The variable values are all base64 encoded. The name of this Secret is saved in the ServiceAccount in the 'secrets' array section.
- The ClusterRoleBinding was created, which binds the cluster-admin ClusterRole to the kubernetes-dashboard ServiceAccount (via a Subject; Subjects can be groups, users or service accounts) - see the quick check after this list.
- The Deployment was created in the usual way (not investigated here). This creates the ReplicaSet, which creates the Pod.
- The Service was created in the usual way (not investigated here). This defines how to access the Pod.
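A quick check of that binding (a sketch; it just queries the live object the same way as the earlier searches and should print 'cluster-admin'):
$ kubectl get clusterrolebinding kubernetes-dashboard -o json | \
JSONPath.sh '..roleRef.name' -b
cluster-admin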
And finally after doing all this I searched for '/var/run/secrets/kubernetes.io/serviceaccount' in the Github kubernetes/dashboard project and found a document that explains the 'magic', here: https://github.com/kubernetes/dashboard/blob/ac938b5ad86d17188a27002d5854ef2edc9d7ce5/docs/user-guide/troubleshooting.md
Now I'm happy - wasn't that fun?
Wednesday, 26 April 2017
Announcing: JSONPath.sh
Announcing: A JSONPath implementation written in Bash - JSONPath.sh.
It implements the JSONPath standard as written at http://goessner.net/ and was based on the original JSON.sh code from https://github.com/dominictarr/JSON.sh.
Note that the JSON output is not wrapped in an array, unlike other JSONPath processors.
The script and documentation can be found at https://github.com/mclarkson/JSONPath.sh
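A tiny taste of what to expect (a sketch; exact spacing may differ): the default output is JSON.sh-style path/value lines, -b prints bare values only, and -j reassembles JSON.
$ echo '{"store":{"book":[{"title":"A"},{"title":"B"}]}}' | JSONPath.sh '$..title'
["store","book",0,"title"]  "A"
["store","book",1,"title"]  "B"
$ echo '{"store":{"book":[{"title":"A"},{"title":"B"}]}}' | JSONPath.sh '$..title' -b
A
B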
Tuesday, 7 March 2017
CI for Obdi Plugins
[Image: A plugin I was working on]
On this page you will find one solution for auto-updating a plugin immediately after a git push. I tested it at work with Stash (now Bitbucket), using its 'Http Request Post Receive Hook', and it works well.
Theory
The workflow goes like this:
- Make a code change.
- Do a 'git commit' and 'git push'.
- The GIT provider immediately opens a user-provided Web URL.
- This URL points at a simple web server.
  - The web server runs an Obdi plugin update script.
  - The update script logs into Obdi and updates the plugin.
Code
The following code snippet is the full web server, written in Google Go (golang):

package main

import (
    "bytes"
    "io"
    "log"
    "net/http"
    "os/exec"
)

func main() {
    http.HandleFunc("/post", PostOnly(HandlePost))
    log.Fatal(http.ListenAndServe(":8988", nil))
}

type handler func(w http.ResponseWriter, r *http.Request)

func HandlePost(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()
    cmd := exec.Command("bin/kick_off_build.sh")
    var out bytes.Buffer
    cmd.Stdout = &out
    err := cmd.Run()
    if err != nil {
        io.WriteString(w, "ERROR\n"+out.String()+"\n")
        return
    }
    io.WriteString(w, out.String()+"\n")
}

func PostOnly(h handler) handler {
    return func(w http.ResponseWriter, r *http.Request) {
        if r.Method == "POST" {
            h(w, r)
            return
        }
        http.Error(w, "post only", http.StatusMethodNotAllowed)
    }
}
So, the above code, compiled with 'go build FILENAME' will create a server that:
- Listens on port 8988.
- Only accepts a POST request - and it discards any data sent to it.
- Runs the script './bin/kick_off_build.sh', which is shown next.
#!/bin/bash
ipport="1.2.3.4:443"
plugin="myplugin"
adminpass="password"
# Login
guid=`curl -ks -d \
'{"Login":"admin","Password":"'"$adminpass"'"}' \
https://$ipport/api/login | grep -o "[a-z0-9][^\"]*"`
# Find plugin
declare -i id
id=$(curl -sk \
"https://$ipport/api/admin/$guid/plugins?name=$plugin" \
| sed -n 's/^ *"Id": \([0-9]\+\).*/\1/p')
[[ $id -le 0 ]] && {
echo "Plugin, $plugin, not found. Aborting update"
exit 1
}
# Remove plugin
curl -sk -X DELETE "https://$ipport/api/admin/$guid/plugins/$id"
# Install plugin
curl -sk -d '{"Name":"'"$plugin"'"}' \
"https://$ipport/api/admin/$guid/repoplugins"
exit 0
Three variables will need changing:
- ipport - the address and port of the Obdi master.
- plugin - the name of the plugin.
- adminpass - Admin's password.
Done
That's it! It's quite simple but it works pretty well.
For a plugin with three script-running REST end-points, it takes about 13 seconds after the 'git push' for the plugin to be fully updated, its end-points compiled, and ready for use.
To force the plugin to reinstall (for example if the auto-update failed, which can happen if you log into the admin interface at just the wrong moment), run: 'curl -X POST http://WebServerIP:8988/post'.
Thursday, 26 January 2017
Obdi Rsync Backup Lift-and-Shift Stats
Hi! I've just done a fairly large lift-and-shift from a physical server to AWS using Obdi Rsync Backup and thought I'd share a couple of stats.
First off, what's this lift and shift? Well, our Obdi server with the Rsync Backup plugin lives in AWS on a single instance. It does daily backups over VPN of all the physical servers. We can choose backup files and get rsyncbackup to turn them into AWS instances.
Yesterday I did this for a server with tons of small files and around 460 GB of data.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 493G 462G 26G 95% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
I started it around midday yesterday and it produced the following output.
So, I was pretty happy that it worked! Looking at the following screen shot you can see that initially, just getting the size of the files took 32 minutes (A), copying the files took 10.8 hours (B), and making the filesystem modifications (running grub etc.) took about 1 minute.
Thought I'd share that so you can get an idea of times involved. Incidentally, the obdi AWS instance, which also contains the backup files, is an m4.2xlarge instance.
Cheers!