Sunday, 18 June 2017

Sportsworkout.com/chart.htm


I was reading 'The Ultimate Guide to Weight Training for Gymnastics', and went to download the training cards from sportsworkout.com/chart.htm but the site has gone and I couldn't find the training cards anywhere!

I created a set of training cards and I'm sharing it here to save others the hassle! I'll upload a PDF version of this file if anyone asks for it (I couldn't upload it here unfortunately, so it would be hosted somewhere else), but a JPEG version is below.


Note that Blogger reduces the size/quality of the JPEG so if anyone wants the originals just say 'post the originals' in the comments and I'll upload them to GitHub.

Enjoy!

Friday, 16 June 2017

How to include a subdirectory of an excluded directory

This post is for the Obdi Rsync Backup plugin.


So you have a backup configuration for a server with a bunch of excludes and now you need to explicitly include just one directory, but a parent of that directory is explicitly excluded. What do you do?

Add a new Host entry.

The graphic at the top of this post shows how. In that graphic the top line has '/opt/work/**' excluded. The next line has a new host (which is actually the same host) but with the full path, 'backup/opt/work/perfci_result_trends', shown for the Include.
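For reference, plain rsync can express the same thing with ordered include/exclude filter rules. This is just a rough sketch with made-up source and destination paths (the plugin does it more simply with the extra Host entry):

rsync -a \
    --include='/opt/' \
    --include='/opt/work/' \
    --include='/opt/work/perfci_result_trends/***' \
    --exclude='/opt/work/**' \
    backupuser@server:/backup/ /backups/server/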

That's all :)

Wednesday, 7 June 2017

JSONPath Cheat Sheet

I'll keep adding to this post. JSONPath.sh is required.

  1. List all containers in a Kubernetes system Pod
  2. Get the Internal IP Address of all Kubernetes Nodes
  3. Show all the Volume Mounts for a Pod
  4. Modify the Number of Replicas in a Kubernetes Deployment

List all containers in a Kubernetes system Pod

Get the list of System Pods

$ kubectl get -n kube-system pods
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-k8s1                               1/1       Running   0          26d
kube-apiserver-k8s1                     1/1       Running   1          26d
kube-controller-manager-k8s1            1/1       Running   0          26d
kube-dns-3913472980-mbxbg               3/3       Running   0          26d
kube-proxy-7r5p1                        1/1       Running   0          26d
kube-proxy-l6dq4                        1/1       Running   0          26d
kube-proxy-rsdbd                        1/1       Running   0          26d
kube-scheduler-k8s1                     1/1       Running   0          26d
kubernetes-dashboard-2039414953-bb1jx   1/1       Running   0          13d
weave-net-pfv92                         2/2       Running   0          22d
weave-net-tnwjt                         2/2       Running   35         22d
weave-net-xnfk9                         2/2       Running   34         22d


Find the names of all the containers in the weave-net-pfv92 Pod

$ kubectl get -n kube-system pods weave-net-pfv92 -o json \
    | JSONPath.sh '..containers[*].name' -b

weave
weave-npc

Get the Internal IP Address of all Kubernetes Nodes

 $ kubectl get nodes -o json | JSONPath.sh -b \
    '..addresses[?(@.type=="InternalIP")].address'
172.42.42.1
172.42.42.2
172.42.42.3

Show all the Volume Mounts for a Pod

$ kubectl get -n kube-system pods etcd-k8s1 -o json \
    | JSONPath.sh -j '..[volumeMounts,volumes]'

{
    "spec":
    {
        "containers":
        [
            {
                "volumeMounts":
                [
                    {
                        "mountPath":"/etc/ssl/certs",
                        "name":"certs"
                    },
                    {
                        "mountPath":"/var/lib/etcd",
                        "name":"etcd"
                    },
                    {
                        "mountPath":"/etc/kubernetes/",
                        "name":"k8s",
                        "readOnly":true
                    }
                ]
            }
        ],
        "volumes":
        [
            {
                "hostPath":
                {
                    "path":"/etc/ssl/certs"
                },
                "name":"certs"
            },
            {
                "hostPath":
                {
                    "path":"/var/lib/etcd"
                },
                "name":"etcd"
            },
            {
                "hostPath":
                {
                    "path":"/etc/kubernetes"
                },
                "name":"k8s"
            }
        ]
    }
}

Modify the Number of Replicas in a Kubernetes Deployment


The following "one-liner" will change 'spec.replicas' to 2 for hello-deployment:
$ kubectl get deployment hello-deployment -o json | \
    JSONPath.sh | \
    sed 's/\["spec","replicas"\].*/["spec","replicas"]\t2/' | \
    JSONPath.sh -p | \
    kubectl replace deployment hello-deployment -f -

deployment "hello-deployment" replaced
Or you could use a bash function and reuse it at will.

Set the bash function:

$ change_replicas() { kubectl get deployment $1 -o json | \
    JSONPath.sh | \
    sed 's/\["spec","replicas"\].*/["spec","replicas"]\t'"$2"'/' | \
    JSONPath.sh -p | \
    kubectl replace deployment $1 -f - ;}

Then use it:
$ change_replicas hello-deployment 4
deployment "hello-deployment" replaced
$ change_replicas hello-deployment 1
deployment "hello-deployment" replaced



Wednesday, 31 May 2017

Fun With JSONPath and Kubernetes

So, I'm trying to get my head around this Kubernetes stuff, and since the kubectl tool can output everything in JSON, I'm going to try to use JSONPath.sh to learn k8s!

If you've not done so already, read about JSONPath.sh at http://jsonpath.obdi.io/ or https://github.com/mclarkson/JSONPath.sh and then install it using the instructions there.

You'll need a Bash shell to use JSONPath.sh and kubectl so fire up your terminal and let's explore...

Exploration

I have a working Kubernetes installation running on three VirtualBox machines. I got kubernetes installed by using the davidkbainbridge/k8s-playground GitHub project, chosen because it uses kubeadm to bootstrap the cluster.

Once the 3-node cluster is up we can see that user pods will be scheduled to run on the 'worker' nodes only:

$ kubectl get nodes -o json | JSONPath.sh '..spec' -j -u
[
    {
        "spec":
        {
            "externalID":"k8s1",
            "taints":
            [
                {
                    "effect":"NoSchedule",
                    "key":"node-role.kubernetes.io/master",
                    "timeAdded":null
                }
            ]
        }
    },
    {
        "spec":
        {
            "externalID":"k8s2"
        }
    },
    {
        "spec":
        {
            "externalID":"k8s3"
        }
    }
]

I started messing around with Pods, Deployments, ReplicaSets etc. and installed the Kubernetes Dashboard. First I used the yaml files that came with the current stable kubernetes (v1.6.4). That worked but the dashboard could not authenticate. Next I tried the yaml files from https://github.com/kubernetes/dashboard/blob/master/src/deploy/kubernetes-dashboard.yaml, which uses RBAC, and that worked! How? (more on that later...)

So, I've already looked at some resources but there's a whole bunch I have been ignoring. From the kubernetes-dashboard.yaml I see that it uses additional items: ServiceAccount and ClusterRoleBinding.

I want a good overview of this running system, so maybe I can save the entire kubernetes configuration in one file, grep through it to see the relationships, and go through the file on any host... maybe... anyway, let's try.

Save the entire kubernetes config using the following:

# Get the list of all resource names and skip the first one ('all')
resources=`kubectl help get | grep '\*' | tail -n +2 | cut -d " " -f 4`

# Empty the file
>AllResources

# Merge all resources into one JSON.sh format file
for i in $resources; do kubectl get --all-namespaces $i -o json | \
  JSONPath.sh | sed 's/^\[/["'"$i"'",/' >>AllResources; done

# Create a .json file from the merged resources file
JSONPath.sh -p <AllResources >AllResources.json

Now there's a JSON file called AllResources.json that contains everything. Mine is over half a megabyte in size and took about a minute to create.

Let's start searching...

Get an overview of everything by stepping through the JSON.sh format output. The first column is the resource name.

less AllResources

We can see all the resource kinds that were actually used with the following (a few of the kinds that show up aren't actually top-level resources; they only appear inside other objects):

$ JSONPath.sh -f AllResources.json '..kind' -b | sort -u
CertificateSigningRequest
ClusterRole
ClusterRoleBinding
ComponentStatus
ConfigMap
DaemonSet
Deployment
Endpoints
Group
List
Namespace
Node
Pod
ReplicaSet
Role
RoleBinding
Secret
Service
ServiceAccount
User

Type 'kubectl get' (which lists all the available resource types) and compare - less than half of them are used here.

I spent a while, until I got tired (about 40 minutes!), going through the AllResources file and trying different JSONPaths, but it did not reveal how the Dashboard magically worked.

I found that there was a mount point magically added:

$ JSONPath.sh -f AllResources.json -j -i -u \
  '$.pods.items[?(@.metadata.name==".*dash.*")]..[".*volume.*"]'
{
    "containers":
    [
        {
            "volumeMounts":
            [
                {
                    "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount",
                    "name":"kubernetes-dashboard-token-w54jx",
                    "readOnly":true
                }
            ]
        }
    ],
    "volumes":
    [
        {
            "name":"kubernetes-dashboard-token-w54jx",
            "secret":
            {
                "defaultMode":420,
                "secretName":"kubernetes-dashboard-token-w54jx"
            }
        }
    ]
}

And using grep on the JSON.sh format file:

$ grep "kubernetes-dashboard-token" AllResources
["pods","items",11,"spec","containers",0,"volumeMounts",0,"name"]       "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"name"]   "kubernetes-dashboard-token-w54jx"
["pods","items",11,"spec","volumes",0,"secret","secretName"]    "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","name"]        "kubernetes-dashboard-token-w54jx"
["secrets","items",17,"metadata","selfLink"]    "/api/v1/namespaces/kube-system/secrets/kubernetes-dashboard-token-w54jx"
["serviceaccounts","items",16,"secrets",0,"name"]       "kubernetes-dashboard-token-w54jx"

This shows that the ServiceAccount was created, as instructed by the yaml file, but the Secret resource was also magically created along with a reference to it.
Looking at the output from the following command:

JSONPath.sh -f AllResources.json \
  '$.secrets..items[?(@.metadata.name==".*dash.*")]' -j

There is quite a bit of information for this automatically created Secret.

So, there's a lot of magic going on here and the only way to find out what's going on is by reading the source code or the documentation. Hopefully the docs will tell me what's going on, so off to the docs I go...

The Kubernetes documentation helped (and is great!), but it did not answer all my questions. Here's what I found:

Magically Added Mount Point

On the Master node all the mounts can be seen by using 'docker inspect':

$ sudo docker ps | \
  awk '{ print $NF; }' | \
  while read a; do echo "$a"; sudo docker inspect $a | \
  JSONPath.sh '..Mounts[?(@.Destination==".*serviceaccount")]' -j -u | \
  grep -v "^$"; done
Error: No such image, container or task: NAMES
k8s_kube-apiserver_kube-apiserver-k8s1_kube-system_ec32b5b6e23058ce97a4d8bdc3628e81_1
k8s_kubedns_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_sidecar_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_dnsmasq_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_POD_kube-dns-3913472980-mbxbg_kube-system_09cbfa2c-3645-11e7-877b-02193b8aadb3_0
k8s_weave-npc_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_weave_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/6781d2ec-395b-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/weave-net-token-njdtx",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
k8s_POD_weave-net-pfv92_kube-system_6781d2ec-395b-11e7-877b-02193b8aadb3_0
k8s_kube-proxy_kube-proxy-rsdbd_kube-system_09c2216f-3645-11e7-877b-02193b8aadb3_0
{
    "Source":"/var/lib/kubelet/pods/09c2216f-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-proxy-token-m9b6j",
    "Destination":"/var/run/secrets/kubernetes.io/serviceaccount",
    "Mode":"ro",
    "RW":false,
    "Propagation":"rprivate"
}
 
The output shows those magical serviceaccount entries are mounted from directories under '/var/lib/kubelet', and the output of 'mount' matches. Those directories hold files whose names match the keys in the Secret's data section, and each file's contents are the corresponding value.
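As a quick check (the pod UID and token name here are just the ones from the 'docker inspect' output above; yours will differ), listing one of those directories should show the same three keys - ca.crt, namespace and token:

$ sudo ls /var/lib/kubelet/pods/09cbfa2c-3645-11e7-877b-02193b8aadb3/volumes/kubernetes.io~secret/kube-dns-token-2pn7r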

The kubernetes documentation also describes the ServiceAccount,
"A service account provides an identity for processes that run in a Pod."
So now I think I know how the magic was created but what about that ClusterRoleBinding?

$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[*].metadata.name' -b
cluster-admin
kubeadm:kubelet-bootstrap
kubeadm:node-proxier
kubernetes-dashboard
system:basic-user
system:controller:attachdetach-controller
system:controller:certificate-controller
system:controller:cronjob-controller
system:controller:daemon-set-controller
system:controller:deployment-controller
system:controller:disruption-controller
system:controller:endpoint-controller
system:controller:generic-garbage-collector
system:controller:horizontal-pod-autoscaler
system:controller:job-controller
system:controller:namespace-controller
system:controller:node-controller
system:controller:persistent-volume-binder
system:controller:pod-garbage-collector
system:controller:replicaset-controller
system:controller:replication-controller
system:controller:resourcequota-controller
system:controller:route-controller
system:controller:service-account-controller
system:controller:service-controller
system:controller:statefulset-controller
system:controller:ttl-controller
system:discovery
system:kube-controller-manager
system:kube-dns
system:kube-scheduler
system:node
system:node-proxier
weave-net


Looking at cluster-admin and kubernetes-dashboard:

$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[?(@.metadata.name==cluster-admin)]' -j
{
    "clusterrolebindings":
    {
        "items":
        [
            {
                "apiVersion":"rbac.authorization.k8s.io/v1beta1",
                "kind":"ClusterRoleBinding",
                "metadata":
                {
                    "annotations":
                    {
                        "rbac.authorization.kubernetes.io/autoupdate":"true"
                    },
                    "creationTimestamp":"2017-05-11T12:26:11Z",
                    "labels":
                    {
                        "kubernetes.io/bootstrapping":"rbac-defaults"
                    },
                    "name":"cluster-admin",
                    "resourceVersion":"50",
                    "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingscluster-admin",
                    "uid":"04d54c20-3645-11e7-877b-02193b8aadb3"
                },
                "roleRef":
                {
                    "apiGroup":"rbac.authorization.k8s.io",
                    "kind":"ClusterRole",
                    "name":"cluster-admin"
                },
                "subjects":
                [
                    {
                        "apiGroup":"rbac.authorization.k8s.io",
                        "kind":"Group",
                        "name":"system:masters"
                    }
                ]
            }
        ]
    }
}


$ JSONPath.sh -f AllResources.json \
  '$.clusterrolebindings.items[?(@.metadata.name==kubernetes-dashboard)]' -j
{
    "clusterrolebindings":
    {
        "items":
        [
            {
                "apiVersion":"rbac.authorization.k8s.io/v1beta1",
                "kind":"ClusterRoleBinding",
                "metadata":
                {
                    "creationTimestamp":"2017-05-25T08:08:08Z",
                    "labels":
                    {
                        "k8s-app":"kubernetes-dashboard"
                    },
                    "name":"kubernetes-dashboard",
                    "resourceVersion":"1597763",
                    "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingskubernetes-dashboard",
                    "uid":"497ccb3a-4121-11e7-a9e3-02193b8aadb3"
                },
                "roleRef":
                {
                    "apiGroup":"rbac.authorization.k8s.io",
                    "kind":"ClusterRole",
                    "name":"cluster-admin"
                },
                "subjects":
                [
                    {
                        "kind":"ServiceAccount",
                        "name":"kubernetes-dashboard",
                        "namespace":"kube-system"
                    }
                ]
            }
        ]
    }
}


So the kubernetes-dashboard ClusterRoleBinding maps to cluster-admin, as does the cluster-admin ClusterRoleBinding, and the cluster-admin ClusterRole that is referenced looks like:

$ JSONPath.sh -f AllResources.json \
  '$.clusterroles.items[?(@.metadata.name==cluster-admin)]' -j -u
{
    "apiVersion":"rbac.authorization.k8s.io/v1beta1",
    "kind":"ClusterRole",
    "metadata":
    {
        "annotations":
        {
            "rbac.authorization.kubernetes.io/autoupdate":"true"
        },
        "creationTimestamp":"2017-05-11T12:26:11Z",
        "labels":
        {
            "kubernetes.io/bootstrapping":"rbac-defaults"
        },
        "name":"cluster-admin",
        "resourceVersion":"6",
        "selfLink":"/apis/rbac.authorization.k8s.io/v1beta1/clusterrolescluster-admin",
        "uid":"04b75ff2-3645-11e7-877b-02193b8aadb3"
    },
    "rules":
    [
        {
            "apiGroups":
            [
                "*"
            ],
            "resources":
            [
                "*"
            ],
            "verbs":
            [
                "*"
            ]
        },
        {
            "nonResourceURLs":
            [
                "*"
            ],
            "verbs":
            [
                "*"
            ]
        }
    ]
}


Whatever that means! There's a lot to read about here: https://kubernetes.io/docs/admin/authorization/

So, in answer to the question "How did kubernetes-dashboard just work?", the answer is below.

How did the kubernetes-dashboard just work?


The kubernetes-dashboard YAML file contains 4 resources:
    1. ServiceAccount
    2. ClusterRoleBinding
    3. Deployment
    4. Service
When the cluster administrator created the Dashboard, with 'kubectl create -f https://git.io/kube-dashboard', the resources were applied with the following results:
  • The ServiceAccount was created, which automatically created the associated Secret resource. The Secret resource contains a 'data' section containing three variables: ca.crt, namespace, and token. The variable values are all base64 encoded. The name of this Secret is saved in the ServiceAccount in the 'secrets' array section.
  • The ClusterRoleBinding was created, which binds the cluster-admin ClusterRole to the kubernetes-dashboard ServiceAccount (via a Subject; Subjects can be groups, users or service accounts).
  • The Deployment was created in the usual way (not investigated here). This creates the ReplicaSet, which creates the Pod.
  • The Service was created in the usual way (not investigated here). This defines how to access the Pod.
The Pod is created via the kubelet. Before the actual kubernetes-dashboard container is started, the kubelet creates a directory containing the files from the data section of the Secret, under '/var/lib/kubelet/...'. This directory is passed as a volume definition to docker so that it is mounted inside the container. The Secret files are created on demand on tmpfs filesystems, so the contents are lost when the Pod is deleted - it cleans up after itself.
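If you want to poke at the token from that automatically created Secret by hand, something like the following should work (a sketch, untested; the Secret name is the one from the grep output above and will differ on your cluster, and 'base64 -d' assumes GNU coreutils):

$ kubectl get -n kube-system secret kubernetes-dashboard-token-w54jx -o json \
    | JSONPath.sh '..data.token' -b \
    | base64 -d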

And finally after doing all this I searched for '/var/run/secrets/kubernetes.io/serviceaccount' in the Github kubernetes/dashboard project and found a document that explains the 'magic', here: https://github.com/kubernetes/dashboard/blob/ac938b5ad86d17188a27002d5854ef2edc9d7ce5/docs/user-guide/troubleshooting.md

Now I'm happy - wasn't that fun? 


Wednesday, 26 April 2017

Announcing: JSONPath.sh

Announcing: A JSONPath implementation written in Bash - JSONPath.sh.

It implements the JSONPath standard as written at http://goessner.net/ and was based on the original JSON.sh code from https://github.com/dominictarr/JSON.sh.

Note that the JSON output is not wrapped in arrays, as it is with other JSONPath processors.
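A tiny example of the kind of query it supports (a quick sketch, assuming JSONPath.sh is on your PATH; it should print just the book title):

$ echo '{"store":{"book":[{"title":"Sayings of the Century"}]}}' \
    | JSONPath.sh '$..title' -b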

The script and documentation can be found at https://github.com/mclarkson/JSONPath.sh

Tuesday, 7 March 2017

CI for Obdi Plugins

A Plugin I was Working On
I've been writing Obdi plugins and I wanted the plugin I was currently working on to auto-update itself when I do a git push. I started looking at adding an option to the admin interface, something like an 'auto-update' checkbox on the Plugins page, but after looking at the problem for a while it made sense not to add any bloat to Obdi, especially since online Git sites each do things in slightly different ways, and instead to write simple scripts that auto-update the Obdi plugin for me.

On this page you will find one solution for auto-updating a plugin immediately after a git push. I tested it at work using Stash (now Bitbucket) and its 'Http Request Post Receive Hook', which works well.

Theory

The workflow goes like this:
  • Make a code change.
  • Do a 'git commit' and 'git push'.
  • The GIT provider immediately opens a user-provided Web URL.
  • This URL is a simple Web Server and:
    • The Web Server runs an Obdi plugin update script.
    • The update script logs into Obdi and updates the plugin.
NOTE that this workflow is for a development box, and is just a temporary feature whilst developing. Extending this to a more permanent solution would be fairly straightforward but would need to be tuned for each environment.
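If your Git host has no built-in HTTP hook, a bare-bones server-side post-receive hook can do the same job (a sketch; replace WebServerIP with the development box running the web server shown below):

#!/bin/bash
# hooks/post-receive on the Git server: just poke the update web server
curl -s -X POST http://WebServerIP:8988/post >/dev/null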

Code

The following code snippet is the full Web Server written in Google Go (golang).

package main

import (
        "bytes"
        "io"
        "log"
        "net/http"
        "os/exec"
)

func main() {

        // Accept POST requests on /post only and run the update script.
        http.HandleFunc("/post", PostOnly(HandlePost))

        log.Fatal(http.ListenAndServe(":8988", nil))
}

type handler func(w http.ResponseWriter, r *http.Request)

func HandlePost(w http.ResponseWriter, r *http.Request) {

        defer r.Body.Close()

        // Run the update script and capture its output.
        cmd := exec.Command("bin/kick_off_build.sh")

        var out bytes.Buffer
        cmd.Stdout = &out

        err := cmd.Run()
        if err != nil {
                io.WriteString(w, "ERROR\n"+out.String()+"\n")
                return
        }

        io.WriteString(w, out.String()+"\n")
}

// PostOnly wraps a handler so that only POST requests reach it.
func PostOnly(h handler) handler {

        return func(w http.ResponseWriter, r *http.Request) {
                if r.Method == "POST" {
                        h(w, r)
                        return
                }
                http.Error(w, "post only", http.StatusMethodNotAllowed)
        }
}


So, the above code, compiled with 'go build FILENAME' will create a server that:
  • Listens on port 8988.
  • Only accepts a POST request - and it discards any data sent to it.
  • Runs a script, './bin/kick_off_build.sh'.
And the script is as follows:

#!/bin/bash

ipport="1.2.3.4:443"
plugin="myplugin"
adminpass="password"

# Login

guid=`curl -ks -d \
    '{"Login":"admin","Password":"'"$adminpass"'"}' \
    https://$ipport/api/login | grep -o "[a-z0-9][^\"]*"`

# Find plugin

declare -i id

id=$(curl -sk \
    "https://$ipport/api/admin/$guid/plugins?name=$plugin" \
    | sed -n 's/^ *"Id": \([0-9]\+\).*/\1/p')

[[ $id -le 0 ]] && {
  echo "Plugin, $plugin, not found. Aborting update"
  exit 1
}

# Remove plugin

curl -sk -X DELETE "https://$ipport/api/admin/$guid/plugins/$id"

# Install plugin

curl -sk -d '{"Name":"'"$plugin"'"}' \
    "https://$ipport/api/admin/$guid/repoplugins"

exit 0

 

Three variables will need changing:
  • ipport - the address and port of the Obdi master.
  • plugin - the name of the plugin.
  • adminpass - Admin's password.
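To tie it together, one possible layout is shown below (the directory and file names are only placeholders; the fixed parts are port 8988 and the 'bin/kick_off_build.sh' path the Go code expects):

mkdir -p ~/plugin-updater/bin
cp kick_off_build.sh ~/plugin-updater/bin/
chmod +x ~/plugin-updater/bin/kick_off_build.sh
cd ~/plugin-updater
go build -o updater server.go   # 'server.go' holds the Go code above
./updater &                     # listens on port 8988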

Done

That's it! It's quite simple but it works pretty well.

For a plugin with three script-running REST end-points it takes about 13 seconds after 'git push'ing for the plugin to be fully updated, end-points compiled, and ready for use.

To force the plugin to reinstall (maybe because the auto-update failed, which could happen if you logged into the admin interface at just the wrong moment), run 'curl -X POST http://WebServerIP:8988/post'.



Thursday, 26 January 2017

Obdi Rsync Backup Lift-and-Shift Stats

Hi! I've just done a fairly large lift-and-shift from a physical server to AWS using Obdi Rsync Backup and thought I'd share a couple of stats.

First off, what's this lift and shift? Well, our Obdi server with the Rsync Backup plugin lives in AWS on a single instance. It does daily backups over VPN of all the physical servers. We can choose backup files and get rsyncbackup to turn them into AWS instances.

Yesterday I did this for a server with tons of small files and around 460 GB of data.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      493G  462G   26G  95% /
tmpfs           3.9G     0  3.9G   0% /dev/shm

I started it around midday yesterday and it produced the following output.


So, I was pretty happy that it worked! Looking at the following screen shot you can see that initially, just getting the size of the files took 32 minutes (A), copying the files took 10.8 hours (B), and making the filesystem modifications (running grub etc.) took about 1 minute.


Thought I'd share that so you can get an idea of times involved. Incidentally, the obdi AWS instance, which also contains the backup files, is an m4.2xlarge instance.

Cheers!

Friday, 28 October 2016

Obdi - Introducing Rsync Backup



Obdi plugin to do rsync backups with compression, deduplication and snapshotting using zfs.

Additionally it can turn your backup into an Amazon AWS EC2 Volume, AMI or Instance!

See: http://rsyncbackup.obdi.io/

Thursday, 8 September 2016

Firefox 48 on Linux Fedora 24

Firefox broke today after an update to Firefox 48.0.1, which was really annoying as it's my primary work tool. The error message (keywords for anyone searching):
The application has been updated, but your version of SQLite is too old and the application cannot run.
The fix for me, for today:
  • Browse to http://rpm.pbone.net/
  • Do an advanced search. Tick only 'Fedora Other' and search for:
    • sqlite-libs-3.14
    • sqlite-3.14
  • Download each of the above rpms for Fedora 25 - they work on Fedora 24
  • I did:
    • sudo rpm -e sqlite-devel
    • sudo rpm -Uvh sqlite-3.14.1-1.fc25.x86_64.rpm sqlite-libs-3.14.1-1.fc25.x86_64.rpm
That's it. Firefox works again.

Wednesday, 13 July 2016

Nagrestconf Web Site is down

My apologies to anyone unable to access the Nagrestconf Web Site. It's hosted on Sourceforge and it seems to be down - since last night, for the UK at least. Hopefully it will be back up soon.

Thursday, 14 April 2016

New Synagios Released - 0.14.3

Synagios version 0.14.3 released for x86 and arm. Notable changes:
  • More fixes for DSM 6
    • New mailsender binary
      - Reported by Thomas Rosin
      • Statically built binary that runs on DSM6 x86
      • Now accepts lowercase 'data' SMTP command
    • Adds system host name to chroot's /etc/hosts file.
      - Reported by Thomas Rosin
    • Log shows the processes running inside the chroot again.
    • Fix ownership details for /var/mail.
      - Reported by Thomas Rosin
 

Saturday, 26 March 2016

Nagrestconf on Synology DSM 6

Synagios version 0.14.2 released for x86 and arm. Notable changes:
  • Fixes for DSM 6
  • Updated to Nagrestconf 1.174.5.

Sunday, 6 December 2015

Nagrestconf on Centos 7

Installation instructions and RPMs are now available for installing Nagrestconf on Centos 7.

More work is needed to make the installation process as short as a Centos 6 install, but it's a start.

Thursday, 3 December 2015

Nrcq Update 0.1.1

Nrcq 0.1.1 has been released today.

Notable changes:

* Now supports Basic Auth through the -U (user) and -P (password) options.

Go to the GitHub page to get it.


Monday, 23 November 2015

New Nagrestconf Release fixes Centos 6

A new Nagrestconf release 1.174.4 is available in the usual location at Sourceforge.

This fixes Nagrestconf on Centos 6:

PHP 5.3.3 Centos 6 - GUI only shows error message #49

PHP 5.3.3 - Centos 6
After installation the following message is shown in the browser:

Could not execute query using REST.
Please check system settings.

httpd error log shows:

PHP Warning:  json_encode() expects parameter 2 to be long, string given in /usr/share/nagrestconf/htdocs/rest/index.php

Saturday, 31 October 2015

Nrcq - Nagrestconf Query Utility


Nrcq is a command line tool that aims to make the nagrestconf REST api easier to use than Resty or 'curl' directly.

Intended for scripting, it automatically url-encodes/decodes fields where required, outputs in text or json and can show all valid nagrestconf endpoints, options and required fields.

It's written in Go so it should work on Linux, Windows and Mac. Compiled binaries are available for those platforms.
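If you have Go installed it should also build from source with a plain 'go get' (a sketch, not tested here; the binary ends up in $GOPATH/bin):

go get github.com/mclarkson/nrcq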

The Nagrestconf Rest Tutorial and Cook Book have been updated to use nrcq.

It's available from here:

https://github.com/mclarkson/nrcq

And the golang library it uses is here:

https://github.com/mclarkson/nagrestconf-golib

Please report any issues on the Issue tracker.

Sunday, 13 September 2015

New Nagrestconf and Synagios Releases

Nagrestconf version 1.174.1 released. Notable changes:
  • Refresh hosts page after restore. Closes #20.
  • Status Map Image fields added for templates. Closes #22.
  • Added 'parents' field to hosts dialog. Closes #17.
  • Allow hostnames, not just ip addresses. Closes #26.
  • Alias field added to clone host dialog. Closes #29.
  • Added Host Custom Variables and Notes fields to REST and UI. Closes #38.
  • Added extra dependencies for really minimal systems.
Synagios version 0.14 released. Notable changes:
  • Includes Nagrestconf 1.174.1.
  • Base Operating System updated from Debian Wheezy to Debian Jessie.
  • Nagios updated to 3.5.1.
  • Pnp4nagios updated to 0.6.24.
  • Installed nagios_nrpe_plugin. Closes #21.
  • Make synology log output useful. Closes #2.
Installation guides have been updated for more recent Operating System versions. New packages are available from the Downloads section of the Nagrestconf Web site.

Sunday, 6 September 2015

GD2 file names

Here are all the gd2 file names that can be set for the statusmap_image parameter for Nagrestconf and Synagios in the Status Map Image field for templates.

station.gd2
cat5000.gd2
beos.gd2
aix.gd2
caldera.gd2
storm.gd2
stampede.gd2
nagios.gd2
irix.gd2
yellowdog.gd2
router40.gd2
turbolinux.gd2
ng-switch40.gd2
unicos.gd2
ubuntu.gd2
novell40.gd2
switch40.gd2
openbsd.gd2
cat1900.gd2
apple.gd2
redhat.gd2
mac40.gd2
logo.gd2
next.gd2
debian.gd2
slackware.gd2
linux40.gd2
sun40.gd2
cat2900.gd2
sunlogo.gd2
mandrake.gd2
win40.gd2
amiga.gd2
hpux.gd2
freebsd40.gd2
hp-printer40.gd2
ultrapenguin.gd2


Synagios: Enabling HTTPS for Nagrestconf and Nagios

How to enable https access to nagios3 and nagrestconf:

Thanks to Juan García for providing this solution.

Configure HTTPS


Go to apache2 config files in Synagios package:

    cd /volume1/@appstore/Synagios/nagios-chroot/etc/apache2/sites-enabled

Copy available conf file for ssl:

    cp ../sites-available/default-ssl .

Change port 443 to the desired one (4443 in this case):

    vi default-ssl

    ...
    <VirtualHost _default_:4443>
    ...


Enable HTTPS


When the Synagios service is running (so that 'dev', 'proc' and 'sys' are mounted), launch a shell in the chroot environment:

    chroot /volume1/@appstore/Synagios/nagios-chroot /bin/bash

Enable ssl in apache2:

    a2enmod ssl

Restart apache:

    /etc/init.d/apache2 restart

Exit from chroot environment:

    exit

Friday, 4 September 2015

Junos Pulse Secure SmartConnect with DUO on Linux

I want to use my Fedora 22 laptop to connect to the work VPN but SmartConnect with DUO isn't available for Linux yet. So, until there's a proper client, I got it working using a Windows virtual machine for home working, and I can ditch Windows 8 - great!

There are other ways of doing this, not using a virtual machine, but I wanted to use something with a low risk of breaking if there are any software updates in the future.

This solution is only good for accessing Intranet Web sites and for ssh connections, which is exactly what I need.

Here's what working from home looks like now:
  1. Power on the laptop choosing Fedora boot.
  2. Log in.
  3. Start VirtualBox.
    1. Start the Windows 7 VM without the GUI (headless).
    2. Close VirtualBox.
  4. Start Remote Desktop.
    1. Connect to the Windows 7 VM.
    2. Click the SmartConnect icon to connect to the VPN.
    3. Confirm the connection in the Duo Android App.
    4. Close Remote Desktop.

      The VPN session lasts the whole day.
  5. Start an ssh session to the Windows 7 VM with SOCKS enabled.
    1. Connect to a work server to work from for the day with 'screen'.
  6. Start a Web Browser.
    1. Click the 'Socks Proxy' plugin button.
    2. Log in to Jira, Wiki, etc.
All the Intranet sites work, as do HP iLO and Remote Consoles through the iLO, since all DNS queries go to the SOCKS 4 ssh connection.

Setting it all up on Fedora 22


If they aren't installed already, install VirtualBox and Vagrant.
sudo tee /etc/yum.repos.d/virtualbox.repo >/dev/null <<'EnD'
[virtualbox]
name=Fedora $releasever - $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/fedora/$releasever/$basearch
enabled=1
gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
EnD
sudo dnf install virtualbox vagrant
After installation, go to the VirtualBox web site and install the VirtualBox Extension Pack so that virtual machines can be started without a GUI (headless) later.

Get a Windows image from vagrant.

This minimal Windows 7 box has an ssh server, BitVise, already installed as a service and a bash shell, Git Bash, so there's not much to do once it's installed.

mkdir win7 
cd win7
vagrant init ferventcoder/win7pro-x64-nocm-lite
vagrant up --provider virtualbox

Once Vagrant has started, stop it.

vagrant halt

Download Firefox for Windows and put it in the win7 folder.

Google for Firefox and download the 64 bit Windows version then put it in the win7 folder created earlier. This folder is accessible by the virtual machine when it's running so Firefox can be easily installed inside the VM.

Start the Windows 7 virtual machine and configure it.

Start VirtualBox then start the Windows 7 VM that vagrant created.

Click on Network then VBOXSVR and open the share.

Run the 'Firefox Setup...' executable to install it.



Now that it's installed, open Firefox and search for Microsoft Security Essentials then download and install it. This is required for SmartConnect to work - it will complain otherwise.

Use Firefox again to get SmartConnect DUO from your IT department. They will have supplied a Web address to get the client from.

Run SmartConnect with DUO and connect to the corporate VPN.

The Windows installation is very limited, but enough of it works to make it completely usable as a VPN proxy.

Minimise the Windows 7 VM window. We don't need it anymore.

SSH to the Windows 7 VM

The following ssh command connects to the VM with SOCKS 4 enabled on port 1337. Use the password, 'vagrant', when prompted.
ssh -D1337 -p2222 vagrant@127.0.0.1
I use this terminal to connect to other hosts on the corporate network using ssh. A bash terminal is available by typing 'bash', '~/.ssh/authorized_keys' can be used (but tick the 'use authorized_keys file' in BitVise), and public/private keys can be put in '~/.ssh/'.
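Optionally, the same connection can be kept in ~/.ssh/config so that a plain 'ssh win7vpn' does it all (the 'win7vpn' alias is made up; the other values match the command above):

cat >>~/.ssh/config <<'EnD'
Host win7vpn
    HostName 127.0.0.1
    Port 2222
    User vagrant
    DynamicForward 1337
EnD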

By default the terminal foreground colour is green. Type 'color 7' to make it white.

Install 'Socks Proxy' in the Firefox Web browser.

Install Socks Proxy from Add Ons and set it up as shown:



Other proxies, such as Foxy Proxy, could be used to selectively choose when the socks proxy is used, but this one is really simple, and it works, although all Web traffic will go through the proxy.

Try connecting to corporate Web sites.

Enable the socks proxy plugin using the toolbar button.

Navigate to corporate Web sites and they should work, even web sites that specify ports to use.



No screen clutter - use a headless Windows 7 VM

Windows 7 can be started in headless mode by pressing the 'shift' key when starting the virtual machine in the VirtualBox GUI. Then Remote Desktop can be used to start/stop the VPN. These are the Remote Desktop settings:
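The same thing can be scripted with VBoxManage if you prefer a terminal (the VM name below is a placeholder; 'VBoxManage list vms' shows the real name Vagrant gave it):

VBoxManage list vms
VBoxManage startvm "win7_default" --type headless
VBoxManage controlvm "win7_default" acpipowerbutton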



That's it!