Kubernetes log tailer

Found this super nice tool for tailing Kubernetes logs called kail. It works similarly to kubetail but has automatic pod discovery while running, so you don’t have to restart it to pick up new pods.
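
For reference, basic usage looks roughly like this (flag names are from the kail README as I remember them; double-check kail --help):

# tail logs from every pod you can see
kail

# limit to a single namespace
kail --ns kube-system

# limit to pods matching a label selector
kail -l app=searx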

Kubernetes and Flannel and Ingress Controller

After setting up my Kubernetes master and adding a couple of nodes to it, I ran into an issue where pods on one node could not communicate with pods on another node. I had assumed installing Flannel would handle this, but apparently it did not enable packet forwarding in the kernel or add the appropriate iptables rules. This caused kube-lego to not work correctly when running on a node other than the master.

It wasn’t super obvious at first, but after a quick poke around I ran into someone who had the same issue here and provided the fix (which, obviously, is to enable forwarding).

To save you time here are the steps you need to take:

1. Make sure IP forwarding is enabled in the Linux kernel on every node. Just execute the command:
$ sudo sysctl -w net.ipv4.conf.all.forwarding=1

2. Ensure the following is set in your /etc/sysctl.conf file so it persists across reboots:
net.ipv4.conf.all.forwarding = 1

3. Set the default policy of the FORWARD chain to ACCEPT:
$ sudo iptables -P FORWARD ACCEPT
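
To double-check that everything took, both of these should come back with forwarding on and ACCEPT as the default FORWARD policy:

$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1

$ sudo iptables -S FORWARD | head -1
-P FORWARD ACCEPT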

Datadog broken in Kubernetes 1.7

I updated my Kubernetes cluster this weekend to 1.7 and noticed my Datadog metrics stopped reporting. It took a few hours of digging, but I eventually found the culprit: a change in Kubernetes that disables the cAdvisor port (4194), which the Datadog agent relies on to gather information.

I found the relevant PR here, which shows the change and the reasoning (and the reasoning makes sense to me). I brought this to the attention of Datadog and they said they are aware of it and are working on either updating the documentation with the changes or using the stats endpoint instead.

That said, it is possible to re-enable the cAdvisor port, provided you have taken precautions to prevent public access to it; you can find the details here.

The gist is to run the following:

# remove the line that passes --cadvisor-port=0 to the kubelet,
# then reload systemd and restart the kubelet to pick up the change
sed -e "/cadvisor-port=0/d" -i /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
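
Assuming the restart went through, a quick way to sanity-check that cAdvisor is listening again on the node itself is to hit its metrics endpoint:

# should dump Prometheus-style metrics if port 4194 is back
curl -s http://localhost:4194/metrics | head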

Hopefully this saves you the hours it took me to find this!

Revive old hardware with Kubernetes and kubeadm

If you are anything like me, after many years of working with computers you end up with a lot of random computer hardware that just sits in a closet. This is especially true if you got caught up in the altcoin mining craze for a bit.

Well, the good news is you can breathe some new life into that hardware with the power of Kubernetes! You don’t need much: a dual core with 4GB of RAM is enough for a simple k8s node, and if you have a couple of them, you can combine them into a proper cluster. That said, even if you only have one machine, it’s enough to set up a basic “cluster” for testing/dev purposes.

To do this I used kubeadm to set up a k8s master/node. I only set up one machine, but with kubeadm it’s super simple to add more nodes as you need to.
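
For reference, once the master is up (more on that below), joining an extra node is basically one command on the new machine; kubeadm init prints the exact token and address to use (the values here are placeholders):

# run on the new node, substituting the token and master address from `kubeadm init`
kubeadm join --token <token> <master-ip>:6443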

The first thing you need to do is set up the machine with your favorite flavor of Linux. I almost always use Debian, so that’s what I used here.

Next, get Docker installed. The kubeadm instructions seem to neglect to mention this, but trying to install kubeadm without it produces an obvious error. I used the latest version of Docker even though kubeadm complained about it being unsupported. I have not had any issues, but YMMV.
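
On Debian, the quickest options are the distro package or Docker’s convenience script (the script pulls the latest release, which is the version kubeadm may warn about):

# option 1: the packaged version
sudo apt-get update && sudo apt-get install -y docker.io

# option 2: Docker's convenience script, installs the latest release
curl -fsSL https://get.docker.com | sh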

Once Docker is kicking, you’ll need to install kubeadm. Pretty straightforward: just follow the instructions on their site.
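
At the time of writing, the install on Debian/Ubuntu boiled down to roughly the following (verify the repo details against the current docs before running):

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl

# then bootstrap the master
sudo kubeadm init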

Now, an optional step if you are setting up a single node and not adding any more is to turn master isolation off. In my case I only have one machine, so that means turning it off. Even if you have multiple nodes, on a dev cluster like this it’s probably worthwhile to shut off master isolation anyway:

kubectl taint nodes --all node-role.kubernetes.io/master-

The last step is to set up pod networking. This step gave me the most trouble because there was a big shift in security in k8s between 1.5 and 1.6 involving RBAC, and this breaks a lot of examples online. So if you try to go straight to installing a network as provided in the documentation, you will run into problems.

I ended up finding that Weave worked the best and provided instructions on how to set it up for Kubernetes 1.6, which you can find here. The gist is that you need to create the RBAC roles before applying the DaemonSets.
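
At the time, the Weave instructions for 1.6+ boiled down to a single apply that bundles the RBAC roles with the DaemonSet (check their current docs for the up-to-date URL before running):

kubectl apply -f https://git.io/weave-kube-1.6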

Altogether this entire process took me maybe 30 minutes from Debian install to running cluster, if you exclude the time I spent trying to resolve the pod networking problem I outlined above.

Now take your totally sweet k8s cluster and start deploying your self-hosted apps!

Deploy a searx instance with Kubernetes

Recently the searx instance I generally used suddenly went down, so I decided to host my own. If you don’t know what searx is, it’s a metasearch engine that strips any tracking info and aggregates results from other search engines. Check out their GitHub page for more info.

Now, for simplicity’s sake, we are going to use a pre-built Docker image provided by wonderfall. In the future we’ll be building our own, along with the inclusion of some proxy changes.

Deployment is very straightforward in this case. I will be using an ingress rule for routing of the traffic and kube-lego for generating and managing an SSL certificate. Then in 30 seconds we will have a running searx instance in our own Kubernetes cluster.

Here is the complete YAML config; edit the settings and deploy it using kubectl create -f searx.yml.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: searx
  namespace: searx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: searx
    spec:
      containers:
      - name: searx
        image: wonderfall/searx:latest
        imagePullPolicy: Always
        env:
        - name: IMAGE_PROXY
          value: "False"
        - name: BASE_URL
          value: "https://my.search.com"
        ports:
        - containerPort: 8888
---
kind: Service
apiVersion: v1
metadata:
  name: searx
  namespace: searx
spec:
  selector:
    app: searx
  type: LoadBalancer
  ports:
  - name: http
    port: 8888
    targetPort: 8888
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: searx
  namespace: searx
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - my.search.com
    secretName: searx-tls
  rules:
  - host: my.search.com
    http:
      paths:
      - path: /
        backend:
          serviceName: searx
          servicePort: 8888
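
One thing to watch: everything above references the searx namespace but nothing creates it, so make sure it exists before applying the file (my.search.com is a placeholder hostname):

kubectl create namespace searx
kubectl create -f searx.yml

# confirm everything came up
kubectl -n searx get pods,svc,ing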