Using Consul-Helm to create mTLS infrastructure.

Steve Dillon
6 min readFeb 5, 2020

This story is out of date. The Consul-Helm repo has been merged into Consul, and Consul has changed how it allocates agents. I’m leaving it up, but HashiCorp now has pretty good documentation on this.
I’ve written a follow-up story to this, with a rapid start guide. https://stvdilln.medium.com/using-consul-based-service-mesh-on-kubernetes-2db8f72ad407

Creating a consul service mesh using consul-helm.

Consul does many things; one of them is creating mTLS tunnels between your microservices. Being an intrepid explorer, I attempted to get Consul Connect working in Kubernetes using consul-helm. I hit several roadblocks along the way and needed some help from the developers to get it working. This article explains how I got it to work, and hopefully makes other people’s experience easier.

This is a demonstration of how to get connect-inject working with the consul-helm chart. It is not a generic demo of how to use consul-helm; we mostly assume that you have already run and understand consul-helm.

Prerequisites

  • helm 3.0 installed on this machine
  • kubectl configured and pointed to the k8s cluster you want to test with.
  • These scripts will work on macOS and Linux as is. On a Windows client, just examine any of the simple scripts and run the appropriate commands.
  • git installed and able to clone projects from GitHub
  • I have only tested this with a 3-node cloud-based k8s cluster. I’ve tested with AKS and will soon test AWS. I don’t think this is a project where testing on minikube or MicroK8s has a lot of relevance: the default install of Consul needs a cluster of machines (normally 3 minimum), so the compromises of running on minikube seem too great.
  • The consul-helm chart requests 10 GB of storage on each server node by default. You will need sufficient storage on your nodes.

First clone the demonstration code and consul-helm

git clone git@github.com:stvdilln/consul-inject-demo.git
cd consul-inject-demo
git clone git@github.com:hashicorp/consul-helm.git

PreReq: Create Kubernetes Cluster

Create a basic starter 3 node Kubernetes Cluster on your favorite cloud provider and get kubectl working against it.

The demo should work on any Kubernetes cluster of 3 nodes or larger. Consul uses a consensus protocol, and a 3-node installation is generally the minimum size for any real testing or for validating how it will behave when scaled larger.

For AKS, to create a simple cluster, follow the steps below. To delete, just delete the resource group.

az group create --name consulDemo --location westus
az aks create --resource-group consulDemo --name consulDemo \
--kubernetes-version 1.15.7 --location westus \
--node-count 3 --generate-ssh-keys \
--node-osdisk-size 50 | tee clusterinfo.json
az aks get-credentials --resource-group consulDemo \
--name consulDemo

Running the Helm Chart

If you haven’t already done so above, clone the consul-helm chart into the demo folder:

# cd to the folder where you cloned the demo
cd consul-inject-demo
git clone git@github.com:hashicorp/consul-helm.git

I have created a file values-standalone.yaml that contains overrides to the default values.yaml with the changes that I need. Putting your changes into a private copy of values.yaml puts you on a treadmill of keeping your file up to date with the upstream chart; this file contains only the overrides.

Security Warning: my yaml file enables a public load balancer to the Consul cluster. Given the varied environments this may be deployed to, I don’t know whether your internal IP addresses are routable, so creating a public load balancer gets this demo working for most people, but it’s not how you want to run this long term. There are commented-out annotations in the yaml file where you can switch this to an internal load balancer. If the pod IPs in your Kubernetes cluster are routable, you don’t even need an internal load balancer.
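The exact overrides live in values-standalone.yaml in the demo repo. As a rough sketch only (key names follow the consul-helm chart as it existed in early 2020 and may not match the repo’s file exactly), the overrides are of this shape:

# Hypothetical sketch of values-standalone.yaml-style overrides; the file in
# the demo repo is authoritative, and key names may have changed since.
global:
  datacenter: dc1
  bootstrapACLs: true          # turn on ACLs; creates the ...-bootstrap-acl-token secret
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 10Gi                # the per-server volume mentioned in the prerequisites
client:
  enabled: true                # one client agent per node
connectInject:
  enabled: true                # run the connect-inject webhook
  default: false               # opt-in: pods must carry the connect-inject annotation
ui:
  enabled: true
  service:
    type: LoadBalancer         # public for the demo; the repo file has commented-out
                               # annotations to switch this to an internal load balancer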

Let’s now run this chart:

helm install consul -f values-standalone.yaml ./consul-helm

Wait for the dust to settle; it will take a couple of minutes. When complete you should have 3 Consul server pods, 3 client pods, and one connect-inject pod. All should be in the “Running 1/1” state, except for the acl-init pod, which should be in a “Completed 0/1” state.
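To watch the rollout, a plain kubectl command is enough (nothing here is specific to the chart):

kubectl get pods -w      # Ctrl-C once everything has settled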

A Working Consul Installation

The load balancer for the UI can take a while; run kubectl get service to see when the load balancer is ready. It is best to have the UI working so that you can see the services you are creating. Since ACLs are enabled, the UI will show only minimal information until you log in with a token. This command retrieves the master token (using a root token to log into a UI is not really a best practice, but it’s fine for a demo):

kubectl get secret consul-consul-bootstrap-acl-token -o json \
| jq -r '.data.token' | base64 -D   # on Linux, use "base64 -d" instead of "-D"

Bring up the Consul UI: run kubectl get service, and the UI should be available at http://{ipaddr}:80. In the UI, navigate to the ACL page and paste the GUID you just retrieved. The UI should now show you more items.

Starting the demonstration server

kubectl apply -f demo-server.yaml

When you run that command you should see two new services appear on the UI services page: “demo-helloworld” and “demo-helloworld-sidecar-proxy”. demo-helloworld is your container, and the proxy is the container that connect-inject added as the pod was created. The sidecar proxy will open a port, say 1234, and allow you to connect to other microservices. It also works in the reverse direction, allowing other services to connect securely to your localhost:8080 port (or whatever port you have open).

Your application connects to http://localhost:1234/, and behind the scenes Consul Connect creates certificates, does ACL management, and keeps metrics. Your service is fat, dumb, and happy serving on port 8080, and Consul Connect handles all of the network-layer security.

How it works:

The Service Definition

The file demo-server.yaml has a few important details:

  • The name of the service defaults to the name of the first container in the pod, or you can override it with the annotation consul.hashicorp.com/connect-service. If ACLs are enabled, as is the case here, the ServiceAccount name must match this value.
  • The annotation "consul.hashicorp.com/connect-inject": "true" explicitly tells connect-inject to expose this service (see the sketch after this list). Depending on how you deployed connect-inject, the default may be all-in or opt-in.
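As a rough illustration of those two points (the real demo-server.yaml in the demo repo is authoritative; the image, args, and port below are placeholders), the pod spec is of this shape:

# Hypothetical sketch; see demo-server.yaml in the demo repo for the real file.
apiVersion: v1
kind: Pod
metadata:
  name: demo-helloworld
  annotations:
    "consul.hashicorp.com/connect-inject": "true"              # opt this pod into the mesh
    "consul.hashicorp.com/connect-service": "demo-helloworld"  # name registered in Consul
spec:
  serviceAccountName: demo-helloworld    # must match the service name because ACLs are enabled
  containers:
    - name: demo-helloworld
      image: hashicorp/http-echo         # placeholder; the demo repo uses its own image
      args: ["-listen=:8080", "-text=hello"]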

You can examine what consul-inject did when you launched the container.

kubectl describe pods --selector=app=consul-inject-demo-server

There is now another container in the pod that is listening for Envoy connections and will forward them to our container (port 80).

Go to the Consul UI and see that the demo server is now known to Consul.

Start the Client

kubectl apply -f demo-client.yaml

This will create a pod demo-client, that you can open a shell into:

kubectl exec -it demo-client -- bash

Poke around in the above shell: netstat -nltp shows 3 ports. Port 20000 is inbound Envoy traffic to the consul-connect sidecar, and port 19000 is another Envoy port. Port 1234 is the port that we opened as a tunnel to our demo service. Look at demo-client.yaml and demo-server.yaml for the annotations that create this port.
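The client side of that tunnel comes from the connect-service-upstreams annotation. Roughly (the repo’s demo-client.yaml is authoritative; the image and command here are placeholders):

# Hypothetical sketch; see demo-client.yaml in the demo repo for the real file.
apiVersion: v1
kind: Pod
metadata:
  name: demo-client
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    # "<upstream service>:<local port>": the sidecar listens on localhost:1234
    # and proxies over mTLS to demo-helloworld
    "consul.hashicorp.com/connect-service-upstreams": "demo-helloworld:1234"
spec:
  serviceAccountName: demo-client
  containers:
    - name: demo-client
      image: curlimages/curl             # placeholder image with curl and a shell
      command: ["sleep", "infinity"]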

We have not yet enabled the intention for demo-client to talk to the demo server. If you have ACLs enabled, as we do, you need both the annotations and the ACLs correct for communication to happen.

Let’s fix the ACLs. First make sure that you are logged in to the UI as described above, then go to the Intentions page and create an intention allowing demo-client to talk to demo-helloworld.
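If you prefer the CLI to the UI, the same intention can be created with the consul binary inside one of the server pods. A sketch, assuming the pod name consul-consul-server-0 produced by the helm release above, that the two services registered as demo-client and demo-helloworld, and reusing the bootstrap token retrieved earlier:

# Sketch: create the allow intention from the CLI instead of the UI.
# The pod name assumes the "consul" helm release used above; adjust if yours differs.
TOKEN=$(kubectl get secret consul-consul-bootstrap-acl-token -o json \
        | jq -r '.data.token' | base64 -D)       # "base64 -d" on Linux
kubectl exec consul-consul-server-0 -- \
  consul intention create -token="$TOKEN" demo-client demo-helloworld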

Once you have set the intention, you can execute curl http://localhost:1234/ in the client shell and should get a web page back. There are also environment variables you can use for the service name and port, so you can abstract out any port changes and instead run:

curl http://localhost:$DEMO_HELLOWORLD_CONNECT_SERVICE_PORT/

Wrapping it up: we have used consul-helm to create a standalone Consul infrastructure inside a Kubernetes cluster. We explicitly enabled connect-inject and then created a demo service and a client. We configured ACLs and intentions so that the client can talk to the server. Finally, we demonstrated calling the server service from the client.

In the future we may look at creating a consul cluster in k8s and then joining it to an external consul cluster. With that and a mesh gateway configured we can have our microservices discover and securely call another kubernetes cluster.

Thanks for Listening,

Steve
