Steve Dillon
5 min read · Jul 19, 2023

Using Consul based Service Mesh on Kubernetes

Photo by Nick Fewings on Unsplash

Three years ago I wrote a blog about how to set up Consul service mesh, and back then it was a complicated task. Since that time, the Helm chart for running Consul in a Kubernetes cluster has moved into the main Consul Git repo, and HashiCorp now has good instructions on how to make it work.
I'm writing this blog as a replacement for that prior one. The focus here is getting a Consul gateway running that receives HTTPS traffic and forwards it inside Kubernetes to a website connected via a service mesh. This is a focused blog: the goal is to get this specific piece of technology working, not to spend a ton of time discussing the various options. Once you have a working system, you can read other blogs and modify it to experiment.

Pre-steps:

  • Create a Kubernetes cluster on your favorite cloud provider. This walkthrough relies on a public IP and load balancers; minikube and microk8s may work, but IP addressing on them is different. I used a very small AKS cluster running Kubernetes 1.25.6.
  • Get kubectl configured to work with your cluster.
  • Install Helm 3.
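
Before going further, a quick sanity check helps (this assumes kubectl and Helm are on your PATH and your kubeconfig points at the new cluster):

```shell
# Confirm the cluster is reachable and Helm 3 is installed.
kubectl get nodes
helm version --short
```

If kubectl lists your nodes and Helm reports a v3.x version, you are ready to install Consul.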

Install Consul:

Save the below to a file values.yaml:

# Contains values that affect multiple components of the chart.
global:
  # The main enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  enabled: true
  # The prefix used for all resources created in the Helm chart.
  name: consul
  # The name of the datacenter that the agents should register as.
  datacenter: dc1
  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
  tls:
    enabled: true
  # Enables ACLs across the cluster to secure access to data and APIs.
  acls:
    # If true, automatically manage ACL tokens and policies for all Consul components.
    manageSystemACLs: true
# Values that configure the Consul server cluster.
server:
  enabled: true
  # The number of server agents to run. This determines the fault tolerance of the cluster.
  replicas: 1
# Values that configure the Consul UI.
ui:
  enabled: true
  # Registers a Kubernetes Service for the Consul UI as a LoadBalancer.
  service:
    # type: NodePort
    type: 'LoadBalancer'

Run helm with those values:

helm repo add hashicorp https://helm.releases.hashicorp.com
# The chart version was 1.2.0 when this was written; if you get errors,
# try pinning to that version.
helm install --values values.yaml consul hashicorp/consul --version 1.2.0 --create-namespace --namespace consul
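
Before moving on, it is worth confirming the release came up cleanly. A quick check (the label selector below matches the labels the official chart applies; adjust if your chart version differs):

```shell
# All Consul pods should reach Running/Ready before continuing.
kubectl get pods --namespace consul
# Block until the server pod reports Ready (waits up to 5 minutes).
kubectl wait --namespace consul --for=condition=Ready pod \
  -l app=consul,component=server --timeout=300s
```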

When this completes, you may view the Consul UI through the load balancer just created. It will let you see services coming online and see the service mesh permissions (intentions) as they are created.

#!/bin/bash
# Pull the bootstrap ACL token (base64-decoded) and the UI load balancer IP.
export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template='{{.data.token}}' | base64 -d)
export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# The UI serves a self-signed certificate, so skip TLS verification.
export CONSUL_HTTP_SSL_VERIFY=false
echo "HTTP Address is: $CONSUL_HTTP_ADDR"
echo "Token for logging in is: $CONSUL_HTTP_TOKEN"

Next we deploy the API gateway. Save the below to a file and kubectl apply it.

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
  namespace: consul
spec:
  gatewayClassName: consul
  listeners:
  - protocol: HTTPS
    port: 8443
    name: https
    allowedRoutes:
      namespaces:
        from: All
    tls:
      certificateRefs:
      - name: consul-server-cert

kubectl get gateway -A will now show an external IP address; later, our test website will be served at https://{ip address}:8443.
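
If you prefer to grab that address from a script, here is a sketch (assuming the Gateway API status layout that Consul populates):

```shell
# Pull the first address from the Gateway's status block.
GATEWAY_IP=$(kubectl get gateway api-gateway --namespace consul \
  -o jsonpath='{.status.addresses[0].value}')
echo "Demo URL will be: https://${GATEWAY_IP}:8443/echo-demo"
```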

Let's create a website to add to our api-gateway (kubectl apply -f). In the below you will see a line with 'connect-inject'. That line is the magic line that adds this service to the service mesh. Consul watches container creation, and when it sees this annotation, it adds the pod to the service mesh instead of using normal Kubernetes networking. In the default 'transparent' mode, Consul modifies iptables so the echo service is only allowed to talk to other pods/services on the service mesh.

---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: echo-1
  namespace: default
spec:
  protocol: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-1
  name: echo-1
  namespace: default
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: echo-1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: echo-1
  namespace: default
automountServiceAccountToken: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-1
  name: echo-1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-1
  template:
    metadata:
      labels:
        app: echo-1
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      serviceAccountName: echo-1
      containers:
      - image: k8s.gcr.io/ingressconformance/echoserver:v0.0.1
        name: echo-1
        env:
        - name: SERVICE_NAME
          value: echo-1
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 3000
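
After applying the manifest, you can confirm the injection happened. The pod should report two containers: the echo application plus the injected sidecar proxy (a quick check; exact container names depend on the Consul version):

```shell
# READY should show 2/2 once the sidecar is injected.
kubectl get pods --namespace default -l app=echo-1
# List the container names inside the pod to see the injected proxy.
kubectl get pods --namespace default -l app=echo-1 \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```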

We now need an intention that allows the API gateway to talk to the echo service on the service mesh:

---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway
spec:
  destination:
    name: echo-1
  sources:
  - name: api-gateway
    action: allow
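
If you have the consul binary installed locally (and the CONSUL_HTTP_* variables from the earlier script exported), you can verify the intention took effect; note that ACL policy or version differences may affect this:

```shell
# Should print "Allowed" once the intention is in place.
consul intention check api-gateway echo-1
```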

And we finally need to map a route for where the API gateway will serve this. We will use the /echo-demo path for this service.

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-echo
  namespace: default
spec:
  parentRefs:
  - name: api-gateway
    namespace: consul
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo-demo
    backendRefs:
    - kind: Service
      name: echo-1
      port: 3000

And it should be working now.

➜  temp kubectl get svc -A
NAMESPACE     NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
consul        api-gateway               LoadBalancer   10.0.201.152   20.246.192.4     8443:32595/TCP
consul        consul-connect-injector   ClusterIP      10.0.9.239     <none>           443/TCP
consul        consul-dns                ClusterIP      10.0.74.180    <none>           53/TCP,53/UDP
consul        consul-server             ClusterIP      None           <none>           8501/TCP...
consul        consul-ui                 LoadBalancer   10.0.196.41    20.185.103.121   443:32458/TCP
default       echo-1                    ClusterIP      10.0.8.249     <none>           3000/TCP
default       kubernetes                ClusterIP      10.0.0.1       <none>           443/TCP
kube-system   kube-dns                  ClusterIP      10.0.0.10      <none>           53/UDP,53/TCP
kube-system   metrics-server            ClusterIP      10.0.33.112    <none>           443/TCP

https://{api-gateway-ip}:8443/echo-demo should print all of the HTTP info that it received (note that this demo uses port 8443). You will get certificate warnings, as this demo serves self-signed certificates.
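
The same check from the command line looks like this (the IP below is the api-gateway EXTERNAL-IP from the listing above; substitute your own):

```shell
# Replace with your gateway's external IP.
GATEWAY_IP=20.246.192.4
# -k: accept the self-signed certificate; -s: silent mode.
curl -ks "https://${GATEWAY_IP}:8443/echo-demo"
```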

https://{consul-ui-ip}:443 shows the Consul UI:

Consul-UI showing active items in service mesh.
Consul UI showing Service mesh permissions.


Steve Dillon

Cloud Architect and Automation specialist. Specializing in AWS, HashiCorp and DevOps.