First off, thanks to Martin for taking this from a POC to a product within Kubernetes.
When it comes to managing secrets inside Kubernetes, Vault is our go-to solution. It is not exposed externally at this time, although we have considered it for external workloads. We are working with it in a couple of areas, including dynamic secrets, and intend to use it for OTP, SSH, MFA, and SSL cert rotation in the near future.
We spin Vault up as part of our default cluster build, use Consul as its storage backend, automatically unseal the vault, and ship the keys off to admins.
Reference Deploying Consul in Kubernetes for more information there.
Let's start with the Dockerfile. It's pretty standard; nothing crazy here.
FROM alpine:latest
MAINTAINER Martin Devlin <martin.devlin@pearson.com>
ENV VAULT_VERSION 0.4.1
ENV VAULT_PORT 7392
COPY config.json /etc/vault/config.json
RUN apk --update add openssl zip \
 && mkdir -p /etc/vault/ssl \
 && wget https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip \
 && unzip vault_${VAULT_VERSION}_linux_amd64.zip \
 && mv vault /usr/local/bin/ \
 && rm -f vault_${VAULT_VERSION}_linux_amd64.zip
EXPOSE ${VAULT_PORT}
COPY /run.sh /usr/bin/run.sh
RUN chmod +x /usr/bin/run.sh
ENTRYPOINT ["/usr/bin/run.sh"]
CMD []
But now let's take a look at run.sh. This is where the magic happens.
#!/bin/sh
if [ -n "${VAULT_SERVICE_PORT}" ]; then
  export VAULT_PORT=${VAULT_SERVICE_PORT}
else
  export VAULT_PORT=7392
fi
if [ -n "${CONSUL_SERVICE_HOST}" ]; then
  export CONSUL_SERVICE_HOST=${CONSUL_SERVICE_HOST}
else
  export CONSUL_SERVICE_HOST="127.0.0.1"
fi
if [ -n "${CONSUL_SERVICE_PORT}" ]; then
  export CONSUL_PORT=${CONSUL_SERVICE_PORT}
else
  export CONSUL_PORT=8500
fi
openssl req -x509 -newkey rsa:1024 -nodes -keyout /etc/vault/ssl/some-vault-key.key -out /etc/vault/ssl/some-vault-crt.crt -days some_number_of_days -subj "/CN=some-vault-cn-or-other"
export VAULT_IP=`hostname -i`
sed -i "s,%%CONSUL_SERVICE_HOST%%,$CONSUL_SERVICE_HOST," /etc/vault/config.json
sed -i "s,%%CONSUL_PORT%%,$CONSUL_PORT," /etc/vault/config.json
sed -i "s,%%VAULT_IP%%,$VAULT_IP," /etc/vault/config.json
sed -i "s,%%VAULT_PORT%%,$VAULT_PORT," /etc/vault/config.json
## Master stuff
master() {
  vault server -config=/etc/vault/config.json "$@" &
  if [ ! -f ~/vault_keys.txt ]; then
    export VAULT_SKIP_VERIFY=true
    export VAULT_ADDR="https://${VAULT_IP}:${VAULT_PORT}"
    vault init -address=${VAULT_ADDR} > ~/vault_keys.txt
    export VAULT_TOKEN=`grep 'Initial Root Token:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 1:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 2:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 3:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 4:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 5:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 6:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 7:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key 8:' ~/vault_keys.txt | awk '{print $NF}'`
    vault unseal `grep 'Key another_key:' ~/vault_keys.txt | awk '{print $NF}'`
  fi
}
case "$1" in
  master) master "$@";;
  *) exec vault server -config=/etc/vault/config.json "$@";;
esac
### Exec sending keys to admins
exec /tmp/shipit.sh
sleep 600
Above we do a few important things:
- We use environment variables from within the container to fill in config.json (that's the sed substitution)
- We generate a self-signed x509 cert
- We init and unseal the vault
- We run shipit.sh to send off the keys and remove the vault_keys.txt file. The shipit script has information on the admins we dynamically created to send keys to.
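The shipit script itself isn't shown here. Purely as a hypothetical sketch (the real script builds its admin list dynamically and actually sends the keys; the file path, addresses, and key format below are stand-ins based on what `vault init` prints), the core of it might look like:

```shell
#!/bin/sh
# Hypothetical sketch of shipit.sh logic -- not the real script.
# Fake keys file standing in for the one run.sh writes from `vault init`:
KEYS_FILE=/tmp/vault_keys.txt
printf 'Key 1: abc123\nKey 2: def456\nInitial Root Token: tok789\n' > "$KEYS_FILE"

ADMINS="admin1@example.com admin2@example.com"  # placeholders; real list is built dynamically
i=1
for admin in $ADMINS; do
  # Pull the i-th unseal key out of the init output:
  key=$(grep "Key $i:" "$KEYS_FILE" | awk '{print $NF}')
  # The real script would send it, e.g.: echo "$key" | mail -s "vault unseal key $i" "$admin"
  echo "would send key $i ($key) to $admin"
  i=$((i + 1))
done
rm -f "$KEYS_FILE"  # never leave unseal keys sitting on disk
```

Splitting the unseal keys across multiple admins means no single person can unseal the vault alone, which is the point of Vault's key-sharing scheme.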
Here is what config.json looks like. Nothing major. A basic Vault config.json.
### Vault config
backend "consul" {
  address = "%%CONSUL_SERVICE_HOST%%:%%CONSUL_PORT%%"
  path = "vault"
  advertise_addr = "https://%%VAULT_IP%%:%%VAULT_PORT%%"
}

listener "tcp" {
  address = "%%VAULT_IP%%:%%VAULT_PORT%%"
  tls_key_file = "/etc/vault/ssl/some-key.key"
  tls_cert_file = "/etc/vault/ssl/some-crt.crt"
}

disable_mlock = true
Next, the Kubernetes config for Vault. We deploy a Service accessible internally to the cluster with proper credentials, and a ReplicationController to ensure a Vault container is always up.
---
apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: your_namespace
  labels:
    name: vault-svc
spec:
  ports:
    - name: vaultport
      port: 8200
  selector:
    app: vault
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: vault
  namespace: your-namespace
spec:
  replicas: 1
  selector:
    app: vault
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
        - name: vault
          image: 'private_repo_url:5000/vault:latest'
          imagePullPolicy: Always
          ports:
            - containerPort: 8200
              name: vaultport
Once Vault is up and running, we insert a myriad of policies that Vault uses for its various secret and auth backends. For obvious reasons I won't be showing those.
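While I can't show our actual policies, a generic, hypothetical example gives the idea. The policy name, path, and file location below are made up; the `policy = "read"` syntax and `vault policy-write` command match the Vault 0.4-era CLI:

```shell
#!/bin/sh
# Hypothetical example only -- not one of our actual policies.
# Write a policy granting read access to a made-up secret path:
cat > /tmp/example-policy.hcl <<'EOF'
path "secret/myapp/*" {
  policy = "read"
}
EOF
# Then load it into a running Vault (0.4.x syntax, needs VAULT_ADDR/VAULT_TOKEN set):
# vault policy-write myapp-read /tmp/example-policy.hcl
echo "policy written to /tmp/example-policy.hcl"
```

A token created against a policy like this can read secrets under `secret/myapp/` and nothing else, which is how we scope access per application.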
@devoperandi
Note: Some data in code above intentionally changed for security reasons.

Just a quick tip on your run.sh
You can simplify some of the if statements at the beginning by using the shell's built-in parameter substitution, e.g.
export CONSUL_SERVICE_HOST=${CONSUL_SERVICE_HOST:-"127.0.0.1"}
http://tldp.org/LDP/abs/html/parameter-substitution.html
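Applied to run.sh, the three if/else blocks collapse to one line each (a quick sketch using the same variable names and defaults):

```shell
#!/bin/sh
# ${VAR:-default} expands to $VAR if it is set and non-empty, else to default.
# Equivalent to the three if/else blocks in run.sh:
export VAULT_PORT=${VAULT_SERVICE_PORT:-7392}
export CONSUL_SERVICE_HOST=${CONSUL_SERVICE_HOST:-127.0.0.1}
export CONSUL_PORT=${CONSUL_SERVICE_PORT:-8500}
echo "consul at ${CONSUL_SERVICE_HOST}:${CONSUL_PORT}, vault on ${VAULT_PORT}"
```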
Thanks for the post
Excellent feedback. Thanks Justin. Sorry for the delay. I’ve been getting a lot of spammers but now that you are approved you can post and it will show up right away.
What’s the point of sleep 600 at the end? And how does the container keep running? If I send my `vault server` to the background like you do with &, the container just exits. Great post!
Trevor,
Our vault deployment has changed significantly since I wrote this, but originally the script was to account for some background work going on. I'll work on another post to update soon. If the container is exiting, something is missing. Backend data store? We use Consul.
Thanks, Michael. That would be awesome to see what you guys ended up with!