Securing k8s Service Communication with Envoy


Introduction

Diagram: a client-side pod (application container + Envoy sidecar) communicating over mTLS with a server-side pod (Envoy sidecar + application container).

In this article, I'll introduce an example of implementing secure service-to-service communication using Envoy. The motivation for this experiment was to secure gRPC communications within a Kubernetes (k8s) cluster. While it's possible to implement secure authentication at the application layer, handling it at the infrastructure layer keeps those concerns out of the application code, which is a significant benefit.

For this experiment, I set up communication authenticated with client certificates (mTLS). The code for this experiment is stored in the following GitHub repository:

The sample runs on k8s with Envoy deployed as a sidecar: a client-side pod (application container + Envoy sidecar container) communicates with a server-side pod (Envoy sidecar container + application container).

Implementing mTLS with Envoy

Generating Certificates

Generating the client and server certificates is straightforward with cfssl. The goal is to have Envoy accept only specific client certificates, which is enforced later through validation_context and match_typed_subject_alt_names in the Envoy configuration.

First, create a Certificate Authority (CA):

# Example CA creation
$ cat certs/ca/ca-csr.json
{
    "CN": "app-ca",
    "hosts": [""],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "O": "app-ca",
            "OU": "development",
            "ST": "Tokyo",
            "C": "JP"
        }
    ],
    "ca": {
        "expiry": "876000h"
    }
}
# Create the CA key and certificate with cfssl, cfssljson
cd certs/ca
cfssl genkey -initca ca-csr.json | cfssljson -bare ca

Next, generate server certificates using this CA:

# Example server certificate creation
$ cat certs/server/server-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h",
      "usages": [
        "signing",
        "key encipherment",
        "server auth"
      ]
    }
  }
}
$ cat certs/server/server.json
{
  "CN": "app-internal-api",
  "hosts": [""],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "ST": "Tokyo",
      "C": "JP"
    }
  ]
}
cd certs/server
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem \
  -config=./server-config.json server.json | cfssljson -bare server

Finally, generate client certificates that will communicate with this server:

$ cat certs/client/client-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h",
      "usages": [
        "signing",
        "key encipherment",
        "client auth"
      ]
    }
  }
}
$ cat certs/client/client.json
{
  "CN": "appclient",
  "hosts": ["app-internal-api"],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "ST": "Tokyo",
      "C": "JP"
    }
  ]
}

Point to note: the hosts field in client.json must contain the name(s) that will be validated by match_typed_subject_alt_names on the server side.

cd certs/client
cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem \
  -config=./client-config.json client.json | cfssljson -bare client

Envoy Configuration

To ensure that only specific client certificates are accepted, configure the listener's validation_context with match_typed_subject_alt_names. A request that authenticates with a client certificate whose SAN does not match the configured host name is rejected with a CERTIFICATE_VERIFY_FAILED error.
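
For reference, here is a minimal sketch of the server-side TLS settings. It is not taken from the demo repository; the certificate paths are illustrative and depend on how the certificates are mounted into the Envoy sidecar.

# Sketch: TLS settings on the server-side Envoy listener (file paths are illustrative).
# This goes under filter_chains[].transport_socket in the listener definition.
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    require_client_certificate: true        # reject connections that present no client certificate
    common_tls_context:
      tls_certificates:
        - certificate_chain: { filename: /etc/envoy/certs/server.pem }
          private_key: { filename: /etc/envoy/certs/server-key.pem }
      validation_context:
        trusted_ca: { filename: /etc/envoy/certs/ca.pem }
        # Accept only client certificates whose SAN matches the name set in the
        # "hosts" field of client.json above
        match_typed_subject_alt_names:
          - san_type: DNS
            matcher:
              exact: app-internal-api

On the client side, a corresponding UpstreamTlsContext on the upstream cluster presents client.pem / client-key.pem, so the server-side Envoy can validate it against the CA and the SAN matcher above.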

For more detailed examples of Envoy configuration, refer to the demo repository:

Manifest Configuration for Deployment in k8s

For configuring the client-side k8s manifest, consider the following setup:

  • Include processes that terminate the sidecar when the main container exits.
    • Share an emptyDir volume between the main container and the sidecar container, and create a file upon exit of the main container. The sidecar should terminate itself upon detecting this file.
    • From k8s 1.28 onwards, this workaround can be avoided by running the sidecar as an init container with restartPolicy: Always (see "Kubernetes v1.28: Introducing native sidecar containers"); a sketch of this variant follows the excerpt below.
  • Ensure that the application starts after the sidecar is ready.
  • Here’s an excerpt of the configuration:
            - name: app
              ...
              command:
                - /bin/bash
                - -c
                # 1. Set a trap so the batch process signals the envoy sidecar when it exits.
                # 2. Wait until the envoy sidecar is listening before starting the application.
                - trap "touch /tmp/pod/main-terminated" EXIT && while ! nc -z 0.0.0.0 2443; do echo "Waiting for the envoy sidecar to be up..."; sleep 1; done && /path/your/application-command "$@"
                - --
              args:
                - # application args
            - name: envoy
              image: envoyproxy/envoy:v1.28-latest
              command:
                - /bin/sh
                - -c
              args:
                - |
                  # Start Envoy in the background and remember its PID.
                  envoy -c /etc/envoy/client-conf.yaml &
                  CHILD_PID=$!
                  # Watch for the file written by the main container's EXIT trap and stop Envoy when it appears.
                  (while true; do if [ -f "/tmp/pod/main-terminated" ]; then kill $CHILD_PID; echo "Killed $CHILD_PID as the main container terminated."; fi; sleep 1; done) &
                  wait $CHILD_PID
                  if [ -f "/tmp/pod/main-terminated" ]; then echo "Job completed. Exiting..."; exit 0; fi
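
As noted above, from k8s 1.28 the same lifecycle handling can be delegated to native sidecar containers. A rough sketch of that variant (image and command names are illustrative; on 1.28 the SidecarContainers feature gate must be enabled, and it is on by default from 1.29):

spec:
  initContainers:
    # restartPolicy: Always marks this init container as a native sidecar:
    # it starts before the app container, keeps running alongside it, and is
    # terminated automatically after the main containers exit.
    - name: envoy
      image: envoyproxy/envoy:v1.28-latest
      restartPolicy: Always
      args: ["-c", "/etc/envoy/client-conf.yaml"]
  containers:
    - name: app
      image: your-app-image            # illustrative image name
      command: ["/path/your/application-command"]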

gRPC Load Balancing

  • gRPC multiplexes requests over a single long-lived HTTP/2 (TCP) connection, so balancing connections rather than requests can leave the load very unevenly distributed across backend servers.
  • A normal k8s Service acts as an L4 load balancer, distributing connections only, and cannot spread individual gRPC calls at L7.
  • To address this, create the k8s Service in headless mode and let Envoy handle the load balancing; in this setup, the client-side Envoy sidecar container plays that role, as sketched below.
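
As a sketch of this setup (the Service name, namespace, and port are assumptions, not taken from the demo repository), the client-side Envoy cluster can resolve the headless Service with STRICT_DNS so it sees one endpoint per pod and balances each gRPC call:

# Fragment of the client-side Envoy config (client-conf.yaml). Assumes the
# server Service "app-internal-api" is headless (clusterIP: None), so DNS
# returns one A record per pod instead of a single ClusterIP.
clusters:
  - name: app_internal_api
    type: STRICT_DNS              # keep re-resolving DNS; one endpoint per pod IP
    lb_policy: ROUND_ROBIN        # balance per request (L7), not per connection
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # gRPC requires HTTP/2 to the upstream
    load_assignment:
      cluster_name: app_internal_api
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: app-internal-api.default.svc.cluster.local
                    port_value: 8443   # assumed mTLS port of the server-side Envoy
    # The UpstreamTlsContext transport_socket carrying the client certificate
    # (see the mTLS section above) would also be attached to this cluster.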

Conclusion

Further Developments

This article aimed to provide practical insights into setting up secure service-to-service communication within a Kubernetes cluster using Envoy and mTLS. Feel free to refer to the provided repository for a hands-on demonstration and detailed configuration examples.

References