Running microservices in Amazon EKS with AWS App Mesh and Kong

When running microservices at scale, teams typically need capabilities in two categories:

  • API management, covering external traffic ingress to the API endpoints.
  • Service management, focusing on operational controls and service health.

Enter AWS App Mesh and Kong for Kubernetes Ingress Controller

AWS App Mesh is a fully managed service that customers can use to implement a service mesh. This service makes it easy to manage internal service-to-service communication across multiple types of compute infrastructure. Kong for Kubernetes is responsible for controlling the traffic going through the ingresses that expose the service mesh to external consumers by defining, applying, and enforcing policies to the ingresses.

Kong for Kubernetes provides the following capabilities:

  • Scalability: Based on the Kong API gateway, it is responsible for managing the ingresses. Applications commonly experience significant fluctuations in traffic volume, which affects the ingress as well. Kong for Kubernetes takes advantage of standard Kubernetes scalability controls such as the Horizontal Pod Autoscaler (HPA) and scales seamlessly with demand (see the sketch after this list).
  • Security: Leverages the Kubernetes namespace-based RBAC model to ensure consistent access controls. These controls are essential to segregate responsibilities between the platform, API, and application teams, each of which handles its part of software delivery and operations. For example, application teams restricted to their individual namespaces must still be able to define ingress objects, while access to the ingress controller and API management components can be restricted to the dedicated team(s).
  • Extensibility: An extensive plugin ecosystem offers a variety of options to protect your service mesh, such as OpenID Connect and mutual TLS authentication and authorization, rate limiting, IP restrictions, and self-service credential registration through the Kong Enterprise Developer Portal.
  • Observability: Fully integrates with monitoring, tracing, and logging tools such as Prometheus, Jaeger, and Amazon CloudWatch.
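As an illustration of the scalability point above, here is a minimal HPA sketch targeting the Kong proxy deployment. The deployment name kong-kong (created by the Helm release installed later in this post), the replica bounds, and the CPU threshold are assumptions for this example, not values from the original setup:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-hpa
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    # Deployment created by the "kong" Helm release (assumed name)
    name: kong-kong
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Scale out when average CPU utilization exceeds 75%
          averageUtilization: 75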

Kong for Kubernetes architecture

The Kong for Kubernetes architecture consists of two containers running in the same pod:

  • The Kong Gateway container represents the data plane responsible for processing API traffic and enforcement of policies defined by the ready-to-use plugins available in Kong for Kubernetes.
  • The controller container represents the control plane that translates Kubernetes manifests and CRDs into Kong configuration, removing the need for separate administration of proxy and Kubernetes configuration.

Prerequisites

Before starting this process, ensure the following prerequisites are ready:

  • An EKS cluster running Kubernetes 1.15 or higher is already deployed. For this exercise, eksctl was used (an example command is shown after this list)
  • Kubectl 1.15 or higher installed locally
  • Helm V3
  • Curl or any other HTTP client
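If you still need a cluster, a minimal sketch of creating one with eksctl follows; the cluster name, region, version, and node count are placeholders for this example:

# Creates a managed EKS cluster with a default node group (example values)
eksctl create cluster \
  --name dj-app-cluster \
  --region us-west-2 \
  --version 1.17 \
  --nodes 3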

Solution deployment

The deployment is an evolution of the DJ Service Mesh Application, adding the ingress controller layer on top of it. Kong for Kubernetes provides an extensive list of plugins to implement numerous policies, such as authentication, log processing, caching, and more.

Step 1: Deploy your DJ service mesh application

Follow these steps from the EKS Workshop to deploy the DJ service mesh application:

  1. Deploy DJ App
  2. Install App Mesh integration
  3. Port DJ App to App Mesh
After completing these steps, verify that the App Mesh resources were created:

$ kubectl get virtualservices -n prod
NAME ARN AGE
jazz arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualService/jazz.prod.svc.cluster.local 2m39s
metal arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualService/metal.prod.svc.cluster.local 2m38s
$ kubectl get virtualrouters -n prod
NAME ARN AGE
jazz-router arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualRouter/jazz-router_prod 2m54s
metal-router arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualRouter/metal-router_prod 2m53s
$ kubectl get virtualnodes -n prod
NAME ARN AGE
dj arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualNode/dj_prod 3m8s
jazz-v1 arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualNode/jazz-v1_prod 3m4s
jazz-v2 arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualNode/jazz-v2_prod 2m41s
metal-v1 arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualNode/metal-v1_prod 3m4s
metal-v2 arn:aws:appmesh:us-west-2:<AWS_ACCOUNT>:mesh/dj-app/virtualNode/metal-v2_prod 2m40s
$ kubectl describe virtualrouter jazz-router -n prod
...
Routes:
  Http Route:
    Action:
      Weighted Targets:
        Virtual Node Ref:
          Name:    jazz-v1
        Weight:    95
        Virtual Node Ref:
          Name:    jazz-v2
        Weight:    5
...
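The weighted targets above show a 95/5 canary split between the jazz-v1 and jazz-v2 virtual nodes. For reference, a sketch of the VirtualRouter manifest that produces this weighting looks roughly like the following; the route name and listener port are assumed from the DJ workshop rather than reproduced from it:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: jazz-router
  namespace: prod
spec:
  listeners:
    - portMapping:
        port: 9080
        protocol: http
  routes:
    - name: jazz-route
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            # 95% of traffic goes to the stable version...
            - virtualNodeRef:
                name: jazz-v1
              weight: 95
            # ...and 5% to the canary version
            - virtualNodeRef:
                name: jazz-v2
              weight: 5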

Step 2: Deploy Kong for Kubernetes Ingress Controller

In this step, we replace the DJ virtual node, which becomes redundant once Kong for Kubernetes is in place, and define an ingress object that exposes all of our services to external consumers.

First, create a namespace for Kong and label it so that the App Mesh controller injects the Envoy sidecar into its pods:

kubectl create namespace kong
kubectl label namespace kong mesh=dj-app appmesh.k8s.aws/sidecarInjectorWebhook=enabled

Next, define a virtual node for the Kong proxy so that it becomes part of the mesh:
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: kong
  namespace: kong
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: kong
  listeners:
    - portMapping:
        port: 80
        protocol: http
  backends:
    - virtualService:
        virtualServiceRef:
          name: jazz
          namespace: prod
    - virtualService:
        virtualServiceRef:
          name: metal
          namespace: prod
  serviceDiscovery:
    dns:
      hostname: kong-kong-proxy.kong.svc.cluster.local
This manifest does three things:

  • Selects the Kong for Kubernetes pod that will be installed in the next step.
  • Defines the Jazz and Metal virtual services as backends, since they are the only allowed ingress points.
  • Sets the Kong for Kubernetes Service FQDN as the DNS service discovery hostname.
kubectl apply -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/kongvirtualnode.yml
$ kubectl get virtualnodes --all-namespaces
NAMESPACE NAME ARN AGE
kong kong arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/kong_kong 62s
prod dj arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/dj_prod 22m
prod jazz-v1 arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/jazz-v1_prod 22m
prod jazz-v2 arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/jazz-v2_prod 21m
prod metal-v1 arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/metal-v1_prod 22m
prod metal-v2 arn:aws:appmesh:us-west-2::mesh/dj-app/virtualNode/metal-v2_prod 21m
Now install Kong for Kubernetes using its official Helm chart:

helm repo add kong https://charts.konghq.com
helm repo update
helm install -n kong kong kong/kong --version "1.11.0" --set ingressController.installCRDs=false

The patch below runs the ingress-controller container as UID 1337, the same user the Envoy proxy runs as, so that its traffic to the Kubernetes API and the Kong admin API is not intercepted by the sidecar:

kubectl patch deploy -n kong kong-kong -p '{"spec":{"template":{"spec":{"containers":[{"name":"ingress-controller","securityContext":{"runAsUser": 1337}}]}}}}'
$ kubectl get pods -n kong
NAME READY STATUS RESTARTS AGE
kong-kong-5b4499bc4-rgxnd 3/3 Running 0 13m
$ kubectl get service -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-kong-proxy LoadBalancer 10.100.18.222 adf320d20effa44d4b49ca2cf279e0b8-240857585.{region}.elb.amazonaws.com 80:32445/TCP,443:32024/TCP 5d19h
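The EXTERNAL-IP column shows the load balancer hostname that serves as the ingress address in the curl commands later in this post. As a convenience, you can capture it in a shell variable; the service name kong-kong-proxy assumes the Helm release name used above:

# Read the load balancer hostname from the proxy Service status
export INGRESS_ADDRESS=$(kubectl get service -n kong kong-kong-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $INGRESS_ADDRESS

You can also confirm that the Envoy sidecar was injected into the Kong pod: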
$ kubectl get po -n kong -o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .spec.containers[*]}{"\tname: "}{.name}{"\n\timage: "}{.image}{"\n"}{end}'
pod: kong-kong-6f784b6686-qlrvp
name: ingress-controller
image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:1.0
name: proxy
image: kong:2.1
name: envoy
image: 840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.15.1.0-prod

Step 3: Define Ingress to expose and protect the service mesh

Service configuration

First, apply the following configuration to the Jazz and Metal Kubernetes services so that Kong routes requests to them through their cluster DNS names (FQDN) rather than individual pod IPs, which lets the Envoy sidecar route on the correct authority:

kubectl apply -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S%2BAWSAppMesh/fqdn-service-routing.yaml
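The manifest behind that URL is not reproduced here. A minimal sketch of the idea, assuming the Kong ingress controller's service-upstream annotation and with the selector and port assumed from the DJ application, could look like this for the jazz service (metal would be annotated the same way):

apiVersion: v1
kind: Service
metadata:
  name: jazz
  namespace: prod
  annotations:
    # Route to the Kubernetes Service (cluster DNS name) instead of pod endpoints
    ingress.kubernetes.io/service-upstream: "true"
spec:
  selector:
    app: jazz
  ports:
    - port: 9080
      targetPort: 9080
      protocol: TCP

With the services configured, define the ingress object that exposes them to external consumers: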
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: djingress
  namespace: prod
  annotations:
    konghq.com/strip-path: "true"
    kubernetes.io/ingress.class: kong
    konghq.com/override: do-not-preserve-host
spec:
  rules:
    - http:
        paths:
          - path: /dj/jazz
            backend:
              serviceName: jazz
              servicePort: 9080
          - path: /dj/metal
            backend:
              serviceName: metal
              servicePort: 9080
kubectl create -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/dj_ingress.yml
Two of the annotations deserve a closer look:

  • The annotation konghq.com/strip-path: "true" removes the path prefix, such as /dj/jazz, from the request before sending it to the target virtual service.
  • The annotation konghq.com/override: do-not-preserve-host points to the configuration object below, which tells Kong not to preserve the original Host header of the request. Combined with the FQDN annotation applied to the service, this allows the sidecar to route the request based on the right authority.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: do-not-preserve-host
  namespace: prod
route:
  preserve_host: false
kubectl create -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/kongingress-dontpreservehost.yml
$ kubectl get ingress -n prod
NAME HOSTS ADDRESS PORTS AGE
djingress * {name.region}.elb.amazonaws.com
curl {your ingress address}/dj/jazz
["Astrud Gilberto","Miles Davis"]
while [ 1 ];
do curl http://{your ingress address}/dj/jazz/
echo
done
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto (Bahia, Brazil)","Miles Davis (Alton, Illinois)"]
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]

Step 4: Apply rate-limiting policy

With the ingress in place, it’s necessary to define policies to control its consumption. The first one is rate-limiting. The process to apply policies to an ingress is very simple:

  • Declare and create a policy
  • Patch the ingress with an annotation
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-minute
  namespace: prod
config:
  minute: 3
  policy: local
plugin: rate-limiting
kubectl create -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/ratelimiting.yml
kubectl patch ingress djingress -n prod -p '{"metadata":{"annotations":{"konghq.com/plugins":"rl-by-minute"}}}'
while [ 1 ] 
do curl {your ingress address}/dj/jazz/
echo
done
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]
["Astrud Gilberto","Miles Davis"]
{
"message":"API rate limit exceeded"
}

Step 5: Define the API key security policy

Similarly to the rate-limiting policy, we will need to create the policy first and then apply this policy to the ingress:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: apikey
  namespace: prod
plugin: key-auth
kubectl create -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/apikey.yml
kubectl patch ingress djingress -n prod -p '{"metadata":{"annotations":{"konghq.com/plugins":"apikey, rl-by-minute"}}}'
curl {your ingress address}/dj/jazz
{
"message":"No API key found in request"
}
Requests are rejected until a consumer with a valid API key is registered. Create the API key as a Kubernetes secret, then define a Kong consumer that references it:

kubectl create secret generic consumerapikey -n prod --from-literal=kongCredType=key-auth --from-literal=key=kong-secret
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: consumer1
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: kong
username: consumer1
credentials:
  - consumerapikey
kubectl apply -f https://raw.githubusercontent.com/Kong/aws-blogposts/master/K4K8S+AWSAppMesh/consumer.yml
curl {your ingress address}/dj/jazz -H 'apikey:kong-secret'
["Astrud Gilberto","Miles Davis"]

Conclusion

Kong for Kubernetes and AWS App Mesh make it easier to run microservices by providing consistent visibility and network traffic controls for services built across multiple compute platforms.
