Create a High-Availability Kubernetes Cluster on AWS with Kops

High-Availability Cluster

AWS account and IAM role

$ aws configure
AWS Access Key ID: YOUR_ACCESS_KEY
AWS Secret Access Key: YOUR_SECRET_ACCESS_KEY
Default region name [None]:
Default output format [None]:
$ aws iam list-users
{
    "Users": [
        {
            "Path": "/",
            "UserName": "alvise",
            "UserId": ...,
            "Arn": ...
        }
    ]
}

Install kops and kubectl
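Both kops and kubectl are single binaries. As a minimal sketch: on macOS the quickest way is Homebrew, while on Linux you can download the binaries from the official release pages (the URL below follows the GitHub releases layout and may change over time):

$ brew install kops kubectl

or, on Linux:

$ curl -Lo kops https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64
$ chmod +x kops && sudo mv kops /usr/local/bin/kops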


Real domain in Route53
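kops manages the cluster's DNS records in a Route53 hosted zone, so we need a real domain. Assuming chat.poeticoding.com is the subdomain we want to dedicate to the cluster, a hosted zone for it can be created with the AWS CLI; the caller reference just needs to be a unique string:

$ aws route53 create-hosted-zone \
--name chat.poeticoding.com \
--caller-reference $(date +%s)

The NS records of the new zone must then be added to the parent domain, so that DNS delegation works.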

S3 bucket to store the cluster state
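kops stores the whole cluster state in an S3 bucket, the one we pass to every command with --state. A minimal sketch using the bucket name from the commands below; versioning is worth enabling so we can always roll back to a previous state:

$ aws s3api create-bucket \
--bucket state.chat.poeticoding.com \
--region us-east-1
$ aws s3api put-bucket-versioning \
--bucket state.chat.poeticoding.com \
--versioning-configuration Status=Enabled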

Creating the Kubernetes cluster

$ kops create cluster \
--state "s3://state.chat.poeticoding.com" \
--zones "us-east-1d,us-east-1f" \
--master-count 3 \
--master-size=t2.micro \
--node-count 2 \
--node-size=t2.micro \
--name chat.poeticoding.com \
--yes
  • --state is the S3 bucket where kops stores the state files.
  • --zones specifies two availability zones in the same region, us-east-1d and us-east-1f.
  • --master-count: the number of masters must be odd (1, 3, 5…), so for an HA cluster we need at least 3 masters. Since for simplicity we’ve chosen just two AZs, one of the two zones will host two masters.
  • --master-size is the EC2 instance type for the master servers. For a medium-sized cluster I usually use C4/C5.large masters, but for this example t2.micro works well. You can find t2 instance pricing on the AWS website.
  • --node-count and --node-size: in this example we just need two nodes, which in this case are two t2.micro instances.
  • --name is the name of our cluster, which is also a real subdomain that will be created on Route53.
$ kops create cluster ... --yes
Inferred --cloud=aws from zone "us-east-1d"
Running with masters in the same AZs; redundancy will be reduced
Assigned CIDR 172.20.32.0/19 to subnet us-east-1d
Assigned CIDR 172.20.64.0/19 to subnet us-east-1f
Using SSH public key: /Users/alvise/.ssh/id_rsa.pub
...
Tasks: 83 done / 83 total; 0 can run
Pre-creating DNS records
Exporting kubecfg for cluster
kops has set your kubectl context to chat.poeticoding.com
Cluster is starting. It should be ready in a few minutes.
$ kops validate cluster \
--state "s3://state.chat.poeticoding.com" \
--name chat.poeticoding.com
$ kubectl get nodes
NAME                            STATUS    ROLES     AGE       VERSION
ip-172-20-33-199.ec2.internal   Ready     master    11m       v1.11.6
ip-172-20-49-249.ec2.internal   Ready     node      10m       v1.11.6
ip-172-20-59-126.ec2.internal   Ready     master    11m       v1.11.6
ip-172-20-71-37.ec2.internal    Ready     master    11m       v1.11.6
ip-172-20-88-143.ec2.internal   Ready     node      10m       v1.11.6

Kubernetes API and Security Group

$ kops edit cluster \
--state "s3://state.chat.poeticoding.com"
$ kops update cluster \
--state "s3://state.chat.poeticoding.com" \
--yes
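kops edit opens the cluster spec in our editor, and kops update applies the changes. By default the API security group accepts connections from anywhere (0.0.0.0/0); to restrict access to the Kubernetes API we can, for example, narrow the kubernetesApiAccess CIDR list in the spec (203.0.113.10 below is just a placeholder for your own IP):

spec:
  ...
  kubernetesApiAccess:
  - 203.0.113.10/32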

Deploy an Nginx server

# nginx_deploy.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
$ kubectl create -f nginx_deploy.yaml
deployment.apps "nginx" created
$ kubectl get pod
NAME                   READY     STATUS    RESTARTS   AGE
nginx-c9bd9bc4-jqvb5   1/1       Running   0          1m
# nginx_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-elb
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
$ kubectl create -f nginx_svc.yaml
service "nginx-elb" created
$ kubectl describe svc nginx-elb
Name:                     nginx-elb
...
LoadBalancer Ingress:     a41626d3d169811e995970e07eeed2b2-243343502.us-east-1.elb.amazonaws.com
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31225/TCP
...
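Once the NLB hostname starts resolving (it can take a couple of minutes), a quick way to check that Nginx is reachable through the load balancer is a curl against the LoadBalancer Ingress address above:

$ curl http://a41626d3d169811e995970e07eeed2b2-243343502.us-east-1.elb.amazonaws.com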
$ kubectl delete svc nginx-elb
service "nginx-elb" deleted
$ kubectl delete deploy nginx

Deploy the Phoenix Chat

kind: Deployment
apiVersion: apps/v1
metadata:
  name: chat
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
      - name: phoenix-chat
        image: alvises/phoenix-chat-example:1_kops_chat
        ports:
        - containerPort: 4000
        env:
        - name: PORT
          value: "4000"
        - name: PHOENIX_CHAT_HOST
          value: "chat.poeticoding.com"
  • PORT sets the Phoenix app port to 4000.
  • PHOENIX_CHAT_HOST lets Phoenix know on which domain the chat is hosted, in this case "chat.poeticoding.com".
kind: Service
apiVersion: v1
metadata:
  name: chat-elb
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: chat
  ports:
  - name: http
    port: 80
    targetPort: 4000
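Assuming the two manifests above are saved as chat_deploy.yaml and chat_svc.yaml (the filenames are just a convention for this example), we create them exactly like the Nginx ones:

$ kubectl create -f chat_deploy.yaml
$ kubectl create -f chat_svc.yaml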
$ kubectl get pod
NAME                   READY     STATUS    RESTARTS   AGE
chat-b4d7d4b98-vxckn   1/1       Running   0          3m
$ kubectl get svc
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)        AGE
chat-elb   LoadBalancer   100.66.10.231   a28419b91169b...   80:31181/TCP   3m

Multiple Chat replicas

$ kubectl scale --replicas=2 deploy/chat
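After scaling, a second chat pod should appear; we can check with the app=chat label used by the deployment’s selector:

$ kubectl get pods -l app=chat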

Destroy the cluster

$ kops delete cluster \
--state "s3://state.chat.poeticoding.com" \
--name chat.poeticoding.com \
--yes
...
Deleted kubectl config for chat.poeticoding.com
Deleted cluster: "chat.poeticoding.com"
