This snippet will set up and start a Kubernetes cluster on AWS consisting of 1 master and 2 nodes (minions).
Requirements:
- kubectl
- kops
- AWS CLI
- IAM account in AWS with the following permissions: AmazonEC2FullAccess, AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess, AmazonVPCFullAccess
========================================
Prerequisites
Create a Route53 DNS Zone
https://console.aws.amazon.com/route53/
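If you prefer the CLI over the console, the hosted zone can also be created like this (the caller-reference only needs to be a unique string; the one below is just an example):
aws route53 create-hosted-zone \
--name k8s.claudioteixeira.com \
--caller-reference k8s-$(date +%s)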
Verify that the zone's NS records resolve:
dig ns k8s.claudioteixeira.com
## Install CLI Software
Kops 1.9
brew update && brew install kops
kops version
Kubectl
brew update && brew install kubectl
kubectl version --client
AWS CLI
brew install awscli
aws configure
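To confirm the CLI picked up the right credentials (assuming it was configured with the IAM account listed above), a quick sanity check:
aws sts get-caller-identity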
========================================
Configuring
Kops needs a place to store its configuration (the state store).
Create an S3 bucket
aws s3 mb s3://kops-cluster1.k8s.claudioteixeira.com
aws s3 ls
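Optionally, versioning can be enabled on the bucket so earlier cluster state can be recovered if the store is ever corrupted (an extra safety step, not required by this snippet):
aws s3api put-bucket-versioning \
--bucket kops-cluster1.k8s.claudioteixeira.com \
--versioning-configuration Status=Enabled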
## Store the kops state store location in an environment variable
export KOPS_STATE_STORE=s3://kops-cluster1.k8s.claudioteixeira.com
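The export only lasts for the current shell session. To keep it across sessions it can be appended to your shell profile (the file depends on your shell; ~/.bash_profile is assumed here):
echo 'export KOPS_STATE_STORE=s3://kops-cluster1.k8s.claudioteixeira.com' >> ~/.bash_profile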
Create and start the cluster (on EC2: 1 master, 2 workers)
kops create cluster \
--cloud=aws --zones=eu-west-1b \
--dns-zone=k8s.claudioteixeira.com \
--name=kops-cluster1.k8s.claudioteixeira.com --yes
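Kops picks default instance counts and sizes. If you want to override them, flags such as these can be added to the create command (the values below are examples, not what this snippet uses):
kops create cluster \
--cloud=aws --zones=eu-west-1b \
--dns-zone=k8s.claudioteixeira.com \
--node-count=2 --node-size=t2.micro --master-size=t2.micro \
--name=kops-cluster1.k8s.claudioteixeira.com --yes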
kops validate cluster
What has Kops done?
- Created VPC
- Created Route53 Record
- Created IAM Roles
- Created Volumes
- Created Security Groups
- Created Subnets
- Created EC2 instances (defaults 1 Master, 2 Workers)
- Created AutoScaling Groups for both the master and workers
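Once validation passes, the cluster can also be checked from the kubectl side (kops writes the kubeconfig for you); this should list the master and the two workers:
kubectl get nodes
kubectl cluster-info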
Delete Cluster
kops delete cluster --name=kops-cluster1.k8s.claudioteixeira.com --yes
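The delete command removes the AWS resources kops created, but not the S3 state bucket. If you are done with the bucket as well, it can be removed too (an extra cleanup step, not part of the original snippet):
aws s3 rb s3://kops-cluster1.k8s.claudioteixeira.com --force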