Creating an EKS Cluster with Terraform
Introduction
Good morning, this is Kato. I tried creating an EKS cluster with Terraform.
Trying It Out
Explanation
The code is available on GitHub. If you want to deploy right away, clone the repository and use it as-is.
vpc.tf
data "aws_availability_zones" "available" {} resource "aws_vpc" "vpc" { cidr_block = "${var.vpc_cidr_block}" enable_dns_hostnames = true enable_dns_support = true tags = "${merge(local.default_tags, map("Name", "${local.base_name}-vpc"))}" } resource "aws_subnet" "subnet" { count = "${var.num_subnets}" vpc_id = "${aws_vpc.vpc.id}" availability_zone = "${data.aws_availability_zones.available.names[ count.index % var.num_subnets ]}" cidr_block = "${cidrsubnet(var.vpc_cidr_block, 8, count.index + var.num_subnets * 0 )}" map_public_ip_on_launch = true tags = "${merge(local.default_tags, map("Name", "${local.base_name}-subnet-${count.index+1}"))}" } resource "aws_internet_gateway" "igw" { vpc_id = "${aws_vpc.vpc.id}" tags = "${merge(local.default_tags, map("Name", "${local.base_name}-igw"))}" } resource "aws_route_table" "rtb" { vpc_id = "${aws_vpc.vpc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.igw.id}" } tags = "${merge(local.default_tags, map("Name", "${local.base_name}-rtb"))}" } resource "aws_route_table_association" "rtba" { count = "${var.num_subnets}" subnet_id = "${element(aws_subnet.subnet.*.id, count.index)}" route_table_id = "${aws_route_table.rtb.id}" }
Only public subnets are created. num_subnets controls how many subnets are created (the default is 3); see the sketch after the table for a quick way to inspect the generated CIDRs. Also, when a VPC is used with EKS, the following tag is required on the VPC resources, so it is set here.
| key | value |
|---|---|
| kubernetes.io/cluster/&lt;cluster-name&gt; | shared |
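As a quick sanity check of the `cidrsubnet` call in vpc.tf: with the default `vpc_cidr_block` of `10.0.0.0/16` and `newbits = 8`, each subnet gets a consecutive /24. A hypothetical debug output, not part of the repository, to eyeball the result:

```hcl
# Hypothetical helper, not in the repo: prints the generated subnet CIDRs.
# With the defaults this evaluates to ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"].
output "subnet_cidrs" {
  value = "${aws_subnet.subnet.*.cidr_block}"
}
```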
iam.tf
resource "aws_iam_role" "eks-master-role" { name = "eks-master-role" assume_role_policy = <<EOS { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "eks.amazonaws.com" }, "Effect": "Allow" } ] } EOS } resource "aws_iam_role_policy_attachment" "eks-cluster-policy" { policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy" role = "${aws_iam_role.eks-master-role.name}" } resource "aws_iam_role_policy_attachment" "eks-service-policy" { policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy" role = "${aws_iam_role.eks-master-role.name}" } resource "aws_iam_role" "eks-node-role" { name = "eks-node-role" assume_role_policy = <<EOS { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" }, "Effect": "Allow" } ] } EOS } resource "aws_iam_role_policy_attachment" "eks-worker-node-policy" { policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy" role = "${aws_iam_role.eks-node-role.name}" } resource "aws_iam_role_policy_attachment" "eks-cni-policy" { policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy" role = "${aws_iam_role.eks-node-role.name}" } resource "aws_iam_role_policy_attachment" "ec2-container-registry-readonly" { policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" role = "${aws_iam_role.eks-node-role.name}" } resource "aws_iam_role_policy_attachment" "ec2-role-for-ssm" { policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM" role = "${aws_iam_role.eks-node-role.name}" } resource "aws_iam_instance_profile" "eks-node-role-profile" { name = "eks-node-role-profile" role = "${aws_iam_role.eks-node-role.name}" }
This creates the IAM roles for the EKS master and the worker nodes, plus an instance profile for the nodes. Commands are expected to run on the nodes via SSM rather than SSH, which is why AmazonEC2RoleforSSM is attached.
security_group.tf
resource "aws_security_group" "cluster-master" { name = "cluster-master" description = "EKS cluster master security group" tags = "${merge(local.default_tags,map("Name","eks-master-sg"))}" vpc_id = "${aws_vpc.vpc.id}" ingress { from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } resource "aws_security_group" "cluster-nodes" { name = "cluster-nodes" description = "EKS cluster nodes security group" tags = "${merge(local.default_tags,map("Name","eks-nodes-sg"))}" vpc_id = "${aws_vpc.vpc.id}" ingress { description = "Allow cluster master to access cluster nodes" from_port = 1025 to_port = 65535 protocol = "tcp" security_groups = ["${aws_security_group.cluster-master.id}"] } ingress { description = "Allow cluster master to access cluster nodes" from_port = 1025 to_port = 65535 protocol = "udp" security_groups = ["${aws_security_group.cluster-master.id}"] } ingress { description = "Allow inter pods communication" from_port = 0 to_port = 0 protocol = "-1" self = true } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } }
Egress allows everything; ingress is restricted as shown below. No rules for load balancers are included this time (a hypothetical NodePort rule is sketched after the table).
| Source | Traffic | Destination |
|---|---|---|
| ALL (0.0.0.0/0) | TCP/443 | Master |
| Master | TCP & UDP / dynamic ports (1025-65535) | Nodes |
| Nodes | ALL | Nodes |
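If you later expose Services through a load balancer, the node security group also needs to accept traffic on the Kubernetes NodePort range. A hypothetical rule, not part of this build (in practice, restrict `cidr_blocks` to the load balancer's subnets):

```hcl
# Hypothetical, not part of this build: open the default Kubernetes
# NodePort range (30000-32767) on the node security group.
resource "aws_security_group_rule" "nodes-nodeport" {
  type              = "ingress"
  from_port         = 30000
  to_port           = 32767
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.cluster-nodes.id}"
}
```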
eks.tf
```hcl
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.eks-cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.eks-cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${local.cluster_name}"
KUBECONFIG

  eks_configmap = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.eks-node-role.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}

resource "aws_eks_cluster" "eks-cluster" {
  name     = "${local.cluster_name}"
  role_arn = "${aws_iam_role.eks-master-role.arn}"
  version  = "${local.cluster_version}"

  vpc_config {
    security_group_ids = ["${aws_security_group.cluster-master.id}"]
    subnet_ids         = ["${aws_subnet.subnet.*.id}"]
  }

  depends_on = [
    "aws_iam_role_policy_attachment.eks-cluster-policy",
    "aws_iam_role_policy_attachment.eks-service-policy",
  ]
}
```
This is the part that creates the EKS cluster. The kubeconfig and the aws-auth ConfigMap that will be output later are stored in locals.
asg.tf
resource "aws_autoscaling_group" "eks-asg" { name = "EKS cluster nodes" desired_capacity = "${var.num_subnets}" launch_configuration = "${aws_launch_configuration.eks-lc.id}" max_size = "${var.num_subnets}" min_size = "${var.num_subnets}" vpc_zone_identifier = ["${aws_subnet.subnet.*.id}"] tag { key = "Name" value = "${local.base_name}-nodes" propagate_at_launch = true } tag { key = "kubernetes.io/cluster/${local.cluster_name}" value = "owned" propagate_at_launch = true } tag { key = "Project" value = "${var.project}" propagate_at_launch = true } tag { key = "Terraform" value = "true" propagate_at_launch = true } tag { key = "Environment" value = "${var.environment}" propagate_at_launch = true } lifecycle { create_before_destroy = true } }
This creates the Auto Scaling group for the nodes. desired_capacity, min_size, and max_size are all set to num_subnets, so with num_subnets=3 you generally get one node per subnet. If you want more nodes, multiplying by 2 or so works well (see the sketch after the table). Also, when used with EKS, the following tag is required on the EC2 instances, so it is set here.
| key | value |
|---|---|
| kubernetes.io/cluster/&lt;cluster-name&gt; | owned |
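A hypothetical sizing tweak inside the `aws_autoscaling_group` resource, running two nodes per subnet instead of one; these values are examples, not the repository defaults:

```hcl
# Hypothetical sizing: two nodes per subnet, scaling down to one.
desired_capacity = "${var.num_subnets * 2}"
max_size         = "${var.num_subnets * 2}"
min_size         = "${var.num_subnets}"
```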
launch_config.tf
```hcl
locals {
  userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint "${aws_eks_cluster.eks-cluster.endpoint}" --b64-cluster-ca "${aws_eks_cluster.eks-cluster.certificate_authority.0.data}" "${aws_eks_cluster.eks-cluster.name}"
USERDATA
}

data "aws_ami" "eks-node" {
  most_recent = true
  owners      = ["602401143452"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-${local.cluster_version}-*"]
  }
}

data "aws_ami" "eks-gpu-node" {
  most_recent = true
  owners      = ["679593333241"]

  filter {
    name   = "name"
    values = ["amazon-eks-gpu-node-${local.cluster_version}-*"]
  }
}

resource "aws_launch_configuration" "eks-lc" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.eks-node-role-profile.id}"
  image_id                    = "${data.aws_ami.eks-node.image_id}"
  instance_type               = "t3.medium"
  name_prefix                 = "eks-node"
  key_name                    = "${var.key_name}"
  enable_monitoring           = false

  root_block_device {
    volume_type = "gp2"
    volume_size = "50"
  }

  security_groups  = ["${aws_security_group.cluster-nodes.id}"]
  user_data_base64 = "${base64encode(local.userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}
```
This is the launch configuration for the Auto Scaling group. The data "aws_ami" blocks look up the latest AMI for the Kubernetes version in use. The GPU AMI is only declared here for later use; a hypothetical switch to it is sketched below.
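A hypothetical change inside the `aws_launch_configuration` resource to run GPU nodes instead, swapping both the AMI and the instance type (the EKS GPU AMI requires a GPU instance family such as p2 or p3; the type shown is an example):

```hcl
# Hypothetical, not the repo defaults: use the GPU AMI declared above
# together with a GPU instance type.
image_id      = "${data.aws_ami.eks-gpu-node.image_id}"
instance_type = "p2.xlarge"
```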
outputs.tf
output "kubectl config" { value = "${local.kubeconfig}" } output "EKS ConfigMap" { value = "${local.eks_configmap}" }
These outputs print the kubeconfig and the ConfigMap to the console.
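Instead of redirecting `terraform output` by hand in the deployment steps below, the same locals could be written to files directly with the `local_file` resource. A hypothetical alternative, not used in this repository:

```hcl
# Hypothetical alternative: have Terraform write the files itself
# (requires the "local" provider, installed by terraform init).
resource "local_file" "kubeconfig" {
  content  = "${local.kubeconfig}"
  filename = ".kube/config"
}

resource "local_file" "eks_configmap" {
  content  = "${local.eks_configmap}"
  filename = "manifests/config_map.yml"
}
```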
variable "region" { default = "us-west-2" } variable "project" { default = "eks" } variable "environment" { default = "dev" } variable "vpc_cidr_block" { default = "10.0.0.0/16" } variable "num_subnets" { default = 3 } variable "key_name" { default = "your_key_name" } locals { base_tags = { Project = "${var.project}" Terraform = "true" Environment = "${var.environment}" } default_tags = "${merge(local.base_tags, map("kubernetes.io/cluster/${local.cluster_name}", "shared"))}" base_name = "${var.project}-${var.environment}" cluster_name = "${local.base_name}-cluster" cluster_version = "1.10" }
These are the variable and local definitions. The defaults can be overridden per environment; a hypothetical terraform.tfvars is sketched below.
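A hypothetical terraform.tfvars that overrides the defaults instead of passing -var on the command line; the file and values are examples only:

```hcl
# Hypothetical terraform.tfvars; values are examples, not repo defaults.
region      = "us-west-2"
environment = "dev"
num_subnets = 3
key_name    = "my-keypair"
```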
Deployment
Create the EKS cluster.
```sh
terraform plan
terraform apply -var 'key_name=YOUR_KEY_NAME'
```
Save each terraform output result to its file:
- eks_configmap → manifests/config_map.yml
- kubectl_config → .kube/config
```sh
terraform output kubectl_config > .kube/config
terraform output eks_configmap > manifests/config_map.yml
```
Run the following commands:
```sh
export KUBECONFIG='.kube/config'
kubectl apply -f manifests/config_map.yml
```
Confirm that the nodes are in the Ready state:
```sh
kubectl get nodes
```
Afterword
Although this post doesn't take advantage of it, Terraform has a benefit over CloudFormation: Infrastructure as Code can be applied beyond AWS, in particular to Kubernetes itself. Right now there are still many parts that need manual work, and ELB is not yet supported, but I plan to grow this setup little by little.