Provisioning multiple copies of the same configuration with Terraform: the Terragrunt run-all approach

2022.08.29


The other day I gave a video talk at HashiTalks Japan titled "Introducing Terraform Workspaces to the IaC of a Single-Tenant SaaS." Due to time constraints, I couldn't cover the alternative ways of provisioning multiple copies of the same configuration (resource set) without using Workspaces, so I'll introduce them over a series of posts.


This time I'll introduce the approach that uses Terragrunt's run-all feature. It resolves the drawbacks of the directory-split approach covered in the previous post.

Details

We start from the directory layout used in the previous post.

In this example there are two deployment targets, a and b, under dest.

Directory layout from the previous post

.
├── dest
│   ├── a
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── b
│       ├── main.tf
│       ├── outputs.tf
│       ├── variables.tf
│       └── versions.tf
└── modules
    └── base
        ├── ・ 
        └── ・

Add a terragrunt.hcl file to each of dest/a/ and dest/b/. The files can be empty.

.
├── dest
│   ├── a
│   │   ├── terragrunt.hcl
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── b
│       ├── terragrunt.hcl
│       ├── main.tf
│       ├── outputs.tf
│       ├── variables.tf
│       └── versions.tf
└── modules
    └── base
        ├── ・ 
        └── ・
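The two marker files can be created from the repository root with plain shell (the directory names follow the example above):

```shell
# Create an empty terragrunt.hcl in each deployment target directory.
# `terragrunt run-all` will pick up every directory containing one.
mkdir -p dest/a dest/b
touch dest/a/terragrunt.hcl dest/b/terragrunt.hcl
```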

Run the terragrunt run-all apply command in the dest directory. (Naturally, Terragrunt must be installed beforehand.)

Then, as shown below, terraform apply (strictly speaking, terragrunt apply) runs in parallel in every directory under dest that contains a terragrunt.hcl file (in this example, dest/a/ and dest/b/).

% terragrunt run-all apply
INFO[0000] The stack at /hoge/terragrunt-run-all/dest will be processed in the following order for command apply:
Group 1
- Module /hoge/terragrunt-run-all/dest/a
- Module /hoge/terragrunt-run-all/dest/b
 
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) y
Initializing modules...
- base in ../../modules/base

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.26.0"...
Initializing modules...
- base in ../../modules/base

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.26.0"...
- Installing hashicorp/aws v4.26.0...
- Installing hashicorp/aws v4.26.0...
- Installed hashicorp/aws v4.26.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
- Installed hashicorp/aws v4.26.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.base.aws_s3_bucket.main will be created
  + resource "aws_s3_bucket" "main" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = (known after apply)
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = "saas"
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + cors_rule {
          + allowed_headers = (known after apply)
          + allowed_methods = (known after apply)
          + allowed_origins = (known after apply)
          + expose_headers  = (known after apply)
          + max_age_seconds = (known after apply)
        }

      + grant {
          + id          = (known after apply)
          + permissions = (known after apply)
          + type        = (known after apply)
          + uri         = (known after apply)
        }

      + lifecycle_rule {
          + abort_incomplete_multipart_upload_days = (known after apply)
          + enabled                                = (known after apply)
          + id                                     = (known after apply)
          + prefix                                 = (known after apply)
          + tags                                   = (known after apply)

          + expiration {
              + date                         = (known after apply)
              + days                         = (known after apply)
              + expired_object_delete_marker = (known after apply)
            }

          + noncurrent_version_expiration {
              + days = (known after apply)
            }

          + noncurrent_version_transition {
              + days          = (known after apply)
              + storage_class = (known after apply)
            }

          + transition {
              + date          = (known after apply)
              + days          = (known after apply)
              + storage_class = (known after apply)
            }
        }

      + logging {
          + target_bucket = (known after apply)
          + target_prefix = (known after apply)
        }

      + object_lock_configuration {
          + object_lock_enabled = (known after apply)

          + rule {
              + default_retention {
                  + days  = (known after apply)
                  + mode  = (known after apply)
                  + years = (known after apply)
                }
            }
        }

      + replication_configuration {
          + role = (known after apply)

          + rules {
              + delete_marker_replication_status = (known after apply)
              + id                               = (known after apply)
              + prefix                           = (known after apply)
              + priority                         = (known after apply)
              + status                           = (known after apply)

              + destination {
                  + account_id         = (known after apply)
                  + bucket             = (known after apply)
                  + replica_kms_key_id = (known after apply)
                  + storage_class      = (known after apply)

                  + access_control_translation {
                      + owner = (known after apply)
                    }

                  + metrics {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }

                  + replication_time {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }
                }

              + filter {
                  + prefix = (known after apply)
                  + tags   = (known after apply)
                }

              + source_selection_criteria {
                  + sse_kms_encrypted_objects {
                      + enabled = (known after apply)
                    }
                }
            }
        }

      + server_side_encryption_configuration {
          + rule {
              + bucket_key_enabled = (known after apply)

              + apply_server_side_encryption_by_default {
                  + kms_master_key_id = (known after apply)
                  + sse_algorithm     = (known after apply)
                }
            }
        }

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }

      + website {
          + error_document           = (known after apply)
          + index_document           = (known after apply)
          + redirect_all_requests_to = (known after apply)
          + routing_rules            = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.base.aws_s3_bucket.main: Creating...
module.base.aws_s3_bucket.main: Refreshing state... [id=saas20220827133102741400000001]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.base.aws_s3_bucket.main will be created
  + resource "aws_s3_bucket" "main" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = (known after apply)
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = "saas"
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + cors_rule {
          + allowed_headers = (known after apply)
          + allowed_methods = (known after apply)
          + allowed_origins = (known after apply)
          + expose_headers  = (known after apply)
          + max_age_seconds = (known after apply)
        }

      + grant {
          + id          = (known after apply)
          + permissions = (known after apply)
          + type        = (known after apply)
          + uri         = (known after apply)
        }

      + lifecycle_rule {
          + abort_incomplete_multipart_upload_days = (known after apply)
          + enabled                                = (known after apply)
          + id                                     = (known after apply)
          + prefix                                 = (known after apply)
          + tags                                   = (known after apply)

          + expiration {
              + date                         = (known after apply)
              + days                         = (known after apply)
              + expired_object_delete_marker = (known after apply)
            }

          + noncurrent_version_expiration {
              + days = (known after apply)
            }

          + noncurrent_version_transition {
              + days          = (known after apply)
              + storage_class = (known after apply)
            }

          + transition {
              + date          = (known after apply)
              + days          = (known after apply)
              + storage_class = (known after apply)
            }
        }

      + logging {
          + target_bucket = (known after apply)
          + target_prefix = (known after apply)
        }

      + object_lock_configuration {
          + object_lock_enabled = (known after apply)

          + rule {
              + default_retention {
                  + days  = (known after apply)
                  + mode  = (known after apply)
                  + years = (known after apply)
                }
            }
        }

      + replication_configuration {
          + role = (known after apply)

          + rules {
              + delete_marker_replication_status = (known after apply)
              + id                               = (known after apply)
              + prefix                           = (known after apply)
              + priority                         = (known after apply)
              + status                           = (known after apply)

              + destination {
                  + account_id         = (known after apply)
                  + bucket             = (known after apply)
                  + replica_kms_key_id = (known after apply)
                  + storage_class      = (known after apply)

                  + access_control_translation {
                      + owner = (known after apply)
                    }

                  + metrics {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }

                  + replication_time {
                      + minutes = (known after apply)
                      + status  = (known after apply)
                    }
                }

              + filter {
                  + prefix = (known after apply)
                  + tags   = (known after apply)
                }

              + source_selection_criteria {
                  + sse_kms_encrypted_objects {
                      + enabled = (known after apply)
                    }
                }
            }
        }

      + server_side_encryption_configuration {
          + rule {
              + bucket_key_enabled = (known after apply)

              + apply_server_side_encryption_by_default {
                  + kms_master_key_id = (known after apply)
                  + sse_algorithm     = (known after apply)
                }
            }
        }

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }

      + website {
          + error_document           = (known after apply)
          + index_document           = (known after apply)
          + redirect_all_requests_to = (known after apply)
          + routing_rules            = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
module.base.aws_s3_bucket.main: Creating...
module.base.aws_s3_bucket.main: Creation complete after 2s [id=saas20220828084112960900000001]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
module.base.aws_s3_bucket.main: Creation complete after 2s [id=saas20220828084114486300000001]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

What's good about this setup

Automation is easy

In the previous setup, each deployment target had its own directory and you ran terraform apply once per target. To build a CD pipeline for that, you would need either a separate pipeline per target, or a somewhat complex pipeline that can provision multiple targets in one go.

With Terragrunt's run-all apply, a single command invocation suffices, which keeps the pipeline simple.
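As a sketch, if the pipeline runs on AWS CodeBuild, the buildspec can boil down to a single command (this assumes a build image with Terraform and Terragrunt preinstalled; --terragrunt-non-interactive skips the y/n confirmation prompt):

```yaml
version: 0.2
phases:
  build:
    commands:
      # One invocation provisions every target under dest/ in parallel.
      - cd dest
      - terragrunt run-all apply --terragrunt-non-interactive
```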

You can still run commands per deployment target

In addition to provisioning all targets at once with a single terragrunt run-all xxx command, you can also move into an individual target's directory (dest/a/ or dest/b/) and run terragrunt or terraform commands there.
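For example, to work on target a alone (a command sketch; since the terragrunt.hcl files here are empty, the plain terraform command behaves the same):

```shell
# Operate on a single deployment target without touching the others.
cd dest/a
terragrunt plan   # or: terraform plan
```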

Parallel execution keeps run time in check

terragrunt run-all xxx runs terragrunt xxx in every directory beneath it that contains a terragrunt.hcl file. These runs execute in parallel, so even with many deployment targets the total run time shouldn't grow much.

What's not so great about this setup

The learning cost of Terragrunt

Obviously, you need to learn Terragrunt. That said, Terragrunt isn't a particularly difficult tool, so I don't think this will be much of a problem.

The logs are hard to read

As you can see from the terragrunt run-all apply output above, the output from each directory's command gets interleaved, which makes it hard to read. From a quick look, there doesn't seem to be an option to make it more readable... (please let me know if there is one).

As a workaround, you can declare dependencies between the root modules, like this:

dest/a/terragrunt.hcl

dependency "b" {
  config_path = "../b"
}

With this in place, deployment target A depends on deployment target B, so A's apply waits for B's apply to finish. Of course, this means the runs are no longer parallel, so total run time increases. It also means writing a dependency where none actually exists, which hurts readability.

The execution environment doesn't scale

terragrunt run-all xxx runs terragrunt xxx in parallel in every directory containing a terragrunt.hcl file, but there is only one execution environment. Run it on your local PC and the parallel processes all run on that PC; run it in a CI environment such as CodeBuild and they all run within a single compute resource. So with a high degree of parallelism, performance may suffer. Compared to the approach of launching one CodeBuild run in parallel per Terraform command execution, this setup is inferior in terms of scalability.

* If performance degrades, you can limit the number of parallel runs with -terragrunt-parallelism.
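For example (a command sketch; the flag takes the maximum number of concurrent executions, and 4 here is an arbitrary cap):

```shell
# Limit run-all to at most 4 concurrent terragrunt executions.
terragrunt run-all apply --terragrunt-parallelism 4
```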

References