
Deploying from Terraform to LocalStack with tflocal

When deploying from Terraform to LocalStack, the provider block in provider.tf has to be configured to point at the LocalStack endpoints, as shown below 💡

provider "aws" {
  region     = "ap-northeast-1"
  access_key = "DUMMY"
  secret_key = "DUMMY"

  s3_use_path_style           = true
  skip_requesting_account_id  = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    s3 = "http://localhost:4566"
  }
}

Having to add configuration just for LocalStack is not great, though, and that's where the tflocal command comes in handy ❗️ tflocal is an official LocalStack tool; just like the LocalStack AWS CLI (the awslocal command) and the LocalStack AWS SAM CLI (the samlocal command), it is the LocalStack wrapper dedicated to Terraform 👌

github.com

docs.localstack.cloud
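
To get a feel for what these wrappers do, compare a raw AWS CLI call against LocalStack with its awslocal equivalent (assuming LocalStack is listening on its default edge port 4566); tflocal plays the same role for the terraform command:

$ aws --endpoint-url=http://localhost:4566 s3 ls
$ awslocal s3 ls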

Setup

You can install it with pip.

$ pip install terraform-local

Alternatively, it can also be installed with Homebrew.

$ brew install terraform-local

formulae.brew.sh

localstack_providers_override.tf

tflocal works by temporarily generating a file named localstack_providers_override.tf before running the terraform command, and automatically deleting that file once the terraform command finishes. You can inspect localstack_providers_override.tf by using the tflocal command's DRY_RUN environment variable 📝

$ DRY_RUN=1 tflocal
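
For example (assuming DRY_RUN only generates the override file without invoking terraform and leaves it in place), you can pass any subcommand and then inspect the result directly:

$ DRY_RUN=1 tflocal init
$ cat localstack_providers_override.tf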


Terraform has a mechanism that loads any file matching the *_override.tf naming convention as an override on top of the rest of the project's configuration, and the localstack_providers_override.tf generated by the tflocal command follows that naming convention.

developer.hashicorp.com
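
As a hypothetical illustration of this mechanism (the file and bucket names below are made up), an override file replaces matching arguments of a resource defined elsewhere in the same directory:

# main.tf
resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

# main_override.tf: merged on top of main.tf at load time
resource "aws_s3_bucket" "example" {
  bucket = "my-overridden-bucket"
}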

This is the file that actually gets generated. Every service is configured to point at the LocalStack endpoint, and access_key and secret_key are set automatically as well 👌

👾 localstack_providers_override.tf

provider "aws" {
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  region = "ap-northeast-1"
  endpoints {
    acm = "http://localhost:4566"
    amplify = "http://localhost:4566"
    apigateway = "http://localhost:4566"
    apigatewayv2 = "http://localhost:4566"
    appautoscaling = "http://localhost:4566"
    appconfig = "http://localhost:4566"
    appflow = "http://localhost:4566"
    appsync = "http://localhost:4566"
    athena = "http://localhost:4566"
    autoscaling = "http://localhost:4566"
    backup = "http://localhost:4566"
    batch = "http://localhost:4566"
    ce = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudfront = "http://localhost:4566"
    cloudsearch = "http://localhost:4566"
    cloudtrail = "http://localhost:4566"
    cloudwatch = "http://localhost:4566"
    codecommit = "http://localhost:4566"
    cognitoidentity = "http://localhost:4566"
    cognitoidp = "http://localhost:4566"
    configservice = "http://localhost:4566"
    docdb = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
    ec2 = "http://localhost:4566"
    ecr = "http://localhost:4566"
    ecs = "http://localhost:4566"
    efs = "http://localhost:4566"
    eks = "http://localhost:4566"
    elasticache = "http://localhost:4566"
    elasticbeanstalk = "http://localhost:4566"
    elasticsearch = "http://localhost:4566"
    elb = "http://localhost:4566"
    elbv2 = "http://localhost:4566"
    emr = "http://localhost:4566"
    events = "http://localhost:4566"
    firehose = "http://localhost:4566"
    fis = "http://localhost:4566"
    glacier = "http://localhost:4566"
    glue = "http://localhost:4566"
    iam = "http://localhost:4566"
    iot = "http://localhost:4566"
    iotanalytics = "http://localhost:4566"
    iotevents = "http://localhost:4566"
    kafka = "http://localhost:4566"
    keyspaces = "http://localhost:4566"
    kinesis = "http://localhost:4566"
    kinesisanalytics = "http://localhost:4566"
    kinesisanalyticsv2 = "http://localhost:4566"
    kms = "http://localhost:4566"
    lakeformation = "http://localhost:4566"
    lambda = "http://localhost:4566"
    logs = "http://localhost:4566"
    mediaconvert = "http://localhost:4566"
    mediastore = "http://localhost:4566"
    mq = "http://localhost:4566"
    mwaa = "http://mwaa.localhost.localstack.cloud:4566"
    neptune = "http://localhost:4566"
    opensearch = "http://localhost:4566"
    organizations = "http://localhost:4566"
    pinpoint = "http://localhost:4566"
    pipes = "http://localhost:4566"
    qldb = "http://localhost:4566"
    ram = "http://localhost:4566"
    rds = "http://localhost:4566"
    redshift = "http://localhost:4566"
    redshiftdata = "http://localhost:4566"
    resourcegroups = "http://localhost:4566"
    resourcegroupstaggingapi = "http://localhost:4566"
    route53 = "http://localhost:4566"
    route53domains = "http://localhost:4566"
    route53resolver = "http://localhost:4566"
    s3 = "http://s3.localhost.localstack.cloud:4566"
    s3control = "http://localhost:4566"
    sagemaker = "http://localhost:4566"
    scheduler = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    serverlessrepo = "http://localhost:4566"
    servicediscovery = "http://localhost:4566"
    ses = "http://localhost:4566"
    sesv2 = "http://localhost:4566"
    sfn = "http://localhost:4566"
    sns = "http://localhost:4566"
    sqs = "http://localhost:4566"
    ssm = "http://localhost:4566"
    sts = "http://localhost:4566"
    swf = "http://localhost:4566"
    timestreamwrite = "http://localhost:4566"
    transcribe = "http://localhost:4566"
    transfer = "http://localhost:4566"
    waf = "http://localhost:4566"
    wafv2 = "http://localhost:4566"
    xray = "http://localhost:4566"
 }
}

👾 provider.tf

So with the tflocal command, your own provider.tf stays simple 👌

provider "aws" {
  region = "ap-northeast-1"
}

After that, just run plan and apply through the tflocal command ❗️

$ tflocal plan
$ tflocal apply
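
As a quick end-to-end check (the resource and bucket name here are hypothetical), define a resource, apply it through tflocal, and then verify it with awslocal:

resource "aws_s3_bucket" "sandbox" {
  bucket = "tflocal-sandbox"
}

$ tflocal apply -auto-approve
$ awslocal s3 ls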

S3 Backend Support

The tflocal command also supports the S3 Backend. For example, if you manage tfstate with an S3 Backend as in the backend.tf below, it automatically deploys an Amazon S3 bucket with the same name to LocalStack. It even creates the Amazon DynamoDB table tf-test-state for state locking 💡

👾 backend.tf

terraform {
  backend "s3" {
    region = "ap-northeast-1"
    bucket = "kakakakakku-sandbox-tfstates"
    key    = "terraform.tfstate"
  }
}
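
With this backend.tf in place, running init through tflocal should be all that's needed; the state bucket and the lock table are provisioned on the LocalStack side automatically:

$ tflocal init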

And the backend settings for the S3 Backend are added to localstack_providers_override.tf as well (the provider block is identical to the one shown earlier) 👌

provider "aws" {
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  region = "ap-northeast-1"
  endpoints {
    acm = "http://localhost:4566"
    amplify = "http://localhost:4566"
    apigateway = "http://localhost:4566"
    apigatewayv2 = "http://localhost:4566"
    appautoscaling = "http://localhost:4566"
    appconfig = "http://localhost:4566"
    appflow = "http://localhost:4566"
    appsync = "http://localhost:4566"
    athena = "http://localhost:4566"
    autoscaling = "http://localhost:4566"
    backup = "http://localhost:4566"
    batch = "http://localhost:4566"
    ce = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudfront = "http://localhost:4566"
    cloudsearch = "http://localhost:4566"
    cloudtrail = "http://localhost:4566"
    cloudwatch = "http://localhost:4566"
    codecommit = "http://localhost:4566"
    cognitoidentity = "http://localhost:4566"
    cognitoidp = "http://localhost:4566"
    configservice = "http://localhost:4566"
    docdb = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
    ec2 = "http://localhost:4566"
    ecr = "http://localhost:4566"
    ecs = "http://localhost:4566"
    efs = "http://localhost:4566"
    eks = "http://localhost:4566"
    elasticache = "http://localhost:4566"
    elasticbeanstalk = "http://localhost:4566"
    elasticsearch = "http://localhost:4566"
    elb = "http://localhost:4566"
    elbv2 = "http://localhost:4566"
    emr = "http://localhost:4566"
    events = "http://localhost:4566"
    firehose = "http://localhost:4566"
    fis = "http://localhost:4566"
    glacier = "http://localhost:4566"
    glue = "http://localhost:4566"
    iam = "http://localhost:4566"
    iot = "http://localhost:4566"
    iotanalytics = "http://localhost:4566"
    iotevents = "http://localhost:4566"
    kafka = "http://localhost:4566"
    keyspaces = "http://localhost:4566"
    kinesis = "http://localhost:4566"
    kinesisanalytics = "http://localhost:4566"
    kinesisanalyticsv2 = "http://localhost:4566"
    kms = "http://localhost:4566"
    lakeformation = "http://localhost:4566"
    lambda = "http://localhost:4566"
    logs = "http://localhost:4566"
    mediaconvert = "http://localhost:4566"
    mediastore = "http://localhost:4566"
    mq = "http://localhost:4566"
    mwaa = "http://mwaa.localhost.localstack.cloud:4566"
    neptune = "http://localhost:4566"
    opensearch = "http://localhost:4566"
    organizations = "http://localhost:4566"
    pinpoint = "http://localhost:4566"
    pipes = "http://localhost:4566"
    qldb = "http://localhost:4566"
    ram = "http://localhost:4566"
    rds = "http://localhost:4566"
    redshift = "http://localhost:4566"
    redshiftdata = "http://localhost:4566"
    resourcegroups = "http://localhost:4566"
    resourcegroupstaggingapi = "http://localhost:4566"
    route53 = "http://localhost:4566"
    route53domains = "http://localhost:4566"
    route53resolver = "http://localhost:4566"
    s3 = "http://s3.localhost.localstack.cloud:4566"
    s3control = "http://localhost:4566"
    sagemaker = "http://localhost:4566"
    scheduler = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    serverlessrepo = "http://localhost:4566"
    servicediscovery = "http://localhost:4566"
    ses = "http://localhost:4566"
    sesv2 = "http://localhost:4566"
    sfn = "http://localhost:4566"
    sns = "http://localhost:4566"
    sqs = "http://localhost:4566"
    ssm = "http://localhost:4566"
    sts = "http://localhost:4566"
    swf = "http://localhost:4566"
    timestreamwrite = "http://localhost:4566"
    transcribe = "http://localhost:4566"
    transfer = "http://localhost:4566"
    waf = "http://localhost:4566"
    wafv2 = "http://localhost:4566"
    xray = "http://localhost:4566"
 }
}

terraform {
  backend "s3" {
    access_key = "test"
    bucket = "kakakakakku-sandbox-tfstates"
    dynamodb_table = "tf-test-state"
    endpoints = {
      s3 = "http://s3.localhost.localstack.cloud:4566"
      iam = "http://localhost:4566"
      sso = "http://localhost:4566"
      sts = "http://localhost:4566"
      dynamodb = "http://localhost:4566"
    }
    key = "terraform.tfstate"
    region = "ap-northeast-1"
    secret_key = "test"
    skip_credentials_validation = true
    skip_metadata_api_check = true
  }
}
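
You can check with awslocal that the state bucket and the lock table from the example above actually exist on LocalStack:

$ awslocal s3 ls
$ awslocal dynamodb list-tables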

Incidentally, Terraform v1.10 introduced state locking without an Amazon DynamoDB table (still Experimental), and it is expected to become generally available in Terraform v1.11, so I'm thinking the tflocal command could then be improved to skip creating the Amazon DynamoDB table when running on Terraform v1.11 👌
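
For reference, here is a minimal sketch of that S3-native locking setup using the use_lockfile option (Experimental as of v1.10; the bucket and key are reused from the example above):

terraform {
  backend "s3" {
    region       = "ap-northeast-1"
    bucket       = "kakakakakku-sandbox-tfstates"
    key          = "terraform.tfstate"
    use_lockfile = true # lock the state via S3 itself, no DynamoDB table needed
  }
}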

I've written up the use_lockfile option for the S3 Backend, introduced in Terraform v1.10.0, in the following post ❗️

kakakakakku.hatenablog.com

Summary

When deploying from Terraform to LocalStack, the tflocal command is convenient and highly recommended~ \( 'ω')/