Terraform shared state

Terraform 0.9.5.

I'm putting together a set of modules that our infrastructure and automation teams will use to create resources in a standard way, and in turn build stacks that provide various capabilities. Everything works well.

As with anything using shared Terraform state, it becomes a problem. I configured Terraform to use an S3 backend, versioned and encrypted, and added locking via a DynamoDB table. Fine. Everything works with local accounts ... OK, here's the problem ...

We have multiple AWS accounts: 1 for IAM, 1 for billing, 1 for production, 1 for non-production, 1 for shared services, etc. ... you get where I'm going. My problem is as follows.

I authenticate as a user in our IAM account and assume the required role in the target account. This worked like a dream until I introduced a Terraform backend config to use S3 for shared state. It looks like the backend configuration in Terraform requires the default credentials in ~/.aws/credentials, and that they must belong to a user local to the account where the S3 bucket was created.
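For context, the provider-level role assumption that works fine looks something like this (a minimal sketch; the account ID and role name are placeholders):

provider "aws" {
  region = "us-east-1"

  # Credentials for the IAM account come from ~/.aws/credentials;
  # the provider then assumes a role in the target account.
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/OrganizationAccountAccessRole"
  }
}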

Is there a way to get the backend configuration to use the account and role configured in the provider? Is there a better way to set up shared state and locking? Any suggestions are appreciated :)

Update: Got this working. I created a new user in the account where the S3 bucket lives, and a policy that allows this new user only s3:DeleteObject, s3:GetObject, s3:PutObject, s3:ListBucket and dynamodb:* on a specific S3 bucket and DynamoDB table. I then created a custom credentials file with a default profile holding the access and secret keys of this new user, and used a backend config similar to:

terraform {
  required_version = ">= 0.9.5"

  backend "s3" {
    bucket                  = "remote_state"
    key                     = "/NAME_OF_STACK/terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = "true"
    shared_credentials_file = "PATH_TO_CUSTOM_CREDENTIALS_FILE"
    lock_table              = "MY_LOCK_TABLE"
  }
}


This works, but you must have this initial credentials configuration in place for the profile to make it work. If anyone knows of a better setup, or can identify issues with my backend configuration, please let me know.
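For reference, the policy described above, sketched in Terraform itself (user name, account ID and table name are placeholders):

resource "aws_iam_user_policy" "remote_state" {
  name = "remote-state-access"
  user = "terraform-remote-state"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::remote_state"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::remote_state/*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MY_LOCK_TABLE"
    }
  ]
}
EOF
}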





1 answer


Terraform expects the backend configuration to be static, and does not allow interpolated variables to be included as they can be elsewhere in the configuration, because the backend must be initialized before any other work can be done.

As such, it can be tricky to use the same configuration multiple times with different AWS accounts, but it can be done in one of two ways.


The lowest-friction approach is to create a single S3 bucket and DynamoDB table that hold the state for all environments, and use S3 object permissions and/or IAM policies to overlay granular access controls.

Organizations adopting this strategy will sometimes create the S3 bucket in a separate "administrative" AWS account, and then grant restricted access to the individual state objects in the bucket to the specific roles that run Terraform in each of the other accounts.
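As an illustration, a bucket policy in that administrative account might grant a role in another account access to just its own state object (the account ID, role name and object key here are hypothetical):

resource "aws_s3_bucket_policy" "remote_state" {
  bucket = "remote_state"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::210987654321:role/terraform-prod"},
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::remote_state/prod/terraform.tfstate"
    }
  ]
}
EOF
}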

This solution has the advantage that, once it has been set up correctly in S3, Terraform can be used routinely without any unusual workflow: configure the single S3 bucket in the backend, and supply the appropriate credentials via environment variables to allow them to vary. Once the backend is initialized, use state environments (soon to be renamed "workspaces" in Terraform 0.10) to create a separate state for each of the target environments of the same configuration.
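For example (the environment names are arbitrary; with the S3 backend, each non-default environment is stored under an env:/NAME/ prefix in the same bucket):

terraform init                  # configure the single S3 backend once
terraform env new production    # create a separate state per environment
terraform env new staging
terraform env select staging    # switch environments before plan/apply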

The downside, of course, is the need to manage a more complex access configuration around S3, rather than simply relying on coarse access control at the level of whole AWS accounts. It is also harder with DynamoDB in the mix, because the access controls on DynamoDB are not as flexible.




If a complex S3 configuration is not desired, the complexity can instead be shifted into Terraform's workflow by using partial configuration. In this mode, only a subset of the backend settings are provided in the configuration, and the remaining settings are supplied on the command line when running terraform init.
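A partial configuration might declare only the settings that never change, for example:

terraform {
  backend "s3" {
    # bucket, lock_table and credentials are deliberately omitted here;
    # they are supplied via -backend-config arguments to terraform init.
    key     = "NAME_OF_STACK/terraform.tfstate"
    region  = "us-east-1"
    encrypt = "true"
  }
}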

This allows the settings to vary between runs, but since it requires extra arguments, most organizations taking this approach use a wrapper script to configure Terraform appropriately based on local conventions. This can be as simple as a shell script that runs terraform init with the appropriate arguments.
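Such a wrapper might look something like this (a sketch only; the bucket, table and per-environment credentials-file naming conventions are assumptions):

#!/bin/sh
# Usage: ./tf-init.sh ENVIRONMENT
# Re-initializes the working directory against the given environment's backend.
ENV="$1"

terraform init \
  -backend-config="bucket=remote_state" \
  -backend-config="lock_table=MY_LOCK_TABLE" \
  -backend-config="shared_credentials_file=$HOME/.aws/credentials-$ENV"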

This allows you to vary, for example, the credentials file by providing it on the command line. In this case state environments are not used; instead, switching between environments requires re-initializing the working directory against a new backend configuration.

The advantage of this approach is that it does not impose any particular restrictions on the use of S3 and DynamoDB, as long as the differences between environments can be expressed as CLI options.

The downside is the need for an unusual workflow, or wrapper scripts, to set Terraform up correctly.
