Fri Jun 09 2023
In this post, I will try to start a Terraform project, and move from local tf state storage to remote storage
Written by: Cesar
7 min read
I will begin by spinning up a box in Linode, using the Linode Terraform provider. The end result is a running server on Linode and some tf files.
Next I want to create a cloud bucket to store the tf state files, so I again use Terraform to provision a bucket in Linode.
Next I update my Terraform files to use a remote backend for state, instead of my local one. Doing this will take my local state and move it over to my new remote bucket.
Next I will put my work into a local git repo. Then I will commit my changes to a remote repo on GitLab. Finally I will update my Terraform files to use Gitlab as my remote backend for state storage.
I begin by installing Terraform. I then create my first Terraform configuration file, main.tf, which contains the following:
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}

provider "linode" {
  token = var.linode_pat_token
}

resource "linode_instance" "cs-workstation" {
  image           = "linode/ubuntu20.04"
  label           = "cs-workstation"
  group           = "cs-learning"
  region          = "us-west"
  type            = "g6-nanode-1"
  authorized_keys = [var.authorized_key]
  root_pass       = var.root_user_pw
  tags            = ["cs-workstation", "remote-ws"]
}
This tells Terraform that I want to use the Linode provider, and that I want to spin up an Ubuntu box in the west region, yada, yada, yada.
I also create two other files: variables.tf, which declares the var.* references used in main.tf, and terraform.tfvars, where I store all my sensitive secret stuff like the Linode API token and the root password.
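For reference, a minimal variables.tf along these lines might look like the sketch below. The variable names are the ones referenced in main.tf; the descriptions and the sensitive flags are my own additions, not something the original files are guaranteed to contain:

```hcl
variable "linode_pat_token" {
  description = "Linode personal access token (kept in terraform.tfvars)"
  type        = string
  sensitive   = true
}

variable "authorized_key" {
  description = "SSH public key allowed to log in as root"
  type        = string
}

variable "root_user_pw" {
  description = "Root password for the new instance"
  type        = string
  sensitive   = true
}
```

The matching terraform.tfvars then just assigns values to these names, which is why that file must never be committed.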
I then run:
terraform apply -auto-approve
This will actually create my new server box on Linode. In addition I now have a few new files:
terraform.tfstate
terraform.tfstate.backup
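If I want to double-check what Terraform is now tracking, the standard state subcommands will show it. These are plain Terraform CLI commands, nothing Linode-specific:

```shell
# list every resource recorded in the current state
terraform state list

# show the full attributes of the new instance
terraform state show linode_instance.cs-workstation
```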
I then update main.tf and add two additional resources: a cloud storage bucket, and an access key for that bucket.
...
resource "linode_object_storage_bucket" "mybucket" {
  cluster = "us-southeast-1"
  label   = "remote-tf-state-store"
}

resource "linode_object_storage_key" "mykey" {
  label = "remote-tf-state-storage-access-key"

  bucket_access {
    bucket_name = linode_object_storage_bucket.mybucket.label
    cluster     = linode_object_storage_bucket.mybucket.cluster
    permission  = "read_write"
  }
}
I will need the access key and secret key from the key resource. To view them after I execute an apply, I create another file, output.tf, where I put any outputs I want to see:
output "keys" {
  value     = "access_key: ${linode_object_storage_key.mykey.access_key} secret_key: ${linode_object_storage_key.mykey.secret_key} limited: ${linode_object_storage_key.mykey.limited}"
  sensitive = true
}
After I run the apply again, I don't see the value in the console; instead I see:
keys = <sensitive>
hmmm. I then take a look at the terraform.tfstate file and it contains what I need.
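Reading the raw state file works, but as far as I know recent Terraform versions will also print a sensitive value if you ask for that one output by name, so digging through the state file shouldn't be necessary:

```shell
# querying a single output prints its value even when marked sensitive
terraform output keys

# -raw strips the quoting, handy for piping into other tools
terraform output -raw keys
```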
Now I have everything I need to switch from local tf state to remote.
Next I create a new file that contains the configuration for accessing the bucket, so Terraform knows how to reach it. I call the file backend:
skip_credentials_validation = true
skip_region_validation      = true
bucket                      = "remote-tf-state-store"
key                         = "ws-terraform.tfstate"
region                      = "us-southeast-1"
endpoint                    = "us-southeast-1.linodeobjects.com"
access_key                  = "your_access_key"
secret_key                  = "your_secret_key"
Next I update main.tf to use this as its state store.
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }

  backend "s3" {
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
Then I run:
terraform init -backend-config=backend
to use the new storage configuration. I was able to verify in Linode that one new object was created in my Object Storage bucket.
Next I created a repository in Gitlab and committed all my Terraform files except for the ones containing sensitive information. I did this by creating a .gitignore file that contains the following:
**/*.terraform/*
*.tfstate
*.tfstate.*
*.tfvars
*.tfvars.json
backend
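The git side of this is the usual sequence; the remote URL below is a placeholder, not my actual repository:

```shell
git init
git add .
git commit -m "Initial Terraform configuration"
git remote add origin git@gitlab.com:<USERNAME>/<REPO>.git
git push -u origin main
```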
The next thing I need to do to store the Terraform state file on Gitlab is to update my main.tf to use another type of backend:
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }

  backend "http" {}
}
Then in Gitlab I need to create a personal access token with API scope and grab the backend parameters for using Gitlab as storage. Under Gitlab -> Operate -> Terraform states, there is an option to copy the Terraform init command:
export GITLAB_ACCESS_TOKEN=<YOUR-ACCESS-TOKEN>
terraform init \
-backend-config="address=https://gitlab.com/api/v4/projects/<PROJECT-ID>/terraform/state/<YOUR-STATE-NAME>" \
-backend-config="lock_address=https://gitlab.com/api/v4/projects/<PROJECT-ID>/terraform/state/<YOUR-STATE-NAME>/lock" \
-backend-config="unlock_address=https://gitlab.com/api/v4/projects/<PROJECT-ID>/terraform/state/<YOUR-STATE-NAME>/lock" \
-backend-config="username=<USERNAME>" \
-backend-config="password=$GITLAB_ACCESS_TOKEN" \
-backend-config="lock_method=POST" \
-backend-config="unlock_method=DELETE" \
-backend-config="retry_wait_min=5"
After running this command Terraform shows an error about the backend configuration changing and offers two options: -reconfigure or -migrate-state. If I run the same command with -reconfigure, it succeeds, but the current state is not preserved. When I run
terraform plan
Terraform shows that no resources currently exist and says it will create 3 new ones. If I then run with -migrate-state, it likewise succeeds, but without the state we created previously.
After taking a small break and trying to figure out why I could not carry over the current state, I had an idea: maybe, to migrate state from Linode s3 to Gitlab, I first needed to switch back to a local storage configuration, and then try the same command with -migrate-state.
At this point I give it a try. First I update main.tf to go back to the s3 backend, which still has a correct backend configuration, by commenting out the http backend (Gitlab):
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }

  backend "s3" {
    skip_credentials_validation = true
    skip_region_validation      = true
  }

  #backend "http" {}
}
and running:
terraform init -backend-config=backend
I then see it pulling from that backend and when I run:
terraform plan
it shows no changes to be made, since we already have 3 resources provisioned. Then I migrate from that to local state storage by commenting out the s3 backend:
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }

  #backend "s3" {
  #  skip_credentials_validation = true
  #  skip_region_validation      = true
  #}

  #backend "http" {}
}
and running:
terraform init -migrate-state
This prompts me to confirm migrating the existing s3 state to local storage, to which I reply yes. I then run
terraform plan
And I see it downloading the state locally, and it now reports no changes needed, which is what I expect. Now I update main.tf to use the Gitlab backend:
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }

  #backend "s3" {
  #  skip_credentials_validation = true
  #  skip_region_validation      = true
  #}

  backend "http" {}
}
I then run the long Gitlab terraform init command with -migrate-state, and now it too asks me if I want to migrate my existing state to the new backend (Gitlab), to which I say yes. One last run of:
terraform plan
reveals what is expected: no changes required by Terraform.
I was able to use the Linode Terraform provider to create a compute resource. After that I wanted a way to manage my Terraform state file, so I used Terraform to provision some storage and created a backend configuration to store that state on Linode. Then I added versioning to my Terraform project by using Gitlab, and decided I wanted to move my state storage from Linode to Gitlab as well. To do that I first had to migrate my state from Linode back to my local workspace, by removing my s3 backend configuration and running terraform init with the -migrate-state flag. Then I added the http backend configuration and ran the Gitlab init command with the -migrate-state flag, which migrated the tf state successfully.
Next I will create a Gitlab pipeline that triggers on commit and runs my changes using Terraform. I hope to keep the option of running changes either locally or via the Gitlab pipeline. Not sure how others do this.
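As a rough sketch of what I have in mind, something like the .gitlab-ci.yml below might work: plan on every commit, apply only on a manual click, so local runs stay possible. This is untested; the image tag and the elided backend-config arguments are placeholders, not a configuration I have run:

```yaml
# Hypothetical .gitlab-ci.yml sketch (not yet tested)
image:
  name: hashicorp/terraform:1.5
  entrypoint: [""]

stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    # same http backend settings as the long init command above
    - terraform init -backend-config="address=<STATE-ADDRESS>"
    - terraform plan -out=plan.tfplan

apply:
  stage: apply
  when: manual   # keeps the option of applying by hand
  script:
    - terraform init -backend-config="address=<STATE-ADDRESS>"
    - terraform apply plan.tfplan
```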