How to Delete Local and Remote Tags on Git
Locally delete a tag:

```sh
git tag -d <tag_name>
```
Remotely delete a tag:

```sh
git push --delete origin <tag_name>
```
Check out the commit that you want to retag. Grab the SHA from your GitHub history.
```sh
git checkout <SHA>
git tag M.M.P
git push --tags
git checkout main
```
Our root module structure is as follows:
```
PROJECT_ROOT
│
├── main.tf              # everything else
├── variables.tf         # stores the structure of input variables
├── terraform.tfvars     # the data of variables we want to load into our terraform project
├── providers.tf         # defines required providers and their configuration
├── outputs.tf           # stores our outputs
└── README.md            # required for root modules
```
In Terraform we can set two kinds of variables:
- Environment Variables - those you would set in your bash terminal eg. AWS credentials
- Terraform Variables - those that you would normally set in your tfvars file

We can set Terraform Cloud variables to be sensitive so they are not shown visibly in the UI.
We can use the `-var` flag to set an input variable or override a variable in the tfvars file eg. `terraform apply -var user_uuid="my-user_id"`
- TODO: document this flag
This is the default file to load in Terraform variables in bulk.
- TODO: document this functionality for terraform cloud
- TODO: document which Terraform variables take precedence.
If you lose your statefile, you will most likely have to tear down all your cloud infrastructure manually.

You can use `terraform import`, but it won't work for all cloud resources. You need to check the Terraform provider documentation to see which resources support import.
```sh
terraform import aws_s3_bucket.bucket bucket-name
```
See the Terraform Import and AWS S3 Bucket Import documentation.
If someone deletes or modifies a cloud resource manually through ClickOps, running `terraform plan` will attempt to put our infrastructure back into the expected state, fixing Configuration Drift.
```sh
terraform apply -refresh-only -auto-approve
```
It is recommended to place modules in a `modules` directory when locally developing modules, but you can name it whatever you like.
We can pass input variables to our module. The module has to declare these variables in its own variables.tf.
```hcl
module "terrahouse_aws" {
  source      = "./modules/terrahouse_aws"
  user_uuid   = var.user_uuid
  bucket_name = var.bucket_name
}
```
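For the module call above to work, the child module must declare matching variables in its own variables.tf. A minimal sketch (the descriptions below are assumptions):

```hcl
# modules/terrahouse_aws/variables.tf
variable "user_uuid" {
  description = "The UUID of the user"
  type        = string
}

variable "bucket_name" {
  description = "The name of the S3 bucket"
  type        = string
}
```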
Using the source we can import the module from various places eg:
- locally
- GitHub
- Terraform Registry
```hcl
module "terrahouse_aws" {
  source = "./modules/terrahouse_aws"
}
```
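The same module block could instead be sourced from GitHub or the Terraform Registry; only one source would be used at a time. A sketch, where the repository and registry addresses are hypothetical:

```hcl
# from GitHub (hypothetical repository address)
module "terrahouse_aws" {
  source = "github.com/example-org/terraform-terrahouse-aws"
}

# from the Terraform Registry (hypothetical module address)
module "terrahouse_aws" {
  source  = "example-org/terrahouse/aws"
  version = "1.0.0"
}
```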
LLMs such as ChatGPT may not be trained on the latest documentation or information about Terraform. They are likely to produce older examples that could be deprecated, often affecting providers.
This is a built-in Terraform function to check the existence of a file.

```hcl
condition = fileexists(var.error_html_filepath)
```
https://developer.hashicorp.com/terraform/language/functions/fileexists
https://developer.hashicorp.com/terraform/language/functions/filemd5
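A sketch of how these two functions might be used together: `fileexists` can guard a variable validation block, and `filemd5` can set an object's `etag` so Terraform detects content changes (the variable and resource names below are assumptions based on this project):

```hcl
variable "index_html_filepath" {
  type = string

  validation {
    condition     = fileexists(var.index_html_filepath)
    error_message = "The provided path for index.html does not exist."
  }
}

resource "aws_s3_object" "index_html" {
  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "index.html"
  source = var.index_html_filepath
  etag   = filemd5(var.index_html_filepath)
}
```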
In Terraform there is a special variable called `path` that allows us to reference local paths:
- path.module - get the path for the current module
- path.root - get the path for the root module

Special Path Variable
```hcl
resource "aws_s3_object" "index_html" {
  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "index.html"
  source = "${path.root}/public/index.html"
}
```
Locals allow us to define local variables. They can be very useful when we need to transform data into another format and reference it as a variable.
```hcl
locals {
  s3_origin_id = "MyS3Origin"
}
```
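A local is referenced with the `local.` prefix (singular, not `locals.`). For example, a CloudFront origin could reuse the value above (the distribution below is a trimmed sketch, not a complete resource):

```hcl
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.website_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  # ... remaining required CloudFront arguments omitted
}
```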
This allows us to source data from cloud resources. This is useful when we want to reference cloud resources without importing them.
```hcl
data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```
We use the `jsonencode` function to create a JSON policy inline in the HCL.

```
> jsonencode({"hello"="world"})
{"hello":"world"}
```
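For example, an S3 bucket policy could be written inline with `jsonencode` rather than as a heredoc JSON string (the policy statement and bucket reference below are a hypothetical sketch):

```hcl
resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = aws_s3_bucket.website_bucket.bucket

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "PublicReadGetObject"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.website_bucket.arn}/*"
      }
    ]
  })
}
```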
Plain data values such as Local Values and Input Variables don't have any side-effects to plan against and so they aren't valid in replace_triggered_by. You can use terraform_data's behavior of planning an action each time input changes to indirectly use a plain value to trigger replacement.
https://developer.hashicorp.com/terraform/language/resources/terraform-data
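A sketch of the pattern described above, adapted from the terraform_data documentation (the variable and instance here are placeholders):

```hcl
variable "revision" {
  default = 1
}

# terraform_data plans a change whenever its input changes...
resource "terraform_data" "replacement" {
  input = var.revision
}

resource "aws_instance" "web" {
  # ...

  lifecycle {
    # ...so this instance is replaced whenever var.revision changes
    replace_triggered_by = [terraform_data.replacement]
  }
}
```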
Provisioners allow you to execute commands on compute instances eg. an AWS CLI command.

They are not recommended for use by HashiCorp because Configuration Management tools such as Ansible are a better fit, but the functionality exists.
This will execute a command on the machine running the Terraform commands eg. plan, apply.
```hcl
resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo The server's IP address is ${self.private_ip}"
  }
}
```
https://developer.hashicorp.com/terraform/language/resources/provisioners/local-exec
This will execute commands on a machine that you target. You will need to provide credentials, such as SSH, to get into the machine.
```hcl
resource "aws_instance" "web" {
  # ...

  # Establishes connection to be used by all
  # generic remote provisioners (i.e. file/remote-exec)
  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "puppet apply",
      "consul join ${aws_instance.web.private_ip}",
    ]
  }
}
```
https://developer.hashicorp.com/terraform/language/resources/provisioners/remote-exec
`for` expressions allow us to enumerate over complex data types:

```hcl
[for s in var.list : upper(s)]
```
This is mostly useful when you are creating multiples of a cloud resource and you want to reduce the amount of repetitive terraform code.
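As a sketch, `for_each` combined with `fileset` could upload every file in a directory as its own S3 object instead of declaring one resource per file (the paths and names below are assumptions based on this project):

```hcl
resource "aws_s3_object" "assets" {
  for_each = fileset("${path.root}/public/assets", "*.{jpg,png,gif}")

  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "assets/${each.key}"
  source = "${path.root}/public/assets/${each.key}"
  etag   = filemd5("${path.root}/public/assets/${each.key}")
}
```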