Hey Learners! Welcome back. I hope you now have a good understanding of Terraform and how to provision infrastructure with it. We covered the different blocks used in Terraform and the concept of modules for simplifying configuration maintenance. In this challenge, we'll look at some important questions that might be asked in an interview. Let's start...
1- What is Terraform and how is it different from other IaC tools?
Terraform is an infrastructure as code (IaC) tool developed by HashiCorp. It allows you to define and provision infrastructure resources using declarative configuration files.
Unlike many other tools, it uses a declarative syntax: you describe the desired infrastructure state and let Terraform handle the "how".
It's cloud-agnostic, supports multi-cloud environments, promotes code reuse, excels at parallel execution, and has a robust community.
2- What do you call a main.tf module?
In Terraform, the main.tf file is typically the entry point for a module, or the main configuration file in the root module.
Terraform automatically processes all .tf files in the working directory as part of its configuration, so the main configuration module is essentially the collection of all these .tf files. This approach simplifies module organization and reduces the need for explicit file references.
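As an illustration, a minimal main.tf in a root module might look like this (the provider, version constraint, and resource values are placeholder assumptions, not from the original post):

```hcl
# A minimal root-module main.tf sketch: provider requirements,
# provider configuration, and a single resource.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # placeholder bucket name
}
```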
3- What exactly is Sentinel? Can you provide a few examples that we can use for Sentinel policies?
Sentinel is a policy-as-code framework developed by HashiCorp. It is used for policy enforcement, compliance, and governance across infrastructure as code.
Sentinel policies can be used to define rules and constraints on Terraform configurations.
Examples:-
Restricting the creation of public S3 buckets
Ensuring only approved AWS instance types are used
Requiring that certain tags be applied to resources
Enforcing specific access control rules
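As a sketch, the "approved instance types" policy above might be written in Sentinel's own policy language roughly like this (the allowed types list is an illustrative assumption, and the exact imports depend on your Terraform Cloud/Enterprise setup):

```hcl
# Sentinel policy sketch (Sentinel language, not HCL):
# allow only approved EC2 instance types in a Terraform plan.
import "tfplan/v2" as tfplan

# Placeholder list of approved types
allowed_types = ["t2.micro", "t3.micro"]

main = rule {
    all tfplan.resource_changes as _, rc {
        # Non-EC2 resources pass; EC2 instances must use an approved type
        rc.type is not "aws_instance" or
        rc.change.after.instance_type in allowed_types
    }
}
```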
4- You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
To create multiple instances of the same resource in Terraform, we can use the count or for_each meta-argument in the resource block, depending on our use case.
To learn more, see the DAY69 challenge.
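The two meta-arguments can be sketched like this (the AMI ID, instance type, and names are placeholder assumptions):

```hcl
# count: identical copies, addressed by numeric index
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"

  tags = {
    Name = "web-${count.index}" # web-0, web-1, web-2
  }
}

# for_each: one instance per collection element, addressed by key
resource "aws_instance" "app" {
  for_each      = toset(["dev", "staging", "prod"])
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"

  tags = {
    Name = "app-${each.key}" # app-dev, app-staging, app-prod
  }
}
```

count suits identical copies; for_each suits instances that vary per key and keeps addresses stable when the collection changes.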
5- You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
A. Set the environment variable TF_LOG=TRACE
B. Set verbose logging for each provider in your Terraform configuration
C. Set the environment variable TF_VAR_log=TRACE
D. Set the environment variable TF_LOG_PATH
Ans- A. Set the environment variable TF_LOG=TRACE
6- The terraform destroy command will destroy everything that has been created in the infrastructure. Tell us how you would save a particular resource while destroying the complete infrastructure.
Note that the -target flag limits terraform destroy to the specified resource, i.e. it destroys only that resource:
terraform destroy -target=<resource_type>.<resource_name>
To retain a particular resource while destroying everything else, first remove it from Terraform's state with terraform state rm; Terraform then no longer manages that resource and will not destroy it.
Example:- To retain the EC2 instance with the resource name Example, we use the following commands
terraform state rm aws_instance.Example
terraform destroy
7- Which module is used to store the .tfstate file in S3?
To store the Terraform state file in an S3 bucket, we can use the Terraform backend configuration with the s3 backend type.
Example:-
terraform {
  backend "s3" {
    bucket = "S3-BUCKET-NAME"
    key    = "terraform.tfstate" # any key/path within the S3 bucket
    region = "us-west-2"
  }
}
8- How do you manage sensitive data in Terraform, such as API keys or passwords?
In Terraform, it's essential to manage sensitive data, such as API keys or passwords, securely to maintain the integrity and security of your infrastructure. Common approaches include:
Environment variables
Variable files excluded from version control
Terraform input variables marked as sensitive
Secure state management (e.g. encrypted remote backends)
HashiCorp Vault, etc.
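One of these approaches, marking an input variable as sensitive, can be sketched like this (the variable name is a placeholder):

```hcl
# Sensitive input variable: Terraform redacts its value
# from plan and apply output.
variable "db_password" {
  type      = string
  sensitive = true
}

# Supply the value outside version control, e.g. via the environment:
#   export TF_VAR_db_password="..."
```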
9- You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
To provision an S3 bucket and an IAM user with read and write access, you can use the following Terraform resources:
aws_s3_bucket to create an S3 bucket in AWS
aws_iam_user to create an IAM user
aws_iam_policy to define a policy that allows read and write access to the S3 bucket
aws_iam_user_policy_attachment to attach the policy to the IAM user
Refer to the DAY67 challenge
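A minimal sketch combining these four resources (the bucket name, user name, and action list are illustrative assumptions, not from the original challenge):

```hcl
# S3 bucket with a placeholder name
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}

# IAM user with a placeholder name
resource "aws_iam_user" "example" {
  name = "s3-read-write-user"
}

# Policy granting read/write on the bucket and its objects
resource "aws_iam_policy" "s3_rw" {
  name = "s3-read-write"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*"
      ]
    }]
  })
}

# Attach the policy to the user
resource "aws_iam_user_policy_attachment" "attach" {
  user       = aws_iam_user.example.name
  policy_arn = aws_iam_policy.s3_rw.arn
}
```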
10- Who maintains Terraform providers?
Terraform providers are typically maintained by the respective cloud providers, organizations, or open-source contributors. For example, AWS maintains the AWS provider, and HashiCorp maintains many official providers.
Additionally, the Terraform community often contributes to and maintains various third-party providers.
11- How can we export data from one module to another?
In Terraform, you can export data from one module to another using output variables. Follow the steps below:
Export the data from the source module
Import the data in the calling module
Example:-
Source module:- ("data/output.tf")
Define the output variable in your source module's (data/) output.tf file:
output "data_exported" {
  value = "This data is exported"
}
Calling Module:- ("main.tf")
Define a module block that references the source module, then use its output:
module "source" {
  source = "./data"
}
resource "<resource_type>" "<resource_name>" {
  data_from_source = module.source.data_exported
}
In the above example, the value of the data_exported output from the source module data is assigned to the data_from_source argument in the resource block.
By following these steps, you can easily export data from one module and import it into another.
Thank you so much for taking the time to read till the end! Hope you found this blog informative.
Feel free to explore more of my content, and don't hesitate to reach out if you need any assistance or have any questions.
Find me on:- Hashnode LinkedIn Github
Happy Learning!