Why IaC is complex but simple
We hear a lot about infrastructure as code, and it seems like magic.
But what is this magic called infrastructure as code? Well, let me clarify it for you. Simply put, infrastructure as code is writing down what you want to deploy as human-readable code. You write down whatever you want deployed, and in most cases the code is declarative, if you're using Terraform, for example. You write it down, and you can track it, as opposed to having to log into an AWS console and click in 15 different places to spin up a VM, where there's a chance that somebody will fat-finger something or double-click a button and cause extra resources to be deployed.
This ties into the next benefit, which is that it enables DevOps: the codification of deployment means that it can be tracked in version control such as Git, enabling better visibility and collaboration across teams. Distributed teams can now work on the same chunk of code that deploys infrastructure, and agree on something that is immutable and stored in version control before it is deployed.
For the majority of the IaC tools out there, the code is usually declarative, which means that you declare or write down exactly what you want, without caring about what underlying functions or API calls need to be made to deploy that infrastructure.
The beauty of writing code this way is that when someone reads it,
they can easily make sense of what is being deployed, and in addition to that,
the code can be a form of documentation of the deployment itself.
And finally, the speed, cost savings, and reduced risk of using infrastructure as code cannot be overstated. Less human intervention during deployment means fewer chances of security flaws, of somebody double-clicking and creating extra resources, or of wasting time trying to find which Next button to click to create a resource. This ties back to something I mentioned right at the start: when you're trying to deploy something from memory by clicking around a console, there's a chance that, on your fifth attempt,
you'll choose the wrong option in a dropdown or click on the wrong type of EC2 instance and therefore mess up your application.
By using a consistent IaC tool such as Terraform, you can make sure that your code is uploaded to Git, tracked, and always consistent. Then, no matter how many times you deploy it, it will always be deployed with the same inputs and the same expected output.
So, why is Terraform so successful? There are other solutions out there.
Terraform can be used to codify the configuration for software-defined networks. A popular example of an SDN deployment via Terraform is AWS's virtual private cloud, or VPC, all through a few lines of HCL, Terraform's language. This opens up new possibilities for developers to think of networks in terms of code, thereby enabling DevOps.
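As a rough illustration, a VPC and a subnet can be declared in just a few lines of HCL. This is a minimal sketch; the resource names, CIDR ranges, and tag values are illustrative placeholders, not values from the original text.

```hcl
# A minimal VPC defined as code: the whole network is declared,
# not clicked together in a console.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "example-vpc"
  }
}

# A subnet carved out of the VPC above; note how it references
# the VPC by its Terraform address rather than a hardcoded ID.
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```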
This ties into the next feature: Terraform's versatility when interfacing with cloud and infrastructure vendor APIs, or even resource schedulers like Kubernetes. You write your Terraform code in one uniform language, and the syntax doesn't change under the hood. Terraform makes the experience seamless for you, regardless of where you're deploying.
You might be wondering by now what cloud agnostic means. Simply put,
it means that Terraform doesn't care what cloud or infrastructure deployment
method you're using. The library of cloud and infrastructure vendors that Terraform supports is already quite impressive and growing by the day.
This also means that you don’t have to rely on just one vendor for high
availability. You can use Terraform to deploy a highly available solution across two public clouds, such as AWS and GCP, and achieve high availability beyond what any single vendor can offer.
By browsing the HashiCorp website, you will notice the variety of cloud providers they support: Alibaba, AWS, Azure, VMware, Oracle, all the major public clouds. And the second category, the cloud category, has vendors such as DigitalOcean, Fastly, Gridscale, and Heroku. So, notice how many options Terraform gives you to interface with various cloud vendors, not just the popular ones but also ones that you might not even have heard of.
And if you look inside the databases category, for example, you'll notice that Terraform even has providers for interfacing with MySQL and InfluxDB. So, this is one of the pluses of Terraform: the community is very active, and it is churning out providers for interfacing with many different vendors. This is one of Terraform's strengths. Now, back to the advantages of Terraform and why you'll love it. Terraform's state-tracking mechanism takes away the worry of dependency and resource tracking by keeping it all in one place.
You don’t have to worry about how to make changes to the Terraform deployed infrastructure. That part is handled by Terraform.
You just need to specify what you want to modify through Terraform configuration or code, and Terraform handles getting you to your desired state. For example, if you change the operating system image for a cloud VM,
on any cloud, Terraform automatically handles deleting the VM with the old image and spinning up a new one, saving you the time spent on manual, laborious tasks and sparing you human errors. These are the reasons that Terraform is so convenient to use as an
IaC tool.
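To make the image-change example concrete, here is a hedged sketch in HCL. The AMI ID and instance type are placeholders; the point is that editing a single argument is enough for Terraform to plan the destroy-and-recreate for you.

```hcl
# Changing the operating system image of an existing instance:
# swapping the placeholder ami value below for a different one
# causes Terraform to plan a replacement of this VM on the next
# apply, with no manual deletion required.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t3.micro"
}
```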
One body of code can interact with different platforms. Terraform abstracts away all the API calls that it makes under the hood using a concept called providers. Every cloud vendor has its own provider. A provider block fetches a provider, for example the AWS provider. The word provider is a reserved keyword. We then have the name of the provider, in this case aws, that we want to get from the public Terraform registry. And we have the configuration parameters, which help define the arguments for the AWS provider. These will vary depending on the provider that we use. For GCP, the Google Cloud provider, we have a similar structure: the reserved keyword provider, telling it to fetch the Google provider with the google keyword in double quotes, and then the arguments it needs to set up the authentication and the environment for the Google provider. For example,
for the credentials argument in GCP, we use the built-in function file,
which helps fetch the credentials to authenticate against the Google Cloud.
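A hedged sketch of the two provider blocks described above. The region, project ID, and credentials path are illustrative placeholders, not values from the original text.

```hcl
# AWS provider block: "provider" is the reserved keyword, and
# "aws" names the provider to fetch from the public registry.
provider "aws" {
  region = "us-east-1" # illustrative argument
}

# Google Cloud provider block: same structure, different arguments.
provider "google" {
  project     = "my-example-project"         # placeholder project ID
  region      = "us-central1"                # illustrative argument
  credentials = file("gcp-credentials.json") # built-in file() reads the key file
}
```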
Terraform by default looks for providers in the Terraform providers registry,
the link for which is shown below. However, providers can also be sourced locally or internally and referenced within your
Terraform code. You can even write your own custom providers.
The Concept of State
Why is state so important to Terraform? The simple two-word answer is resource tracking. Simply put, it is a mechanism for Terraform to keep tabs on what has been deployed. It is critical to Terraform because, at various stages in the lifecycle of deploying infrastructure, Terraform needs to refer back to the state of deployed resources before deciding whether resources need to be created from scratch, modified,
or even destroyed. You have your Terraform configuration or code,
and you have deployed the code into managed infrastructure platforms.
What helps Terraform map the resources in the code to the resources in the real environment
is the Terraform state file.
This file is a JSON dump containing all the metadata about your Terraform
deployment, as well as details about all the resources that it has deployed.
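As a rough illustration, a heavily trimmed state file might look like the sketch below. The resource shown and all of its values are placeholders, and a real state file carries considerably more metadata per resource.

```json
{
  "version": 4,
  "terraform_version": "1.7.0",
  "serial": 12,
  "lineage": "3f2e9c1a-0000-0000-0000-000000000000",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "attributes": {
            "id": "i-0123456789abcdef0",
            "ami": "ami-0123456789abcdef0"
          }
        }
      ]
    }
  ]
}
```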
For example, if you now want to delete all the resources that you created,
instead of having to separately write code to delete everything, you would just issue the terraform destroy command.
Terraform will look at the state file and know exactly what resources to
destroy. The state file is usually stored locally in the same directory where your Terraform code resides. However, for better integrity and availability, it can also be stored remotely. Basically, the state file helps Terraform calculate deployment deltas, that is, the difference between what was previously deployed via Terraform and what the code says should be deployed now.
The plan provided by the Terraform plan command is compared to the state file for calculating this delta and reconciling and deploying the actual changes into the environment. One final word of warning, though:
Because the state file is so critical to Terraform's functionality, never lose it, and never let it fall into the wrong hands. You don't want to lose it because you'll have no codified way to go back and make changes to your infrastructure deployed through Terraform, and manually deleting or modifying complex infrastructure can be a real pain. Also,
you wouldn’t want the state file to fall into the wrong hands because it may
contain sensitive data and details about the resources deployed through
Terraform.
The workflow
In the Terraform workflow, there are really only three steps. First, you write the code. Then you review the changes that the code will make.
And finally you execute or deploy the actual code to realize real
infrastructure, wherever you might want to deploy it.
In the write phase, you generally start off either with a version control system, as a best practice, or even a flat file if you're working individually.
Version control is recommended as a best practice so that you and your team can
collaborate and iterate on the issues within your code.
We then move on to planning or reviewing the changes that your code will make. This is an important step in the Terraform workflow, because at this point you’re not yet deploying any infrastructure, but you can go ahead and see in detail the changes that the code will make within your actual environment. And looking at the review of the changes that the code will make, you can go back and modify your code. And with this cycle,
you can perfect your code and get it to work exactly as you need it to.
Finally, you move on to deploying or executing the code and making actual changes to an environment. This step deploys actual infrastructure,
and creates real resources in the cloud.
Whether you’re working as an individual or a team, this workflow will yield the best results and the best efficiency.
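In Terraform's CLI, the write, review, and execute phases above map onto a handful of commands. This is a minimal sketch of the cycle; the plan file name is an arbitrary choice.

```shell
# Initialize the working directory: downloads the providers the code needs.
terraform init

# Review phase: show the changes the code would make, without deploying anything.
terraform plan -out=tfplan

# Execute phase: apply the reviewed plan and create real infrastructure.
terraform apply tfplan

# Later, tear everything down using the state file.
terraform destroy
```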