Terraform Libraries for Azure — Blog 3

Steve Dillon
4 min read · Jan 11, 2021

Don’t start from scratch, and don’t ask 100 questions before getting to work. The Terraform module library is designed to build on top of the platforms created by other modules in the library.

In Blog 2 we created two foundational pieces: the observability object and the context object. The observability object holds identifiers for Application Insights and Azure Monitor Logs. You take the output of the observability module and pass it into other modules, and they will log to and use the resources provided.

Similarly, the context object provides information about the Azure region, product name, and lifecycle being deployed. Almost every AzureRM resource in Terraform asks for a resource group and an Azure region. Those common questions are answered once and passed into the module library as a ‘context’ object, so we do not clutter each module interface with many repetitive parameters.
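As a rough sketch, a context object might be declared like this in a consuming module (the attribute names here are illustrative, not the library’s exact interface):

variable "context" {
  description = "Common deployment context shared by every module."
  type = object({
    resource_group_name = string
    location            = string
    product_name        = string
    lifecycle           = string
  })
}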

Another common input object is the service_settings. These are the settings that really matter for the service being created. We often create a map of service settings, one entry per lifecycle, so that all of the variables needed to ramp a service up from developer use to production are easy to see in one place.

For Azure KeyVault the service settings are fairly limited; they get more complicated for API Management and Cosmos DB. For KeyVault:

KeyVault Service Settings
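A plausible sketch of these settings, using real azurerm_key_vault argument names (the exact object shape is an assumption):

variable "service_settings" {
  description = "Settings specific to this KeyVault."
  type = object({
    sku_name                   = string # "standard" or "premium"
    soft_delete_retention_days = number
    purge_protection_enabled   = bool
  })
}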

Here are the settings for KeyVault, along with a map variable that shows how to change settings based on deployment environment.

Example of a map of changes
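One plausible shape for that map, keyed by lifecycle so the differences read side by side (the values are illustrative):

locals {
  service_settings_by_lifecycle = {
    dev = {
      sku_name                   = "standard"
      soft_delete_retention_days = 7
      purge_protection_enabled   = false
    }
    prod = {
      sku_name                   = "premium"
      soft_delete_retention_days = 90
      purge_protection_enabled   = true
    }
  }
}

A module call can then pick the right entry with local.service_settings_by_lifecycle[var.context.lifecycle].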

There are several ways of varying deployment sizes by lifecycle. We currently favor this approach, as you get to see explicitly what changes between each lifecycle.

Using the Output of one Module as an Input to Another

Knowing that one module’s output is going to be used in other places, we often package the relevant outputs into an object. This means less code for the caller, who doesn’t need to call one module and then gather up all the necessary variables to call the next. Think of object-oriented code: you call one function to allocate an object and another to act on that object, and you don’t want the glue code to know a lot about what is in the object. We developed this pattern later in the development of the module library, so it’s not in all the early modules, but it is working well in our later code.
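A minimal sketch of the consuming side of this pattern, assuming an observability object with two fields (the field names are illustrative):

# One object-typed variable instead of many scalar parameters.
variable "observability" {
  description = "Logging identifiers packaged by the observability module."
  type = object({
    instrumentation_key        = string
    log_analytics_workspace_id = string
  })
}

The caller then writes a single line, observability = module.core.observability, rather than wiring each field individually.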

Getting to work:

In our last blog we allocated an observability package and created a context package. We created these two objects with their settings and created outputs to expose them to the next module in line.

observability.tf
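A hedged reconstruction of that packaging; the resource names azurerm_application_insights.main and azurerm_log_analytics_workspace.main are assumptions:

output "observability" {
  description = "Logging identifiers packaged for downstream modules."
  # instrumentation_key is sensitive, so the whole output is marked sensitive.
  sensitive = true
  value = {
    instrumentation_key        = azurerm_application_insights.main.instrumentation_key
    log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
  }
}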

Observability settings being packaged for the next user, from Blog 2, where we created the landing_zone framework.

Adding an Azure KeyVault to the landing zone created in Blog 2

One note: in these demos we are using the prior demo as a module. In a more real-world scenario you might do this, or you could import the state from a shared location.

The code to create the KeyVault is pretty small as we build on the prior work we did in 01-coreinfra.
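A sketch of its shape, with module paths and names assumed from the repo layout in the deploy steps below:

# Call the prior blog's code to stand up the core infrastructure.
module "core" {
  source = "../01-coreinfra"
}

# Pass the packaged objects straight through to the keyvault module.
module "keyvault" {
  source = "../../../modules/keyvault" # path is an assumption

  context          = module.core.context
  observability    = module.core.observability
  service_settings = local.service_settings_by_lifecycle["dev"]
}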

This is where the rubber meets the road. The first module block calls the prior blog’s code to create the core infrastructure.

In the keyvault module call below it, we pass the context and observability objects from one module to the other.

Deploying it:

You should terraform destroy the 01-coreinfra project before running this, as this module will deploy all of the resources fresh.

cd terraform-azurerm-samples/samples/scenarios/01-coreinfra
terraform destroy
cd ../02-keyvault
terraform init
terraform apply

Verifying outputs:

In the resource group myapp-dev-eastus you can see the key vault and the necessary logging bits.

If you go to the KeyVault’s Diagnostic settings, you will see they are set up.

And Security Center reports on the resource.

Thank you for joining. In this blog we built upon the foundation work of the prior blogs, and hopefully everybody following along was able to deploy an Azure Key Vault successfully.


Steve Dillon

Cloud Architect and Automation specialist. Specializing in AWS, HashiCorp, and DevOps.