Creating Naming Prefixes in Terraform with Python

Posted by John Q. Martin on Tue, Apr 9, 2024

In my recent post Automating Resource Naming with Terraform Locals I referred to the Microsoft Cloud Adoption Framework (CAF) naming convention and how I used Terraform locals and expressions to build resource names. Part of that was establishing a locals map of resource keys and values which are then used in the expression. Given the number of resources that exist, this map can be a chore to create and then maintain, so I have written a small script to automate the task. In this post I will demonstrate how we can use Python to generate a text file which does the heavy lifting, so that we can copy/paste the output or extend the script from there to do more if needed.

Resources

You can find the Python script over in my GitHub repository here.

I’m going to assume that you have a Python environment set up which you can use for following along or running the script.

The Process

There are three parts to the script I have written. The first is the setup: importing the libraries we need, setting variables, and scraping the CAF naming web page. The second section transforms the raw tables of content from the page into a format we can use by performing some string manipulation. Finally, we construct the output and write the data out to a file we can incorporate into our Terraform projects.

Setup

The libraries we are going to use are Pandas, re, and argparse.

To do this we use import statements at the top of the script file; in the case of Pandas we alias it as pd so we can refer to it elsewhere with the shorthand rather than the full module name.

import pandas as pd
import re
import argparse

The input parameters for the script are the output filename with path, and the CAF web page to be scraped. The latter parameter is optional; the current URL is provided as a default but can be overridden should it ever change. We take these inputs and assign them to internal variables for use within the script.

parser = argparse.ArgumentParser()
parser.add_argument("outputFile",
                    help="The path and filename for the output file.",
                    type=str)

parser.add_argument("--sourceUri", "-s",
                    help="Source URI for Microsoft CAF naming page.",
                    default="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-abbreviations")

args = parser.parse_args()

filePath = args.outputFile
cafURI = args.sourceUri
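As a quick usage sketch, running the script looks something like the line below. The script filename here is hypothetical, so substitute whatever you have named the file from the repository; the optional -s/--sourceUri flag is only needed if the CAF page ever moves.

python create_naming_prefixes.py naming.tf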

Now we can use Pandas to read the HTML on the CAF page and load the tables of data there into a list of DataFrames for us to access as we need. Because all of the tables within the Microsoft documentation have the same structure, we can concatenate these multiple DataFrames into a single DataFrame, which we then sort by resource name in alphabetical order.

# Pull all of the tables from the Azure CAF into DataFrames.
DataFrames = pd.read_html(cafURI)

# Concatenate all of the DataFrames into a single DataFrame.
df = pd.concat(DataFrames)
df = df.sort_values('Resource')

Processing the Data

Now that we have a single, sorted DataFrame we can start cleaning it up and making it a little easier to use in our Terraform code. The process here is to instantiate a list object which we populate with strings that we write out later. We load this list by iterating over the content of the DataFrame; while doing this we remove non-alphanumeric characters and swap spaces for underscores. At the same time we merge the two DataFrame columns into a single string, including the formatting we need for the key-value pairs in the locals we will be creating.

Additionally, we are going to strip out any rows where a “<” character is present, which Microsoft uses to denote an abbreviation that is subjective and dependent on an environment-specific situation.

# Loop through all the rows in our DataFrame to create the strings we need for naming.
## Replace all of the spaces with underscores in the Resource column.
## Add these to a list object we can work with.
nameList = []
for row in df.itertuples():
    nameList.append(f'{re.sub(r"__","_",re.sub(r"[^a-zA-Z0-9_]","",row.Resource.replace(" ", "_")))} = "{row.Abbreviation}"')

# Strip out strings containing a < character, which Microsoft uses for environment-specific abbreviations.
nameList = list(filter(lambda x: '<' not in x, nameList))

Generating the Output File

Now we are in a position to write the data out into the file for us to use with Terraform. Using the “with open()” construct we can create the file, write to it, and have it automatically closed once we have finished with it. We are going to use the “w” flag when we open the file, which means we are writing to it; this will also truncate the target file if it already exists.

We explicitly write lines to the file, including some comments as well as the code structure to open and close out the locals definition. The only formatting we apply here is to make all of the strings lowercase; this is a personal preference of mine, so feel free to adjust it as you need. The structural formatting of the Terraform locals will be handled by the terraform fmt command, which I run on a project directory as part of my workflow when pushing to source control.

# Write out the scaffolding for our naming.tf file which will be populated with the Azure abbreviations.
with open(filePath, 'w') as file:
    # Write out the top lines of the locals for the list of naming abbreviations.
    ## We want structure, not formatting; we will use the terraform fmt command to clean up the file once we finish writing it.
    file.write("# Contains all constants for resource naming for this Terraform solution\n")
    file.write(f"# See: {cafURI}\n")
    file.write("locals {\n")
    file.write("azNaming = {\n")

    # Write out the content of the name list.
    for s in nameList:
        file.write(str.lower(s) + '\n')

    # Close out the locals.
    file.write("}\n}\n")

Now we have a file containing a locals block of naming prefixes which we can reference when defining our resources, as I described in my recent blog post on the topic.
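To give a sense of the end result, the generated file should look roughly like the snippet below once terraform fmt has run over it. The abbreviations shown are just a few illustrative entries from the CAF page, and the resource at the end is a hypothetical example of referencing one of the values.

# Contains all constants for resource naming for this Terraform solution
locals {
  azNaming = {
    resource_group  = "rg"
    storage_account = "st"
    virtual_network = "vnet"
  }
}

# Referencing an abbreviation when building a resource name.
resource "azurerm_resource_group" "example" {
  name     = "${local.azNaming.resource_group}-myapp-prod-001"
  location = "uksouth"
}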

Summary

Thanks for coming along on this little trip into automating a task which can take time and has room for error if we build and maintain it manually. While this has focused on the Azure naming prefixes, the fundamental process is the same for AWS, GCP, or other providers who have a prescriptive naming recommendation for resources.

It would be great to hear if this is useful to you, or if you have done something similar to automate a small task which is infrequent but has the potential for errors if it were to be completed manually.

Thanks for your time.

/JQ


