Using Packer and Terraform to Deploy a Windows Server 2016 AMI on AWS - Part II

 12 Mar 2019 

In my previous article, I looked at using WinRM to build an Amazon Machine Image (AMI) with Packer. I also covered how to install custom applications using Chocolatey and overcome connectivity issues with Remote Desktop Protocol (RDP).

At this point, I have a working AMI in AWS that I can use to spawn multiple copies on demand. The next step is to automate the provisioning of the server so that it can be predictably provisioned and cleanly destroyed.

My one requirement was to keep the infrastructure configuration as code (IaC) in source control. For this, I decided to use Terraform from HashiCorp, a powerful tool for codifying, deploying and tearing down infrastructure.

I wrote three files to break up my Terraform script for deploying infrastructure:

  • Main.tf - The core of my server and configuration.
  • Variables.tf - All the variables that Main.tf uses. I can change a value in this file without needing to look through Main.tf.
  • Output.tf - Specifies any outputs that I need. More on this later.

To let Terraform know that I want to provision to AWS, I add the following code:

provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}

I use the ${var.aws_region} variable to reference my AWS region in Variables.tf, and ${var.aws_profile} to reference my AWS Command Line Interface (CLI) named profile.

The CLI named profile makes it easier to work with multiple AWS accounts, such as one for development, another for production, and so on. A profile is easily set up using aws configure --profile prod.
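
A minimal sketch of the relevant part of Variables.tf might look like the following (the default values shown here are placeholders, not my actual settings):

variable "aws_region" {
  description = "AWS region to deploy into"
  default     = "ap-southeast-2"
}
variable "aws_profile" {
  description = "Named AWS CLI profile to use"
  default     = "dev"
}
variable "name_tag" {
  description = "Name tag used to look up the VPC and security group"
  default     = "MyProject"
}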

The correct subnet to deploy needs to be dynamically identified. This can be done using tags set up in the AWS Virtual Private Cloud (VPC).

# --- Get VPC ID ---
data "aws_vpc" "selected" {
  tags = {
    Name = "${var.name_tag}"
  }
}

# --- Get public subnet list ---
data "aws_subnet_ids" "selected" {
  vpc_id = "${data.aws_vpc.selected.id}"
  tags = {
    Tier = "public"
  }
}

The data "aws_vpc" resource in Terraform identifies the VPC for the target environment (for example Prod or Staging) by its Name tag. The public subnets are then pulled out using the interpolation syntax in the data "aws_subnet_ids" resource.

A separate data resource is used to identify an already existing security group that has been defined. To keep everything contained for my project, I created my own specific security group for this:

data "aws_security_group" "selected" {
  tags = {
    Name = "${var.name_tag}*"
  }
}

Before defining my Elastic Compute Cloud (EC2) instance, I need to make sure I can use RDP to connect to it. I achieved this in Part 1 with a user data script that the EC2 instance runs at boot:

data "template_file" "user_data" {
  # Load the PowerShell script from Part 1 that enables RDP connectivity at boot
  template = "${file("${path.module}/scripts/user_data.ps1")}"
}

Dynamically creating and storing key pairs on S3

As part of this project, I wanted to dynamically create, register, and store the instance key pairs on Amazon’s Simple Storage Service (S3) for later use. This required a bit of research to find a clean solution that would store them on S3.

I finally discovered a great Terraform module at Cloud Posse that would do this for me:

module "ssh_key_pair" {
  source                = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=master"
  namespace             = "example"
  stage                 = "dev"
  name                  = "${var.key_name}"
  ssh_public_key_path   = "${path.module}/secret"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
}
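
The module generates the key pair locally under the secret folder and exposes outputs such as key_name and private_key_filename, which I reference later when launching the instance and decrypting the Administrator password.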
 

To store the new keys in the S3 bucket, I decided to use Terraform’s local-exec provisioner. The keys are copied to the S3 bucket using the AWS CLI:

# --- Copy ssh keys to S3 bucket ---
provisioner "local-exec" {
  command = "aws s3 cp ${path.module}/secret s3://PATHTOKEYPAIR/ --recursive"
}

# --- Delete keys on destroy ---
provisioner "local-exec" {
  when    = "destroy"
  command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pem"
}

provisioner "local-exec" {
  when    = "destroy"
  command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pub"
}

The first provisioner copies both keys from the path given in the ssh_public_key_path argument of the "ssh_key_pair" module to my S3 bucket using the AWS CLI.

The last two provisioners remove the keys when terraform destroy is run. This is done by adding when = "destroy" to the local-exec provisioners on the aws_instance resource.

Next, the EC2 instance needs to be configured with the AMI we created using the Packer script in Part 1. To do this, we look up the AMI with the data "aws_ami" resource in Terraform and filter to find the right image:

data "aws_ami" "Windows_2016" {
  filter {
    name   = "is-public"
    values = ["false"]
  }

  filter {
    name   = "name"
    values = ["windows2016Server*"]
  }

  most_recent = true
}

With the AMI defined, we can use it when creating the EC2 instance via ${data.aws_ami.Windows_2016.image_id}, where image_id is an attribute of that data source. Add a few variables and the Windows Server 2016 instance will be ready to deploy:

resource "aws_instance" "this" {
  ami                  = "${data.aws_ami.Windows_2016.image_id}"
  instance_type        = "${var.instance}"
  key_name             = "${module.ssh_key_pair.key_name}"
  subnet_id            = "${data.aws_subnet_ids.selected.ids[0]}"
  security_groups      = ["${data.aws_security_group.selected.id}"]
  user_data            = "${data.template_file.user_data.rendered}"
  iam_instance_profile = "${var.iam_role}"
  get_password_data    = "true"

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Name" = "NEW_windows2016"
    "Role" = "Dev"
  }
}

Automate the decryption of the Admin password

When RDPing onto the server, I currently need to log in to the AWS console every time to decrypt the admin password. The solution is to automate the password decryption so that it is presented to me as an output of the Terraform run.

The first step is to fetch the encrypted password from the server. Terraform can do this when the get_password_data argument on the instance is set to true.

However, the password data comes back base64-encoded and encrypted with the key pair, so it is of no direct use. Terraform's rsadecrypt function can be used in the output in my Output.tf to decrypt the generated password and present it in a human-readable format:

output "Administrator_Password" {
  value = "${rsadecrypt(aws_instance.this.password_data, file("${module.ssh_key_pair.private_key_filename}"))}"
}

Now when I run terraform apply I get:

Administrator_Password = XXADMIN PASSWORDXXX

With terraform apply, the admin password is output without needing to log in to the AWS console.
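
If I need it again later, the same value can be retrieved with terraform output Administrator_Password (the output name defined above), as long as the Terraform state is still available.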

Final thoughts

Automating the build of Windows Server 2016 was not as straightforward as I initially thought. I had to overcome build issues in Packer and deployment challenges with Terraform before I could integrate the solution into a continuous integration (CI) process.

Hopefully, the steps I outlined in these two articles will help anyone who finds themselves in the same situation as I was. The Terraform and Packer scripts are still code and need to be validated after every update before being released into production, so adding these tests to our CI server will ensure that feedback comes early and often.
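
For example, running packer validate against the template and terraform validate (followed by a terraform plan) as an early CI stage is a cheap way to catch syntax and configuration errors before anything is deployed.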

My next project will be to write a Behaviour Driven Development (BDD) framework that will sit on top of my Terratest scripts. Terratest is a Go library from the Gruntwork team for writing automated tests for infrastructure code.
