In my last two articles, I made the conceptual and theoretical case for dedicated testers in DevOps.
Part I: Does DevOps Need Dedicated Testers?
Part II: 2019 Cloud Breaches Prove DevOps Needs Dedicated Testers
In this article, I will provide practical examples of DevOps unit testing.
Since misconfigured public cloud storage is a common problem, I will begin with an example unit test for a Terraform project that creates a simple S3 bucket.
First, we need to install LocalStack so that we can test AWS services locally.
pip install localstack
export SERVICES=s3
export DEFAULT_REGION='us-east-1'
localstack start
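Before moving on, it is worth confirming that the local S3 endpoint is actually up. A minimal smoke check (a sketch, assuming LocalStack's legacy S3 port 4572 and the same mock credentials used in provider.tf below) lists the buckets, which should be empty on a fresh start:

import boto3

# Smoke check: confirm the LocalStack S3 endpoint responds before
# running Terraform against it. The endpoint and mock credentials
# mirror the provider.tf settings shown below.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4572',
    region_name='us-east-1',
    aws_access_key_id='mock_access_key',
    aws_secret_access_key='mock_secret_key',
)

print(s3.list_buckets()['Buckets'])  # expect [] on a fresh LocalStack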
In a new terminal and a new directory, create a simple Terraform project. The provider.tf file should point to the LocalStack ports (4572 is the legacy LocalStack S3 port; newer LocalStack releases expose all services on the single edge port 4566).
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_metadata_api_check = true
s3_force_path_style = true
skip_requesting_account_id = true
skip_get_ec2_platforms = true
access_key = "mock_access_key"
secret_key = "mock_secret_key"
endpoints {
s3 = "http://localhost:4572"
}
}
resource "aws_s3_bucket" "b" {
bucket = "test"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Deploy the Terraform project, starting with a plan.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_s3_bucket.b: Refreshing state... [id=test]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "test"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Dev"
          + "Name"        = "My bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
$ terraform apply
aws_s3_bucket.b: Refreshing state... [id=test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "test"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Dev"
          + "Name"        = "My bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions in workspace "kitchen-terraform-base-aws"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.b: Creating...
aws_s3_bucket.b: Creation complete after 0s [id=test]
Create a test.py file with the following code to test the deployment of the S3 bucket.
import boto3


def test_s3_bucket_creation():
    s3 = boto3.client(
        's3',
        endpoint_url='http://localhost:4572',
        region_name='us-east-1'
    )

    # Call S3 to list current buckets
    response = s3.list_buckets()

    # Get a list of all bucket names from the response
    buckets = [bucket['Name'] for bucket in response['Buckets']]

    assert len(buckets) == 1
Run pytest to verify that the bucket was created.
$ pytest test.py
=============================================================== test session starts ===============================================================
platform darwin -- Python 3.6.0, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /private/tmp/myterraform/tests/test/fixtures
plugins: localstack-0.4.1
collected 1 item
test.py .
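Asserting only that exactly one bucket exists is fragile. A slightly stronger sketch (hypothetical, and assuming your LocalStack version implements the GetBucketAcl call) checks the bucket by name and verifies that no ACL grant opens it to all users:

import boto3


def test_s3_bucket_is_private():
    # Same LocalStack endpoint used in test.py above.
    s3 = boto3.client(
        's3',
        endpoint_url='http://localhost:4572',
        region_name='us-east-1'
    )

    # Assert the specific bucket exists, not just that one bucket exists.
    names = [b['Name'] for b in s3.list_buckets()['Buckets']]
    assert 'test' in names

    # Assert no grant opens the bucket to AllUsers (i.e., it is not public).
    acl = s3.get_bucket_acl(Bucket='test')
    all_users = 'http://acs.amazonaws.com/groups/global/AllUsers'
    grantee_uris = [g['Grantee'].get('URI') for g in acl['Grants']]
    assert all_users not in grantee_uris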
Now, let’s destroy the S3 bucket.
$ terraform destroy
aws_s3_bucket.b: Refreshing state... [id=test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_s3_bucket.b will be destroyed
  - resource "aws_s3_bucket" "b" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::test" -> null
      - bucket                      = "test" -> null
      - bucket_domain_name          = "test.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "test.s3.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "Z3AQBSTGFYJSTF" -> null
      - id                          = "test" -> null
      - region                      = "us-east-1" -> null
      - tags                        = {
          - "Environment" = "Dev"
          - "Name"        = "My bucket"
        } -> null

      - object_lock_configuration {
        }

      - replication_configuration {
        }

      - server_side_encryption_configuration {
        }

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_s3_bucket.b: Destroying... [id=test]
aws_s3_bucket.b: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.
Next, we will install the terraform-compliance Python module.
pip install terraform-compliance
Next, we will set up a features directory for our compliance tests, and inside it create a file named s3.feature with the following content.
Feature: test
  In order to make sure the s3 bucket is secure:

  Scenario: No public read
    Given I have AWS S3 Bucket defined
    When it contains acl
    Then its value must not match the "public-read" regex
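The same feature file can carry additional rules. For example, a hypothetical scenario (an untested sketch using terraform-compliance's built-in "it must contain" step) that requires every bucket to be tagged:

  Scenario: Buckets must be tagged
    Given I have AWS S3 Bucket defined
    Then it must contain tags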
Now, we will return to the project's root directory and run a Terraform plan to get the plan's output in JSON format.
terraform plan -out=myout
terraform show -json myout > myout.json
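The JSON file written by terraform show is what terraform-compliance evaluates. If you want to inspect the planned changes yourself, a minimal sketch (assuming myout.json is in the current directory) walks the plan's resource_changes list:

import json

# Load the JSON plan written by `terraform show -json myout > myout.json`.
with open('myout.json') as f:
    plan = json.load(f)

# Each entry describes one resource and the actions Terraform will take.
for change in plan.get('resource_changes', []):
    print(change['type'], change['name'], change['change']['actions'])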
Lastly, we will test the Terraform plan against the feature file to see whether the project is compliant.
$ terraform-compliance -p /tmp/junk/myout.json -f /tmp/junk/features
terraform-compliance v1.1.7 initiated
🚩 Features : /tmp/junk/features
🚩 Plan File : /tmp/junk/myout.json
🚩 Running tests. 🎉
Feature: test  # /tmp/junk/features/s3.feature
    In order to make sure the s3 bucket is secure:

    Scenario: No public read
        Given I have AWS S3 Bucket defined
        When it contains acl
        Then its value must not match the "public-read" regex
1 features (1 passed)
1 scenarios (1 passed)
3 steps (3 passed)
As the results show, all tests passed because the deployed S3 bucket's ACL is private.
While these are basic examples, they demonstrate the concept of unit testing infrastructure as code and checking it against compliance rules.
Read the Entire DevOps Testing Series
Part I: Does DevOps Need Dedicated Testers?
Part II: 2019 Cloud Breaches Prove DevOps Needs Dedicated Testers
Part III: Practical Examples of DevOps Unit Testing
Part IV: More Complex Examples of DevOps Unit Testing