In this post, we will deploy two Ubuntu virtual machines running the Apache web server in a private subnet without a public IP address, and we will use a load balancer to publish the web service on port 80.
When we deploy a public HTTP(S) load balancer, we need to use instance groups to organize instances.
An instance group is a collection of virtual machine (VM) instances that you can manage as a single entity. GCP offers two kinds of VM instance groups, managed and unmanaged:
- Managed instance groups let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of autoscaling, autohealing, regional (multiple zones) deployment, and automatic updating. This option requires a virtual machine template.
- Unmanaged instance groups let you load balance across a fleet of VMs that you manage yourself. This option is good when you have existing VMs and you want to add a load balancer to your infrastructure.
This post will use unmanaged instance groups. Check part 3 for managed instance groups.
1. Requirements
Please refer to part 1 of this tutorial for the requirements (Section 1, Requirements).
2. The Provider
The provider is the section of the Terraform script that establishes the connection to GCP. The Terraform provider looks like this:
# setup the GCP provider
terraform {
  required_version = ">= 0.12"
}

provider "google" {
  project     = "my-gcp-project"
  credentials = file("kopicloud-tfadmin.json")
  region      = "europe-west1"
  zone        = "europe-west1-b"
}
To simplify the management of variables and help with code reusability, we will move the variables out of the provider file. It is recommended to keep variable definitions in the variables.tf file:
# define the GCP authentication file
variable "gcp_auth_file" {
  type        = string
  description = "GCP authentication file"
}

# define GCP project name
variable "app_project" {
  type        = string
  description = "GCP project name"
}

# define GCP region
variable "gcp_region_1" {
  type        = string
  description = "GCP region"
}

# define GCP zone
variable "gcp_zone_1" {
  type        = string
  description = "GCP zone"
}

# define private subnet
variable "private_subnet_cidr_1" {
  type        = string
  description = "private subnet CIDR 1"
}
Create a file terraform.tfvars with your GCP settings.
# GCP Settings
gcp_region_1  = "europe-west1"
gcp_zone_1    = "europe-west1-b"
gcp_auth_file = "../auth/kopicloud-tfadmin.json"

# GCP Network
private_subnet_cidr_1 = "10.10.1.0/24"
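Note that the resources later in this post also reference var.app_name and var.app_domain, which are not declared in the snippet above, and the terraform.tfvars example does not set a value for app_project. If you are not cloning the full repository, declare and set them along the same lines. A minimal sketch with placeholder values you should replace with your own:

# define application name (used as a prefix for resource names)
variable "app_name" {
  type        = string
  description = "application name"
}

# define application DNS domain (used for VM hostnames)
variable "app_domain" {
  type        = string
  description = "application domain"
}

# example terraform.tfvars values (placeholders)
app_project = "my-gcp-project"
app_name    = "kopicloud"
app_domain  = "kopicloud.local"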
Update the provider section, usually in your main.tf or provider.tf file, to use the GCP variables defined above. In this case, we will also need some features that are only available in the google-beta provider.
# setup the GCP provider | provider.tf
terraform {
  required_version = ">= 0.12"
}

provider "google" {
  project     = var.app_project
  credentials = file(var.gcp_auth_file)
  region      = var.gcp_region_1
  zone        = var.gcp_zone_1
}

provider "google-beta" {
  project     = var.app_project
  credentials = file(var.gcp_auth_file)
  region      = var.gcp_region_1
  zone        = var.gcp_zone_1
}
3. Configure the Network
Create a network.tf file to store the network code.
The first step is to create the VPC:
# create VPC
resource "google_compute_network" "vpc" {
  name                    = "${var.app_name}-vpc"
  auto_create_subnetworks = "false"
  routing_mode            = "GLOBAL"
}
Then we define the private subnet:
# create private subnet
resource "google_compute_subnetwork" "private_subnet_1" {
  provider      = google-beta
  purpose       = "PRIVATE"
  name          = "${var.app_name}-private-subnet-1"
  ip_cidr_range = var.private_subnet_cidr_1
  network       = google_compute_network.vpc.name
  region        = var.gcp_region_1
}
The purpose setting is currently in beta, which is why we need to add the provider = google-beta line to this resource.
The purpose can be either INTERNAL_HTTPS_LOAD_BALANCER or PRIVATE.
A subnetwork with purpose set to INTERNAL_HTTPS_LOAD_BALANCER is a user-created subnetwork that is reserved for Internal HTTP(S) Load Balancing.
If set to INTERNAL_HTTPS_LOAD_BALANCER you must also set the role. The value of the role can be set to ACTIVE or BACKUP.
An ACTIVE subnetwork is one that is currently being used for Internal HTTP(S) Load Balancing. A BACKUP subnetwork is one that is ready to be promoted to ACTIVE or is currently draining.
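We do not need such a subnetwork in this tutorial, because our load balancer is external, but as an illustration only, a proxy-only subnetwork reserved for Internal HTTP(S) Load Balancing would look roughly like this (the name and CIDR below are placeholders):

# illustrative only: subnet reserved for Internal HTTP(S) Load Balancing
resource "google_compute_subnetwork" "ilb_proxy_subnet" {
  provider      = google-beta
  name          = "${var.app_name}-ilb-proxy-subnet"
  purpose       = "INTERNAL_HTTPS_LOAD_BALANCER"
  role          = "ACTIVE"
  ip_cidr_range = "10.10.100.0/24"
  network       = google_compute_network.vpc.name
  region        = var.gcp_region_1
}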
4. Create the NAT Router (Google Cloud Router)
We will add the following code to network.tf to build a Google Cloud Router and a Cloud NAT gateway, allowing the internal machines to access the internet:
# create a public ip for the nat service
resource "google_compute_address" "nat-ip" {
  name    = "${var.app_name}-nat-ip"
  project = var.app_project
  region  = var.gcp_region_1
}

# create a nat router to allow private instances to connect to the internet
resource "google_compute_router" "nat-router" {
  name    = "${var.app_name}-nat-router"
  network = google_compute_network.vpc.name
}

resource "google_compute_router_nat" "nat-gateway" {
  name                               = "${var.app_name}-nat-gateway"
  router                             = google_compute_router.nat-router.name
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = [google_compute_address.nat-ip.self_link]
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
  depends_on                         = [google_compute_address.nat-ip]
}

output "nat_ip_address" {
  value = google_compute_address.nat-ip.address
}
5. Define Firewall Rules
Create a network-firewall.tf file and add the most common firewall rules to it:
# allow http traffic
resource "google_compute_firewall" "allow-http" {
  name    = "${var.app_name}-fw-allow-http"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  target_tags = ["http"]
}

# allow https traffic
resource "google_compute_firewall" "allow-https" {
  name    = "${var.app_name}-fw-allow-https"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  target_tags = ["https"]
}

# allow ssh traffic
resource "google_compute_firewall" "allow-ssh" {
  name    = "${var.app_name}-fw-allow-ssh"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  target_tags = ["ssh"]
}

# allow rdp traffic
resource "google_compute_firewall" "allow-rdp" {
  name    = "${var.app_name}-fw-allow-rdp"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }

  target_tags = ["rdp"]
}
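Note that none of these rules sets source_ranges, so GCP applies its default ingress source of 0.0.0.0/0, meaning tagged instances accept this traffic from any address. If you want to restrict management traffic such as SSH, you can add a source_ranges argument; a minimal sketch, with a placeholder CIDR you should replace with your own office or VPN range:

# allow ssh traffic only from a trusted range (placeholder CIDR)
resource "google_compute_firewall" "allow-ssh-restricted" {
  name    = "${var.app_name}-fw-allow-ssh-restricted"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["203.0.113.0/24"]
  target_tags   = ["ssh"]
}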
6. Create Virtual Machines
The next step is to create two Ubuntu virtual machines running Apache in the private subnet. Create a new vm.tf file and add the following code:
# Create Google Cloud VMs | vm.tf

# Create web server #1
resource "google_compute_instance" "web_private_1" {
  name         = "${var.app_name}-vm1"
  machine_type = "f1-micro"
  zone         = var.gcp_zone_1
  hostname     = "${var.app_name}-vm1.${var.app_domain}"
  tags         = ["ssh", "http"]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  metadata_startup_script = "sudo apt-get update; sudo apt-get install -yq build-essential apache2"

  network_interface {
    network    = google_compute_network.vpc.name
    subnetwork = google_compute_subnetwork.private_subnet_1.name
  }
}

# Create web server #2
resource "google_compute_instance" "web_private_2" {
  name         = "${var.app_name}-vm2"
  machine_type = "f1-micro"
  zone         = var.gcp_zone_1
  hostname     = "${var.app_name}-vm2.${var.app_domain}"
  tags         = ["ssh", "http"]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  metadata_startup_script = "sudo apt-get update; sudo apt-get install -yq build-essential apache2"

  network_interface {
    network    = google_compute_network.vpc.name
    subnetwork = google_compute_subnetwork.private_subnet_1.name
  }
}
Note: to create virtual machines with private IP only, don’t include the line access_config { } in the network interface section.
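For comparison only (the VMs in this tutorial stay private), if you did want a VM to receive an ephemeral public IP, its network_interface block would include an empty access_config block, roughly like this:

  # illustrative only: adding access_config {} gives the VM an ephemeral public IP
  network_interface {
    network    = google_compute_network.vpc.name
    subnetwork = google_compute_subnetwork.private_subnet_1.name

    access_config {
      # leave empty to get an ephemeral external IP
    }
  }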
If you want to deploy another OS, take a look at the following link and update the image: https://cloud.google.com/compute/docs/images
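If you have the gcloud CLI installed, you can also list the available public images directly and pick the PROJECT/FAMILY pair to use in the image argument, for example:

gcloud compute images list | grep ubuntu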
The tags are used to attach the firewall rules defined in the network-firewall.tf file. In this case, we allow SSH so we can manage the servers and HTTP so Apache can serve web pages.
The metadata_startup_script is used to bootstrap the installation of applications or configure settings on the server.
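For anything longer than a one-liner, it is usually cleaner to keep the startup script in its own file and load it with Terraform's file() function; a minimal sketch, assuming a scripts/install-apache.sh file in your repository (the path is an assumption):

  # load the startup script from a separate file (path is an example)
  metadata_startup_script = file("scripts/install-apache.sh")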
Create a new file called vm-output.tf to display the output of VMs:
# Virtual machine output | vm-output.tf
output "web-1-name" {
  value = google_compute_instance.web_private_1.name
}

output "web-1-internal-ip" {
  value = google_compute_instance.web_private_1.network_interface.0.network_ip
}

output "web-2-name" {
  value = google_compute_instance.web_private_2.name
}

output "web-2-internal-ip" {
  value = google_compute_instance.web_private_2.network_interface.0.network_ip
}
7. Create the Load Balancer
The final step is to create and configure a load balancer that allows the private web server VMs in the unmanaged instance group to publish content to the internet. To create the load balancer, you will need the following resources:
- google_compute_global_forwarding_rule → used to forward traffic to the correct load balancer for HTTP load balancing.
- google_compute_target_http_proxy → used by one or more global forwarding rules to route incoming HTTP requests to a URL map
- google_compute_backend_service → defines a group of virtual machines that will serve traffic for load balancing
- google_compute_instance_group → creates a group of dissimilar virtual machine instances
- google_compute_health_check → determines whether instances are responsive and able to do work
- google_compute_url_map → used to route requests to a backend service based on rules that you define for the host and path of an incoming URL
Create an lb-unmanaged.tf file and add the following code:
# Load balancer with unmanaged instance group | lb-unmanaged.tf

# used to forward traffic to the correct load balancer for HTTP load balancing
resource "google_compute_global_forwarding_rule" "global_forwarding_rule" {
  name       = "${var.app_name}-global-forwarding-rule"
  project    = var.app_project
  target     = google_compute_target_http_proxy.target_http_proxy.self_link
  port_range = "80"
}

# used by one or more global forwarding rules to route incoming HTTP requests to a URL map
resource "google_compute_target_http_proxy" "target_http_proxy" {
  name    = "${var.app_name}-proxy"
  project = var.app_project
  url_map = google_compute_url_map.url_map.self_link
}

# defines a group of virtual machines that will serve traffic for load balancing
resource "google_compute_backend_service" "backend_service" {
  name          = "${var.app_name}-backend-service"
  project       = var.app_project
  port_name     = "http"
  protocol      = "HTTP"
  health_checks = [google_compute_health_check.healthcheck.self_link]

  backend {
    group                 = google_compute_instance_group.web_private_group.self_link
    balancing_mode        = "RATE"
    max_rate_per_instance = 100
  }
}

# creates a group of dissimilar virtual machine instances
resource "google_compute_instance_group" "web_private_group" {
  name        = "${var.app_name}-vm-group"
  description = "Web servers instance group"
  zone        = var.gcp_zone_1

  instances = [
    google_compute_instance.web_private_1.self_link,
    google_compute_instance.web_private_2.self_link
  ]

  named_port {
    name = "http"
    port = "80"
  }
}

# determines whether instances are responsive and able to do work
resource "google_compute_health_check" "healthcheck" {
  name               = "${var.app_name}-healthcheck"
  timeout_sec        = 1
  check_interval_sec = 1

  http_health_check {
    port = 80
  }
}

# used to route requests to a backend service based on rules that you define for the host and path of an incoming URL
resource "google_compute_url_map" "url_map" {
  name            = "${var.app_name}-load-balancer"
  project         = var.app_project
  default_service = google_compute_backend_service.backend_service.self_link
}

# show external ip address of load balancer
output "load-balancer-ip-address" {
  value = google_compute_global_forwarding_rule.global_forwarding_rule.ip_address
}
8. Build the Infrastructure using Terraform
Clone the code from my GitHub repository:
git clone https://github.com/guillermo-musumeci/terraform-gcp-single-region-private-lb-unmanaged
Then execute the Terraform scripts to build the infrastructure.
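The standard Terraform workflow applies; a minimal example, run from the cloned folder (the plan file name is arbitrary):

terraform init
terraform plan -out kopicloud.plan
terraform apply "kopicloud.plan"

When the apply finishes, Terraform prints the outputs defined earlier; you can test the deployment by browsing to the load-balancer-ip-address value on port 80 (it may take a few minutes for the load balancer and its health checks to become ready).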