Deploy a Nomad and Consul cluster with Terraform

Nomad is a cluster manager and scheduler for deploying and managing containerized and non-containerized applications across a fleet of servers. It is a great fit for Latitude.sh because it requires fewer resources than Kubernetes and is simpler to operate.

Consul is a service networking tool that provides service discovery and a service mesh for Nomad. It is a great companion on Latitude.sh because it gives applications a simple way to connect to each other and to the outside world.

If you are not familiar with Terraform, read Using Terraform with Latitude.sh first.

Prerequisites

This guide walks you through deploying a Nomad cluster with Consul for networking on Latitude.sh servers. By default it creates a Nomad cluster with one server and one client, deploys Consul on the same machines, and connects the two clusters together.

This guide uses Ubuntu 22.04 LTS. Not all commands are guaranteed to work on other operating systems.

Step 1: Download the contents of the repository

The terraform-nomad-consul directory in the latitude.sh/examples repository is the easiest way to get started. It contains all of the files required to deploy and set up the cluster.
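
If the examples repository is hosted on GitHub, a quick way to fetch the files is to clone it and change into the example directory. The URL below assumes the repository lives at github.com/latitudesh/examples; adjust it if yours is hosted elsewhere.

# Clone the examples repository (URL assumed) and enter the Nomad/Consul example.
git clone https://github.com/latitudesh/examples.git
cd examples/terraform-nomad-consul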

After copying the files to your local working directory, open a terminal, export your Latitude.sh API key as an environment variable, and initialize Terraform:

export LATITUDESH_AUTH_TOKEN=<your-api-key>
terraform init

Step 2: Set up variables

The variables.tf file contains all of the variables used in the Terraform configuration. Customize your deployment by replacing the default values with your own.

Variables that you will need to change:

  • project_id: Go to the dashboard, select the project you want to deploy the cluster on, and click on Project settings. Copy the ID from the top-right corner of the page.
  • plan: The slug of the plan you want to use. The slug is exactly like the plan name, but uses hyphens instead of dots. To use the c3.small.x86 plan, add c3-small-x86 as the plan variable value.
  • nomad_server_count and nomad_client_count: The number of Nomad servers and clients you want to deploy. The default values are 1 for both. For production environments, it is recommended to use at least 3 servers and 3 clients.
  • nomad_region: The Latitude.sh location your cluster will be deployed to. The value is the location's slug. Get the slug from the List all regions API endpoint.
    • Make sure the plan you selected is available in that location. You can easily check this on the server create page in the dashboard.
  • nomad_vlan_id: Go to the Latitude.sh dashboard, click on Private networks in the sidebar, and create a VLAN in the location your cluster will be deployed to. Use the VLAN's VID as the variable value.
  • ssh_key_id: Go to the Latitude.sh dashboard, click on Project settings and SSH Keys. Copy the ID of the SSH key you want to use.
  • private_key_path: The path to the private key file on your local machine. This is the private key that matches the public key you added to the project. You only need to change this if the path to the private key is different from ~/.ssh/id_rsa.
    • Make sure the SSH key is not passphrase protected; otherwise, Terraform won't be able to connect to the servers to install the dependencies.

Save the file.
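
If you prefer not to edit variables.tf directly, Terraform also reads TF_VAR_-prefixed environment variables, which override the defaults declared in variables.tf. A minimal sketch using the variables listed above, with placeholder values you must replace with your own:

# Placeholder values; replace each one with the IDs and slugs from your dashboard.
export TF_VAR_project_id=<your-project-id>
export TF_VAR_plan=c3-small-x86
export TF_VAR_nomad_server_count=1
export TF_VAR_nomad_client_count=1
export TF_VAR_nomad_region=<your-location-slug>
export TF_VAR_nomad_vlan_id=<your-vid>
export TF_VAR_ssh_key_id=<your-ssh-key-id>
export TF_VAR_private_key_path=~/.ssh/id_rsa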

Step 3: Plan and apply your changes

When all variables are set, go back to the terminal and run:

terraform plan

Review the planned changes and, if everything looks good, run:

terraform apply

Wait for all servers to be deployed. Terraform will let you know when the process finishes.
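
If you want to double-check what was created before moving on, Terraform's built-in inspection commands help. Note that terraform output only prints something if the example defines output values, which is an assumption here.

# List every resource Terraform now manages in this workspace.
terraform state list

# Inspect the recorded state, including the attributes stored for each server.
terraform show

# Print any output values the configuration defines.
terraform output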

Step 4: Assign servers to the VLAN

On the Latitude.sh dashboard, return to the Private networks page, open the VLAN your servers were deployed to, and click the Assign button. Select the servers you want to assign to the VLAN and click Assign. You need to assign every server deployed for your cluster, both Nomad servers and Nomad clients.

Step 5: Access your cluster

Copy the public IP of one of your Nomad servers and append :4646. For example, if one of your Nomad servers' public IP is 189.1.2.3, you will access the Nomad UI at http://189.1.2.3:4646.
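
The same address also serves Nomad's HTTP API, which is a quick way to confirm cluster health from your terminal. Using the example IP above:

# Confirm that one of the servers has been elected cluster leader.
curl http://189.1.2.3:4646/v1/status/leader

# List the client nodes that have registered with the cluster.
curl http://189.1.2.3:4646/v1/nodes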

Step 6: Deploying a job

To see the Nomad cluster in action, create and execute a job. You can do this programmatically, but this guide won't cover that. Instead, use the Nomad UI to deploy your job.

This example uses a simple Docker image that answers “Hello, world!” over HTTP from a client node.

Open the Nomad interface and go to the Jobs page. Click on Run Job. You will write the job definition in HCL, HashiCorp's configuration language.

For this example, use the job script provided below:

job "hello-world-job" {
  datacenters = ["dc-DAL-1"] # Replace DAL with the actual region slug where your cluster is deployed.  
  type = "service"

  group "docker-example" {
    count = 1

    task "docker-server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo:latest"

        args = [
          "-listen",
          ":3030",
          "-text",
          "Hello, world!",
        ]
      }

      resources {
        network {
          mbits = 10

          port "http" {
            static = "3030"
          }
        }
      }
    }
  }
}
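
After you run the job, Nomad schedules the task on one of the client nodes. Assuming the static port is reachable on that client's public IP (the address 189.1.2.4 below is hypothetical), you can verify the deployment from your terminal:

# Replace 189.1.2.4 with the public IP of the client running the allocation.
curl http://189.1.2.4:3030
# Expected response: Hello, world!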

Adding new servers and clients

You can add new servers and clients to your cluster by increasing the nomad_server_count and nomad_client_count values in the variables.tf file and running terraform apply again.

  • If you add a new client, you must assign it to the VLAN after it is provisioned.
  • If you add a new server, you must edit /etc/consul.d/consul.hcl on all of the existing servers in the cluster and add the private IP of the new server, then assign the new server to the VLAN from the Latitude.sh dashboard. A sketch of the follow-up commands is shown below.
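
A sketch of those follow-up commands, run on each existing server. It assumes the setup script installs Consul as a systemd service and that the new server's private IP is added to the retry_join list in consul.hcl; adjust to match your actual configuration.

# Validate the edited configuration before restarting anything.
consul validate /etc/consul.d/

# Restart Consul so it picks up the new entry.
sudo systemctl restart consul

# Confirm that every server and client shows up as a cluster member.
consul members
nomad server members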

Troubleshooting

Servers or clients not showing up in the Nomad UI

When you access the cluster UI, you should see the number of client and server nodes you provisioned. If you do not see the correct number of either servers or clients, the private networking is most likely not working correctly.

Check that the private network is working

If you didn't change the script, your servers should have the following private IPs configured:

  • Nomad servers: Addresses start at 10.8.0.1, and the last octet matches the server number. For nomad-server-1 the IP is 10.8.0.1, for nomad-server-2 it is 10.8.0.2, and so on.
  • Nomad clients: The last octet is the client number plus 50, so for nomad-client-1 the IP is 10.8.0.51, for nomad-client-2 it is 10.8.0.52, and so on.

Go to one of the servers in your cluster and ping the private IP of another. For example, if nomad-client-1 isn't showing up in your cluster, log in to nomad-server-1 and ping 10.8.0.51. The ping should get replies; if it doesn't, the private network isn't working.
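
A quick version of that check from nomad-server-1, assuming the default addressing above:

# Ping nomad-client-1 on its private IP; replies mean the VLAN is working.
ping -c 3 10.8.0.51

# Confirm the VLAN interface and private address are configured on this host.
ip addr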

Review your settings and reach out to us if you still have issues.

Check Netplan and Consul settings

  • Go to /etc/netplan/50-cloud-init.yaml and check if the settings look correct. Read Private networking if you're unfamiliar with how Netplan should be set up.
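
A few commands that help narrow the problem down, assuming Consul runs as a systemd service and that /etc/consul.d/consul.hcl holds the private IPs of the other servers, as described earlier:

# Re-apply the Netplan configuration after correcting it.
sudo netplan apply

# Check Consul's own view of the cluster members.
consul members

# Inspect the Consul logs for bind or join errors.
journalctl -u consul -e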