terraform-equinix-gcp-interconnect

This tutorial demonstrates how to use the Terraform Equinix provider, in conjunction with the Equinix Metal and Google providers, to fully automate the process of establishing a secure, direct connection between an Equinix Metal bare metal server and Google Cloud.

After completing the tutorial you will be able to communicate between a virtual machine in GCP (a GCE instance) and a bare metal server on Equinix Metal (BMaaS platform) using private addressing.
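
For orientation, the providers involved would be declared roughly as follows, assuming the split equinix/equinix and equinix/metal providers that were current when this tutorial was written; version pins are illustrative, not taken from this repo:

    terraform {
      required_providers {
        equinix = {
          source  = "equinix/equinix"   # Equinix Fabric and Network Edge
          version = "~> 1.5"
        }
        metal = {
          source  = "equinix/metal"     # Equinix Metal (BMaaS)
          version = "~> 3.2"
        }
        google = {
          source  = "hashicorp/google"  # GCP resources
          version = "~> 4.0"
        }
      }
    }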

[Diagram: GCP to Equinix Fabric connectivity]


Requirements

  • Equinix Fabric account:
    • You can create a 45-day trial account by following this guide.
    • Permission to create Connections and Network Edge devices.
    • A Client ID and Client Secret key generated at https://developer.equinix.com/ (see the provider sketch after this list).
  • Equinix Metal account:
    • A user-level API key for the Equinix Metal API.
  • GCP account:
    • Permission to create a project, or an existing project to use.
    • Billing enabled.
    • The Compute Engine API and Cloud Deployment Manager API enabled.
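
A minimal sketch of how these credentials are typically wired into the provider blocks (variable names are illustrative, not copied from the repo):

    provider "equinix" {
      client_id     = var.equinix_client_id      # from developer.equinix.com
      client_secret = var.equinix_client_secret
    }

    provider "metal" {
      auth_token = var.metal_auth_token          # user-level Equinix Metal API key
    }

    provider "google" {
      project = var.gcp_project_id
      region  = var.gcp_region
    }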

Setup

The steps required to set up your environment are covered in the Usage section below.

Usage

A) Set up the Equinix Network Edge Virtual Device and GCP Interconnect

  1. Clone the tutorial project.

    mkdir -p $HOME/Workspace/demo-gcp-interconnect; cd $HOME/Workspace/demo-gcp-interconnect
    git clone https://github.com/palimarium/terraform-equinix-gcp-interconnect.git
  2. Enter the TF directory and use your text editor to set the required parameters. Only the variables with no default value are required; the others can be left as is. A sketch of what the file might contain follows below.

    cd terraform-equinix-gcp-interconnect
    vim terraform.tfvars
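
As a sketch, the no-default variables would look something like this in terraform.tfvars (names and values here are illustrative, not copied from the repo):

    # Equinix Fabric / Network Edge API credentials
    equinix_client_id     = "myClientId"
    equinix_client_secret = "myClientSecret"

    # GCP target project and region
    gcp_project_id = "my-gcp-project"
    gcp_region     = "europe-west3"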
    
  3. Create the terraform-runner GCP service account.

    ./tf-service-acccount-chain-setup.sh
    

[Screenshot: terraform-runner service account setup]
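
The script wraps a handful of gcloud calls; a rough Terraform equivalent of what it sets up, a dedicated service account for Terraform runs, would be the following (account name and role are illustrative):

    resource "google_service_account" "terraform_runner" {
      account_id   = "terraform-runner"
      display_name = "Terraform runner"
    }

    # Broad role for demo purposes only; scope this down in real projects
    resource "google_project_iam_member" "terraform_runner_editor" {
      project = var.gcp_project_id
      role    = "roles/editor"
      member  = "serviceAccount:${google_service_account.terraform_runner.email}"
    }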

  4. From the TF directory, run Terraform.

    terraform init
    terraform plan
    terraform apply -auto-approve
    

[Screenshot: terraform apply output for the Network Edge and Interconnect setup]
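
At a high level, this apply provisions the Network Edge virtual device on the Equinix side and a Partner Interconnect attachment with a Cloud Router on the GCP side. A minimal sketch of the GCP half (attribute values illustrative, network reference hypothetical):

    resource "google_compute_router" "interconnect_router" {
      name    = "equinix-demo-interconn-router"
      network = google_compute_network.demo.id   # hypothetical VPC resource
      region  = "europe-west3"

      bgp {
        asn = 16550   # ASN Google requires for Partner Interconnect
      }
    }

    resource "google_compute_interconnect_attachment" "partner" {
      name                     = "equinix-demo-attachment"
      type                     = "PARTNER"
      router                   = google_compute_router.interconnect_router.id
      region                   = "europe-west3"
      edge_availability_domain = "AVAILABILITY_DOMAIN_1"
    }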

B) Setup Equinix Metal

  1. Enter the tf-equinix-metal-setup directory and use your text editor to set the required parameters.

    cd tf-equinix-metal-setup
    vim terraform.tfvars
    
  2. From the tf-equinix-metal-setup directory, run Terraform.

    terraform init
    terraform plan
    terraform apply -auto-approve
    

[Screenshot: terraform apply output for the Equinix Metal setup]
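
For orientation, the Metal side typically comes down to a server plus the VLAN that later gets attached to the shared port. A minimal sketch using the equinix/metal provider, with illustrative values (the metro must match the GCP region chosen above, here Frankfurt):

    resource "metal_vlan" "demo" {
      project_id  = var.metal_project_id
      metro       = "fr"   # Frankfurt, matching europe-west3
      description = "VLAN for the GCP interconnect demo"
    }

    resource "metal_device" "demo" {
      hostname         = "demo-bms"
      plan             = "c3.small.x86"
      metro            = "fr"
      operating_system = "ubuntu_20_04"
      billing_cycle    = "hourly"
      project_id       = var.metal_project_id
    }

    # Assumes the bond0 port has been converted to hybrid mode
    resource "metal_port_vlan_attachment" "demo" {
      device_id = metal_device.demo.id
      port_name = "bond0"
      vlan_vnid = metal_vlan.demo.vxlan
    }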

C) Setup a Shared Port Connection between Equinix Metal and Equinix Fabric NE

Setting up a shared port has two components:

  1. Completing the request in the Equinix Metal console.

To request a connection in the Equinix Metal portal, open the Connections page from the IPs & Networks tab.

[Screenshot: requesting a connection in the Equinix Metal portal]

  2. Setting up the connection in Equinix Fabric.

Connections to Equinix Metal shared ports are handled through Equinix Fabric, so log in to the Equinix Fabric portal and follow the documentation steps.

Connecting the Metal VLAN to the Shared Port

Once the L2 connection between Equinix Metal and Equinix Fabric is ready, you can follow these steps to connect the Primary Port to the Metal VLAN created by Terraform in step B). A sketch of how this could be automated instead follows below.
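
As an aside, newer versions of the merged equinix provider expose a resource for shared Metal connections, so the portal steps above could plausibly be automated along these lines (resource schema as I understand it, values illustrative; treat this as a sketch, not a drop-in):

    resource "equinix_metal_connection" "shared" {
      name       = "demo-shared-connection"
      project_id = var.metal_project_id
      type       = "shared"
      redundancy = "primary"
      metro      = "fr"
      speed      = "50Mbps"
      vlans      = [metal_vlan.demo.vxlan]   # VLAN from the step B) sketch
    }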

D) Equinix Metal to Equinix Fabric, Layer 2 & BGP Configuration

  1. Connect to the Cisco CSR NE with PuTTY, using the SSH username and password generated by Terraform.

[Screenshot: PuTTY session to the Equinix NE device]

  2. Configure a basic Layer 2 connection between Network Edge and Equinix Metal. The sub-interface on the Metal server with the IP address 172.16.0.100 has already been created by Terraform, so only the Network Edge configuration remains; follow the steps from here.

  3. Configure BGP on the Cisco CSR NE device to advertise the 172.16.0.0/24 network.

[Screenshot: BGP configuration on the Equinix NE device]

E) Check that connectivity is in place and that each side can ping the other

  1. Check the Cisco CSR NE BGP routing table.

[Screenshot: Cisco CSR NE BGP table for VRF cloud]

  2. Check the Google Cloud Router BGP routing table.

[Screenshot: GCP Interconnect attachment in the console]

    gcloud compute routers get-status equinix-demo-interconn-router --region europe-west3 --project equinix-gcp-demo
    kind: compute#routerStatusResponse
    result:
      bestRoutes:
      - asPaths:
        - asLists:
          - 64538
          pathSegmentType: AS_SEQUENCE
        creationTimestamp: '2022-06-01T00:51:09.465-07:00'
        destRange: 172.16.0.0/24
        kind: compute#route
        network: https://www.googleapis.com/compute/v1/projects/equinix-gcp-demo/global/networks/equinix-demo-gcp-network
        nextHopIp: 169.254.119.106
        priority: 0
        routeType: BGP
      bestRoutesForRouter:
      - asPaths:
        - asLists:
          - 64538
          pathSegmentType: AS_SEQUENCE
        creationTimestamp: '2022-06-01T00:51:09.465-07:00'
        destRange: 172.16.0.0/24
        kind: compute#route
        network: https://www.googleapis.com/compute/v1/projects/equinix-gcp-demo/global/networks/equinix-demo-gcp-network
        nextHopIp: 169.254.119.106
        priority: 0
        routeStatus: ACTIVE
        routeType: BGP
      bgpPeerStatus:
      - advertisedRoutes:
        - destRange: 10.200.0.0/24
          kind: compute#route
          network: https://www.googleapis.com/compute/v1/projects/equinix-gcp-demo/global/networks/equinix-demo-gcp-network
          nextHopIp: 169.254.119.105
          priority: 100
          routeType: BGP
        ipAddress: 169.254.119.105
        name: auto-ia-bgp-equinix-demo-in-2e073a795ae1648
        numLearnedRoutes: 1
        peerIpAddress: 169.254.119.106
        state: Established
        status: UP
        uptime: 17 hours, 9 minutes, 33 seconds
        uptimeSeconds: '61773'
      network: https://www.googleapis.com/compute/v1/projects/equinix-gcp-demo/global/networks/equinix-demo-gcp-network

  3. Ping from the Equinix Metal server (172.16.0.100) to the Google Cloud GCE VM (10.200.0.100).

[Screenshot: ping from the Metal server to the GCP VM]

  4. Ping from the Google Cloud GCE VM (10.200.0.100) to the Equinix Metal server (172.16.0.100).

[Screenshot: ping from the GCP VM to the Metal server]
