XCP-ng

    How to configure multiple networks on a VM with Terraform

    Infrastructure as Code · 12 Posts · 3 Posters · 2.7k Views
    • Monkadelic_D

      Let's have a discussion:
      I'm trying to figure out how to optionally assign more than one network to VMs created with Terraform using my module. I created a module for Xen Orchestra VM creation (vm.tf and variables.tf below) that lets me create any number of VMs (each with a fixed number of network connections, currently just one) simply by defining the required variables in a VM definition Terraform file.

      I'm testing air-gapped systems, so I need an internal private network in addition to the primary LAN network.
      I'd like to keep as much as possible in the module and continue to use only variables in the Terraform VM definition. I would like to share this module at some point, and supporting multiple networks would make it more useful.

      Ideally, I could specify one or more networks and use count or a for_each loop over a variable declared in the module's variables file and populated by the VM definition. I can't figure out how to attach only the specified networks.

      As I was writing this, I thought of defining a list of network names and using count to create a data object for each one, then iterating a network block over those data objects with for_each. Would that work?

      I may have overlooked a simple solution, or I may just be overthinking this.
      Any ideas? Is anyone using a VM creation module like this and defining more than one network?
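
      A minimal sketch of the variable that list-of-names idea would need, assuming a plain list of network name labels (the variable name and default here are hypothetical, not from the module below):

      ```hcl
      # Name labels of the pool networks to attach, in VIF order.
      variable "networks" {
        description = "Name labels of the pool networks to attach to each VM."
        type        = list(string)
        default     = ["Pool-wide network associated with eth1"]
      }
      ```

      Each entry would then drive one network data lookup and one network block on the VM.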

      VM definition:

      module "xo_lab_vm" {
        source           = "../modules/xenorchestra-lab-vm"

        pool_name        = "xcp_pool_1"
        vm_template_name = "01-rhel-9_1-noswap-tpl-NAS"
        sr_name          = "Local storage"
        network_name     = "Pool-wide network associated with eth1"

        vm_count         = 1
        vm_name          = "bastion"
        vm_domain        = "localtest.lan"
        vm_hostname      = "bastion"
        vm_username      = "user"
        ip_wait          = true
        memory_max       = 4294967296
        num_cpus         = 2
        if_mac           = ["2e:cd:84:b5:15:fe"]

        vdisk_info = {
          name = "root01"
          size = 10737418240
        }
      }
      

      Module:

      # vm.tf
      data "xenorchestra_pool" "pool" {
        name_label = var.pool_name
      }
      
      data "xenorchestra_template" "vm_template" {
        name_label = var.vm_template_name
      }
      
      data "xenorchestra_sr" "sr" {
        name_label = var.sr_name
        pool_id    = data.xenorchestra_pool.pool.id
      }
      
      data "xenorchestra_network" "network" {
        name_label = var.network_name
        pool_id    = data.xenorchestra_pool.pool.id
      }
      
      resource "xenorchestra_vm" "vm" {
        count      = var.vm_count
        memory_max = var.memory_max
        cpus       = var.num_cpus
        name_label = "${var.vm_name}${count.index}"
        template   = data.xenorchestra_template.vm_template.id
      
        affinity_host = data.xenorchestra_pool.pool.master
        wait_for_ip   = var.ip_wait
      
        network {
          network_id  = data.xenorchestra_network.network.id
          mac_address = var.if_mac[count.index]
        }
      
        disk {
          sr_id      = data.xenorchestra_sr.sr.id
          name_label = "${var.vm_name}${count.index}-${var.vdisk_info.name}"
          size       = var.vdisk_info.size
        }
      
        timeouts {
          create = "10m"
        }
      }
      
      # Input variable definitions.
      
      # Variables for data sources
      variable "pool_name" {
        description       = "Name of the Xen Pool to create VM in."
        type              = string
      }
      variable "vm_template_name" {
        description       = "Name of template. Must already exist in pool."
        type              = string
      }
      variable "sr_name" {
        description       = "Name of SR for vdisk."
        type              = string
      }
      variable "network_name" {
        description       = "Name of network for vif."
        type              = string
      }
      
      # Variables for VM
      variable "vm_count" {
        description       = "Number of VMs to create."
        type              = number
        default           = 1
      }
      variable "vm_name" {
        description       = "Name of VM."
        type              = string
      }
      variable "memory_max" {
        description       = "Maximum memory for VM."
        type              = number
      }
      variable "num_cpus" {
        description       = "Number of vCPUs for VM."
        type              = number
      }
      variable "if_mac" {
        description       = "List of MAC addresses to assign to the VIFs, one per VM."
        type              = list(string)
      }
      variable "ip_wait" {
        description       = "Should Terraform wait until an IP is assigned before completing creation?"
        type              = bool
      }
      variable "vdisk_info" {
        description       = "Object containing the name and size of the vdisk."
        type              = object({
          name            = string
          size            = number
        })
      }
      
      ## TODO - utilize these variables via remote-exec and local-exec provisioners
      # Variables for OS configuration
      variable "vm_hostname" {
        description       = "Hostname to assign to the VM."
        type              = string
      }
      variable "vm_domain" {
        description       = "Domain name of VM."
        type              = string
      }
      variable "vm_username" {
        description       = "User name to create on the VM."
        type              = string
      }
      
      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Question for ddelnano 🙂

        • Monkadelic_D

          I just read about dynamic blocks which may get me closer to my goal. I'll look into that and update this post with any progress.

          • Monkadelic_D

            I wasn't able to figure it out with dynamic blocks. I think I'm just overcomplicating it.
            I tried to use a list(object) for the networks so I could include two values: a name and a MAC boolean. The name is needed by the data source in order to look up the ID. The MAC boolean would indicate whether I wanted to assign a MAC from a predefined list of locally administered MACs.

            In my environment, the LAN network connection gets a statically assigned IP from my DHCP server, which is already configured with numerous local MACs. The internal (to the XCP-ng server) network connections would not have a specified MAC, since those IPs will be configured later within the cluster.

            My understanding of for_each is very limited. This is the first time I've tried to use it.

            I thought I could use for_each in the data "xenorchestra_network" "network" block to read the name values from the list(object) variable, then use for_each with data.xenorchestra_network.network in a dynamic "network" block within the xenorchestra_vm definition to get the ID of each network data object.
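
            A rough sketch of that first step, assuming `var.networks` is a simple list(string) of name labels (untested against the provider; for_each over a list needs toset):

            ```hcl
            data "xenorchestra_network" "network" {
              for_each   = toset(var.networks)
              name_label = each.value
              pool_id    = data.xenorchestra_pool.pool.id
            }
            ```

            The dynamic "network" block could then iterate over values(data.xenorchestra_network.network) to reach each ID.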

            I've got to put more time into some work and had to leave it as is, with just one network created by Terraform.

            I will continue looking for a solution and update this thread with anything I find.

            Still welcoming input from anyone interested. I think if I can get this working, this Xen Orchestra VM module could be useful to more people, especially when spinning up VMs for homelabs and learning new tech. I started this so I could quickly and easily spin up VMs to work with Docker Swarm and k8s clusters.

            • ddelnano (Terraform Team) @Monkadelic_D

              Hey Monkadelic_D, this is possible with a count'ed network data source and a dynamic block for dynamically creating the xenorchestra_vm network blocks.

              Here is an example that should show off what you are trying to do:

              data "xenorchestra_pool" "pool" {
                name_label = "Your pool name"
              }
              
              # networks is assumed to be a list(string) of network name labels
              data "xenorchestra_network" "networks" {
                count      = length(var.networks)
                name_label = element(var.networks, count.index)
                pool_id    = data.xenorchestra_pool.pool.id
              }
              
              resource "xenorchestra_vm" "vm" {
              
                # Fill in required arguments here
              
                dynamic "network" {
                  for_each = data.xenorchestra_network.networks[*].id
                  content {
                    network_id = network.value
                  }
                }
              }
              

              I verified that this creates the correct plan when the networks variable is changed.

              • Monkadelic_D @ddelnano

                ddelnano Thanks! I think my trouble was trying to have a boolean value for whether I wanted the MAC to be set manually from a list, as I was doing before I decided to support configuring multiple NICs.

                Now I need to figure out a way to match up a list of MAC booleans with the networks. It's complicated, since the count index from the networks data object can't be used in another object. I'll have to figure out how to iterate with some sort of index in the dynamic block to align the networks with a list of boolean values indicating whether or not to use the optional mac_address argument.

                I'll look into that.
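
                One possible way to line the two lists up: for_each over a list exposes the position as network.key, which can index a parallel list. A sketch (assumes hypothetical variables assign_mac, a list of booleans, and if_mac, the MAC list; untested against the provider):

                ```hcl
                dynamic "network" {
                  for_each = data.xenorchestra_network.networks[*].id
                  content {
                    network_id = network.value
                    # network.key is the list index, so it can index a parallel
                    # list of booleans; setting an optional argument to null
                    # omits it entirely.
                    mac_address = var.assign_mac[network.key] ? var.if_mac[network.key] : null
                  }
                }
                ```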

                • ddelnano (Terraform Team) @Monkadelic_D

                  Monkadelic_D can you explain how these MAC addresses will be assigned? Will you have a static list of addresses per network?

                  If you provide an example, I can modify the example to accommodate that if I see a solution.

                  • Monkadelic_D @ddelnano

                    ddelnano They'll be statically assigned via DHCP. I have a list of locally administered MACs that I set up in my DHCP server for various projects/labs.

                    • ddelnano (Terraform Team) @Monkadelic_D

                      Monkadelic_D I'm still not understanding how you determine which VMs need network interfaces with one of these existing DHCP MAC assignments.

                      Could you provide an example that includes a DHCP server and its MAC assignments, XCP network(s), VMs and what interfaces you expect them to have?

                      • Monkadelic_D @ddelnano

                        ddelnano In the VM definition I posted above, I list however many MACs I need. There must be at least as many MACs as vm_count. I've pre-generated a bunch of locally administered MACs and set them up in my DHCP server for the various /28 CIDRs I reserve for homelab/testing. For instance, I've got 192.168.1.240/28 as a lab CIDR. In my DHCP server I set up the following MAC assignments:

                        lab-vm-00	0a:c3:fd:8a:6e:5b	192.168.1.240	
                        lab-vm-01	52:b2:f4:d7:d4:fe	192.168.1.241	
                        lab-vm-02	62:94:26:63:dd:a2	192.168.1.242	
                        lab-vm-03	fa:bb:63:3e:b7:ab	192.168.1.243	
                        lab-vm-04	16:99:1d:0a:73:d6	192.168.1.244	
                        lab-vm-05	a6:48:c5:42:01:99	192.168.1.245	
                        lab-vm-06	72:68:06:76:8f:88	192.168.1.246	
                        lab-vm-07	2e:68:bf:c5:50:48	192.168.1.247	
                        lab-vm-08	ae:3d:1c:14:80:bc	192.168.1.248	
                        lab-vm-09	32:fc:72:4e:f1:5e	192.168.1.249	
                        lab-vm-10	16:f8:57:ab:01:c0	192.168.1.250	
                        lab-vm-11	22:89:3a:02:14:13	192.168.1.251	
                        lab-vm-12	9a:29:f4:9b:c2:d0	192.168.1.252	
                        lab-vm-13	2e:89:09:26:ec:6f	192.168.1.253	
                        lab-vm-14	ba:68:15:7c:b0:c4	192.168.1.254
                        

                        I use the count index in the xenorchestra_vm definition to cycle through the MACs as it creates each VM.
                        As it is, it will only assign one MAC per VM, to whichever network block uses mac_address = var.if_mac[count.index]. I use ip_wait = true so terraform apply doesn't complete until the network is confirmed up or the timeout is reached.

                        When I initially wrote this module I was just spinning up various VMs while learning Docker and Docker Swarm. I wanted an easy way to create and destroy a VM I could access via SSH.
                        Trying to use it for an air-gapped RKE2 cluster is what got me looking into multiple networks. I need one network that I can reach over the LAN and another that's internal to the cluster VMs. That lets me do basic configuration, then disconnect from the LAN network and use a bastion VM for ingress.

                        My XCP-ng server has 3 physical NICs: 1x 1 Gbps and 2x 10 Gbps.
                        Main 1G (PIF eth0) is the management and LAN access network.
                        Migration 10G (PIF eth2) is the default migration network.
                        I have various internal networks for labs, and those are the ones I'd like to use as the second network for this use case. On the internal networks I don't need to assign MACs, since they won't have access to a DHCP server; I'll configure the IPs with Ansible.

                        • ddelnano (Terraform Team) @Monkadelic_D

                          Monkadelic_D thanks for the additional detail. I'm still unsure of the kind of construct you are looking for. Here is an example that allows each VM to have a single private network and as many DHCP-connected networks (meaning a MAC is provided) as needed.

                          # variables.tf
                          variable "mac_network" {
                            type = string
                          }
                          # Map where key is VM hostname and value is list of MACs
                          variable "vm_network_macs" {
                            type    = map(list(string))
                            default = {}
                          }
                          
                          # terraform.tfvars
                          vm_network_macs = {
                            "lab-vm-00" = ["0a:c3:fd:8a:6e:5b"],
                            "lab-vm-01" = ["52:b2:f4:d7:d4:fe"],
                            "lab-vm-02" = ["62:94:26:63:dd:a2"],
                          }
                          
                          # vm.tf
                          resource "xenorchestra_vm" "vm" {
                          
                            # The vm_network_macs map contains a key for every VM hostname
                            count      = length(keys(var.vm_network_macs))
                            name_label = keys(var.vm_network_macs)[count.index]
                          
                            [ ... ]
                          
                            # This creates a network block for each MAC address provided
                            dynamic "network" {
                              for_each = lookup(var.vm_network_macs, keys(var.vm_network_macs)[count.index])
                              iterator = mac_addr
                              content {
                                network_id  = var.mac_network
                                mac_address = mac_addr.value
                              }
                            }
                          
                            # This is the private network your VMs will be given
                            network {
                              network_id = data.xenorchestra_network.network.id
                            }
                          }
                          

                          Here is what the plan of that looks like:

                          Terraform will perform the following actions:
                          
                            # xenorchestra_vm.vm[0] will be created
                            + resource "xenorchestra_vm" "vm" {
                          
                          [ .. ]
                          
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = "0a:c3:fd:8a:6e:5b"
                                    + network_id     = "Your mac network"
                                  }
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = (known after apply)
                                    + network_id     = "6c4e1cdc-9fe0-0603-e53d-4790d1fce8dd"
                                  }
                              }
                          
                            # xenorchestra_vm.vm[1] will be created
                            + resource "xenorchestra_vm" "vm" {
                          
                          [ .. ]
                          
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = "52:b2:f4:d7:d4:fe"
                                    + network_id     = "Your mac network"
                                  }
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = (known after apply)
                                    + network_id     = "6c4e1cdc-9fe0-0603-e53d-4790d1fce8dd"
                                  }
                              }
                          
                            # xenorchestra_vm.vm[2] will be created
                            + resource "xenorchestra_vm" "vm" {
                           
                          [ ... ]
                          
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = "62:94:26:63:dd:a2"
                                    + network_id     = "Your mac network"
                                  }
                                + network {
                                    + device         = (known after apply)
                                    + ipv4_addresses = (known after apply)
                                    + ipv6_addresses = (known after apply)
                                    + mac_address    = (known after apply)
                                    + network_id     = "6c4e1cdc-9fe0-0603-e53d-4790d1fce8dd"
                                  }
                              }
                          
                          Plan: 3 to add, 0 to change, 0 to destroy.
                          
                          

                          If that isn't what you're looking for, I hope it at least sparks some additional ideas beyond what you've already tried.

                          • Monkadelic_D @ddelnano

                            ddelnano My goal is to have one DHCP (on my LAN) network and 0 or 1 private networks per VM.

                            With the way my module is set up currently, only one network is created via Terraform. The main part I'm having trouble with is the second network being conditionally created with or without a MAC assigned.

                            If I use a dynamic block, all the network options get configured for each network. There are really only two situations I want the module to handle:

                            1. a single network with a MAC assigned
                            2. one network with a MAC assigned, plus another network without a MAC on a different pool network.

                            Until I can figure out whether that's possible, I've just created a second module that uses two networks.

                            One idea I had was to use a complex list(object) variable where each item in the list has a network name and a boolean that determines whether or not to assign a MAC from the pre-populated list. But I couldn't get things going in the right direction using the same variable both for a Terraform data source (used to get the ID associated with the named network) and for the dynamic network block, with each network only assigning a MAC conditionally.
                            If that were possible, it would allow much more freedom to create whatever combination of network connections I wanted. Ideally it would check for a specific network name that always gets a MAC assigned and another that never gets one.

                            I am currently using the network name, from a list variable, to create a data object that provides the network ID for use in the xenorchestra_vm network block. I had trouble figuring out how to do that for two networks, access the ID for each, and apply the MAC boolean conditionally.
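
                            One sketch of that list(object) idea, conditionally assigning the MAC via null (variable and data source names here are hypothetical, and this is untested against the provider):

                            ```hcl
                            # Each entry names a pool network and says whether it gets a MAC.
                            variable "vm_networks" {
                              type = list(object({
                                name       = string
                                assign_mac = bool
                              }))
                            }

                            # One data lookup per requested network.
                            data "xenorchestra_network" "net" {
                              count      = length(var.vm_networks)
                              name_label = var.vm_networks[count.index].name
                              pool_id    = data.xenorchestra_pool.pool.id
                            }

                            resource "xenorchestra_vm" "vm" {
                              count = var.vm_count
                              # ... required arguments as in the module above ...

                              dynamic "network" {
                                for_each = var.vm_networks
                                content {
                                  network_id  = data.xenorchestra_network.net[network.key].id
                                  # null omits the optional mac_address argument entirely,
                                  # so entries with assign_mac = false get no MAC.
                                  mac_address = network.value.assign_mac ? var.if_mac[count.index] : null
                                }
                              }
                            }
                            ```

                            Since null drops an optional argument, the same dynamic block would cover both situations: a single MAC'd network, or a MAC'd network plus a bare internal one.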

                            • olivierlambert moved this topic from Xen Orchestra
                            • olivierlambert moved this topic from Advanced features