When creating a VM, two hard disks are being created when only one disk block is specified #1944

Open · 4 tasks done
grothja opened this issue Jul 9, 2023 · 4 comments

Labels: area/vm (Area: Virtual Machines) · bug (Type: Bug) · needs-triage (Status: Issue Needs Triage)

grothja commented Jul 9, 2023

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial issue description to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

1.5.1

Terraform Provider

2.4.1

VMware vSphere

7.0.3

Description

When I create a VM cloned from a template, two hard disks are created, even though I'm only specifying a single disk {} block. The primary root device (/dev/sda) matches what is in the disk block (e.g., if I specify 50 GB, Hard Disk 1 in vSphere will be 50 GB), and the second device always matches the template disk size (40 GB).

It's definitely possible I'm just missing something obvious, but I've searched through the issues in this repo and wasn't able to come up with anything...

Affected Resources or Data Sources

resource/vsphere_virtual_machine

Terraform Configuration

# Lots of data resources like

data "vsphere_virtual_machine" "rockylinux9_base" {
  name          = "rockylinux9-base-0.1.0"
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Passed into a module using the data resources

module "my-vm" {
  ...
  template = data.vsphere_virtual_machine.rockylinux9_base
  ...
}

# In the module

resource "vsphere_virtual_machine" "main" {
  name                 = var.name
  resource_pool_id     = var.compute_cluster.resource_pool_id
  datastore_cluster_id = var.datastore_cluster.id
  num_cpus             = var.num_cpus
  memory               = var.memory
  guest_id             = var.template.guest_id
  scsi_type            = var.template.scsi_type
  tags                 = var.vsphere_tags
  folder               = local.locations[var.location]["folder"]

  network_interface {
    network_id   = var.network.id
    adapter_type = var.template.network_interface_types[0]
  }

  disk {
    label            = var.template.disks.0.label
    thin_provisioned = var.template.disks.0.thin_provisioned
    eagerly_scrub    = var.template.disks.0.eagerly_scrub
    unit_number      = var.template.disks.0.unit_number

    size = (
      var.root_disk_size != null ?
      var.root_disk_size :
      var.template.disks.0.size
    )
  }

  clone {
    template_uuid = var.template.id
    ...
  }
  ...
}
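
As an aside, the size conditional above can be written more compactly with Terraform's built-in coalesce(), which returns its first non-null argument; the behavior is equivalent, so this is purely a stylistic alternative:

  size = coalesce(var.root_disk_size, var.template.disks.0.size)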

Debug Output

Plan output:

  + resource "vsphere_virtual_machine" "main" {
      + annotation                              = (sensitive value)
      + boot_retry_delay                        = 10000
      + change_version                          = (known after apply)
      + cpu_limit                               = -1
      + cpu_share_count                         = (known after apply)
      + cpu_share_level                         = "normal"
      + datastore_cluster_id                    = "group-p00000"
      + datastore_id                            = (known after apply)
      + default_ip_address                      = (known after apply)
      + ept_rvi_mode                            = "automatic"
      + extra_config                            = (known after apply)
      + extra_config_reboot_required            = true
      + firmware                                = "bios"
      + folder                                  = "Some Folder"
      + force_power_off                         = true
      + guest_id                                = "rhel9_64Guest"
      + guest_ip_addresses                      = (known after apply)
      + hardware_version                        = (known after apply)
      + host_system_id                          = (known after apply)
      + hv_mode                                 = "hvAuto"
      + id                                      = (known after apply)
      + ide_controller_count                    = 2
      + imported                                = (known after apply)
      + latency_sensitivity                     = "normal"
      + memory                                  = 2048
      + memory_limit                            = -1
      + memory_share_count                      = (known after apply)
      + memory_share_level                      = "normal"
      + migrate_wait_timeout                    = 30
      + moid                                    = (known after apply)
      + name                                    = "my-vm"
      + num_cores_per_socket                    = 1
      + num_cpus                                = 2
      + power_state                             = (known after apply)
      + poweron_timeout                         = 300
      + reboot_required                         = (known after apply)
      + resource_pool_id                        = "resgroup-000000"
      + run_tools_scripts_after_power_on        = true
      + run_tools_scripts_after_resume          = true
      + run_tools_scripts_before_guest_shutdown = true
      + run_tools_scripts_before_guest_standby  = true
      + sata_controller_count                   = 0
      + scsi_bus_sharing                        = "noSharing"
      + scsi_controller_count                   = 1
      + scsi_type                               = "pvscsi"
      + shutdown_wait_timeout                   = 3
      + storage_policy_id                       = (known after apply)
      + swap_placement_policy                   = "inherit"
      + tags                                    = [
          + "urn:vmomi:InventoryServiceTag:0b37b83a-31cd-40f0-b3ea-9f12bad8e311:GLOBAL",
          + "urn:vmomi:InventoryServiceTag:191e4e26-65ee-4df4-8e45-176baea5ec2b:GLOBAL",
          + "urn:vmomi:InventoryServiceTag:f2d3cf0b-9f52-43a8-a2ec-bb916cf4d488:GLOBAL",
        ]
      + tools_upgrade_policy                    = "manual"
      + uuid                                    = (known after apply)
      + vapp_transport                          = (known after apply)
      + vmware_tools_status                     = (known after apply)
      + vmx_path                                = (known after apply)
      + wait_for_guest_ip_timeout               = 0
      + wait_for_guest_net_routable             = true
      + wait_for_guest_net_timeout              = 5

      + clone {
          + template_uuid = "4206ba07-f0f9-2d42-a675-377fb451d68c"
          + timeout       = 30

          + customize {
              + dns_server_list = [
                  + "1.1.1.1",
                ]
              + dns_suffix_list = [
                  + "somedomain.com",
                ]
              + ipv4_gateway    = "10.1.1.1"
              + timeout         = 10

              + linux_options {
                  + domain       = "somedomain.com"
                  + host_name    = "my-vm"
                  + hw_clock_utc = true
                }

              + network_interface {
                  + ipv4_address = "10.1.1.2"
                  + ipv4_netmask = 24
                }
            }
        }

      + disk {
          + attach            = false
          + controller_type   = "scsi"
          + datastore_id      = "<computed>"
          + device_address    = (known after apply)
          + disk_mode         = "persistent"
          + disk_sharing      = "sharingNone"
          + eagerly_scrub     = false
          + io_limit          = -1
          + io_reservation    = 0
          + io_share_count    = 0
          + io_share_level    = "normal"
          + keep_on_remove    = false
          + key               = 0
          + label             = "Hard disk 1"
          + path              = (known after apply)
          + size              = 50
          + storage_policy_id = (known after apply)
          + thin_provisioned  = true
          + unit_number       = 0
          + uuid              = (known after apply)
          + write_through     = false
        }

      + network_interface {
          + adapter_type          = "vmxnet3"
          + bandwidth_limit       = -1
          + bandwidth_reservation = 0
          + bandwidth_share_count = (known after apply)
          + bandwidth_share_level = "normal"
          + device_address        = (known after apply)
          + key                   = (known after apply)
          + mac_address           = (known after apply)
          + network_id            = "dvportgroup-111111"
        }
    }

Panic Output

No response

Expected Behavior

A single disk is created instead of two.

Actual Behavior

Two disks are created.

Steps to Reproduce

I've tested this against multiple vSphere provider versions with no luck...

Environment Details

No response

Screenshots

No response

References

No response

grothja added the bug (Type: Bug) and needs-triage (Status: Issue Needs Triage) labels on Jul 9, 2023

github-actions bot commented Jul 9, 2023

Hello, grothja! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.


grothja commented Aug 1, 2023

I circled back to this and found that there isn't a bug with creating two disks, but there may be a bug (or possible enhancement) in displaying which disks will be created.

For others' sake: my Packer vsphere-clone config contained this for the VM template, which I thought was creating the root volume:

source "vsphere-clone" "rocky" {
  ...

  storage {
    disk_size             = 40000
    disk_thin_provisioned = true
  }
}

In reality, it was creating an extra disk in addition to the disk cloned from the source VM.
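
A minimal sketch of the corrected builder config, assuming the goal was a single cloned root disk (other builder settings elided):

source "vsphere-clone" "rocky" {
  ...

  # No storage block: the clone keeps the source VM's existing disk.
  # As described above, a storage block here adds an extra disk on top
  # of the cloned one rather than resizing the root volume.
}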

Despite that, when looking at the vSphere template through a Terraform data source, it only showed a single disk:

  "disks" = tolist([
    {
      "eagerly_scrub" = false
      "label" = "Hard disk 1"
      "size" = 40
      "thin_provisioned" = true
      "unit_number" = 0
    },
  ])

And as seen above, the plan only showed a single disk block for the disks being created. It's possible this is related to how vSphere presents this information, in which case there may be nothing this Terraform provider can do, but ideally the plan would show that two disks are being created.
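
One way to sanity-check what the data source reports is to expose its disks attribute as an output before planning the clone; a sketch, reusing the data source name from the configuration above:

output "template_disks" {
  value = data.vsphere_virtual_machine.rockylinux9_base.disks
}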

If this is not a TF provider issue though, feel free to close this out.

tenthirtyam added this to the Backlog milestone on Aug 7, 2023
tenthirtyam added the area/vm (Area: Virtual Machines) label on Aug 7, 2023
tenthirtyam added and then removed the vsphere/v8 (vSphere 8.0) label on Sep 5, 2023
@jbfriedrich

I am running into the same issue, but with RHEL and vSphere 8. I thought it might be because the disk size does not match the disk size in the template, but when I try to access the template's disks, I get the following error:

│  118:         size  = data.vsphere_virtual_machine.rhel_template.disks.0.size
│     ├────────────────
│     │ data.vsphere_virtual_machine.rhel_template.disks is empty list of object
│ 
│ The given key does not identify an element in this collection value: the collection has no elements.

Something seems to be off here, but I cannot put my finger on it yet. I am doing some more troubleshooting and may then open a separate issue for my problem.
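
In the meantime, a defensive sketch for that size expression, assuming a hard-coded 40 GB fallback is acceptable; Terraform's built-in try() returns its first argument that evaluates without an error, so this works around the empty list without explaining why it is empty:

  size = try(data.vsphere_virtual_machine.rhel_template.disks[0].size, 40)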


lindhe commented Jul 5, 2024

I think I have the same issue!

I created my VMs with disk.label = "disk0" as per the documentation's recommendations:

The disks must have a label argument assigned in a convention matching diskN, starting with disk number 0, based on each virtual disk order on the SCSI bus. As an example, a disk on SCSI controller 0 with a unit number of 0 would be labeled as disk0, a disk on the same controller with a unit number of 1 would be disk1, but the next disk, which is on SCSI controller 1 with a unit number of 0, still becomes disk2.

Yet, when I wanted to recreate my state by importing the VM, I ended up with a diff like this:

[screenshot: plan diff showing the disk marked for deletion and recreation]

Changing to disk.label = "Hard disk 1" seems to fix that issue; I no longer get a diff indicating any disk deletion.
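
For reference, a sketch of the changed block (the size is illustrative); note that this label matches what vSphere itself reports rather than the diskN convention quoted above:

disk {
  label = "Hard disk 1" # was "disk0"; matches the label reported by vSphere
  size  = 50            # illustrative
}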
