Plan/Apply shows wrong disk values and created clone with wrong disk values #2191

Open · thesefer opened this issue May 7, 2024 · 4 comments
Labels: bug (Type: Bug), needs-triage (Status: Issue Needs Triage)

thesefer commented May 7, 2024

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial description of the original issue to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

Terraform v1.4.2

Terraform Provider

v2.0.2

VMware vSphere

8.0.2.00200

Description

The dynamic disk0 is wrongly created as thin even though the configuration sets eagerly_scrub: true, thin_provisioned: false. This causes consecutive runs to fail.
The persistent/attached dynamic disk1 is shown in plan/apply with eagerly_scrub: false, thin_provisioned: true, but is actually created correctly as eagerZeroedThick.

The template was created with Packer using a thin disk.
Creating the VM manually ("Deploy from Template" under "VM Templates") and setting "Select virtual disk format" to, for example, "Thick Provision Eager Zeroed" creates the disk correctly.

If the whole template is needed, please let me know.
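
For reference, a stripped-down disk block that should come out as Thick Provision Eager Zeroed looks like this (a minimal sketch with placeholder values, not taken from the full configuration below):

disk {
  label            = "disk0"
  size             = 60    # placeholder size in GB
  eagerly_scrub    = true  # Thick Provision Eager Zeroed
  thin_provisioned = false
}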

Affected Resources or Data Sources

resource/vsphere_virtual_disk
resource/vsphere_virtual_machine

Terraform Configuration

resource "vsphere_virtual_disk" "PersistentDataDisk" {
  count              = var.persistent_data_disk == true ? length(var.vms) : 0
  size               = var.vms[count.index].flavor.disks[1].size
  datacenter         = var.datacenter
  vmdk_path          = "${var.vms[count.index].general.vm_name} - PersistentDisk/${var.vms[count.index].general.host_name}_persistent.vmdk"
  datastore          = var.vms[count.index].general.datastore
  type               = var.vms[count.index].flavor.disk_type == "thin" ? "thin" : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? "eagerZeroedThick" : "lazy")
  create_directories = true
}

resource "vsphere_virtual_machine" "vms" {
  depends_on       = [data.vsphere_virtual_machine.template, vsphere_virtual_disk.PersistentDataDisk]
  count            = length(var.vms)
  host_system_id   = data.vsphere_host.esxi_hosts[count.index].id
  name             = var.vms[count.index].general.vm_name
  resource_pool_id = data.vsphere_compute_cluster.compute_cluster[count.index].resource_pool_id
  datastore_id     = data.vsphere_datastore.datastores[count.index].id
  annotation       = var.vms[count.index].custom_attr.annotation
  firmware         = var.firmware
  folder           = var.vms[count.index].location.folder_name
  enable_disk_uuid = true

  custom_attributes = tomap({
    (data.vsphere_custom_attribute.admin_contact.id) = (var.vms[count.index].custom_attr.admin_contact)
    (data.vsphere_custom_attribute.owner_name.id) = (var.vms[count.index].custom_attr.owner_name)
  })

  num_cpus = var.vms[count.index].flavor.cpu
  memory   = var.vms[count.index].flavor.ram
  guest_id = var.vms[count.index].general.guest_id

  dynamic "network_interface" {
    for_each = data.vsphere_network.networks
    content {
      network_id = network_interface.value.id
    }
  }

  dynamic "disk" {
    for_each = [for i in var.vms[count.index].flavor.disks: { 
      size   = i.size
      number = i.number
      attach = var.persistent_data_disk == true ? true : false
      path   = "${var.vms[count.index].general.vm_name} - PersistentDisk/${var.vms[count.index].general.host_name}_persistent.vmdk"
    }]
    content {
      label = "disk${disk.value.number}"
      unit_number = disk.value.number
      size = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? disk.value.size : null
      // Thick Provision Lazy Zeroed -> eagerly_scrub & thin_provisioned: false
      // Thick Provision Eager Zeroed -> eagerly_scrub: true, thin_provisioned: false
      // Thin Provision -> eagerly_scrub: false, thin_provisioned: true
      eagerly_scrub = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? var.vms[count.index].flavor.disk_type == "thin" ? false : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? true : false) : null
      thin_provisioned = (disk.value.attach == true && disk.value.number != 1) || disk.value.attach == false ? var.vms[count.index].flavor.disk_type == "thin" ? true : (var.vms[count.index].flavor.disk_type == "eagerZeroedThick" ? false : false) : null
      attach = disk.value.attach == true && disk.value.number == 1 ? disk.value.attach : null
      path = disk.value.attach == true && disk.value.number == 1 ? disk.value.path : null
      datastore_id = disk.value.attach == true && disk.value.number == 1 ? data.vsphere_datastore.datastores[count.index].id : null
    }
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template[count.index].id

    customize {
      linux_options {
        host_name = var.vms[count.index].general.host_name
        domain    = var.vms[count.index].general.domain
      }

      dynamic "network_interface" {
        for_each = [for i in var.vms[count.index].network_interfaces: {
          ipv4_address = i.ipv4_address
          ipv4_netmask = i.ipv4_netmask
        }]
        content {
          ipv4_address = network_interface.value.ipv4_address
          ipv4_netmask = network_interface.value.ipv4_netmask
        }
      }

      dns_server_list = var.global_dns_server_list
      ipv4_gateway    = var.global_ipv4_gateway
    }
  }
}
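
For clarity, the disk_type-to-flags mapping encoded in the nested conditionals above is meant to behave like this lookup (a sketch with a hypothetical local named disk_flags, not part of the applied configuration):

locals {
  # disk_type -> provisioning flags, matching the comments in the disk block
  disk_flags = {
    thin             = { eagerly_scrub = false, thin_provisioned = true }
    eagerZeroedThick = { eagerly_scrub = true, thin_provisioned = false }
    lazy             = { eagerly_scrub = false, thin_provisioned = false }
  }
}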

Debug Output

https://gist.github.com/thesefer/609fcb7334031c7d9d60699b5c1cf2c8

Panic Output

No response

Expected Behavior

disk0 is created as lazy, thin, or eagerZeroedThick according to the configured eagerly_scrub and thin_provisioned values

Actual Behavior

disk0 is created as thin (displayed as eagerZeroedThick in plan/apply)
disk1 is created as eagerZeroedThick (displayed as thin in plan/apply)

Steps to Reproduce

Environment Details

No response

Screenshots

No response

References

No response

thesefer added the bug (Type: Bug) and needs-triage (Status: Issue Needs Triage) labels on May 7, 2024

github-actions bot commented May 7, 2024

Hello, thesefer! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.


thesefer commented May 13, 2024

Found #2178, which is a duplicate of #2116, but based on their descriptions it looks like my problem extends the underlying issue.

tenthirtyam added this to the Backlog milestone on Jun 12, 2024
tenthirtyam (Collaborator) commented

Please verify with the latest version, v2.8.2, and let us know, since the reported version is rather old.

tenthirtyam added the waiting-response (Status: Waiting on a Response) label on Jul 8, 2024
github-actions bot removed the waiting-response (Status: Waiting on a Response) label on Jul 8, 2024
thesefer (Author) commented

Hi, I've been a bit constrained for time over the last week.

I just ran the same config using v2.8.2:

thick provision:

  • disk0: eagerly_scrub = false and thin_provisioned = false -> thick
  • disk1: eagerly_scrub = false and thin_provisioned = true -> thick (should be thin)

eagerZeroedThick provision:

  • disk0: eagerly_scrub = true and thin_provisioned = false -> thick
  • disk1: eagerly_scrub = false and thin_provisioned = true -> eagerZeroedThick (should be thin)
  • A consecutive run wants to change eagerly_scrub = false -> true:
│ Error: disk.0: cannot change the value of "eagerly_scrub" - (old: false newValue: true)
│ 
│   with vsphere_virtual_machine.vms[0],
│   on resources.tf line 58, in resource "vsphere_virtual_machine" "vms":
│   58: resource "vsphere_virtual_machine" "vms" {

At least I can still destroy the resource without an error, but consecutive runs will fail because the state does not match the disks.
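
Until the reported values line up with what is actually created, the only stopgap I can think of is ignoring disk changes, which of course just hides the drift (untested sketch):

  lifecycle {
    ignore_changes = [disk]
  }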
