Adding a disk to an existing VM recreates the VM #2233

Closed
4 tasks done
johng521888 opened this issue Jul 4, 2024 · 7 comments
Labels: acknowledged (Status: Issue or Pull Request Acknowledged), area/vm (Area: Virtual Machines), bug (Type: Bug)

Comments

@johng521888

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial description of the original issue to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

1.7.1

Terraform Provider

2.6.1

VMware vSphere

7.0.3

Description

I modified main.tf and tried to add a disk to an existing VM. Running terraform apply recreates my VM, which would cause data loss. Is there any way to avoid the recreation?

Affected Resources or Data Sources

resource/vsphere_virtual_machine

Terraform Configuration

1. Create a vSphere VM from a template with only one disk.
2. Add another disk block to the vsphere_virtual_machine resource (see the sketch below).
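
A minimal sketch of the change, with placeholder labels and sizes (nothing here is taken from the actual configuration; the other VM arguments are elided):

resource "vsphere_virtual_machine" "vm" {
  # ... name, resource pool, datastore, CPU/memory, clone block, etc. ...

  # Boot disk created together with the VM.
  disk {
    label = "disk0"
    size  = 40
  }

  # Second disk added afterwards; on its own this should be an in-place
  # reconfigure, not a destroy and re-create.
  disk {
    label       = "disk1"
    size        = 100
    unit_number = 1
  }
}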

Debug Output

vsphere_virtual_machine.vm: Destroying... [id=422de896-1dbe-6669-d019-fa8e69afd15f]

Panic Output

No response

Expected Behavior

When I add a new disk to an existing VM, I expect terraform apply to update the VM in place rather than recreate it.

Actual Behavior

Adding a new disk recreates the VM.

Steps to Reproduce

1. Use Terraform to create a VM with only one disk.
2. Use Terraform to create a standalone disk.
3. Modify the virtual machine's main.tf and attach the disk created in step 2 to the virtual machine (see the sketch after this list).
4. Applying this change destroys and recreates the VM.
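
A hedged sketch of steps 2 and 3, with placeholder names and paths (the datastore, datacenter, and vmdk path are illustrative, and the data.vsphere_datastore reference is assumed to be declared elsewhere in the configuration):

# Step 2: a standalone virtual disk.
resource "vsphere_virtual_disk" "extra" {
  size       = 100
  vmdk_path  = "example-vm/extra-disk.vmdk"
  datacenter = "dc-01"
  datastore  = "datastore-01"
}

# Step 3: attach it to the existing VM instead of creating a new disk.
resource "vsphere_virtual_machine" "vm" {
  # ... all previous arguments, including the original disk0 block, unchanged ...

  disk {
    label        = "disk1"
    attach       = true
    path         = vsphere_virtual_disk.extra.vmdk_path
    datastore_id = data.vsphere_datastore.ds.id
    unit_number  = 1
  }
}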

Environment Details

No response

Screenshots

No response

References

#434

@johng521888 johng521888 added bug Type: Bug needs-triage Status: Issue Needs Triage labels Jul 4, 2024

github-actions bot commented Jul 4, 2024

Hello, johng521888! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

@spacegospod
Collaborator

Hey @johng521888, I tried this myself: I added both a simple disk block and one with a path to an existing VMDK.
Neither of these destroys the VM; they just reconfigure it.
I ran this experiment on a basic VM and on one that I created from a template.

Can you try updating your provider to the latest version?
If you continue to encounter this problem, please share your configuration and the output from terraform plan so that we can see the diff.

@spacegospod spacegospod added acknowledged Status: Issue or Pull Request Acknowledged area/vm Area: Virtual Machines labels Jul 8, 2024
@tenthirtyam tenthirtyam added the waiting-response Status: Waiting on a Response label Jul 8, 2024
@tenthirtyam tenthirtyam added this to the Backlog milestone Jul 8, 2024
@johng521888
Author

Hey @johng521888, I tried this myself: I added both a simple disk block and one with a path to an existing VMDK. Neither of these destroys the VM; they just reconfigure it. I ran this experiment on a basic VM and on one that I created from a template.

Can you try updating your provider to the latest version? If you continue to encounter this problem, please share your configuration and the output from terraform plan so that we can see the diff.

Hi @spacegospod,
I tried again today but see the same issue with version 2.8.2 of the vSphere provider. I have two modules, as shown below.
1. vshereVM (creates the machine):

resource "vsphere_virtual_machine" "vm" {
  name             = var.hostname
  num_cpus         = var.vcpu
  memory           = var.memory
  datastore_id     = xxx
  resource_pool_id = xxx
  guest_id         = xxx
  scsi_type        = xxx

  disk {
    label            = "disk0"
    size             = var.disk_size
    eagerly_scrub    = xxx
    thin_provisioned = xxx
  }

  # disk {
  #   attach       = true
  #   path         = "xxxx/thisistestdisk.vmdk"
  #   label        = "disk1"
  #   datastore_id = xxx
  #   unit_number  = 1
  #   io_limit     = 3000
  # }

  ......
  ......
}
The first time, I created this machine with one disk using terraform apply.

2. Create another disk with the following configuration:

resource "vsphere_virtual_disk" "virtual_disk" {
  size       = 100
  vmdk_path  = first_create_machine_UUID/thisistestdisk.vmdk
  datacenter = same_machine_datacenter
  datastore  = same_machine_datastore
}

3. Uncomment the second disk block in the virtual machine's main.tf and run terraform apply in the virtual machine module:

resource "vsphere_virtual_machine" "vm" {
name=var.hostname
num_cpus = var.vcpu
memory = var.memory
datastore_id = xxx
resource_pool_id = xxx
guest_id = xxx
scsi_type = xxx

disk {
    lable = "disk0"
    size = var.disk_size
    eagerly_scrub = xxx
    thin_provisioned = xxx
}
disk {
    attach = true
    path = first_create_machine_UUID/thisistestdisk.vmdk
    label = "disk1"
    datastore_id = xxx
    unit_number = 1
    io_limit = 3000 

}
......
......
}

Plan: 1 to add, 0 to change, 1 to destroy.

I don't know why this happens. Can I avoid destroying my virtual machine? Not everyone knows from the beginning whether they will want to add a second disk.
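
For reference, core Terraform's prevent_destroy lifecycle argument can guard against exactly this situation: if a plan would destroy the VM, the apply fails with an error instead of deleting anything. A minimal sketch:

resource "vsphere_virtual_machine" "vm" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Any plan that would destroy this VM makes the apply fail,
    # so an unexpected replacement cannot cause data loss.
    prevent_destroy = true
  }
}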

@github-actions github-actions bot removed the waiting-response Status: Waiting on a Response label Jul 10, 2024
@spacegospod
Collaborator

Adding a disk shouldn't, by itself, be able to force a VM to be re-created.
Have any changes been made to this VM outside of Terraform?

I'd be curious to see the output just above Plan: 1 to add, 0 to change, 1 to destroy.
Terraform should display the configuration diff.
It would also be useful to run the provider with TF_LOG set to TRACE and share the detailed logs; just make sure to obfuscate any potentially sensitive data.

@johng521888
Author

Thank you @spacegospod.
I have solved this problem. The reason is that the instance lifecycle had never completed, so every apply operation also recreated the VM.

My VM's network name contains a special character, "/", which Terraform cannot recognize; this prevented the lifecycle of the created VM from completing normally.
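
For reference, a hedged sketch of how a portgroup is typically resolved through the vsphere_network data source, where a "/" inside the portgroup name can be read as an inventory path separator, so the lookup (and the VM creation that depends on it) may fail partway. The names below are placeholders; the original configuration was not shared, so this may not match it:

data "vsphere_datacenter" "dc" {
  name = "dc-01" # placeholder
}

data "vsphere_network" "net" {
  # A "/" in this name may be treated as a path separator by the lookup.
  name          = "prod/app-portgroup" # placeholder
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "vm" {
  # ... other arguments unchanged ...

  network_interface {
    network_id = data.vsphere_network.net.id
  }
}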

@spacegospod
Collaborator

Thank you for the update; I'm going to close the ticket now.
Feel free to re-open if you need assistance.

@tenthirtyam tenthirtyam removed the needs-triage Status: Issue Needs Triage label Aug 14, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 14, 2024
@tenthirtyam tenthirtyam removed this from the Backlog milestone Sep 17, 2024