Awesome! Thanks Vates and DESY for all the work that went into this.
I'm really looking forward to using the Pulumi provider when I get to the automation part of my tests (probably in a few months though).
@florent said in Feedback on immutability:
@rtjdamen for the immutability to be useful, the full chain must be immutable and must never fall out of immutability
the merge process can't lift/put back the immutability, and increasing synchronization between processes would extend the attack surface.
the immutability duration must be longer than or equal to 2 times the full backup interval - 1
the retention must be strictly longer than the immutability. For example, if you have a full backup interval of 7, a retention of 14 and an immutability duration of 13, key backups are K, deltas are D. Immutable backups are in bold, unprotected chains are struck through:
KDDDDDDKDDDDDD worst case: only one full chain protected
KDDDDDKDDDDDDK
KDDDDKDDDDDDKD
KDDDKDDDDDDKDD
KDDKDDDDDDKDDD
KDKDDDDDDKDDDD
KKDDDDDDKDDDDD best case: almost 2 full chains protected
I have not tried backups in XO yet, but I'm really looking forward to testing the immutability, as we have it configured on all our Veeam backups at work.
Just to be sure, the XO immutability "agent" only does its immutability check by date, right?
Would it be possible to consider the entire backup chain related to the oldest immutable restore point instead? This would prevent user misconfigurations that result in insecure backup chains.
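To check my understanding of the example above, here is a minimal sketch (Go, nothing to do with XO's actual implementation; the pattern strings and the assumption that only the newest points are locked come from @florent's example) that counts, for each alignment of the full backup cycle, how many chains have their key backup inside the immutability window:

```go
// A rough sketch (not XO code) to sanity-check the example:
// retention = 14 restore points, immutability = 13, so only the oldest
// (leftmost) point can still be merged or rotated. A restore point is only
// trustworthy if its key backup (K) and every delta (D) it depends on are
// immutable; since the immutability window always covers the newest points,
// a chain is safe as soon as its K is inside the window.
package main

import "fmt"

func main() {
	const immutability = 13 // the newest 13 restore points are locked

	// The 7 possible alignments from the example, oldest point on the left.
	patterns := []string{
		"KDDDDDDKDDDDDD", // worst case
		"KDDDDDKDDDDDDK",
		"KDDDDKDDDDDDKD",
		"KDDDKDDDDDDKDD",
		"KDDKDDDDDDKDDD",
		"KDKDDDDDDKDDDD",
		"KKDDDDDDKDDDDD", // best case
	}

	for _, p := range patterns {
		retention := len(p) // 14
		protectedChains := 0
		for i, c := range p {
			// a chain starts at each key backup; it is fully immutable when
			// the key itself is among the newest `immutability` points
			if c == 'K' && i >= retention-immutability {
				protectedChains++
			}
		}
		fmt.Printf("%s -> %d immutable chain(s)\n", p, protectedChains)
	}
}
```

For the worst-case pattern this reports a single immutable chain, and for the best case two (the second one still being partial), which matches the example.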
Hi @olivierlambert @florent,
I didn't have much time to work on this in the last few weeks, but I finally could dig deeper thanks to the migratekit repo.
Essentially, they are delegating all the work to nbdkit and its vddk plugin (https://gitlab.com/nbdkit/nbdkit and https://libguestfs.org/nbdkit-vddk-plugin.1.html) by spawning an external process (https://github.com/vexxhost/migratekit/blob/a08325d420733e4eb26331d87bf6ef46d8cccd7f/internal/nbdkit/builder.go#L82).
The authentication info is simply the authentication to vCenter/ESXi provided by the end user, if I'm not mistaken, and the filename given to nbdkit is indeed gathered from the VirtualDeviceBackingInfo property. They are using the govmomi auto-generated library for this.
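To give a rough idea, this is the kind of nbdkit invocation that ends up being spawned (a simplified sketch on my side, not migratekit's actual builder; the socket path, credentials, morefs and file path are made up, and the vddk plugin parameters are the ones documented in the nbdkit-vddk-plugin man page):

```go
// Sketch of how a migratekit-style tool can hand everything to nbdkit and its
// vddk plugin. The credentials are the plain vCenter/ESXi ones and the file=
// argument is the "[datastore-name] filepath" string taken from the disk's
// VirtualDeviceBackingInfo. All concrete values below are placeholders.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"nbdkit",
		"--exit-with-parent", // stop nbdkit when the parent process dies
		"--readonly",
		"--unix", "/tmp/vddk.sock", // expose the disk as an NBD unix socket
		"vddk",
		"libdir=/opt/vmware-vix-disklib-distrib", // VDDK installed manually
		"server=vcenter.example.com",
		"user=administrator@vsphere.local",
		"password=+/run/secrets/vcenter-password", // '+' = read from a file
		"thumbprint=AA:BB:CC:DD",                  // vCenter TLS thumbprint
		"vm=moref=vm-1234",
		"snapshot=snapshot-5678", // the snapshot moref to read from
		"transports=nbdssl:nbd",
		"file=[datastore-name] myvm/myvm.vmdk",
	)
	if err := cmd.Start(); err != nil {
		log.Fatalf("failed to start nbdkit: %v", err)
	}
	log.Printf("nbdkit running with pid %d", cmd.Process.Pid)
}
```

The NBD endpoint exposed by nbdkit is then what the rest of the tool reads from to copy the disk data.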
For instance, on a snapshot of one of our VMs, you can see the property path at the top and the fileName property contains the "[datastore-name] filepath" string.
The "device[2000]" part of the path is from the list of devices attached to the VM that can also be accessed following the snapshot moref:
Migratekit is then filtering on the VirtualDisk type in the device list.
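For reference, enumerating those disks with govmomi looks roughly like this (a hedged sketch, not migratekit's code; I'm reading the device list straight from the VM's config here instead of following the snapshot moref, and the connection URL and VM inventory path are placeholders):

```go
// Sketch: fetch a VM's hardware device list with govmomi, keep only the
// VirtualDisk entries, and print the "[datastore] path.vmdk" from their backing.
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/vim25/mo"
	"github.com/vmware/govmomi/vim25/types"
)

func main() {
	ctx := context.Background()

	// Placeholder vCenter URL and credentials.
	u, err := url.Parse("https://administrator%40vsphere.local:secret@vcenter.example.com/sdk")
	if err != nil {
		log.Fatal(err)
	}
	client, err := govmomi.NewClient(ctx, u, true /* insecure */)
	if err != nil {
		log.Fatal(err)
	}

	finder := find.NewFinder(client.Client)
	vm, err := finder.VirtualMachine(ctx, "/DC/vm/myvm") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	// Pull the hardware device list (the same data the property browser shows
	// under config.hardware.device, e.g. device[2000] for the first disk).
	var props mo.VirtualMachine
	if err := vm.Properties(ctx, vm.Reference(), []string{"config.hardware.device"}, &props); err != nil {
		log.Fatal(err)
	}

	for _, dev := range props.Config.Hardware.Device {
		disk, ok := dev.(*types.VirtualDisk)
		if !ok {
			continue // keep only VirtualDisk devices, like migratekit does
		}
		if backing, ok := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo); ok {
			fmt.Println(backing.FileName) // "[datastore-name] path/to/disk.vmdk"
		}
	}
}
```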
Now, the problem in this setup is that nbdkit is using VDDK directly, but the development kit cannot be redistributed without a licence agreement from Broadcom: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-sdks-tools/8-0/virtual-disk-development-kit-programming-guide/the-virtual-disk-api-and-vsphere/developing-for-vmware-platform-products/redistributing-vddk-components.html
The user would have to download and install VDDK manually.
I hope this helps; let me know if you need more details on all this. I played a bit with pyvmomi 5+ years ago, but I never used the SOAP API "manually" though.
Thanks for the details @florent
@florent said in What is the status/roadmap of V2V (Migrating from VMware to XCPng/XO) ?:
the newer VMFS puts more locks on the files, locking the full chain of snapshots and base disks instead of locking only the active disk.
Even VMFS5 sometimes locks the full chain.
That explains why I had locking issues when trying to restart the source VM on VMware after a migration test.
I'll see if I can find anything on how to use NBD with VMware.
@olivierlambert Thanks for the feedback.
Is the limitation only due to VMFS, or to both the ESXi version and VMFS? Because vSphere 8 still supports VMFS5, we could imagine a 2-step migration by manually moving VMs to a temporary datastore. However, if the issue is the API change with vSphere 8, then I understand it would indeed be difficult.
I'm sure the dev team already explored the subject to build V2V in the first place, but just in case it helps, here are the relevant Veeam and VMware docs on VMDK transport modes (V2V uses NBD mode if I'm not mistaken):
https://helpcenter.veeam.com/docs/backup/vsphere/transport_modes.html?ver=120
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vddk-programming-guide/GUID-15395099-5300-4D3F-BCC3-E50DCDC954C2.html
I imagine building a viable alternative is quite a big project in itself.
@olivierlambert Essentially yes, though it would be great to have a recap of the current situation.
Hello, I'm opening this post to gather some information about the current status of the V2V process and the warm migration that goes with it.
Like many in the industry, we are currently exploring our options to migrate to an alternative to vSphere/ESXi. The V2V option offered by XO is great for that, but there are some limitations, especially if you are following the latest ESXi releases.
I have found the following information while browsing through the forums:
The following comment from Danp (25 May 2024):
While the developers are continuously improving this feature, I don't know if warm VM migration will eventually be supported for VMFS 6.
https://xcp-ng.org/forum/post/77898
The following comment from florent (2 May 2024):
[...] you can migrate without a NFS datastore, but on esxi 6.5+ , you'll need to shutdown the VM before starting the migration [...]
https://xcp-ng.org/forum/post/76709
I have briefly tested migrating a VM from vCenter/ESXi 8. I tried multiple combinations of VMFS5 and VMFS6 (on the same shared iSCSI datastore), connecting either to vCenter or to the ESXi host directly. Every test resulted in a cold migration.
So the questions are: what is the currently supported configuration? What is the roadmap for the V2V feature and its import sources?