@olivierlambert oh ok, I saw that and thought your comment was specific to the ESXi 7 version.
Thanks for the speedy response. I'd love to know how you clone yourself to be so responsive 24/7. Or am I talking to a Vates instance of ChatGPT?
@alexredston I don't know what "old server" means to you, but 42 MB/s is about 336 Mbps, which is right at the ~320 Mbps (40 MB/s) ceiling of an Ultra2 SCSI bus. If I'm right and it's Ultra2 SCSI, there's little you'll be able to speed up by increasing available network bandwidth.
https://en.wikipedia.org/wiki/Parallel_SCSI#:~:text=At%2010%20MHz%20with%20a,rate%20of%20640%20MB%2Fs.
Mike
Admittedly, I'm not proficient in this kind of programming, not at all. I tried reading through the code you linked to above and it makes no sense to me. Normally I would try to fix this myself, but I just don't have the time to invest in learning something that's this new to me.
However, I know that in a bash script we would simply partition the drive, and it wouldn't change anything about the subsequent commands, because you're still writing to the drive in the same manner. How is that different in this code?
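To illustrate what I mean, a rough bash sketch (device names are just examples from my setup, not anything from XO's actual code):

# partition first, then put the filesystem on the partition instead of the bare device...
parted -s /dev/xvdb mklabel msdos mkpart primary fat16 1MiB 100%
mkfs.vfat -F 16 /dev/xvdb1
# ...and nothing downstream changes - you still mount and write files the same way
mount /dev/xvdb1 /mnt
cp user-data meta-data /mnt/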
Your comments make me sad, and afraid that you won't take up this challenge of supporting Windows VMs with cloud-init. If you do put this off into the future, I would ask that you update any blogs or documents to state explicitly that Windows VMs are excluded from cloud-init compatibility for now, to save anyone else from spending as much time on this as I have.
On a personal note, the news that warm VM migration from VMware will be built into the GUI for all VMware versions is bittersweet: it gives us what we need to migrate our virtualization platform off VMware, but the lack of Terraform / cloud-init support for Windows would mean having to pivot and re-engineer our automation service.
Please advise what we can expect in terms of a timeline for squashing this bug.
Thanks for all you have done to create and continually improve this platform.
Mike
@olivierlambert yes, I think that would do it. I can't see any other difference at this point.
As far as I can tell, XO is creating the cloud config on a "VFAT" drive with no partition table. Windows needs a partition table. I think this problem could be addressed simply by adding a FAT16 partition table - the resulting config drive would still be usable by Linux VMs, and additionally by Windows VMs.
FAT32 could be part of the problem, too - the minimum FAT32 volume size is about 32 MiB, which is already bigger than this entire 10 MB drive...
https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system#Size_limits
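To make the proposal concrete, here's a rough sketch of how the same 10 MiB image could be built with an MBR table and a FAT16 partition. This is my guess at an approach, not how XO's code actually works, and config.img is just an illustrative name:

truncate -s 10M config.img                     # same 10 MiB size XO uses today
echo 'start=2048, type=6' | sfdisk config.img  # one MBR partition, type 6 = FAT16
LOOP=$(losetup -fP --show config.img)          # -P exposes the partition as ${LOOP}p1
mkfs.vfat -F 16 "${LOOP}p1"
# mount ${LOOP}p1, copy user-data/meta-data in, umount, then: losetup -d "$LOOP"

The resulting ~9 MiB FAT16 partition sits comfortably within FAT16's limits, so the FAT32 size constraint above never comes into play.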
I just ran through this experiment.
Used a terraform plan to deploy a Windows VM.
Used a terraform plan to deploy a Linux VM.
On the Linux VM, attached the "XO CloudConfigDrive" device ("/dev/xvdb" on RHEL in my case), used fdisk to write a DOS partition table, and added a FAT16 partition.
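From memory, roughly the commands I used (run as root; back the files up first, because re-partitioning strands the old filesystem that starts at sector 0):

mount -t vfat /dev/xvdb /mnt && cp -a /mnt/. /tmp/cfg && umount /mnt
printf 'o\nn\np\n1\n\n\nt\n6\nw\n' | fdisk /dev/xvdb   # new DOS label, one primary partition, type 6 = FAT16
mkfs.vfat -F 16 /dev/xvdb1
mount /dev/xvdb1 /mnt && cp -a /tmp/cfg/. /mnt/ && umount /mnt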
Here is what I see from fdisk on a "normal CloudConfigDrive
Command (m for help): p
Disk /dev/xvde: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6c302ad6
Here is what I see from fdisk on a "Windows-enabled" CloudConfigDrive
Command (m for help): p
Disk /dev/xvdc: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x54d69752
Device     Boot Start   End Sectors Size Id Type
/dev/xvdc1      2048 20479   18432   9M  6 FAT16
================================
Now, cloudbase-init is able to see the CloudConfig drive and act upon it.
Edit: I see now this should probably have been put in the Xen Orchestra forums - can it be moved? Sorry!
Hello All
Having read the recent Terraform and Cloud-init blogs and emails over the past year or so, we decided to use Terraform to deploy our VMs.
Unfortunately, it sounds like nobody has tried doing this with Windows VMs. If you have, or know anybody who has... PLEASE let me know what I'm doing wrong here.
We have followed the tutorials and online examples to create the Terraform configurations, and it works: the 10 MB "NoCloud" configuration drive is created and the VM starts successfully.
Joy!
Except...
Cloudbase-Init (the cloud-init-compatible service for Windows) is unable to mount the config drive - Windows "sees" it as an unformatted drive. I can have Windows format it, but there's nothing I can do to mount it as-is.
I have verified that the drive IS being created - I disconnected the VDI from the Windows VM, attached it to a Linux VM and presto - I can mount the drive and see all the configuration that I've written to it with Terraform.
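If anyone wants to reproduce the check, it's roughly this (device name will vary; note that you mount the whole device, since there's no partition table):

mount -t vfat /dev/xvdb /mnt
ls /mnt    # the NoCloud layout: meta-data and user-data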
I thought maybe the Terraform-Provider guys/gals might be interested (https://github.com/terra-farm/terraform-provider-xenorchestra/issues/229), but they indicated that this would be controlled by how XO creates the cloud-init drive.
I'm here to say - whatever you're doing, Windows can't mount it. I have been reading a LOT of material online, and I can see other hypervisors doing this for Windows VMs and Terraform; apparently nobody but me has tried it with XO.
I'm hoping you have some easy advice for something that I have missed, so I can get on with my projects.
If not - please consider including some way of formatting this config drive so that it can be mounted by both Linux and Windows VMs, not just Linux VMs.
Thank you for your time
Mike
@ddelnano OK I will, thank you for taking the time to respond. I wasn't sure if having an account at GitHub was enough of a qualification to post an issue. I'll take care of doing that ASAP.
But I'm curious - I don't know if this is a bug, or a feature I haven't learned how to use yet. If we already have an ISO on an SR, do we even need a network build of CentOS/RHEL? Shouldn't it be possible for the new VM to mount that existing ISO as the source for the build, instead of pulling everything over the network?
The new VM mounts the guest tools ISO in the DVD drive - which I don't even want, as I prefer to install a package afterwards. Is it possible to mount the RHEL/CentOS ISO there instead?
In this situation, that would eliminate 3 minutes of upload and 3 minutes of checksumming, and would probably halve the time it takes to build the new VM/template.
Thanks again - I'd never heard of Packer before a friend turned me on to it, and now I really want to get the most out of it!
@pedroalvesbatista thank you for asking - yes, I did start with that and used it as a template for a RHEL 8.7 VM.
I want there to be NO DOUBT - I love how this works. I just wish it didn't have those bugs.
@olivierlambert ah nuts I pasted in the wrong repo.
Yes, I saw that - this is all still relevant, because I am indeed using this plugin: ddelnano/packer-plugin-xenserver
I'll update my original post.
Unfortunately, I'm not familiar with the underlying programming necessary to do "pull requests" and things like that. Otherwise I would gladly contribute a fix.
I'm posting here to get information that I have not been able to find elsewhere, having read everything I can find about using Packer to deploy VMs through XO, using this Packer plug-in: https://github.com/ddelnano/packer-plugin-xenserver
Over a few weeks I have managed to find a Packer and RHEL Kickstart configuration that produces a VM in XO. That was a good feeling.
But we also still have to support the VMware platform, and having used the VMware plugin for Packer, I can see important differences that do seem to be issues when you think about them. I'm not sure where "the right place" is to report this, so I'm starting here, hoping I'll either find the right eyeballs or someone who can direct me to the right place.
References: https://github.com/ddelnano/packer-plugin-xenserver
Setting the required iso_url variable does not result in the ISO being cached. That poor ISO file is uploaded to an SR each and every time, and instead of overwriting the existing one, a new copy is spawned on every build. My iso_url is set to use the local filesystem; it would be nice if the plug-in checked whether the file already exists in the SR before uploading another copy, or at least cleaned up after itself by deleting the copy. This seems like a bug. (A manual cleanup sketch follows after this list.)
Setting the clone_template variable to one of the XO-provided base templates results in an error during packer build complaining that "multiple" templates have been found. Why is it querying across all pools when I had to provide credentials for a specific XCP-ng host? It's easy to work around by making a uniquely named copy of the template in the pool I'm using, but nothing in the documentation says this is required. This seems like a bug.
Setting keep_vm to "always" does not result in the Packer-built VM being kept at the end of the build. This seems like a bug.
Setting iso_checksum_type to "none" does not bypass the checksum test. It may be a good thing to perform this during the initial upload of an ISO, but... every time?
The "following documentation" link in "In order to see an exhaustive list of configuration options for the packer builder please see the following documentation." is broken. (https://github.com/ddelnano/packer-plugin-xenserver/tree/master/examples)
The "examples" link in "See the examples for working boot commands." is broken. (https://github.com/ddelnano/packer-plugin-xenserver/blob/master/docs/builders/iso/xenserver-iso.html.markdown)
The "xenserver-iso docs" link in "For complete documentation on configuration commands, see the xenserver-iso docs" is broken. (https://github.com/ddelnano/packer-plugin-xenserver)
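In the meantime, the stale ISO copies can be cleaned up by hand with the xe CLI on a pool host - a sketch, with the name-label being just an example from my builds:

xe vdi-list name-label="rhel-8.7-x86_64-dvd.iso" params=uuid,name-label,sr-uuid
xe vdi-destroy uuid=<uuid-of-the-stale-copy>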
In the end... I've got something that works. I'm learning to live with these things, but I would love to know whether further development is going to be sponsored.
The XCP-ng/XO, Packer, and packer-plugin-xenserver developers have my thanks for providing an alternative to VMware.
Mike
Edited to fix my error in referencing the older packer-plugin instead of the new one being maintained by Mr. Dom Del Nano (ddelnano)