@darkbeldin I believe that it was created in UEFI mode but I am not certain.
When starting the .VHD file in UEFI mode I get the following screen:
It doesn't detect the hard drive at all as far as I can see....
I have a Windows Server 2016 VM that gets stuck on "Boot Device: Hard Disk -- Success."
Starting the VHD file with a UEFI or Secure UEFI firmware configuration drops straight to the UEFI shell prompt: the VHD is not detected as bootable media at all.
When the VM is set to BIOS, it hangs on the "Boot Device: Hard Disk -- Success" prompt.
Attaching the VHD image (read-only) in Windows Disk Management on a Windows machine shows all of the data, so I don't think the .VHD file itself is the issue.
I get the same result whether I start the VHD from NFS remote storage or from local storage.
What could cause this?
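One thing worth checking is whether the VM object itself was created for UEFI or BIOS boot, since a mismatch between the VM's firmware setting and how the disk was partitioned (GPT vs. MBR) would produce exactly this symptom. The snippet below is a hedged sketch: the sample parameter value is illustrative, not taken from a real host, where it would come from something like `xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-params`.

```shell
# Illustrative sketch: decide whether a VM is configured for UEFI or BIOS
# boot from its HVM-boot-params. The value below is a hard-coded sample;
# on a real XCP-ng host you would read it with:
#   xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-params
boot_params="firmware: uefi; order: dc"   # assumed sample output
case "$boot_params" in
  *uefi*) echo "VM firmware: UEFI" ;;
  *)      echo "VM firmware: BIOS" ;;
esac
```

If the firmware mode reported here does not match the partitioning scheme of the VHD, that mismatch is a likely cause of the boot failure.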
@olivierlambert Thanks for the reply. I'll change to the "master" branch. I thought you were on a release schedule because there's a "releases" tab listing releases.
I was on the XO website today looking through some of the news articles and noticed a few about recent XO releases.
It seems the latest GitHub Community Edition release was 5.51.*.
Are the later releases not available in the Community Edition?
@fx991 "Mostly fine" means you may run into issues. Just shut the VM down and restart it on the new host.
@BenjiReis Thanks! I was looking in host-view and couldn't find this.
@fx991 You cannot live-migrate between hosts whose processors have different instruction sets.
You would have to shut the VM down, migrate it, and power it back on once the migration completes.
The E5506 and E5620 have similar instruction sets, but the E5506 lacks hyperthreading and the AES instructions, so there are some differences.
The E5-2643 v2 is completely different.
If you power the VM off and then migrate it, it should be fine. From your post, it sounds like you live-migrated it and rebooted it afterwards, then ran into issues, which is what I would expect to happen.
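To see concretely which features differ, you can diff the two hosts' CPU flag sets. The flag lists below are small illustrative samples (on real hosts you could collect them with `xe host-cpu-info` or `grep -m1 flags /proc/cpuinfo`):

```shell
# Illustrative sketch: find CPU features the newer host has that the
# older host lacks. Flag lists are hypothetical samples, not real dumps.
printf '%s\n' sse4_1 sse4_2        | sort > old_cpu_flags.txt  # e.g. E5506
printf '%s\n' sse4_1 sse4_2 aes ht | sort > new_cpu_flags.txt  # e.g. E5620
# Lines only in new_cpu_flags.txt = features missing on the older CPU:
comm -13 old_cpu_flags.txt new_cpu_flags.txt
```

A non-empty result means a VM that has used those features cannot safely live-migrate toward the older host, which is why a cold migration (shut down, migrate, boot) is the reliable path here.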
Is it possible to rename networks on XO Hosts?
For example, "Pool-wide network associated with eth2" is not very descriptive. Can we rename these to something friendly like "Vlan35" or "Storage Network"? If so, how would this be done?
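For what it's worth, one way this can be done from the host CLI is with `xe`, which does support setting a network's `name-label`. A hedged sketch (`<network-uuid>` is a placeholder you would fill in from `xe network-list`):

```shell
# List networks to find the UUID of "Pool-wide network associated with eth2":
xe network-list
# Then give it a friendlier label:
xe network-param-set uuid=<network-uuid> name-label="Storage Network"
```

The new label should then appear wherever the network is shown.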
Also, the (i) button at the bottom right corner under networks on a host redirects to a 404 page on XO-Server 5.51.1 / XO-Web 5.51.0. It points here.
XO-Server is 5.51.1 and XO-Web is 5.51.0.
I'm using version 8.1 with all of the latest updates installed. The pool has two hosts currently. There was a third, but it has been removed from the pool for maintenance.
I'm not sure why it is disagreeing with the host, because it works perfectly from XCP-ng Center with no complaints. It just doesn't work when requesting the snapshot from XO built from the sources.
Every time I try to take a snapshot in XO, I get the following message: "Unknown error from the peer." When looking at it under Settings > Logs, it says:
MESSAGE_REMOVED()
This is a XenServer/XCP-ng error.
This is the full text of the log:
```
vm.snapshot
{
  "id": "ad416996-73f5-1b47-4bd3-c4a96e913b7a"
}
{
  "code": "MESSAGE_REMOVED",
  "params": [],
  "call": {
    "method": "Async.VM.snapshot_with_quiesce",
    "params": [
      "OpaqueRef:bb24b322-2517-4f8c-8f3a-640d4f38c29b",
      "host.domain.com_2020-10-22T01:35:26.549Z"
    ]
  },
  "message": "MESSAGE_REMOVED()",
  "name": "XapiError",
  "stack": "XapiError: MESSAGE_REMOVED()
    at Function.wrap (/etc/xo/xo-builds/xen-orchestra-202002022028/packages/xen-api/src/_XapiError.js:16:11)
    at /etc/xo/xo-builds/xen-orchestra-202002022028/packages/xen-api/src/index.js:630:55
    at Generator.throw (<anonymous>)
    at asyncGeneratorStep (/etc/xo/xo-builds/xen-orchestra-202002022028/packages/xen-api/dist/index.js:58:103)
    at _throw (/etc/xo/xo-builds/xen-orchestra-202002022028/packages/xen-api/dist/index.js:60:291)
    at tryCatcher (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromise0 (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/promise.js:725:18)
    at _drainQueueStep (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues (/etc/xo/xo-builds/xen-orchestra-202002022028/node_modules/bluebird/js/release/async.js:15:14)
    at runCallback (timers.js:810:20)
    at tryOnImmediate (timers.js:768:5)
    at processImmediate [as _immediateCallback] (timers.js:745:5)"
}
```
I tried restarting the toolstack on the host, and I was able to successfully take a snapshot from XCP-ng Center with no issues. I'm running XO from the sources and could try to rebuild it. Has anyone run into a similar issue?