andriy.sultanov said in XCP-ng 8.3 updates announcements and testing:
does this reproduce if you reboot again
Yes, this does. Logs have been provided via pm.
[18:38 vms04 ~]# xe host-call-plugin host-uuid=5f94209f-8801-4179-91d5-5bdf3eb1d3f1 plugin=raid.py fn=check_raid_pool
{}
[18:38 vms04 ~]#
[18:38 vms04 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb[1] sda[0] sdc[2]
11720658432 blocks super 1.2 512k chunks
unused devices: <none>
[18:56 vms04 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri May 2 14:43:06 2025
Raid Level : raid0
Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri May 2 14:43:06 2025
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : vms04:0 (local to host vms04)
UUID : bbdc7253:792a8ada:ee21a207:4b8d52d2
Events : 0
Number   Major   Minor   RaidDevice   State
   0       8       0         0        active sync   /dev/sda
   1       8      16         1        active sync   /dev/sdb
   2       8      32         2        active sync   /dev/sdc
[18:56 vms04 ~]#
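For reference, here is a minimal sketch of the kind of check I would expect, simply parsing /proc/mdstat to list md arrays (this is only an illustration, not the actual raid.py plugin code):

#!/usr/bin/env python
# Illustration only -- not the actual raid.py plugin.
# Lists md arrays and their state by parsing /proc/mdstat.
import re

def list_md_arrays(path="/proc/mdstat"):
    arrays = {}
    with open(path) as f:
        for line in f:
            # e.g. "md0 : active raid0 sdb[1] sda[0] sdc[2]"
            m = re.match(r"^(md\d+)\s*:\s*(\S+)\s+(\S+)\s+(.*)$", line)
            if m:
                name, state, level, members = m.groups()
                arrays[name] = {"state": state, "level": level, "members": members.split()}
    return arrays

if __name__ == "__main__":
    print(list_md_arrays())

On this host that does report md0 as active, so the array is definitely visible from dom0; whatever check_raid_pool filters on, it is not simply that the array is missing.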
stormi No stats until the toolstack restarts, as with previous candidates
So it looks like it was a browser cache problem, solved by clearing the cache.
Hi,
In XO built from source (commit ab569), the tags I'm using don't appear in the pick list.
In XOA, they are available in the pick list.
Any idea?
Under what circumstances is the RAID array status displayed here? I have an mdadm RAID array on my host (dom0), but here it says "No RAID information available":
XCP-ng-JustGreat Until this is fixed, you need to restart the toolstack after each host reboot; since hosts aren't rebooted that often, it's not really a problem.
Greg_E Please try restarting the toolstack once more.
lawrencesystems
Thanks a lot for all the information and recommendations!
lawrencesystems
And what is this option "Use differential restore" for? Do you have experience with it?
lawrencesystems
Yes, I understand that a normal restore creates a new VM. But there are situations where I just need to overwrite the original VM itself. So is that only possible with a snapshot, while a restore from backup always creates a new VM?
Hi,
how exactly should I proceed if I want to restore a VM from backup, not as a new VM, but by restoring/replacing the original one so it keeps its GUID etc.?
Thanks in advance.
The update went fine and everything is working as expected.
gduperrey said in XCP-ng 8.3 updates announcements and testing:
xapi: Re-enabled nested virtualization in 8.3, with the same limitations as in 8.2.
Since I keep bringing up nested virtualization here on the forum, I of course immediately tried the support in 8.3.
Setup:
Windows installation on the nested hypervisor went ok and the system seems to be working fine.
The problem occurred with Debian: the 12.9 netinstall ISO was used, with UEFI. The system boots up and shows the familiar install screen (Graphical install, Install, ...). Regardless of the type of installation chosen, immediately after starting it the nested XCP-ng 8.3 hypervisor crashes and reboots. By the way, this problem with Debian also occurs on VMware: if I run nested XCP-ng 8.3 there, the Debian installation crashes it just the same.
It looks like this issue occurs only when saving the backup to non-block storage. When saving to block storage ("Store backup as multiple data blocks instead of a whole VHD file: ON"), it does not, at least in my environment.
It's really confusing. I'm using the current version of XO and I see this in the log:
The value in the file mentioned is the expected value and the backup file itself is the appropriate size:
Xen Orchestra, commit 494ea
As if XO calculated the value before the backup was complete, and the file then had time to grow a bit more?
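Purely to illustrate the race I suspect (hypothetical code, not anything from XO), capturing the size while the file is still being written gives a value that later writes invalidate:

import os, tempfile

# Hypothetical illustration of the suspected race -- not XO's code.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
    f.flush()
    expected = os.path.getsize(f.name)  # size captured "too early"
    f.write(b"y" * 512)                 # backup still growing
    f.flush()

print(expected, os.path.getsize(f.name))  # 1024 vs 1536 -> a strict size check fails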
Hello florent, may I remind you of this still-open question: when and under what circumstances is compression used, and when is it not? Thanks a lot.
tjkreidl We don't need performance, but we do need to test how XCP-ng pools, networking, migration, live migration, backup, import from VMware and so on work. It's just a playground where we can run relatively many XCP-ng hosts; it's not about performance but about efficiency and low requirements, because it's only a place where we learn, validate how things work, and prepare the process for the final migration from VMware to XCP-ng. We originally had two R630s ready for this, then four, but given the power consumption it would have been wasteful to dedicate physical hypervisors to it, so in the end we decided to virtualize it all. And it runs on ESXi because XCP-ng works seamlessly there under nested virtualization.