An updated installer for XCP-ng 7.5.0
Rinux last edited by Rinux
I tried the ISO, but I am not able to create the volume with software RAID. I don't understand where I'm going wrong: after choosing the volumes, the screen stays blue and the installation does not continue, even after waiting for several minutes...
What I see:
- one mdadm process that does nothing
- strangely, I expected to find /proc/mdstat, but it is not present
- error messages on the log terminal (F3):
mdadm: Unrecognised md component device - /dev/sda
mdadm: Unrecognised md component device - /dev/sdb
I tried the various options (install, safe, multipath), but none of them seems to work, and I can't figure out where I'm going wrong.
PS: The installation test was performed on non-empty disks, with existing partitioning and an old installation of XenServer 6.2. I have not tried to wipe the disks; I'd first like to understand whether I'm skipping some steps...
@rinux Have these disks been used for RAID in the past? There may be some cases where the steps we took to clean the disks are incomplete.
What's the MD5SUM of the ISO you used? (It would be a good habit to give that kind of information automatically :))
Rinux last edited by
@stormi no, the disk pair contains an old installation of XenServer (no RAID), and I don't think it has ever been used for anything else. The md5sum of the ISO is deb2a0990390a6a4eb51a428b6a53995 (the same as the one shown here!).
I'm not very familiar with the Xen internals, but in dom0 I can't find any trace of software RAID support... I figured I could find something with "dmesg | grep md"... Are you sure I'm not forgetting some boot options?
@frank-s newbie here, just one week of trying XCP-ng, and I have exactly the same problem. I had already created a RAID and done a few things; when I try to install again, md127p1 is already in use. I know I need to zero the RAID metadata... but I don't know how. Should I use another live Linux?
For me it was old mdadm superblocks. Once the partition tables have been deleted, they can be anywhere, depending on the original RAID setup. The best thing is to zero the entire disk:
dd if=/dev/zero of=/dev/sdx bs=1M status=progress
where x is your drive letter.
Do this for each RAID disk.
Then go and drink some beers; it will take some time...
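The steps above could be wrapped in a small helper. This is only a sketch (the device names in the usage comments are examples, not from this thread), and it is destructive, so double-check device names with lsblk first:

```shell
#!/bin/sh
# DESTRUCTIVE: overwrites a device with zeros.
# Usage: zero_disk <device> [count-in-MiB]
# With no count it zeros the whole device, as suggested above.
zero_disk() {
    dd if=/dev/zero of="$1" bs=1M ${2:+count="$2"} status=progress
}

# Example usage with assumed device names (check with lsblk first):
# zero_disk /dev/sda    # whole disk
# zero_disk /dev/sdb    # whole disk
```

The optional MiB count also allows zeroing only the beginning of a disk if you don't want to wait for a full pass.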
@frank-s, thanks a lot for the help!
Where do I run this command: from the shell option on the XCP-ng installation screen, on F3 while installing, or from another live Linux?
To avoid filling the whole disk with zeros, you can probably "just" do:
mdadm --zero-superblock /dev/sdX (for each disk)
If that's not enough, please report back.
@dvdhngs when you are in any menu in the install, use Alt key + right arrow to get a console.
I did that, Olivier, but for me it didn't work. That's why I zeroed both disks entirely. It's worth a try though, as it doesn't take long.
Do you remember which mdadm superblock version you used on these disks before?
Hmmm. It might have been 0.9, as it was for the boot partition. It wasn't whole-disk RAID though; each partition was a different RAID set.
I see now! Because we only zero the superblock on the whole disk, we don't clear the superblocks on the existing partitions.
I wonder if a loop that runs the zero-superblock command on each partition would solve this.
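A sketch of that idea (my own illustration, not the installer's actual code): loop over the disk itself and every partition on it, ignoring devices without a superblock. The device names in the usage comments are assumptions; NVMe devices would need a different glob (e.g. /dev/nvme0n1p*):

```shell
#!/bin/sh
# Zero mdadm superblocks on a disk and on every partition it contains.
# Devices without a superblock just produce an ignored error.
zero_md_superblocks() {
    disk="$1"
    for dev in "$disk" "$disk"[0-9]*; do
        [ -b "$dev" ] || continue                        # skip unmatched globs
        mdadm --zero-superblock "$dev" 2>/dev/null || true
    done
}

# Example with assumed device names:
# zero_md_superblocks /dev/sda
# zero_md_superblocks /dev/sdb
```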
Probably that would work. As an alternative, using dd to zero the first 45 GiB of each disk shouldn't take too long. I wasn't pressed for time and had other things to do, so I just zeroed the disks entirely, after which setup was flawless.
Yes, but your feedback was precious for understanding why our zero-superblock on the whole drive wasn't enough. Now we can maybe improve the installer to avoid this problem in the future!
Issue created: https://github.com/xcp-ng/xcp/issues/107
Glad to be of help. The new RAID installer is a really good thing, and I am using it now on two servers. The downside, however, is that it uses whole-disk RAID. If it used partition-based RAID 1, and XCP-ng were installed without a local storage repository, it would be possible to manually create RAID 10 for the storage after installation; with mdadm this could be done with three or more disks. At that point all the installation partitions would be RAID 1 with all disks' partitions as members, but the bulk of each disk (assuming large disks) would be left free for RAID 10, i.e. faster local storage. Would that be an over-complicated change to the installer, or is this a possibility?
I completely understand your idea, but I don't see a simple solution (I mean, even just in terms of a possible menu in the current UI). If you can go deeper on the functional side (drawing the process with basic wireframes), it would help to specify it and maybe make it real (one big rule in dev: more specs = easier development).
I wasn't suggesting that the installer should necessarily do RAID 10; for XCP-ng itself, RAID 1 is sufficient. I was just suggesting that each partition of the installation could be a separate RAID 1 set, rather than doing RAID 1 on a whole-disk basis. If the end user chose to have the installer create local storage, it could just be RAID 1 on another (big) partition. For those who want improved performance, however, there would be the possibility to manually create RAID 10 local storage post-install.
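For reference, the post-install step described here might look roughly like the sketch below. The device names and four-disk layout are hypothetical, and the `xe sr-create` parameters follow XenServer's standard CLI but should be checked against the documentation before use:

```shell
#!/bin/sh
# Hypothetical post-install helper: build a RAID 10 array from spare
# partitions and register it as a local LVM storage repository.
# Usage: create_raid10_sr <host-uuid>
create_raid10_sr() {
    # Assumed spare partitions on four disks -- adjust to your layout
    mdadm --create /dev/md10 --level=10 --raid-devices=4 \
        /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
    xe sr-create host-uuid="$1" type=lvm content-type=user \
        name-label="Local RAID10" device-config:device=/dev/md10
}
# Not called here; run manually after double-checking device names.
```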
What is a wireframe? (I am not a developer.)
I was suggesting you just draw the workflow as you imagine it during the install. Example of a wireframe:
@olivierlambert couldn't open /dev/sda for write - not zeroing
mdadm --stop /dev/md127
Cannot get exclusive access to /dev/md127
Perhaps a running process, mounted filesystem or active volume group?
I will try with dd now.
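For the "cannot get exclusive access" error above, the array is usually still held by something on top of it. A possible sequence before zeroing (a sketch, assuming the array is /dev/md127 and may be claimed by LVM; each step is allowed to fail if it doesn't apply):

```shell
#!/bin/sh
# Release whatever holds the array, then stop it and wipe its metadata.
vgchange -an 2>/dev/null || true            # deactivate LVM volume groups on top of it
umount /dev/md127p1 2>/dev/null || true     # unmount the partition mentioned earlier in the thread
mdadm --stop /dev/md127 2>/dev/null || true # now exclusive access should be possible
# Assumed member disks -- adjust to your system:
mdadm --zero-superblock /dev/sda /dev/sdb 2>/dev/null || true
```

If the array still refuses to stop, zeroing the whole disks with dd from a live environment, as described earlier in the thread, remains the fallback.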