Is there an ETA for fully functional deploy (on local storage) with differential backup, live migration, statistics and so on?
Perhaps with 8.3 stable release?
I'm interested mainly because of the bit-rot detection of ZFS.
Latest posts made by Paolo
-
RE: First SMAPIv3 driver is available in preview
-
RE: EOL: XCP-ng Center has come to an end (New Maintainer!)
@michael-manley Hi Michael, do you have any plan to release a new version working with the latest XCP-ng 8.3?
-
RE: XO - enable PCI devices for pass-through - should it work already or not yet?
@MathieuRA said in XO - enable PCI devices for pass-through - should it work already or not yet?:
So the error is "normal".
XOA tries to evacuate your host before restarting it, but your host's VMs have "nowhere to go" since you don't have another host.
Are you using XO from sources?
IMHO, if there is no host to evacuate the running VMs to, the reboot procedure should simply notify the user and shut down the VMs.
-
Disk performance on Stats
Why is a disk/SR/VDI performance graph not available on the stats page for hosts/VMs?
Thanks -
RE: Suggestions for new servers
I've done some benchmarks on my new servers and want to share the results with you.
Server:
- CPU: 2 x Intel Gold 5317
- RAM: 512GB DDR4-3200
- XCP-ng: 8.3-beta2
fio parameters common to all tests:
--direct=1 --rw=randwrite --filename=/mnt/md0/test.io --size=50G --ioengine=libaio --iodepth=64 --time_based --numjobs=4 --bs=32K --runtime=60 --eta-newline=10
VM Debian: 4 vCPU, 4 GB memory, tools installed
VM Windows: 2/4 vCPU, 4 GB memory, tools installed
Results:
- First line: throughput/bandwidth (MiB/s)
- Second line: IOPS, in KIOPS (Linux only)
Considerations:
- On bare metal I get full disk performance: approx. double read speed thanks to RAID1.
- On a VM, bandwidth and IOPS are approx. 20% of the bare-metal values.
- On a VM, the bottleneck is the tapdisk process (CPU at 100%), which can handle approx. 1900 MB/s.
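For reference, the common parameters listed above combine into a full fio invocation like this (the job name is my own label, and `--filename` points at my mdadm RAID volume; adjust it to the storage you want to test):

```shell
# Sketch of the benchmark command behind the results above.
# /mnt/md0/test.io targets the mdadm RAID volume; change it to your own path.
fio --name=randwrite-test \
    --direct=1 --rw=randwrite \
    --filename=/mnt/md0/test.io --size=50G \
    --ioengine=libaio --iodepth=64 \
    --time_based --numjobs=4 --bs=32K \
    --runtime=60 --eta-newline=10
```

Note that `--direct=1` bypasses the page cache, so the numbers reflect the storage stack rather than RAM.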
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
I suppose I can install the new servers with xcp-ng-8.3.0-beta2-test3.iso and then update to the RC/stable release with yum update.
Am I correct?
Regards
Paolo -
RE: Suggestions for new servers
My new servers will be available next week. The full specs are:
- 2 x CPU Xeon Gold 5317
- 384 GB DDR4-3200 RAM
- 2 x 10Gb Intel X550 cards
- Adaptec SmartRAID 3102E-81
- 2 x 480 GB SSD
- 2 x 12.8 TB NVMe
Planned setup (KISS):
- XCP-ng installed on the 2 x 480 GB SSDs in RAID1 with the HW controller
- SR on local storage using NVMe units with SW RAID1
- no HA
- single-host pools with VM replication between hosts/pools
- backup on external NFS file server
- 10 GbE networking (XCP-ng hosts, backup file server)
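The "SW RAID1 + local SR" step of the plan would look roughly like this (a sketch only: the device names /dev/nvme0n1 and /dev/nvme1n1 and the SR label are assumptions, so check with lsblk first):

```shell
# Build the mdadm RAID1 array from the two NVMe units
# (device names are assumptions -- verify with lsblk before running).
mdadm --create /dev/md127 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

# Create a local EXT SR on top of the array (run on the XCP-ng host;
# the name-label is just a placeholder).
xe sr-create name-label="Local NVMe SR" type=ext \
   device-config:device=/dev/md127 shared=false
```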
Questions / your opinion about:
- Is version 8.3 suitable for production use?
- How can I download the beta2 ISO?
- I'm thinking of using ZFS/RAID1 instead of mdadm/EXT4 for local storage (with at least 16 GB of RAM for Dom0). Is it a good choice?
- Are bonded links (2 x 10Gb) for the XCP-ng servers and the backup file server (with the switch) useful, or does the maximum backup speed remain 10 Gb?
- any other suggestion?
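If I go the ZFS route, the bit-rot detection I'm after would work roughly like this (a sketch: the pool name `tank` and the device names are placeholders):

```shell
# Create a ZFS mirror on the two NVMe units (names are assumptions).
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1

# Periodically scrub the pool: ZFS re-reads all data, verifies checksums,
# and repairs silent bit rot from the healthy mirror side.
zpool scrub tank
zpool status -v tank   # reports any CKSUM errors found
```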
Thanks
-
RE: Suggestions for new servers
Thanks a lot @olivierlambert and @planedrop for your answers.
IMHO, with NVMe (PCIe) devices in a RAID1/RAID10 configuration, the mdadm CPU overhead should be negligible, and reliability should also be very good.
Anyway, I'll investigate the cost of a hardware RAID controller. -
Suggestions for new servers
I'm planning to buy new servers with an NVMe (U.2) based local SR (and to install XCP-ng 8.3).
I have some questions:
- Can I use a single software (mdadm) RAID1 or RAID10 volume for both the OS and the SR?
- Is it better to use two distinct (mdadm) volumes for the OS (2 x SSD RAID1) and the SR (2 x NVMe RAID1)?
- Is VROC an option (I think not, since it is software RAID)?
- Is a HW controller an option worth evaluating?
- Is there any size limit (I'm thinking of using 2 x 7.68 TB U.2 NVMe Samsung PM9A3 DC series)?
- I will opt for CPUs (2 x Intel Gold 5317) with good single-thread performance to maximize tapdisk throughput. Is that a good idea?
- Other parts: Intel X550T2 2 x 10 GbE, 256 GB DDR4-3200 RAM, redundant power supply
Any other suggestion about storage for a new server?
Thanks
-
RE: Not enough server memory when start VM
Hi Olivier,
The host is a single host (no other hosts in the pool) with local storage (EXT) only.
I've done some other tests:
- Tried to detach the disks, re-create the VM, and re-attach the disks: no success.
- The VM starts if memory is reduced from 16 GB to 10 GB.
- I've created a new VM (W2019 template) and the behaviour is the same: with 16 GB of memory the error is "not enough memory"; with 10 GB the VM starts.
- After starting one of the 10 GB memory VMs, no other VM can be started (even with only 1 GB of memory).
It's like the host has lost 15 GB of free memory...
SOLVED: restarting the toolstack fixed the problem: now I can start the 16 GB VM plus another 9 GB VM, using all the available memory (25.5 GB).
I'd be curious to know what happened...
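For anyone hitting the same symptom: the toolstack restart that solved it for me can be done from the host console or over SSH with the standard XCP-ng command (running VMs keep running):

```shell
# Restart the XAPI toolstack without touching running VMs
xe-toolstack-restart

# Then check how much memory the host reports as free
xe host-list params=name-label,memory-free
```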
-