@olivierlambert aha, I misunderstood. Should I open another topic or perhaps a support ticket?

Posts
-
RE: Live Migration Very Slow
-
RE: ACPI Error: SMBus/IPMI/GenericSerialBus
@dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it is strange it started after the latest set of updates.
-
RE: Live Migration Very Slow
Hi, sorry for revisiting an older topic, but we have the same issue with slow VM migration. We changed from a 2x1G to a 2x10G network, but migration performance from one host's local SR to another host's local SR has not improved much.
Using XCP-ng 8.2.1 with up-to-date patches. Local storage on both hosts is SSD RAID1, ext4.
It would be very good if we could improve this situation. Currently we are seeing only 5% network utilisation.
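A raw throughput test between the two hosts would at least rule out the network layer itself. A rough sketch (this assumes iperf3 is installed in dom0 on both hosts, which it is not by default, and the IP is only a placeholder):
# on the destination host: start a listener
iperf3 -s
# on the source host: run a 30-second test with 4 parallel streams (10.12.9.3 is a placeholder for the destination's IP)
iperf3 -c 10.12.9.3 -P 4 -t 30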
-
RE: ACPI Error: SMBus/IPMI/GenericSerialBus
Found a link describing a similar issue and a fix that disables ACPI power monitoring. Would that have any impact on XCP-ng - i.e. is this feature used by anything?
https://www.suse.com/support/kb/doc/?id=000017865
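For reference, the workaround in that article amounts to blacklisting the ACPI power meter driver. Something along these lines should translate to XCP-ng as well (untested here; the .conf file name is arbitrary):
# prevent the driver from loading at boot
echo "blacklist acpi_power_meter" > /etc/modprobe.d/acpi_power_meter.conf
# unload it right away, if it is built as a module
rmmod acpi_power_meter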
EDIT: Perhaps it is netdata. I will disable netdata and check again.
-
ACPI Error: SMBus/IPMI/GenericSerialBus
With the latest XCP-ng updates, I am getting dmesg errors. They appeared immediately after
yum update
finished, and remain after reboot. Anyone seen this before and knows what to do?
[Mar19 10:20] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
[ +0.000009] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
[ +0.000008] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
[ +0.999960] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
[ +0.000008] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
[ +0.000008] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
[ +0.999961] ACPI Error: SMBus/IPMI/GenericSerialBus write requires Buffer of length 66, found length 32 (20180810/exfield-393)
[ +0.000009] ACPI Error: Method parse/execution failed \_SB.PMI0._PMM, AE_AML_BUFFER_LIMIT (20180810/psparse-516)
[ +0.000008] ACPI Error: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20180810/power_meter-338)
This is an HPE DL325 Gen10 EPYC system with XCP-ng 8.2.1.
-
RE: Epyc VM to VM networking slow
@olivierlambert said in Epyc VM to VM networking slow:
No obvious solution yet, it's likely due to an architecture problem on AMD, because of CCDs and how CPUs are made. So the solution (if there's any) will be likely a sum of various small improvements to make it bearable.
I'm going to Santa Clara to discuss that with AMD directly (among other things).
Do we have other data to back this up? The issue does not seem common outside of Xen. I do hope some solution comes out of the meeting with AMD.
-
RE: Misleading status in VM->Backup screen
@olivierlambert Nice
Thanks for the feedback.
-
RE: Misleading status in VM->Backup screen
@DustinB Yes, I remember it being like this. However, it would be nice if it wasn't.
So I'd see this as a feature request.
-
Misleading status in VM->Backup screen
When looking at the VM->Backup screen, the status (FAILED/OK) is for the Backup as a whole, and not specifically the VM itself, which is a little misleading.
In this screenshot we see Failed, but this specific VM was in fact backed up properly.
Using XOA Premium, Stable channel.
-
RE: What's the recommended way to reboot after applying XCP-ng patches?
I normally install updates on the pool master, shut down its VMs, and reboot. Then I do the same procedure on the pool members.
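Roughly, per host, that comes down to something like the sketch below (UUIDs are placeholders; xe host-evacuate would be the alternative if you prefer live-migrating the VMs instead of shutting them down):
# install the patches on the host
yum update
# cleanly stop the VMs running on it (repeat per VM, or evacuate the host instead)
xe vm-shutdown uuid=<vm-uuid>
# reboot the host, then continue with the next pool member
reboot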
-
RE: XOCE - Console gor crazy after typing special char
@Chico008 The easiest is to switch to another tab, like stats, and then back to the console.
XCP-ng Center might be an alternative, as it works better in this regard.
-
RE: Migrating an offline VM disk between two local SRs is slow
Using XOA "Disaster Recovery" backup method can be a lot faster than normal offline migration.
One time I did it, it took approx 10 minutes instead of 2 hours...
-
RE: Delta Backups
@DustinB said in Delta Backups:
@IanMcLachlan Are you trying to run multiple types of backups in the same job? Can you show us your "Backup Jobs" overview page?
I think he is referring to the full backup interval option: https://docs.xen-orchestra.com/incremental_backups#key-backup-interval
-
RE: XCP-ng host - Power management
Enabling the various C-states and frequency scaling in the BIOS/firmware can help with power consumption. There is some latency cost, as it takes longer to wake from the deeper states.
Linux also has CPU frequency governors, but I am not sure how the Xen kernel handles this. Remember that dom0 is a VM under Xen, so things aren't as simple as with plain bare-metal Linuxes.
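From dom0, Xen's xenpm tool can at least show and tweak what the hypervisor is doing at runtime. A rough sketch (the effect still depends on what the BIOS/firmware exposes):
# show the current frequency-scaling (P-state) parameters
xenpm get-cpufreq-para
# show C-state usage/residency
xenpm get-cpuidle-states
# switch the scaling governor, e.g. to powersave
xenpm set-scaling-governor powersave
# cap how deep the CPUs may sleep if wake-up latency is a concern
xenpm set-max-cstate 1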
There's some information here about this:
https://wiki.xenproject.org/wiki/Xen_power_management
-
RE: Epyc VM to VM networking slow
@TeddyAstie That is interesting. I had a look. The default seems to be cubic, but bbr is available using modprobe tcp_bbr. I also wonder if different queuing disciplines (tc qdisc) can help - for example mqprio, which spreads packets across the available NIC HW queues?
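Something along these lines would be the way to try it (a sketch only; eth0 is a placeholder for the actual interface, and fq is the qdisc usually paired with bbr, while mqprio needs more per-NIC configuration):
# see which congestion control algorithms are available and which one is active
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
# load bbr and make it the default
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
# inspect and, if desired, change the queuing discipline on the NIC
tc qdisc show dev eth0
tc qdisc replace dev eth0 root fq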
-
RE: Migrating an offline VM disk between two local SRs is slow
@olivierlambert said in Migrating an offline VM disk between two local SRs is slow:
80/100MiB/s for one storage migration is already pretty decent. You might go faster by migrating more disks at once.
I'm not sure I understand what difference you are referring to? It has always been in that ballpark, per disk.
This is not over a network, only between local ext4 SRs on the same server. I tried the same migration using XCP-ng Center and it is at the moment about twice as fast.
I can't really see any difference in how it works, though. It is the same sparse_dd and NBD connection.
Perhaps it's a fragmentation issue. Though, doing a copy of the same VHD file gives close to 500MB/s.
-
RE: Migrating an offline VM disk between two local SRs is slow
@DustinB said in Migrating an offline VM disk between two local SRs is slow:
Separate question, why are you opting to use RAID 0, purely for the performance gain?
Yes, performance for bulk/temp data stuff.
-
RE: Migrating an offline VM disk between two local SRs is slow
@DustinB, no, these are local disks, not networked disks. I used to get around 400-500MB/s or so for a plain migration between the two SRs.
-
RE: Backup folder and disk names
@jebrown You could export XVA or OVA copies. But then those are not exactly identical to backups.
-
Migrating an offline VM disk between two local SRs is slow
Hi!
I had to migrate a VM from one local SR (SSD) to another SR (4x HDD HW-RAID0 with cache) and it is very slow. I don't do this often, but I think this migration was a lot faster in the past.
When I look in
iotop
I can see roughly 80-100MiB/s.
I think the issue is sparse_dd connecting to localhost (10.12.9.2) over NBD/IP - is that what makes it slow?
/usr/libexec/xapi/sparse_dd -machine -src /dev/sm/backend/55dd0f16-4caf-xxxxx2/46e447a5-xxxx -dest http://10.12.9.2/services/SM/nbd/a761cf8a-c3aa-6431-7fee-xxxx -size 64424509440 -good-ciphersuites ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-SHA256:AES128-SHA256 -prezeroed
I know I have migrated disks on this server before at several hundred MB/s, so I am curious what the difference is.
This is XOA stable channel on XCP-ng 8.2.
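As a baseline, timing a plain copy of the VHD between the two SR mount points shows what the disks themselves can do outside of the sparse_dd/NBD path. A rough sketch for file-based (ext) SRs; the SR UUIDs and VHD filename are placeholders, and the test file should be deleted afterwards:
# file-based (ext) SRs are mounted under /run/sr-mount/<SR-UUID>
time cp /run/sr-mount/<source-sr-uuid>/<disk-uuid>.vhd /run/sr-mount/<dest-sr-uuid>/copy-test.vhd
# or bypass the page cache for a more honest number
time dd if=/run/sr-mount/<source-sr-uuid>/<disk-uuid>.vhd of=/run/sr-mount/<dest-sr-uuid>/copy-test.vhd bs=4M oflag=direct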