Good job guys!
Posts
-
RE: Alert: Control Domain Memory Usage
So I checked the other hosts in my environment that run the same types of VMs and the same version of XCP-ng.
Hosts that are not seeing this memory leak have BCM5720 1GbE interfaces. They are not as heavily used, so I'm not sure whether the leak only occurs when usage is very high or when a specific feature/function of that driver is in use.
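In case anyone else wants to compare their hosts the same way, this is roughly how I check which NIC model and driver a host is using (eth0 is just an example interface name, adjust for your setup):
# list the physical NICs seen by the host
lspci | grep -i ethernet
# show which kernel driver and firmware back a given interface
ethtool -i eth0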
-
RE: Alert: Control Domain Memory Usage
@olivierlambert The two servers where I am seeing the memory leaks are used exclusively for network-intensive applications; they route and tunnel many (100+) tunnels.
Other systems I have with a similar host configuration are not seeing any increased domain memory usage.
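If it helps to correlate the load with the leak, here is a rough sketch for tracking dom0 memory over time (these are standard Linux files; the log path is just a suggestion):
# append a timestamped snapshot of free memory and kernel slab usage
date >> /var/log/dom0-mem.log
free -m >> /var/log/dom0-mem.log
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo >> /var/log/dom0-mem.log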
-
RE: Alert: Control Domain Memory Usage
We also have 10GbE Intel interfaces on the affected servers; we are not using iSCSI on these servers yet.
So I think the common factors right now would be the SAS MegaRAID controller and the 10GbE Intel NICs?
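If a driver is the culprit, it might be worth comparing module versions between affected and unaffected hosts. Assuming the Intel 10GbE NICs use ixgbe and the RAID controller uses megaraid_sas (and noting that not every in-tree module exposes a version line):
# driver versions for the suspected common factors
modinfo ixgbe | grep -i '^version'
modinfo megaraid_sas | grep -i '^version'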
-
RE: Alert: Control Domain Memory Usage
@olivierlambert I don't have a ticket because we don't have paid support for XCP-ng.
-
RE: Alert: Control Domain Memory Usage
Our environment is also seeing the leak, on XCP-ng 8.1 hosts. At first I doubled the amount of memory allocated to the control domain from 4 GB to 8 GB, and now, thirty-something days later, it has run out of RAM again... this is the third time the (4) hosts have run out of memory. This is really causing us issues.
# lsmod
Module                  Size  Used by
tun                    49152  0
nfsv3                  49152  1
nfs_acl                16384  1 nfsv3
nfs                   307200  2 nfsv3
lockd                 110592  2 nfsv3,nfs
grace                  16384  1 lockd
fscache               380928  1 nfs
bnx2fc                159744  0
cnic                   81920  1 bnx2fc
uio                    20480  1 cnic
fcoe                   32768  0
libfcoe                77824  2 fcoe,bnx2fc
libfc                 147456  3 fcoe,bnx2fc,libfcoe
scsi_transport_fc      69632  3 fcoe,libfc,bnx2fc
openvswitch           147456  11
nsh                    16384  1 openvswitch
nf_nat_ipv6            16384  1 openvswitch
nf_nat_ipv4            16384  1 openvswitch
nf_conncount           16384  1 openvswitch
nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q                  40960  0
garp                   16384  1 8021q
mrp                    20480  1 8021q
stp                    16384  1 garp
llc                    16384  2 stp,garp
ipt_REJECT             16384  3
nf_reject_ipv4         16384  1 ipt_REJECT
xt_tcpudp              16384  8
xt_multiport           16384  1
xt_conntrack           16384  5
nf_conntrack          163840  6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6         20480  2 nf_conntrack,openvswitch
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,nf_nat,openvswitch
dm_multipath           32768  0
iptable_filter         16384  1
sunrpc                413696  18 lockd,nfsv3,nfs_acl,nfs
sb_edac                24576  0
intel_powerclamp       16384  0
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           200704  0
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
cryptd                 28672  3 crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper            16384  1 aesni_intel
dm_mod                151552  5 dm_multipath
intel_rapl_perf        16384  0
sg                     40960  0
i2c_i801               28672  0
mei_me                 45056  0
mei                   114688  1 mei_me
lpc_ich                28672  0
ipmi_si                65536  0
ipmi_devintf           20480  0
ipmi_msghandler        61440  2 ipmi_devintf,ipmi_si
acpi_power_meter       20480  0
ip_tables              28672  2 iptable_filter
x_tables               45056  6 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
hid_generic            16384  0
usbhid                 57344  0
hid                   122880  2 usbhid,hid_generic
sd_mod                 53248  5
ahci                   40960  3
libahci                40960  1 ahci
xhci_pci               16384  0
ehci_pci               16384  0
libata                274432  2 libahci,ahci
ehci_hcd               90112  1 ehci_pci
xhci_hcd              258048  1 xhci_pci
ixgbe                 380928  0
megaraid_sas          167936  1
scsi_dh_rdac           16384  0
scsi_dh_hp_sw          16384  0
scsi_dh_emc            16384  0
scsi_dh_alua           20480  0
scsi_mod              253952  14 fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw
ipv6                  548864  193 nf_nat_ipv6
crc_ccitt              16384  1 ipv6
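For anyone who wants to raise the control domain memory the same way, this is roughly how it's done on XCP-ng (the xen-cmdline helper is what the docs describe; a host reboot is needed afterwards):
# give dom0 8 GiB, then reboot the host
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M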
-
Backup to Azure?
Hi, I was just reading the changelog for the latest July 31st XOA release and saw that it is now possible to back up to AWS S3. Will Azure eventually be supported as well?
-
RE: Created cloud config, but can't select it when creating a new VM.
You need to add the following two lines to /usr/lib/cloud-init/ds-identify, at or around line 245 if your file is different from mine:
LABEL_FATBOOT=*) label="${line#LABEL_FATBOOT=}";
    labels="${labels}${line#LABEL_FATBOOT=}${delim}";;
In the diff, the file /usr/lib/cloud-init/ds-identify.ORIGINAL is just a backup of the original file. You should make one as well, in case you need to roll back the changes one day.
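Something like this (same paths as above):
# keep a pristine copy before editing
cp -a /usr/lib/cloud-init/ds-identify /usr/lib/cloud-init/ds-identify.ORIGINAL
# later, to review or undo the change:
diff /usr/lib/cloud-init/ds-identify.ORIGINAL /usr/lib/cloud-init/ds-identify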
Here's the end result in my file (see line 245 where I added the above):
236     for line in "$@"; do
237         case "${line}" in
238             DEVNAME=*)
239                 [ -n "$dev" -a "$ftype" = "iso9660" ] &&
240                     isodevs="${isodevs},${dev}=$label"
241                 ftype=""; dev=""; label="";
242                 dev=${line#DEVNAME=};;
243             LABEL=*) label="${line#LABEL=}";
244                 labels="${labels}${line#LABEL=}${delim}";;
245             LABEL_FATBOOT=*) label="${line#LABEL_FATBOOT=}";
246                 labels="${labels}${line#LABEL_FATBOOT=}${delim}";;
247             TYPE=*) ftype=${line#TYPE=};;
248             UUID=*) uuids="${uuids}${line#UUID=}$delim";;
249         esac
250     done
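After the edit, you can ask cloud-init to re-run its detection to confirm the config drive is now recognized (--force re-runs detection in recent cloud-init versions; the log path is where ds-identify normally writes):
# re-run datasource detection and inspect the result
/usr/lib/cloud-init/ds-identify --force
cat /run/cloud-init/ds-identify.log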
-
RE: Created cloud config, but can't select it when creating a new VM.
@olivierlambert
I got it working by applying the diff you posted above. I edited the ds-identify file and added the LABEL_FATBOOT case. I then added the datasource entries to cloud.cfg. I converted the VM to a template, and on first boot it updated automatically and ran everything I had already set up in the cloud.cfg.
I would say the blog post from December 2015 or the wiki needs to be updated with this new information. Also worth mentioning: unless a network configuration is specified in the cloud.cfg file, you need to add this snippet at the bottom so that any future customization of the network interfaces doesn't get wiped out by cloud-init:
network:
  config: disabled
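Putting it together, the tail of my /etc/cloud/cloud.cfg ends up looking something like this; the datasource list shown is only an example of what those entries could look like, so use whatever your guide specifies:
# example datasource entries plus the network override
datasource_list: [ ConfigDrive, NoCloud, None ]
network:
  config: disabled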
-
RE: Netdata package is now available in XCP-ng
Yeah, that's exactly what I was wondering. I'm curious how it was done. I was thinking of doing something similar for my ELK cluster and a few other systems.
-
RE: Created cloud config, but can't select it when creating a new VM.
@olivierlambert said in Created cloud config, but can't select it when creating a new VM.:
your VM to understand why it doesn't fetch your config.
There was an update to cloud-init recently and it broke the template I've been using to deploy our VMs. What I've noticed so far is that it overwrites the network configuration file for CentOS in /etc/sysconfig/network-scripts/ifcfg-eth0. I got around that issue by adding the network: config: disabled snippet to the cloud.cfg file.
I then figured maybe it's time to update my template, so I went ahead and did just that using the same guide found here.
Once the VM starts, it also fails to load the config drive automatically, so I'm thinking the documentation will need some sort of refresh. Once I figure it out, I'll show what worked for me here if no one else figures it out first.
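One thing worth checking on an affected VM is how blkid reports the generated config drive, since newer blkid versions report FAT labels differently, and that is exactly what trips up ds-identify (the device name below is just an example; the XO config drive is labelled config-2):
# compare how the label is reported
blkid /dev/xvdb
# older blkid: LABEL="config-2"          -> detected
# newer blkid: LABEL_FATBOOT="config-2"  -> missed without the ds-identify fix above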
-
RE: Netdata package is now available in XCP-ng
I meant how Netdata centralizes the nodes' information on the XOA host:
Is this how the XOA plugin/packages were configured?
https://learn.netdata.cloud/docs/agent/registry/#run-your-own-registry
-
RE: Netdata package is now available in XCP-ng
Just being curious: do your packages use a private registry on the XOA server itself?
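Going by the registry doc linked above, pointing every agent at one registry looks like a small netdata.conf change, something like this (the hostname is a placeholder, and I'm only guessing this is how the XOA package does it):
# on the node acting as the registry (e.g. the XOA VM)
[registry]
    enabled = yes
    registry to announce = http://xoa.example.lan:19999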
-
RE: Netdata package is now available in XCP-ng
Hi,
Would pulling the latest version of Netdata onto each host break anything?
i.e., running from the CLI: bash <(curl -Ss https://my-netdata.io/kickstart.sh)
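Before overwriting the packaged build with the upstream one, it may be worth checking what the XCP-ng package currently provides (assuming the package is simply named netdata):
# version and origin of the installed package
rpm -qi netdata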