@olivierlambert is there a plan to deploy the alt-driver via the regular XCP-ng updates, or should we install it from the xcp-ng-testing repo?
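In case it has to come from the testing repo for now, I guess it would just be the usual yum call; only a sketch, the package name below is a placeholder for whatever the alt-driver package is actually called:

    # enable the testing repo for this one transaction only
    yum install <alt-driver-package> --enablerepo=xcp-ng-testing
    # then check what actually got pulled in
    rpm -qa | grep -i <alt-driver-package>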
Thanks for the good job, guys!
Latest posts made by daKju
-
RE: Alert: Control Domain Memory Usage
-
RE: Alert: Control Domain Memory Usage
We also have the same setup; the affected hosts have iSCSI pool devices connected. Here is the module list (lsmod) from one of them:
Module Size Used by
tun 49152 0
iscsi_tcp 20480 5
libiscsi_tcp 28672 1 iscsi_tcp
libiscsi 61440 2 libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi 110592 3 iscsi_tcp,libiscsi
dm_service_time 16384 4
arc4 16384 0
md4 16384 0
nls_utf8 16384 1
cifs 929792 2
ccm 20480 0
fscache 380928 1 cifs
bnx2fc 159744 0
cnic 81920 1 bnx2fc
uio 20480 1 cnic
fcoe 32768 0
libfcoe 77824 2 fcoe,bnx2fc
libfc 147456 3 fcoe,bnx2fc,libfcoe
openvswitch 147456 12
nsh 16384 1 openvswitch
nf_nat_ipv6 16384 1 openvswitch
nf_nat_ipv4 16384 1 openvswitch
nf_conncount 16384 1 openvswitch
nf_nat 36864 3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q 40960 0
garp 16384 1 8021q
mrp 20480 1 8021q
stp 16384 1 garp
llc 16384 2 stp,garp
ipt_REJECT 16384 3
nf_reject_ipv4 16384 1 ipt_REJECT
xt_tcpudp 16384 9
xt_multiport 16384 1
xt_conntrack 16384 5
nf_conntrack 163840 6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6 20480 2 nf_conntrack,openvswitch
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,nf_nat,openvswitch
iptable_filter 16384 1
dm_multipath 32768 5 dm_service_time
intel_powerclamp 16384 0
crct10dif_pclmul 16384 0
crc32_pclmul 16384 0
ghash_clmulni_intel 16384 0
pcbc 16384 0
aesni_intel 200704 0
aes_x86_64 20480 1 aesni_intel
crypto_simd 16384 1 aesni_intel
cryptd 28672 3 crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper 16384 1 aesni_intel
dm_mod 151552 22 dm_multipath
ipmi_si 65536 0
i2c_i801 28672 0
sg 40960 0
ipmi_devintf 20480 0
i7core_edac 28672 0
lpc_ich 28672 0
ipmi_msghandler 61440 2 ipmi_devintf,ipmi_si
i5500_temp 16384 0
acpi_power_meter 20480 0
sunrpc 413696 1
ip_tables 28672 2 iptable_filter
x_tables 45056 6 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
sr_mod 28672 0
cdrom 69632 1 sr_mod
sd_mod 53248 7
ata_generic 16384 0
pata_acpi 16384 0
uhci_hcd 49152 0
lpfc 958464 4
ata_piix 36864 0
nvmet_fc 32768 1 lpfc
nvmet 69632 1 nvmet_fc
libata 274432 3 ata_piix,pata_acpi,ata_generic
nvme_fc 45056 1 lpfc
nvme_fabrics 24576 1 nvme_fc
ehci_pci 16384 0
igb 233472 0
ehci_hcd 90112 1 ehci_pci
nvme_core 81920 2 nvme_fc,nvme_fabrics
ixgbe 380928 0
megaraid_sas 167936 2
scsi_transport_fc 69632 4 fcoe,lpfc,libfc,bnx2fc
scsi_dh_rdac 16384 0
scsi_dh_hp_sw 16384 0
scsi_dh_emc 16384 0
scsi_dh_alua 20480 5
scsi_mod 253952 19 fcoe,lpfc,scsi_dh_emc,sd_mod,dm_multipath,scsi_transport_iscsi,scsi_dh_alua,scsi_transport_fc,libfc,iscsi_tcp,bnx2fc,libiscsi,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw,sr_mod
ipv6 548864 173 nf_nat_ipv6
crc_ccitt 16384 1 ipv6
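For reference, the list above is plain lsmod from dom0; filtering it down makes the storage/network bits easier to compare between affected and unaffected hosts:

    lsmod                                              # full list, as pasted above
    lsmod | grep -Ei 'iscsi|openvswitch|multipath|dm_' # only the modules relevant to this thread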
-
RE: Alert: Control Domain Memory Usage
@dave thanks for the hint. Yes, we have iSCSI SRs attached.
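If anyone wants to double-check on their own pool, this is roughly how I'd list the SR types from dom0; iSCSI SRs show up with type lvmoiscsi:

    xe sr-list params=uuid,name-label,type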
-
RE: Alert: Control Domain Memory Usage
@stormi
We have 8.1.
I haven't restarted the services yet. Can the openvswitch service be restarted safely, without any impact?
Nothing changed after the rsyslog restart.
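For the record, what I'd run once someone confirms it's safe; just a sketch, since whether restarting openvswitch has an impact is exactly my question (it carries the host's network traffic):

    systemctl status openvswitch rsyslog    # check the current state first
    systemctl restart rsyslog               # log daemon only, low risk
    # systemctl restart openvswitch         # holding off on this one until the impact is confirmed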
-
RE: Alert: Control Domain Memory Usage
@stormi
I have the problem on a pool master with 2 running VMs that show memory alerts.
Here is some info, maybe you can find something in it (a sketch of how to collect the same data follows the list):
slabtop.txt
xehostparamlist.txt
xltop.txt
meminfo.txt
top.txt
grub.cfg.txt
Sorry, I can't add images; it seems something is broken with some node modules.
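As mentioned above, this is roughly how the same data can be collected in dom0 (only a sketch; substitute your host UUID, and the grub.cfg path may differ on UEFI installs):

    slabtop --once > slabtop.txt                          # kernel slab allocator usage
    xe host-param-list uuid=<host-uuid> > xehostparamlist.txt
    xentop -b -i 1 > xltop.txt                            # per-domain memory/CPU snapshot
    cat /proc/meminfo > meminfo.txt
    top -b -n 1 > top.txt                                 # dom0 processes
    cp /boot/grub/grub.cfg grub.cfg.txt                   # path is for BIOS installs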
-
RE: Alert: Control Domain Memory Usage
@olivierlambert this still happens on 8.1 as well.
@stormi it seems the memory is being eaten somewhere, and it doesn't point to a specific program; @dave also described this here: https://xcp-ng.org/forum/post/31693
-
RE: File Restore : error scanning disk for recent delta backups but not old
Hi all,
I see similar behaviour to @mtango's, on xo-server 5.60.0 / xo-web 5.60.0 on a freshly installed Ubuntu 20.04 LTS.
I'm able to choose the disk and then the partition.
If I choose a boot partition with ext4, I can see all the files.
If I choose a partition with LVM, I get an error, as you can see in the log:
Jun 9 13:59:46 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] PV /dev/loop3 online, VG cl_server2backup is complete.
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] VG cl_server2backup run autoactivation.
Jun 9 13:59:46 xoa-server lvm[15934]: PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] VG cl_server2backup not using quick activation.
Jun 9 13:59:46 xoa-server lvm[15934]: 2 logical volume(s) in volume group "cl_server2backup" now active
Jun 9 13:59:47 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
Jun 9 13:59:47 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
Jun 9 13:59:47 xoa-server systemd[977]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
Jun 9 13:59:47 xoa-server systemd[1]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
Jun 9 13:59:47 xoa-server lvm[15986]: pvscan[15986] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:47 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
Jun 9 13:59:47 xoa-server systemd[1]: run-rbb9f6e183716490a87f59a7acc3a6db1.service: Succeeded.
Jun 9 13:59:47 xoa-server lvm[15989]: pvscan[15989] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:47 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
Jun 9 13:59:47 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
Jun 9 13:59:52 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] PV /dev/loop3 online, VG cl_server2backup is complete.
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] VG cl_server2backup run autoactivation.
Jun 9 13:59:52 xoa-server lvm[16001]: PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] VG cl_server2backup not using quick activation.
Jun 9 13:59:52 xoa-server lvm[16001]: 2 logical volume(s) in volume group "cl_server2backup" now active
Jun 9 13:59:52 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
Jun 9 13:59:53 xoa-server kernel: [ 5776.274595] XFS (loop4): Mounting V5 filesystem in no-recovery mode. Filesystem will be inconsistent.
Jun 9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
Jun 9 13:59:53 xoa-server kernel: [ 5776.295448] XFS (loop4): Unmounting Filesystem
Jun 9 13:59:53 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
Jun 9 13:59:53 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
Jun 9 13:59:53 xoa-server lvm[16069]: pvscan[16069] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:53 xoa-server lvm[16070]: pvscan[16070] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:53 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
Jun 9 13:59:53 xoa-server systemd[1]: run-r4d20647bef3942d2a439dcf7d9b50d9b.service: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
Jun 9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
Jun 9 13:59:53 xoa-server xo-server[12262]: 2020-06-09T13:59:53.545Z xo:api WARN admin@admin.net | backupNg.listFiles(...) [879ms] =!> TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
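In case it helps to narrow it down, this is how I'd try to read the same VG by hand on the XOA VM; only a sketch, the device and VG names are taken from the log above and the LV name "root" is just a guess at the guest's layout:

    sudo lvs cl_server2backup                  # list the LVs pvscan just activated
    sudo vgchange -ay cl_server2backup         # make sure both LVs are active
    sudo mount -o ro,norecovery,nouuid /dev/cl_server2backup/root /mnt   # XFS from a snapshot: read-only, no log replay
    ls /mnt
    sudo umount /mnt
    sudo vgchange -an cl_server2backup         # deactivate again before XO detaches the loop device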
Any ideas?
Thanks to all!