XCP-ng

    Posts by x-rayd

    • RE: Logs Partition Full

      How can I determine which VM is causing this problem?

      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830066:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830198:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830311:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830458:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830605:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830712:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830797:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830885:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831006:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831094:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831185:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831260:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831333:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831407:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831478:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.831551:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884264:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884519:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884615:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.884721:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.923891:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924054:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924207:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924319:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924505:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924675:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924803:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.924913:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927026:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927185:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927303:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927436:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927657:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927772:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927903:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.927993:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928136:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928255:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928396:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.928479:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.932786:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
      Nov  1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.932906:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
      
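      The qemu-dm-25 in these lines is the device model for Xen domain ID 25, and the XENBUS/XENVIF messages come from that guest's PV drivers; fail1 (c0000017) is the Windows status code STATUS_NO_MEMORY, so the drivers inside the guest are failing to allocate memory. A minimal sketch (standard xe CLI on the host; the domain number must be taken from your own qemu-dm-<N> process name) to resolve the domain ID to a VM name:

      # List running VMs with their current domain ID; the VM whose dom-id is 25
      # is the one behind the qemu-dm-25 process producing these messages.
      xe vm-list power-state=running params=dom-id,name-label,uuid
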
      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      What is dm-12?
      A virtual server? How can I check which one?

      [1053612.719918] Buffer I/O error on dev dm-12, logical block 6567808, async page read
      [1053646.789183] Buffer I/O error on dev dm-12, logical block 6567920, async page read
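
      dm-12 is a device-mapper device; on XCP-ng it is usually an LVM logical volume backing a VDI on an LVM-based SR. A minimal sketch (assuming such an SR; <vdi-uuid> is a placeholder to be filled in from the first command's output, where the UUID's hyphens may appear doubled in the mapper name):

      # Find which mapper name (and therefore which VDI) sits behind dm-12:
      ls -l /dev/mapper/ | grep -w dm-12
      # The LV name embeds the VDI UUID (VG_XenStorage-<sr-uuid>/VHD-<vdi-uuid>);
      # resolve that VDI to the VM it is attached to:
      xe vbd-list vdi-uuid=<vdi-uuid> params=vm-name-label,device
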
      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @Danp said in Logs Partition Full:

      I could be wrong, but to me "I/O error on dev xxx" indicates a hardware issue. On top of that, you continue to have issues with full log partitions and snapshots that aren't properly removed. Therefore, I assumed that you are having an issue with your storage system.

      All VMs are running on local SSD storage.
      Only the backup snapshots run on the Equallogic.

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @Danp said in Logs Partition Full:

      Quoted the above as it seems pertinent to the issue at hand. Do you have support for your Equallogic system? If so, then I would recommend reaching out to the vendor for assistance.

      Why do you think it is an Equallogic problem?

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      What is the problem?

      [1041538.079027] block tdbb: sector-size: 512/512 capacity: 104857600
      [1041614.490832] block tdag: sector-size: 512/512 capacity: 41943040
      [1041665.112325] block tdah: sector-size: 512/512 capacity: 83886080
      [1041682.920728] block tdal: sector-size: 512/512 capacity: 83886080
      [1041687.977055] block tdan: sector-size: 512/512 capacity: 83886080
      [1041775.983839] block tdar: sector-size: 512/512 capacity: 314572800
      [1041784.022923] block tdat: sector-size: 512/512 capacity: 314572800
      [1041788.477265] block tdau: sector-size: 512/512 capacity: 314572800
      [1041797.083981] Buffer I/O error on dev dm-65, logical block 13134816, async page read
      [1041975.964437] Buffer I/O error on dev dm-65, logical block 13134756, async page read
      [1041987.007220] block tdad: sector-size: 512/512 capacity: 52428800
      [1042011.762333] block tdad: sector-size: 512/512 capacity: 314572800
      [1042025.513177] block tdal: sector-size: 512/512 capacity: 314572800
      [1042030.608114] block tdal: sector-size: 512/512 capacity: 314572800
      [1042046.800416] block tdad: sector-size: 512/512 capacity: 52428800
      [1042051.052154] block tdah: sector-size: 512/512 capacity: 52428800
      [1042060.751052] block tdal: sector-size: 512/512 capacity: 52428800
      [1042075.720653] block tdan: sector-size: 512/512 capacity: 52428800
      [1042092.955679] block tdbb: sector-size: 512/512 capacity: 52428800
      [1042098.149160] block tdbe: sector-size: 512/512 capacity: 52428800
      [1042163.718485] Buffer I/O error on dev dm-46, logical block 10508192, async page read
      [1042234.057144] Buffer I/O error on dev dm-34, logical block 39400446, async page read
      [1042533.212350] Buffer I/O error on dev dm-34, logical block 13134816, async page read
      [1042721.551045] Buffer I/O error on dev dm-34, logical block 264073, async page read
      [1044849.455053] Buffer I/O error on dev dm-12, logical block 39400062, async page read
      [1046391.419666] Buffer I/O error on dev dm-12, logical block 3941302, async page read
      [1049772.497399] Buffer I/O error on dev dm-12, logical block 6567872, async page read
      [1049857.595545] Buffer I/O error on dev dm-12, logical block 6567550, async page read
      [1049929.102838] Buffer I/O error on dev dm-12, logical block 6567822, async page read
      [1050167.988714] Buffer I/O error on dev dm-12, logical block 26267563, async page read
      [1050366.554847] Buffer I/O error on dev dm-12, logical block 6567862, async page read
      [1050776.365052] Buffer I/O error on dev dm-12, logical block 65665963, async page read
      [1051056.348013] Buffer I/O error on dev dm-12, logical block 5255136, async page read
      [1051092.751391] Buffer I/O error on dev dm-12, logical block 13134724, async page read
      [1051328.387483] Buffer I/O error on dev dm-12, logical block 3941344, async page read
      [1051711.573576] Buffer I/O error on dev dm-12, logical block 13134752, async page read
      [1051848.129739] Buffer I/O error on dev dm-12, logical block 6567844, async page read
      [1051992.984716] Buffer I/O error on dev dm-12, logical block 105064334, async page read
      [1052434.107654] Buffer I/O error on dev dm-12, logical block 39400326, async page read
      [1052695.987730] Buffer I/O error on dev dm-12, logical block 13134724, async page read
      [1052923.659130] Buffer I/O error on dev dm-12, logical block 13134726, async page read
      [1053136.646307] Buffer I/O error on dev dm-12, logical block 64153536, async page read
      [1053612.719918] Buffer I/O error on dev dm-12, logical block 6567808, async page read
      [1053646.789183] Buffer I/O error on dev dm-12, logical block 6567920, async page read
      [1053778.875359] Buffer I/O error on dev dm-12, logical block 6567808, async page read
      [1053838.326806] Buffer I/O error on dev dm-12, logical block 10508203, async page read
      [1054000.750328] Buffer I/O error on dev dm-12, logical block 5255076, async page read
      [1054451.772637] Buffer I/O error on dev dm-12, logical block 6567814, async page read
      [1103083.191699] device vif7.0 left promiscuous mode
      [1103208.344685] block tdh: sector-size: 512/512 capacity: 104857600
      [1103217.894245] block tdh: sector-size: 512/512 capacity: 104857600
      [1103218.758711] device vif47.0 entered promiscuous mode
      [1103219.717538] vif vif-47-0 vif47.0: Guest Rx ready
      [1103286.054941] device vif11.0 left promiscuous mode
      [1103306.933523] block tdk: sector-size: 512/512 capacity: 104857600
      [1103316.719467] block tdk: sector-size: 512/512 capacity: 104857600
      [1103317.536966] device vif48.0 entered promiscuous mode
      [1103319.391521] vif vif-48-0 vif48.0: Guest Rx ready
      
      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      Thanks, that helps!

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      ??

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      What is the command?

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @stormi said in Logs Partition Full:

      lsof +L1

      [10:38 df-c01-node04 ~]# lsof +L1
      COMMAND     PID USER   FD   TYPE DEVICE   SIZE/OFF NLINK     NODE NAME
      rsyslogd   1070 root   11w   REG    8,5 1945839857     0       15 /var/log/xensource.log (deleted)
      monitor    2002 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
      ovsdb-ser  2003 root    7u   REG    8,1        141     0   180233 /tmp/tmpf6sKcHH (deleted)
      stunnel   13458 root    2w   REG   0,19        861     0 97301510 /run/nonpersistent/forkexecd/stunnelcd2b39.log (deleted)
      [17:21 df-c01-node04 ~]#
      
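      The NLINK 0 rows are files that were deleted while a process still holds them open, so their blocks are never released; here rsyslogd (PID 1070, fd 11) keeps the roughly 1.9 GB deleted /var/log/xensource.log alive. A minimal sketch of reclaiming that space (the PID and fd number must be taken from the lsof output on the host):

      # Truncate the already-deleted but still-open file through /proc so its
      # blocks are freed immediately...
      : > /proc/1070/fd/11
      # ...or restart rsyslog so it closes the stale handle and reopens its logs.
      systemctl restart rsyslog
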
      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @stormi said in Logs Partition Full:

      xe-toolstack-restart

      It did not help! What can I do now?

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      xensource.log is too big, 2 GB.
      I deleted the logfile, but the disk space is not freed. Now I can't see xensource.log any more, why?
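
      Deleting a log file that a daemon still has open only removes the directory entry: the name disappears, but the space is not given back until the last open handle is closed, which is why the 2 GB stay used even though the file can no longer be seen. A minimal sketch of the safer way to empty a log that is still being written:

      # Empty the file in place instead of deleting it, so the writing process
      # keeps a valid handle and the space is released right away:
      truncate -s 0 /var/log/xensource.log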

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      Any idea what the problem is?

      [20:31 df-c01-node04 log]# tail -50 xensource.log
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|16 ||xenops_server] TASK.signal 599093 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] Task 599094 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[  ]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:d1538d321269 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404650 UNIX /var/lib/xcp/xapi|session.slave_login D:8d7d2a8f0ca6|xapi] Session.create trackid=59cc70e9542b52fdcd1622725fa443c3 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |events|xenops_server] Received an event on managed VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|4 |queue|xenops_server] Queue.push ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] onto redirected f828ce90-06e0-024f-9c9b-3f30b1a959b4:[ ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"] ]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404651 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:6320774722f1 created by task D:8d7d2a8f0ca6
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:e5300a15f1ea created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:e5300a15f1ea|taskhelper] task VM.update_allowed_operations R:2852f822293e (uuid:ecd676c2-0a35-ad5c-7f38-48fc3e383d52) created (trackid=59cc70e9542b52fdcd1622725fa443c3) by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404652 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:2852f822293e|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 |events|task_server] Task 599094 completed; duration = 0
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|22 ||xenops_server] TASK.signal 599094 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] Queue.pop returned ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] Task 599095 reference events: ["VM_check_state","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:e038fd45a92c created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404653 UNIX /var/lib/xcp/xapi|session.logout D:624f223cdcff|xapi] Session.destroy trackid=59cc70e9542b52fdcd1622725fa443c3
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632725 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 is not requesting any attention
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|xenops_server] VM_DB.signal f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 |events|task_server] Task 599095 completed; duration = 0
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|39 ||xenops_server] TASK.signal 599095 (object deleted)
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: processing event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Will update VM.allowed_operations because guest_agent has changed.
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: Updating VM f828ce90-06e0-024f-9c9b-3f30b1a959b4 domid 9 guest_agent
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:3041ad7a5d1d created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404654 UNIX /var/lib/xcp/xapi|session.slave_login D:fa655deb1721|xapi] Session.create trackid=aa48585d92a36631054ef9468218522c pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404655 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:a1e04c4c79aa created by task D:fa655deb1721
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1235 |xapi events D:f60b314e49a9|dummytaskhelper] task timeboxed_rpc D:98ed995dc8af created by task D:f60b314e49a9
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404656 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:c9f2b3e839ca created by task D:f60b314e49a9
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VM.update_allowed_operations D:4a10137d6bc3 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|dispatch:VM.update_allowed_operations D:4a10137d6bc3|taskhelper] task VM.update_allowed_operations R:e4f434e89b02 (uuid:3e5a0652-8d8f-b774-53d9-f325c4cc63a1) created (trackid=aa48585d92a36631054ef9468218522c) by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404657 UNIX /var/lib/xcp/xapi|VM.update_allowed_operations R:e4f434e89b02|audit] VM.update_allowed_operations: VM = 'f828ce90-06e0-024f-9c9b-3f30b1a959b4 (DF-server24.df-webhosting.de)'
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:9eeefa7124f9 created by task D:c22abe907392
      Jul  1 20:32:06 df-c01-node04 xapi: [ info|df-c01-node04|2404658 UNIX /var/lib/xcp/xapi|session.logout D:b558e450cae3|xapi] Session.destroy trackid=aa48585d92a36631054ef9468218522c
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] Processing event: ["Vm","f828ce90-06e0-024f-9c9b-3f30b1a959b4"]
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenops event on VM f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xenopsd-xc: [debug|df-c01-node04|632727 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops_server] VM.stat f828ce90-06e0-024f-9c9b-3f30b1a959b4
      Jul  1 20:32:06 df-c01-node04 xapi: [debug|df-c01-node04|1230 |org.xen.xapi.xenops.classic events D:c22abe907392|xenops] xenopsd event: ignoring event for VM f828ce90-06e0-024f-9c9b-3f30b1a959b4: metadata has not changed
      Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] setting skip-cycles-after-error for plugin tap-31773-25 to 256
      Jul  1 20:32:06 df-c01-node04 xcp-rrdd: [ warn|df-c01-node04|0 monitor|main|rrdd_server] Failed to process plugin: tap-31773-25 (Rrd_protocol.Invalid_header_string)
      [20:32 df-c01-node04 log]#
      
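      Almost every line above is xenopsd/xapi handling VM_check_state events for the same VM (f828ce90-06e0-024f-9c9b-3f30b1a959b4, DF-server24.df-webhosting.de). A minimal sketch (the UUID pattern is an assumption about the log format) to check whether one VM is responsible for most of the log volume:

      # Count how often each VM UUID appears in the log, busiest first:
      grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' /var/log/xensource.log | sort | uniq -c | sort -rn | head
      # Resolve the top UUID to a VM name:
      xe vm-list uuid=<uuid> params=name-label
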
      posted in Xen Orchestra
      x-rayd
    • RE: Backup Continuous Replication hangs!

      @karlisi said in Backup Continuous Replication hangs!:

      Orphaned

      I have to delete the orphaned snapshots every day! 😞
      Is there no solution for this problem?

      posted in Xen Orchestra
      x-rayd
    • RE: Backup Continuous Replication hangs!

      That means every pool master has this problem.

      posted in Xen Orchestra
      x-rayd
    • RE: Backup Continuous Replication hangs!

      @karlisi said in Backup Continuous Replication hangs!:

      I have the same problem since... I don't remember, a year perhaps. Orphaned snapshots are some way related to continuous replication, they appears randomly sometimes on almost all VMs, sometimes none. For me this is not a problem, I delete them as part of daily routine.
      I am using XOCE, xo-server and xo-web 5.59.0.

      Only the pool master node has this problem; the other nodes do not.

      posted in Xen Orchestra
      x-rayd
    • RE: Backup Continuous Replication hangs!

      I do not understand!
      Every day the pool master has orphaned disks and I have to delete them.
      Why does that happen?
      (screenshot: xoa.jpg)
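
      A minimal sketch (standard xe CLI; whether a snapshot is really orphaned still has to be judged from its name and age) to review the snapshot VDIs on the pool before deleting anything:

      # List every VDI that is a snapshot, with its name and creation time:
      xe vdi-list is-a-snapshot=true params=uuid,name-label,snapshot-time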

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      xe-toolstack-restart did not help 😞

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @stormi said in Logs Partition Full:

      du -sh /var/log/*

      5.1M    /var/log/audit.log
      24M     /var/log/audit.log.1
      0       /var/log/audit.log.1.gz
      3.5M    /var/log/audit.log.2.gz
      0       /var/log/audit.log.3.gz
      9.3M    /var/log/blktap
      0       /var/log/boot.log
      24K     /var/log/boot.log.1
      4.0K    /var/log/boot.log.2.gz
      4.0K    /var/log/boot.log.3.gz
      4.0K    /var/log/boot.log.4.gz
      4.0K    /var/log/boot.log.5.gz
      4.0K    /var/log/btmp
      4.0K    /var/log/btmp.1
      0       /var/log/btmp.1.gz
      4.0K    /var/log/cluster
      8.0K    /var/log/crit.log
      0       /var/log/crit.log.1
      0       /var/log/crit.log.1.gz
      2.9M    /var/log/cron
      40K     /var/log/cron.1
      0       /var/log/cron.1.gz
      4.0K    /var/log/grubby_prune_debug
      1004K   /var/log/installer
      4.0K    /var/log/interface-rename.log
      0       /var/log/interface-rename.log.1
      0       /var/log/interface-rename.log.1.gz
      1.3M    /var/log/kern.log
      20K     /var/log/kern.log.1
      0       /var/log/kern.log.1.gz
      16K     /var/log/lost+found
      164K    /var/log/maillog
      4.0K    /var/log/maillog.32.gz
      0       /var/log/messages
      0       /var/log/messages.1
      0       /var/log/messages.1.gz
      4.0K    /var/log/ntpstats
      4.0K    /var/log/openvswitch
      4.0K    /var/log/ovs-ctl.log
      4.0K    /var/log/ovs-ctl.log.32.gz
      4.0K    /var/log/ovsdb-server.log
      4.0K    /var/log/ovsdb-server.log.1
      0       /var/log/ovsdb-server.log.1.gz
      60K     /var/log/ovs-vswitchd.log
      4.0K    /var/log/ovs-vswitchd.log.1
      0       /var/log/ovs-vswitchd.log.1.gz
      4.0K    /var/log/ovs-xapi-sync.log
      4.0K    /var/log/ovs-xapi-sync.log.32.gz
      8.0K    /var/log/pbis-open-install.log
      4.0K    /var/log/pyperthreading-plugin.log
      114M    /var/log/sa
      8.0K    /var/log/samba
      37M     /var/log/secure
      5.6M    /var/log/SMlog
      29M     /var/log/SMlog.1
      0       /var/log/SMlog.1.gz
      5.0M    /var/log/SMlog.2.gz
      0       /var/log/spooler
      4.0K    /var/log/spooler.32.gz
      0       /var/log/tallylog
      0       /var/log/updater-plugin.log
      972K    /var/log/user.log
      4.0K    /var/log/user.log.1
      0       /var/log/user.log.1.gz
      696K    /var/log/VMSSlog
      20K     /var/log/VMSSlog.1
      0       /var/log/VMSSlog.1.gz
      68K     /var/log/wtmp
      8.0K    /var/log/xcp-rrdd-plugins.log
      13M     /var/log/xcp-rrdd-plugins.log.1
      0       /var/log/xcp-rrdd-plugins.log.1.gz
      240K    /var/log/xcp-rrdd-plugins.log.2.gz
      0       /var/log/xcp-rrdd-plugins.log.3.gz
      220K    /var/log/xen
      96M     /var/log/xensource.log
      18M     /var/log/xensource.log.1
      0       /var/log/xensource.log.1.gz
      8.2M    /var/log/xensource.log.2.gz
      9.2M    /var/log/xensource.log.3.gz
      6.5M    /var/log/xensource.log.4.gz
      8.0M    /var/log/xenstored-access.log
      33M     /var/log/xenstored-access.log.1
      0       /var/log/xenstored-access.log.1.gz
      2.3M    /var/log/xenstored-access.log.2.gz
      4.0K    /var/log/yum.log
      36K     /var/log/yum.log.1
      

      Filesystem      Size  Used Avail Use% Mounted on
      devtmpfs        3.9G  112K  3.9G   1% /dev
      tmpfs           3.9G 1004K  3.9G   1% /dev/shm
      tmpfs           3.9G   12M  3.9G   1% /run
      tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
      /dev/sda1        18G  2.4G   15G  14% /
      xenstore        3.9G     0  3.9G   0% /var/lib/xenstored
      /dev/sda5       3.9G  3.9G     0 100% /var/log
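
      Note that the du listing above adds up to only a few hundred MB, yet df shows the 3.9G /var/log partition 100% full; that gap usually means the rest of the space is held by files that were deleted while still open. A minimal sketch to see both the largest visible logs and the hidden, deleted-but-open ones:

      # Largest visible items under /var/log, biggest first:
      du -sh /var/log/* | sort -rh | head -15
      # Deleted files still held open (NLINK 0) that use space without showing in du:
      lsof +L1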

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      I deleted daemon.log, but df -h says the partition is still full. Why?
      /dev/sda5 3.9G 3.9G 0 100% /var/log

      posted in Xen Orchestra
      x-rayd
    • RE: Logs Partition Full

      @x-rayd said in Logs Partition Full:

      squeezed: [debug|df-c01-node04|3 ||xenops] watch /data/updated

      Any ideas?

      posted in Xen Orchestra
      x-rayd