XCP-ng

    Posts

    • Watchdog on XCP Host

      Hello, I read about the watchdog options for the host in the xen-command-line documentation.

      I want to configure watchdog and watchdog_timeout, but I don't know how.

      Has anyone used them? Where do I set these parameters? (I suspect maybe in GRUB.)
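
      For reference, a hedged sketch of the two usual places to set them (the xen-cmdline helper path and the GRUB entry below are assumptions based on a stock XCP-ng install; verify on your host before rebooting):

      # Option 1: the helper shipped with XenServer/XCP-ng, if present on your version
      /opt/xensource/libexec/xen-cmdline --set-xen watchdog watchdog_timeout=30

      # Option 2: append the options to the Xen (not dom0) line of the boot entry, then reboot
      # e.g. in /boot/grub/grub.cfg (or its EFI copy):
      #   multiboot2 /boot/xen.gz ... watchdog watchdog_timeout=30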

      Thanks, everyone.

      posted in Compute
    • RE: XCP on intel i9 12th or 13th generation

      @olivierlambert

      I'm going to buy an AMD Ryzen CPU.

      Thanks, @olivierlambert.

      posted in Compute
    • XCP on intel i9 12th or 13th generation

      Hello everyone.

      I am thinking of buying a new CPU to use with XCP-ng, but I have doubts about how XCP-ng works with hybrid CPUs (P-cores and E-cores).

      E.g. does XCP-ng use all cores or only the P-cores? How does XCP-ng schedule VMs across P-cores and E-cores?

      Does anyone know about this?
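
      A hedged way to check, once installed, what Xen actually enumerates (xenpm is the standard Xen power-management tool available in dom0; its exact output varies by version):

      # run on the XCP-ng host (dom0)
      xenpm get-cpu-topology       # core/socket layout of every CPU Xen brought up
      xenpm get-cpufreq-para 0     # per-CPU frequency details (P-cores and E-cores differ here)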

      Thanks, everyone.

      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      @ravenet This is the /var/log/xensource.log output from when I tested the watchdog until the VM started again, for the VM ID 1938f572-4951-a77f-48ce-9131c07940d4.

      Can you understand this process from the log?

      Can you help me understand it?

      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] Task 83139 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM 1938f572-4951-a77f-48ce-9131c07940d4 is not requesting any attention
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||167488 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: processing event for VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VM 1938f572-4951-a77f-48ce-9131c07940d4 domid 21 guest_agent
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Task 83143 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/1938f572-4951-a77f-48ce-9131c07940d4/rtc/timeoffset token=xenopsd-xc:domain-21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.reboot 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_destroy","hard-reboot"]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Best_effort",["VM_pause","1938f572-4951-a77f-48ce-9131c07940d4"]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.pause 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_destroy_device_model","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.destroy_device_model 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:23 mercurio xenopsd-xc: [ warn||13 |events|xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167496 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: ignoring event for VM 1938f572-4951-a77f-48ce-9131c07940d4: metadata has not changed
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]],["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] begin_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 2 atoms
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=1:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83144 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83145 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] adding device cache for domid 21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvda; Device is not surprise-removable (ignoring and removing anyway)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/768/online = 0
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvdd; Device is not surprise-removable (ignoring and removing anyway)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/5696/online = 0
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83145 completed; duration = 0
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"]]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvdd; active devices = [ None ]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd matched device 3
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd device <- xvdd; currently_attached <- true
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83144 completed; duration = 25
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VIF_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","0"],true]]
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF.unplug 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvda"]]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvda; active devices = [  ]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda matched device 0
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda device <- xvda; currently_attached <- true
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_destroy","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VM.destroy 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; will not have domain-level information preserved
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Removing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Deleting extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] DB.delete /var/run/nonpersistent/xenopsd/classic/extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: all known devices = [  ]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vif",["1938f572-4951-a77f-48ce-9131c07940d4","0"]]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VIF 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: other domains with the same UUID = [  ]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VIF.stat 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vif; id = 0; active devices = [  ]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VIF 1938f572-4951-a77f-48ce-9131c07940d4.0 currently_attached <- true
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy calling Xenctrl.domain_destroy
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:49 mercurio xenopsd-xc: [ warn||13 ||xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; xenstore-rm /local/domain/21
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; deleting backends
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] begin_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 1 atoms
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] onto Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 ||xenops_server] Queue.pop returned ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83147 reference Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.epoch_end ["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]
      Jan 21 09:12:49 mercurio xenopsd-xc: [ info||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Processing disk SR=d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 VDI=65192a2d-f8f7-41c4-a6b5-9bfdc5110179
      Jan 21 09:12:49 mercurio xenopsd-xc: [error||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Invalid domid, could not be converted to int, passing empty string.
      Jan 21 09:12:49 mercurio xapi: [ info||1293439 ||storage_impl] VDI.epoch_end dbg:Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) sr:d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 vdi:65192a2d-f8f7-41c4-a6b5-9bfdc5110179 vm:
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83147 completed; duration = 4
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_post_destroy","hard-reboot"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_reboot","none"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_start","none"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      
      
      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      @stormi I just edited the message.

      Now...

      The watchdog configuration works fine on my systems.

      My only question is:

      Who resets my VM, the Xen hypervisor or the OS inside my VM?
      I can't find a way to verify it.
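
      One hedged way to check from dom0, using the same log as above (the grep patterns simply match strings that are already visible in that log): a reboot driven by the Xen watchdog shows up in /var/log/xensource.log as a hard-reboot of the domain, while a reboot requested from inside the guest appears as a normal clean reboot.

      # run on the XCP-ng host (dom0)
      grep -E 'VM.reboot|hard-reboot' /var/log/xensource.log | grep 1938f572-4951-a77f-48ce-9131c07940d4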

      Pardon my basic English.👽

      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      @stormi

      When I wrote "I wait help with this",

      I meant to say "I hope this will be useful for them."

      Is that clear?

      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      Hello @olivierlambert and @stormi.

      If you need my help to clarify this, I am available.

      @stormi when I wrote SO, I meant to write OS (operating system).

      I ran this command in bash (a fork bomb) to break the OS inside the VM:

      :(){ :|:& };:
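
      For readability, the same fork bomb written with a named function (just a sketch; obviously do not run it on a machine you care about):

      # each call spawns two copies of itself in the background,
      # exhausting process slots and memory until the OS stops responding
      bomb() { bomb | bomb & }
      bomb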

      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      Hello, I finally installed watchdog in my VM, which runs Ubuntu Linux.

      I use the xen_wdt kernel module as the watchdog device.

      This is the watchdog configuration:

      "/etc/default/watchdog"

      # Start watchdog at boot time? 0 or 1
      run_watchdog=1
      # Start wd_keepalive after stopping watchdog? 0 or 1
      run_wd_keepalive=1
      # Load module before starting watchdog
      watchdog_module="xen_wdt"
      # Specify additional watchdog options here (see manpage).
      
      

      /etc/watchdog.conf

      #ping                   = 8.8.8.8 
      ping                    = 192.16.171.254 
      interface               = eth0
      file                    = /var/log/syslog
      change                  = 1407
      # Uncomment to enable test. Setting one of these values to '0' disables it.
      # These values will hopefully never reboot your machine during normal use
      # (if your machine is really hung, the loadavg will go much higher than 25)
      max-load-1              = 24
      #max-load-5             = 18
      #max-load-15            = 12
      # Note that this is the number of pages!
      # To get the real size, check how large the pagesize is on your machine.
      #min-memory             = 1
      #allocatable-memory     = 1
      #repair-binary          = /usr/sbin/repair
      #repair-timeout         = 60
      #test-binary            =
      #test-timeout           = 60
      # The retry-timeout and repair limit are used to handle errors in a more robust
      # manner. Errors must persist for longer than retry-timeout to action a repair
      # or reboot, and if repair-maximum attempts are made without the test passing a
      # reboot is initiated anyway.
      #retry-timeout          = 60
      #repair-maximum         = 1
      watchdog-device = /dev/watchdog
      # Defaults compiled into the binary
      #temperature-sensor     =
      #max-temperature        = 90
      # Defaults compiled into the binary
      admin                   = root
      interval                = 20
      logtick                 = 1
      log-dir                 = /var/log/watchdog
      # This greatly decreases the chance that watchdog won't be scheduled before
      # your machine is really loaded
      realtime                = yes
      priority                = 1
      # Check if rsyslogd is still running by enabling the following line
      #pidfile                = /var/run/rsyslogd.pid
      

      I broke the OS and the watchdog reset my VM.

      I hope this will be useful for others.
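
      For completeness, a minimal sketch of enabling this setup inside the Ubuntu guest (the package, module and service names are the Debian/Ubuntu defaults and are assumptions for other distributions):

      sudo apt-get install watchdog          # provides the watchdog daemon and /etc/watchdog.conf
      sudo modprobe xen_wdt                  # Xen paravirtual watchdog driver, exposes /dev/watchdog
      sudo systemctl enable --now watchdog   # start the daemon with the configuration shown above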

      posted in Compute
    • RE: in xcp-ng host lscpu only show 8 cores for on Ryzen 9 5900x

      @olivierlambert and @AtaxyaNetwork pardon my bad English.

      Maybe I didn't write clearly. I hope I wasn't offensive to you.

      Thanks for answering me.

      posted in Compute
    • RE: in xcp-ng host lscpu only show 8 cores for on Ryzen 9 5900x

      @AtaxyaNetwork thanks for answering me.

      I didn't know that dom0 was another VM.

      Then why, on similar installations on other hardware with a Ryzen 9 5900X and XCP-ng 8.2.1, does top show 24 cores?

      Do you know?

      posted in Compute
    • in xcp-ng host lscpu only show 8 cores for on Ryzen 9 5900x

      Hello everyone. I am writing because I installed XCP-ng 8.2.1 on new hardware. This machine has a Ryzen 9 5900X on an Asus Prime X570-P motherboard.

      When the installation finished I ran lscpu and the system returned:

      Model name: AMD Ryzen 9 5900X 12-Core Processor
      Core(s) per socket: 8
      Socket(s): 1
      

      When I looked in xsconsole -> Hardware and BIOS Information -> Processor, it showed:

      Logical CPU: 24
      Populated CPU Sockets: 1
      Total CPU Sockets: 1
      Description: 1 X AMD Ryzen 9 5900x 12-Core Processor

      Then I ran top, and it showed 8 cores. I thought the CPU was broken!

      Then I booted Ubuntu Server 18.04.2 on this hardware (not virtualised), ran lscpu, and it showed 24 cores.

      Then I booted Windows 10 64-bit (not virtualised) and it also showed 24 cores. On Windows I ran PassMark and the tests completed correctly.

      I have other installations of XCP-ng on other machines with a Ryzen 9, and they all show 24 cores with lscpu.

      I don't know what to think, or which number to believe.

      Can you help me with this, please?
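
      A hedged way to see both numbers side by side (xl, /proc/cpuinfo and xe host-cpu-info are the standard tools in dom0; field names may vary by version). lscpu and top inside dom0 report only the vCPUs assigned to the dom0 VM, while the hypervisor's own view lists the physical threads:

      # run on the XCP-ng host (dom0)
      xl info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'   # what the Xen hypervisor sees (physical)
      grep -c ^processor /proc/cpuinfo                                # what dom0 itself was given (vCPUs)
      xe host-cpu-info                                                # XAPI's per-host CPU summary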

      posted in Compute
    • RE: Can I restore a VDI selected from VM's delta backup?

      @olivierlambert thanks.

      When I typed "I leave how suggestion for improvement.", I meant "a suggestion for future releases".

      I'm not a programmer, I'm sorry. I can only share my experience as a user.

      My English is very poor. I don't want to sound offensive. I'm sorry.

      posted in Xen Orchestra
    • Can I restore a VDI selected from VM's delta backup?

      Hello everyone.

      Last week I needed to restore a single VDI from a delta backup. The VM has three VDIs.

      I had to restore the full VM just to recover that particular VDI.

      I don't know if it is possible to restore one VDI from a VM's delta backup, either from XOA's restore GUI or from the command line.

      If it is not possible, I leave this as a suggestion for future releases.

      Regards.

      posted in Xen Orchestra
    • RE: Watchdog for reboot VM when it's broken(no respond).

      @olivierlambert Also, I have been thinking about other approaches.

      For example, a script on the XCP-ng host that pings each VM's IP and forces a reboot of the VM when the ping gets no response, as in the sketch below.
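
      A minimal sketch of that idea (VM_UUID and VM_IP are placeholders for your own VM; xe vm-reboot is the standard XAPI CLI, but treat the whole script as untested):

      #!/bin/bash
      # run from dom0, e.g. from cron every few minutes
      VM_UUID="1938f572-4951-a77f-48ce-9131c07940d4"   # example UUID, replace with your VM
      VM_IP="192.168.1.50"                             # hypothetical guest IP, replace with yours

      if ! ping -c 3 -W 2 "$VM_IP" > /dev/null 2>&1; then
          echo "$(date): $VM_IP not responding, forcing reboot of $VM_UUID"
          xe vm-reboot uuid="$VM_UUID" force=true
      fi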

      What do you think about this?

      posted in Compute
    • RE: Watchdog for reboot VM when it's broken(no respond).

      Hello @olivierlambert

      (What do you mean exactly by a VM that doesn't respond?)
      For example, when the guest OS freezes (for different reasons, e.g. a driver error, low-quality software, etc.).

      (Xen will detect if a VM is crashing already, and will reboot it.)
      👆 Where can I read more about this?
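
      Possibly related (a hedged pointer, not an authoritative answer): each VM has an actions-after-crash policy in XAPI that controls what happens when Xen reports that the domain has crashed. For example:

      xe vm-param-get uuid=<vm-uuid> param-name=actions-after-crash    # e.g. "restart" or "destroy"
      xe vm-param-set uuid=<vm-uuid> actions-after-crash=restart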

      posted in Compute
    • Watchdog for reboot VM when it's broken(no respond).

      Hello everybody!!!

      I want to automatically reboot VMs when they are broken (not responding).
      I don't know if XCP-ng has a process or script for such cases.

      I saw something on the web using SSH, but maybe XCP-ng has something else built in.

      Can you help me?

      posted in Compute
    • RE: How setting NFS remote target when NFS target is not all time online?

      Perhaps NFS has options to re-establish the connection after it drops,
      but I can't find an option for this.

      posted in Xen Orchestra
    • RE: How setting NFS remote target when NFS target is not all time online?

      Thanks @olivierlambert. In South America the internet and power services sometimes fail. 😧

      posted in Xen Orchestra
    • How setting NFS remote target when NFS target is not all time online?

      Hello, can you help me, please?

      I need to configure XOA for remote NFS backups.

      My situation is that the NFS target is not online all the time. In general it is down every Friday.

      On Saturday the target is online again, but when the backups need to write to the NFS target they return an error, for example:

      Error: Unknown system error -116: Unknown system error -116, mkdir '/run/xo-server/mounts/11934fec-f3a1-4f7f-a78d-00eeb1b39654'

      I have tested several options, but none of them work.

      The last XOA remote settings I tested are: vers=4.2,soft,rw,retry=3,async,proto=tcp,noexec,nosuid,soft,bg,timeo=30

      and the server's /etc/exports is:
      media/bkpf/backup 172.16.112.2(rw,all_squash,anonuid=2000,anongid=200)

      Right now I have to go to XOA's remote settings and toggle the target manually (from Enabled to Disabled and back to Enabled) to remount it; after that the backup writes to the target correctly.

      (Screenshot, 2021-10-12: in the image you can see the bkpm target state on Saturday.)

      How can I configure XOA to remount the NFS target automatically before the backup runs?
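
      In case it helps, a hedged workaround sketch (everything here is an assumption: the mount path is taken from the error above, the export path is a placeholder, and error -116 appears to be ESTALE, a stale NFS file handle). The idea is to check and remount the target from cron on the XOA VM shortly before the backup runs:

      #!/bin/bash
      MNT="/run/xo-server/mounts/11934fec-f3a1-4f7f-a78d-00eeb1b39654"   # remote id from the error above
      SERVER="nfs-server:/media/bkpf/backup"                             # placeholder export path

      # with the soft mount option a dead mount returns an error instead of hanging,
      # so a failing stat indicates the target needs remounting
      if ! stat -t "$MNT" > /dev/null 2>&1; then
          umount -f -l "$MNT" 2> /dev/null
          mount -t nfs -o vers=4.2,soft,retry=3,timeo=30 "$SERVER" "$MNT"
      fi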

      The XOA versions are xo-server 5.80.0 and xo-web 5.84.0.

      Thanks for your attention.

      posted in Xen Orchestra
    • RE: VDI_IO_ERROR(Device I/O errors) when you run scheduled backup

      @fachex said in VDI_IO_ERROR(Device I/O errors) when you run scheduled backup:

      @olivierlambert no more feedback on this problem. We reinstalled the whole server with XCP-ng 8.2 and we are still experiencing this problem. We are unable to do delta backups.

      Hi @fachex, @stormi
      If you updated XCP-ng by reinstalling, remember that the installation clears all settings on the XCP-ng host.

      In my case, the XCP-ng installation deleted the folders where the mount point of the external device was.

      Maybe this information will help you with your LVM permission problem.

      posted in Xen Orchestra