• iptables rule to allow apcupsd traffic to APC management card

    0 Votes
    10 Posts
    2k Views

    @Ajmind-0
    Well, well, I switched to the SNMP connection method and it worked just fine.

    Thank you for your pointer.
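    For anyone landing on this thread later, here is a minimal sketch of the SNMP-based setup, assuming apcupsd running in dom0 and an APC network management card reachable at 192.168.1.50 with the default "private" community (the address and community string are placeholders). SNMP polling only needs UDP port 161, so the firewall side reduces to one rule pair:

    # /etc/apcupsd/apcupsd.conf - poll the management card over SNMP
    UPSCABLE ether
    UPSTYPE snmp
    DEVICE 192.168.1.50:161:APC:private

    # dom0 iptables: allow SNMP queries to the card and the matching replies
    iptables -A OUTPUT -p udp -d 192.168.1.50 --dport 161 -j ACCEPT
    iptables -A INPUT  -p udp -s 192.168.1.50 --sport 161 -m state --state ESTABLISHED -j ACCEPT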

  • Hosts not auto balancing in pool

    0 Votes
    15 Posts
    431 Views
    olivierlambert

    Don't worry, I don't blame you, rather the people making the decision to pay or not 😉

  • Can't move Windows 2003 server from 7.6 to 8.1

    0 Votes
    6 Posts
    559 Views
    olivierlambert

    You are welcome! Without PV drivers, it will be slower, but probably enough until you migrate the app to a more recent Windows version 🙂

    Enjoy XCP-ng!

  • Watchdog for reboot VM when it's broken(no respond).

    0 Votes
    16 Posts
    1k Views
    cbaguzman

    @ravenet This is the /var/log/xensource.log output from when I tested the watchdog until the VM started again, for VM ID 1938f572-4951-a77f-48ce-9131c07940d4.

    Can you help me understand this process from the log?

    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[ ] Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] Task 83139 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM 1938f572-4951-a77f-48ce-9131c07940d4 is not requesting any attention Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:11:25 mercurio xenopsd-xc: [debug||167488 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: processing event for VM 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VM 1938f572-4951-a77f-48ce-9131c07940d4 domid 21 guest_agent Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[ ] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Task 83143 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/1938f572-4951-a77f-48ce-9131c07940d4/rtc/timeoffset token=xenopsd-xc:domain-21 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.reboot 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_destroy","hard-reboot"]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Best_effort",["VM_pause","1938f572-4951-a77f-48ce-9131c07940d4"]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.pause 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_destroy_device_model","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.destroy_device_model 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xapi: [debug||844 
|org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:23 mercurio xenopsd-xc: [ warn||13 |events|xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167496 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: ignoring event for VM 1938f572-4951-a77f-48ce-9131c07940d4: metadata has not changed Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]],["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] begin_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 2 atoms Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[ ] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=1:[ ] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83144 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83145 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvda Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 
1938f572-4951-a77f-48ce-9131c07940d4.xvdd Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] adding device cache for domid 21 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvda; Device is not surprise-removable (ignoring and removing anyway) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/768/online = 0 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvdd; Device is not surprise-removable (ignoring and removing anyway) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/5696/online = 0 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] 
xenstore-rm /local/domain/21/device/vbd/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend 
(domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696) Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvdd Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83145 completed; duration = 0 Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"]] Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvdd Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvdd; active devices = [ None ] Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ] Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd matched device 3 Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd device <- xvdd; currently_attached <- true Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvda Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83144 completed; duration = 25 Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: 
["VIF_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","0"],true]] Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF.unplug 1938f572-4951-a77f-48ce-9131c07940d4.0 Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvda"]] Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvda Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvda; active devices = [ ] Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ] Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda matched device 0 Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda device <- xvda; currently_attached <- true Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.0 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_destroy","1938f572-4951-a77f-48ce-9131c07940d4"] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VM.destroy 1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; will not have domain-level information preserved Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Removing extra/1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Deleting extra/1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] DB.delete /var/run/nonpersistent/xenopsd/classic/extra/1938f572-4951-a77f-48ce-9131c07940d4 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: all known devices = [ ] Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vif",["1938f572-4951-a77f-48ce-9131c07940d4","0"]] Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VIF 1938f572-4951-a77f-48ce-9131c07940d4.0 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: other domains with the same UUID = [ ] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VIF.stat 1938f572-4951-a77f-48ce-9131c07940d4.0 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vif; id = 0; active devices = [ ] Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events 
D:e4e3a2a5e9df|xenops] xenopsd event: Updating VIF 1938f572-4951-a77f-48ce-9131c07940d4.0 currently_attached <- true Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy calling Xenctrl.domain_destroy Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:49 mercurio xenopsd-xc: [ warn||13 ||xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; xenstore-rm /local/domain/21 Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; deleting backends Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]]] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] begin_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 1 atoms Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] onto Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[ ] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 ||xenops_server] Queue.pop returned ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83147 reference Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.epoch_end ["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"] Jan 21 09:12:49 mercurio xenopsd-xc: [ info||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Processing disk SR=d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 VDI=65192a2d-f8f7-41c4-a6b5-9bfdc5110179 Jan 21 09:12:49 mercurio xenopsd-xc: [error||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up? Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Invalid domid, could not be converted to int, passing empty string. 
Jan 21 09:12:49 mercurio xapi: [ info||1293439 ||storage_impl] VDI.epoch_end dbg:Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) sr:d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 vdi:65192a2d-f8f7-41c4-a6b5-9bfdc5110179 vm: Jan 21 09:12:53 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83147 completed; duration = 4 Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_post_destroy","hard-reboot"]] Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up? Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_reboot","none"]] Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up? Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_start","none"]] Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
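    For context on what can drive the hard-reboot seen in this log: a common setup is the Xen watchdog driver inside the Linux guest, fed by the userspace watchdog daemon; when the guest hangs and stops petting /dev/watchdog, Xen reboots the domain. A rough sketch for a Debian/Ubuntu guest (package and module names assumed, adjust for your distro):

    # inside the Linux guest
    modprobe xen_wdt                                    # Xen watchdog driver, exposes /dev/watchdog
    echo xen_wdt > /etc/modules-load.d/xen_wdt.conf     # load it on every boot
    apt-get install watchdog                            # userspace daemon that pets /dev/watchdog
    systemctl enable --now watchdog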
  • "Correct" way to manage UPS?

    2
  • IP Address changed for a slave within a Pool, How do I reconfigure it?

    0 Votes
    8 Posts
    1k Views

    @olivierlambert Looks possible. Since I've clicked "forget" on this particular slave/host in XOA, how would I go about adding it back there?

    The slave is missing from the master's pool list.
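    If it helps anyone following the same path: once the host has been forgotten, the usual way back in is a pool-join run from the forgotten host itself, pointed at the current master (the credentials and address below are placeholders):

    # run on the ex-slave, not on the master
    xe pool-join master-address=<master-ip> master-username=root master-password=<password>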

  • 1 core per socket (Invalid configuration)

    0 Votes
    4 Posts
    388 Views

    You're trying to lie to the VM, telling it that it's running on a system with 24 physical sockets, each with a single core.

    For reference, 2 sockets is the biggest AMD server that you can buy (these days), and Intel tops out at 8. If you want a larger system, you could buy a SuperDome, which can manage up to 32 sockets (before hitting other limits of UPI switching).

    The various historical enumeration schemes can't encode that high, which is why there's a sanity check in XenCenter.

    You typically want 1 socket, so select 24 cores / socket.

    ~Andrew
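    A quick sketch of what that looks like from the CLI, assuming a halted VM that should present its 24 vCPUs as a single socket (the UUID is a placeholder; XCP-ng Center and XO expose the same setting in the UI):

    xe vm-param-set uuid=<vm-uuid> VCPUs-max=24
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=24
    # 24 cores per socket => the guest sees 1 socket
    xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=24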

  • BackupNG - task has been destroyed before completion

    0 Votes
    6 Posts
    414 Views

    @marclachapelle Hi, sorry for the delay. I had a theory, but after testing, that theory turned out to be wrong.

    Here are the files about the incident.

    Crash-DR-2023-01-11 11_15_03-Window.png

    Log-Extract.txt

  • 0 Votes
    11 Posts
    1k Views

    @chcnetconsulting

    Just to mention here that your problem(s) have been addressed in the most recent version of HA-Lizard (2.3.1).

    Simply upgrade and you'll be happy again 🙂

  • Default settings for CPU vulns

    0 Votes
    2 Posts
    158 Views
    olivierlambert

    Hi,

    All.

  • 0 Votes
    1 Post
    507 Views
    No one has replied
  • GPU passthrough with Video Out

    0 Votes
    7 Posts
    1k Views

    @splastunov
    Never ran across this before, thanks for that screenshot and command.
    When I'm able to, I'll for sure give this another shot and check these.

  • VGPU RPM

    0 Votes
    7 Posts
    510 Views

    @splastunov By 7.3.0 I meant the package version, and that version was also in XCP-ng 7.4, so if the package was there and its license hasn't changed since, I would assume it's still free to use.

  • kernel NULL pointer

    0 Votes
    9 Posts
    2k Views
    stormi

    An update candidate has a fix for this and should be published tomorrow as an official update.

  • UEFI Bootloader and KB5012170

    Solved
    0 Votes
    7 Posts
    2k Views

    @christopher-petzel
    Many thanks, saved me hours of searching for a fix 🙂

  • Ubuntu Server 22.04 and Java causes VM to hard lock (XCP-ng 8.1)

    0 Votes
    1 Post
    195 Views
    No one has replied
  • vCPU overcommitment stats

    0 Votes
    8 Posts
    571 Views
    jmara

    @olivierlambert Ah, good to know 😄 That's fine, I'll take a look at the metrics. Maybe I'll write a small RRD stats pusher script as an interim solution for my use case 🙂 until XO6 is available.

    Thanks for letting me know 🙂
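    In case someone wants to do the same: XAPI already serves the raw RRDs over HTTP, so an interim "pusher" can be little more than a cron job around curl. A rough sketch, assuming root credentials and the documented rrd_updates endpoint, pulling the last 5 minutes of host and VM samples:

    # fetch all RRD samples newer than 5 minutes ago (XML output)
    START=$(( $(date +%s) - 300 ))
    curl -sk -u root:<password> "https://<host>/rrd_updates?start=${START}&host=true&cf=AVERAGE" -o metrics.xml
    # quick sanity check: list the per-VM CPU series present in the legend
    grep -o 'vm:[0-9a-f-]*:cpu[0-9]*' metrics.xml | sort -u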

  • Dual Display on Win 10 VM

    0 Votes
    1 Post
    117 Views
    No one has replied
  • XCP-ng console unreadable because of error messages

    0 Votes
    5 Posts
    716 Views
    xerio

    @stormi thanks, I'll look forward to trying this out when the update drops.

  • When attempting to create a OPNsense VM via XO stack becomes unresponsive.

    0 Votes
    10 Posts
    1k Views
    fohdeesha

    @MrXeon So, the actual root issue here, I believe, is that OPNsense installs come with an IP and DHCP server already assigned and enabled on the LAN interface (I believe it's 192.168.1.1, but don't quote me). If your existing home network already uses 192.168.1.x/24 and already has a DHCP server, then booting an OPNsense install with its virtual LAN NIC attached to your existing home LAN will cause a lot of conflicts. Virtual NIC order can be whatever you'd like (you can change and move around assignments in OPNsense), but if its preconfigured LAN interface gets set to your pre-existing LAN network, there will be conflicts 🙂
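    One way to sidestep that collision during the first boot, if it's useful to anyone: keep the OPNsense LAN vNIC on a throwaway host-internal network until the LAN interface has been re-addressed from the console, then move it back. A sketch with placeholder UUIDs:

    # private network with no physical NIC, so the default 192.168.1.1/24 + DHCP can't reach the real LAN
    xe network-create name-label=opnsense-setup
    # attach the VM's LAN VIF to it (device position 1 is just an example)
    xe vif-create vm-uuid=<opnsense-vm-uuid> network-uuid=<opnsense-setup-uuid> device=1 mac=random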