XCP-ng

    Ariel Guzmán

    @cbaguzman

    Reputation: 5
    Profile views: 3
    Topics: 9
    Posts: 24
    Followers: 0
    Following: 0
    Email: cbaguzman@gmail.com
    Age: 47
    Location: Córdoba, Argentina


    Best posts made by cbaguzman

    • RE: XCP on Intel i9 12th or 13th generation

      @olivierlambert

      I'm going to buy an AMD Ryzen CPU.

      Thanks, @olivierlambert.

      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      Hello, I finally installed watchdog in my VM, which runs Ubuntu Linux.

      I used the xen_wdt module with watchdog.

      This is the watchdog configuration:

      "/etc/default/watchdog"

      # Start watchdog at boot time? 0 or 1
      run_watchdog=1
      # Start wd_keepalive after stopping watchdog? 0 or 1
      run_wd_keepalive=1
      # Load module before starting watchdog
      watchdog_module="xen_wdt"
      # Specify additional watchdog options here (see manpage).
      
      

      /etc/watchdog.conf

      #ping                   = 8.8.8.8 
      ping                    = 192.16.171.254 
      interface               = eth0
      file                    = /var/log/syslog
      change                  = 1407
      # Uncomment to enable test. Setting one of these values to '0' disables it.
      # These values will hopefully never reboot your machine during normal use
      # (if your machine is really hung, the loadavg will go much higher than 25)
      max-load-1              = 24
      #max-load-5             = 18
      #max-load-15            = 12
      # Note that this is the number of pages!
      # To get the real size, check how large the pagesize is on your machine.
      #min-memory             = 1
      #allocatable-memory     = 1
      #repair-binary          = /usr/sbin/repair
      #repair-timeout         = 60
      #test-binary            =
      #test-timeout           = 60
      # The retry-timeout and repair limit are used to handle errors in a more robust
      # manner. Errors must persist for longer than retry-timeout to action a repair
      # or reboot, and if repair-maximum attempts are made without the test passing a
      # reboot is initiated anyway.
      #retry-timeout          = 60
      #repair-maximum         = 1
      watchdog-device = /dev/watchdog
      # Defaults compiled into the binary
      #temperature-sensor     =
      #max-temperature        = 90
      # Defaults compiled into the binary
      admin                   = root
      interval                = 20
      logtick                 = 1
      log-dir                 = /var/log/watchdog
      # This greatly decreases the chance that watchdog won't be scheduled before
      # your machine is really loaded
      realtime                = yes
      priority                = 1
      # Check if rsyslogd is still running by enabling the following line
      #pidfile                = /var/run/rsyslogd.pid
      

      I broke my OS and watchdog reset my VM.

      I hope this will be useful to others.
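
      For anyone reproducing this, a minimal way to confirm the daemon is armed (a sketch; the watchdog.service unit name comes from Ubuntu's watchdog package):

      # Load the Xen watchdog driver and confirm the device node appears
      sudo modprobe xen_wdt
      ls -l /dev/watchdog

      # Restart the daemon and check that it is running and feeding the device
      sudo systemctl restart watchdog
      systemctl status watchdog --no-pager

      # Per log-dir in watchdog.conf, the daemon's test logs land here
      ls /var/log/watchdog/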

      posted in Compute

    Latest posts made by cbaguzman

    • Are there any commands that allow me to verify the integrity of the backup files?

      Hi, it's great to be back!

      Until now, I've been performing backups to an NFS remote without the "Encrypt all new data sent to this remote" option and without the "Store backup as multiple data blocks instead of a whole VHD file" option. This allowed me to perform integrity checks on the backup files using a script that utilizes "vhd-cli check" and "xva-validate".

      Now I need to change the backup method by enabling "Encrypt all new data sent to this remote" and "Store backup as multiple data blocks instead of a whole VHD file".

      My question is whether there are any commands that allow me to verify the integrity of the backup files from the command line in this new scenario. I have VMs that are several GB in size, so using Auto Restore Check is not an option for me.
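
      For context, the whole-VHD check I mentioned above looks roughly like this (a sketch; it assumes the NFS remote is mounted at a hypothetical /mnt/nfs-backups and relies on the vhd-cli and xva-validate commands named above):

      #!/bin/bash
      # Integrity sweep over whole-file backups on the NFS remote.
      BACKUP_DIR=/mnt/nfs-backups   # hypothetical mount point; adjust

      # Check every VHD disk export with vhd-cli
      find "$BACKUP_DIR" -name '*.vhd' -print0 | while IFS= read -r -d '' f; do
          vhd-cli check "$f" || echo "FAILED: $f"
      done

      # Validate every XVA export
      find "$BACKUP_DIR" -name '*.xva' -print0 | while IFS= read -r -d '' f; do
          xva-validate "$f" || echo "FAILED: $f"
      done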

      posted in Backup
    • Watchdog on XCP Host

      Hello, I read about the host watchdog in the xen-command-line documentation.

      I want to configure the watchdog and watchdog_timeout parameters, but I don't know how.

      Has anyone used them? Where do I write these parameters? (I suspect maybe in GRUB.)

      Thanks, everyone.
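
      In case it helps to answer, this is what I was planning to try from the host console (a sketch; xen-cmdline is the helper XCP-ng ships for editing the Xen boot line, but please confirm the exact parameter names against the xen-command-line documentation):

      # Show whether the parameter is currently set on the Xen boot line
      /opt/xensource/libexec/xen-cmdline --get-xen watchdog

      # Set the parameters (the timeout value here is only an example);
      # reboot the host afterwards for the new Xen command line to apply
      /opt/xensource/libexec/xen-cmdline --set-xen watchdog
      /opt/xensource/libexec/xen-cmdline --set-xen watchdog_timeout=30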

      posted in Compute
    • RE: XCP on Intel i9 12th or 13th generation

      @olivierlambert

      I'm going to buy an AMD Ryzen CPU.

      Thanks, @olivierlambert.

      posted in Compute
    • XCP on Intel i9 12th or 13th generation

      Hello everyone.

      I'm thinking of buying a new CPU to use with XCP-ng, but I have doubts about how XCP-ng works with hybrid CPUs (P-cores and E-cores).

      E.g. does XCP-ng use all cores or only P-cores? How does XCP-ng schedule VMs across P-cores and E-cores?

      Does anyone know about this?

      Thanks, everyone.
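
      For reference, once such a CPU is installed, these commands should show what the hypervisor actually enumerates (a sketch from the dom0 console; xl info and xe host-cpu-info are standard XCP-ng tools, though neither distinguishes P-cores from E-cores explicitly):

      # How Xen enumerates the CPUs on this host
      xl info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'

      # CPU details as xapi reports them
      xe host-cpu-info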

      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      @ravenet This is /var/log/xensource.log from when I tested the watchdog until the VM started again, for the VM with ID 1938f572-4951-a77f-48ce-9131c07940d4.

      Can you follow the process in this log?

      Can you help me understand it?

      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] Task 83139 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM 1938f572-4951-a77f-48ce-9131c07940d4 is not requesting any attention
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xenopsd-xc: [debug||167488 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: processing event for VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VM 1938f572-4951-a77f-48ce-9131c07940d4 domid 21 guest_agent
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Task 83143 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/1938f572-4951-a77f-48ce-9131c07940d4/rtc/timeoffset token=xenopsd-xc:domain-21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.reboot 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_destroy","hard-reboot"]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Best_effort",["VM_pause","1938f572-4951-a77f-48ce-9131c07940d4"]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.pause 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_destroy_device_model","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.destroy_device_model 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:23 mercurio xenopsd-xc: [ warn||13 |events|xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167496 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: ignoring event for VM 1938f572-4951-a77f-48ce-9131c07940d4: metadata has not changed
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]],["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] begin_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 2 atoms
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=1:[  ]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83144 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83145 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] adding device cache for domid 21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvda; Device is not surprise-removable (ignoring and removing anyway)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/768/online = 0
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvdd; Device is not surprise-removable (ignoring and removing anyway)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/5696/online = 0
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83145 completed; duration = 0
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"]]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
      Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvdd; active devices = [ None ]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd matched device 3
      Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd device <- xvdd; currently_attached <- true
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83144 completed; duration = 25
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VIF_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","0"],true]]
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF.unplug 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvda"]]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvda
      Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvda; active devices = [  ]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda matched device 0
      Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda device <- xvda; currently_attached <- true
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_destroy","1938f572-4951-a77f-48ce-9131c07940d4"]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VM.destroy 1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; will not have domain-level information preserved
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Removing extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Deleting extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] DB.delete /var/run/nonpersistent/xenopsd/classic/extra/1938f572-4951-a77f-48ce-9131c07940d4
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: all known devices = [  ]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vif",["1938f572-4951-a77f-48ce-9131c07940d4","0"]]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VIF 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: other domains with the same UUID = [  ]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VIF.stat 1938f572-4951-a77f-48ce-9131c07940d4.0
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vif; id = 0; active devices = [  ]
      Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VIF 1938f572-4951-a77f-48ce-9131c07940d4.0 currently_attached <- true
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy calling Xenctrl.domain_destroy
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:49 mercurio xenopsd-xc: [ warn||13 ||xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; xenstore-rm /local/domain/21
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; deleting backends
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] begin_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 1 atoms
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] onto Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 ||xenops_server] Queue.pop returned ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83147 reference Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.epoch_end ["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]
      Jan 21 09:12:49 mercurio xenopsd-xc: [ info||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Processing disk SR=d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 VDI=65192a2d-f8f7-41c4-a6b5-9bfdc5110179
      Jan 21 09:12:49 mercurio xenopsd-xc: [error||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Invalid domid, could not be converted to int, passing empty string.
      Jan 21 09:12:49 mercurio xapi: [ info||1293439 ||storage_impl] VDI.epoch_end dbg:Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) sr:d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 vdi:65192a2d-f8f7-41c4-a6b5-9bfdc5110179 vm:
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83147 completed; duration = 4
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_post_destroy","hard-reboot"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_reboot","none"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
      Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_start","none"]]
      Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
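
      For anyone replaying this on their own host, a quick way to pull out just the interesting lines (a sketch; the UUID is the VM ID quoted above):

      # Trace one VM's watchdog-triggered reboot cycle through xensource.log
      grep '1938f572-4951-a77f-48ce-9131c07940d4' /var/log/xensource.log \
        | grep -E 'VM_check_state|VM.reboot|VM.destroy|VM_hook_script'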
      
      
      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      @stormi I just edited the message.

      Now...

      The watchdog's configuration works OK on my systems.

      My only question is:

      Who is resetting my VM, the Xen hypervisor or the OS in my VM?
      I can't find how to verify it.

      Pardon my basic English. 👽
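
      In case it helps to answer, the checks I was thinking of (a sketch; the paths are the standard Ubuntu guest and XCP-ng host locations):

      # Inside the VM: if the reset came from outside (Xen), the previous
      # boot's journal should end abruptly, with no shutdown sequence
      journalctl -b -1 | tail -n 20

      # On the XCP-ng host: xenopsd logs the hard reboot it performs
      grep 'VM.reboot' /var/log/xensource.log | tail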

      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      @stormi

      When I wrote "I wait help with this.", I wanted to say "I hope this will be useful for them."

      Is that clear?

      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      Hello @olivierlambert and @stormi.

      If you need my help to clear this up, I am available.

      @stormi when I wrote SO, I meant to write OS (Operating System).

      I ran this command in bash to break the OS in the VM:

      :(){ :|:& };:
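
      For anyone curious, the same fork bomb written out readably (same code; do not run it outside a disposable test VM):

      # Readable form of :(){ :|:& };:
      :() {           # define a shell function named ':'
          : | : &     # it calls itself twice in a pipeline, in the background,
      }               # roughly doubling the number of processes each round
      :               # the initial call that starts the cascade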

      posted in Compute
    • RE: Watchdog to reboot a VM when it's broken (not responding).

      Hello, I finally installed watchdog in my VM, which runs Ubuntu Linux.

      I used the xen_wdt module with watchdog.

      This is the watchdog configuration:

      "/etc/default/watchdog"

      # Start watchdog at boot time? 0 or 1
      run_watchdog=1
      # Start wd_keepalive after stopping watchdog? 0 or 1
      run_wd_keepalive=1
      # Load module before starting watchdog
      watchdog_module="xen_wdt"
      # Specify additional watchdog options here (see manpage).
      
      

      /etc/watchdog.conf

      #ping                   = 8.8.8.8 
      ping                    = 192.16.171.254 
      interface               = eth0
      file                    = /var/log/syslog
      change                  = 1407
      # Uncomment to enable test. Setting one of these values to '0' disables it.
      # These values will hopefully never reboot your machine during normal use
      # (if your machine is really hung, the loadavg will go much higher than 25)
      max-load-1              = 24
      #max-load-5             = 18
      #max-load-15            = 12
      # Note that this is the number of pages!
      # To get the real size, check how large the pagesize is on your machine.
      #min-memory             = 1
      #allocatable-memory     = 1
      #repair-binary          = /usr/sbin/repair
      #repair-timeout         = 60
      #test-binary            =
      #test-timeout           = 60
      # The retry-timeout and repair limit are used to handle errors in a more robust
      # manner. Errors must persist for longer than retry-timeout to action a repair
      # or reboot, and if repair-maximum attempts are made without the test passing a
      # reboot is initiated anyway.
      #retry-timeout          = 60
      #repair-maximum         = 1
      watchdog-device = /dev/watchdog
      # Defaults compiled into the binary
      #temperature-sensor     =
      #max-temperature        = 90
      # Defaults compiled into the binary
      admin                   = root
      interval                = 20
      logtick                 = 1
      log-dir                 = /var/log/watchdog
      # This greatly decreases the chance that watchdog won't be scheduled before
      # your machine is really loaded
      realtime                = yes
      priority                = 1
      # Check if rsyslogd is still running by enabling the following line
      #pidfile                = /var/run/rsyslogd.pid
      

      I broke my OS and watchdog reset my VM.

      I hope this will be useful to others.

      posted in Compute
    • RE: In XCP-ng host, lscpu only shows 8 cores on Ryzen 9 5900x

      @olivierlambert and @AtaxyaNetwork, pardon my bad English.

      Maybe I didn't write clearly. I hope I wasn't offensive to you.

      Thank you for answering me.

      posted in Compute