XCP-ng

    Watchdog to reboot a VM when it's broken (not responding)

    • cbaguzman

      Hello everybody!

      I want to automatically reboot VMs when they are broken (not responding).
      I don't know if XCP-ng has a process or script for these cases.

      I saw something on the web using SSH, but maybe XCP-ng has something built in.

      Can you help me?

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Xen already detects when a VM crashes and will reboot it. What do you mean exactly by a VM that doesn't respond?

        • cbaguzman @olivierlambert

          Hello olivierlambert,

          (What do you mean exactly by a VM that doesn't respond?)
          For example, when the guest OS freezes (for different reasons, e.g. a driver error, low-quality software, etc.).

          (Xen already detects when a VM crashes and will reboot it.)
          👆 Where can I read more about this?

          • olivierlambert (Vates 🪐 Co-Founder & CEO)

            It's related to an on-crash parameter in the VM object, but I don't remember exactly where it's documented 🤔
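
            (For reference, this appears to be the actions-after-crash parameter on the VM record; a quick way to inspect and change it from dom0, with a placeholder UUID:)

                # Inspect the current crash policy of a VM (UUID is a placeholder)
                xe vm-param-get uuid=<vm-uuid> param-name=actions-after-crash

                # Set it to "restart" so the VM is restarted automatically after a crash
                xe vm-param-set uuid=<vm-uuid> actions-after-crash=restart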

            • cbaguzman @olivierlambert

              olivierlambert I'm also thinking about other ways.

              For example: a script on the XCP-ng host that pings the VMs' IPs and forces a reboot of a VM when the ping gets no response (see the sketch below).

              What do you think about this?
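
              A rough sketch of that idea, just to make it concrete (the UUID and IP are placeholders, it would run from dom0, and a failed ping only proves the network path is down, not that the guest is hung):

                  VM_UUID=<vm-uuid>
                  VM_IP=<vm-ip>

                  # If three pings in a row fail, force a hard reboot of the VM from dom0
                  if ! ping -c 3 -W 2 "$VM_IP" > /dev/null 2>&1; then
                      xe vm-reboot uuid="$VM_UUID" force=true
                  fi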

              • olivierlambert (Vates 🪐 Co-Founder & CEO)

                Bad idea: temporarily losing the network doesn't mean your app crashed.

                I think you first really need to think about what you actually need.

                • cbaguzman

                  Hello, I finally installed watchdog in my VM. I run Ubuntu Linux there.

                  I used the xen_wdt module with watchdog.

                  This is the watchdog configuration:

                  "/etc/default/watchdog"

                  # Start watchdog at boot time? 0 or 1
                  run_watchdog=1
                  # Start wd_keepalive after stopping watchdog? 0 or 1
                  run_wd_keepalive=1
                  # Load module before starting watchdog
                  watchdog_module="xen_wdt"
                  # Specify additional watchdog options here (see manpage).
                  
                  

                  /etc/watchdog.conf

                  #ping                   = 8.8.8.8 
                  ping                    = 192.16.171.254 
                  interface               = eth0
                  file                    = /var/log/syslog
                  change                  = 1407
                  # Uncomment to enable test. Setting one of these values to '0' disables it.
                  # These values will hopefully never reboot your machine during normal use
                  # (if your machine is really hung, the loadavg will go much higher than 25)
                  max-load-1              = 24
                  #max-load-5             = 18
                  #max-load-15            = 12
                  # Note that this is the number of pages!
                  # To get the real size, check how large the pagesize is on your machine.
                  #min-memory             = 1
                  #allocatable-memory     = 1
                  #repair-binary          = /usr/sbin/repair
                  #repair-timeout         = 60
                  #test-binary            =
                  #test-timeout           = 60
                  # The retry-timeout and repair limit are used to handle errors in a more robust
                  # manner. Errors must persist for longer than retry-timeout to action a repair
                  # or reboot, and if repair-maximum attempts are made without the test passing a
                  # reboot is initiated anyway.
                  #retry-timeout          = 60
                  #repair-maximum         = 1
                  watchdog-device = /dev/watchdog
                  # Defaults compiled into the binary
                  #temperature-sensor     =
                  #max-temperature        = 90
                  # Defaults compiled into the binary
                  admin                   = root
                  interval                = 20
                  logtick                 = 1
                  log-dir                 = /var/log/watchdog
                  # This greatly decreases the chance that watchdog won't be scheduled before
                  # your machine is really loaded
                  realtime                = yes
                  priority                = 1
                  # Check if rsyslogd is still running by enabling the following line
                  #pidfile                = /var/run/rsyslogd.pid
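
                  For anyone reproducing this, the daemon and module were presumably set up along these lines on Ubuntu (a sketch assuming the stock watchdog package and systemd):

                      sudo apt install watchdog             # installs the watchdog daemon and wd_keepalive
                      sudo systemctl enable --now watchdog  # start it now and on every boot
                      # With watchdog_module="xen_wdt" in /etc/default/watchdog (above), the service
                      # loads the Xen paravirtual watchdog driver, which provides /dev/watchdog.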
                  

                  I broke my SO and watchdog reset my vm.

                  I wait help with this.

                  • olivierlambert (Vates 🪐 Co-Founder & CEO)

                    That might be worth a guide in our docs. What do you think stormi?

                    • stormi (Vates 🪐 XCP-ng Team) @olivierlambert

                      olivierlambert said in Watchdog to reboot a VM when it's broken (not responding):

                      That might be worth a guide in our docs. What do you think stormi?

                      Not everything is clear to me, but from a user's point of view I suppose such a guide could be useful, if someone wants to contribute it to https://xcp-ng.org/docs/guides.html (contribution link at the bottom of the page).

                      cbaguzman said in Watchdog to reboot a VM when it's broken (not responding):

                      I broke my SO and watchdog reset my vm.

                      What do you mean by "I broke my SO"?

                      I wait help with this.

                      What kind of help are you waiting for?

                      • cbaguzman

                        Hello olivierlambert and stormi.

                        If you need my help to clarify this, I am available.

                        stormi when I wrote SO, I meant to write OS (Operating System).

                        I ran this command in bash to break the OS in the VM (a fork bomb):

                        :(){ :|:& };:
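
                        (As an aside, a more contained way to hang a test VM hard enough to trip the watchdog is to panic the guest kernel directly, assuming the magic SysRq key is enabled; illustration only, not what was used here:)

                            echo 1 > /proc/sys/kernel/sysrq   # enable SysRq (run as root)
                            echo c > /proc/sysrq-trigger      # crash the guest kernel immediately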

                        • stormi (Vates 🪐 XCP-ng Team)

                          So is everything working as you expected, or do you still need help? Your sentence, "I wait help with this.", suggests you need help.

                          • cbaguzman

                            stormi

                            When I wrote: "I wait help with this."

                            I wanted to say "I hope this will be useful for them."

                            Is that clear?

                            • stormi (Vates 🪐 XCP-ng Team) @cbaguzman

                              cbaguzman said in Watchdog to reboot a VM when it's broken (not responding):

                              stormi

                              When I wrote: I wait help with this.

                              I wanted to say "I wait this will be util for them. "

                              is Clear?

                              Not really 😄

                              Do you mean you hope your contribution will be useful to others?

                              • cbaguzman @stormi

                                stormi I just edited the message.

                                Now...

                                The watchdog configuration works fine on my systems.

                                My only question is:

                                Who resets my VM, the Xen hypervisor or the OS inside my VM?
                                I can't find out how to verify it.

                                Pardon my basic English. 👽

                                • ravenet @cbaguzman

                                  cbaguzman The watchdog client service talks to the IPMI module on the hardware, or to a service on the VM host if running virtualized, providing a heartbeat. If the heartbeat isn't received by the host, a power cycle is initiated. In Xen this would likely be via an xe vm-reset-powerstate command, but I haven't looked at the documentation.

                                  In short, the watchdog works by the host or hardware listening for the heartbeat, then hitting a hard reset if it doesn't hear it. All the guest does is send heartbeats to the host's watchdog service.
                                  You can't expect a locked-up system to reboot itself; that's the whole point of the mechanism.
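
                                  In practice, on the guest side the heartbeat is simply the watchdog daemon writing to /dev/watchdog at regular intervals; conceptually it boils down to something like this (a sketch only, the real daemon also runs the checks from /etc/watchdog.conf):

                                      # If these writes stop (guest hung), the Xen watchdog expires and the
                                      # domain is hard-rebooted from outside the guest.
                                      while true; do
                                          echo 1 > /dev/watchdog   # "pet" the watchdog
                                          sleep 10                 # must stay below the watchdog timeout
                                      done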

                                  • cbaguzman @ravenet

                                    ravenet This is /var/log/xensource.log from when I tested the watchdog until the VM started again, for the VM with ID 1938f572-4951-a77f-48ce-9131c07940d4:

                                    Can you understand the process from this log?

                                    Can you help me understand it?

                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] Task 83139 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM 1938f572-4951-a77f-48ce-9131c07940d4 is not requesting any attention
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||19 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:11:25 mercurio xenopsd-xc: [debug||167488 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: processing event for VM 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:11:25 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VM 1938f572-4951-a77f-48ce-9131c07940d4 domid 21 guest_agent
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"] onto 1938f572-4951-a77f-48ce-9131c07940d4:[  ]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 ||xenops_server] Queue.pop returned ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Task 83143 reference events: ["VM_check_state","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/1938f572-4951-a77f-48ce-9131c07940d4/rtc/timeoffset token=xenopsd-xc:domain-21
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.reboot 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_destroy","hard-reboot"]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Best_effort",["VM_pause","1938f572-4951-a77f-48ce-9131c07940d4"]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.pause 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["VM_destroy_device_model","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] VM.destroy_device_model 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vm","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VM 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [ warn||13 |events|xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167496 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VM.stat 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: ignoring event for VM 1938f572-4951-a77f-48ce-9131c07940d4: metadata has not changed
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]],["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] begin_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |events|xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 2 atoms
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]] onto Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=1:[  ]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 ||xenops_server] Queue.pop returned ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83144 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],true]]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83145 reference Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"],true]]]
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvda
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.unplug 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] adding device cache for domid 21
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvda; Device is not surprise-removable (ignoring and removing anyway)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/768/online = 0
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; VBD = xvdd; Device is not surprise-removable (ignoring and removing anyway)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown_request frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-write /local/domain/0/backend/vbd3/21/5696/online = 0
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away frontend
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/768
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=768); backend (domid=0 | kind=vbd3 | devid=768)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.rm_device_state frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /xenops/domain/21/device/vbd/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/device/vbd/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/21
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/21/error/device/vbd/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Device.Vbd.release frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] xenstore-rm /local/domain/0/backend/vbd3/21/5696
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.release: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Hotplug.wait_for_unplug: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|hotplug] Synchronised ok with hotplug script: frontend (domid=21 | kind=vbd | devid=5696); backend (domid=0 | kind=vbd3 | devid=5696)
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_utils] TypedTable: Writing extra/1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||14 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83145 completed; duration = 0
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvdd"]]
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvdd
                                    Jan 21 09:12:23 mercurio xenopsd-xc: [debug||167506 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvdd; active devices = [ None ]
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd matched device 3
                                    Jan 21 09:12:23 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvdd device <- xvdd; currently_attached <- true
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.xvda
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||35 |Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83144 completed; duration = 25
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=2.(VBD.unplug vm=1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VIF_unplug",[["1938f572-4951-a77f-48ce-9131c07940d4","0"],true]]
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF.unplug 1938f572-4951-a77f-48ce-9131c07940d4.0
                                    Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vbd",["1938f572-4951-a77f-48ce-9131c07940d4","xvda"]]
                                    Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VBD.stat 1938f572-4951-a77f-48ce-9131c07940d4.xvda
                                    Jan 21 09:12:48 mercurio xenopsd-xc: [debug||167511 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vbd3; id = xvda; active devices = [  ]
                                    Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM 1938f572-4951-a77f-48ce-9131c07940d4 VBD userdevices = [ 3; 0 ]
                                    Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda matched device 0
                                    Jan 21 09:12:48 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VBD 1938f572-4951-a77f-48ce-9131c07940d4.xvda device <- xvda; currently_attached <- true
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VIF_DB.signal 1938f572-4951-a77f-48ce-9131c07940d4.0
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_destroy","1938f572-4951-a77f-48ce-9131c07940d4"]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] VM.destroy 1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; will not have domain-level information preserved
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Removing extra/1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] TypedTable: Deleting extra/1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_utils] DB.delete /var/run/nonpersistent/xenopsd/classic/extra/1938f572-4951-a77f-48ce-9131c07940d4
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: all known devices = [  ]
                                    Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] Processing event: ["Vif",["1938f572-4951-a77f-48ce-9131c07940d4","0"]]
                                    Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenops event on VIF 1938f572-4951-a77f-48ce-9131c07940d4.0
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy: other domains with the same UUID = [  ]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops_server] VIF.stat 1938f572-4951-a77f-48ce-9131c07940d4.0
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||167517 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Device is not active: kind = vif; id = 0; active devices = [  ]
                                    Jan 21 09:12:49 mercurio xapi: [debug||844 |org.xen.xapi.xenops.classic events D:e4e3a2a5e9df|xenops] xenopsd event: Updating VIF 1938f572-4951-a77f-48ce-9131c07940d4.0 currently_attached <- true
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; Domain.destroy calling Xenctrl.domain_destroy
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] About to stop varstored for domain 21 (1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [ warn||13 ||xenops_sandbox] Can't stop varstored for 21 (1938f572-4951-a77f-48ce-9131c07940d4): /var/run/xen/varstored-root-21 does not exist
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; xenstore-rm /local/domain/21
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops] VM = 1938f572-4951-a77f-48ce-9131c07940d4; domid = 21; deleting backends
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["Parallel",["1938f572-4951-a77f-48ce-9131c07940d4","VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4",[["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]]]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] begin_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 ||xenops_server] queue_atomics_and_wait: Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): chunk of 1 atoms
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||13 |queue|xenops_server] Queue.push ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]] onto Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4).chunk=0.atom=0:[  ]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 ||xenops_server] Queue.pop returned ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] Task 83147 reference Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4): ["Atomic",["VBD_epoch_end",[["1938f572-4951-a77f-48ce-9131c07940d4","xvda"],["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]]]]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops_server] VBD.epoch_end ["VDI","d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3/65192a2d-f8f7-41c4-a6b5-9bfdc5110179"]
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [ info||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Processing disk SR=d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 VDI=65192a2d-f8f7-41c4-a6b5-9bfdc5110179
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [error||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
                                    Jan 21 09:12:49 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|xenops] Invalid domid, could not be converted to int, passing empty string.
                                    Jan 21 09:12:49 mercurio xapi: [ info||1293439 ||storage_impl] VDI.epoch_end dbg:Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4) sr:d62b0e6f-e8c4-c0da-3d73-df672a8a8dc3 vdi:65192a2d-f8f7-41c4-a6b5-9bfdc5110179 vm:
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [debug||22 |Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)|task_server] Task 83147 completed; duration = 4
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] end_Parallel:task=83143.atoms=1.(VBD.epoch_end vm=1938f572-4951-a77f-48ce-9131c07940d4)
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_post_destroy","hard-reboot"]]
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_reboot","none"]]
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [debug||13 ||xenops_server] Performing: ["VM_hook_script",["1938f572-4951-a77f-48ce-9131c07940d4","VM_pre_start","none"]]
                                    Jan 21 09:12:53 mercurio xenopsd-xc: [error||13 ||xenops] Failed to read /vm/1938f572-4951-a77f-48ce-9131c07940d4/domains: has this domain already been cleaned up?
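
                                    (The reboot initiated from outside the guest can be picked out of the log above by filtering on the VM UUID and the reboot actions, for example:)

                                        grep 1938f572-4951-a77f-48ce-9131c07940d4 /var/log/xensource.log \
                                            | grep -E 'VM\.reboot|hard-reboot'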
                                    
                                    