XCP-ng
    Help: Clean shutdown of Host, now no network or VMs are detected

    Solved | Compute | 31 Posts | 6 Posters | 5.8k Views
    • guiltykeyboard @Danp

      @Danp

      Good advice.

      Occasionally our older servers crap themselves with this same issue; an emergency network reset on the pool master, followed by rebooting the other hosts once the master is back up, usually resolves it.

      • CodeMercenary @guiltykeyboard

        @guiltykeyboard

        Thanks for the input. Good to know I might need to do the reset on the pool master, not just the host that's impacted. Fortunately, the impacted host is not too big a deal for me to reboot. The pool master will be more annoying since I don't use shared storage and my VMs are big enough that I'd rather not migrate them. Not a big deal to do it some evening or over a weekend.

        • guiltykeyboard @CodeMercenary

          @CodeMercenary Try an emergency network reset on the server that is directly having the issue; if that doesn't work, try restarting the toolstack on the pool master and then try a network reset there as well.
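
The escalation order advised in this thread can be sketched as a dom0 shell session. `xe-toolstack-restart` and `xe-reset-networking` are the stock XCP-ng dom0 tools, but confirm against your version's docs; the guard below only prints each step when run outside an actual dom0:

```shell
#!/bin/sh
# Sketch of the escalation path from this thread. Run in the dom0 console of
# the host named in each comment; outside a dom0, "maybe" just prints the step.
maybe() {
    echo "step: $*"
    # Only execute for real when the XCP-ng tool is actually present (dom0).
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    fi
}

# 1) On the affected host: restarting the toolstack leaves running VMs alone.
maybe xe-toolstack-restart

# 2) If the host still cannot see the pool master, run the same command on the
#    master itself.
# 3) Last resort, on the broken host: the emergency network reset. This one is
#    interactive and reboots the host when it finishes.
maybe xe-reset-networking
```

Running the toolstack restarts first matters: they are non-disruptive to guests, while the network reset ends in a host reboot.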

          Recommend that you set up two hosts with shared storage between them so that you can live-migrate your VMs to the other host and elect it as master temporarily when you do maintenance.

          • CodeMercenary @guiltykeyboard

            @guiltykeyboard Thank you for the guidance. I'll try that. I still think it's super weird that the server did get back into the pool and seems to be working fine. It's just that the physical console still says it can't find the pool master and has no NICs installed.

            The main reason I don't have shared storage is due to concerns over cost. Years ago, I was thinking of setting up a SAN for my VMware servers and they were crazy expensive, way over the budget of the small company I work for. I think I stopped researching when I ran across an article that was titled something like "How to build your own SAN for less than $10,000" and I realized I was way out of my league.

            I do have a bigger budget now, though I would not be able to spend $10k to build shared storage. Any recommendations for reliable shared storage that isn't crazy expensive? One thing that helps now is that each of my servers has dual 10GbE Ethernet, something I didn't have the last time I looked into this.

            I've been keeping my eye on XOSTOR lately since I have storage in all of my servers. Unfortunately, from what I've seen, it requires three servers in the cluster, and two of my servers have SSDs while the other has only HDDs, so I suspect that third one would slow down the other two, since a write isn't considered complete until all servers are done writing. XOSTOR feels safer than shared storage, since shared storage would be a single point of failure. (Admittedly, right now I also have multiple single points of failure since I use local storage.)

            • guiltykeyboard @CodeMercenary

              @CodeMercenary Have you restarted the toolstack now that it is doing what it needs to do?

              That might make the console show what it needs.

              Also try exiting out of the console and then starting it back up with xsconsole.

              • CodeMercenary @guiltykeyboard

                @guiltykeyboard Restarting the toolstack didn't fix it.

                When I use ctrl-C to exit the console it says it's resetting the console and the new instance still says it can't reach the pool master.

                Looking like the network reset on the troublesome host is the next step, then maybe a reboot. Then I can try your suggestions on the pool master. I'm just a little gun-shy about messing with the master because of this happening to a server that should have just worked.

                • guiltykeyboard @CodeMercenary

                  @CodeMercenary Try the toolstack reset on that host.

                  Maybe do a toolstack reset on the pool master as well; it doesn't affect running VMs.

                  If that doesn't work, try an emergency network reset on the host having trouble.

                  • CodeMercenary @guiltykeyboard

                    @guiltykeyboard Restarting the toolstack on the pool master did not fix it, so I went ahead and did the emergency network reset on the host having the issue. It came up normally after the reboot. The emergency reset was super easy. Thank you for helping me with this. I'm still learning XCP-ng and trying to understand what's happening before trying things to fix it, so in the future I'll know more and hopefully be able to help other people.

                    • guiltykeyboard @CodeMercenary

                      @CodeMercenary Glad it worked out for you.

                      • CodeMercenary

                        Another bit of strangeness. I just noticed that some older disaster recovery backup VMs were booted up on my pool master host. I looked at the stats and they all booted four hours ago, right around when I tried restarting the toolstack on the pool master. All of them were set to auto-start, which seems an odd setting for disaster recovery VMs unless something is supposed to stop them from auto-starting. Easy enough to shut down, but kind of strange that they booted. Surely disaster recovery VMs aren't supposed to power up on restart, right?

                      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                          Indeed, they shouldn't. Were you using DR or CR?

                          • olivierlambert marked this topic as a question, then as solved.
                          • CodeMercenary @olivierlambert

                            @olivierlambert It was DR. I was testing DR a while ago and after running it once I disabled the backup job so these backups have just been sitting on the server. I don't think I've rebooted that server since running that backup.

                            • nick.lloyd referenced this topic.
                            • CodeMercenary

                              Just want to document that this happened again, on the same host.

                              My XO (from source) that manages my backups ran the backups last night. I know it was active until at least around 5:30am, but by the time I got into the office it was inaccessible by browser, SSH, and ping. Other XO instances showed it as running, but the Console tab didn't give me access to its console.

                              A few hours later I found that other VMs on that same host had become inaccessible in the same fashion and also had no console showing in XO.

                              An hour or two later I found that XO showed the host as missing from the pool, which it had not been earlier in the day. When I checked the physical console for that host, it showed red text reading "<hostname> login: root (automatic login)" and did not respond to the keyboard other than echoing whatever I typed in more red text. I hit Ctrl-Alt-Del and it didn't seem to do anything, so I typed random things, and then XCP-ng started rebooting. I'm guessing it was 60 to 120 seconds after my first Ctrl-Alt-Del.

                              When it came back up it could not find the pool master and said it had no network interfaces. I was able to solve it by doing another emergency network reset.

                              It would be nice if this didn't happen; it makes me nervous about stability. I'm thankful that both times this has happened it was on my least mission-critical server. However, it's the server that handles backups, so it's still stressful to have it go down. It also makes me wonder whether there might be something wrong with that server's hardware.

                              • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                You need to check the logs, because obviously this isn't normal. Check whether you have a /var/crash folder with anything in it; otherwise, check the usual logs (on both the Xen and dom0 side).
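
The places mentioned above can be checked with a quick inventory; the paths below are the default XCP-ng 8.x dom0 locations, so adjust them if you relocated logging:

```shell
# List the default crash-dump and log locations on an XCP-ng dom0 and report
# which exist. (Default 8.x paths; adjust if you ship logs elsewhere.)
check_logs() {
    for p in /var/crash /var/log/kern.log /var/log/xensource.log /var/log/daemon.log; do
        if [ -e "$p" ]; then
            echo "present: $p"
        else
            echo "missing: $p"
        fi
    done
}
check_logs
```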

                                • CodeMercenary @olivierlambert

                                  @olivierlambert I do have a /var/crash folder, but it has nothing in it except a file from a year ago named .sacrificial-space-for-logs.

                                  The Xen logs are verbose. Any suggestions of text to grep for to find what I'm looking for, other than the obvious "error"?

                                  Currently looking in xensource.log.* for error lines to see if I can figure anything out.
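
One way to cut the noise is to keep only the error/warn severities plus anything mentioning the master. The excerpt below is fabricated to demonstrate the filter; on the host you would point the grep at /var/log/xensource.log* instead:

```shell
# Hypothetical xensource.log sample, used only so the filter has input.
cat > /tmp/xensource.sample <<'EOF'
Apr  1 05:29:58 xcp1 xapi: [ info||123 |session| ] session.login_with_password
Apr  1 05:31:12 xcp1 xapi: [error||456 |dbsync| ] Host_metrics lookup failed
Apr  1 05:31:13 xcp1 xapi: [ warn||457 |pool| ] Cannot contact pool master
EOF

# Drop the info noise: keep error/warn lines plus anything about the master.
grep -Ei 'error|warn|master' /tmp/xensource.sample
```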

                                  • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                    Kernel log and Xen log; check for anything around the time of the issue.

                                    • CodeMercenary @olivierlambert

                                      @olivierlambert Having trouble finding the reboot in the log files because I don't know what to look for. I have nearly 1GB of log files from that day and, unfortunately, I don't recall when the reboot happened. Is there something I can grep for in the log that would indicate the reboot, so I can backtrack from there?
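
Every kernel boot writes a "Linux version ..." banner to the kernel log, so that banner is a reliable marker for each reboot; whatever evidence exists sits in the lines just before it. The sample below is fabricated for illustration; on the host, run the grep against /var/log/kern.log* instead:

```shell
# Fabricated kern.log sample: one normal message, then a boot banner.
cat > /tmp/kern.sample <<'EOF'
Apr  1 03:00:01 xcp1 kernel: usb 1-1: new high-speed USB device number 2
Apr  1 05:42:10 xcp1 kernel: Linux version 4.19.0 (build@local) (gcc 7.5.0)
Apr  1 05:42:10 xcp1 kernel: Command line: root=/dev/sda1 ro
EOF

# Print each boot banner with its line number, then read backwards from there.
grep -n 'Linux version' /tmp/kern.sample
```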

                                      Tried feeding the GB of log files to Phi4 on my Ollama server, but so far it has not been any help finding anything either. Well, it did find how to make my office a lot louder by running the server fans at full speed for a few minutes, but that wasn't helpful.

                                      • CodeMercenary @CodeMercenary

                                        Looking through the kern.log files I found something that seemed interesting, but as I scrolled I saw it happening so often that I started to wonder whether it's normal. I see sets of three events:
                                        Out of memory: Kill process ###
                                        Killed process ###
                                        oom_reaper: reaped process ###

                                        I wonder if it was starving for memory, killing off processes to survive, until it eventually died. I see these 90+ times in one log file and over 200 times in another. I don't know whether this is normal activity or an indication of a problem.
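
Repeated OOM kills inside dom0 are generally not background noise, and tallying which processes were killed can show whether it was the toolstack or guests' helper processes that suffered. The sample below mirrors the three-event pattern quoted above (the process names are made up for illustration); on the host, run the same pipeline against /var/log/kern.log*:

```shell
# Fabricated kern.log sample with two OOM kill events.
cat > /tmp/oom.sample <<'EOF'
Apr  1 05:10:01 xcp1 kernel: Out of memory: Kill process 1234 (tapdisk) score 512
Apr  1 05:10:01 xcp1 kernel: Killed process 1234 (tapdisk) total-vm:204800kB
Apr  1 05:10:01 xcp1 kernel: oom_reaper: reaped process 1234 (tapdisk), now anon-rss:0kB
Apr  1 05:20:44 xcp1 kernel: Out of memory: Kill process 2345 (qemu-dm) score 300
EOF

# Count the kills, then rank the victim process names by frequency.
grep -c 'Out of memory' /tmp/oom.sample
grep 'Out of memory' /tmp/oom.sample \
  | sed -E 's/.*\(([^)]+)\).*/\1/' \
  | sort | uniq -c | sort -rn
```

If the same names (e.g. the toolstack or backup helpers) dominate the ranking, that points at dom0 memory pressure rather than random guest behavior.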

                                        • kassebasse @CodeMercenary

                                          @CodeMercenary I had a similar issue to yours; check the time and the date on the host.

                                          If this might be any help to you: https://xcp-ng.org/forum/topic/10687/after-an-update-the-nic-s-has-disappeared-but-still-works-somehow/10?_=1743764792775
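
A quick way to run that check from the dom0 shell is sketched below; a large time offset between pool members can make them mistrust each other. Which NTP client is present depends on your XCP-ng version, so the guard falls back to comparing plain date output:

```shell
# Print the current UTC time, and the NTP tracking status if chronyc exists.
date -u
if command -v chronyc >/dev/null 2>&1; then
    chronyc tracking
else
    echo "chronyc not found; compare 'date -u' output across all pool members"
fi
```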
