XCP-ng

    Pool Master unreachable

    8 Posts 2 Posters 756 Views
    sushant.diwakar | last edited by olivierlambert

      We have some hardware maintenance scheduled on the current pool master, so we changed the pool master. Unfortunately, on the new pool master the xapi service is in a failed state, and I am not able to access any of the virtual machines.
      What could be the solution? I cannot access the pool at all; there are 20 XCP-ng hosts in this pool and 150+ VMs.

      Feb 16 14:53:37 sr0002 xenopsd-xc: [debug||36 |events|xenops_server] VM f83416d8-1c6f-b814-bee1-a958fb22b986 is not requesting any attention
      Feb 16 14:53:37 sr0002 xenopsd-xc: [debug||36 |events|xenops_server] VM_DB.signal f83416d8-1c6f-b814-bee1-a958fb22b986
      Feb 16 14:53:37 sr0002 xenopsd-xc: [debug||36 |events|task_server] Task 144 completed; duration = 0
      Feb 16 14:53:37 sr0002 xenopsd-xc: [debug||36 ||xenops_server] TASK.signal 144 (object deleted)
      Feb 16 14:53:51 sr0002 xcp-rrdd: [ info||9 ||rrdd_main] memfree has changed to 13621020 in domain 1
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 851ded34-a5af-4de6-2226-2eb3cd14b5db
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"] onto 851ded34-a5af-4de6-2226-2eb3cd14b5db:[  ]
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 ||xenops_server] Queue.pop returned ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"]
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 |events|xenops_server] Task 145 reference events: ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"]
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 |events|xenops_server] VM 851ded34-a5af-4de6-2226-2eb3cd14b5db is not requesting any attention
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 |events|xenops_server] VM_DB.signal 851ded34-a5af-4de6-2226-2eb3cd14b5db
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 |events|task_server] Task 145 completed; duration = 0
      Feb 16 14:53:51 sr0002 xenopsd-xc: [debug||11 ||xenops_server] TASK.signal 145 (object deleted)
      Feb 16 14:54:21 sr0002 xcp-rrdd: [ info||9 ||rrdd_main] memfree has changed to 34375344 in domain 3
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM bf3352b8-117b-8882-834f-6933dd8486ad
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","bf3352b8-117b-8882-834f-6933dd8486ad"] onto bf3352b8-117b-8882-834f-6933dd8486ad:[  ]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 ||xenops_server] Queue.pop returned ["VM_check_state","bf3352b8-117b-8882-834f-6933dd8486ad"]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 |events|xenops_server] Task 146 reference events: ["VM_check_state","bf3352b8-117b-8882-834f-6933dd8486ad"]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 |events|xenops_server] VM bf3352b8-117b-8882-834f-6933dd8486ad is not requesting any attention
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 |events|xenops_server] VM_DB.signal bf3352b8-117b-8882-834f-6933dd8486ad
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 |events|task_server] Task 146 completed; duration = 0
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||25 ||xenops_server] TASK.signal 146 (object deleted)
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 1f856394-5cb5-6446-0cb1-7b88871a8a7f
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","1f856394-5cb5-6446-0cb1-7b88871a8a7f"] onto 1f856394-5cb5-6446-0cb1-7b88871a8a7f:[  ]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 ||xenops_server] Queue.pop returned ["VM_check_state","1f856394-5cb5-6446-0cb1-7b88871a8a7f"]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 |events|xenops_server] Task 147 reference events: ["VM_check_state","1f856394-5cb5-6446-0cb1-7b88871a8a7f"]
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 |events|xenops_server] VM 1f856394-5cb5-6446-0cb1-7b88871a8a7f is not requesting any attention
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 |events|xenops_server] VM_DB.signal 1f856394-5cb5-6446-0cb1-7b88871a8a7f
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 |events|task_server] Task 147 completed; duration = 0
      Feb 16 14:54:21 sr0002 xenopsd-xc: [debug||21 ||xenops_server] TASK.signal 147 (object deleted)
      Feb 16 14:54:30 sr0002 xcp-rrdd: [ info||7 ||rrdd_main] GC live_words = 748272
      Feb 16 14:54:30 sr0002 xcp-rrdd: [ info||7 ||rrdd_main] GC heap_words = 1554432
      Feb 16 14:54:30 sr0002 xcp-rrdd: [ info||7 ||rrdd_main] GC free_words = 806088
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM f83416d8-1c6f-b814-bee1-a958fb22b986
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","f83416d8-1c6f-b814-bee1-a958fb22b986"] onto f83416d8-1c6f-b814-bee1-a958fb22b986:[  ]
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 ||xenops_server] Queue.pop returned ["VM_check_state","f83416d8-1c6f-b814-bee1-a958fb22b986"]
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 |events|xenops_server] Task 148 reference events: ["VM_check_state","f83416d8-1c6f-b814-bee1-a958fb22b986"]
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 |events|xenops_server] VM f83416d8-1c6f-b814-bee1-a958fb22b986 is not requesting any attention
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 |events|xenops_server] VM_DB.signal f83416d8-1c6f-b814-bee1-a958fb22b986
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 |events|task_server] Task 148 completed; duration = 0
      Feb 16 14:54:37 sr0002 xenopsd-xc: [debug||23 ||xenops_server] TASK.signal 148 (object deleted)
      Feb 16 14:54:45 sr0002 xcp-rrdd: [ info||0 monitor_write|main|rrdd_server] Failed to process plugin metrics file: xcp-rrdd-gpumon ((Invalid_argument\x0A  "Cstruct.blit_to_bytes src=[0,0](0) dst=[11] src-off=0 len=11"))
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM 851ded34-a5af-4de6-2226-2eb3cd14b5db
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"] onto 851ded34-a5af-4de6-2226-2eb3cd14b5db:[  ]
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 ||xenops_server] Queue.pop returned ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"]
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 |events|xenops_server] Task 149 reference events: ["VM_check_state","851ded34-a5af-4de6-2226-2eb3cd14b5db"]
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 |events|xenops_server] VM 851ded34-a5af-4de6-2226-2eb3cd14b5db is not requesting any attention
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 |events|xenops_server] VM_DB.signal 851ded34-a5af-4de6-2226-2eb3cd14b5db
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 |events|task_server] Task 149 completed; duration = 0
      Feb 16 14:54:52 sr0002 xenopsd-xc: [debug||27 ||xenops_server] TASK.signal 149 (object deleted)
      [14:55 sr0002 ~]#
      

      Please consider this as urgent.
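
      For anyone who lands here in the same state (new master with xapi in a failed state, pool unreachable), the usual recovery path can be sketched as below. This is a sketch, not a definitive runbook: it assumes SSH/console access to the intended new master, and `xe pool-emergency-transition-to-master` must only be run when the old master is genuinely down.

      ```shell
      # On the intended new master: check why xapi failed.
      systemctl status xapi
      tail -n 50 /var/log/xensource.log

      # A full toolstack restart is often enough to bring xapi back:
      xe-toolstack-restart

      # If the host still believes the old (dead) master is in charge,
      # force it to take the master role (old master must be offline):
      xe pool-emergency-transition-to-master

      # Then tell the remaining pool members to follow the new master:
      xe pool-recover-slaves
      ```

      If xapi keeps failing after a toolstack restart, the errors in /var/log/xensource.log are the place to look before forcing anything.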

      olivierlambert (Vates 🪐 Co-Founder & CEO) | last edited by

        Hi!

        For urgent issues, we'll be happy to treat your tickets as per our guaranteed SLA with your existing pro support. Please open a ticket so we can assist you ASAP!

      sushant.diwakar @olivierlambert | last edited by

        @olivierlambert
        Hi, we have an XOA Premium license.

        Can you help us?

      olivierlambert (Vates 🪐 Co-Founder & CEO) | last edited by olivierlambert

        So you have 20 XCP-ng hosts with 150+ VMs in production without any pro support? 😬

        Please contact us. We usually don't do support after you have an issue (here is why), but we can imagine you simply missed the fact that we offer XCP-ng support (see https://xcp-ng.com or https://vates.tech/pricing-and-support/). That said, last time I checked you should have various warnings in your XO UI about unsupported hosts 🤔

        Anyway, let's move forward: contact us.

      sushant.diwakar @olivierlambert | last edited by

        @olivierlambert
        I will take this forward to my management regarding support.

        In the meantime, I have found a possible solution; should I do this, or is there another way to start the xapi service?
        https://xcp-ng.org/forum/topic/4721/pool-master-went-down-all-other-nodes-claims-not-to-be-in-a-pool/4?_=1708076813182
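
        For reference, the linked thread covers the emergency case (master already down). The planned way to hand over the master role before maintenance, while the current master is still healthy, looks roughly like this; `<new-master-uuid>` is a placeholder, not a value from this thread:

        ```shell
        # List pool members to find the UUID of the intended new master.
        xe host-list

        # While the current master is still up, hand over the role cleanly
        # (can be run from any pool member):
        xe pool-designate-new-master host-uuid=<new-master-uuid>
        ```

        Doing the handover this way, before taking the old master down for maintenance, avoids the emergency-transition path entirely.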

      olivierlambert (Vates 🪐 Co-Founder & CEO) | last edited by

                Please contact us and open a support ticket so we can take a look remotely.

      sushant.diwakar @olivierlambert | last edited by

        @olivierlambert
        I have created a ticket with support.

        Ticket #7721836

    sushant.diwakar deleted this topic
    Danp restored this topic
    Danp referenced this topic