
    XCP-ng 7.6 RC1 available

• olivierlambert (Vates 🪐 Co-Founder CEO):

Okay, so it's not the right topic indeed, but I really want to know whether you still have these issues on the 7.6 RC 🙂

• rizaemet 0 (@olivierlambert):

I will try and will post the results.

• olivierlambert (Vates 🪐 Co-Founder CEO):

Thanks a lot! It's important to test between two 7.6 hosts, to be sure it's still an issue and not a problem from another version in the middle 🙂
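
(For anyone reproducing this from the CLI, a minimal sketch; the VM and host names are placeholders:)

    # Live-migrate a running VM to another host in the same pool
    xe vm-migrate vm=<vm-name-or-uuid> host=<target-host-name> live=true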

• conradical:

olivierlambert said in XCP-ng 7.6 RC1 available:

    test between two 7.6 hosts, to be sure it's still an issue and not a problem from another version in the middle

Mine is between multiple 7.6 hosts: PV tools, no PV tools, Linux, Windows. It all does it.

• rizaemet 0 (@conradical):

Does the VM you're trying to migrate have any snapshots? Is this a storage live migration or a VM live migration?
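
(A quick way to check for snapshots from dom0, with the UUID as a placeholder:)

    # List any snapshots belonging to a given VM
    xe snapshot-list snapshot-of=<vm-uuid>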

• conradical (@rizaemet 0):

No snapshots. It happens on both storage migration and live migration.

• olivierlambert (Vates 🪐 Co-Founder CEO):

Any activity in the VM, or is it idle?

                  Are they in the same pool?

• conradical (@olivierlambert):

Same pool. It does not seem to matter whether it is idle or not.

• conradical:

Interesting update: I installed the latest PV tools by enabling Windows Update, and this fixed 4 out of my 5 test VMs running Windows. More testing is underway with Linux PV and HVM guests.
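
(To confirm which driver version a guest actually reports, one option from dom0; the UUID is a placeholder:)

    # Show the PV driver version the guest reports to the toolstack
    xe vm-param-get uuid=<vm-uuid> param-name=PV-drivers-version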

• olivierlambert (Vates 🪐 Co-Founder CEO):

That may be the reason why I can't reproduce it. It seems old tools lead to migration problems, but more recent ones don't 😕

• conradical (@olivierlambert):

olivierlambert said in XCP-ng 7.6 RC1 available:

    That may be the reason why I can't reproduce it. It seems old tools lead to migration problems, but more recent ones don't 😕

HVM guests also have migration problems.

• olivierlambert (Vates 🪐 Co-Founder CEO):

I'm testing with PVHVM guests; I can't reproduce it since I have the latest tools installed 😞

• KFC-Netearth:

I don't know if this helps, but I have also been doing some tests on 7.5, and it looks like VMs that are using a large percentage of their memory have problems migrating.
E.g. a CentOS 7 HVM guest whose Atlassian Bitbucket Java processes use about 80% of memory.
From top:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1874 atlbitb+  20   0 3695600   1.3g   6440 S   0.3 42.8   0:52.40 java
    1891 atlbitb+  20   0 4154320   1.1g   8124 S   1.3 36.5   6:40.68 java

Live migration stops at 99% and stays there until you cancel the migration; I have also seen it go to 100% and the VM crash. You then need to restart the toolstack on the receiving server, or /var/log/xcp-rrdd-plugins.log fills up with these messages:

    xcp-rrdd-squeezed: [ warn|xen1-3|0 ||rrdd-plugins] Couldn't find cached dynamic-max value for domain 66, using 0

If I shut down the two Java processes, it migrates OK.

I have also seen the same problem when migrating virtual disks between iSCSI SRs.
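
(To compare the guest's view of memory with what the toolstack has configured, a rough sketch; the UUID is a placeholder:)

    # Inside the guest: current memory usage
    free -m
    # From dom0: the VM's dynamic memory range as xapi sees it
    xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-min
    xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max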

• borzel (XCP-ng Center Team, @KFC-Netearth):

I edited your post to make the top output easier to read.

• olivierlambert (Vates 🪐 Co-Founder CEO, @KFC-Netearth):

That's really interesting. Do you have the latest tools version installed in the VM?

                                  edit: I'll try to use 80%+ RAM on a test VM and see if it triggers the problem 🙂

• KFC-Netearth:

I have this version of the guest tools on the VM:

    # rpm -aq | grep xen
    xe-guest-utilities-xenstore-7.10.0-1.x86_64
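
(If the tools ever need updating on a Linux guest, the usual route is the guest-tools ISO shipped with the host; roughly, assuming the ISO is attached as the guest's CD drive and the installer path matches your release:)

    # Inside the guest, after attaching guest-tools.iso to the VM
    mount /dev/cdrom /mnt
    /mnt/Linux/install.sh
    umount /mnt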

• olivierlambert (Vates 🪐 Co-Founder CEO):

Seems to be the latest tools.

Do you see anything in /var/log/xensource.log or /var/log/SMlog during the failed migration?
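
(For example, something along these lines on both hosts while reproducing the failure:)

    # Follow both logs during the migration attempt
    tail -f /var/log/xensource.log /var/log/SMlog | grep -iE 'migrate|error'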

• Aleksander (@conradical):

Possible iSCSI problem:

I installed XCP-ng 7.6 RC1 on a 2-node testing pool, and I get an error when I try to attach an iSCSI SR:

"The SR could not be connected because the driver gfs2 was not recognized. Check your settings and try again."

(screenshot: XCP-NG_7.6-iSCSI error.jpg — upload did not complete)

I created a THICK volume:

(screenshot: XCP-NG_7.6-iSCSI create.png — upload did not complete)

Citrix XenServer 7.6 works fine with the same QNAP.

Any idea what I'm doing wrong, or have I encountered a bug in XCP-ng 7.6?
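
(To rule the GUI out, a hedged sketch of probing and attaching an LVM-over-iSCSI SR directly via xe; all device-config values and UUIDs are placeholders:)

    # Probe the target to discover the IQN, then the SCSI ID
    xe sr-probe type=lvmoiscsi device-config:target=<qnap-ip>
    xe sr-probe type=lvmoiscsi device-config:target=<qnap-ip> device-config:targetIQN=<iqn>
    # Create the SR with the discovered values
    xe sr-create host-uuid=<host-uuid> type=lvmoiscsi shared=true content-type=user \
        name-label="QNAP iSCSI" device-config:target=<qnap-ip> \
        device-config:targetIQN=<iqn> device-config:SCSIid=<scsi-id>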

• borzel (XCP-ng Center Team, @olivierlambert):

First test results:

Situation

• test pool at work with 3 nodes
• XCP-ng 7.5 with the latest regular updates
• VM: Debian 8.9, 2 vCPUs, 2 GB RAM, latest updates, xe-guest-utilities 7.4.0-1, x64, PVHVM

                                          Test 1 - VM is idle

                                          • VM migrates just fine

Test 2 - VM stressed with stress --cpu 1 (a reproduction sketch follows below)

• VM migration crashed with an error; the VM stopped:
    Oct 28 16:17:52 xen5 xenopsd-xc: [debug|xen5|32 ||xenops_server] TASK.signal 192582 = ["Failed",["Internal_error","End_of_file"]]
    ...
    Oct 28 16:17:52 xen5 xapi: [debug|xen5|78975 INET :::80|Async.VM.pool_migrate R:e90b0ed1692e|xapi] xenops: will retry migration: caught Xenops_interface.Internal_error("End_of_file") from sync_with_task in attempt 1 of 3.
    ...
    Oct 28 16:17:52 xen5 xenopsd-xc: [ info|xen5|28 |Async.VM.pool_migrate R:e90b0ed1692e|xenops_server] Caught Xenops_interface.Does_not_exist(_) executing ["VM_migrate",["0c19e8aa-24d2-a6a4-5677-8ef4af6dd592",{},{},{},"http://xxx.xxx.xxx.xxx/services/xenops?session_id=OpaqueRef:89ad861c-f0fd-4d70-bb0d-4bf65741f13a"]]: triggering cleanup actions

                                          (full log: https://schulzalex.de/nextcloud/index.php/s/pT456wEZkBoJrTw)

                                          I had to execute xe-toolstack-restart on the whole pool to get migration working again.
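
(Reproduction sketch for Test 2, assuming the stress package is available in the guest's repositories; names are placeholders:)

    # Inside the guest: generate CPU load for the test window
    apt-get install -y stress
    stress --cpu 1 --timeout 600
    # From dom0, in parallel: attempt the live migration
    xe vm-migrate vm=<vm-name> host=<target-host> live=true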

Test 3 - VM is idle

• migrates just fine, like in Test 1

Test 4 - VM is executing apt upgrade

• migration gets stuck at ...VM = 0c19e8aa-24d2-a6a4-5677-8ef4af6dd592; domid = 26; progress = 99 / 100... until the upgrade inside the VM has finished and the VM is idle

Partial output of tail -f /var/log/xensource.log | grep -i migrate | grep -i progress:

    ...
    Oct 28 16:57:56 xen5 xenopsd-xc: [debug|xen5|14 |Async.VM.pool_migrate R:ccb8ffcf5ce5|xenops] VM = 0c19e8aa-24d2-a6a4-5677-8ef4af6dd592; domid = 26; progress = 99 / 100
    Oct 28 16:57:57 xen5 xenopsd-xc: [debug|xen5|14 |Async.VM.pool_migrate R:ccb8ffcf5ce5|xenops] VM = 0c19e8aa-24d2-a6a4-5677-8ef4af6dd592; domid = 26; progress = 99 / 100
    (the same line repeats several times per second until the upgrade finishes)
    Oct 28 16:58:00 xen5 xenopsd-xc: [debug|xen5|14 |Async.VM.pool_migrate R:ccb8ffcf5ce5|xenops] VM = 0c19e8aa-24d2-a6a4-5677-8ef4af6dd592; domid = 26; progress = 99 / 100
    ...
• in the end, the VM migrates just fine
• borzel (XCP-ng Center Team):

                                            Test 5 - VM idle (with latest guest tools xe-guest-utilities 7.10.0-1)

                                            • migrated just fine

                                            Test 6 - VM stressed with stress --cpu 1 (with latest guest tools xe-guest-utilities 7.10.0-1)

                                            • same error as in Test 2
• log file from the receiving side: https://schulzalex.de/nextcloud/index.php/s/XCPPzN5qcda59wL