    RunX: tech preview

    • bc-23 @ronan-a

      @ronan-a The server is still running on 8.2

      [11:21 fraxcp04 ~]# rpm -qa | grep xenops
      xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64
      xenopsd-xc-0.150.5.1-1.1.xcpng8.2.x86_64
      xenopsd-cli-0.150.5.1-1.1.xcpng8.2.x86_64
      

      Are the patches meant for this version?

      • ronan-a Vates 🪐 XCP-ng Team @bc-23

        @bc-23 You don't have the patched RPMs because a new hotfix landed in the 8.2 and 8.2.1 versions on the main branch, so the current xenopsd package version is now greater than the RunX one... We must build a new version of the runx packages on our side to correct this issue. We will fix that. 😉

        • bc-23 @ronan-a

          @ronan-a I see that there are updated packages now, thanks 🙂
          After the update I'm able to start the container/VM 🙂

          • matiasvl @olivierlambert

            For those who would like to try this using xe, here is what I did to create the correct template. I started from a Debian 10 template; replace the UUIDs below with your own (2 vCPUs):

            xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
            xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a HVM-boot-policy=""
            xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a PV-args=""
            xe vm-param-set VCPUs-max=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
            xe vm-param-set VCPUs-at-startup=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
            xe vm-disk-remove device=0 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
            xe template-param-set is-a-template=true uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
            

            The template is listed when you issue xe template-list.
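
            For example, a quick check afterwards (assuming the name label used above):

            xe template-list name-label=tempforrunx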

            • r3m8

              Hi,

              Same as @bc-23, I get this error:

              message: xenopsd internal error: Could not find File, BlockDevice, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:85daf561-836e-48f1-9b74-1dfef38abe9e share_dir none ///root/runx-sr/1"}]]}
              

              This is my rpm -qa | grep xenops output (my XCP-ng is up to date):

              xenopsd-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
              xenopsd-xc-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
              xenopsd-cli-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
              

              Is it still the runx package that causes problems? Thank you all 🙂

              • ronan-a Vates 🪐 XCP-ng Team @r3m8

                @r3m8 Weird, did you run a xe-toolstack-restart?
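
                For reference, the toolstack can be restarted on the host without rebooting it or the running VMs:

                xe-toolstack-restart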

                • r3m8 @ronan-a

                  @ronan-a We reviewed our SR and template configuration (especially the xe vm-disk-remove device=0 step) and it works fine now (we had already done an xe-toolstack-restart to avoid restarting the hypervisor).

                  • etomm

                    Hello all! After testing this and following the guidelines, my XCP-ng is no longer able to run VMs.

                    When I restart the host and try to run a VM, it complains that HVM is needed. I just checked the BIOS: VT-d is enabled, as are all the other settings that were there before testing this out.

                    What can I do?

                    • ronan-a Vates 🪐 XCP-ng Team @etomm

                      @etomm Could you share the full error message/trace please? 🙂

                      • etomm @ronan-a

                        @ronan-a I was able to make it start again by doing a yum update.

                        Then I think I made a mistake, because I ran the following line:

                        yum remove --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                        

                        This killed my xapi.service; it's not starting anymore. If you tell me how to find the log, I can give you the trace.

                        • ronan-a Vates 🪐 XCP-ng Team @etomm

                          @etomm Why this yum remove command? You just removed the packages that allow XAPI to manage VMs. 😅 You can try to reinstall them using yum install.
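
                          For example, something along these lines should bring them back (the same package list as the install command shown further down in this thread):

                          yum install --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx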

                          • etomm @ronan-a

                            @ronan-a I was under the wrong assumption that everything installed in this guide consisted of new packages just for RunX.

                            So when it began to give me problems, I went for the remove option. My bad!

                            It all started from the fact that, after I got the VMs to start again by updating the packages, I was losing connectivity on the host as soon as I started dockerd.

                            So I wanted to remove it.

                            • jmccoy555

                              I've been interested in this since the first blog post but never got round to trying it. Is this still a 'thing'? I'm having little luck getting it to work: I'm seeing most of the errors already posted and have tried the fixes, but still no luck. I would guess I may have a template issue...

                              Mainly wondering if it's worth some effort, or if it's best to just run Docker in a VM?

                              Thanks.

                              • olivierlambert Vates 🪐 Co-Founder CEO

                                Hello!

                                @ronan-a will guide you if you have problems making it work 🙂

                                There's still some work needed to make it a 100% feature-complete product. In the meantime, if you want to go to production, go for a VM with containers inside 🙂

                                • jmccoy555 @olivierlambert

                                  @olivierlambert OK, if it's still planned to be a feature then I'm up for playing... testing!!

                                  @ronan-a is this the best way to get the template? The rest of the instructions look pretty simple to follow so I don't think I've got them wrong..... 😄

                                  xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                                  this uuid is of the SR created by the step in the first post?

                                  xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa HVM-boot-policy=""
                                  uuid that is a result of the first step?

                                  xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa PV-args=""
                                  uuid that is a result of the first step?

                                  xe vm-param-set VCPUs-max=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                                  what is this uuid?

                                  xe vm-param-set VCPUs-at-startup=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                                  same again, where does this uuid come from?

                                  xe vm-disk-remove device=0 uuid=cb5a6d67-07d5-b5ea-358a-7ee0d6e535af
                                  and this one too?

                                  xe template-param-set is-a-template=true uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa
                                  uuid generated by the 1st step

                                  Thanks.

                                  • ronan-a Vates 🪐 XCP-ng Team @jmccoy555

                                    @jmccoy555 said in RunX: tech preview:

                                    this uuid is of the SR created by the step in the first post?

                                    Right!

                                    Regarding all the xe vm-param-set/xe vm-disk-remove/xe template-param-set commands, you must use the VM UUID returned by the xe vm-install command. I edited the post accordingly. 😉
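
                                    In other words, a minimal sketch (the template name is the one from the example above; the SR UUID is just a placeholder):

                                    VM_UUID=$(xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=<your-sr-uuid> --minimal)
                                    xe vm-param-set uuid=$VM_UUID HVM-boot-policy=""
                                    xe vm-param-set uuid=$VM_UUID PV-args=""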

                                    • jmccoy555 @ronan-a

                                      @ronan-a thanks (it was obviously too late for me to think of trying that!!), but no luck 😞

                                      [18:36 bad-XCP-ng-Host-03 ~]# xe vm-install template=Debian\ Buster\ 10 new-name-label=RunX sr-uuid=cb817299-f3ee-9d4e-dd5d-9edad6e55ed0 --minimal
                                      b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                      [18:37 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 HVM-boot-policy=""
                                      [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 PV-args=""
                                      [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-max=2
                                      [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-at-startup=2
                                      [18:39 bad-XCP-ng-Host-03 ~]# xe vm-disk-remove device=0 uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                      [18:39 bad-XCP-ng-Host-03 ~]# xe template-param-set is-a-template=true uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                      [18:41 bad-XCP-ng-Host-03 ~]# nano /etc/runx.conf
                                      [18:43 bad-XCP-ng-Host-03 ~]# podman container create --name archlinux archlinux
                                      0f72916b8cca6c7c4aa68e6ebee467c0480e29e22ebbd6ec666d8385826f88b8
                                      [18:43 bad-XCP-ng-Host-03 ~]# podman start archlinux
                                      The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
                                      message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:1e2e7b99-2ffe-4129-a468-96b57ab685de share_dir none ///root/runx-sr/2"}]]}
                                      archlinux6
                                      [18:44 bad-XCP-ng-Host-03 ~]# rpm -qa | grep xenops
                                      xenopsd-xc-0.150.12-1.2.xcpng8.2.x86_64
                                      xenopsd-cli-0.150.12-1.2.xcpng8.2.x86_64
                                      xenopsd-0.150.12-1.2.xcpng8.2.x86_64
                                      

                                      Not sure what I'm doing wrong now, or if I need an update??

                                      When I created the SR I had to give a host-uuid. I obviously used the host that I'm installing on.
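
                                      For reference, the host UUID can be looked up with a plain xe query (the host name label here is only an example, taken from the prompt above):

                                      xe host-list name-label=bad-XCP-ng-Host-03 --minimal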

                                      • jmccoy555 @jmccoy555

                                        If this helps...

                                        yum install --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                                        Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
                                        
                                        This system is not registered with an entitlement server. You can use subscription-manager to register.
                                        
                                        Loading mirror speeds from cached hostfile
                                         * centos-ceph-nautilus: mirror.as29550.net
                                         * centos-nfs-ganesha28: mirrors.vinters.com
                                         * epel: mirror.freethought-internet.co.uk
                                        Excluding mirror: updates.xcp-ng.org
                                         * xcp-ng-base: mirrors.xcp-ng.org
                                        Excluding mirror: updates.xcp-ng.org
                                         * xcp-ng-linstor: mirrors.xcp-ng.org
                                        Excluding mirror: updates.xcp-ng.org
                                         * xcp-ng-runx: mirrors.xcp-ng.org
                                        Excluding mirror: updates.xcp-ng.org
                                         * xcp-ng-updates: mirrors.xcp-ng.org
                                        Package 2:qemu-dp-2.12.0-2.0.5.xcpng8.2.x86_64 already installed and latest version
                                        Package xenopsd-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                                        Package xenopsd-cli-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                                        Package xenopsd-xc-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                                        Package xcp-ng-xapi-storage-1.0.2-3.0.0.runx.1.xcpng8.2.x86_64 already installed and latest version
                                        Package runx-2021.1-1.0.0.runx.1.xcpng8.2.x86_64 already installed and latest version
                                        Nothing to do
                                        
                                        • rjt

                                          Please, please, please make the documentation itself use bash variables for the various UUIDs:

                                          [XCP-ngHostX]# SRuuid=$(xe sr-list name-label="Local storage" --minimal)
                                          [XCP-ngHostX]# RunXdebTemplateName="My RunX Template based on Deb10"
                                          [XCP-ngHostX]# RUNXdebTEMPLATEuuid=$(xe vm-install template="Debian Buster 10" new-name-label="${RunXdebTemplateName}" sr-uuid=${SRuuid} --minimal)
                                          [XCP-ngHostX]# echo ${RUNXdebTEMPLATEuuid}
                                          [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} HVM-boot-policy=""
                                          [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} PV-args=""
                                          [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} VCPUs-max=2
                                          [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} VCPUs-at-startup=2
                                          [XCP-ngHostX]# xe vm-disk-remove device=0 uuid=${RUNXdebTEMPLATEuuid}
                                          [XCP-ngHostX]# xe template-param-set is-a-template=true uuid=${RUNXdebTEMPLATEuuid}
                                          [XCP-ngHostX]# xe template-list name-label="${RunXdebTemplateName}"
                                          [XCP-ngHostX]# xe template-list uuid=${RUNXdebTEMPLATEuuid}
                                          
                                          • jchua

                                            Is this tech preview available on the 8.3 alpha release?
                                            Any idea how I can set up RunX on the 8.3 release?
