XCP-ng

    RunX: tech preview

    News · 49 Posts · 15 Posters · 17.7k Views
    • etomm @ronan-a

      @ronan-a I was able to make it start again by doing a yum update.

      Then I think I made a mistake, because I tried to run the following line:

      yum remove --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
      

      This killed my xapi.service; it won't start anymore. If you tell me how to find the log, I can give you the trace.
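
      (For anyone else hitting this: on a stock XCP-ng 8.2 dom0, xapi runs under systemd and its main log file is /var/log/xensource.log, so commands along these lines should surface the trace.)

      ```shell
      # Service state; the last journal lines usually show why xapi failed to start
      systemctl status xapi.service --no-pager

      # Recent journal entries for the xapi unit
      journalctl -u xapi.service -n 50 --no-pager

      # Main xapi log file on an XCP-ng dom0
      tail -n 100 /var/log/xensource.log
      ```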

      • ronan-a (Vates 🪐 XCP-ng Team) @etomm

        @etomm Why this yum remove command? You just deleted the components that allow managing VMs. 😅 You can try to reinstall the packages using yum install.
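
        A sketch of that reinstall, reusing the exact package list from the remove command above (the --enablerepo=epel flag is assumed to still be needed, as in the original command):

        ```shell
        # Put back the packages that the yum remove command deleted
        yum install --enablerepo=epel -y \
            qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx

        # Restart the toolstack so xapi picks the binaries back up
        xe-toolstack-restart
        ```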

        • etomm @ronan-a

          @ronan-a I was under the wrong assumption that everything installed in this guide was a new package for RunX.

          So when it began to cause problems, I went for the remove option. My bad!

          It all came from the fact that, after I got the VMs to start again by updating the packages, I was losing connectivity on the host as soon as I started dockerd.

          So I wanted to remove it.

          • jmccoy555

            Been interested in this since the first blog post but never got round to trying it. Is this still a 'thing'? I'm having little luck trying to get it to work; I'm seeing most of the errors already posted and have tried the fixes, but still no luck. I would guess I may have a template issue.......

            Mainly wondering if it's worth some effort, or if it's best to just run Docker in a VM?

            Thanks.

            • olivierlambert (Vates 🪐 Co-Founder CEO)

              Hello!

              @ronan-a will guide you if you have problems making it work 🙂

              There's still some work needed for it to be a 100% feature-complete product. In the meantime, if you want to go to production, go for a VM with containers inside 🙂

              • jmccoy555 @olivierlambert

                @olivierlambert OK, if it's still planned to be a feature then I'm up for playing.... testing!!

                @ronan-a is this the best way to get the template? The rest of the instructions look pretty simple to follow, so I don't think I've got them wrong..... 😄

                xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                this uuid is of the SR created by the step in the first post?

                xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa HVM-boot-policy=""
                uuid that is a result of the first step?

                xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa PV-args=""
                uuid that is a result of the first step?

                xe vm-param-set VCPUs-max=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                what is this uuid?

                xe vm-param-set VCPUs-at-startup=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                same again, where does this uuid come from?

                xe vm-disk-remove device=0 uuid=cb5a6d67-07d5-b5ea-358a-7ee0d6e535af
                and this one too?

                xe template-param-set is-a-template=true uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa
                uuid generated by the 1st step

                Thanks.

                • ronan-a (Vates 🪐 XCP-ng Team) @jmccoy555

                  @jmccoy555 said in RunX: tech preview:

                  this uuid is of the SR created by the step in the first post?

                  Right!

                  Regarding all xe vm-param-set/xe vm-disk-remove/xe template-param-set commands, you must use the VM UUID returned by the xe vm-install command. I edited the post regarding that. 😉
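
                  In other words, a minimal sketch (the template name and SR UUID are just the ones from the question above): capture the UUID that xe vm-install prints and reuse it in every following command.

                  ```shell
                  # --minimal makes vm-install print only the new VM's UUID
                  VM_UUID=$(xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx \
                            sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal)

                  # Reuse that same UUID for every vm-param-set / vm-disk-remove / template-param-set
                  xe vm-param-set uuid="$VM_UUID" HVM-boot-policy=""
                  ```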

                  • jmccoy555 @ronan-a

                    @ronan-a thanks (it was obviously too late for me to think of trying that!!) but no luck 😞

                    [18:36 bad-XCP-ng-Host-03 ~]# xe vm-install template=Debian\ Buster\ 10 new-name-label=RunX sr-uuid=cb817299-f3ee-9d4e-dd5d-9edad6e55ed0 --minimal
                    b3b5efcf-e810-57ab-5482-4dba14dda0a6
                    [18:37 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 HVM-boot-policy=""
                    [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 PV-args=""
                    [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-max=2
                    [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-at-startup=2
                    [18:39 bad-XCP-ng-Host-03 ~]# xe vm-disk-remove device=0 uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                    [18:39 bad-XCP-ng-Host-03 ~]# xe template-param-set is-a-template=true uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                    [18:41 bad-XCP-ng-Host-03 ~]# nano /etc/runx.conf
                    [18:43 bad-XCP-ng-Host-03 ~]# podman container create --name archlinux archlinux
                    0f72916b8cca6c7c4aa68e6ebee467c0480e29e22ebbd6ec666d8385826f88b8
                    [18:43 bad-XCP-ng-Host-03 ~]# podman start archlinux
                    The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
                    message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:1e2e7b99-2ffe-4129-a468-96b57ab685de share_dir none ///root/runx-sr/2"}]]}
                    archlinux6
                    [18:44 bad-XCP-ng-Host-03 ~]# rpm -qa | grep xenops
                    xenopsd-xc-0.150.12-1.2.xcpng8.2.x86_64
                    xenopsd-cli-0.150.12-1.2.xcpng8.2.x86_64
                    xenopsd-0.150.12-1.2.xcpng8.2.x86_64
                    

                    Not sure what I'm doing wrong now, or if I need an update??

                    When I created the SR I had to give a host-uuid; I obviously used the host that I'm installing on.
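
                    (For what it's worth, the host UUID that sr-create asks for can be read straight off the pool with xe; on a single-host pool, --minimal prints just that one UUID.)

                    ```shell
                    # Comma-separated UUIDs of all hosts in the pool
                    xe host-list --minimal

                    # Or filter by the host's name label (hostname taken from the prompt above)
                    xe host-list name-label=bad-XCP-ng-Host-03 --minimal
                    ```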

                    • jmccoy555 @jmccoy555

                      If this helps....

                      yum install --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                      Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
                      
                      This system is not registered with an entitlement server. You can use subscription-manager to register.
                      
                      Loading mirror speeds from cached hostfile
                       * centos-ceph-nautilus: mirror.as29550.net
                       * centos-nfs-ganesha28: mirrors.vinters.com
                       * epel: mirror.freethought-internet.co.uk
                      Excluding mirror: updates.xcp-ng.org
                       * xcp-ng-base: mirrors.xcp-ng.org
                      Excluding mirror: updates.xcp-ng.org
                       * xcp-ng-linstor: mirrors.xcp-ng.org
                      Excluding mirror: updates.xcp-ng.org
                       * xcp-ng-runx: mirrors.xcp-ng.org
                      Excluding mirror: updates.xcp-ng.org
                       * xcp-ng-updates: mirrors.xcp-ng.org
                      Package 2:qemu-dp-2.12.0-2.0.5.xcpng8.2.x86_64 already installed and latest version
                      Package xenopsd-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                      Package xenopsd-cli-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                      Package xenopsd-xc-0.150.12-1.2.xcpng8.2.x86_64 already installed and latest version
                      Package xcp-ng-xapi-storage-1.0.2-3.0.0.runx.1.xcpng8.2.x86_64 already installed and latest version
                      Package runx-2021.1-1.0.0.runx.1.xcpng8.2.x86_64 already installed and latest version
                      Nothing to do
                      
                      • rjt

                        Please, please, please make this documentation use bash variables for the various UUIDs:

                        [XCP-ngHostX]# SRuuid=$(xe sr-list name-label="Local storage" --minimal)
                        [XCP-ngHostX]# RunXdebTemplateName="My RunX Template based on Deb10"
                        [XCP-ngHostX]# RUNXdebTEMPLATEuuid=$(xe vm-install template="Debian Buster 10" new-name-label="${RunXdebTemplateName}" sr-uuid=${SRuuid} --minimal)
                        [XCP-ngHostX]# echo ${RUNXdebTEMPLATEuuid}
                        [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} HVM-boot-policy=""
                        [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} PV-args=""
                        [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} VCPUs-max=2
                        [XCP-ngHostX]# xe vm-param-set uuid=${RUNXdebTEMPLATEuuid} VCPUs-at-startup=2
                        [XCP-ngHostX]# xe vm-disk-remove device=0 uuid=${RUNXdebTEMPLATEuuid}
                        [XCP-ngHostX]# xe template-param-set is-a-template=true uuid=${RUNXdebTEMPLATEuuid}
                        [XCP-ngHostX]# xe template-list name-label="${RunXdebTemplateName}"
                        [XCP-ngHostX]# xe template-list uuid=${RUNXdebTEMPLATEuuid}
                        
                        • jchua

                          Is this tech preview available on the 8.3 alpha release?
                          Any idea how I can set up RunX on the 8.3 release?

                          • olivierlambert (Vates 🪐 Co-Founder CEO)

                            Question for @stormi and/or @ronan-a

                            • ronan-a (Vates 🪐 XCP-ng Team) @jchua

                              @jchua We currently don't have a test version planned for XCP-ng 8.3. Only 8.2 is supported for the moment.

                              • Finallf @ronan-a

                                @ronan-a would it be too much to ask for a version for 8.3?
                                I've been testing the alpha since publication and I would like to be able to use RunX 😄

                                • ronan-a (Vates 🪐 XCP-ng Team) @Finallf

                                  @Finallf It's been a while since we actively worked on this subject. That doesn't mean the project is abandoned, but it's not our priority target.
                                  Currently, we are instead trying to move this project forward: https://xen-orchestra.com/blog/announcing-project-pyrgos/

                                  Sorry, but a new RunX update won't happen any time soon. 😅

                                  • olivierlambert (Vates 🪐 Co-Founder CEO)

                                    Yes, and it's not impossible that RunX will be used in Pyrgos at some point 🙂

                                    • ricardogaspar2 @olivierlambert

                                      @olivierlambert Are there any updates on RunX and Pyrgos?
                                      Any tutorials one can find? Docs or other forum threads?
                                      I'm currently running the XCP-ng 8.3 beta for my homelab. I'm interested in having a Kubernetes or Docker runtime using RunX or Pyrgos.

                                      Thanks in advance

                                      • olivierlambert (Vates 🪐 Co-Founder CEO)

                                        Pyrgos is working; you can already use it directly from your XOA. We are now working on lifecycle improvements (like updating a cluster) as the next feature for Q1.

                                        RunX is a bit different: it's still experimental. For now, we consider Pyrgos more important (delivering a k8s cluster in a turnkey fashion).

                                        • mavoff @olivierlambert

                                          Just a personal opinion I'm leaving here; no follow-up needed.

                                          If I want to run containers, I use OKD.
                                          I've always seen all the comments about keeping the dom0 controller to a minimum, yet Docker alone adds a 2 GB overhead to it; so no more qualms about adding stuff to it.
