
    RunX: tech preview

    • theAeon @ronan-a

      @ronan-a Oops, good to know. Now I guess I need to figure out why the new image I created is exiting instead of, well, working.

      Unless there's something in this command that I shouldn't be invoking.

      podman create --health-cmd="wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/ || exit 1" --volume=/root/mjolnir:/data:Z matrixdotorg/mjolnir

      (I start it separately, later)
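      (Roughly how I start and check it afterwards; these are generic podman commands, and <name-or-id> is just a placeholder for the container's name or ID:)

      # start the container created above
      podman start <name-or-id>
      # run the configured health check once; a zero exit code means healthy
      podman healthcheck run <name-or-id>
      # the STATUS column also reports (healthy)/(unhealthy)
      podman ps --filter name=<name-or-id>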

      • bc-23

        Hi,

        I have started to play around with this feature. I think it's a great idea 🙂
        At the moment I'm running into an issue on container start:

        message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:80a85063-9b59-4fda-82c9-017be0fe967a share_dir none ///srv/runx-sr/1"}]]}
        

        I have created the SR as described above:

        uuid ( RO)                    : 968d0b84-213e-a269-3a7a-355cd54f1a1c
                      name-label ( RW): runx-sr
                name-description ( RW): 
                            host ( RO): fraxcp04
              allowed-operations (SRO): VDI.introduce; unplug; plug; PBD.create; update; PBD.destroy; VDI.resize; VDI.clone; scan; VDI.snapshot; VDI.create; VDI.destroy; VDI.set_on_boot
              current-operations (SRO): 
                            VDIs (SRO): 80a85063-9b59-4fda-82c9-017be0fe967a
                            PBDs (SRO): 0d4ca926-5906-b137-a192-8b55c5b2acb6
              virtual-allocation ( RO): 0
            physical-utilisation ( RO): -1
                   physical-size ( RO): -1
                            type ( RO): fsp
                    content-type ( RO): 
                          shared ( RW): false
                   introduced-by ( RO): <not in database>
                     is-tools-sr ( RO): false
                    other-config (MRW): 
                       sm-config (MRO): 
                           blobs ( RO): 
             local-cache-enabled ( RO): false
                            tags (SRW): 
                       clustered ( RO): false
        
        
        # xe pbd-param-list uuid=0d4ca926-5906-b137-a192-8b55c5b2acb6
        uuid ( RO)                  : 0d4ca926-5906-b137-a192-8b55c5b2acb6
             host ( RO) [DEPRECATED]: a6ec002d-b7c3-47d1-a9f2-18614565dd6c
                     host-uuid ( RO): a6ec002d-b7c3-47d1-a9f2-18614565dd6c
               host-name-label ( RO): fraxcp04
                       sr-uuid ( RO): 968d0b84-213e-a269-3a7a-355cd54f1a1c
                 sr-name-label ( RO): runx-sr
                 device-config (MRO): file-uri: /srv/runx-sr
            currently-attached ( RO): true
                  other-config (MRW): storage_driver_domain: OpaqueRef:a194af9f-fd9e-4cb1-a99f-3ee8ad54b624
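
        For reference, I created the SR with something along these lines (reconstructed from the parameters shown above, so the host UUID, name and path are simply the ones from my setup; the first post has the authoritative syntax):

        xe sr-create host-uuid=a6ec002d-b7c3-47d1-a9f2-18614565dd6c type=fsp \
            name-label=runx-sr device-config:file-uri=/srv/runx-sr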
        
        

        I also see that in /srv/runx-sr a symlink 1 is created, pointing to an overlay image.

        The VM is in state paused after the error above.

        The template I used was an old Debian PV template, from which I removed the PV-bootloader and install-* attributes in other-config. What template would you recommend?

        Any idea what could cause the error above?

        Thanks,
        Florian

        • ronan-a (Vates 🪐 XCP-ng Team) @bc-23

          @bc-23 What's your xenopsd version? We haven't updated the runx-modified xenopsd package to support XCP-ng 8.2.1 yet, so it's possible that you are using the latest packages without the right patches. ^^"

          So please confirm this with rpm -qa | grep xenops. 🙂

          • bc-23 @ronan-a

            @ronan-a The server is still running on 8.2

            [11:21 fraxcp04 ~]# rpm -qa | grep xenops
            xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64
            xenopsd-xc-0.150.5.1-1.1.xcpng8.2.x86_64
            xenopsd-cli-0.150.5.1-1.1.xcpng8.2.x86_64
            

            Are there patches for this version?

            • ronan-a (Vates 🪐 XCP-ng Team) @bc-23

              @bc-23 You don't have the patched RPMs because a new hotfix was released for 8.2 and 8.2.1 on the main branch, so the current xenopsd package version is now greater than the runx-patched one... We must build a new version of the runx packages on our side to correct this issue. We will fix that. 😉

              • bc-23 @ronan-a

                @ronan-a I have seen that there are updated packages, thanks 🙂
                After the update I'm able to start the container/VM 🙂
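
                (For anyone following along, the update itself was nothing special — something like the following, assuming the runx-patched packages are available in your configured repositories, followed by a toolstack restart:)

                yum update xenopsd xenopsd-xc xenopsd-cli
                xe-toolstack-restart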

                • matiasvl @olivierlambert

                  For those who would like to try this using xe, here is what I did to create the correct template. I started from a Debian 10 template; replace the UUIDs with your own (2 vCPUs):

                  xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                  xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a HVM-boot-policy=""
                  xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a PV-args=""
                  xe vm-param-set VCPUs-max=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                  xe vm-param-set VCPUs-at-startup=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                  xe vm-disk-remove device=0 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                  xe template-param-set is-a-template=true uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                  

                  The template is listed when you issue xe template-list.
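
                  For example, to check just this one (the name-label is the one set above):

                  xe template-list name-label=tempforrunx --minimal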

                  • r3m8

                    Hi,

                    Same as @bc-23, I get this error:

                    message: xenopsd internal error: Could not find File, BlockDevice, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:85daf561-836e-48f1-9b74-1dfef38abe9e share_dir none ///root/runx-sr/1"}]]}
                    

                    This is my rpm -qa | grep xenops output (my XCP-ng is up to date):

                    xenopsd-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                    xenopsd-xc-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                    xenopsd-cli-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                    

                    Is it still the runx package that is causing problems? Thank you all 🙂

                    • ronan-a (Vates 🪐 XCP-ng Team) @r3m8

                      @r3m8 Weird, did you run a xe-toolstack-restart?

                      • r3m8 @ronan-a

                        @ronan-a We have reviewed our SR and template configuration (especially the xe vm-disk-remove device=0 step) and it works fine now (we had already done an xe-toolstack-restart to avoid restarting the hypervisor).

                        • etomm

                          Hello all! After testing this and following the guidelines, my XCP-ng is no longer able to run VMs.

                          When I restart the host and try to run a VM, it complains that HVM is needed. I just checked the BIOS and VT-d is enabled, as are all the other settings that were there before testing this out.

                          What can I do?

                          • ronan-a (Vates 🪐 XCP-ng Team) @etomm

                            @etomm Could you share the full error message/trace please? 🙂

                            • etomm @ronan-a

                              @ronan-a I could make it start again by doing a yum update.

                              Then I think I made a mistake, because I ran the following line:

                              yum remove --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                              

                              This killed my xapi.service; it is not starting anymore. If you tell me how to find the log, I can give you the trace.

                              • ronan-a (Vates 🪐 XCP-ng Team) @etomm

                                @etomm Why this yum remove command? You just deleted what allows you to manage VMs. 😅 You can try to reinstall the packages using yum install, as sketched below.
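
                                (A rough sketch, assuming the repositories from the original guide are still configured; the package list is the same one you removed, and the xapi log to check afterwards is /var/log/xensource.log:)

                                # reinstall the packages removed above
                                yum install --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                                # then restart the toolstack
                                xe-toolstack-restart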

                                • etomm @ronan-a

                                  @ronan-a I was under the wrong assumption that everything installed in this guide was new packages just for RunX.

                                  So when it began to give problems I went for the remove option. My bad!

                                  It all started from the fact that, after I got the VMs starting again by updating the packages, I was losing connectivity on the host as soon as I started dockerd.

                                  So I wanted to remove it.

                                  • jmccoy555

                                    I've been interested in this since the first blog post but never got round to trying it. Is this still a 'thing'? I'm having little luck trying to get it to work and am seeing most of the errors already posted; I have tried the fixes but still no luck. I would guess I may have a template issue.......

                                    Mainly wondering if it's worth some effort, or if it's best to just run Docker in a VM?

                                    Thanks.

                                    • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                      Hello!

                                      @ronan-a will guide you if you have problems making it work 🙂

                                      There's still some work needed before it's a 100% feature-complete product; in the meantime, if you want to go to production, go for a VM with containers inside 🙂

                                      • jmccoy555 @olivierlambert

                                        @olivierlambert OK, if it's still planned to be a feature then I'm up for playing.... testing!!

                                        @ronan-a is this the best way to get the template? The rest of the instructions look pretty simple to follow, so I don't think I've got them wrong..... 😄

                                        xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                                        this uuid is of the SR created by the step in the first post?

                                        xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa HVM-boot-policy=""
                                        uuid that is a result of the first step?

                                        xe vm-param-set uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa PV-args=""
                                        uuid that is a result of the first step?

                                        xe vm-param-set VCPUs-max=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                                        what is this uuid?

                                        xe vm-param-set VCPUs-at-startup=2 uuid=14d91f2f-a103-da0e-51b3-21c8db307e5d
                                        same again, where does this uuid come from?

                                        xe vm-disk-remove device=0 uuid=cb5a6d67-07d5-b5ea-358a-7ee0d6e535af
                                        and this one too?

                                        xe template-param-set is-a-template=true uuid=a2d46568-c9ab-7da2-57cb-d213ee9d8dfa
                                        uuid generated by the 1st step

                                        Thanks.

                                        • ronan-a (Vates 🪐 XCP-ng Team) @jmccoy555

                                          @jmccoy555 said in RunX: tech preview:

                                          this uuid is of the SR created by the step in the first post?

                                          Right!

                                          Regarding all the xe vm-param-set / xe vm-disk-remove / xe template-param-set commands, you must use the VM UUID returned by the xe vm-install command, as in the sketch below. I edited the post accordingly. 😉
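
                                          (In shell terms, capture the UUID printed by xe vm-install and reuse it everywhere; <sr-uuid> is a placeholder for the fsp SR created in the first post:)

                                          VM_UUID=$(xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=<sr-uuid> --minimal)
                                          xe vm-param-set uuid=$VM_UUID HVM-boot-policy=""
                                          xe vm-param-set uuid=$VM_UUID PV-args=""
                                          xe vm-param-set uuid=$VM_UUID VCPUs-max=2
                                          xe vm-param-set uuid=$VM_UUID VCPUs-at-startup=2
                                          xe vm-disk-remove uuid=$VM_UUID device=0
                                          xe template-param-set uuid=$VM_UUID is-a-template=true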

                                          • jmccoy555 @ronan-a

                                            @ronan-a Thanks (it was obviously too late for me to think of trying that!!), but no luck 😞

                                            [18:36 bad-XCP-ng-Host-03 ~]# xe vm-install template=Debian\ Buster\ 10 new-name-label=RunX sr-uuid=cb817299-f3ee-9d4e-dd5d-9edad6e55ed0 --minimal
                                            b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                            [18:37 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 HVM-boot-policy=""
                                            [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 PV-args=""
                                            [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-max=2
                                            [18:38 bad-XCP-ng-Host-03 ~]# xe vm-param-set uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6 VCPUs-at-startup=2
                                            [18:39 bad-XCP-ng-Host-03 ~]# xe vm-disk-remove device=0 uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                            [18:39 bad-XCP-ng-Host-03 ~]# xe template-param-set is-a-template=true uuid=b3b5efcf-e810-57ab-5482-4dba14dda0a6
                                            [18:41 bad-XCP-ng-Host-03 ~]# nano /etc/runx.conf
                                            [18:43 bad-XCP-ng-Host-03 ~]# podman container create --name archlinux archlinux
                                            0f72916b8cca6c7c4aa68e6ebee467c0480e29e22ebbd6ec666d8385826f88b8
                                            [18:43 bad-XCP-ng-Host-03 ~]# podman start archlinux
                                            The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
                                            message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:1e2e7b99-2ffe-4129-a468-96b57ab685de share_dir none ///root/runx-sr/2"}]]}
                                            archlinux6
                                            [18:44 bad-XCP-ng-Host-03 ~]# rpm -qa | grep xenops
                                            xenopsd-xc-0.150.12-1.2.xcpng8.2.x86_64
                                            xenopsd-cli-0.150.12-1.2.xcpng8.2.x86_64
                                            xenopsd-0.150.12-1.2.xcpng8.2.x86_64
                                            

                                            Not sure what I'm doing wrong now, or if I need an update??

                                            When I created the SR I had to give a host-uuid. I obviously used the host that I'm installing on.
