    RunX: tech preview

    • theAeon

      Oh now that's interesting. Turns out the containers (both archlinux and the one I just created) are exiting with error 143. They're getting SIGTERM'd from somewhere.

      • ronan-a (Vates 🪐 XCP-ng Team) @theAeon

        @theaeon said in RunX: tech preview:

        For what it's worth, the podman logs archlinux command from above is w/o debug. I didn't quite realize it vanishing immediately was intended behavior though, tells you how versed I am in containers.
        I'll try setting up the matrixdotorg/mjolnir thing again now that I have a command I know is working on runc.

        Yeah, by default the archlinux image executes bash: when the container starts, bash is launched and dies right after, and the VM is then stopped. The behavior is the same on Docker with this image. It doesn't happen in interactive mode, but we still have to implement that for a next RunX version.
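
        (A quick illustration of that behavior, assuming a standard podman setup; the archlinux image's default command is bash:)

        # Without a TTY, the default command (bash) exits immediately,
        # so the container stops right after starting:
        podman run archlinux

        # With an interactive TTY, bash stays alive and so does the container:
        podman run -it archlinux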

        • ronan-a (Vates 🪐 XCP-ng Team) @theAeon

          @theaeon said in RunX: tech preview:

          Oh now that's interesting. Turns out the containers (both archlinux and the one I just created) are exiting with error 143. They're getting SIGTERM'd from somewhere.

          It's related to how we terminate the VM process: what exits is a wrapper, not the real process that manages the VM. We shouldn't show this exit code to users, since it's not the real one; I will create an issue on our side. Thanks for the feedback. 🙂
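
          (For context: shell exit codes above 128 encode a fatal signal as 128 + the signal number, so 143 = 128 + 15, i.e. SIGTERM:)

          kill -l 15    # prints TERM: signal 15 is SIGTERM, hence exit status 143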

          • theAeon @ronan-a

            @ronan-a oop, good to know. Now I guess I need to figure out why the new image I created is exiting instead of, well, working.

            Unless there's something in this command that I shouldn't be invoking.

            podman create --health-cmd="wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/ || exit 1" --volume=/root/mjolnir:/data:Z matrixdotorg/mjolnir

            (I start it separately, later)
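
            (For reference, that separate start step would presumably look something like this, using the container ID that podman create prints:)

            podman start <container-id>     # boot the container created above
            podman ps -a                    # check its status and exit code
            podman logs <container-id>      # inspect output if it exits again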

            • bc-23

              Hi,

              I have started to play around with this feature. I think it's a great idea 🙂
              At the moment I'm running into an issue on container start:

              message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:80a85063-9b59-4fda-82c9-017be0fe967a share_dir none ///srv/runx-sr/1"}]]}
              

              I have created the SR as described above:

              # xe sr-param-list uuid=968d0b84-213e-a269-3a7a-355cd54f1a1c
              uuid ( RO)                    : 968d0b84-213e-a269-3a7a-355cd54f1a1c
                            name-label ( RW): runx-sr
                      name-description ( RW): 
                                  host ( RO): fraxcp04
                    allowed-operations (SRO): VDI.introduce; unplug; plug; PBD.create; update; PBD.destroy; VDI.resize; VDI.clone; scan; VDI.snapshot; VDI.create; VDI.destroy; VDI.set_on_boot
                    current-operations (SRO): 
                                  VDIs (SRO): 80a85063-9b59-4fda-82c9-017be0fe967a
                                  PBDs (SRO): 0d4ca926-5906-b137-a192-8b55c5b2acb6
                    virtual-allocation ( RO): 0
                  physical-utilisation ( RO): -1
                         physical-size ( RO): -1
                                  type ( RO): fsp
                          content-type ( RO): 
                                shared ( RW): false
                         introduced-by ( RO): <not in database>
                           is-tools-sr ( RO): false
                          other-config (MRW): 
                             sm-config (MRO): 
                                 blobs ( RO): 
                   local-cache-enabled ( RO): false
                                  tags (SRW): 
                             clustered ( RO): false
              
              
              # xe pbd-param-list uuid=0d4ca926-5906-b137-a192-8b55c5b2acb6
              uuid ( RO)                  : 0d4ca926-5906-b137-a192-8b55c5b2acb6
                   host ( RO) [DEPRECATED]: a6ec002d-b7c3-47d1-a9f2-18614565dd6c
                           host-uuid ( RO): a6ec002d-b7c3-47d1-a9f2-18614565dd6c
                     host-name-label ( RO): fraxcp04
                             sr-uuid ( RO): 968d0b84-213e-a269-3a7a-355cd54f1a1c
                       sr-name-label ( RO): runx-sr
                       device-config (MRO): file-uri: /srv/runx-sr
                  currently-attached ( RO): true
                        other-config (MRW): storage_driver_domain: OpaqueRef:a194af9f-fd9e-4cb1-a99f-3ee8ad54b624
              
              

              I also see that in /srv/runx-sr a symlink 1 is created, pointing to an overlay image.

              The VM is in state paused after the error above.

              The template I used was an old Debian PV template, from which I removed the PV-bootloader and install-* attributes in other-config. What template would you recommend?

              Any idea what could cause the error above?

              Thanks,
              Florian
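
              (For readers following along: an SR with the parameters shown above, type fsp and device-config file-uri /srv/runx-sr, would presumably be created along these lines, reusing the host UUID from the PBD listing:)

              xe sr-create host-uuid=a6ec002d-b7c3-47d1-a9f2-18614565dd6c \
                name-label=runx-sr type=fsp device-config:file-uri=/srv/runx-sr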

              • ronan-a (Vates 🪐 XCP-ng Team) @bc-23

                @bc-23 What's your xenopsd version? We haven't yet updated the modified runx build of xenopsd to support RunX on XCP-ng 8.2.1, so it's possible you are using the latest packages without the right patches. ^^"

                So please confirm this by checking rpm -qa | grep xenops. 🙂

                • bc-23 @ronan-a

                  @ronan-a The server is still running on 8.2:

                  [11:21 fraxcp04 ~]# rpm -qa | grep xenops
                  xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64
                  xenopsd-xc-0.150.5.1-1.1.xcpng8.2.x86_64
                  xenopsd-cli-0.150.5.1-1.1.xcpng8.2.x86_64
                  

                  Are there patches for this version?

                  • ronan-a (Vates 🪐 XCP-ng Team) @bc-23

                    @bc-23 You don't have the patched RPMs because there is a new xenopsd hotfix for 8.2 and 8.2.1 on the main branch, so the stock xenopsd package version is now greater than the runx one. We must build a new version of the runx packages on our side to correct this issue. We will fix that. 😉
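
                    (A quick way to tell the two builds apart: the RunX-patched RPMs carry runx in their release tag, as a check like this shows:)

                    rpm -qa | grep xenops
                    # xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64         <- stock hotfix build, no runx patches
                    # xenopsd-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64  <- RunX-patched build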

                    • bc-23 @ronan-a

                      @ronan-a I have seen that there are updated packages, thanks 🙂
                      After the update I'm able to start the container/VM 🙂

                      • matiasvl @olivierlambert

                        For those who would like to try this using xe, here is what I did to create a correct template. I started from a Debian 10 template; replace the UUIDs with your own (2 vCPUs):

                        xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                        xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a HVM-boot-policy=""
                        xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a PV-args=""
                        xe vm-param-set VCPUs-max=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                        xe vm-param-set VCPUs-at-startup=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                        xe vm-disk-remove device=0 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                        xe template-param-set is-a-template=true uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                        

                        The template is listed when you issue xe template-list.
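
                        (A small variation on the same idea: since --minimal prints just the new VM's UUID, you can capture it in a shell variable instead of copy-pasting it; the sr-uuid is a placeholder:)

                        TPL=$(xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=<sr-uuid> --minimal)
                        xe vm-param-set uuid=$TPL HVM-boot-policy="" PV-args=""
                        xe vm-param-set uuid=$TPL VCPUs-max=2 VCPUs-at-startup=2
                        xe vm-disk-remove uuid=$TPL device=0
                        xe template-param-set uuid=$TPL is-a-template=true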

                        • r3m8

                          Hi,

                          Same as @bc-23, I get this error:

                          message: xenopsd internal error: Could not find File, BlockDevice, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:85daf561-836e-48f1-9b74-1dfef38abe9e share_dir none ///root/runx-sr/1"}]]}
                          

                          This is my rpm -qa | grep xenops output (my XCP-ng is up to date):

                          xenopsd-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                          xenopsd-xc-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                          xenopsd-cli-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                          

                          Is it still the runx package that causes problems? Thank you all 🙂

                          • ronan-a (Vates 🪐 XCP-ng Team) @r3m8

                            @r3m8 Weird, did you run a xe-toolstack-restart?

                            • r3m8 @ronan-a

                              @ronan-a We reviewed our SR and template configuration (especially the xe vm-disk-remove device=0 step) and it now works fine. (We had already done an xe-toolstack-restart to avoid restarting the hypervisor.)

                              • etomm

                                Hello all! After testing this and following the guidelines, my XCP-ng is no longer able to run VMs.

                                When I restart the host and try to run a VM, it complains that HVM is needed. I just checked the BIOS: VT-d is enabled, as are all the other settings that were there before I tested this out.

                                What can I do?

                                • ronan-a (Vates 🪐 XCP-ng Team) @etomm

                                  @etomm Could you share the full error message/trace please? 🙂

                                  • etomm @ronan-a

                                    @ronan-a I could make it start again by doing a yum update.

                                    Then I think I made a mistake, because I ran the following line:

                                    yum remove --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                                    

                                    This killed my xapi.service; it's not starting anymore. If you tell me how to find the log, I can give you the trace.

                                    • ronan-a (Vates 🪐 XCP-ng Team) @etomm

                                      @etomm Why this yum remove command? You just deleted what allows you to manage VMs. 😅 You can try to reinstall the packages using yum install.
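
                                      (A sketch of the reinstall, mirroring the package list from the remove command; you may need the same extra repos enabled as for the original RunX install:)

                                      yum install -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
                                      xe-toolstack-restart    # then restart the toolstack so xapi comes back up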

                                      • etomm @ronan-a

                                        @ronan-a I was under the wrong assumption that everything installed in this guide was a new package for RunX.

                                        So when it began to give problems I went for the remove option. My bad!

                                        It all stemmed from the fact that, after I got the VMs starting again by updating the packages, I was losing connectivity on the host as soon as I started dockerd.

                                        So I wanted to remove it.

                                        • jmccoy555

                                          Been interested in this since the first blog post but never got round to trying it. Is this still a 'thing'? I'm having little luck trying to get it to work: I'm seeing most of the errors already posted and have tried the fixes, but still no luck. I would guess I may have a template issue...

                                          Mainly wondering if it's worth some effort, or if it's best to just run Docker in a VM?

                                          Thanks.

                                          • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                            Hello!

                                            @ronan-a will guide you if you have problems making it work 🙂

                                            There's still some work needed before it's a 100% feature-complete product. In the meantime, if you want to go to production, go for a VM with containers inside 🙂
