XCP-ng


    RunX: tech preview

    • theAeon
      theAeon @theAeon last edited by

      Worth noting that I have a network adapter on the template; if I run it without a network adapter it still fails, but slightly differently.

      DEBU[0000] Reading configuration file "/etc/containers/libpod.conf"
      DEBU[0000] Merged system config "/etc/containers/libpod.conf": &{{false false false false false true} 0 {   [] [] []}  docker://  runx map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runx:[/usr/bin/runx]] [runx] [runx] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] runx []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}
      DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf"
      DEBU[0000] Merged system config "/usr/share/containers/libpod.conf": &{{false false false false false true} 0 {   [] [] []}  docker://  runx map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc] runx:[/usr/bin/runx]] [runx] [runx] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] systemd   /var/run/libpod -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] runx []   k8s.gcr.io/pause:3.1 /pause false false  2048 shm    false}
      DEBU[0000] Using conmon: "/usr/bin/conmon"
      DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
      DEBU[0000] Using graph driver overlay
      DEBU[0000] Using graph root /var/lib/containers/storage
      DEBU[0000] Using run root /var/run/containers/storage
      DEBU[0000] Using static dir /var/lib/containers/storage/libpod
      DEBU[0000] Using tmp dir /var/run/libpod
      DEBU[0000] Using volume path /var/lib/containers/storage/volumes
      DEBU[0000] Set libpod namespace to ""
      DEBU[0000] [graphdriver] trying provided driver "overlay"
      DEBU[0000] cached value indicated that overlay is supported
      DEBU[0000] cached value indicated that metacopy is not being used
      DEBU[0000] cached value indicated that native-diff is usable
      DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
      DEBU[0000] Initializing event backend journald
      WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
      DEBU[0000] using runtime "/usr/bin/runx"
      DEBU[0000] using runtime "/usr/bin/runc"
      INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
      INFO[0000] Found CNI network runx (type=loopback) at /etc/cni/net.d/99-podman-runx.conflist
      DEBU[0000] Made network namespace at /var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888 for container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4
      INFO[0000] Got pod network &{Name:archlinux Namespace:archlinux ID:abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 NetNS:/var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888 Networks:[] RuntimeConfig:map[runx:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
      INFO[0000] About to add CNI network cni-loopback (type=loopback)
      DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/57IKSKIB7PWKFFISHSGCHN7OJL:/var/lib/containers/storage/overlay/l/UGY43VPEWOHWBCZOQR4WMP5A6F,upperdir=/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/diff,workdir=/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/work
      DEBU[0000] mounted container "abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4" at "/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/merged"
      DEBU[0000] Created root filesystem for container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 at /var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/merged
      INFO[0000] Got pod network &{Name:archlinux Namespace:archlinux ID:abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 NetNS:/var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888 Networks:[] RuntimeConfig:map[runx:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
      INFO[0000] About to add CNI network runx (type=loopback)
      DEBU[0000] [0] CNI result: Interfaces:[{Name:lo Mac:00:00:00:00:00:00 Sandbox:/var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888}], IP:[{Version:4 Interface:0xc0001d8bf0 Address:{IP:127.0.0.1 Mask:ff000000} Gateway:<nil>} {Version:6 Interface:0xc0001d8c28 Address:{IP:::1 Mask:ffffffffffffffffffffffffffffffff} Gateway:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]}
      DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
      DEBU[0000] Setting CGroups for container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 to machine.slice:libpod:abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4
      DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
      DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-register-machine.json
      DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-systemd-hook.json
      DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-umount.json
      DEBU[0000] hook oci-register-machine.json did not match
      DEBU[0000] hook oci-systemd-hook.json did not match
      DEBU[0000] hook oci-umount.json did not match
      DEBU[0000] reading hooks from /etc/containers/oci/hooks.d
      DEBU[0000] Created OCI spec for container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 at /var/lib/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata/config.json
      DEBU[0000] /usr/bin/conmon messages will be logged to syslog
      DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 -u abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 -r /usr/bin/runx -b /var/lib/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata -p /var/run/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runx --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4]"
      INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4.scope
      DEBU[0000] Cleaning up container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4
      DEBU[0000] Tearing down network namespace at /var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888 for container abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4
      INFO[0000] Got pod network &{Name:archlinux Namespace:archlinux ID:abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4 NetNS:/var/run/netns/cni-ad3f7be8-7341-a2d3-fd7d-9e417dd8c888 Networks:[] RuntimeConfig:map[runx:{IP: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
      INFO[0000] About to del CNI network runx (type=loopback)
      DEBU[0000] unmounted container "abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4"
      ERRO[0000] unable to start container "archlinux": container create failed (no logs from conmon): EOF
      
      • olivierlambert
        olivierlambert Vates πŸͺ Co-Founder🦸 CEO πŸ§‘β€πŸ’Ό last edited by

        Thanks! @ronan-a or @BenjiReis will take a look tomorrow πŸ™‚

        • ronan-a
          ronan-a Vates πŸͺ XCP-ng Team πŸš€ @theAeon last edited by ronan-a

          @theaeon Could you verify the UUIDs of your config in /etc/runx.conf?
          In a shell:

          xe sr-list uuid=<SR_UUID>
          xe template-list uuid=<TEMPLATE_UUID>
          

          You must have a valid output for each command. πŸ™‚

          Also ensure the template is a PV template.
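
           A quick way to check all three from dom0 (a sketch; <SR_UUID> and <TEMPLATE_UUID> are the values from your /etc/runx.conf):

           # With --minimal, xe prints just the matching UUID, so empty output means the object doesn't exist
           xe sr-list uuid=<SR_UUID> --minimal
           xe template-list uuid=<TEMPLATE_UUID> --minimal
           # An empty HVM-boot-policy is what marks a template as PV
           xe template-param-get uuid=<TEMPLATE_UUID> param-name=HVM-boot-policy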

           Edit: You can also try this basic command with no other arguments: podman start archlinux. The parsing of command-line arguments must be improved.

          • theAeon
            theAeon @ronan-a last edited by theAeon

             @ronan-a Confirmed all three points. I did set GUEST_TOOLS in runx.conf; I do wonder if that could be the problem here.

             For what it's worth, --log-level debug shouldn't ever make it over to runx, so those earlier pastes should effectively be podman start archlinux.

             edit: a quick test says that GUEST_TOOLS isn't it.

            edit: more logs for ya:

            [04:01 lenovo150 ~]# podman logs archlinux
            connect to container console with 'xl console abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4'
            mount: mount(2) failed: Not a directory
            mount: mount(2) failed: Not a directory
            mkdir: cannot create directory '/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/merged/etc/hosts': File exists
            mount: mount(2) failed: Not a directory
            rm: cannot remove '/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/merged//etc/hosts': Device or resource busy
            cp: '/var/run/containers/storage/overlay-containers/abb22f4d68083424252ac7116427dfd9c3644291e858039750f135cae499f8b4/userdata/hosts' and '/var/lib/containers/storage/overlay/742626ef59426855d765f2cee7b24cac06ecacc60c5ae37668d1f95ff649cd22/merged//etc/hosts' are the same file
            
            • BenjiReis
              BenjiReis Vates πŸͺ XCP-ng Team πŸš€ @theAeon last edited by

              @theaeon Hi!

              I've just set up a runx host following @olivierlambert's instructions.
              I've reproduced your error when --log-level debug is put in the podman command.

              Do you still have an error without it?
              podman start archlinux should create a VM named container:<something> that starts, stops, and is then removed, just as a container would be.
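
              One way to watch that lifecycle from dom0 (a sketch; the exact name-label is whatever runx generates):

              podman start archlinux
              # While the container runs, a transient VM should show up here and vanish once bash exits
              xe vm-list | grep 'container:'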

              • ronan-a
                ronan-a Vates πŸͺ XCP-ng Team πŸš€ @theAeon last edited by

                @theaeon Like I said, we must change how arguments are parsed in the runx script, so avoid additional params like --log-level. πŸ˜‰

                • theAeon
                  theAeon @BenjiReis last edited by

                  For what it's worth, the podman logs archlinux command from above is w/o debug. I didn't quite realize that it vanishing immediately was intended behavior, though; tells you how versed I am in containers.

                  I'll try setting up the matrixdotorg/mjolnir thing again now that I have a command I know is working on runc.

                  • theAeon
                    theAeon @theAeon last edited by

                    Oh, now that's interesting. Turns out the containers (both archlinux and the one I just created) are exiting w/ error 143. They're getting SIGTERM'd from somewhere.
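
                    (For context: 143 is the shell's encoding of death by SIGTERM, 128 plus signal number 15. A quick demo in any shell:)

                    # A process killed by SIGTERM exits with status 128 + 15 = 143
                    sh -c 'kill -TERM $$'; echo $?   # prints 143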

                    • ronan-a
                      ronan-a Vates πŸͺ XCP-ng Team πŸš€ @theAeon last edited by

                      @theaeon said in RunX: tech preview:

                      For what it's worth, the podman logs archlinux command from above is w/o debug. I didn't quite realize that it vanishing immediately was intended behavior, though; tells you how versed I am in containers.
                      I'll try setting up the matrixdotorg/mjolnir thing again now that I have a command I know is working on runc.

                      Yeah, by default the archlinux image executes bash: when the container is started, bash is launched and exits immediately, and then the VM is stopped. The behavior is the same on Docker with this image. With interactive mode that wouldn't be the case, but we still have to implement it for a future runx version.
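
                      Until interactive mode exists, a possible workaround (standard podman usage, untested on runx here) is to give the container a long-running command so the VM stays up:

                      # Override the image's default bash with something that doesn't exit
                      podman create --name archlinux-keepalive archlinux sleep infinity
                      podman start archlinux-keepalive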

                      • ronan-a
                        ronan-a Vates πŸͺ XCP-ng Team πŸš€ @theAeon last edited by

                        @theaeon said in RunX: tech preview:

                      Oh, now that's interesting. Turns out the containers (both archlinux and the one I just created) are exiting w/ error 143. They're getting SIGTERM'd from somewhere.

                      It's related to how we terminate the VM process: what gets the SIGTERM is a wrapper, not the real process that manages the VM. We shouldn't expose this exit code to users, since it's not the container's real code; I'll create an issue on our side. Thanks for the feedback. πŸ™‚

                        • theAeon
                          theAeon @ronan-a last edited by theAeon

                          @ronan-a Oop, good to know. Now I guess I need to figure out why the new image I created is exiting instead of, well, working.

                          Unless there's something in this command that I shouldn't be invoking.

                          podman create --health-cmd="wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/ || exit 1" --volume=/root/mjolnir:/data:Z matrixdotorg/mjolnir

                          (I start it separately, later)
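
                           For completeness, those later steps would look like this, assuming a --name mjolnir had been passed to podman create above (without it, podman auto-generates a name; standard podman commands, untested on runx):

                           podman start mjolnir
                           podman ps --all --filter name=mjolnir   # shows state and exit code
                           podman healthcheck run mjolnir          # runs the --health-cmd once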

                          • B
                            bc-23 last edited by

                            Hi,

                            I have started to play around with this feature. I think it's a great idea πŸ™‚
                            At the moment I'm running into an issue on container start:

                            message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:80a85063-9b59-4fda-82c9-017be0fe967a share_dir none ///srv/runx-sr/1"}]]}
                            

                            I have created the SR as described above:

                            uuid ( RO)                    : 968d0b84-213e-a269-3a7a-355cd54f1a1c
                                          name-label ( RW): runx-sr
                                    name-description ( RW): 
                                                host ( RO): fraxcp04
                                  allowed-operations (SRO): VDI.introduce; unplug; plug; PBD.create; update; PBD.destroy; VDI.resize; VDI.clone; scan; VDI.snapshot; VDI.create; VDI.destroy; VDI.set_on_boot
                                  current-operations (SRO): 
                                                VDIs (SRO): 80a85063-9b59-4fda-82c9-017be0fe967a
                                                PBDs (SRO): 0d4ca926-5906-b137-a192-8b55c5b2acb6
                                  virtual-allocation ( RO): 0
                                physical-utilisation ( RO): -1
                                       physical-size ( RO): -1
                                                type ( RO): fsp
                                        content-type ( RO): 
                                              shared ( RW): false
                                       introduced-by ( RO): <not in database>
                                         is-tools-sr ( RO): false
                                        other-config (MRW): 
                                           sm-config (MRO): 
                                               blobs ( RO): 
                                 local-cache-enabled ( RO): false
                                                tags (SRW): 
                                           clustered ( RO): false
                            
                            
                            # xe pbd-param-list uuid=0d4ca926-5906-b137-a192-8b55c5b2acb6
                            uuid ( RO)                  : 0d4ca926-5906-b137-a192-8b55c5b2acb6
                                 host ( RO) [DEPRECATED]: a6ec002d-b7c3-47d1-a9f2-18614565dd6c
                                         host-uuid ( RO): a6ec002d-b7c3-47d1-a9f2-18614565dd6c
                                   host-name-label ( RO): fraxcp04
                                           sr-uuid ( RO): 968d0b84-213e-a269-3a7a-355cd54f1a1c
                                     sr-name-label ( RO): runx-sr
                                     device-config (MRO): file-uri: /srv/runx-sr
                                currently-attached ( RO): true
                                      other-config (MRW): storage_driver_domain: OpaqueRef:a194af9f-fd9e-4cb1-a99f-3ee8ad54b624
                            
                            

                            I also see that in /srv/runx-sr a symlink 1 is created, pointing to an overlay image.

                            The VM is in the paused state after the error above.

                            The template I used was an old Debian PV template, where I removed the PV-bootloader and install-* attributes from other-config. What template would you recommend using?

                            Any idea what could cause the error above?

                            Thanks,
                            Florian

                            • ronan-a
                              ronan-a Vates πŸͺ XCP-ng Team πŸš€ @bc-23 last edited by ronan-a

                              @bc-23 What's your xenopsd version? We haven't updated the modified runx package of xenopsd to support XCP-ng 8.2.1, so it's possible that you are using the latest packages without the right patches. ^^"

                              So please confirm this issue using rpm -qa | grep xenops. πŸ™‚

                              • B
                                bc-23 @ronan-a last edited by

                                @ronan-a The server is still running on 8.2

                                [11:21 fraxcp04 ~]# rpm -qa | grep xenops
                                xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64
                                xenopsd-xc-0.150.5.1-1.1.xcpng8.2.x86_64
                                xenopsd-cli-0.150.5.1-1.1.xcpng8.2.x86_64
                                

                                Are there patches for this version?

                                • ronan-a
                                  ronan-a Vates πŸͺ XCP-ng Team πŸš€ @bc-23 last edited by ronan-a

                                   @bc-23 You don't have the patched RPMs because there is a new hotfix for 8.2 and 8.2.1 on the main branch, so the current xenopsd package version is greater than the runx one... We must build a new version of the runx packages on our side to correct this issue. We will fix that. πŸ˜‰
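
                                   In the meantime, you can compare what is installed with what the repositories offer, for example:

                                   # Installed builds
                                   rpm -q xenopsd xenopsd-xc xenopsd-cli
                                   # Every build the configured repos can provide
                                   yum --showduplicates list xenopsd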

                                  • B
                                    bc-23 @ronan-a last edited by

                                    @ronan-a I have seen there are updated packages, thanks πŸ™‚
                                    After the update I'm able to start the container/VM πŸ™‚

                                    • matiasvl
                                      matiasvl Vates πŸͺ XCP-ng Team πŸš€ @olivierlambert last edited by ronan-a

                                       For those who would like to try using xe, this is what I did to create the correct template. I started from a Debian 10 template; replace the UUIDs with your own (this example uses 2 vCPUs):

                                       # Create a VM from the Debian template (--minimal prints just the new VM's UUID)
                                       xe vm-install template=Debian\ Buster\ 10 new-name-label=tempforrunx sr-uuid=7c5212f3-97b2-cdeb-b735-ad26638926e3 --minimal
                                       # Clear the HVM boot policy so the VM boots PV, and reset the kernel arguments
                                       xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a HVM-boot-policy=""
                                       xe vm-param-set uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a PV-args=""
                                       # Give it 2 vCPUs
                                       xe vm-param-set VCPUs-max=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                                       xe vm-param-set VCPUs-at-startup=2 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                                       # Remove the disk vm-install created
                                       xe vm-disk-remove device=0 uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                                       # Turn the VM back into a template
                                       xe template-param-set is-a-template=true uuid=fc5c67c2-ee5a-4b90-8e0f-eb6ff9fdd29a
                                      

                                      The template is listed when you issue xe template-list.
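
                                       For example, to confirm it from dom0:

                                       # Should print the new template with is-a-template set to true
                                       xe template-list name-label=tempforrunx params=uuid,name-label,is-a-template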

                                      • r3m8
                                        r3m8 last edited by

                                        Hi,

                                         Same as @bc-23, I get the error:

                                        message: xenopsd internal error: Could not find File, BlockDevice, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:85daf561-836e-48f1-9b74-1dfef38abe9e share_dir none ///root/runx-sr/1"}]]}
                                        

                                         This is my rpm -qa | grep xenops output (my XCP-ng is up to date):

                                        xenopsd-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                                        xenopsd-xc-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                                        xenopsd-cli-0.150.9-1.1.0.runx.1.xcpng8.2.x86_64
                                        

                                         Is it still the runx package that's causing problems? Thank you all πŸ™‚

                                        • ronan-a
                                          ronan-a Vates πŸͺ XCP-ng Team πŸš€ @r3m8 last edited by

                                           @r3m8 Weird, did you run an xe-toolstack-restart?

                                          • r3m8
                                            r3m8 @ronan-a last edited by

                                             @ronan-a We have reviewed our SR and template configuration (especially the xe vm-disk-remove device=0 step) and it works fine now. (We had already done an xe-toolstack-restart, to avoid restarting the hypervisor.)
