XCP-ng

    XCP-ng 8.3 updates announcements and testing

    News
    470 Posts · 47 Posters · 193.5k Views · 62 Watching
    • bufanda

      Installed on my usual lab setup; works fine so far, although I have the "Fell back to full backup" message for qcow2 VMs.

      • acebmxer @bufanda

        @bufanda Make sure you have enabled purge snapshots when using CBT, if you are using CBT. After the previous update, that fixed the "fell back to full backup" issue for me.

        Maybe also try running a manual full backup to reset the VDI chain.

        Screenshot_20260425_071444.png
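        As a side note, the CBT state can be checked from the host CLI. A minimal sketch, assuming an XCP-ng host (`xe vdi-list` and the `cbt-enabled` field are real XCP-ng CLI names; the fallback branch is just a guard so the script runs anywhere):

        ```shell
        #!/bin/sh
        # List VDIs that currently have changed block tracking (CBT) enabled.
        # Guarded so it degrades gracefully when not run on an XCP-ng host.
        if command -v xe >/dev/null 2>&1; then
          cbt_vdis=$(xe vdi-list cbt-enabled=true --minimal)
        else
          cbt_vdis=""
          echo "xe CLI not found; run this on an XCP-ng host"
        fi
        echo "VDIs with CBT enabled: ${cbt_vdis:-none found}"
        ```

        If a backup still falls back to full after CBT looks correct, a manual full run as suggested above resets the chain.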

        • acebmxer @rzr

          @rzr

          One thing I am noticing after this update is a lot more traffic on my TrueNAS. The gaps are me shutting down the VMs and starting each one to find the problem VM, but it seems to be any VM. I didn't notice any performance issues; I just noticed the graph in TrueNAS, which is usually a flat line with the occasional spike here and there, not the big mess on the left in the first screenshot. 5 VMs running.

          My XOA is on local storage on the master host and is used for these tests.
          All VMs are powered off except XOA and XO.

          Screenshot_20260425_130107.png

          Here I booted the XO VM and left it idle. The spike after that is me live-migrating it back to the VHD SR and then leaving it idle.

          Screenshot_20260425_135258-1.png

          The gap in the middle is XO idle on the VHD-only SR.
          Screenshot_20260425_135604.png

          Live-migrating XO back to the qcow2-only SR
          Screenshot_20260425_140908.png

          Migration back to qcow2 completed
          Screenshot_20260425_143122.png

          Left XO idle after migration to the qcow2 SR.
          Screenshot_20260425_144314.png

          Again, all VMs booted and idle.

          Screenshot_20260425_145451.png

          From the master host.
          Screenshot_20260425_150320.png

          • bufanda @acebmxer

            @acebmxer Purge snapshots has been active since I created the backup job over a year ago. I always enable purge snapshots on backup jobs.

            • stormi Vates 🪐 XCP-ng Team @bufanda

              @bufanda I'm being told it's expected to have it the first time after the update, but in theory the next ones should not be fulls. Can you try?

              • stormi Vates 🪐 XCP-ng Team @acebmxer

                @acebmxer With qcow2, the way we regularly scan the SR uses more I/O, so this may explain it.

                • acebmxer @stormi

                  @stormi said:

                  @acebmxer with qcow2, the way we scan the SR regularly uses more I/O, so this may explain it.

                  Thanks for the update, and for confirming that this is expected. I think it is a bit excessive, though, given it's only 4-5 VMs.

                  • bufanda @stormi

                    @stormi said:

                    @bufanda I'm being told it's expected to have it the first time after the update, but in theory the next ones should not be fulls. Can you try?

                    Just checked, and the VM I was testing with was part of two backup jobs; it seems that when one runs and the second starts, it falls back. I have now removed the VM from one of them, and as a member of only one backup job it looks good. Will keep an eye on it.

                    • stormi Vates 🪐 XCP-ng Team @acebmxer

                      @acebmxer Can you measure the amount of data transferred at each spike, so that we can check whether it's more than expected? What's the total size of the VM disks?

                      • acebmxer @stormi

                        @stormi

                        Left side of the chart is all VMs running: 1.5 GB/s. Each VM's VDI ranges from 128 GB to 256 GB allocated (actual disk space used, I'm not sure).

                        screenshot_20260425_130107.png

                        The 200 MB/s - 300 MB/s on the far right is just XO-CE running idle.
                        screenshot_20260425_144314.png

                        So if each VM is consuming ~300 MB/s, times 4-5 VMs, that would get close to the 1.5 GB/s.
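                        The back-of-envelope arithmetic in that last line holds up; a quick check using only the figures reported above (these are the poster's chart readings, not new measurements):

                        ```python
                        # Rough check of the observed NFS traffic using the figures above:
                        # ~300 MB/s of background I/O per idle VM on the qcow2 SR, 5 VMs running.
                        per_vm_mb_s = 300   # single idle XO VM, far right of the chart
                        vm_count = 5        # VMs running during the big spikes on the left
                        aggregate_gb_s = per_vm_mb_s * vm_count / 1000
                        print(f"expected aggregate: ~{aggregate_gb_s} GB/s")  # matches the ~1.5 GB/s spikes
                        ```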

