XCP-ng

    flakpyro (@flakpyro)

    Reputation: 140 · Profile views: 29 · Posts: 259 · Followers: 1 · Following: 0

    Best posts made by flakpyro

    • RE: XCP-ng 8.3 updates announcements and testing

      @stormi Installed on our 2 production pools, DR, and remote sites: 46 hosts total across Dell, Lenovo, HP, and Supermicro servers. No issues to report!

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly. Also installed on an HP GL325 Gen 10 with no issues after reboot.

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @gduperrey Updated my usual test hosts (Minisforum and Supermicro X11), as well as two 2-host AMD pools (one pool of HP DL320 Gen10s and another of Asus Epyc servers of some sort), and lastly a Dell R360, without issue.

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @gduperrey Installed on my usual round of test hosts. No issues to report so far! With such a small change I wasn't expecting anything to go wrong!

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      Installed on my usual selection of hosts (a mixture of AMD and Intel hosts: SuperMicro, Asus, and Minisforum). No issues after a reboot; PCI passthrough, backups, etc. continue to work smoothly.

      posted in News
    • RE: log_fs_usage / /var/log directory on pool master filling up constantly

      One of our pools (5 hosts, 6 NFS SRs) had this issue when we first deployed it. I engaged with Vates support and they changed a setting that reduced the frequency of the SR.scan job from every 30 seconds to every 2 minutes. This completely fixed the issue for us, going on a year and a half later.

      I dug back into our documentation and found the command they gave us:

          xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID> 
      

      Where <Host UUID> is the UUID of your pool master.
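
      In case it helps, here is a sketch of applying that same setting end to end, including looking up the pool master's UUID so it doesn't have to be pasted by hand. This assumes a dom0 shell on any pool member; the `auto-scan-interval` key and the `120` value are exactly those from the command above, the rest is standard xe CLI.

      ```shell
      # Look up the pool master's host UUID (xe prints UUIDs for
      # object-valued params like "master")
      MASTER_UUID=$(xe pool-list params=master --minimal)

      # Raise the SR auto-scan interval from the 30 s default to 120 s,
      # as Vates support suggested in the post above
      xe host-param-set uuid="$MASTER_UUID" other-config:auto-scan-interval=120

      # Verify the key took in the master's other-config map
      xe host-param-get uuid="$MASTER_UUID" param-name=other-config param-key=auto-scan-interval
      ```

      Note this lands in the host's `other-config` map, so it survives reboots but would need to be re-applied if the pool master is ever re-designated.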

      posted in XCP-ng
    • RE: XCP-ng 8.3 updates announcements and testing

      @stormi Installed on my usual test hosts (Intel Minisforum MS-01, and a Supermicro running a Xeon E-2336 CPU). Also installed onto a 2-host AMD Epyc pool. Updates went smoothly; backups continue to function as before.

      3 Windows 11 VMs had Secure Boot enabled. In XOA I clicked "Copy pool's default UEFI certificates to the VM" after the update was complete. The VMs continued to boot without issue afterwards.

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @gduperrey

      Installed on 2 test machines:

      Machine 1:
      Intel Xeon E-2336
      SuperMicro board.

      Machine 2:
      Minisforum MS-01
      i9-13900H
      32 GB Ram
      Using Intel X710 onboard NIC

      Both machines installed fine and all VMs came up without issue afterwards. My one test backup job also ran without any issues.

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @gduperrey Installed on 2 test machines:

      Machine 1:
      Intel Xeon E-2336
      SuperMicro board.

      Machine 2:
      Minisforum MS-01
      i9-13900H
      32 GB Ram
      Using Intel X710 onboard NIC

      Both machines installed fine and all VMs came up without issue after.

      I ran a backup job afterwards to test snapshot coalescing; no issues there.

      posted in News
    • RE: XCP-ng 8.3 updates announcements and testing

      @stormi Updated a test machine running only a couple of VMs. Everything installed fine and rebooted without issue.

      Machine is:
      Intel Xeon E-2336
      SuperMicro board.

      One VM happens to be Windows-based, with an Nvidia GPU passed through to it running Blue Iris using the MSR fix found elsewhere on these forums. The fix continues to work with this version of Xen. 👍

      posted in News

    Latest posts made by flakpyro

    • RE: Xen Orchestra 6.3.2 Random Replication Failure

      @florent I am using NBD for all backups, yes, but am not purging snapshots / using CBT.

      It's so rare, in fact, that I haven't had it happen since I made this post last week (when it happened twice in 2 days). This is our production XOA it has occurred on, so I won't be able to test a branch, and I have never seen it happen on my from-sources install at home.

      posted in Backup
    • RE: XCP-ng 8.3 updates announcements and testing

      Installed on a handful of test machines. Not as many as usual, as I'm being very cautious with this one for now. Everything rebooted and VMs started OK afterwards. Using VHD for everything currently.

      posted in News
    • RE: Xen Orchestra 6.3.2 Random Replication Failure

      @pierrebrunet Thanks for the update. Glad to know it's not something unique to our environment and that you were able to track down the cause!

      posted in Backup
    • Xen Orchestra 6.3.2 Random Replication Failure

      Since the XOA 6.3 release I have had a few random backup errors in an environment that has otherwise had fairly flawless backup performance for the last year. I cannot make out what exactly the error means, but retrying the job allows it to succeed without issue. It is also very intermittent.


      Log attached. 2026-04-07T01_00_03.075Z - backup NG.txt

      If the issue persists I will submit a ticket to dive into it further, but I have only had it happen 3 times since the release of the 6.3.x update, so it's hard to reproduce.

      Replication target storage is a Pure C50R4 with NFS3 exports.

      posted in Backup
    • RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

      @florent I can confirm that this fixes the issue!

      posted in Backup
    • RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

      I created a brand new Windows 11 VM with a vTPM and Secure Boot enabled and am able to reproduce this on a freshly created VM.

      • Initial replication will work.
      • Any follow-up replication will fail with Error: VTPM_MAX_AMOUNT_REACHED(1).
      • Retrying the job after the failure will succeed.
      posted in Backup
    • RE: XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

      I tried removing the replica chain and letting it start from scratch. The initial full replication was a success, but unfortunately running a follow-up incremental replication job results in the same error.


      The entire transfer succeeds; it only seems to fail at the very end.

      I have other VMs (Server 2022) with vTPMs that do not have this issue. The VM that is failing is Windows 11, and it is the only Windows 11 VM we have running.

      posted in Backup
    • RE: XOA - Memory Usage

      Looks like the issue still persists in 6.3.1?

      Here is the memory usage since installing the latest update yesterday (screenshot attached).

      posted in Xen Orchestra
    • XOA 6.1.3 Replication fails with "VTPM_MAX_AMOUNT_REACHED(1)"

      Since updating to XOA 6.1.3 I have a VM with Secure Boot enabled that fails to replicate with the error:

      "VTPM_MAX_AMOUNT_REACHED(1)"
      

      Retrying the backup allows it to complete.

      "data": {
        "id": "69d826f4-383c-f163-b59a-8f3ea5132fd1",
        "isFull": false,
        "name_label": "C50-DR-Win-NFS3-SR1",
        "type": "SR"
      },
      "id": "1775101037617",
      "message": "export",
      "start": 1775101037617,
      "status": "failure",
      "tasks": [
        {
          "id": "1775101039730",
          "message": "transfer",
          "start": 1775101039730,
          "status": "failure",
          "end": 1775101063330,
          "result": {
            "code": "VTPM_MAX_AMOUNT_REACHED",
            "params": [
              "1"
            ],
            "call": {
              "duration": 3,
              "method": "VTPM.create",
              "params": [
                "* session id *",
                "OpaqueRef:5263e3da-0772-f8c5-5344-32e81c08c37a",
                false
              ]
            },
            "message": "VTPM_MAX_AMOUNT_REACHED(1)",
            "name": "XapiError",
            "stack": "XapiError: VTPM_MAX_AMOUNT_REACHED(1)\n    at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n    at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/transports/json-rpc.mjs:38:21\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
          }
        }
      ],
      "end": 1775101063749
      }
      

      It appears to be trying to create a new TPM on the replica when one already exists?

      Not sure why it fails during the job run but completes on a retry, but it is consistent in its behaviour.
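
      One way to check that theory from dom0 would be to list the vTPM objects attached to the replica VM between runs. This is a sketch, assuming an XCP-ng / XAPI build that exposes the `vtpm-*` xe commands; `<replica VM UUID>` is a placeholder for the CR target VM's UUID, not a value from the log above.

      ```shell
      # List vTPMs currently attached to the replica VM; if one already
      # exists here, the job's VTPM.create call would hit the 1-per-VM
      # limit and raise VTPM_MAX_AMOUNT_REACHED(1)
      xe vtpm-list vm-uuid=<replica VM UUID>

      # A stale vTPM could then be removed by UUID, e.g.:
      # xe vtpm-destroy uuid=<vTPM UUID>
      ```

      If the list is empty right before a failing run, the error would point at the job creating the vTPM twice rather than at leftover state.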

      posted in Backup
    • RE: Backing up from Replica triggers full backup

      Happy to report from my limited testing that as of this morning this appears to be fixed in master.

      posted in Backup