XCP-ng
    Popular topics
    • Backup: ERR_OUT_OF_RANGE in RemoteVhdDisk.mergeBlock (Backup — 14 posts, 175 views)
      Last reply: @florent One last update. I reverted to the master branch (6699b) yesterday evening and the backup ran without issues overnight.
    • AMD 'Barcelo' passthrough issues - any success stories? (Hardware — 9 posts, 202 views)
      Last reply by DustyArmstrong: @TeddyAstie Thanks for the update. I do actually have a VBIOS for that GPU, but I wasn't entirely sure what to do with it - is there a process to inject it? I've found resources for Proxmox and others, but I really like XCP-ng and I don't want to migrate my entire setup just for that. If it's really tricky then I'm not too worried about it; as I say, the VM runs the cameras perfectly fine on CPU alone, so the impact is negligible.
      Edit: This issue might answer my question: https://github.com/xcp-ng/xcp/issues/786 ("PCI ROM BAR not exposed to guest when using xen-pci-passthrough"): "Even when specifying romfile and rombar properties on the xen-pci-passthrough device in QEMU, the ROM region is not mapped into guest memory."
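For context on the linked issue, romfile and rombar are properties set on the device on the QEMU command line. A hedged sketch only — the PCI address and ROM path are placeholders, and per issue #786 the ROM region may still not be mapped into the guest even with these set:

```shell
# Hypothetical example: attach a passed-through GPU with an explicit ROM
# image via QEMU's xen-pci-passthrough device. The hostaddr value and
# /root/vbios.rom path are placeholders for your own device and VBIOS dump.
# Per xcp-ng/xcp#786, the guest may still not see the ROM BAR.
qemu-system-x86_64 \
  -device 'xen-pci-passthrough,hostaddr=0000:04:00.0,romfile=/root/vbios.rom'
```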
    • S3 Chunk Size (Backup — 16 posts, 623 views)
      Last reply by olivierlambert: A 502 is a response coming from your S3 endpoint, telling us the server is having an issue. Adding @florent in the loop.
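Since a 502 is a server-side, usually transient failure, the standard client-side mitigation is to retry with exponential backoff. A generic sketch in Python — `upload_chunk` and `TransientServerError` are hypothetical stand-ins for whatever your S3 client raises, and real clients (e.g. boto3) ship built-in retry configuration that should be preferred over hand-rolling this:

```python
import random
import time

class TransientServerError(Exception):
    """Stand-in for a 502/503-style server-side failure (hypothetical)."""

def with_retries(fn, attempts=5, base_delay=0.5):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientServerError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Backoff doubles each attempt (0.5s, 1s, 2s, ...) plus jitter
            # so concurrent clients don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Usage would be `with_retries(lambda: upload_chunk(data))`, where `upload_chunk` is whatever performs one multipart-upload part.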
    • Hey XCP-NG! How's my setup? (Share your setup! — 12 posts, 2k views)
      Last reply: I've had a lot of updates happening in the homelab. I've replaced the T620 with an R740 and expanded my storage for both the HDD and NVMe pools. I also did a little min-maxing on the hardware to help separate traffic, decrease latency and jitter for internet traffic, and introduce an IDS.
      There are a lot of additions and modifications, but the other big one is setting up a Dell Precision 5820 with XCP-NG as a studio and prototyping rig. I replaced Ansible with AWX, added some more VMs and migrated others, and through all that I've updated the diagrams as well. I will say that XCP-NG offers the flexibility and performance that I've needed so far. I would love to try out XOSTOR storage at some point, but I'd have to move around my entire setup haha.
      - The VLAN reference diagram breaks down each VLAN. [image: 1772850854389-networking-and-vlans-reference-diagram.drawio.png]
      - The physical equipment reference diagram gives a breakdown of the server equipment and NAS at both locations, including the home rack, plus a basic breakdown of each server configuration. [image: 1772850854434-physical-equipment-reference-diagram.drawio.png]
      - The physical topology reference diagram gives a simplified overview of the major networking and server equipment. [image: 1772850854468-physical-topology-reference-diagram.drawio.png]
      - The logical topology reference diagram gives a more in-depth view of the networking, servers, VMs, VLANs, and endpoint devices. [image: 1772850854364-logical-topology-reference-diagram.drawio.png]
      - The colo reference diagram covers my off-site location with a rented dedicated server. [image: 1772850854295-colo-reference-diagram.drawio.png]
      - The authentication reference diagram gives a breakdown of how user access is sourced and which security route it takes. [image: 1772850854275-authentication-reference-diagram.drawio.png]
      - The shared storage and access reference diagram gives a breakdown of how most hardware interacts with regard to network routing for users, endpoints, and member servers. [image: 1772850854483-shared-storage-and-acess-reference-diagram.drawio.png]
    • VHD Check Error (Backup — 2 posts, 24 views)
      Last reply: I looked into the backup job and I had forgotten to enable a few settings when I recreated it. After re-enabling the settings below, I reran the backup job. The VM in question did a full backup. All passed. [image: 1772848089214-screenshot_20260306_204613.png]
      {
        "data": { "type": "VM", "id": "fb72a8d7-a039-849f-b547-24fc56f056ba", "name_label": "Work PC" },
        "id": "1772833993844",
        "message": "backup VM",
        "start": 1772833993844,
        "status": "success",
        "tasks": [
          { "id": "1772833993868", "message": "clean-vm", "start": 1772833993868, "status": "success", "end": 1772833994233, "result": { "merge": false } },
          { "id": "1772833994652", "message": "snapshot", "start": 1772833994652, "status": "success", "end": 1772833997235, "result": "16f5fe19-207a-4d89-017c-3f9405d22231" },
          { "id": "1772834490355:0", "message": "health check", "start": 1772834490355, "status": "success",
            "infos": [ { "message": "This VM doesn't match the health check's tags for this schedule" } ],
            "end": 1772834490356 },
          { "data": { "id": "a5e54e04-d7e4-48cb-bafc-b2f306d39679", "isFull": true, "type": "remote" },
            "id": "1772833997235:0", "message": "export", "start": 1772833997235, "status": "success",
            "tasks": [
              { "id": "1772834004185", "message": "transfer", "start": 1772834004185, "status": "success", "end": 1772834489032, "result": { "size": 119502012416 } },
              { "id": "1772834490368", "message": "clean-vm", "start": 1772834490368, "status": "success", "end": 1772834490475, "result": { "merge": false } }
            ],
            "end": 1772834490476 }
        ],
        "infos": [
          { "message": "will delete snapshot data" },
          { "data": { "vdiRef": "OpaqueRef:31692cb3-7c43-de83-2cc8-f2e39a0105c8" }, "message": "Snapshot data has been deleted" }
        ],
        "end": 1772834490476
      }
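Logs like the one above nest sub-tasks inside a `tasks` array, which makes eyeballing a long run tedious. A minimal sketch of a summarizer, assuming only the `message`/`status`/`tasks` fields visible in the pasted log (`summarize` is a hypothetical helper, not part of any XO tooling):

```python
def summarize(log):
    """Walk a backup-job log's nested tasks and list each step's status."""
    lines = []

    def walk(task, depth=0):
        # Indent sub-tasks under their parent for a quick tree view.
        lines.append("  " * depth + f"{task['message']}: {task['status']}")
        for sub in task.get("tasks", []):
            walk(sub, depth + 1)

    walk(log)
    return lines

# Example with the top of the log above:
example = {
    "message": "backup VM", "status": "success",
    "tasks": [{"message": "snapshot", "status": "success"},
              {"message": "export", "status": "success",
               "tasks": [{"message": "transfer", "status": "success"}]}],
}
print("\n".join(summarize(example)))
```

For the full log, the `transfer` step's `result.size` of 119502012416 bytes works out to roughly 111 GiB, matching a full backup of the VM.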