XCP-ng
Popular topics
    • Backup: ERR_OUT_OF_RANGE in RemoteVhdDisk.mergeBlock

      Backup · 0 votes · 8 posts · 98 views
      W:
      Hi @simonp, thanks again. Great service! I have checked out your fix branch and re-run the backup. It succeeded, as seen here:

      {
        "data": { "mode": "delta", "reportWhen": "always" },
        "id": "1772725052650",
        "jobId": "73d72ea9-f25e-4c88-878c-98af27c1a827",
        "jobName": "Daily backup of Unix VMs with changing data",
        "message": "backup",
        "scheduleId": "9cbc3238-382d-4fae-83c0-4e92348898e5",
        "start": 1772725052650,
        "status": "success",
        "infos": [
          { "data": { "vms": ["c54a256e-0493-5c6d-a0cd-c3a4618f99a1"] }, "message": "vms" }
        ],
        "tasks": [
          {
            "data": { "type": "VM", "id": "c54a256e-0493-5c6d-a0cd-c3a4618f99a1", "name_label": "MailCW" },
            "id": "1772725054418",
            "message": "backup VM",
            "start": 1772725054418,
            "status": "success",
            "tasks": [
              {
                "id": "1772725054426",
                "message": "clean-vm",
                "start": 1772725054426,
                "status": "success",
                "tasks": [
                  { "id": "1772725055044", "message": "merge", "start": 1772725055044, "status": "success", "end": 1772725196294 }
                ],
                "end": 1772725196304,
                "result": { "merge": true }
              },
              {
                "id": "1772725197030",
                "message": "snapshot",
                "start": 1772725197030,
                "status": "success",
                "end": 1772725198708,
                "result": "e46c757e-9180-2502-81c5-36a2fb2928fc"
              },
              {
                "data": { "id": "fd6bbd08-52dd-43eb-84df-f93b54fc28e9", "isFull": false, "type": "remote" },
                "id": "1772725198708:0",
                "message": "export",
                "start": 1772725198708,
                "status": "success",
                "tasks": [
                  { "id": "1772725200147", "message": "transfer", "start": 1772725200147, "status": "success", "end": 1772725224785, "result": { "size": 2768240640 } },
                  { "id": "1772725225673", "message": "clean-vm", "start": 1772725225673, "status": "success", "end": 1772725225747, "result": { "merge": true } }
                ],
                "end": 1772725225757
              }
            ],
            "end": 1772725225758
          }
        ],
        "end": 1772725225759
      }
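The timestamps in a report like the one above are Unix epoch milliseconds, and each task nests its sub-tasks under a `tasks` array. A minimal sketch (a hypothetical helper, not part of Xen Orchestra) that walks such a report and shows how long each step took:

```python
def task_durations(task, depth=0, out=None):
    """Walk the nested task tree of an XO backup report and collect
    (indented message, seconds) pairs; start/end are epoch milliseconds."""
    if out is None:
        out = []
    out.append(("  " * depth + task["message"],
                (task["end"] - task["start"]) / 1000))
    for sub in task.get("tasks", []):
        task_durations(sub, depth + 1, out)
    return out

# Tiny demo with the same shape as the report above (timestamps in ms):
demo = {"message": "backup", "start": 0, "end": 173_000,
        "tasks": [{"message": "merge", "start": 1_000, "end": 142_000}]}
for message, seconds in task_durations(demo):
    print(f"{message:<20} {seconds:8.1f}s")
```

Applied to the full report, it would show the `merge` step dominating the run: roughly 141 s of the ~173 s total.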
    • XCP-ng 8.3 updates announcements and testing

      News · 1 vote · 372 posts · 149k views
      gduperrey:
      New update candidate for you to test! A new update for the Xen packages is ready, bringing a significant improvement in live migration performance on AMD systems under heavy load. We are adding it to the previous batch of updates for a joint release.

      Maintenance updates
      xen: Improve migration performance on AMD systems under heavy load.

      Test on XCP-ng 8.3
      yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
      yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
      reboot
      The usual update rules apply: pool coordinator first, etc.

      Versions:
      xen: 4.17.6-4.1.xcpng8.3

      What to test
      Normal use and anything else you want to test. If you have a pool with AMD processors, we're interested in your feedback regarding live migration under heavy load.

      Test window before official release of the updates: ~4-5 days
    • Issues with new VM after latest 8.3 updates (prior to release)

      XCP-ng · Solved · 1 vote · 4 posts · 58 views
      olivierlambert:
      No worries, it happens! Glad you found the problem.
    • Migrations after updates

      Xen Orchestra · 0 votes · 2 posts · 7 views
      D:
      @acebmxer Using performance mode should migrate your VMs so that all systems are equally "performant". The different modes are outlined here: https://docs.xcp-ng.org/management/vm-load-balancing/ As to why your VMs did not migrate, I can only guess: the plugin likely polls the system only every so often, and if your systems are already performing well enough, nothing gets moved.
    • Getting "There was a problem proxying this request." when trying to restore folder or files

      Backup · Unsolved · 0 votes · 1 post · 14 views
      No one has replied
    • S3 Backup - maximum number of parts

      Xen Orchestra · 0 votes · 7 posts · 318 views
      florent:
      I insist on this point: it will only impact FULL backup jobs, not incremental ones.
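For context on why full backups are the ones to hit this limit: S3 caps a multipart upload at 10,000 parts, so the uploader must grow the part size with the object size, and a single large full backup is the worst case. A back-of-the-envelope sketch (an illustration, not Xen Orchestra's actual code):

```python
import math

MAX_PARTS = 10_000       # S3 hard limit on parts per multipart upload
MIN_PART = 5 * 1024**2   # 5 MiB minimum part size (all parts except the last)

def required_part_size(total_bytes: int) -> int:
    """Smallest part size that keeps a multipart upload within 10,000 parts."""
    return max(MIN_PART, math.ceil(total_bytes / MAX_PARTS))

# With the 5 MiB minimum, an object tops out at 10,000 * 5 MiB ≈ 48.8 GiB;
# a 2 TiB full backup needs parts of roughly 210 MiB instead.
print(required_part_size(2 * 1024**4) // 1024**2, "MiB")
```

Incremental runs upload much smaller deltas, which is why they stay comfortably under the part-count ceiling.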
    • 🛰️ XO 6: dedicated thread for all your feedback!

      Xen Orchestra · 6 votes · 140 posts · 12k views
      G:
      @ShaneNP I just set my lab back up from scratch. I can't remember for certain, but I think it pushed me over to v5 to set up the SR.
    • AMD 'Barcelo' passthrough issues - any success stories?

      Hardware · 0 votes · 8 posts · 143 views
      TeddyAstie:
      @DustyArmstrong said: @TeddyAstie yarp. My bad, the VM has it as 00:08.0 but on the host it's actually 00:06.0; I just didn't think about the specifics of your request!

      06:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Barcelo (rev c1) (prog-if 00 [VGA controller])
          Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 1636
          Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Interrupt: pin A routed to IRQ 38
          Region 0: Memory at d0000000 (64-bit, prefetchable) [size=256M]
          Region 2: Memory at e0000000 (64-bit, prefetchable) [size=2M]
          Region 4: I/O ports at d000 [size=256]
          Region 5: Memory at fca00000 (32-bit, non-prefetchable) [size=512K]
          Capabilities: [48] Vendor Specific Information: Len=08 <?>
          Capabilities: [50] Power Management version 3
              Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1+,D2+,D3hot+,D3cold+)
              Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
          Capabilities: [64] Express (v2) Legacy Endpoint, MSI 00
              DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
                  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
              DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                  RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
                  MaxPayload 256 bytes, MaxReadReq 512 bytes
              DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr- TransPend-
              LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                  ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
              LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
              LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
              DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
              DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
              LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                  Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                  Compliance De-emphasis: -6dB
              LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
                  EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
          Capabilities: [a0] MSI: Enable- Count=1/4 Maskable- 64bit+
              Address: 0000000000000000  Data: 0000
          Capabilities: [c0] MSI-X: Enable- Count=4 Masked-
              Vector table: BAR=5 offset=00042000
              PBA: BAR=5 offset=00043000
          Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
          Capabilities: [270 v1] #19
          Capabilities: [2a0 v1] Access Control Services
              ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
              ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
          Capabilities: [2b0 v1] Address Translation Service (ATS)
              ATSCap: Invalidate Queue Depth: 00
              ATSCtl: Enable-, Smallest Translation Unit: 00
          Capabilities: [2c0 v1] Page Request Interface (PRI)
              PRICtl: Enable- Reset-
              PRISta: RF- UPRGI- Stopped+
              Page Request Capacity: 00000100, Page Request Allocation: 00000000
          Capabilities: [2d0 v1] Process Address Space ID (PASID)
              PASIDCap: Exec+ Priv+, Max PASID Width: 10
              PASIDCtl: Enable- Exec- Priv-
          Capabilities: [400 v1] #25
          Capabilities: [410 v1] #26
          Capabilities: [440 v1] #27
          Kernel driver in use: pciback

      Thanks. So basically, there is a more annoying issue: the device doesn't even have a ROM BAR. In this case, the VBIOS is likely in the host's VFCT ACPI table (which the guest can't see), and it would need to be injected as a "fake" ROM BAR for the guest to behave properly. That's doable on its own, but quite tricky to integrate (you would, e.g., need to extract the VBIOS from VFCT using external tools). I just discussed this with Xen/AMD people, and there are known issues regarding PCI passthrough of integrated AMD GPUs (not specific to Xen, AFAIU).
      There are some projects exploring alternative approaches to bring AMD GPUs to VMs (virtio-gpu native context), which is the current focus.
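On extracting the VBIOS from VFCT: the table layout used by the Linux radeon/amdgpu drivers is a 36-byte standard ACPI header, a 16-byte TableUUID, a u32 offset to the VBIOS image (relative to the table start), then a 28-byte image header whose last field is the image length, followed by the VBIOS bytes themselves. A rough sketch of such an external tool, with offsets taken from the kernel's struct definitions (treat it as an illustration, not a hardened parser):

```python
import struct

ACPI_HEADER = 36   # standard ACPI table header
UUID_LEN = 16      # VFCT TableUUID
IMG_HDR = 28       # VFCT image header: 3x u32, 4x u16, 2x u32

def extract_vbios(vfct: bytes) -> bytes:
    """Pull the VBIOS image out of a raw VFCT ACPI table blob.

    VBIOSImageOffset (u32) sits right after the ACPI header and
    TableUUID; the image header (PCI bus/device/function, vendor and
    device IDs, revision, ImageLength) ends with the image length,
    and the VBIOS bytes follow immediately after it.
    """
    (img_off,) = struct.unpack_from("<I", vfct, ACPI_HEADER + UUID_LEN)
    hdr = struct.unpack_from("<3I4H2I", vfct, img_off)
    image_length = hdr[8]               # last field: ImageLength
    start = img_off + IMG_HDR
    return vfct[start:start + image_length]
```

On the host, you would feed this the contents of /sys/firmware/acpi/tables/VFCT and then arrange for the resulting blob to be exposed to the guest as the fake ROM BAR.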