XCP-ng
    CBT: the thread to centralize your feedback

    Backup · 439 Posts · 37 Posters · 386.6k Views · 29 Watching
    • manilx @Tristis Oris:

      @Tristis-Oris Working fine here, with CBT backups.

    • Andrew (Top contributor) @Tristis Oris:

        @Tristis-Oris Is your backed-up VM using LVM for its root filesystem?

    • Tristis Oris (Top contributor) @Andrew:

          @Andrew Most of them, yes.

    • Andrew (Top contributor) @Tristis Oris:

            @Tristis-Oris This is a known issue from before and not related to CBT backups.

            See also the forum topic "File restore error on LVMs" and the open issue:

            File level restoration not working on LVM partition #7029, opened by andrew64k in vatesfr/xen-orchestra
            (https://github.com/vatesfr/xen-orchestra/issues/7029)

    • Tristis Oris (Top contributor) @Andrew:

              @Andrew Got it. I missed that ticket.

    • manilx @Andrew:

                @Andrew ext4 or Windows.

    • frank-s:

                  Hi, I am running from the sources and I'm almost up to date (991b4). Can anyone please tell me what this means: "Error: can't create a stream from a metadata VDI, fall back to a base"? Restarting the backup results in a full backup rather than an incremental.
                  Thanks.

                  436b4e04-ffee-4afe-b605-db05abf63ad3-image.png

    • Danp (Pro Support Team) @frank-s:

                    @frank-s This PR is where the update occurred -- https://github.com/vatesfr/xen-orchestra/pull/7836

                    @florent should be able to explain the meaning.

                    feat(nbd-client/multi): harden NBD backups #7836 (closed), opened by fbeauchamp in vatesfr/xen-orchestra

    • manilx @Danp:

                      Coalesce gets stuck on one SR (an NFS share) after running a Continuous Replication job. When I use an internal SSD as the target, this does not happen.

                      Jul 26 22:32:19 npb7 SMGC: [2253452] Another GC instance already active, exiting
                      Jul 26 22:32:19 npb7 SMGC: [2253452] In cleanup
                      Jul 26 22:32:19 npb7 SMGC: [2253452] SR 0d9e ('TBS-h574TX') (0 VDIs in 0 VHD trees): no changes
                      Jul 26 22:32:49 npb7 SMGC: [2253692] === SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110: gc ===
                      Jul 26 22:32:49 npb7 SMGC: [2253722] Will finish as PID [2253723]
                      Jul 26 22:32:49 npb7 SMGC: [2253692] New PID [2253722]
                      Jul 26 22:32:49 npb7 SMGC: [2253723] Found 0 cache files
                      Jul 26 22:32:49 npb7 SMGC: [2253723] Another GC instance already active, exiting
                      Jul 26 22:32:49 npb7 SMGC: [2253723] In cleanup
                      Jul 26 22:32:49 npb7 SMGC: [2253723] SR 0d9e ('TBS-h574TX') (0 VDIs in 0 VHD trees): no changes
                      Jul 26 22:33:19 npb7 SMGC: [2253917] === SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110: gc ===
                      Jul 26 22:33:19 npb7 SMGC: [2253947] Will finish as PID [2253948]
                      Jul 26 22:33:19 npb7 SMGC: [2253917] New PID [2253947]
                      Jul 26 22:33:19 npb7 SMGC: [2253948] Found 0 cache files
                      Jul 26 22:33:19 npb7 SMGC: [2253948] Another GC instance already active, exiting
                      Jul 26 22:33:19 npb7 SMGC: [2253948] In cleanup
                      Jul 26 22:33:19 npb7 SMGC: [2253948] SR 0d9e ('TBS-h574TX') (0 VDIs in 0 VHD trees): no changes
                      Jul 26 22:33:49 npb7 SMGC: [2254139] === SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110: gc ===
                      Jul 26 22:33:49 npb7 SMGC: [2254172] Will finish as PID [2254173]
                      Jul 26 22:33:49 npb7 SMGC: [2254139] New PID [2254172]
                      Jul 26 22:33:49 npb7 SMGC: [2254173] Found 0 cache files
                      Jul 26 22:33:49 npb7 SMGC: [2254173] Another GC instance already active, exiting
                      Jul 26 22:33:49 npb7 SMGC: [2254173] In cleanup
                      Jul 26 22:33:49 npb7 SMGC: [2254173] SR 0d9e ('TBS-h574TX') (0 VDIs in 0 VHD trees): no changes
                      Jul 26 22:33:49 npb7 SMGC: [2251795]   Child process completed successfully
                      Jul 26 22:33:49 npb7 SMGC: [2251795] GC active, quiet period ended
                      Jul 26 22:33:50 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got on-boot for 82664934(60.000G/2.914G?): 'persist'
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got allow_caching for 82664934(60.000G/2.914G?): False
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got other-config for 82664934(60.000G/2.914G?): {'xo:backup:job': '4c084697-6efd-4e35-a4ff-74ae50824c8b', 'xo:backup:datetime': '20240726T20:00:30Z', 'xo:backup:schedule': 'b1cef1e3-e313-409b-ad40-017076f115ce', 'xo:backup:vm': 'd6a5d420-72e6-5c87-a3af-b5eb5c4a44dd', 'xo:backup:sr': '0d9ee24c-ea59-e0e6-8c04-a9a65c22f110', 'content_id': '4be773b3-10dc-9a12-1f82-f575f5f6555b', 'xo:copy_of': 'bb4d2fa2-6241-4856-b196-39f16894f5ef'}
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Removed vhd-blocks from 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Set vhd-blocks = eJztV72O00AQHjsRWMJkU15xOlknRJ0yBdIZiQegpMwbQIUol7sGURAegIJH4AmQQRQUFHmEK+m4coWQlh2vZz1ezzo5paDhK+zZmdlvxrM/k9zk9RwY8gKfSreDGRgALzYPoPzm3tbiOGMzSkAvqEAPiAL8fNlG1hjFYFS2ARLOWTbjw/clnGL6NZqW3PL7BJTneRrHzxPJdVBCXOP1huu+M/8V8S9ycz3z39/yKGM4pUB9KzyE7TtXr1GKS8nZoSHhflEacw5faH17l3IwYeNfOVQNiHjcfXbxmSn7uhQR37kv9ifIdvh+k8jzUPw4ixSb2MOIImEHL+E10PePq5a9COKdyET7rmXNNLO4xX7mV8T4h7FkdmqlvQ2fpp9WD8hT5yUGJZGdUDJK+w02Xwwcw+4fQgs6CZj3dquloyA5935NmhCNIz4TjOlpHA1OeYWWLZmwdu0N4i6rOXcEOrMdy/qM1f9Y9OfaCT+DWntdiBMtAmZVTSZRMZn2p1wfaX0FL630be4fu8eF3T2DtJSu+qyS2/nSv5Zf9+Yx/GYlpFUJspL2LNN86CowZutRCoXNojFSXriTR661wBMOICsZCW8n4k9j8kiqj1N+Jkt/94JalYjDttpEWVXHknQIjsHDWOMSqlu5EH1XQTLR+nQs47Rv4Em+p/2PYDjPSnbBprfCG3gBj2CNqjW3F9QUdSJGSo8NbiOpxW2g9GmSh8Vh1Wz8C9vZiNGklj3qjGFiy7s0hhqeDVd0uBC84m6YG62FavPw+7BORJF1glVCyk0f/cvs4Ph0k+2wYLQZu/sN2zlK1BXt4PcSL+hePO9c73VUFJ/WtJqefh0rlHCxckwa26j4X2Laq8fs6o+Qw/FIxscmab1of7nCd8C8L+wYSZ7/mIT+d3Ht1i3nX50i2oE= for 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Set vhd-blocks = eJz73/7//H+ag8O4pRhobzteMND2N4zaP2o/0eD3ANtPdfBngPPfjxFu/4eBLv9GwcgGDwbW+oFO/w2j9o/aPwpGMGhgGEgwUPbT2/+4Qh8Aq3GH/w== for *d3a809af(60.000G/54.305G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Num combined blocks = 27750
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Coalesced size = 54.305G
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Leaf-coalesce candidate: 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got sm-config for *d3a809af(60.000G/54.305G?): {'vhd-blocks': 'eJz73/7//H+ag8O4pRhobzteMND2N4zaP2o/0eD3ANtPdfBngPPfjxFu/4eBLv9GwcgGDwbW+oFO/w2j9o/aPwpGMGhgGEgwUPbT2/+4Qh8Aq3GH/w=='}
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got on-boot for 82664934(60.000G/2.914G?): 'persist'
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got allow_caching for 82664934(60.000G/2.914G?): False
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got other-config for 82664934(60.000G/2.914G?): {'xo:backup:job': '4c084697-6efd-4e35-a4ff-74ae50824c8b', 'xo:backup:datetime': '20240726T20:00:30Z', 'xo:backup:schedule': 'b1cef1e3-e313-409b-ad40-017076f115ce', 'xo:backup:vm': 'd6a5d420-72e6-5c87-a3af-b5eb5c4a44dd', 'xo:backup:sr': '0d9ee24c-ea59-e0e6-8c04-a9a65c22f110', 'content_id': '4be773b3-10dc-9a12-1f82-f575f5f6555b', 'xo:copy_of': 'bb4d2fa2-6241-4856-b196-39f16894f5ef'}
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Removed vhd-blocks from 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Set vhd-blocks = eJztV72O00AQHjsRWMJkU15xOlknRJ0yBdIZiQegpMwbQIUol7sGURAegIJH4AmQQRQUFHmEK+m4coWQlh2vZz1ezzo5paDhK+zZmdlvxrM/k9zk9RwY8gKfSreDGRgALzYPoPzm3tbiOGMzSkAvqEAPiAL8fNlG1hjFYFS2ARLOWTbjw/clnGL6NZqW3PL7BJTneRrHzxPJdVBCXOP1huu+M/8V8S9ycz3z39/yKGM4pUB9KzyE7TtXr1GKS8nZoSHhflEacw5faH17l3IwYeNfOVQNiHjcfXbxmSn7uhQR37kv9ifIdvh+k8jzUPw4ixSb2MOIImEHL+E10PePq5a9COKdyET7rmXNNLO4xX7mV8T4h7FkdmqlvQ2fpp9WD8hT5yUGJZGdUDJK+w02Xwwcw+4fQgs6CZj3dquloyA5935NmhCNIz4TjOlpHA1OeYWWLZmwdu0N4i6rOXcEOrMdy/qM1f9Y9OfaCT+DWntdiBMtAmZVTSZRMZn2p1wfaX0FL630be4fu8eF3T2DtJSu+qyS2/nSv5Zf9+Yx/GYlpFUJspL2LNN86CowZutRCoXNojFSXriTR661wBMOICsZCW8n4k9j8kiqj1N+Jkt/94JalYjDttpEWVXHknQIjsHDWOMSqlu5EH1XQTLR+nQs47Rv4Em+p/2PYDjPSnbBprfCG3gBj2CNqjW3F9QUdSJGSo8NbiOpxW2g9GmSh8Vh1Wz8C9vZiNGklj3qjGFiy7s0hhqeDVd0uBC84m6YG62FavPw+7BORJF1glVCyk0f/cvs4Ph0k+2wYLQZu/sN2zlK1BXt4PcSL+hePO9c73VUFJ/WtJqefh0rlHCxckwa26j4X2Laq8fs6o+Qw/FIxscmab1of7nCd8C8L+wYSZ7/mIT+d3Ht1i3nX50i2oE= for 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Set vhd-blocks = eJz73/7//H+ag8O4pRhobzteMND2N4zaP2o/0eD3ANtPdfBngPPfjxFu/4eBLv9GwcgGDwbW+oFO/w2j9o/aPwpGMGhgGEgwUPbT2/+4Qh8Aq3GH/w== for *d3a809af(60.000G/54.305G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Num combined blocks = 27750
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Coalesced size = 54.305G
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Leaf-coalesce candidate: 82664934(60.000G/2.914G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Leaf-coalescing 82664934(60.000G/2.914G?) -> *d3a809af(60.000G/54.305G?)
                      Jul 26 22:33:50 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:33:50 npb7 SMGC: [2251795] Got other-config for 82664934(60.000G/2.914G?): {'xo:backup:job': '4c084697-6efd-4e35-a4ff-74ae50824c8b', 'xo:backup:datetime': '20240726T20:00:30Z', 'xo:backup:schedule': 'b1cef1e3-e313-409b-ad40-017076f115ce', 'xo:backup:vm': 'd6a5d420-72e6-5c87-a3af-b5eb5c4a44dd', 'xo:backup:sr': '0d9ee24c-ea59-e0e6-8c04-a9a65c22f110', 'content_id': '4be773b3-10dc-9a12-1f82-f575f5f6555b', 'xo:copy_of': 'bb4d2fa2-6241-4856-b196-39f16894f5ef'}
                      Jul 26 22:33:50 npb7 SMGC: [2251795]   Running VHD coalesce on 82664934(60.000G/2.914G?)
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          ***********************
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          *  E X C E P T I O N  *
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          ***********************
                      Jul 26 22:34:02 npb7 SMGC: [2251795] _doCoalesceLeaf: EXCEPTION <class 'util.SMException'>, Timed out
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2450, in _liveLeafCoalesce
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     self._doCoalesceLeaf(vdi)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2484, in _doCoalesceLeaf
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     vdi._coalesceVHD(timeout)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 934, in _coalesceVHD
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     self.sr.uuid, abortTest, VDI.POLL_INTERVAL, timeOut)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 189, in runAbortable
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     raise util.SMException("Timed out")
                      Jul 26 22:34:02 npb7 SMGC: [2251795]
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *** UNDO LEAF-COALESCE
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *** leaf-coalesce undo successful
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Got sm-config for 82664934(60.000G/2.914G?): {'paused': 'true', 'vhd-blocks': 'eJztV72O00AQHjsRWMJkU15xOlknRJ0yBdIZiQegpMwbQIUol7sGURAegIJH4AmQQRQUFHmEK+m4coWQlh2vZz1ezzo5paDhK+zZmdlvxrM/k9zk9RwY8gKfSreDGRgALzYPoPzm3tbiOGMzSkAvqEAPiAL8fNlG1hjFYFS2ARLOWTbjw/clnGL6NZqW3PL7BJTneRrHzxPJdVBCXOP1huu+M/8V8S9ycz3z39/yKGM4pUB9KzyE7TtXr1GKS8nZoSHhflEacw5faH17l3IwYeNfOVQNiHjcfXbxmSn7uhQR37kv9ifIdvh+k8jzUPw4ixSb2MOIImEHL+E10PePq5a9COKdyET7rmXNNLO4xX7mV8T4h7FkdmqlvQ2fpp9WD8hT5yUGJZGdUDJK+w02Xwwcw+4fQgs6CZj3dquloyA5935NmhCNIz4TjOlpHA1OeYWWLZmwdu0N4i6rOXcEOrMdy/qM1f9Y9OfaCT+DWntdiBMtAmZVTSZRMZn2p1wfaX0FL630be4fu8eF3T2DtJSu+qyS2/nSv5Zf9+Yx/GYlpFUJspL2LNN86CowZutRCoXNojFSXriTR661wBMOICsZCW8n4k9j8kiqj1N+Jkt/94JalYjDttpEWVXHknQIjsHDWOMSqlu5EH1XQTLR+nQs47Rv4Em+p/2PYDjPSnbBprfCG3gBj2CNqjW3F9QUdSJGSo8NbiOpxW2g9GmSh8Vh1Wz8C9vZiNGklj3qjGFiy7s0hhqeDVd0uBC84m6YG62FavPw+7BORJF1glVCyk0f/cvs4Ph0k+2wYLQZu/sN2zlK1BXt4PcSL+hePO9c73VUFJ/WtJqefh0rlHCxckwa26j4X2Laq8fs6o+Qw/FIxscmab1of7nCd8C8L+wYSZ7/mIT+d3Ht1i3nX50i2oE=', 'vhd-parent': 'd3a809af-ea4d-438b-a7e3-bc6a125bd35e'}
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Unpausing VDI 82664934(60.000G/2.914G?)
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Got sm-config for 82664934(60.000G/2.914G?): {'vhd-blocks': 'eJztV72O00AQHjsRWMJkU15xOlknRJ0yBdIZiQegpMwbQIUol7sGURAegIJH4AmQQRQUFHmEK+m4coWQlh2vZz1ezzo5paDhK+zZmdlvxrM/k9zk9RwY8gKfSreDGRgALzYPoPzm3tbiOGMzSkAvqEAPiAL8fNlG1hjFYFS2ARLOWTbjw/clnGL6NZqW3PL7BJTneRrHzxPJdVBCXOP1huu+M/8V8S9ycz3z39/yKGM4pUB9KzyE7TtXr1GKS8nZoSHhflEacw5faH17l3IwYeNfOVQNiHjcfXbxmSn7uhQR37kv9ifIdvh+k8jzUPw4ixSb2MOIImEHL+E10PePq5a9COKdyET7rmXNNLO4xX7mV8T4h7FkdmqlvQ2fpp9WD8hT5yUGJZGdUDJK+w02Xwwcw+4fQgs6CZj3dquloyA5935NmhCNIz4TjOlpHA1OeYWWLZmwdu0N4i6rOXcEOrMdy/qM1f9Y9OfaCT+DWntdiBMtAmZVTSZRMZn2p1wfaX0FL630be4fu8eF3T2DtJSu+qyS2/nSv5Zf9+Yx/GYlpFUJspL2LNN86CowZutRCoXNojFSXriTR661wBMOICsZCW8n4k9j8kiqj1N+Jkt/94JalYjDttpEWVXHknQIjsHDWOMSqlu5EH1XQTLR+nQs47Rv4Em+p/2PYDjPSnbBprfCG3gBj2CNqjW3F9QUdSJGSo8NbiOpxW2g9GmSh8Vh1Wz8C9vZiNGklj3qjGFiy7s0hhqeDVd0uBC84m6YG62FavPw+7BORJF1glVCyk0f/cvs4Ph0k+2wYLQZu/sN2zlK1BXt4PcSL+hePO9c73VUFJ/WtJqefh0rlHCxckwa26j4X2Laq8fs6o+Qw/FIxscmab1of7nCd8C8L+wYSZ7/mIT+d3Ht1i3nX50i2oE=', 'vhd-parent': 'd3a809af-ea4d-438b-a7e3-bc6a125bd35e'}
                      Jul 26 22:34:02 npb7 SMGC: [2251795] In cleanup
                      Jul 26 22:34:02 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Removed leaf-coalesce from 82664934(60.000G/2.914G?)
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          ***********************
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          *  E X C E P T I O N  *
                      Jul 26 22:34:02 npb7 SMGC: [2251795]          ***********************
                      Jul 26 22:34:02 npb7 SMGC: [2251795] leaf-coalesce: EXCEPTION <class 'util.SMException'>, Timed out
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2046, in coalesceLeaf
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     self._coalesceLeaf(vdi)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2328, in _coalesceLeaf
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     return self._liveLeafCoalesce(vdi)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2450, in _liveLeafCoalesce
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     self._doCoalesceLeaf(vdi)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 2484, in _doCoalesceLeaf
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     vdi._coalesceVHD(timeout)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 934, in _coalesceVHD
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     self.sr.uuid, abortTest, VDI.POLL_INTERVAL, timeOut)
                      Jul 26 22:34:02 npb7 SMGC: [2251795]   File "/opt/xensource/sm/cleanup.py", line 189, in runAbortable
                      Jul 26 22:34:02 npb7 SMGC: [2251795]     raise util.SMException("Timed out")
                      Jul 26 22:34:02 npb7 SMGC: [2251795]
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Leaf-coalesce failed on 82664934(60.000G/2.914G?), skipping
                      Jul 26 22:34:02 npb7 SMGC: [2251795] In cleanup
                      Jul 26 22:34:02 npb7 SMGC: [2251795] Starting asynch srUpdate for SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110
                      Jul 26 22:34:03 npb7 SMGC: [2251795] SR.update_asynch status changed to [success]
                      Jul 26 22:34:03 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:34:03 npb7 SMGC: [2251795] Got sm-config for *d3a809af(60.000G/54.305G?): {'vhd-blocks': 'eJz73/7//H+ag8O4pRhobzteMND2N4zaP2o/0eD3ANtPdfBngPPfjxFu/4eBLv9GwcgGDwbW+oFO/w2j9o/aPwpGMGhgGEgwUPbT2/+4Qh8Aq3GH/w=='}
                      Jul 26 22:34:03 npb7 SMGC: [2251795] No work, exiting
                      Jul 26 22:34:03 npb7 SMGC: [2251795] GC process exiting, no work left
                      Jul 26 22:34:03 npb7 SMGC: [2251795] In cleanup
                      Jul 26 22:34:03 npb7 SMGC: [2251795] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees): no changes
                      Jul 26 22:34:19 npb7 SMGC: [2254446] === SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110: gc ===
                      Jul 26 22:34:19 npb7 SMGC: [2254476] Will finish as PID [2254477]
                      Jul 26 22:34:19 npb7 SMGC: [2254446] New PID [2254476]
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Found 0 cache files
                      Jul 26 22:34:19 npb7 SMGC: [2254477] SR 0d9e ('TBS-h574TX') (6 VDIs in 5 VHD trees):
                      Jul 26 22:34:19 npb7 SMGC: [2254477]         3a763dec(48.002G/29.521G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]         561867f5(20.000G/19.309G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]         ab0314a9(16.000G/7.636G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]         d54d9ec4(24.002G/15.706G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]         *d3a809af(60.000G/54.305G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]             82664934(60.000G/2.914G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477]
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Got on-boot for 82664934(60.000G/2.914G?): 'persist'
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Got allow_caching for 82664934(60.000G/2.914G?): False
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Got other-config for 82664934(60.000G/2.914G?): {'xo:backup:job': '4c084697-6efd-4e35-a4ff-74ae50824c8b', 'xo:backup:datetime': '20240726T20:00:30Z', 'xo:backup:schedule': 'b1cef1e3-e313-409b-ad40-017076f115ce', 'xo:backup:vm': 'd6a5d420-72e6-5c87-a3af-b5eb5c4a44dd', 'xo:backup:sr': '0d9ee24c-ea59-e0e6-8c04-a9a65c22f110', 'content_id': '4be773b3-10dc-9a12-1f82-f575f5f6555b', 'xo:copy_of': 'bb4d2fa2-6241-4856-b196-39f16894f5ef'}
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Removed vhd-blocks from 82664934(60.000G/2.914G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Set vhd-blocks = eJztV72O00AQHjsRWMJkU15xOlknRJ0yBdIZiQegpMwbQIUol7sGURAegIJH4AmQQRQUFHmEK+m4coWQlh2vZz1ezzo5paDhK+zZmdlvxrM/k9zk9RwY8gKfSreDGRgALzYPoPzm3tbiOGMzSkAvqEAPiAL8fNlG1hjFYFS2ARLOWTbjw/clnGL6NZqW3PL7BJTneRrHzxPJdVBCXOP1huu+M/8V8S9ycz3z39/yKGM4pUB9KzyE7TtXr1GKS8nZoSHhflEacw5faH17l3IwYeNfOVQNiHjcfXbxmSn7uhQR37kv9ifIdvh+k8jzUPw4ixSb2MOIImEHL+E10PePq5a9COKdyET7rmXNNLO4xX7mV8T4h7FkdmqlvQ2fpp9WD8hT5yUGJZGdUDJK+w02Xwwcw+4fQgs6CZj3dquloyA5935NmhCNIz4TjOlpHA1OeYWWLZmwdu0N4i6rOXcEOrMdy/qM1f9Y9OfaCT+DWntdiBMtAmZVTSZRMZn2p1wfaX0FL630be4fu8eF3T2DtJSu+qyS2/nSv5Zf9+Yx/GYlpFUJspL2LNN86CowZutRCoXNojFSXriTR661wBMOICsZCW8n4k9j8kiqj1N+Jkt/94JalYjDttpEWVXHknQIjsHDWOMSqlu5EH1XQTLR+nQs47Rv4Em+p/2PYDjPSnbBprfCG3gBj2CNqjW3F9QUdSJGSo8NbiOpxW2g9GmSh8Vh1Wz8C9vZiNGklj3qjGFiy7s0hhqeDVd0uBC84m6YG62FavPw+7BORJF1glVCyk0f/cvs4Ph0k+2wYLQZu/sN2zlK1BXt4PcSL+hePO9c73VUFJ/WtJqefh0rlHCxckwa26j4X2Laq8fs6o+Qw/FIxscmab1of7nCd8C8L+wYSZ7/mIT+d3Ht1i3nX50i2oE= for 82664934(60.000G/2.914G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Set vhd-blocks = eJz73/7//H+ag8O4pRhobzteMND2N4zaP2o/0eD3ANtPdfBngPPfjxFu/4eBLv9GwcgGDwbW+oFO/w2j9o/aPwpGMGhgGEgwUPbT2/+4Qh8Aq3GH/w== for *d3a809af(60.000G/54.305G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Num combined blocks = 27750
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Coalesced size = 54.305G
                      Jul 26 22:34:19 npb7 SMGC: [2254477] Leaf-coalesce candidate: 82664934(60.000G/2.914G?)
                      Jul 26 22:34:19 npb7 SMGC: [2254477] GC active, about to go quiet
                      Jul 26 22:34:49 npb7 SMGC: [2254681] === SR 0d9ee24c-ea59-e0e6-8c04-a9a65c22f110: gc ===
                      Jul 26 22:34:49 npb7 SMGC: [2254715] Will finish as PID [2254716]
                      Jul 26 22:34:49 npb7 SMGC: [2254681] New PID [2254715]
                      Jul 26 22:34:49 npb7 SMGC: [2254716] Found 0 cache files
                      Jul 26 22:34:49 npb7 SMGC: [2254716] Another GC instance already active, exiting
                      Jul 26 22:34:49 npb7 SMGC: [2254716] In cleanup
                      Jul 26 22:34:49 npb7 SMGC: [2254716] SR 0d9e ('TBS-h574TX') (0 VDIs in 0 VHD trees): no changes
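
                      The failure signature in the log above (an SMException "Timed out" from the garbage collector, followed by an UNDO LEAF-COALESCE) is easy to spot programmatically. A minimal sketch, assuming you feed it an excerpt of /var/log/SMlog; the helper name `coalesce_timeouts` is hypothetical, not an XCP-ng tool:

                      ```python
                      import re

                      def coalesce_timeouts(log_text):
                          """Return the SMGC PIDs that hit a leaf-coalesce timeout."""
                          pids = set()
                          for line in log_text.splitlines():
                              # Match lines like:
                              # ... SMGC: [2251795] _doCoalesceLeaf: EXCEPTION <class 'util.SMException'>, Timed out
                              m = re.search(r"SMGC: \[(\d+)\].*SMException.*Timed out", line)
                              if m:
                                  pids.add(m.group(1))
                          return sorted(pids)

                      sample = """\
                      Jul 26 22:34:02 npb7 SMGC: [2251795] _doCoalesceLeaf: EXCEPTION <class 'util.SMException'>, Timed out
                      Jul 26 22:34:02 npb7 SMGC: [2251795] *** UNDO LEAF-COALESCE
                      """
                      print(coalesce_timeouts(sample))  # ['2251795']
                      ```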
                      
                      
    • olivierlambert (Vates 🪐 Co-Founder, CEO):

                        Because you are writing faster in the VM than the garbage collector can merge/coalesce.

                        You could try to change the leaf coalesce timeout value to see if it's better.

    • rtjdamen @olivierlambert:

                          @manilx We use the values below in /opt/xensource/sm/cleanup.py:

                          LIVE_LEAF_COALESCE_MAX_SIZE = 1024 * 1024 * 1024 # bytes
                          LIVE_LEAF_COALESCE_TIMEOUT = 300 # seconds

                          Since then we have not seen issues like these. Leaf coalesce is different from normal coalesce, where a snapshot is left behind. I think your problem is:

                          
                          Problem 2: coalesce failing due to "Timed out". Example:

                          Nov 16 23:25:14 vm6 SMGC: [15312] raise util.SMException("Timed out")
                          Nov 16 23:25:14 vm6 SMGC: [15312]
                          Nov 16 23:25:14 vm6 SMGC: [15312] *
                          Nov 16 23:25:14 vm6 SMGC: [15312] *** UNDO LEAF-COALESCE

                          This happens when the VDI is currently under significant IO stress. If possible, take the VM offline and do an offline coalesce, or do the coalesce when the VM has less load. Upcoming versions of XenServer will address this issue more efficiently.
                          
                          

                          https://support.citrix.com/s/article/CTX201296-understanding-garbage-collection-and-coalesce-process-troubleshooting?language=en_US
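
                          Since this edit has to be redone whenever the host reinstalls the stock cleanup.py, a scripted patch is handy. A hedged sketch, using the two values from this post; the `patch_cleanup` helper is illustrative and the demo runs on an inline sample, not the real /opt/xensource/sm/cleanup.py:

                          ```python
                          import re

                          # Target values from this post (re-apply after every host upgrade)
                          NEW_VALUES = {
                              "LIVE_LEAF_COALESCE_MAX_SIZE": "1024 * 1024 * 1024  # bytes",
                              "LIVE_LEAF_COALESCE_TIMEOUT": "300  # seconds",
                          }

                          def patch_cleanup(text):
                              """Rewrite the two module-level constants, leaving the rest untouched."""
                              for name, value in NEW_VALUES.items():
                                  text = re.sub(rf"^{name} = .*$", f"{name} = {value}", text, flags=re.M)
                              return text

                          # Placeholder stand-in for the stock file's contents:
                          sample = ("LIVE_LEAF_COALESCE_MAX_SIZE = 20 * 1024 * 1024\n"
                                    "LIVE_LEAF_COALESCE_TIMEOUT = 10\n")
                          print(patch_cleanup(sample))
                          ```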

    • Delgado:

                            I have been seeing this error recently: "VDI must be free or attached to exactly one VM", with quite a few snapshots attached to the control domain when I look at the health dashboard. I am not sure if this is related to CBT backups, but wanted to ask. It seems to only be happening on my delta backups that have CBT enabled.

    • manilx @olivierlambert:

                              @olivierlambert Actually, I don't think this is the case. The VMs are completely idle, doing nothing.

    • manilx @rtjdamen:

                                @rtjdamen The VMs are idle (there is no doubt here).
                                Will try these values and see how it goes.

    • rtjdamen @manilx:

                                  @manilx It did wonders in our setup; hope it helps you as well.

    • manilx @rtjdamen:

                                    @rtjdamen Thanks!
                                    I changed /opt/xensource/sm/cleanup.py as per your settings and rebooted (perhaps not needed).

                                    Looks good. Coalesce finished! I wonder why it failed before, as there was no load on the hosts/VMs/NAS share....

                                    Will monitor.

    • manilx @manilx:

                                      @manilx P.S.: These mods do not survive a host update, right?

    • rtjdamen @manilx:

                                        @manilx Nope, but I have talked with a dev about it and they are looking to make it a setting somewhere; I don't know the status of that. Good to see this works for you!

    • Andrew (Top contributor) @manilx:

                                          @manilx I have not tested that, but I would say that's correct. Upgrades are rather destructive for custom changes to system scripts and custom settings. This is to ensure that scripts and settings are set to standard known good values on install or upgrade.

                                          I keep notes on my custom settings/scripts/configs so I can check them after an upgrade or a new install.
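
                                          One way to sketch that note-keeping, assuming a simple "expected lines per file" approach (the NOTES contents and the `missing_customizations` helper are illustrative, not an official XCP-ng tool):

                                          ```python
                                          # File -> customized lines that must still be present after an upgrade
                                          NOTES = {
                                              "/opt/xensource/sm/cleanup.py": [
                                                  "LIVE_LEAF_COALESCE_MAX_SIZE = 1024 * 1024 * 1024",
                                                  "LIVE_LEAF_COALESCE_TIMEOUT = 300",
                                              ],
                                          }

                                          def missing_customizations(read_file):
                                              """Return (file, line) pairs that an upgrade has reverted.

                                              read_file is injected so the demo below needs no real host files.
                                              """
                                              gone = []
                                              for path, lines in NOTES.items():
                                                  content = read_file(path)
                                                  for line in lines:
                                                      if line not in content:
                                                          gone.append((path, line))
                                              return gone

                                          # Demo reader simulating a post-upgrade file reverted to stock values:
                                          reverted = lambda path: "LIVE_LEAF_COALESCE_MAX_SIZE = 20 * 1024 * 1024\n"
                                          for path, line in missing_customizations(reverted):
                                              print(f"reverted in {path}: {line}")
                                          ```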

    • manilx @Andrew:

                                             CR failed on all VMs with:
                                            ScreenShot 2024-07-28 at 10.23.29.png

                                            Next one was ok.

                                            This happens sometimes, it's not consistent.

                                            {
                                              "data": {
                                                "mode": "delta",
                                                "reportWhen": "failure"
                                              },
                                              "id": "1722153616340",
                                              "jobId": "4c084697-6efd-4e35-a4ff-74ae50824c8b",
                                              "jobName": "CR",
                                              "message": "backup",
                                              "scheduleId": "b1cef1e3-e313-409b-ad40-017076f115ce",
                                              "start": 1722153616340,
                                              "status": "failure",
                                              "infos": [
                                                {
                                                  "data": {
                                                    "vms": [
                                                      "52e64134-62e3-9682-4e3f-296a1198db4d",
                                                      "43a4d905-7d13-85b8-bed3-f6b805ff26ac",
                                                      "b5d74e0b-388c-019a-6994-e174c9ca7a51",
                                                      "d6a5d420-72e6-5c87-a3af-b5eb5c4a44dd",
                                                      "131ee7f6-4d58-31d9-39a8-53727cc3dc68"
                                                    ]
                                                  },
                                                  "message": "vms"
                                                }
                                              ],
                                              "tasks": [
                                                {
                                                  "data": {
                                                    "type": "VM",
                                                    "id": "52e64134-62e3-9682-4e3f-296a1198db4d",
                                                    "name_label": "XO"
                                                  },
                                                  "id": "1722153619552",
                                                  "message": "backup VM",
                                                  "start": 1722153619552,
                                                  "status": "failure",
                                                  "tasks": [
                                                    {
                                                      "id": "1722153619599",
                                                      "message": "snapshot",
                                                      "start": 1722153619599,
                                                      "status": "success",
                                                      "end": 1722153622486,
                                                      "result": "6b6036ae-708e-4cb0-2681-12165ba19919"
                                                    },
                                                    {
                                                      "data": {
                                                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                                                        "isFull": false,
                                                        "name_label": "TBS-h574TX",
                                                        "type": "SR"
                                                      },
                                                      "id": "1722153622486:0",
                                                      "message": "export",
                                                      "start": 1722153622486,
                                                      "status": "interrupted"
                                                    }
                                                  ],
                                                  "infos": [
                                                    {
                                                      "message": "will delete snapshot data"
                                                    },
                                                    {
                                                      "data": {
                                                        "vdiRef": "OpaqueRef:f35bea93-45b3-f4bd-2752-3853850ff73a"
                                                      },
                                                      "message": "Snapshot data has been deleted"
                                                    }
                                                  ],
                                                  "end": 1722153637891,
                                                  "result": {
                                                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                                                    "name": "Error",
                                                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                                                  }
                                                },
                                                {
                                                  "data": {
                                                    "type": "VM",
                                                    "id": "43a4d905-7d13-85b8-bed3-f6b805ff26ac",
                                                    "name_label": "Bitwarden"
                                                  },
                                                  "id": "1722153619556",
                                                  "message": "backup VM",
                                                  "start": 1722153619556,
                                                  "status": "failure",
                                                  "tasks": [
                                                    {
                                                      "id": "1722153619603",
                                                      "message": "snapshot",
                                                      "start": 1722153619603,
                                                      "status": "success",
                                                      "end": 1722153624616,
                                                      "result": "58f1ac5b-7de0-8276-3872-b2a7d5a26ec2"
                                                    },
                                                    {
                                                      "data": {
                                                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                                                        "isFull": false,
                                                        "name_label": "TBS-h574TX",
                                                        "type": "SR"
                                                      },
                                                      "id": "1722153624616:0",
                                                      "message": "export",
                                                      "start": 1722153624616,
                                                      "status": "interrupted"
                                                    }
                                                  ],
                                                  "infos": [
                                                    {
                                                      "message": "will delete snapshot data"
                                                    },
                                                    {
                                                      "data": {
                                                        "vdiRef": "OpaqueRef:81a61f30-99a0-25bc-35ec-25cadb323a09"
                                                      },
                                                      "message": "Snapshot data has been deleted"
                                                    }
                                                  ],
                                                  "end": 1722153655152,
                                                  "result": {
                                                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                                                    "name": "Error",
                                                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                                                  }
                                                },
                                                {
                                                  "data": {
                                                    "type": "VM",
                                                    "id": "b5d74e0b-388c-019a-6994-e174c9ca7a51",
                                                    "name_label": "Docker Server"
                                                  },
                                                  "id": "1722153637896",
                                                  "message": "backup VM",
                                                  "start": 1722153637896,
                                                  "status": "failure",
                                                  "tasks": [
                                                    {
                                                      "id": "1722153637925",
                                                      "message": "snapshot",
                                                      "start": 1722153637925,
                                                      "status": "success",
                                                      "end": 1722153639557,
                                                      "result": "d820e4ad-462f-7043-4f1f-ee21ed986e8d"
                                                    },
                                                    {
                                                      "data": {
                                                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                                                        "isFull": false,
                                                        "name_label": "TBS-h574TX",
                                                        "type": "SR"
                                                      },
                                                      "id": "1722153639558",
                                                      "message": "export",
                                                      "start": 1722153639558,
                                                      "status": "interrupted"
                                                    }
                                                  ],
                                                  "infos": [
                                                    {
                                                      "message": "will delete snapshot data"
                                                    },
                                                    {
                                                      "data": {
                                                        "vdiRef": "OpaqueRef:5f21b5e1-8423-3bbc-7361-6319bb25e97d"
                                                      },
                                                      "message": "Snapshot data has been deleted"
                                                    }
                                                  ],
                                                  "end": 1722153675901,
                                                  "result": {
                                                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                                                    "name": "Error",
                                                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                                                  }
                                                },
                                                {
                                                  "data": {
                                                    "type": "VM",
                                                    "id": "d6a5d420-72e6-5c87-a3af-b5eb5c4a44dd",
                                                    "name_label": "Media Server"
                                                  },
                                                  "id": "1722153655156",
                                                  "message": "backup VM",
                                                  "start": 1722153655156,
                                                  "status": "failure",
                                                  "tasks": [
                                                    {
                                                      "id": "1722153655188",
                                                      "message": "snapshot",
                                                      "start": 1722153655188,
                                                      "status": "success",
                                                      "end": 1722153656817,
                                                      "result": "6c572cec-ed70-d994-a88b-bc6066c06b0b"
                                                    },
                                                    {
                                                      "data": {
                                                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                                                        "isFull": false,
                                                        "name_label": "TBS-h574TX",
                                                        "type": "SR"
                                                      },
                                                      "id": "1722153656818",
                                                      "message": "export",
                                                      "start": 1722153656818,
                                                      "status": "interrupted"
                                                    }
                                                  ],
                                                  "infos": [
                                                    {
                                                      "message": "will delete snapshot data"
                                                    },
                                                    {
                                                      "data": {
                                                        "vdiRef": "OpaqueRef:c755b6ed-5d00-397c-62a8-db643c3fbdcd"
                                                      },
                                                      "message": "Snapshot data has been deleted"
                                                    }
                                                  ],
                                                  "end": 1722153660309,
                                                  "result": {
                                                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                                                    "name": "Error",
                                                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                                                  }
                                                },
                                                {
                                                  "data": {
                                                    "type": "VM",
                                                    "id": "131ee7f6-4d58-31d9-39a8-53727cc3dc68",
                                                    "name_label": "Unifi"
                                                  },
                                                  "id": "1722153660312",
                                                  "message": "backup VM",
                                                  "start": 1722153660312,
                                                  "status": "failure",
                                                  "tasks": [
                                                    {
                                                      "id": "1722153660341",
                                                      "message": "snapshot",
                                                      "start": 1722153660341,
                                                      "status": "success",
                                                      "end": 1722153662203,
                                                      "result": "b0dac528-8914-141b-5d37-8b68bdeb7fe0"
                                                    },
                                                    {
                                                      "data": {
                                                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                                                        "isFull": false,
                                                        "name_label": "TBS-h574TX",
                                                        "type": "SR"
                                                      },
                                                      "id": "1722153662204",
                                                      "message": "export",
                                                      "start": 1722153662204,
                                                      "status": "interrupted"
                                                    }
                                                  ],
                                                  "infos": [
                                                    {
                                                      "message": "will delete snapshot data"
                                                    },
                                                    {
                                                      "data": {
                                                        "vdiRef": "OpaqueRef:595f2f1f-ec64-1d43-b2de-574fcd621576"
                                                      },
                                                      "message": "Snapshot data has been deleted"
                                                    }
                                                  ],
                                                  "end": 1722153669757,
                                                  "result": {
                                                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                                                    "name": "Error",
                                                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                                                  }
                                                }
                                              ],
                                              "end": 1722153675901
                                            }
                                            