@michael.manley The "Join Domain" option in the Users tab.
Keeping it would be very useful.
-
RE: XCP-ng Center 25.04 Released
@crismarcul Yes, I would also like this option. It is essential.
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@gduperrey Is this alternate kernel fix for the fans already released with the November patches for XCP-ng 8.2?
https://xcp-ng.org/blog/2024/11/15/november-2024-security-and-maintenance-update-for-xcp-ng-8-2-lts/
"🔧 Update for alternate kernel: Backport of a fix to correct cooling fan rotation speed on some Lenovo servers."
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@RIX_IT
I've been informed by XenServer trial support that Lenovo has filed ticket LEN-605 for this issue; it's probably in a private issue tracker, though.
They'll let me know once it is fixed.
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@RIX_IT Thank you for the tips.
Just for the record, I downgraded the SR645 v3 to UEFI v1.25 KAE106Y and the fans now spin at 9,000 RPM, with the speed adjusting according to CPU load, so I think this is the correct behaviour.
But once upgraded to UEFI v2.10 KAE112N on the SR645 v3, the fans spin up to 20,000 RPM with an unbearable sound. This also occurs with the latest firmware, v4.20 KAE120J.
I opened a ticket on the XenServer 8 Trial Edition bugs portal and they finally replied with: "This issue is on our radar. We're currently developing a solution. To avoid duplicates, we'll archive this ticket for now."
Not sure when or how it will be fixed, though.
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@rfx77
I opened a support ticket with Lenovo but still no luck, since XenServer 8 is a Vendor Certified OS and not a Lenovo Supported OS:

"Vendor Certified: The server has met the OS Partner Certified/Logo program requirements, however, hardware device driver updates may not be available beyond the inbox OS device drivers. As a result support is limited and customers should strongly consider this when determining which OS version to install. The OS vendor provides OS support with inbox drivers only. Lenovo only provides hardware level support, and will work with hardware partners and OS vendors in a best-effort capacity to close any compatibility issues customers may experience with their installation. Lenovo will work with the OS provider to submit hardware compatibility (HCL) test suite results so that the supported solution can be posted to the respective OS providers HCL site."
The SR645 v3 has different UEFI firmware versions from the SR635 v3; it looks like v1.25 for the SR645 v3 is equivalent to v1.43 for the SR635 v3.
==================================================
Version 1.25, Build ID KAE106Y [Critical]
Release date: [May/ 2023]
Release Ref: Genoa Wave2 EAR602 [AMD 2S V3]
==================================================
Support Systems:
Lenovo ThinkSystem SR645v3 Server, Machine Types 7D9C/7D9D
Lenovo ThinkSystem SR665v3 Server, Machine Types 7D9A/7D9B
This UEFI supports AMD processor: AMD EPYC 9004 Series processor (formerly codenamed "Genoa"), AGESA GenoaPI-SP5_1.0.0.3
1.0 Prerequisites and dependencies
None
2.0 Fixes
Fixed the issue that configuring OperatingModes.ChooseOperatingMode to "Maximum Performance" from OOB did not match the same configuration in UEFI F1 Setup page.
Addressed AMD's recent Genoa processor errata that in rare scenarios can result in #UD, #PF, or other unexpected system behavior.
3.0 Enhancements
None
4.0 Limitations
None

==================================================
Version 1.43, Build ID KAE110O [Critical]
Release date: [May/ 2023]
Release Ref: Genoa Wave2 EAR602 [AMD 1S V3]
==================================================
Support Systems:
Lenovo ThinkSystem SR635v3 Server, Machine Types 7D9G/7D9H
Lenovo ThinkSystem SR655v3 Server, Machine Types 7D9E/7D9F
This UEFI supports AMD processor: AMD EPYC 9004 Series processor (formerly codenamed "Genoa"), AGESA GenoaPI-SP5_1.0.0.4
1.0 Prerequisites and dependencies
None
2.0 Fixes
Fixed the issue that configuring OperatingModes.ChooseOperatingMode to "Maximum Performance" from OOB did not match the same configuration in UEFI F1 Setup page.
Addressed AMD's recent Genoa processor errata that in rare scenarios can result in #UD, #PF, or other unexpected system behavior.
3.0 Enhancements
None
4.0 Limitations
None
What I want to know, though, is where to download the older versions of the UEFI firmware. I can only see the latest version on the Lenovo website.
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@rfx77
That's what I thought too, so I tried booting the XenServer 8 ISO (XenServer8_2024-06-03.iso).
Even booting the ISO installer causes the fans to spin up abnormally fast, up to 20,000 RPM.
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
I just noticed this issue today when I installed XCP-ng 8.2.1 on a ThinkSystem SR645 v3 with UEFI KAE120J-4.20.
The fans are spinning at 16,800 to 20,224 RPM, and the high-pitched sound is unbearably annoying.
I changed the operating mode in the UEFI settings from Maximum Performance to Maximum Efficiency. It made no difference; the fans are still spinning as fast as before.
-
RE: windows 11 Support
@JoyceBabu said in windows 11 Support:
@okynnor I installed Win11 after disabling the TPM, RAM and Secure Boot checks by adding the DWORD32 keys BypassTPMCheck, BypassRAMCheck and BypassSecureBootCheck under HKEY_LOCAL_MACHINE\System\Setup\LabConfig.
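Not from the thread, just for convenience: the three values named above can also be imported as a .reg file before Setup's compatibility check runs (Shift+F10 in the installer opens a command prompt where regedit works). A minimal, untested sketch based only on the key and value names quoted above:

```reg
Windows Registry Editor Version 5.00

; Hypothetical convenience file built from the keys named in the quote above
[HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig]
"BypassTPMCheck"=dword:00000001
"BypassRAMCheck"=dword:00000001
"BypassSecureBootCheck"=dword:00000001
```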
@okynnor Full details here: https://www.bleepingcomputer.com/news/microsoft/how-to-bypass-the-windows-11-tpm-20-requirement/
The Xen drivers from Windows Update didn't install properly for all devices, so I installed Xen Tools from https://downloads.xenserver.com/vm-tools-windows/9.3.1/managementagentx64.msi and it works fine.
-
RE: Full Backup Successful But In Kilobytes
@Darkbeldin I checked the xensource.log on the server and found the error message "failed with exception Server_error(INVALID_DEVICE, [ autodetect ])" during export, similar to the post here: https://xcp-ng.org/forum/topic/4319/invalid_device-autodetect-during-backup
I checked the XO dashboard health and found 256 "Pool Metadata Backup" VDIs attached to Control Domain.
Not sure how or why the "Pool Metadata Backup" VDIs got stuck attached to the Control Domain, but I guess it reached the 256 limit. I clicked "Forget" on all of them and full backups seem to be working now.
For the record, it was nothing to do with snapshots. The VM was off, so I guess it doesn't need to take a snapshot before the transfer, hence the log doesn't mention a snapshot.
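Not something from the thread, but the manual "Forget" clean-up could be scripted for next time. A minimal sketch, assuming it runs on the XCP-ng pool master where the xe CLI is available (vdi-forget only drops the VDI record, but double-check the list before forgetting anything):

```shell
# Sketch: forget all VDIs labelled "Pool Metadata Backup", mirroring the
# manual "Forget" clean-up done in the XO dashboard health view.
forget_pool_metadata_vdis() {
  if ! command -v xe >/dev/null 2>&1; then
    echo "xe not found: run this on an XCP-ng host"
    return 0
  fi
  # --minimal prints a comma-separated list of matching UUIDs.
  for uuid in $(xe vdi-list name-label="Pool Metadata Backup" --minimal | tr ',' ' '); do
    echo "Forgetting VDI $uuid"
    xe vdi-forget uuid="$uuid"
  done
}

forget_pool_metadata_vdis
```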
-
RE: Full Backup Successful But In Kilobytes
Hi @Darkbeldin
I disabled compression; the XVA is now 37 KB.
Normally it should show "snapshot" above "transfer", but I don't see it for some reason.
When trying to restore the backup, I get IMPORT_ERROR_PREMATURE_EOF().
-
Full Backup Successful But In Kilobytes
I'm currently running XO (5.107.1) community from source and previously had scheduled backups running regularly without issues for one of the XCP-ng 8.0.0 servers on XO 5.102.1. I originally wondered if it had something to do with the new version of XO, but I rolled back to 5.98.1 and still have the same issue.
I am now getting successful backups, but only kilobytes in size, for all the VMs on this particular XCP-ng server. Not sure if it has anything to do with it, but I noticed the GUI showing the status of the job changing quickly from Started -> Interrupted -> Successful.
transfer
Start: Dec 6, 2022, 02:18:05 PM
End: Dec 6, 2022, 02:18:42 PM
Duration: a few seconds
Size: 5.62 KiB
Speed: 154.26 B/s
Start: Dec 6, 2022, 02:18:05 PM
End: Dec 6, 2022, 02:18:42 PM
Duration: a few seconds
Start: Dec 6, 2022, 02:18:03 PM
End: Dec 6, 2022, 02:18:42 PM
Duration: a few seconds
Type: full
The backup config is Full backup with Zstd compression and a normal snapshot, to an SMB remote. If I include VMs from the problematic XCP-ng server and another server in the same backup job, it executes correctly and successfully for the other one, while the problematic server's backup is only 5.63 KB.
Nothing else has changed that I'm aware of. I have rebooted the XCP-ng server, the XO server, and the remote storage, and tried specifying a different remote storage point. I also created a new VM backup job, but still the same. Other XCP-ng server backups from XO are working normally.
What else can I check? Does it have something to do with an error during the snapshot? It doesn't look like it created the snapshot before transferring the 5 KB.
Thanks.
-
RE: SyntaxError: Named export 'format' not found. The requested module 'json-rpc-peer' is a CommonJS module
@danp Thanks for the tips...
Ran the command to do a force rebuild and it started up OK:
sudo curl https://raw.githubusercontent.com/Jarli01/xenorchestra_updater/master/xo-update.sh | bash -s -- -f
-
RE: SyntaxError: Named export 'format' not found. The requested module 'json-rpc-peer' is a CommonJS module
@olivierlambert
I know it's not officially recommended or supported, but I'm using the updater from https://github.com/Jarli01/xenorchestra_updater
@danp
Updated to Node 14.17 but there's still an error, though a different one:
May 26 11:45:46 server-xo xo-server[1908]: (node:1908) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
May 26 11:45:46 server-xo xo-server[1908]: (Use `node --trace-warnings ...` to show where the warning was created)
May 26 11:45:46 server-xo xo-server[1908]: /opt/xen-orchestra/packages/xo-server/dist/xapi/mixins/index.js:2
May 26 11:45:46 server-xo xo-server[1908]: import _gpu, * as __gpu from "./gpu";
May 26 11:45:46 server-xo xo-server[1908]: ^^^^^^
May 26 11:45:46 server-xo xo-server[1908]: SyntaxError: Cannot use import statement outside a module
May 26 11:45:46 server-xo xo-server[1908]: at wrapSafe (internal/modules/cjs/loader.js:984:16)
May 26 11:45:46 server-xo xo-server[1908]: at Module._compile (internal/modules/cjs/loader.js:1032:27)
May 26 11:45:46 server-xo xo-server[1908]: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
May 26 11:45:46 server-xo xo-server[1908]: at Module.load (internal/modules/cjs/loader.js:933:32)
May 26 11:45:46 server-xo xo-server[1908]: at Function.Module._load (internal/modules/cjs/loader.js:774:14)
May 26 11:45:46 server-xo xo-server[1908]: at ModuleWrap.<anonymous> (internal/modules/esm/translators.js:199:29)
May 26 11:45:46 server-xo xo-server[1908]: at ModuleJob.run (internal/modules/esm/module_job.js:152:23)
May 26 11:45:46 server-xo xo-server[1908]: at async Loader.import (internal/modules/esm/loader.js:177:24)
May 26 11:45:46 server-xo xo-server[1908]: at async Object.loadESM (internal/process/esm_loader.js:68:5)
May 26 11:45:46 server-xo systemd[1]: xo-server.service: Main process exited, code=exited, status=1/FAILURE
May 26 11:45:46 server-xo systemd[1]: xo-server.service: Failed with result 'exit-code'.
May 26 11:45:46 server-xo systemd[1]: xo-server.service: Scheduled restart job, restart counter is at 89.
May 26 11:45:46 server-xo systemd[1]: Stopped XO Server.
May 26 11:45:46 server-xo systemd[1]: Started XO Server.
-
SyntaxError: Named export 'format' not found. The requested module 'json-rpc-peer' is a CommonJS module
Just tried updating my XO community install to the latest version, but ran into this issue when starting.
node version is 14.16.1
npm version is 6.14.12
yarn version is 1.22.4
Any ideas as to what this problem is?
May 25 12:56:32 server-xo systemd[1]: Started XO Server.
May 25 12:56:33 server-xo xo-server[3211]: file:///opt/xen-orchestra/packages/xo-server/dist/api/disk.mjs:6
May 25 12:56:33 server-xo xo-server[3211]: import { format } from 'json-rpc-peer';
May 25 12:56:33 server-xo xo-server[3211]: ^^^^^^
May 25 12:56:33 server-xo xo-server[3211]: SyntaxError: Named export 'format' not found. The requested module 'json-rpc-peer' is a CommonJS module, which may not support all module.exports as named exports.
May 25 12:56:33 server-xo xo-server[3211]: CommonJS modules can always be imported via the default export, for example using:
May 25 12:56:33 server-xo xo-server[3211]: import pkg from 'json-rpc-peer';
May 25 12:56:33 server-xo xo-server[3211]: const { format } = pkg;
May 25 12:56:33 server-xo xo-server[3211]: at ModuleJob._instantiate (internal/modules/esm/module_job.js:104:21)
May 25 12:56:33 server-xo xo-server[3211]: at async ModuleJob.run (internal/modules/esm/module_job.js:149:5)
May 25 12:56:33 server-xo xo-server[3211]: at async Loader.import (internal/modules/esm/loader.js:166:24)
May 25 12:56:33 server-xo xo-server[3211]: at async Object.loadESM (internal/process/esm_loader.js:68:5)
May 25 12:56:33 server-xo systemd[1]: xo-server.service: Main process exited, code=exited, status=1/FAILURE
May 25 12:56:33 server-xo systemd[1]: xo-server.service: Failed with result 'exit-code'.
May 25 12:56:33 server-xo systemd[1]: xo-server.service: Scheduled restart job, restart counter is at 1.
May 25 12:56:33 server-xo systemd[1]: Stopped XO Server.
-
RE: Authentication via Active Directory
@bberndt
I'm pretty sure the Test data section is for any intended AD user.
I'm not sure if it helps your particular case, but I'm using:
My LDAP URI is ldaps://host.domain.ext:636
check certificate and TLS is off
The LDAP user is user@domain.ext.
User Filter: (&(objectCategory=Person)(sAMAccountName=*))
-
RE: Delta backup fails on SMB remote
@olivierlambert I don't think it's a permission error, since the same Windows Server 2019 SMB share works fine with full backups.
-
RE: Delta backup fails on SMB remote
@johnarvid That looks like the error I've been getting: https://xcp-ng.org/forum/topic/4353/delta-backup-failed-invalid-argument
I've switched back to full backups for now.
I'm still investigating and troubleshooting, though, so no solutions yet...
-
RE: Delta Backup Failed invalid argument
Hi @julien-f
Here is the log. Thanks.
{ "data": { "mode": "delta", "reportWhen": "never" }, "id": "1615951668943", "jobId": "2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2", "jobName": "Server1 Delta VM Backup", "message": "backup", "scheduleId": "4ddb7b35-3352-4950-b036-2774dd5847bc", "start": 1615951668943, "status": "failure", "tasks": [
{ "data": { "type": "VM", "id": "e9480144-69ce-c451-b0d8-0f6eab12aafe" }, "id": "1615951668949", "message": "Starting backup of virtual-machine-2. (2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2)", "start": 1615951668949, "status": "success", "tasks": [ { "id": "1615951668952", "message": "snapshot", "start": 1615951668952, "status": "success", "end": 1615951674373, "result": "e940bcde-54f4-394f-4766-a99e474fa815" }, { "id": "1615951674377", "message": "add metadata to snapshot", "start": 1615951674377, "status": "success", "end": 1615951674390 }, { "id": "1615951674566", "message": "waiting for uptodate snapshot record", "start": 1615951674566, "status": "success", "end": 1615951674772 }, { "id": "1615951674897", "message": "start snapshot export", "start": 1615951674897, "status": "success", "end": 1615951674897 }, { "data": { "id": "67bda2f7-7a02-46ad-9102-661bb1371f3d", "isFull": false, "type": "remote" }, "id": "1615951674897:1", "message": "export", "start": 1615951674897, "status": "success", "tasks": [ { "id": "1615951675022", "message": "transfer", "start": 1615951675022, "status": "success", "end": 1615952626682, "result": { "size": 21532619776 } }, { "id": "1615952626954", "message": "merge", "start": 1615952626954, "status": "success", "end": 1615953543224, "result": { "size": 12712935424 } } ], "end": 1615953543225 }, { "id": "1615953543228", "message": "set snapshot.other_config[xo:backup:exported]", "start": 1615953543228, "status": "success", "end": 1615953543268 } ], "end": 1615953544370 },
{ "data": { "type": "VM", "id": "040f626c-7559-ad6f-9161-1510b14d52bc" }, "id": "1615951668952:0", "message": "Starting backup of virtual-machine-3. (2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2)", "start": 1615951668952, "status": "failure", "tasks": [ { "id": "1615951668954", "message": "snapshot", "start": 1615951668954, "status": "success", "end": 1615951670908, "result": "2f70c2b2-5c9d-aed7-f0c1-58cb56bd5bbc" }, { "id": "1615951670912", "message": "add metadata to snapshot", "start": 1615951670912, "status": "success", "end": 1615951670927 }, { "id": "1615951671109", "message": "waiting for uptodate snapshot record", "start": 1615951671109, "status": "success", "end": 1615951671312 }, { "id": "1615951671699", "message": "start snapshot export", "start": 1615951671699, "status": "success", "end": 1615951671699 }, { "data": { "id": "67bda2f7-7a02-46ad-9102-661bb1371f3d", "isFull": false, "type": "remote" }, "id": "1615951671700", "message": "export", "start": 1615951671700, "status": "failure", "tasks": [ { "id": "1615951671757", "message": "transfer", "start": 1615951671757, "status": "failure", "end": 1615952631601, "result": { "errno": -22, "code": "EINVAL", "syscall": "open", "path": "/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd", "message": "EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd'", "name": "Error", "stack": "Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd'" } } ], "end": 1615952631601, "result": { "errno": -22, "code": "EINVAL", "syscall": "open", "path": "/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd", "message": "EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd'", "name": "Error", "stack": "Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210317T032751Z.vhd'" } }, { "id": "1615952631605", "message": "set snapshot.other_config[xo:backup:exported]", "start": 1615952631605, "status": "success", "end": 1615952631613 } ], "end": 1615952632581 },
{ "data": { "type": "VM", "id": "5c4e9203-9bb6-97f8-5fb2-f8a71cfe3ed2" }, "id": "1615951668954:0", "message": "Starting backup of virtual-machine-1. (2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2)", "start": 1615951668954, "status": "success", "tasks": [ { "id": "1615951668954:1", "message": "snapshot", "start": 1615951668954, "status": "success", "end": 1615951694024, "result": "11a9f248-aee3-d662-69f3-5ef5cc2dbda4" }, { "id": "1615951694029", "message": "add metadata to snapshot", "start": 1615951694029, "status": "success", "end": 1615951694046 }, { "id": "1615951694219", "message": "waiting for uptodate snapshot record", "start": 1615951694219, "status": "success", "end": 1615951694423 }, { "id": "1615951694636", "message": "start snapshot export", "start": 1615951694636, "status": "success", "end": 1615951694637 }, { "data": { "id": "67bda2f7-7a02-46ad-9102-661bb1371f3d", "isFull": true, "type": "remote" }, "id": "1615951694638", "message": "export", "start": 1615951694638, "status": "success", "tasks": [ { "id": "1615951694839", "message": "transfer", "start": 1615951694839, "status": "success", "end": 1615953319846, "result": { "size": 50652661760 } }, { "id": "1615953320215", "message": "merge", "start": 1615953320215, "status": "success", "end": 1615953320215, "result": { "size": 0 } } ], "end": 1615953320215 }, { "id": "1615953320218", "message": "set snapshot.other_config[xo:backup:exported]", "start": 1615953320218, "status": "success", "end": 1615953320224 } ], "end": 1615953333146 }
], "end": 1615953544371 }
-
RE: Delta Backup Failed invalid argument
@olivierlambert OK, so now I'm testing on an XOA trial with the same remote and backup settings, and I still get this error.
I have 3 VMs, the initial run for the full back up runs fine.
Then I ran the backup manually again; this time it's the delta backup, and I get the following error for one VM with a 200 GB disk. The delta backups for the other two were fine.
Snapshot
Start: Mar 16, 2021, 4:53:35 PM
End: Mar 16, 2021, 4:53:55 PM
128 XCP-ng 243
transfer
Start: Mar 16, 2021, 4:53:55 PM
End: Mar 16, 2021, 4:57:19 PM
Duration: 3 minutes
Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210316T055355Z.vhd'
Start: Mar 16, 2021, 4:53:55 PM
End: Mar 16, 2021, 4:57:19 PM
Duration: 3 minutes
Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/040f626c-7559-ad6f-9161-1510b14d52bc/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/1783e9d6-ca8c-4e90-8b81-9a4720ec7979/20210316T055355Z.vhd'
Start: Mar 16, 2021, 4:53:35 PM
End: Mar 16, 2021, 4:57:20 PM
Duration: 4 minutes
Type: delta
The subsequent run is a full backup, with two successes and one failure (a different VM from the last failure).
Snapshot
Start: Mar 16, 2021, 5:24:59 PM
End: Mar 16, 2021, 5:25:05 PM
128 XCP-ng 243
transfer
Start: Mar 16, 2021, 5:25:05 PM
End: Mar 16, 2021, 6:02:53 PM
Duration: 38 minutes
Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/5c4e9203-9bb6-97f8-5fb2-f8a71cfe3ed2/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/698876e5-c844-41c9-bfaa-c1792a2d64d4/20210316T062505Z.vhd'
Start: Mar 16, 2021, 5:25:05 PM
End: Mar 16, 2021, 6:02:53 PM
Duration: 38 minutes
Error: EINVAL: invalid argument, open '/run/xo-server/mounts/67bda2f7-7a02-46ad-9102-661bb1371f3d/xo-vm-backups/5c4e9203-9bb6-97f8-5fb2-f8a71cfe3ed2/vdis/2c73dfcf-46b4-40a1-9cd3-d4f08dd42aa2/698876e5-c844-41c9-bfaa-c1792a2d64d4/20210316T062505Z.vhd'
Start: Mar 16, 2021, 5:24:59 PM
End: Mar 16, 2021, 6:03:06 PM
Duration: 38 minutes
Type: full