LICENSE_RESTRICTION (PCI_device_for_auto_update)
-
@mauzilla You will need to stop the VM first, as you can't run that command against a running VM.
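Something along these lines should do it once the VM is down (assuming the parameter in question is has-vendor-device, as discussed in this thread; substitute your VM's UUID):
xe vm-shutdown uuid=<VM-UUID>
xe vm-param-set has-vendor-device=false uuid=<VM-UUID>
xe vm-start uuid=<VM-UUID>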
-
@danp Thank you for your feedback. I shut down the server, ran the command, and the manual backup I fired off from the original "failed" backup job seems to have completed. However, it still reports that the backup failed with the same error, even though everything else indicates the backup completed:
VMNAME
  Snapshot
    Start: May 2, 2021, 04:22:03 PM
    End: May 2, 2021, 04:22:05 PM
  NFS SERVER NAME
    transfer
      Start: May 2, 2021, 04:22:07 PM
      End: May 2, 2021, 04:25:16 PM
      Duration: 3 minutes
      Size: 12.53 GiB
      Speed: 67.62 MiB/s
    Start: May 2, 2021, 04:22:05 PM
    End: May 2, 2021, 04:25:16 PM
    Duration: 3 minutes
  Start: May 2, 2021, 04:22:02 PM
  End: May 2, 2021, 04:25:16 PM
  Duration: 3 minutes
  Error: LICENCE_RESTRICTION(PCI_device_for_auto_update)
Type: delta
-
@mauzilla Have you verified that the change took effect?
xe vm-param-get param-name=has-vendor-device uuid={VM-UUID}
-
@danp the property seems to be false:
xe vm-param-get param-name=has-vendor-device uuid=0969208b-4271-7088-35cc-ac6fe41ca580
false
-
@danp What is strange is that, aside from the error, the backup does still seem to have completed. Here is the entire backup log for reference. I just want to confirm: must the call be made against the actual running VM, or against the halted backup VM that XO created?
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1619965322295", "jobId": "be71d5f0-87d4-4af2-aaae-a40c4e8a3c8f", "jobName": "Weekly DC Backup", "message": "backup", "scheduleId": "52f3a324-e200-4b5e-9cc2-b91c626a1280", "start": 1619965322295, "status": "failure", "infos": [ { "data": { "vms": [ "0969208b-4271-7088-35cc-ac6fe41ca580" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "0969208b-4271-7088-35cc-ac6fe41ca580" }, "id": "1619965322992", "message": "backup VM", "start": 1619965322992, "status": "failure", "tasks": [ { "id": "1619965323640", "message": "snapshot", "start": 1619965323640, "status": "success", "end": 1619965325714, "result": "5a661ee0-255f-ed46-3557-abafb1dc0075" }, { "data": { "id": "afbebbb7-acf1-4c98-ae32-7f436da4b701", "isFull": false, "type": "remote" }, "id": "1619965325714:0", "message": "export", "start": 1619965325714, "status": "success", "tasks": [ { "id": "1619965327142", "message": "transfer", "start": 1619965327142, "status": "success", "end": 1619965516926, "result": { "size": 13456619520 } }, { "id": "1619965516943", "message": "merge", "start": 1619965516943, "status": "success", "end": 1619965516943, "result": { "size": 0 } } ], "end": 1619965516943 } ], "end": 1619965516963, "result": { "code": "LICENCE_RESTRICTION", "params": [ "PCI_device_for_auto_update" ], "call": { "method": "VM.set_is_a_template", "params": [ "OpaqueRef:f6a13d0e-df67-4c1e-bc7b-c275cbe8f74a", false ] }, "message": "LICENCE_RESTRICTION(PCI_device_for_auto_update)", "name": "XapiError", "stack": "XapiError: LICENCE_RESTRICTION(PCI_device_for_auto_update)\n at Function.wrap (/home/xo/xen-orchestra/packages/xen-api/dist/_XapiError.js:26:12)\n at /home/xo/xen-orchestra/packages/xen-api/dist/transports/json-rpc.js:48:30\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:93:5)" } } ], "end": 1619965516964 }
-
@mauzilla I don't think you've stated the CH version involved. Also, have you tried creating a new backup job to see if the error still occurs with it?
-
@danp Citrix Hypervisor 7.6 (yes, yes, I know it's EOL; we're migrating to XCP-ng 8.2 within the next couple of months).
I also haven't tried a new backup job yet. I wanted to troubleshoot what I might be doing wrong before starting a new one, but I'll do that today and let you know if the problem persists.
-
@Danp I can confirm that a new backup job with a VM where we changed the param does seem to complete. What do you recommend we do now? It seems the previous backup job still thinks there is an issue. We have the delta backups scheduled to run on Saturday, so I'm going to notify the clients that we need to stop their VMs and run the above parameter change to get backups working again. Hopefully the above issue is then fully resolved on Saturday. I would prefer not to recreate the entire backup set, as the same backup job contains large VMs that are already in a delta backup cycle. Maybe I should remove the VMs from the backup job and add them again (obviously after running the xe param-set command on them). Will this work?
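For reference, the rough sequence I have in mind for each affected VM once it's shut down (placeholder UUIDs, adjust for your environment):
for uuid in <VM-UUID-1> <VM-UUID-2>; do
    xe vm-param-set has-vendor-device=false uuid=$uuid
    # verify the change took effect; this should print "false"
    xe vm-param-get param-name=has-vendor-device uuid=$uuid
done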
-
@mauzilla I'm not sure which direction you should head at this point. I'm guessing that the issue is with the merge of the existing deltas. One thought would be to remove the VM snapshots related to the backup job, which would force a full backup for each VM.
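To identify them, something like this should list the snapshots attached to a given VM so you can spot the ones created by the backup job (placeholder UUID):
xe snapshot-list snapshot-of=<VM-UUID> params=uuid,name-label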
@olivierlambert Any suggestions for the OP (beyond upgrading to XCP-ng)?
-
Maybe it's because the snapshot still has the old property. If you remove all the snapshots, the next backup should run OK.
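They can be removed from the VM's Snapshots tab in XO, or from the CLI with something like the following (I'm not 100% sure of the exact argument name, so double-check with xe help snapshot-uninstall; placeholder UUID):
xe snapshot-uninstall snapshot-uuid=<SNAPSHOT-UUID>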
-
@olivierlambert said in LICENSE_RESTRICTION (PCI_device_for_auto_update):
If you remove all the snapshots, the next backup should run OK.
Thank you Olivier. I tried deleting the snapshot from one of the servers, but I'm also unable to do so. Will it help if I re-enable / upgrade the servers to CH Enterprise again and then attempt to delete the snapshots? Also, I take it I need to go to VM > Snapshots and delete the snapshots that were created by the backup job?
vm.delete
{
  "id": "3a9a6fd9-3524-239a-9da2-a70453f83fdf"
}
{
  "code": "LICENCE_RESTRICTION",
  "params": [ "PCI_device_for_auto_update" ],
  "call": {
    "method": "VM.set_is_a_template",
    "params": [ "OpaqueRef:ff753fa6-bcec-4222-b7aa-31ddb0777df3", false ]
  },
  "message": "LICENCE_RESTRICTION(PCI_device_for_auto_update)",
  "name": "XapiError",
  "stack": "XapiError: LICENCE_RESTRICTION(PCI_device_for_auto_update) at Function.wrap (/home/xo/xen-orchestra/packages/xen-api/src/_XapiError.js:16:12) at /home/xo/xen-orchestra/packages/xen-api/src/transports/json-rpc.js:35:27 at AsyncResource.runInAsyncScope (async_hooks.js:197:9) at cb (/home/xo/xen-orchestra/node_modules/bluebird/js/release/util.js:355:42) at tryCatcher (/home/xo/xen-orchestra/node_modules/bluebird/js/release/util.js:16:23) at Promise._settlePromiseFromHandler (/home/xo/xen-orchestra/node_modules/bluebird/js/release/promise.js:547:31) at Promise._settlePromise (/home/xo/xen-orchestra/node_modules/bluebird/js/release/promise.js:604:18) at Promise._settlePromise0 (/home/xo/xen-orchestra/node_modules/bluebird/js/release/promise.js:649:10) at Promise._settlePromises (/home/xo/xen-orchestra/node_modules/bluebird/js/release/promise.js:729:18) at _drainQueueStep (/home/xo/xen-orchestra/node_modules/bluebird/js/release/async.js:93:12) at _drainQueue (/home/xo/xen-orchestra/node_modules/bluebird/js/release/async.js:86:9) at Async._drainQueues (/home/xo/xen-orchestra/node_modules/bluebird/js/release/async.js:102:5) at Immediate.Async.drainQueues [as _onImmediate] (/home/xo/xen-orchestra/node_modules/bluebird/js/release/async.js:15:14) at processImmediate (internal/timers.js:461:21) at process.topLevelDomainCallback (domain.js:144:15) at process.callbackTrampoline (internal/async_hooks.js:129:14)"
}
-
Those license restrictions are ridiculous.
So you'll need to modify the snapshot parameter to remove the PCI device auto update, but I'm not 100% sure it's doable.
The command would be something like
xe snapshot-param-set has-vendor-device=false uuid=<SNAPSHOT UUID>
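To apply that to every snapshot of a VM in one go, an untested sketch would be something like this (placeholder VM UUID; --minimal returns a comma-separated list of snapshot UUIDs):
for snap in $(xe snapshot-list snapshot-of=<VM-UUID> --minimal | tr ',' ' '); do
    xe snapshot-param-set has-vendor-device=false uuid=$snap
done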
-
@olivierlambert Perfect, setting the parameter on both the VM and the snapshot resolves the issue. I am able to continue backups. Thank you both for helping!
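For anyone who runs into the same thing, the rough sequence that worked for us (placeholder UUIDs; the VM has to be shut down for the vm-param-set step):
xe vm-shutdown uuid=<VM-UUID>
xe vm-param-set has-vendor-device=false uuid=<VM-UUID>
xe snapshot-param-set has-vendor-device=false uuid=<SNAPSHOT-UUID>
xe vm-start uuid=<VM-UUID>
After that, the delta backup job ran without the LICENCE_RESTRICTION error.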
-
Ahh, finally! Enjoy saying goodbye to Citrix. We'll be delighted to get you on board (as a user, or even as a contributor via our pro support!)