Create VM Error SR_BACKEND_FAILURE_1200, No such Tapdisk.
-
Started getting the following error whenever I create a new VM; this had been working fine previously.
This is creating a VM on a remote NFS share holding the VM image (VDI), with a separate NFS share for the ISO. Both NFS shares seem OK: both have worked in the past, and all other VMs are running fine from the same NFS share I am creating the new VM on. I have restarted the xe toolstack on each host; this has not fixed the issue.
Latest XCP-ng, fully updated, with XO from sources. The VM is using hardware virtualisation (HVM) and is booting an Ubuntu 22.04 Server ISO. XCP-ng reports version 4.19.0 (the dom0 kernel), xo-server version 5.92.0, and XO at commit 8ed84.
What causes this? What do I do to investigate & fix this?
I have read this https://xcp-ng.org/docs/architecture.html but I am still not really clear what a tapdisk is and what it is used for. It sounds like it is a process providing a temporary disk/mount/device used to bootstrap the new VM?
vm.start
{
  "id": "3ed0835d-f7bf-bb38-4cec-2ecd7b6992c9",
  "bypassMacAddressesCheck": false,
  "force": false
}
{
  "code": "SR_BACKEND_FAILURE_1200",
  "params": ["", "No such Tapdisk(minor=7)", ""],
  "call": {
    "method": "VM.start",
    "params": ["OpaqueRef:7f6d9418-d37d-40de-8999-3a2d76c30b4d", false, false]
  },
  "message": "SR_BACKEND_FAILURE_1200(, No such Tapdisk(minor=7), )",
  "name": "XapiError",
  "stack": "XapiError: SR_BACKEND_FAILURE_1200(, No such Tapdisk(minor=7), )
    at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_XapiError.js:16:12)
    at /opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/transports/json-rpc.js:37:27
    at AsyncResource.runInAsyncScope (async_hooks.js:197:9)
    at cb (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/util.js:355:42)
    at tryCatcher (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromise0 (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/promise.js:729:18)
    at _drainQueueStep (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/opt/xo/xo-builds/xen-orchestra-202204291839/node_modules/bluebird/js/release/async.js:15:14)
    at processImmediate (internal/timers.js:464:21)
    at process.callbackTrampoline (internal/async_hooks.js:130:17)"
}
-
I am seeing the same failure in the xensource log on the pool master and also on one of the other hosts, which I assume is the server chosen to host the VM.
tail -1000f /var/log/xensource.log
...
Jun 8 17:57:41 XCPNG02 xapi: [debug||103204 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:690db7cf5c31 created by task D:8c6d5334b10b
Jun 8 17:57:41 XCPNG02 xapi: [debug||103148 HTTPS 192.168.1.190->:::80|VM.start R:30a9d8c81d65|xmlrpc_client] stunnel pid: 8324 (cached = true) returned stunnel to cache
Jun 8 17:57:41 XCPNG02 xapi: [ info||103148 HTTPS 192.168.1.190->:::80|VM.start R:30a9d8c81d65|xapi_session] Session.destroy trackid=c395bd1ebf2259112354d8f71d05fef1
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] VM.start R:30a9d8c81d65 failed with exception Server_error(SR_BACKEND_FAILURE_1200, [ ; No such Tapdisk(minor=5); ])
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] Raised Server_error(SR_BACKEND_FAILURE_1200, [ ; No such Tapdisk(minor=5); ])
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 1/19 xapi Raised at file ocaml/xapi-client/client.ml, line 7
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 2/19 xapi Called from file ocaml/xapi-client/client.ml, line 19
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 3/19 xapi Called from file ocaml/xapi-client/client.ml, line 6044
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 4/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 5/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 6/19 xapi Called from file ocaml/xapi/message_forwarding.ml, line 131
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 7/19 xapi Called from file ocaml/xapi/message_forwarding.ml, line 1159
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 8/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 9/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 10/19 xapi Called from file ocaml/xapi/message_forwarding.ml, line 1491
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 11/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 12/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 13/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 14/19 xapi Called from file ocaml/xapi/rbac.ml, line 231
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 15/19 xapi Called from file ocaml/xapi/server_helpers.ml, line 103
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 16/19 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 17/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 18/19 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace] 19/19 xapi Called from file lib/backtrace.ml, line 177
Jun 8 17:57:41 XCPNG02 xapi: [error||103148 :::80||backtrace]
Jun 8 17:57:41 XCPNG02 xapi: [debug||102859 :::80||dummytaskhelper] task dispatch:event.from D:bd097f7c7599 created by task D:bd1ca8d956f6
...
I am also seeing these errors being reported in the daemon log file.
grep -i error /var/log/daemon.log
...
Jun 8 17:58:53 XCPNG02 forkexecd: [error||0 ||forkexecd] 31201 (/opt/xensource/libexec/block_device_io -device /dev/sm/backend/f3d786ad-b524-...) exited with signal: SIGKILL
Jun 8 18:02:22 XCPNG02 tapdisk[2282]: ERROR: errno -13 at __tapdisk_vbd_complete_td_request: req tap-1.0: write 0x0008 secs @ 0x0001e000 - Permission denied
Jun 8 18:04:23 XCPNG02 tapdisk[2282]: ERROR: errno -13 at __tapdisk_vbd_request_timeout: req tap-1.0 timed out, retried 120 times
Jun 8 18:04:23 XCPNG02 tapdisk[2282]: ERROR: errno -13 at __tapdisk_vbd_request_timeout: req tap-1.0 timed out, retried 120 times
...
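Looking up that errno: tapdisk logs negative errno values, and 13 is EACCES, the same "Permission denied" printed at the end of the line. This can be checked with Python's standard library:

```python
import errno
import os

# tapdisk logged "errno -13"; the magnitude maps to EACCES.
print(errno.errorcode[13])  # EACCES
print(os.strerror(13))      # Permission denied
```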
On the master host of the pool I am seeing these errors being reported constantly in the kernel log. This doesn't look good.
tail -f /var/log/kern.log
...
Jun 8 18:04:23 XCPNG02 kernel: [2884638.906148] print_req_error: I/O error, dev tdb, sector 122880
Jun 8 18:04:23 XCPNG02 kernel: [2884638.906160] Buffer I/O error on dev tdb, logical block 15360, lost async page write
...
What is the /dev/tdb disk used for, and can I recover it somehow?
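From what I can find (this is my assumption, not from the XCP-ng docs), the td* block devices in dom0 are the device nodes created by tapdisk instances, handed out in alphabetical order, so I/O errors on /dev/tdb would point at a tapdisk-backed VDI rather than a physical disk. A toy sketch of that assumed naming:

```python
# Toy sketch (assumed naming scheme): tapdisk device nodes in creation order.
def tapdisk_device_name(index):
    """Map tapdisk index 0, 1, 2, ... to device names tda, tdb, tdc, ..."""
    return "td" + chr(ord("a") + index)

print([tapdisk_device_name(i) for i in range(3)])  # ['tda', 'tdb', 'tdc']
```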
Would this be causing the problems I am seeing?
-
I rebooted the server those I/O errors were occurring on, and that seems to have stopped them. Yet the problem with
SR_BACKEND_FAILURE_1200, No such Tapdisk.
is still occurring and I still cannot create new VMs. Any ideas, anyone?
-
Have you checked SMlog?
-
@Danp said in Create VM Error SR_BACKEND_FAILURE_1200, No such Tapdisk.:
Have you checked SMlog?

Creating a new VM now and checking SMlog, I see a similar error about the tapdisk (the VM was created on the pool master):
***** generic exception: vdi_attach: EXCEPTION <class 'blktap2.TapdiskNotRunning'>, No such Tapdisk(minor=5)
Full log:
Jun 8 22:39:00 XCPNG02 SM: [11278] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11278] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11278] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:00 XCPNG02 SM: [11278] pread SUCCESS Jun 8 22:39:00 XCPNG02 SM: [11278] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11278] vdi_epoch_begin {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|667b7d40-d2f2-46a6-aedd-b7faf50894f8|VDI.epoch_begin', 'vdi_ref': 'OpaqueRef:e7f286ca-bfe4-4cfe-8ae6-20a5c589ae2e', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:4e700825-f840-4e11-9c8b-eb08ade578a5', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_epoch_begin', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f'} Jun 8 22:39:00 XCPNG02 SM: [11305] vdi_epoch_begin {'sr_uuid': 'ec87c10e-1499-c1c5-cf3f-c234062bb459', 'subtask_of': 'DummyRef:|362ae06a-b4bc-412a-babf-762ff2cda1f0|VDI.epoch_begin', 'vdi_ref': 'OpaqueRef:e2db345c-54ef-47c0-ad1f-6f9ea27f6e4a', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'ubuntu-22.04-live-server-amd64.iso', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:f1407219-7293-44b2-9e3b-a1a03cf30416', 'device_config': {'SRmaster': 'true', 'location': 'UNRAID01.NEWT.newtcomputing.com:/mnt/user/isos'}, 'command': 'vdi_epoch_begin', 'vdi_allow_caching': 'false', 'sr_ref': 
'OpaqueRef:9c3de093-8c48-4591-ac5d-dae913519937', 'vdi_uuid': '56e01d87-0eb5-4f03-b916-b74484360738'} Jun 8 22:39:00 XCPNG02 SM: [11322] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11322] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11322] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:00 XCPNG02 SM: [11322] pread SUCCESS Jun 8 22:39:00 XCPNG02 SM: [11322] vdi_attach {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|cc3f614f-68af-4c5b-a9fa-2dcc4ccdbab1|VDI.attach2', 'vdi_ref': 'OpaqueRef:e7f286ca-bfe4-4cfe-8ae6-20a5c589ae2e', 'vdi_on_boot': 'persist', 'args': ['true'], 'o_direct': False, 'vdi_location': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:cfd26e8a-b81b-4099-ab0e-a60dd825cc1c', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f'} Jun 8 22:39:00 XCPNG02 SM: [11322] lock: opening lock file /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:00 XCPNG02 SM: [11322] <__main__.NFSFileVDI object at 0x7fea6672d150> Jun 8 22:39:00 XCPNG02 SM: [11322] result: {'params_nbd': 'nbd:unix:/run/blktap-control/nbd/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'o_direct_reason': 'NO_RO_IMAGE', 'params': '/dev/sm/backend/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEjFkNTY1NGM1LTBmZTQtNGEgIA==', 'scsi/0x12/0x83': 
'AIMAMQIBAC1YRU5TUkMgIDFkNTY1NGM1LTBmZTQtNGEwNy1iNWZiLTI5YjU4OTIyODcwZiA=', 'vdi-uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'mem-pool': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed'}} Jun 8 22:39:00 XCPNG02 SM: [11322] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11353] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11353] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:00 XCPNG02 SM: [11353] pread SUCCESS Jun 8 22:39:00 XCPNG02 SM: [11353] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:00 XCPNG02 SM: [11353] vdi_activate {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|07b91a8d-5c31-4d6e-aca7-b8e3679dbc69|VDI.activate', 'vdi_ref': 'OpaqueRef:e7f286ca-bfe4-4cfe-8ae6-20a5c589ae2e', 'vdi_on_boot': 'persist', 'args': ['true'], 'o_direct': False, 'vdi_location': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:91fe63d1-9495-4ec6-bf3a-27a745a0561d', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f'} Jun 8 22:39:00 XCPNG02 SM: [11353] lock: opening lock file /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:00 XCPNG02 SM: [11353] blktap2.activate Jun 8 22:39:00 XCPNG02 SM: [11353] lock: acquired /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:00 XCPNG02 SM: [11353] Adding tag to: 1d5654c5-0fe4-4a07-b5fb-29b58922870f Jun 8 22:39:00 
XCPNG02 SM: [11353] Activate lock succeeded Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:00 XCPNG02 SM: [11353] pread SUCCESS Jun 8 22:39:00 XCPNG02 SM: [11353] PhyLink(/dev/sm/phy/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f) -> /var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd Jun 8 22:39:00 XCPNG02 SM: [11353] <NFSSR.NFSFileVDI object at 0x7ff0bb6b9610> Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/tap-ctl', 'allocate'] Jun 8 22:39:00 XCPNG02 SM: [11353] = 0 Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/tap-ctl', 'spawn'] Jun 8 22:39:00 XCPNG02 SM: [11353] = 0 Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/tap-ctl', 'attach', '-p', '11418', '-m', '4'] Jun 8 22:39:00 XCPNG02 SM: [11353] = 0 Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/tap-ctl', 'open', '-p', '11418', '-m', '4', '-a', 'vhd:/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd', '-t', '40'] Jun 8 22:39:00 XCPNG02 SM: [11353] = 0 Jun 8 22:39:00 XCPNG02 SM: [11353] ['/usr/sbin/tap-ctl', 'open', '-p', '11418', '-m', '4', '-a', 'vhd:/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd', '-t', '40'] Jun 8 22:39:00 XCPNG02 SM: [11353] = 114 Jun 8 22:39:00 XCPNG02 SM: [11353] Set scheduler to [noop] on [/sys/dev/block/254:4] Jun 8 22:39:00 XCPNG02 SM: [11353] tap.activate: Launched Tapdisk(vhd:/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd, pid=11418, minor=4, state=R) Jun 8 22:39:00 XCPNG02 SM: [11353] DeviceNode(/dev/sm/backend/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f) -> /dev/xen/blktap-2/tapdev4 Jun 8 22:39:00 XCPNG02 SM: [11353] 
NBDLink(/run/blktap-control/nbd/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f) -> /run/blktap-control/nbd11418.4 Jun 8 22:39:00 XCPNG02 SM: [11353] lock: released /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:00 XCPNG02 SM: [11473] vdi_attach {'sr_uuid': 'ec87c10e-1499-c1c5-cf3f-c234062bb459', 'subtask_of': 'DummyRef:|f3f51060-a443-4a91-8eb9-e9b9e1b22a51|VDI.attach2', 'vdi_ref': 'OpaqueRef:e2db345c-54ef-47c0-ad1f-6f9ea27f6e4a', 'vdi_on_boot': 'persist', 'args': ['false'], 'vdi_location': 'ubuntu-22.04-live-server-amd64.iso', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:987a7a78-b617-4d8a-8d3d-68e565aca8aa', 'device_config': {'SRmaster': 'true', 'location': 'UNRAID01.NEWT.newtcomputing.com:/mnt/user/isos'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:9c3de093-8c48-4591-ac5d-dae913519937', 'vdi_uuid': '56e01d87-0eb5-4f03-b916-b74484360738'} Jun 8 22:39:00 XCPNG02 SM: [11473] lock: opening lock file /var/lock/sm/56e01d87-0eb5-4f03-b916-b74484360738/vdi Jun 8 22:39:00 XCPNG02 SM: [11473] Attach & activate Jun 8 22:39:00 XCPNG02 SM: [11473] PhyLink(/dev/sm/phy/ec87c10e-1499-c1c5-cf3f-c234062bb459/56e01d87-0eb5-4f03-b916-b74484360738) -> /var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459/ubuntu-22.04-live-server-amd64.iso Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'allocate'] Jun 8 22:39:00 XCPNG02 SM: [11473] = 0 Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'spawn'] Jun 8 22:39:00 XCPNG02 SM: [11473] = 0 Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'attach', '-p', '11505', '-m', '5'] Jun 8 22:39:00 XCPNG02 SM: [11473] = 0 Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'open', '-p', '11505', '-m', '5', '-a', 'aio:/var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459/ubuntu-22.04-live-server-amd64.iso', '-R'] Jun 8 22:39:00 XCPNG02 SM: [11473] = 13 Jun 8 22:39:00 XCPNG02 SM: [11473] 
['/usr/sbin/tap-ctl', 'close', '-p', '11505', '-m', '5', '-t', '30'] Jun 8 22:39:00 XCPNG02 SM: [11473] = 0 Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'detach', '-p', '11505', '-m', '5'] Jun 8 22:39:01 XCPNG02 SM: [11473] = 0 Jun 8 22:39:01 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'free', '-m', '5'] Jun 8 22:39:01 XCPNG02 SM: [11473] = 0 Jun 8 22:39:01 XCPNG02 SM: [11473] ***** generic exception: vdi_attach: EXCEPTION <class 'blktap2.TapdiskNotRunning'>, No such Tapdisk(minor=5) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 110, in run Jun 8 22:39:01 XCPNG02 SM: [11473] return self._run_locked(sr) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked Jun 8 22:39:01 XCPNG02 SM: [11473] rv = self._run(sr, target) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 247, in _run Jun 8 22:39:01 XCPNG02 SM: [11473] return target.attach(self.params['sr_uuid'], self.vdi_uuid, writable, caching_params = caching_params) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1557, in attach Jun 8 22:39:01 XCPNG02 SM: [11473] {"rdonly": not writable}) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1710, in _activate Jun 8 22:39:01 XCPNG02 SM: [11473] self._get_pool_config(sr_uuid).get("mem-pool-size")) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1346, in _tap_activate Jun 8 22:39:01 XCPNG02 SM: [11473] options) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 838, in launch_on_tap Jun 8 22:39:01 XCPNG02 SM: [11473] tapdisk = cls.__from_blktap(blktap) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 749, in __from_blktap Jun 8 22:39:01 XCPNG02 SM: [11473] tapdisk = cls.from_minor(minor=blktap.minor) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 745, in from_minor Jun 8 22:39:01 XCPNG02 SM: 
[11473] return cls.get(minor=minor) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 735, in get Jun 8 22:39:01 XCPNG02 SM: [11473] raise TapdiskNotRunning(**attrs) Jun 8 22:39:01 XCPNG02 SM: [11473] Jun 8 22:39:01 XCPNG02 SM: [11473] ***** ISO: EXCEPTION <class 'blktap2.TapdiskNotRunning'>, No such Tapdisk(minor=5) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 378, in run Jun 8 22:39:01 XCPNG02 SM: [11473] ret = cmd.run(sr) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 110, in run Jun 8 22:39:01 XCPNG02 SM: [11473] return self._run_locked(sr) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked Jun 8 22:39:01 XCPNG02 SM: [11473] rv = self._run(sr, target) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/SRCommand.py", line 247, in _run Jun 8 22:39:01 XCPNG02 SM: [11473] return target.attach(self.params['sr_uuid'], self.vdi_uuid, writable, caching_params = caching_params) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1557, in attach Jun 8 22:39:01 XCPNG02 SM: [11473] {"rdonly": not writable}) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1710, in _activate Jun 8 22:39:01 XCPNG02 SM: [11473] self._get_pool_config(sr_uuid).get("mem-pool-size")) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 1346, in _tap_activate Jun 8 22:39:01 XCPNG02 SM: [11473] options) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 838, in launch_on_tap Jun 8 22:39:01 XCPNG02 SM: [11473] tapdisk = cls.__from_blktap(blktap) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 749, in __from_blktap Jun 8 22:39:01 XCPNG02 SM: [11473] tapdisk = cls.from_minor(minor=blktap.minor) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 745, in from_minor Jun 8 22:39:01 XCPNG02 SM: [11473] 
return cls.get(minor=minor) Jun 8 22:39:01 XCPNG02 SM: [11473] File "/opt/xensource/sm/blktap2.py", line 735, in get Jun 8 22:39:01 XCPNG02 SM: [11473] raise TapdiskNotRunning(**attrs) Jun 8 22:39:01 XCPNG02 SM: [11473] Jun 8 22:39:01 XCPNG02 SM: [11551] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:01 XCPNG02 SM: [11551] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:01 XCPNG02 SM: [11551] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:01 XCPNG02 SM: [11551] pread SUCCESS Jun 8 22:39:01 XCPNG02 SM: [11551] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:01 XCPNG02 SM: [11551] vdi_deactivate {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|f500f41d-7c07-44fb-9845-732f9c3f0f6a|VDI.deactivate', 'vdi_ref': 'OpaqueRef:e7f286ca-bfe4-4cfe-8ae6-20a5c589ae2e', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:7d3510c2-c1fc-415b-b154-64da3af76a16', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f'} Jun 8 22:39:01 XCPNG02 SM: [11551] lock: opening lock file /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:01 XCPNG02 SM: [11551] blktap2.deactivate Jun 8 22:39:01 XCPNG02 SM: [11551] lock: acquired /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:01 XCPNG02 SM: [11551] ['/usr/sbin/tap-ctl', 'close', '-p', '11418', '-m', '4', '-t', '30'] Jun 8 22:39:01 XCPNG02 SM: [11551] = 0 Jun 8 22:39:01 
XCPNG02 SM: [11551] ['/usr/sbin/tap-ctl', 'detach', '-p', '11418', '-m', '4'] Jun 8 22:39:01 XCPNG02 SM: [11551] = 0 Jun 8 22:39:01 XCPNG02 SM: [11551] ['/usr/sbin/tap-ctl', 'free', '-m', '4'] Jun 8 22:39:01 XCPNG02 SM: [11551] = 0 Jun 8 22:39:01 XCPNG02 SM: [11551] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd, pid=11418, minor=4, state=R) Jun 8 22:39:01 XCPNG02 SM: [11551] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:01 XCPNG02 SM: [11551] pread SUCCESS Jun 8 22:39:01 XCPNG02 SM: [11551] Removed host key host_OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190 for 1d5654c5-0fe4-4a07-b5fb-29b58922870f Jun 8 22:39:01 XCPNG02 SM: [11551] lock: released /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:01 XCPNG02 SM: [11615] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:01 XCPNG02 SM: [11615] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:39:01 XCPNG02 SM: [11615] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/1d5654c5-0fe4-4a07-b5fb-29b58922870f.vhd'] Jun 8 22:39:01 XCPNG02 SM: [11615] pread SUCCESS Jun 8 22:39:01 XCPNG02 SM: [11615] vdi_detach {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|430cf35c-c0d7-43e4-bc89-b014d3bf7754|VDI.detach', 'vdi_ref': 'OpaqueRef:e7f286ca-bfe4-4cfe-8ae6-20a5c589ae2e', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '1d5654c5-0fe4-4a07-b5fb-29b58922870f', 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:f17eba69-d8c9-4dcf-ab26-6c8c7dba380c', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_detach', 
'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_uuid': '1d5654c5-0fe4-4a07-b5fb-29b58922870f'} Jun 8 22:39:01 XCPNG02 SM: [11615] lock: opening lock file /var/lock/sm/1d5654c5-0fe4-4a07-b5fb-29b58922870f/vdi Jun 8 22:39:01 XCPNG02 SM: [11615] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:40:44 XCPNG02 SM: [12548] ['uuidgen', '-r'] Jun 8 22:40:44 XCPNG02 SM: [12548] pread SUCCESS Jun 8 22:40:44 XCPNG02 SM: [12548] lock: opening lock file /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:40:44 XCPNG02 SM: [12548] lock: acquired /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr Jun 8 22:40:44 XCPNG02 SM: [12548] vdi_create {'sr_uuid': 'bc2687ec-0cdf-03ed-7f90-e58edad07fed', 'subtask_of': 'DummyRef:|96f661f1-77c5-4d4d-ab2b-243d0de50732|VDI.create', 'vdi_type': 'user', 'args': ['30064771072', 'test08jun_vdi', 'test08jun', '', 'false', '19700101T00:00:00Z', '', 'false'], 'o_direct': False, 'host_ref': 'OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190', 'session_ref': 'OpaqueRef:7b64b6b4-4d66-4746-b75e-f13c086da122', 'device_config': {'SRmaster': 'true', 'serverpath': '/mnt/Pool01/Remote_VM_Images/xcpng_vm_images', 'server': 'TNC01.NEWT.newtcomputing.com'}, 'command': 'vdi_create', 'sr_ref': 'OpaqueRef:b1f916b2-51fb-4726-bcbf-49ab05b0cf50', 'vdi_sm_config': {}} Jun 8 22:40:44 XCPNG02 SM: [12548] ['/usr/sbin/td-util', 'create', 'vhd', '28672', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/604b51ed-53ea-45cf-a110-f4a51ec6101a.vhd'] Jun 8 22:40:44 XCPNG02 SM: [12548] pread SUCCESS Jun 8 22:40:44 XCPNG02 SM: [12548] ['/usr/sbin/td-util', 'query', 'vhd', '-v', '/var/run/sr-mount/bc2687ec-0cdf-03ed-7f90-e58edad07fed/604b51ed-53ea-45cf-a110-f4a51ec6101a.vhd'] Jun 8 22:40:44 XCPNG02 SM: [12548] pread SUCCESS Jun 8 22:40:44 XCPNG02 SM: [12548] lock: released /var/lock/sm/bc2687ec-0cdf-03ed-7f90-e58edad07fed/sr
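Reading the tap-ctl return codes in the log above (assuming, on my part, that they are errno values): the first open of the VHD returns 0 (success), the immediately repeated identical open returns 114, and the open of the ISO returns 13, which is the failure that leads to the tap being torn down and the TapdiskNotRunning exception. Decoding the two non-zero codes:

```python
import errno
import os

# Assumption: the "= N" values after the tap-ctl calls in SMlog are errno codes.
for code in (114, 13):
    print(code, errno.errorcode[code], "-", os.strerror(code))
```

If that assumption holds, 114 is EALREADY (the VHD was already open) and 13 is EACCES (Permission denied) on the ISO.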
-
Comparing the code of blktap2.py on GitHub with what I have on my system, I can see that they are subtly different.
I note the last change to blktap2.py on GitHub was 6th April 2020. I installed XCP-ng in November last year, but I had recently been testing the new XOSTOR. Checking the date of blktap2.py on my system, I find it is 24th May 2022, so possibly this is a bug in the latest XOSTOR release?
-
Pinging @ronan-a
-
@olivierlambert Is it possible to roll back to a "standard" version of XCP-ng on these servers? I did a quick search for a way to roll back to a given XCP-ng version but did not find anything specific.
-
I think it is, but @ronan-a should help you with this.
-
@geoffbland You can downgrade your sm version on each host using:

yum downgrade sm-2.30.6-1.1.xcpng8.2.x86_64
But I'm not sure if your problem is related to the sm linstor version.
-
Also:
Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'open', '-p', '11505', '-m', '5', '-a', 'aio:/var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459/ubuntu-22.04-live-server-amd64.iso', '-R']
Jun 8 22:39:00 XCPNG02 SM: [11473] = 13
Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'close', '-p', '11505', '-m', '5', '-t', '30']
Jun 8 22:39:00 XCPNG02 SM: [11473] = 0
Jun 8 22:39:00 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'detach', '-p', '11505', '-m', '5']
Jun 8 22:39:01 XCPNG02 SM: [11473] = 0
Jun 8 22:39:01 XCPNG02 SM: [11473] ['/usr/sbin/tap-ctl', 'free', '-m', '5']
Jun 8 22:39:01 XCPNG02 SM: [11473] = 0
There is this error during the tapdisk open call: Permission denied (errno 13).

Are you sure you can correctly access the data of your SR? The last exception is raised in blktap2.py:

try:
    tapdisk = cls.__from_blktap(blktap)
    node = '/sys/dev/block/%d:%d' % (tapdisk.major(), tapdisk.minor)
    util.set_scheduler_sysfs_node(node, 'noop')
    return tapdisk
except:
    TapCtl.close(pid, minor)
    raise
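To illustrate why the log reports "No such Tapdisk" rather than the permission error itself, here is a simplified sketch (hypothetical names, not the actual sm code): when the open fails, the tapdisk never registers its minor, so it is the follow-up lookup by minor that raises:

```python
# Simplified sketch (hypothetical names) of how a failed open surfaces
# later as a missing-tapdisk error.
class TapdiskNotRunning(Exception):
    pass

running = {}  # minor -> tapdisk state

def tap_open(minor, ok):
    if not ok:                  # e.g. EACCES opening the image
        return                  # the minor is never registered
    running[minor] = "R"

def from_minor(minor):
    if minor not in running:
        raise TapdiskNotRunning("No such Tapdisk(minor=%d)" % minor)
    return running[minor]

tap_open(5, ok=False)           # the underlying Permission denied
try:
    from_minor(5)               # what the caller actually sees
except TapdiskNotRunning as e:
    print(e)                    # No such Tapdisk(minor=5)
```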
-
@ronan-a said in Create VM Error SR_BACKEND_FAILURE_1200, No such Tapdisk.:
There is this error during the tapdisk open call: Permission denied (errno 13). Are you sure you can correctly access the data of your SR?

I'm pretty sure I can...
[11:42 XCPNG01 ~]# whoami
root
[11:42 XCPNG01 ~]# ll /var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459/ubuntu-22.04-live-server-amd64.iso
-rwxrwx--- 1 root users 1466714112 Apr 21 19:20 /var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459/ubuntu-22.04-live-server-amd64.iso
-
Try to create a file in there instead of just listing
-
@olivierlambert said in Create VM Error SR_BACKEND_FAILURE_1200, No such Tapdisk.:
Try to create a file in there instead of just listing
I have access....
[18:13 XCPNG01 ec87c10e-1499-c1c5-cf3f-c234062bb459]# pwd
/var/run/sr-mount/ec87c10e-1499-c1c5-cf3f-c234062bb459
[18:13 XCPNG01 ec87c10e-1499-c1c5-cf3f-c234062bb459]# touch new_file
[18:14 XCPNG01 ec87c10e-1499-c1c5-cf3f-c234062bb459]# ll new_file
-rw-r----- 1 nfsnobody nfsnobody 0 Jun 13 18:14 new_file
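Although, one thing I notice in that listing: the file was created as root but came out owned by nfsnobody, which is what NFS root squashing does (the server remaps uid 0 to an anonymous uid). If the ISO itself really is mode rwxrwx--- root:users on the server, a squashed root from dom0 would get Permission denied, which would fit the errno 13. A toy model of the remapping (65534/nfsnobody is an assumption for the anonymous uid):

```python
# Toy model of NFS root_squash: uid 0 on the client is remapped by the server.
NFSNOBODY = 65534  # assumed anonymous uid used by the server

def effective_uid(client_uid, root_squash=True, anon_uid=NFSNOBODY):
    return anon_uid if (root_squash and client_uid == 0) else client_uid

print(effective_uid(0))     # 65534 -> root arrives as nfsnobody
print(effective_uid(1000))  # 1000  -> ordinary users pass through
```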
Why is R/W access needed on the ISO SR?
-
All the access rights to the ISOs looked OK on the remote server, but just to check I wiped the share, recreated it, and copied the ISOs back. I then recreated the SR in XCP-ng, and now it works. It had all been working fine and I had not changed anything on the share; it just stopped working. So please consider this fixed now. It looks like some weird issue with the mount and not a problem with XCP-ng; sorry for wasting your time looking at this.
-
@geoffbland No problem. It's the first time I've seen this error with tapdisk (and it's even more surprising to have it on this type of SR...).
It had all been working fine and I had not changed anything on the share - it just stopped working
In this case, maybe there was a problem with the XAPI, a lock on the device, or something else. It's not easy to find the cause without remote access. Don't hesitate to ping us if this problem comes back.