XCP-ng

    VDI_IO_ERROR(Device I/O errors) when you run scheduled backup

    Xen Orchestra
    fachex @cbaguzman

      @cbaguzman Thank you! Unfortunately, I can't reproduce that. After the update, backups were not working at all; even the ones that "worked" failed when we tried to restore them. So we decided to reinstall 8.2 and stop applying updates. But in the process we think we configured things differently, and now we can't reproduce what we did before, even with the documentation we have. The moral of the story? Don't make your backup system too complicated πŸ™‚

      stormi (Vates πŸͺ XCP-ng Team)

        Does this mean it's working fine now?

        shorian @stormi

          @stormi I confess we're now encountering the same error message on nearly all our backups, including CR to a local host. We started from a fresh install with cleaned SRs; to rule out memory as the culprit we have upped dom0's memory to 16 GB (on a 128 GB machine), and XO is running with 16 GB of RAM, of which 12 GB is allocated to Node.
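
          (For reference, the dom0 memory bump was done the standard XCP-ng way - a sketch, assuming the stock xen-cmdline helper; it needs a host reboot to take effect:)

          /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=16384M,max:16384M   # then reboot the host
          free -g   # afterwards, confirm dom0 really sees ~16 GB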

          We've got the same problem occurring across all our hosts. Over 90% of backups error out with the VDI_IO_ERROR; however (weirdly), looking at the target end, I'd say that 75% of the backups 'seem' to complete successfully. I need to restore a couple to find out for sure, but I confess I've been concentrating on finding out what triggers the error rather than on whether it is misleading.

          I've gone through the logs in detail and unfortunately nothing jumps out. I'm going to take the time to extract the relevant sections from them all to see if you can spot something that I can't, but apart from lots of memory messages from squeezed there aren't any obvious errors.

          Bizarrely, SMlog is pretty clean - it's almost as if tapdisk receives a termination signal from somewhere rather than erroring out of its own accord; for example, tapdisk shuts down with "Closing after 0 errors" and no further explanation. I have found some talk that tapdisk can trigger a target reset after excessive I/O activity, but I've not managed to prove that yet.
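
          (This is roughly how I've been fishing for those messages - run in dom0 on both source and target; these are the stock XCP-ng log paths:)

          grep -i "VDI_IO_ERROR" /var/log/xensource.log | tail -20
          grep -i "tapdisk" /var/log/SMlog | tail -20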

          I'll keep digging into things; in short, it's not something only experienced by @fachex, but I haven't yet recreated it in XOA - it's on my todo list.

          (If you want me to download all the logs and send them across directly, or to do anything under the covers of XOA, please do let me know. I'm dipping into this when I get time, so it's not a continuous effort, I'm afraid.)

          stormi (Vates πŸͺ XCP-ng Team)

            We can't promise to put a lot of time into investigating it, and the causes could be various, but feel free to share the logs so that anyone with a bit of time can have a quick look for obvious issues.
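
            (If it helps: a host status report should bundle all the relevant logs into one archive - a sketch, assuming the stock xen-bugtool is still present in dom0:)

            xen-bugtool --yestoall   # writes a tarball under /var/opt/xen/bug-report/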

            shorian

              Have recreated this using XOA; unfortunately it failed as described above. I'm afraid this is going to be a long, painful post with lots of logs, so my apologies in advance, but hopefully someone cleverer than I am can see something that I can't.

              Setup: cleaned all the remnant VMs and VDIs before starting the backup (for some reason we seem to get a lot that aren't listed in the GUI but show up in the CLI); same for the halted VMs.
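
              (The clean-up itself was plain xe plumbing - a sketch with placeholder UUIDs; double-check that each remnant really is a leftover import before destroying it:)

              xe vm-list power-state=halted               # the hidden "[Importing...]" remnants show up here
              xe vm-uninstall uuid=<vm-uuid> force=true   # remove a remnant VM
              xe vdi-list sr-uuid=<sr-uuid>               # orphaned VDIs left behind by failed imports
              xe vdi-destroy uuid=<vdi-uuid>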

              Objective: three VMs are backed up from the source (xen11-lon1 in the logs) via CR to an identical target host (xen12-lon2 in the logs) on the local LAN. The first backup (a 98 GB VM) succeeds, the second (68 GB) fails after 7 minutes - this is the one we're discussing - and the third is automatically cancelled. With concurrency set to 1 we get a failure probably 50% of the time; with concurrency set to 2 it fails every time.

              Outcome:
              Attached is the backup log from XOA: backup NG.log.txt 😞

              Looking through the source's logs, there are quite a few storage errors repeating all the way through xensource.log, e.g.:

              xapi: [error||3 ||storage_migrate] Tapdisk mirroring has failed
              xapi: [debug||181905 ||storage_access] Received update: ["Mirror","34bd3a97-3562-75af-a24e-a266caf368e3/269a33a0-4a59-4cc3-8b29-1245c402e51c"]
              xapi: [debug||23 |sm_events D:2b460f139f9e|storage_access] sm event on mirror: 34bd3a97-3562-75af-a24e-a266caf368e3/269a33a0-4a59-4cc3-8b29-1245c402e51c
              xapi: [debug||181905 ||storage_access] Calling UPDATES.get Async.VM.migrate_send R:61453ededbc7 3663 30
              xapi: [ info||23 |sm_events D:2b460f139f9e|storage_impl] DATA.MIRROR.stat dbg:sm_events D:2b460f139f9e id:34bd3a97-3562-75af-a24e-a266caf368e3/269a33a0-4a59-4cc3-8b29-1245c402e51c
              xapi: [debug||23 |sm_events D:2b460f139f9e|storage_access] Mirror 34bd3a97-3562-75af-a24e-a266caf368e3/269a33a0-4a59-4cc3-8b29-1245c402e51c has failed
              xapi: [debug||23 |sm_events D:2b460f139f9e|storage_access] Mirror associated with task: OpaqueRef:61453ede-dbc7-4aed-bca3-00b91253bdda
              xapi: [error||23 |sm_events D:2b460f139f9e|storage_access] storage event: Caught Db_exn.DBCache_NotFound("missing row", "task", "OpaqueRef:61453ede-dbc7-4aed-bca3-00b91253bdda") while updating mirror
              

              ...whilst on the target we get the following xensource.log errors at the point the backup starts (even though the first VM backed up successfully?!):

              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80|Importing raw VDI R:063fb70f6b64|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80|Importing raw VDI R:063fb70f6b64|vhd_tool_wrapper] vhd-tool output: vhd-tool: Non-zero size required (either a pre-existing destination file or specified via --destination-size on the command line)\x0A
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621685 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:e90616ae272a created by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621685 UNIX /var/lib/xcp/xapi|session.slave_login D:e20578ac50c9|xapi_session] Session.create trackid=dabf19084add1449e2bb4eefb181a0fe pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621686 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:b215ffc57ba1 created by task D:e20578ac50c9
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VBD.unplug D:6ea0c0aa3771 created by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621687 UNIX /var/lib/xcp/xapi||taskhelper] task VBD.unplug R:1ac7c9ffd569 (uuid:3078efb7-ffd8-2b8a-f6d9-c7b3be9feb98) created (trackid=dabf19084add1449e2bb4eefb181a0fe) by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|audit] VBD.unplug: VBD = '4ccde4ad-9d36-948a-db99-abfa6b80e1ec'
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|vbdops] VBD.unplug of loopback VBD 'OpaqueRef:d7d882f5-b195-4ece-981f-00fc9511ee8a'
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] DP.destroy dbg:OpaqueRef:1ac7c9ff-d569-4e1d-8fe2-688917213cd2 dp:vbd/0/xvda allow_leak:false
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] [destroy_sr] Filtered VDI count:0
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: message repeated 3 times: [ [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] [destroy_sr] Filtered VDI count:0]
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] [destroy_sr] Filtered VDI count:1
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] [destroy_sr] VDI found with the dp is f9d37af0-10d4-4824-8306-f849c0c74d52
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] dbg:OpaqueRef:1ac7c9ff-d569-4e1d-8fe2-688917213cd2 dp:vbd/0/xvda sr:cf2dbaa3-21f3-903b-0fd1-fbe68539f897 vdi:f9d37af0-10d4-4824-8306-f849c0c74d52 superstate:activated RW
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621687 UNIX /var/lib/xcp/xapi|VBD.unplug R:1ac7c9ffd569|storage_impl] dbg:OpaqueRef:1ac7c9ff-d569-4e1d-8fe2-688917213cd2 dp:vbd/0/xvda sr:cf2dbaa3-21f3-903b-0fd1-fbe68539f897 vdi:f9d37af0-10d4-4824-8306-f849c0c74d52 superstate:activated RW
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621692 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VBD.destroy D:7ee64ab7600c created by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621692 UNIX /var/lib/xcp/xapi||taskhelper] task VBD.destroy R:dd91e7b81fb2 (uuid:27a52aa2-68dd-1e3d-6c49-184ab9ccdfa7) created (trackid=dabf19084add1449e2bb4eefb181a0fe) by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621692 UNIX /var/lib/xcp/xapi|VBD.destroy R:dd91e7b81fb2|audit] VBD.destroy: VBD = '4ccde4ad-9d36-948a-db99-abfa6b80e1ec'
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621692 UNIX /var/lib/xcp/xapi|VBD.destroy R:dd91e7b81fb2|xapi_vbd_helpers] VBD.destroy (uuid = 4ccde4ad-9d36-948a-db99-abfa6b80e1ec; ref = OpaqueRef:d7d882f5-b195-4ece-981f-00fc9511ee8a)
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621693 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:bdbdfd349487 created by task R:063fb70f6b64
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621693 UNIX /var/lib/xcp/xapi|session.logout D:443bf85f4ecf|xapi_session] Session.destroy trackid=dabf19084add1449e2bb4eefb181a0fe
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80|Importing raw VDI R:063fb70f6b64|import] Caught exception: VDI_IO_ERROR: [ Device I/O errors ]
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621599 INET :::80|Importing raw VDI R:063fb70f6b64|taskhelper] the status of R:063fb70f6b64 is failure; cannot set it to `failure
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] Importing raw VDI R:063fb70f6b64 failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 1/12 xapi Raised at file ocaml/xapi/vhd_tool_wrapper.ml, line 75
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 2/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 3/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 4/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 5/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 6/12 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 156
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 7/12 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 194
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 8/12 xapi Called from file ocaml/xapi/server_helpers.ml, line 100
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 9/12 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 10/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 11/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 12/12 xapi Called from file lib/backtrace.ml, line 177
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace]
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [debug||621694 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:bdc1513b3d35 created by task D:39d05a9ec99c
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [ info||621694 UNIX /var/lib/xcp/xapi|session.logout D:cc7477457aac|xapi_session] Session.destroy trackid=758ce247f3c6d83f646e031b6504357f
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||import] Caught exception in import handler: VDI_IO_ERROR: [ Device I/O errors ]
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] VDI.import D:39d05a9ec99c failed with exception Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] Raised Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 1/9 xapi Raised at file unix.ml, line 328
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 2/9 xapi Called from file lib/xapi-stdext-unix/unixext.ml, line 469
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 3/9 xapi Called from file lib/xapi-stdext-unix/unixext.ml, line 471
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 4/9 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 232
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 5/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 100
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 6/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 7/9 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 8/9 xapi Called from file map.ml, line 135
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 9/9 xapi Called from file sexp_conv.ml, line 147
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace]
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] VDI.import D:29850c1bafd6 failed with exception Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] Raised Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace] 1/1 xapi Raised at file (Thread 621599 has no backtrace table. Was with_backtraces called?, line 0
              Jan 15 18:19:48 xcp-ng-xen12-lon2 xapi: [error||621599 INET :::80||backtrace]
              

              ...followed by the errors below when we reach the VM whose backup will fail - the error being 'duplicate key', even though the target host and SR were completely cleaned before the backup, including via the CLI for the usual remnants that don't show in the GUI:

              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80|VM.add_to_other_config D:48da5df44120|sql] Duplicate key in set or map: table VM; field other_config; ref OpaqueRef:fde7f87b-0f36-4d02-810b-249a7cede061; key xo:backup:sr
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] VM.add_to_other_config D:48da5df44120 failed with exception Db_exn.Duplicate_key("VM", "other_config", "OpaqueRef:fde7f87b-0f36-4d02-810b-249a7cede061", "xo:backup:sr")
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] Raised Db_exn.Duplicate_key("VM", "other_config", "OpaqueRef:fde7f87b-0f36-4d02-810b-249a7cede061", "xo:backup:sr")
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 1/8 xapi Raised at file ocaml/database/db_cache_impl.ml, line 310
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 2/8 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 3/8 xapi Called from file ocaml/xapi/rbac.ml, line 231
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 4/8 xapi Called from file ocaml/xapi/server_helpers.ml, line 100
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 5/8 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 6/8 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 7/8 xapi Called from file map.ml, line 135
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace] 8/8 xapi Called from file sexp_conv.ml, line 147
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [error||623599 INET :::80||backtrace]
              Jan 15 19:17:36 xcp-ng-xen12-lon2 xapi: [debug||623600 INET :::80||api_effect] VM.remove_from_other_config
              

              ...then lots more debug messages, concluding with the VDI_IO_ERROR being returned:

              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623670 UNIX /var/lib/xcp/xapi|VBD.plug R:caeff9d15528|vbdops] device path: sm/backend/cf2dbaa3-21f3-903b-0fd1-fbe68539f897/ea4a2c16-5268-4e53-9efb-c637198fe77d
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623653 INET :::80|Importing raw VDI R:ce4f60abdac4|vhd_tool_wrapper] Executing /bin/vhd-tool serve --source-format vhd --source-protocol none --source-fd 61bc8345-8fa7-4193-b514-99023da700e4 -
              -tar-filename-prefix  --destination file:///dev/sm/backend/cf2dbaa3-21f3-903b-0fd1-fbe68539f897/ea4a2c16-5268-4e53-9efb-c637198fe77d --destination-format raw --progress --machine --direct --destination-size 0 --prezeroed
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623660 INET :::80|[XO] VDI Content Import R:d79ee3e4c7cd|vhd_tool_wrapper] Executing /bin/vhd-tool serve --source-format vhd --source-protocol none --source-fd 41731693-3426-42e6-9a43-5fda7eaf
              c59f --tar-filename-prefix  --destination file:///dev/sm/backend/cf2dbaa3-21f3-903b-0fd1-fbe68539f897/ea4a2c16-5268-4e53-9efb-c637198fe77d --destination-format raw --progress --machine --direct --destination-size 75491041792 --prezeroed
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80|Importing raw VDI R:ce4f60abdac4|vhd_tool_wrapper] vhd-tool failed, returning VDI_IO_ERROR
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80|Importing raw VDI R:ce4f60abdac4|vhd_tool_wrapper] vhd-tool output: vhd-tool: Non-zero size required (either a pre-existing destination file or specified via --destination-size on the command line)\x0A
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623738 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:db69be120ea6 created by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623738 UNIX /var/lib/xcp/xapi|session.slave_login D:094ed09a64e9|xapi_session] Session.create trackid=ba066585a7083cd229be4a26ef154317 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623739 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:9bcd81e4dd62 created by task D:094ed09a64e9
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VBD.unplug D:a0deb2a382f6 created by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623740 UNIX /var/lib/xcp/xapi||taskhelper] task VBD.unplug R:c0cac1edb5c5 (uuid:8eab9844-7218-b1a9-a94a-ade7e36b70a3) created (trackid=ba066585a7083cd229be4a26ef154317) by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|audit] VBD.unplug: VBD = 'a375bd27-cea7-4263-ad72-33012568155e'
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|vbdops] VBD.unplug of loopback VBD 'OpaqueRef:314f1ee1-0dd4-462a-8177-cd1bcbde0f0d'
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] DP.destroy dbg:OpaqueRef:c0cac1ed-b5c5-4475-a71a-b4303260e39d dp:vbd/0/xvda allow_leak:false
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] [destroy_sr] Filtered VDI count:0
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: message repeated 3 times: [ [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] [destroy_sr] Filtered VDI count:0]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] [destroy_sr] Filtered VDI count:1
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] [destroy_sr] VDI found with the dp is ea4a2c16-5268-4e53-9efb-c637198fe77d
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] dbg:OpaqueRef:c0cac1ed-b5c5-4475-a71a-b4303260e39d dp:vbd/0/xvda sr:cf2dbaa3-21f3-903b-0fd1-fbe68539f897 vdi:ea4a2c16-5268-4e53-9efb-c637198fe77d superstate:activated RW
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623740 UNIX /var/lib/xcp/xapi|VBD.unplug R:c0cac1edb5c5|storage_impl] dbg:OpaqueRef:c0cac1ed-b5c5-4475-a71a-b4303260e39d dp:vbd/0/xvda sr:cf2dbaa3-21f3-903b-0fd1-fbe68539f897 vdi:ea4a2c16-5268-4e53-9efb-c637198fe77d superstate:activated RW
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623743 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:VBD.destroy D:830c73f393b5 created by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623743 UNIX /var/lib/xcp/xapi||taskhelper] task VBD.destroy R:43bb36d95384 (uuid:c96a5c81-59f0-a49f-c934-0aa4b57dece6) created (trackid=ba066585a7083cd229be4a26ef154317) by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623743 UNIX /var/lib/xcp/xapi|VBD.destroy R:43bb36d95384|audit] VBD.destroy: VBD = 'a375bd27-cea7-4263-ad72-33012568155e'
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623743 UNIX /var/lib/xcp/xapi|VBD.destroy R:43bb36d95384|xapi_vbd_helpers] VBD.destroy (uuid = a375bd27-cea7-4263-ad72-33012568155e; ref = OpaqueRef:314f1ee1-0dd4-462a-8177-cd1bcbde0f0d)
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623744 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:528a483a90fa created by task R:ce4f60abdac4
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623744 UNIX /var/lib/xcp/xapi|session.logout D:89cb21d91bc0|xapi_session] Session.destroy trackid=ba066585a7083cd229be4a26ef154317
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80|Importing raw VDI R:ce4f60abdac4|import] Caught exception: VDI_IO_ERROR: [ Device I/O errors ]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623653 INET :::80|Importing raw VDI R:ce4f60abdac4|taskhelper] the status of R:ce4f60abdac4 is failure; cannot set it to `failure
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] Importing raw VDI R:ce4f60abdac4 failed with exception Server_error(VDI_IO_ERROR, [ Device I/O errors ])
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] Raised Server_error(VDI_IO_ERROR, [ Device I/O errors ])
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 1/12 xapi Raised at file ocaml/xapi/vhd_tool_wrapper.ml, line 75
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 2/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 3/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 4/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 5/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 6/12 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 156
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 7/12 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 194
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 8/12 xapi Called from file ocaml/xapi/server_helpers.ml, line 100
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 9/12 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 10/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 11/12 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 35
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 12/12 xapi Called from file lib/backtrace.ml, line 177
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623745 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:b305b9778ea7 created by task D:378e6880299b
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [ info||623745 UNIX /var/lib/xcp/xapi|session.logout D:32b261df8c78|xapi_session] Session.destroy trackid=2de39b11c565eee741af8c59195f712e
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||import] Caught exception in import handler: VDI_IO_ERROR: [ Device I/O errors ]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] VDI.import D:378e6880299b failed with exception Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] Raised Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 1/9 xapi Raised at file unix.ml, line 328
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 2/9 xapi Called from file lib/xapi-stdext-unix/unixext.ml, line 469
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 3/9 xapi Called from file lib/xapi-stdext-unix/unixext.ml, line 471
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 4/9 xapi Called from file ocaml/xapi/import_raw_vdi.ml, line 232
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 5/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 100
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 6/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 121
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 7/9 xapi Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 8/9 xapi Called from file map.ml, line 135
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 9/9 xapi Called from file sexp_conv.ml, line 147
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] VDI.import D:c2494631d071 failed with exception Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] Raised Unix.Unix_error(Unix.EPIPE, "single_write", "")
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace] 1/1 xapi Raised at file (Thread 623653 has no backtrace table. Was with_backtraces called?, line 0
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [error||623653 INET :::80||backtrace]
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||575 ||xenops] Event on VM 35d318ed-1b0b-44f9-bf9e-da6ce90f0f24; resident_here = true
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||575 ||dummytaskhelper] task timeboxed_rpc D:2765a7d8c02f created by task D:7b99bbf373c5
              Jan 15 19:17:40 xcp-ng-xen12-lon2 xapi: [debug||623748 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:7e5dfbe36065 created by task D:7b99bbf373c5
              

              Nothing jumps out in SMlog on either source or target.

              In the XOA syslog, the only line that refers to the failure is:

              xo:xo-server ERROR unhandled error event { error: Cancel { message: 'aborted' } }
              

              fsck shows the disks as clean, and dom0 has 16 GB of memory allocated. I've tried numerous variations: cleaning the target down, a fresh install of XCP-ng, etc.

              If anyone has any ideas or other areas I should be looking, please do jump in. I'm out of ideas 😞
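
              (In case anyone wants to dig on the XO side: the error line above came from the xo-server service log - assuming the usual systemd unit name, which both XOA and an XO-from-sources install normally use:)

              journalctl -u xo-server -n 100 --no-pager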

              shorian @shorian

                Forgot to say - the duplicate referred to in the error is the "[Importing...BackupProcessName-timestamp]" halted VM that the backup process itself creates - i.e. the remnants I spoke of at the outset that needed to be removed, which don't show in the GUI but can be seen via the CLI using:

                xe vm-list power-state=halted 
                
                shorian @shorian

                  Have just updated the hosts with the swathe of patches that came out this morning, and tested with a fresh build from master as of today.

                  Unfortunately we're still getting these VDI_IO_ERROR errors when running CR, be it from a trial XOA install or from the latest build from master (rebuilt this morning); nothing particularly helpful jumps out of the logs:

                  2021-01-23T15_43_13.502Z - backup NG.log.txt

                  Question to the community: are other people seeing the same VDI_IO_ERROR occasionally, or is this a unique case? Even if you can't provide access to your logs for privacy reasons, it would be interesting to know whether others are seeing it. Of course, that relies upon others seeing this message... πŸ™‚

                  Thanks!

                    olivierlambert (Vates πŸͺ Co-Founder CEO)

                    Check the usual logs (SMlog and dmesg) to see if we can spot the root cause.

                      shorian @olivierlambert

                      @olivierlambert Nothing jumps out from dmesg, and nothing useful in SMlog when running the CR; however, we do get the following exceptions in SMlog upon boot (the same error for each disk array):

                       SMGC: [24044] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                       SMGC: [24044]          ***********************
                       SMGC: [24044]          *  E X C E P T I O N  *
                       SMGC: [24044]          ***********************
                       SMGC: [24044] gc: EXCEPTION <class 'util.SMException'>, SR 7e772759-f44e-af14-eb45-e839cc67689c not attached on this host
                       SMGC: [24044]   File "/opt/xensource/sm/cleanup.py", line 3354, in gc
                       SMGC: [24044]     _gc(None, srUuid, dryRun)
                       SMGC: [24044]   File "/opt/xensource/sm/cleanup.py", line 3233, in _gc
                       SMGC: [24044]     sr = SR.getInstance(srUuid, session)
                       SMGC: [24044]   File "/opt/xensource/sm/cleanup.py", line 1552, in getInstance
                       SMGC: [24044]     return FileSR(uuid, xapi, createLock, force)
                       SMGC: [24044]   File "/opt/xensource/sm/cleanup.py", line 2330, in __init__
                       SMGC: [24044]     SR.__init__(self, uuid, xapi, createLock, force)
                       SMGC: [24044]   File "/opt/xensource/sm/cleanup.py", line 1582, in __init__
                       SMGC: [24044]     raise util.SMException("SR %s not attached on this host" % uuid)
                       SMGC: [24044]
                       SMGC: [24044] *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
                       SMGC: [24044] * * * * * SR 7e772759-f44e-af14-eb45-e839cc67689c: ERROR
                       SMGC: [24044]
                       SM: [24066] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr
                       SM: [24066] sr_update {'sr_uuid': '7e772759-f44e-af14-eb45-e839cc67689c', 'subtask_of': 'DummyRef:|ed453e0f-248d-403f-9499-ee7255fdf429|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:c1b3f3e8-579b-4b35-9bb6-dcad830583c3', 'session_ref': 'OpaqueRef:dafc19b5-65af-4501-a151-1999d8e7f550', 'device_config': {'device': '/dev/disk/by-id/scsi-2ad3a730200d00000-part1', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:bfd47d28-1be1-4dd6-8a7b-2c0a986b8d47', 'local_cache_sr': '34bd3a97-3562-75af-a24e-a266caf368e3'}
                       SM: [24088] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr
                       SM: [24088] lock: acquired /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr
                       SM: [24088] sr_scan {'sr_uuid': '7e772759-f44e-af14-eb45-e839cc67689c', 'subtask_of': 'DummyRef:|017c9f00-7289-4983-a942-f91c830fed33|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:c1b3f3e8-579b-4b35-9bb6-dcad830583c3', 'session_ref': 'OpaqueRef:a907696c-b7cf-4ac8-b772-b22833073585', 'device_config': {'device': '/dev/disk/by-id/scsi-2ad3a730200d00000-part1', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:bfd47d28-1be1-4dd6-8a7b-2c0a986b8d47', 'local_cache_sr': '34bd3a97-3562-75af-a24e-a266caf368e3'}
                       SM: [24088] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/7e772759-f44e-af14-eb45-e839cc67689c/*.vhd']
                       SM: [24088]   pread SUCCESS
                       SM: [24088] ['ls', '/var/run/sr-mount/7e772759-f44e-af14-eb45-e839cc67689c', '-1', '--color=never']
                       SM: [24088]   pread SUCCESS
                       SM: [24088] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/running
                       SM: [24088] lock: tried lock /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/running, acquired: True (exists: True)
                       SM: [24088] lock: released /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/running
                       SM: [24088] Kicking GC
                       SMGC: [24088] === SR 7e772759-f44e-af14-eb45-e839cc67689c: gc ===
                       SMGC: [24104] Will finish as PID [24105]
                       SM: [24105] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/running
                       SMGC: [24088] New PID [24104]
                       SM: [24105] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/gc_active
                       SM: [24088] lock: released /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr
                       SM: [24105] lock: opening lock file /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr
                       SMGC: [24105] Found 0 cache files
                       SM: [24105] lock: tried lock /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/gc_active, acquired: True (exists: True)
                       SM: [24105] lock: tried lock /var/lock/sm/7e772759-f44e-af14-eb45-e839cc67689c/sr, acquired: True (exists: True)
                       SM: [24105] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/7e772759-f44e-af14-eb45-e839cc67689c/*.vhd']
                       SM: [24105]   pread SUCCESS
                      

                      Subsequent to the above, I found that I get an identical error when reattaching the SR (after detaching it).
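
                      (Reproducing that by hand is just a PBD replug while watching SMlog - placeholder UUID, obtained from xe pbd-list sr-uuid=<sr-uuid>:)

                      xe pbd-unplug uuid=<pbd-uuid>
                      xe pbd-plug uuid=<pbd-uuid>
                      tail -f /var/log/SMlog   # the SMGC exception shows up here during the replug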

                      xapi-explore-sr does not show anything interesting.

                      Thanks chap; very kind of you.

                       olivierlambert (Vates πŸͺ Co-Founder CEO)

                        @shorian said in VDI_IO_ERROR(Device I/O errors) when you run scheduled backup:

                        SR 7e772759-f44e-af14-eb45-e839cc67689c not attached on this host

                         Weird. Is it a shared SR? If that's the case, the master node needs to have access to the SR (because the coalesce mechanism runs on the master).
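
                         (Quick ways to check, from any pool member - the UUID is the one from the error above:)

                         xe sr-param-list uuid=7e772759-f44e-af14-eb45-e839cc67689c | grep shared
                         xe pool-list params=master
                         xe pbd-list sr-uuid=7e772759-f44e-af14-eb45-e839cc67689c params=host-name-label,currently-attached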

                           shorian @olivierlambert

                           @olivierlambert Nope, not shared. It's a standalone box with two local disk arrays, both reporting the same error on boot / reattach, yet they seem performant for normal operations. We only get the exception in SMlog when booting, and frequent (but not constant) VDI_IO_ERRORs when taking backups. The VMs themselves seem to be fine. Quite strange.

                          Both SRs are ext4 in RAID1, one array of 2xSSD and one of 2xSATA.
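
                           (To rule the arrays themselves out, the obvious checks would be - assuming Linux md software RAID, and smartmontools being available in dom0:)

                           cat /proc/mdstat       # RAID1 sync state, if it's md RAID
                           smartctl -H /dev/sda   # per-disk health, if smartctl is installed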

                          Should I try reinstalling the host?

                             olivierlambert (Vates πŸͺ Co-Founder CEO)

                             That's weird. Can you double check there's no "ghost" host that might have been there before? E.g. with xe host-list.

                             Also run xe sr-param-list uuid=7e772759-f44e-af14-eb45-e839cc67689c to see if we can spot anything.

                               shorian @olivierlambert

                               @olivierlambert Afraid there's nothing interesting:

                              # xe host-list
                              uuid ( RO)                : 3d60f1d4-595e-4428-9c24-6409db9593bc
                                        name-label ( RW): xen11
                                  name-description ( RW): ABC
                              
                              # xe sr-param-list uuid=7e772759-f44e-af14-eb45-e839cc67689c
                              
                              uuid ( RO)                    : 7e772759-f44e-af14-eb45-e839cc67689c
                                            name-label ( RW): Xen11 SSD
                                      name-description ( RW): Xen11 SSD
                                                  host ( RO): xen11
                                    allowed-operations (SRO): VDI.enable_cbt; VDI.list_changed_blocks; unplug; plug; PBD.create; VDI.disable_cbt; update; PBD.destroy; VDI.resize; VDI.clone; VDI.data_destroy; scan; VDI.snapshot; VDI.mirror; VDI.create; VDI.destroy; VDI.set_on_boot
                                    current-operations (SRO):
                                                  VDIs (SRO):
                                                  PBDs (SRO): 9e967e89-7263-40b5-5706-cd2d449b7192
                                    virtual-allocation ( RO): 0
                                  physical-utilisation ( RO): 75665408
                                         physical-size ( RO): 491294261248
                                                  type ( RO): ext
                                          content-type ( RO): user
                                                shared ( RW): false
                                         introduced-by ( RO): <not in database>
                                           is-tools-sr ( RO): false
                                          other-config (MRW):
                                             sm-config (MRO): devserial: scsi-2ad3a730200d00000
                                                 blobs ( RO):
                                   local-cache-enabled ( RO): false
                                                  tags (SRW):
                                             clustered ( RO): false
                              
                                 olivierlambert (Vates πŸͺ Co-Founder CEO)

                                What about xe pbd-param-list uuid=9e967e89-7263-40b5-5706-cd2d449b7192?

                                   shorian @olivierlambert

                                  😞

                                  # xe pbd-param-list uuid=9e967e89-7263-40b5-5706-cd2d449b7192
                                  uuid ( RO)                  : 9e967e89-7263-40b5-5706-cd2d449b7192
                                       host ( RO) [DEPRECATED]: 3d60f1d4-595e-4428-9c24-6409db9593bc
                                               host-uuid ( RO): 3d60f1d4-595e-4428-9c24-6409db9593bc
                                         host-name-label ( RO): xen11
                                                 sr-uuid ( RO): 7e772759-f44e-af14-eb45-e839cc67689c
                                           sr-name-label ( RO): Xen11 SSD
                                           device-config (MRO): device: /dev/disk/by-id/scsi-2ad3a730200d00000-part1
                                      currently-attached ( RO): true
                                            other-config (MRW): storage_driver_domain: OpaqueRef:ce759085-6e0d-4484-9f8a-abf75b822f75
                                  
                                     olivierlambert (Vates πŸͺ Co-Founder CEO)

                                    Indeed, nothing special here.

                                       shorian @olivierlambert

                                      I'll try a fresh install over the next couple of days and see if it reoccurs. Looking at the other boxes, I have the same error on one of the other hosts, but it's not across all of them despite identical installs and hardware.

                                       Thanks for your efforts and help; shame there wasn't an easy answer, but let's see if it reoccurs after a completely fresh install.

                                         olivierlambert (Vates πŸͺ Co-Founder CEO)

                                        Please keep us posted πŸ™‚

                                           shorian @olivierlambert

                                          Have just run a clean install - identical error in SMlog 😞

                                           The following was the last message in dmesg, which seems to contradict the SMlog error:

                                          EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
                                          

                                           I can send the full SMlog and anything else you can think of - but it's either a hardware error (in which case, why an identical error across two hosts?) or an incorrect setting in the install process. I'm at a loss.
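
                                           (Sanity checks on the mount itself, assuming the standard /var/run/sr-mount layout - the UUID is the SSD SR from earlier:)

                                           mount | grep 7e772759-f44e-af14-eb45-e839cc67689c
                                           df -h /var/run/sr-mount/7e772759-f44e-af14-eb45-e839cc67689c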

                                             shorian @shorian

                                             Have tried reformatting the disks after changing the partition scheme - it was previously GPT with ext4; I tried MBR instead, but there was no difference, even after another fresh install.
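
                                             (For reference, recreating a local ext SR after a reformat is a one-liner - a sketch with a placeholder host UUID; the device path is the one from the logs above:)

                                             xe sr-create host-uuid=<host-uuid> type=ext content-type=user shared=false \
                                                 name-label="Xen11 SSD" device-config:device=/dev/disk/by-id/scsi-2ad3a730200d00000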

                                             I'm going through a comparison with another machine that is identical kit but doesn't have these errors, and I can't see any differences whatsoever 😞
