XCP-ng

    Windows 11 VMs failing to boot

    Solved | Management
    • McHenry

      I have a number of VMs on a host: Linux, Windows Server and Windows 11 Pro.

      I have restarted the host and all VMs have restarted, except the two Windows 11 Pro VMs, which are stuck at 67%.

      Does anyone know what may be going on here?

        • McHenry @McHenry

          If this helps: when the VM boot process hangs at 67%, there is nothing shown in the console window.

          • dinhngtu Vates 🪐 XCP-ng Team @McHenry

            @McHenry Do you have any relevant host logs in /var/log/xensource.log and /var/log/daemon.log (look for xen_platform_log)?
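            For example, a quick way to pull just those entries out on the host (the VM UUID below is a placeholder):

                # Windows PV-driver messages forwarded by QEMU land in daemon.log
                grep xen_platform_log /var/log/daemon.log | tail -n 50
                # xenopsd/xapi activity for one particular VM
                grep <vm-uuid> /var/log/xensource.log | tail -n 50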

            • McHenry @dinhngtu

              @dinhngtu

              [10:38 hst100 xen]# tail -f /var/log/xensource.log
              Jul 11 10:41:22 hst100 xapi: [ info||17967 |sm_exec D:1f5adec885fc|xapi_session] Session.destroy trackid=d57ddf3b827b76a764382bb034f6df00
              Jul 11 10:41:22 hst100 xapi: [debug||17967 |OpaqueRef:2f7768a0-46af-7e7a-f61a-75b641fe160d|dummytaskhelper] task SR.stat D:2e250443803d created by task R:2f7768a046af
              Jul 11 10:41:22 hst100 xapi: [debug||17967 |SR.stat D:2e250443803d|sm] SM nfs sr_update sr=OpaqueRef:cab6d499-a2af-f407-b5d7-cc5106101c59
              Jul 11 10:41:22 hst100 xapi: [ info||17967 |sm_exec D:43e19d131b64|xapi_session] Session.create trackid=51e6c3231dd12ea7d711ef4cfef8a392 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:22 hst100 xapi: [debug||17981 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:ee0f82c1e793 created by task D:43e19d131b64
              Jul 11 10:41:22 hst100 xapi: [ info||17982 /var/lib/xcp/xapi|session.login_with_password D:ea7449fe408a|xapi_session] Session.create trackid=35cd066b3e308a3c646cac5035147382 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:22 hst100 xapi: [debug||17983 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:8998fc7453cf created by task D:ea7449fe408a
              Jul 11 10:41:22 hst100 xapi: [ info||17982 /var/lib/xcp/xapi|session.logout D:802a90760a73|xapi_session] Session.destroy trackid=35cd066b3e308a3c646cac5035147382
              Jul 11 10:41:22 hst100 xapi: [ info||17984 /var/lib/xcp/xapi|session.login_with_password D:bb6735f8c01e|xapi_session] Session.create trackid=01bb9a84f1ed735542a4cbdcb22dc3e4 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:22 hst100 xapi: [debug||17985 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:a641844de76c created by task D:bb6735f8c01e
              Jul 11 10:41:24 hst100 xapi: [ info||17967 |sm_exec D:43e19d131b64|xapi_session] Session.destroy trackid=51e6c3231dd12ea7d711ef4cfef8a392
              Jul 11 10:41:24 hst100 xapi: [debug||17965 /var/lib/xcp/xapi|SR.scan R:2f7768a046af|xapi_sr] Xapi_sr.scan.(fun).scan_rec no change detected, updating VDIs
              Jul 11 10:41:24 hst100 xapi: [debug||17965 /var/lib/xcp/xapi|SR.scan R:2f7768a046af|message_forwarding] Unmarking SR after SR.scan (task=OpaqueRef:2f7768a0-46af-7e7a-f61a-75b641fe160d)
              Jul 11 10:41:24 hst100 xapi: [debug||17987 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.logout D:d6d32a67415f created by task D:101b45c58ffc
              Jul 11 10:41:24 hst100 xapi: [ info||17987 /var/lib/xcp/xapi|session.logout D:ae943942b564|xapi_session] Session.destroy trackid=f6fa2f161f0320e76f9eaa32793303e0
              Jul 11 10:41:24 hst100 xapi: [debug||17959 |scan one D:101b45c58ffc|xapi_sr] Scan of SR d4b22411-592f-7ada-0597-68dbcb56ee4d complete.
              Jul 11 10:41:27 hst100 xcp-rrdd: [ info||9 ||rrdd_main] memfree has changed to 1080120 in domain 4
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||6 |events|xenops_server] Received an event on managed VM 4b27ed77-6a0e-cb7f-ccaa-1f852861d190
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||6 |queue|xenops_server] Queue.push ["VM_check_state","4b27ed77-6a0e-cb7f-ccaa-1f852861d190"] onto 4b27ed77-6a0e-cb7f-ccaa-1f852861d190:[  ]
              Jul 11 10:41:27 hst100 squeezed: [debug||4 ||squeeze_xen] watch /data/updated <- Fri Jul 11 10:41:27 2025
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 ||xenops_server] Queue.pop returned ["VM_check_state","4b27ed77-6a0e-cb7f-ccaa-1f852861d190"]
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 |events|xenops_server] Task 1417 reference events: ["VM_check_state","4b27ed77-6a0e-cb7f-ccaa-1f852861d190"]
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 |events|xenops_server] VM 4b27ed77-6a0e-cb7f-ccaa-1f852861d190 is not requesting any attention
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 |events|xenops_server] VM_DB.signal 4b27ed77-6a0e-cb7f-ccaa-1f852861d190
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 |events|task_server] Task 1417 completed; duration = 0
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||26 ||xenops_server] TASK.signal 1417 (object deleted)
              Jul 11 10:41:27 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] Processing event: ["Vm","4b27ed77-6a0e-cb7f-ccaa-1f852861d190"]
              Jul 11 10:41:27 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenops event on VM 4b27ed77-6a0e-cb7f-ccaa-1f852861d190
              Jul 11 10:41:27 hst100 xenopsd-xc: [debug||2932 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops_server] VM.stat 4b27ed77-6a0e-cb7f-ccaa-1f852861d190
              Jul 11 10:41:27 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenopsd event: processing event for VM 4b27ed77-6a0e-cb7f-ccaa-1f852861d190
              Jul 11 10:41:27 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] Supressing VM.allowed_operations update because guest_agent data is largely the same
              Jul 11 10:41:27 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenopsd event: Updating VM 4b27ed77-6a0e-cb7f-ccaa-1f852861d190 domid 4 guest_agent
              Jul 11 10:41:29 hst100 xapi: [ info||17988 /var/lib/xcp/xapi|session.login_with_password D:241021bd77e1|xapi_session] Session.create trackid=92f90e5f1c41bf1f25a7e8decad1be50 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:29 hst100 xapi: [debug||17989 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:4faa903a387e created by task D:241021bd77e1
              Jul 11 10:41:29 hst100 xapi: [ info||17984 /var/lib/xcp/xapi|session.logout D:b78e5175096e|xapi_session] Session.destroy trackid=01bb9a84f1ed735542a4cbdcb22dc3e4
              Jul 11 10:41:35 hst100 xcp-rrdd: [ info||9 ||rrdd_main] memfree has changed to 6554228 in domain 5
              Jul 11 10:41:36 hst100 xapi: [debug||16966 ||sparse_dd_wrapper] sparse_dd: Progress: 19
              Jul 11 10:41:36 hst100 xapi: [debug||16966 ||storage] TASK.signal 19 = ["Pending",0.22100000000000003]
              Jul 11 10:41:36 hst100 xapi: [debug||23 |sm_events D:a7cbe4356632|storage_access] sm event on Task 19
              Jul 11 10:41:36 hst100 xapi: [debug||12554 HTTPS 51.161.213.26->|Async.VM.migrate_send R:78b4661066db|storage_access] Received update: ["Task","19"]
              Jul 11 10:41:36 hst100 xapi: [debug||12554 HTTPS 51.161.213.26->|Async.VM.migrate_send R:78b4661066db|storage_access] Calling UPDATES.get Async.VM.migrate_send R:78b4661066db 1749 30
              Jul 11 10:41:47 hst100 xapi: [debug||177 scanning_thread|SR scanner D:97b30c7439bc|xapi_sr] Automatically scanning SRs = [ OpaqueRef:cab6d499-a2af-f407-b5d7-cc5106101c59;OpaqueRef:3e473288-a1f8-f0bb-80d7-3baa0ee0628b ]
              Jul 11 10:41:47 hst100 xapi: [debug||17990 ||dummytaskhelper] task scan one D:f280daf4394a created by task D:97b30c7439bc
              Jul 11 10:41:47 hst100 xapi: [debug||17991 ||dummytaskhelper] task scan one D:7d4bfe2b75fa created by task D:97b30c7439bc
              Jul 11 10:41:47 hst100 xapi: [debug||17992 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.slave_login D:0908878b4b6b created by task D:f280daf4394a
              Jul 11 10:41:47 hst100 xapi: [ info||17992 /var/lib/xcp/xapi|session.slave_login D:8b0c6872d88c|xapi_session] Session.create trackid=aeaa0692cf6b9ee21b44fba5b9a33cc6 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:47 hst100 xapi: [debug||17993 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.slave_login D:09334d9f54d0 created by task D:7d4bfe2b75fa
              Jul 11 10:41:47 hst100 xapi: [ info||17993 /var/lib/xcp/xapi|session.slave_login D:96939f2b4390|xapi_session] Session.create trackid=2f7776a5c3a2cce90ffe012a98075090 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:47 hst100 xapi: [debug||17995 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:e9fd3b387f46 created by task D:96939f2b4390
              Jul 11 10:41:47 hst100 xapi: [debug||17994 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:5b55216f450c created by task D:8b0c6872d88c
              Jul 11 10:41:47 hst100 xapi: [debug||17996 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:SR.scan D:1d51f520afa5 created by task D:f280daf4394a
              Jul 11 10:41:47 hst100 xapi: [debug||17997 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:SR.scan D:36c909da8770 created by task D:7d4bfe2b75fa
              Jul 11 10:41:47 hst100 xapi: [ info||17996 /var/lib/xcp/xapi|dispatch:SR.scan D:1d51f520afa5|taskhelper] task SR.scan R:06b4b7bb8db2 (uuid:ddd5c80f-327e-91ce-d11f-35655171a772) created (trackid=aeaa0692cf6b9ee21b44fba5b9a33cc6) by task D:f280daf4394a
              Jul 11 10:41:47 hst100 xapi: [debug||17996 /var/lib/xcp/xapi|SR.scan R:06b4b7bb8db2|message_forwarding] SR.scan: SR = 'd4b22411-592f-7ada-0597-68dbcb56ee4d (HST100 Backup)'
              Jul 11 10:41:47 hst100 xapi: [ info||17997 /var/lib/xcp/xapi|dispatch:SR.scan D:36c909da8770|taskhelper] task SR.scan R:08bcdeea9fb1 (uuid:0f824f57-8c33-2062-bf66-d9b72fb56f3b) created (trackid=2f7776a5c3a2cce90ffe012a98075090) by task D:7d4bfe2b75fa
              Jul 11 10:41:47 hst100 xapi: [debug||17997 /var/lib/xcp/xapi|SR.scan R:08bcdeea9fb1|message_forwarding] SR.scan: SR = 'd0140e93-71fe-9b11-4e5a-d80ce8102870 (ISOs)'
              Jul 11 10:41:47 hst100 xapi: [debug||17996 /var/lib/xcp/xapi|SR.scan R:06b4b7bb8db2|message_forwarding] Marking SR for SR.scan (task=OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17)
              Jul 11 10:41:47 hst100 xapi: [ info||17996 /var/lib/xcp/xapi|OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17|mux] SR.scan2 dbg:OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17 sr:d4b22411-592f-7ada-0597-68dbcb56ee4d
              Jul 11 10:41:47 hst100 xapi: [debug||17997 /var/lib/xcp/xapi|SR.scan R:08bcdeea9fb1|message_forwarding] Marking SR for SR.scan (task=OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4)
              Jul 11 10:41:47 hst100 xapi: [ info||17997 /var/lib/xcp/xapi|OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4|mux] SR.scan2 dbg:OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4 sr:d0140e93-71fe-9b11-4e5a-d80ce8102870
              Jul 11 10:41:47 hst100 xapi: [ info||17998 |OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17|Storage_smapiv1_wrapper] SR.scan2 dbg:OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17 sr:d4b22411-592f-7ada-0597-68dbcb56ee4d
              Jul 11 10:41:47 hst100 xapi: [debug||17998 |OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17|dummytaskhelper] task SR.scan D:91a013d358e8 created by task R:06b4b7bb8db2
              Jul 11 10:41:47 hst100 xapi: [debug||17998 |SR.scan D:91a013d358e8|sm] SM nfs sr_scan sr=OpaqueRef:cab6d499-a2af-f407-b5d7-cc5106101c59
              Jul 11 10:41:47 hst100 xapi: [ info||17998 |sm_exec D:70bcac641a6a|xapi_session] Session.create trackid=c04e3737f472c63e8250fab8a79f53ff pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:47 hst100 xapi: [ info||18000 |OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4|Storage_smapiv1_wrapper] SR.scan2 dbg:OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4 sr:d0140e93-71fe-9b11-4e5a-d80ce8102870
              Jul 11 10:41:47 hst100 xapi: [debug||18000 |OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4|dummytaskhelper] task SR.scan D:699f208e1dc1 created by task R:08bcdeea9fb1
              Jul 11 10:41:47 hst100 xapi: [debug||18000 |SR.scan D:699f208e1dc1|sm] SM iso sr_scan sr=OpaqueRef:3e473288-a1f8-f0bb-80d7-3baa0ee0628b
              Jul 11 10:41:47 hst100 xapi: [ info||18000 |sm_exec D:fab90060eebd|xapi_session] Session.create trackid=e0bfbc800696ff3374cdf776a5151a63 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:47 hst100 xapi: [debug||17999 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:5fa86e462ac9 created by task D:70bcac641a6a
              Jul 11 10:41:47 hst100 xapi: [debug||18001 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:2f47ed86fffe created by task D:fab90060eebd
              Jul 11 10:41:48 hst100 xapi: [ info||18000 |sm_exec D:fab90060eebd|xapi_session] Session.destroy trackid=e0bfbc800696ff3374cdf776a5151a63
              Jul 11 10:41:48 hst100 xapi: [debug||18000 |OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4|dummytaskhelper] task SR.stat D:a0f54d83143c created by task R:08bcdeea9fb1
              Jul 11 10:41:48 hst100 xapi: [debug||18000 |SR.stat D:a0f54d83143c|sm] SM iso sr_update sr=OpaqueRef:3e473288-a1f8-f0bb-80d7-3baa0ee0628b
              Jul 11 10:41:48 hst100 xapi: [ info||18000 |sm_exec D:c353bb85679c|xapi_session] Session.create trackid=b469a692ddbdb291cc6937d3eee4e2a9 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:48 hst100 xapi: [debug||18004 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:f84a14cc6b90 created by task D:c353bb85679c
              Jul 11 10:41:48 hst100 xapi: [ info||18000 |sm_exec D:c353bb85679c|xapi_session] Session.destroy trackid=b469a692ddbdb291cc6937d3eee4e2a9
              Jul 11 10:41:48 hst100 xapi: [debug||17997 /var/lib/xcp/xapi|SR.scan R:08bcdeea9fb1|xapi_sr] Xapi_sr.scan.(fun).scan_rec no change detected, updating VDIs
              Jul 11 10:41:48 hst100 xapi: [debug||17997 /var/lib/xcp/xapi|SR.scan R:08bcdeea9fb1|message_forwarding] Unmarking SR after SR.scan (task=OpaqueRef:08bcdeea-9fb1-3d0b-4d96-335db5a0c2a4)
              Jul 11 10:41:48 hst100 xapi: [debug||18006 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.logout D:ca51b0c3f90e created by task D:7d4bfe2b75fa
              Jul 11 10:41:48 hst100 xapi: [ info||18006 /var/lib/xcp/xapi|session.logout D:fa1f5fa07905|xapi_session] Session.destroy trackid=2f7776a5c3a2cce90ffe012a98075090
              Jul 11 10:41:48 hst100 xapi: [debug||17991 |scan one D:7d4bfe2b75fa|xapi_sr] Scan of SR d0140e93-71fe-9b11-4e5a-d80ce8102870 complete.
              Jul 11 10:41:48 hst100 xcp-rrdd: [ info||9 ||rrdd_main] memfree has changed to 572992 in domain 7
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||6 |events|xenops_server] Received an event on managed VM dcb7847c-e7c3-01a2-67a4-35d0d875e6e2
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||6 |queue|xenops_server] Queue.push ["VM_check_state","dcb7847c-e7c3-01a2-67a4-35d0d875e6e2"] onto dcb7847c-e7c3-01a2-67a4-35d0d875e6e2:[  ]
              Jul 11 10:41:48 hst100 squeezed: [debug||4 ||squeeze_xen] watch /data/updated <- Fri Jul 11 10:42:27 2025
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 ||xenops_server] Queue.pop returned ["VM_check_state","dcb7847c-e7c3-01a2-67a4-35d0d875e6e2"]
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 |events|xenops_server] Task 1418 reference events: ["VM_check_state","dcb7847c-e7c3-01a2-67a4-35d0d875e6e2"]
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 |events|xenops_server] VM dcb7847c-e7c3-01a2-67a4-35d0d875e6e2 is not requesting any attention
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 |events|xenops_server] VM_DB.signal dcb7847c-e7c3-01a2-67a4-35d0d875e6e2
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 |events|task_server] Task 1418 completed; duration = 0
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||40 ||xenops_server] TASK.signal 1418 (object deleted)
              Jul 11 10:41:48 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] Processing event: ["Vm","dcb7847c-e7c3-01a2-67a4-35d0d875e6e2"]
              Jul 11 10:41:48 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenops event on VM dcb7847c-e7c3-01a2-67a4-35d0d875e6e2
              Jul 11 10:41:48 hst100 xenopsd-xc: [debug||2935 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops_server] VM.stat dcb7847c-e7c3-01a2-67a4-35d0d875e6e2
              Jul 11 10:41:48 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenopsd event: processing event for VM dcb7847c-e7c3-01a2-67a4-35d0d875e6e2
              Jul 11 10:41:48 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] Supressing VM.allowed_operations update because guest_agent data is largely the same
              Jul 11 10:41:48 hst100 xapi: [debug||148 |org.xen.xapi.xenops.classic events D:14adfb57e17b|xenops] xenopsd event: Updating VM dcb7847c-e7c3-01a2-67a4-35d0d875e6e2 domid 7 guest_agent
              Jul 11 10:41:50 hst100 xapi: [debug||18007 HTTPS 51.161.213.26->:::80|host.call_plugin R:257d771f7575|audit] Host.call_plugin host = 'cb2ae4d4-6ed4-4790-8739-3cf0c2940c99 (hst100)'; plugin = 'updater.py'; fn = 'check_update' args = [ 'hidden' ]
              Jul 11 10:41:51 hst100 xapi: [debug||3138 HTTPS 67.219.99.188->:::80|event.from D:98aa8880a8c9|xapi_event] suppressing empty event.from
              Jul 11 10:41:52 hst100 xapi: [debug||175 |xapi events D:520281ee7232|dummytaskhelper] task timeboxed_rpc D:41df3c81c2c6 created by task D:520281ee7232
              Jul 11 10:41:52 hst100 xapi: [debug||18008 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:event.from D:608fbb40b816 created by task D:520281ee7232
              Jul 11 10:41:52 hst100 xapi: [debug||18009 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.logout D:a31c91185b23 created by task D:52b88ea71bd5
              Jul 11 10:41:52 hst100 xapi: [ info||18009 /var/lib/xcp/xapi|session.logout D:1aee21f5a80d|xapi_session] Session.destroy trackid=50452fd8d8f3a62f62cad67fa30b24f1
              Jul 11 10:41:52 hst100 xapi: [debug||18010 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.slave_login D:1de1795837e0 created by task D:52b88ea71bd5
              Jul 11 10:41:52 hst100 xapi: [ info||18010 /var/lib/xcp/xapi|session.slave_login D:19fcc3b48c90|xapi_session] Session.create trackid=a13373689bd852f3693510a9856128f2 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:52 hst100 xapi: [debug||18011 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:e6855e017e42 created by task D:19fcc3b48c90
              Jul 11 10:41:52 hst100 xapi: [debug||18012 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:event.from D:25ad6ad641aa created by task D:52b88ea71bd5
              Jul 11 10:41:52 hst100 xapi: [ info||17998 |sm_exec D:70bcac641a6a|xapi_session] Session.destroy trackid=c04e3737f472c63e8250fab8a79f53ff
              Jul 11 10:41:52 hst100 xapi: [debug||17998 |OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17|dummytaskhelper] task SR.stat D:591a6d7dfd4c created by task R:06b4b7bb8db2
              Jul 11 10:41:52 hst100 xapi: [debug||17998 |SR.stat D:591a6d7dfd4c|sm] SM nfs sr_update sr=OpaqueRef:cab6d499-a2af-f407-b5d7-cc5106101c59
              Jul 11 10:41:52 hst100 xapi: [ info||17998 |sm_exec D:90c6faeac33e|xapi_session] Session.create trackid=a5f3768a01c1a5591ff28ae0c084c1e6 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:52 hst100 xapi: [debug||18013 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:43b797d7e11e created by task D:90c6faeac33e
              Jul 11 10:41:52 hst100 xapi: [ info||18014 /var/lib/xcp/xapi|session.login_with_password D:66f31b75e9ba|xapi_session] Session.create trackid=e89374827e41fe370399494057e73e05 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:52 hst100 xapi: [debug||18015 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:dcfefe59018c created by task D:66f31b75e9ba
              Jul 11 10:41:52 hst100 xapi: [ info||18014 /var/lib/xcp/xapi|session.logout D:1229724ea671|xapi_session] Session.destroy trackid=e89374827e41fe370399494057e73e05
              Jul 11 10:41:52 hst100 xapi: [ info||18017 /var/lib/xcp/xapi|session.login_with_password D:93780a0651d7|xapi_session] Session.create trackid=0be0df15d648322d6b569474255ed645 pool=false uname=root originator=SM is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
              Jul 11 10:41:52 hst100 xapi: [debug||18018 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:pool.get_all D:b6549a355f02 created by task D:93780a0651d7
              Jul 11 10:41:54 hst100 xapi: [ info||17998 |sm_exec D:90c6faeac33e|xapi_session] Session.destroy trackid=a5f3768a01c1a5591ff28ae0c084c1e6
              Jul 11 10:41:54 hst100 xapi: [debug||17996 /var/lib/xcp/xapi|SR.scan R:06b4b7bb8db2|xapi_sr] Xapi_sr.scan.(fun).scan_rec no change detected, updating VDIs
              Jul 11 10:41:54 hst100 xapi: [debug||17996 /var/lib/xcp/xapi|SR.scan R:06b4b7bb8db2|message_forwarding] Unmarking SR after SR.scan (task=OpaqueRef:06b4b7bb-8db2-67e6-ce2b-fc190d089f17)
              Jul 11 10:41:54 hst100 xapi: [debug||18019 /var/lib/xcp/xapi|post_root|dummytaskhelper] task dispatch:session.logout D:dd3305158779 created by task D:f280daf4394a
              Jul 11 10:41:54 hst100 xapi: [ info||18019 /var/lib/xcp/xapi|session.logout D:4dd639657798|xapi_session] Session.destroy trackid=aeaa0692cf6b9ee21b44fba5b9a33cc6
              Jul 11 10:41:54 hst100 xapi: [debug||17990 |scan one D:f280daf4394a|xapi_sr] Scan of SR d4b22411-592f-7ada-0597-68dbcb56ee4d complete.
              
              • McHenry @McHenry

                Jul 11 10:33:59 hst100 cleanup.py[132827]: All output goes to log
                Jul 11 10:33:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:34:12 hst100 forkexecd: [error||0 ||forkexecd] 133009 (/opt/xensource/libexec/mail-alarm <?xml version="1.0" encoding="UTF-8"?>\x0A<message><ref>OpaqueRef:80e8cac6-789a-8d76-70b6-afe1551f8de2</ref><name>ALARM</name><priority>3</priority><cls>VM</cls><obj_uuid>1b298dd8-5921-4090-8ac9-f26efbaf88b3</obj_uuid><timestamp>20250711T00:34:12Z</timestamp><uuid>d2844e9b-f88c-bf7e-5456-610060fcb05b</uuid><body>value: 1.000000\x0Aconfig:\x0A&lt;variable&gt;\x0A\x09&lt;name value=&quot;fs_usage&quot;/&gt;\x0A\x09&lt;alarm_trigger_level value=&quot;0.9&quot;/&gt;\x0A\x09&lt;alarm_trigger_period value=&quot;60&quot;/&gt;\x0A\x09&lt;alarm_auto_inhibit_period value=&quot;3600&quot;/&gt;\x0A&lt;/variable&gt;\x0A</body></message>) exited with code 1
                Jul 11 10:34:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:34:28 hst100 cleanup.py[133122]: All output goes to log
                Jul 11 10:34:28 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:34:38 hst100 sparse_dd: [debug||0 ||sparse_dd] progress 10%
                Jul 11 10:34:44 hst100 qemu-dm-5[22780]: 22780@1752194084.964567:xen_platform_log xen platform: xeniface|IoctlLog: USER: OnSessionChange(SessionLock, 2)
                Jul 11 10:34:47 hst100 qemu-dm-5[22780]: 22780@1752194087.641201:xen_platform_log xen platform: xen|ModuleAdd: FFFFF80333BF0000 - FFFFF80333C00FFF [mskssrv.sys]
                Jul 11 10:34:47 hst100 qemu-dm-5[22780]: 22780@1752194087.642844:xen_platform_log xen platform: xen|ModuleAdd: FFFFF80333C10000 - FFFFF80333C20FFF [ksthunk.sys]
                Jul 11 10:34:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:34:59 hst100 cleanup.py[133367]: All output goes to log
                Jul 11 10:34:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:35:23 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:35:29 hst100 cleanup.py[133646]: All output goes to log
                Jul 11 10:35:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:35:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:35:59 hst100 cleanup.py[133908]: All output goes to log
                Jul 11 10:35:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:36:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:36:29 hst100 cleanup.py[134206]: All output goes to log
                Jul 11 10:36:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:36:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:36:59 hst100 cleanup.py[134483]: All output goes to log
                Jul 11 10:36:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:37:23 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:37:29 hst100 cleanup.py[134766]: All output goes to log
                Jul 11 10:37:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:37:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:37:59 hst100 cleanup.py[135061]: All output goes to log
                Jul 11 10:37:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:38:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:38:29 hst100 cleanup.py[135334]: All output goes to log
                Jul 11 10:38:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:38:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:38:58 hst100 cleanup.py[135655]: All output goes to log
                Jul 11 10:38:58 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:39:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:39:29 hst100 cleanup.py[135953]: All output goes to log
                Jul 11 10:39:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:39:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:39:59 hst100 cleanup.py[136246]: All output goes to log
                Jul 11 10:39:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:40:01 hst100 systemd[1]: Started Session c43 of user root.
                Jul 11 10:40:01 hst100 systemd[1]: Starting Session c43 of user root.
                Jul 11 10:40:22 hst100 tapdisk[33072]: received 'close' message (uuid = 10)
                Jul 11 10:40:22 hst100 tapdisk[33072]: nbd: NBD server pause(0x16b09f0)
                Jul 11 10:40:22 hst100 tapdisk[33072]: nbd: NBD server pause(0x16c8810)
                Jul 11 10:40:22 hst100 tapdisk[33072]: nbd: NBD server free(0x16b09f0)
                Jul 11 10:40:22 hst100 tapdisk[33072]: nbd: NBD server free(0x16c8810)
                Jul 11 10:40:22 hst100 tapdisk[33072]: gaps written/skipped: 0/0
                Jul 11 10:40:22 hst100 tapdisk[33072]: /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1.vhd: b: 102400, a: 102279, f: 102279, n: 419753840
                Jul 11 10:40:22 hst100 tapdisk[33072]: closed image /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1.vhd (0 users, state: 0x00000000, type: 4)
                Jul 11 10:40:22 hst100 tapdisk[33072]: sending 'close response' message (uuid = 10)
                Jul 11 10:40:22 hst100 tapdisk[33072]: received 'detach' message (uuid = 10)
                Jul 11 10:40:22 hst100 tapdisk[33072]: sending 'detach response' message (uuid = 10)
                Jul 11 10:40:22 hst100 tapdisk[33072]: tapdisk-log: closing after 0 errors
                Jul 11 10:40:22 hst100 tapdisk[33072]: tapdisk-syslog: 22 messages, 1932 bytes, xmits: 23, failed: 0, dropped: 0
                Jul 11 10:40:22 hst100 tapdisk[33072]: tapdisk-control: draining 1 connections
                Jul 11 10:40:22 hst100 tapdisk[33072]: tapdisk-control: done
                Jul 11 10:40:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:40:29 hst100 cleanup.py[136554]: All output goes to log
                Jul 11 10:40:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Command line: -controloutfd 8 -controlinfd 9 -mode hvm_build -image /usr/libexec/xen/boot/hvmloader -domid 13 -store_port 5 -store_domid 0 -console_port 6 -console_domid 0 -mem_max_mib 8184 -mem_start_mib 8184
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Domain Properties: Type HVM, hap 1
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Determined the following parameters from xenstore:
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: vcpu/number:4 vcpu/weight:256 vcpu/cap:0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: nx: 1, pae 1, cores-per-socket 4, x86-fip-width 0, nested 0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: apic: 1 acpi: 1 acpi_s4: 0 acpi_s3: 0 tsc_mode: 0 hpet: 1
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: nomigrate 0, timeoffset 36000 mmio_hole_size 0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: viridian: 1, time_ref_count: 1, reference_tsc: 1 hcall_remote_tlb_flush: 0 apic_assist: 1 crash_ctl: 1 stimer: 1 hcall_ipi: 0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: vcpu/0/affinity:111111111111
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: vcpu/1/affinity:111111111111
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: vcpu/2/affinity:111111111111
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: vcpu/3/affinity:111111111111
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_malloc_filemap    : 629 kB
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_module_file: filename="/usr/share/ipxe/ipxe.bin"
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_malloc_filemap    : 132 kB
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_boot_xen_init: ver 4.17, caps xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_parse_image: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: loader probe failed
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: loader probe OK
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail: ELF: phdr: paddr=0x100000 memsz=0x57ac4
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail: ELF: memory: 0x100000 -> 0x157ac4
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Calculated provisional MMIO hole size as 0x10000000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Loaded OVMF from /usr/share/edk2/OVMF-release.fd
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_mem_init: mem 8184 MB, pages 0x1ff800 pages, 4k each
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_mem_init: 0x1ff800 pages
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_boot_mem_init: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: range: start=0x0 end=0xf0000000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: range: start=0x100000000 end=0x20f800000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail: PHYSICAL MEMORY ALLOCATION:
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail:   4KB PAGES: 0x0000000000000200
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail:   2MB PAGES: 0x00000000000003fb
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail:   1GB PAGES: 0x0000000000000006
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Final lower MMIO hole size is 0x10000000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_build_image: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x58 at 0x7f47fc88a000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x157ac4  (pfn 0x100 + 0x58 pages)
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: xc: detail: ELF: phdr 0 at 0x7f47fac98000 -> 0x7f47face8ea0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x158+0x200 at 0x7f47faaf0000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_alloc_segment:   System Firmware module : 0x158000 -> 0x358000  (pfn 0x158 + 0x200 pages)
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x358+0x22 at 0x7f47fc868000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_alloc_segment:   module0      : 0x358000 -> 0x379200  (pfn 0x358 + 0x22 pages)
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x37a+0x1 at 0x7f47fca47000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x37a000 -> 0x37a878  (pfn 0x37a + 0x1 pages)
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x37b000
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_boot_image: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: domain builder memory footprint
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:    allocated
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:       malloc             : 18525 bytes
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:       anon mmap          : 0 bytes
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:    mapped
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:       file mmap          : 762 kB
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail:       domU mmap          : 2540 kB
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: Adding module 0 guest_addr 358000 len 135680
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: vcpu_hvm: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_set_gnttab_entry: d13 gnt[0] -> d0 0xfefff
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_set_gnttab_entry: d13 gnt[1] -> d0 0xfeffc
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: viridian base
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: + time_ref_count
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: + reference_tsc
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: + apic_assist
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: + crash_ctl
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: + stimer
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Parsing '178bfbff-f6f83203-2e500800-040001f3-0000000f-f1bf07a9-00405f4e-00000000-711ed005-10000010-00000020-18000144-00000000-00000000-00000000-00000000-00000000-00000000-00000000-00000000-00000000-00000000' as featureset
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: domainbuilder: detail: xc_dom_release: called
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: Writing to control: 'result:1044476 1044479#012'
                Jul 11 10:40:52 hst100 xenguest-13-build[136890]: All done
                Jul 11 10:40:52 hst100 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "vif13.0" in table Interface
                Jul 11 10:40:52 hst100 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif13.0
                Jul 11 10:40:52 hst100 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi2 vif13.0 -- set interface vif13.0 "external-ids:\"xs-vm-uuid\"=\"16a5f8be-781c-46fe-df43-83744df32826\"" -- set interface vif13.0 "external-ids:\"xs-vif-uuid\"=\"a48873c0-17c9-8905-712e-72579245a342\"" -- set interface vif13.0 "external-ids:\"xs-network-uuid\"=\"1bfba311-a261-d329-d01d-ab2713d0dc78\"" -- set interface vif13.0 "external-ids:\"attached-mac\"=\"da:ff:e4:1f:38:61\""
                Jul 11 10:40:53 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:40:53 hst100 tapdisk[137103]: tapdisk-control: init, 10 x 4k buffers
                Jul 11 10:40:53 hst100 tapdisk[137103]: I/O queue driver: lio
                Jul 11 10:40:53 hst100 tapdisk[137103]: I/O queue driver: lio
                Jul 11 10:40:53 hst100 tapdisk[137103]: tapdisk-log: started, level 0
                Jul 11 10:40:53 hst100 tapdisk[137103]: Tapdisk running, control on /var/run/blktap-control/ctl137103
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdclient137103'
                Jul 11 10:40:53 hst100 tapdisk[137103]: received 'attach' message (uuid = 10)
                Jul 11 10:40:53 hst100 tapdisk[137103]: sending 'attach response' message (uuid = 10)
                Jul 11 10:40:53 hst100 tapdisk[137103]: received 'open' message (uuid = 10)
                Jul 11 10:40:53 hst100 tapdisk[137103]: /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/0d16619f-bcb7-47ea-9ecc-46159e4ff4ad.vhd version: tap 0x00010003, b: 51200, a: 21320, f: 21320, n: 87497696
                Jul 11 10:40:53 hst100 tapdisk[137103]: opened image /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/0d16619f-bcb7-47ea-9ecc-46159e4ff4ad.vhd (1 users, state: 0x00000001, type: 4, rw)
                Jul 11 10:40:53 hst100 tapdisk[137103]: VBD CHAIN:
                Jul 11 10:40:53 hst100 tapdisk[137103]: /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/0d16619f-bcb7-47ea-9ecc-46159e4ff4ad.vhd: type:vhd(4) storage:ext(2)
                Jul 11 10:40:53 hst100 tapdisk[137103]: bdev: capacity=209715200 sector_size=512/512 flags=0
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver137103.10'
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: registering for unix_listening_fd
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd-old137103.10
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver-new137103.10'
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: registering for unix_listening_fd
                Jul 11 10:40:53 hst100 tapdisk[137103]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd137103.10
                Jul 11 10:40:53 hst100 tapdisk[137103]: sending 'open response' message (uuid = 10)
                Jul 11 10:40:53 hst100 tapdisk[137114]: tapdisk-control: init, 10 x 4k buffers
                Jul 11 10:40:53 hst100 tapdisk[137114]: I/O queue driver: lio
                Jul 11 10:40:53 hst100 tapdisk[137114]: I/O queue driver: lio
                Jul 11 10:40:53 hst100 tapdisk[137114]: tapdisk-log: started, level 0
                Jul 11 10:40:53 hst100 tapdisk[137114]: Tapdisk running, control on /var/run/blktap-control/ctl137114
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdclient137114'
                Jul 11 10:40:53 hst100 tapdisk[137114]: received 'attach' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapdisk[137114]: sending 'attach response' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapdisk[137114]: received 'open' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapdisk[137114]: /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1.vhd version: tap 0x00010003, b: 102400, a: 102279, f: 102279, n: 419753840
                Jul 11 10:40:53 hst100 tapdisk[137114]: opened image /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1.vhd (1 users, state: 0x00000001, type: 4, rw)
                Jul 11 10:40:53 hst100 tapdisk[137114]: VBD CHAIN:
                Jul 11 10:40:53 hst100 tapdisk[137114]: /var/run/sr-mount/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1.vhd: type:vhd(4) storage:ext(2)
                Jul 11 10:40:53 hst100 tapdisk[137114]: bdev: capacity=419430400 sector_size=512/512 flags=0
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver137114.12'
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: registering for unix_listening_fd
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd-old137114.12
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver-new137114.12'
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: registering for unix_listening_fd
                Jul 11 10:40:53 hst100 tapdisk[137114]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd137114.12
                Jul 11 10:40:53 hst100 tapdisk[137114]: sending 'open response' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapback[137128]: tapback.c:445 slave tapback daemon started, only serving domain 13
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:406 832 physical_device_changed
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:406 768 physical_device_changed
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:406 832 physical_device_changed
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:492 832 found tapdisk[137114], for 254:12
                Jul 11 10:40:53 hst100 tapdisk[137114]: received 'disk info' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapdisk[137114]: VBD 12 got disk info: sectors=419430400 sector size=512, info=0
                Jul 11 10:40:53 hst100 tapdisk[137114]: sending 'disk info rsp' message (uuid = 12)
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:406 768 physical_device_changed
                Jul 11 10:40:53 hst100 tapback[137128]: backend.c:492 768 found tapdisk[137103], for 254:10
                Jul 11 10:40:53 hst100 tapdisk[137103]: received 'disk info' message (uuid = 10)
                Jul 11 10:40:53 hst100 tapdisk[137103]: VBD 10 got disk info: sectors=209715200 sector size=512, info=0
                Jul 11 10:40:53 hst100 tapdisk[137103]: sending 'disk info rsp' message (uuid = 10)
                Jul 11 10:40:53 hst100 systemd[1]: Started transient unit for varstored-13.
                Jul 11 10:40:53 hst100 systemd[1]: Starting transient unit for varstored-13...
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --domain = '13'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --chroot = '/var/run/xen/varstored-root-13'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --depriv = '(null)'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --uid = '65548'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --gid = '1004'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --backend = 'xapidb'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --arg = 'socket:/xapi-depriv-socket'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --pidfile = '/var/run/xen/varstored-13.pid'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --arg = 'uuid:16a5f8be-781c-46fe-df43-83744df32826'
                Jul 11 10:40:53 hst100 varstored-13[137207]: main: --arg = 'save:/efi-vars-save.dat'
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: 4 vCPU(s)
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: ioservid = 0
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: iopage = 0x7fe157ec9000
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: VCPU0: 7 -> 308
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: VCPU1: 8 -> 309
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: VCPU2: 9 -> 310
                Jul 11 10:40:53 hst100 varstored-13[137207]: varstored_initialize: VCPU3: 10 -> 311
                Jul 11 10:40:53 hst100 varstored-13[137207]: load_one_auth_data: Auth file '/var/lib/varstored/dbx.auth' is missing!
                Jul 11 10:40:53 hst100 varstored-13[137207]: load_one_auth_data: Auth file '/var/lib/varstored/db.auth' is missing!
                Jul 11 10:40:53 hst100 varstored-13[137207]: load_one_auth_data: Auth file '/var/lib/varstored/KEK.auth' is missing!
                Jul 11 10:40:53 hst100 varstored-13[137207]: initialize_settings: Secure boot enable: false
                Jul 11 10:40:53 hst100 varstored-13[137207]: initialize_settings: Authenticated variables: enforcing
                Jul 11 10:40:53 hst100 varstored-13[137207]: IO request not ready
                Jul 11 10:40:53 hst100 varstored-13[137207]: message repeated 3 times: [ IO request not ready]
                Jul 11 10:40:53 hst100 systemd[1]: Started transient unit for swtpm-13.
                Jul 11 10:40:53 hst100 systemd[1]: Starting transient unit for swtpm-13...
                Jul 11 10:40:53 hst100 swtpm-13[137230]: Arguments: 13 /var/lib/xcp/run/swtpm-root-13// unix+http://xapi-depriv-socket false
                Jul 11 10:40:53 hst100 swtpm-13[137230]: Binding socket to /var/lib/xcp/run/swtpm-root-13//swtpm-sock
                Jul 11 10:40:53 hst100 swtpm-13[137230]: Exec: /usr/bin/swtpm swtpm-13 socket --tpm2 --tpmstate backend-uri=unix+http://xapi-depriv-socket --ctrl type=unixio,fd=3 --log level=1 --pid file=/swtpm-13.pid -t --chroot /var/lib/xcp/run/swtpm-root-13// --runas 196621
                Jul 11 10:40:53 hst100 swtpm-13[137230]: core dump limit: 67108864
                Jul 11 10:40:53 hst100 swtpm-13[137230]: Could not write to pidfile : No space left on device
                Jul 11 10:40:53 hst100 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "tap13.0" in table Interface
                Jul 11 10:40:53 hst100 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap13.0
                Jul 11 10:40:53 hst100 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi2 tap13.0 -- set interface tap13.0 "external-ids:\"xs-vm-uuid\"=\"16a5f8be-781c-46fe-df43-83744df32826\"" -- set interface tap13.0 "external-ids:\"xs-vif-uuid\"=\"a48873c0-17c9-8905-712e-72579245a342\"" -- set interface tap13.0 "external-ids:\"xs-network-uuid\"=\"1bfba311-a261-d329-d01d-ab2713d0dc78\"" -- set interface tap13.0 "external-ids:\"attached-mac\"=\"da:ff:e4:1f:38:61\""
                Jul 11 10:40:53 hst100 systemd[1]: swtpm-13.service: main process exited, code=exited, status=1/FAILURE
                Jul 11 10:40:53 hst100 systemd[1]: Unit swtpm-13.service entered failed state.
                Jul 11 10:40:53 hst100 systemd[1]: swtpm-13.service failed.
                Jul 11 10:40:53 hst100 forkexecd: [ info||0 ||forkexecd] qemu-dm-13[137252]: Arguments: 13 --syslog -chardev socket,id=chrtpm,path=/var/lib/xcp/run/swtpm-root-13/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-crb,tpmdev=tpm0 -std-vga -videoram 8 -vnc unix:/var/run/xen/vnc-13,lock-key-sync=off -acpi -monitor null -pidfile /var/run/xen/qemu-dm-13.pid -xen-domid 13 -m size=8184 -boot order=dc -usb -device usb-tablet,port=2 -smp 4,maxcpus=4 -serial pty -display none -nodefaults -trace enable=xen_platform_log -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -S -parallel null -qmp unix:/var/run/xen/qmp-libxl-13,server,nowait -qmp unix:/var/run/xen/qmp-event-13,server,nowait -device xen-platform,addr=3,device-id=0x0002 -device nvme,serial=nvme0,id=nvme0,addr=7 -drive id=disk0,if=none,file=/dev/sm/backend/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/0d16619f-bcb7-47ea-9ecc-46159e4ff4ad,media=disk,auto-read-only=off,format=raw -device nvme-ns,drive=disk0,bus=nvme0,nsid=1 -drive id=disk1,if=none,file=/dev/sm/backend/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1,media=disk,auto-read-only=off,format=raw -device nvme-ns,drive=disk1,bus=nvme0,nsid=2 -device e1000,netdev=tapnet0,mac=da:ff:e4:1f:38:61,addr=4,rombar=0 -netdev tap,id=tapnet0,fd=8
                Jul 11 10:40:53 hst100 forkexecd: [ info||0 ||forkexecd] qemu-dm-13[137252]: Exec: /usr/lib64/xen/bin/qemu-system-i386 qemu-dm-13 -machine pc-i440fx-2.10,accel=xen,max-ram-below-4g=4026531840,suppress-vmdesc=on,allow-unassigned=true,trad_compat=False -chardev socket,id=chrtpm,path=/var/lib/xcp/run/swtpm-root-13/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-crb,tpmdev=tpm0 -vnc unix:/var/run/xen/vnc-13,lock-key-sync=off -monitor null -pidfile /var/run/xen/qemu-dm-13.pid -xen-domid 13 -m size=8184 -boot order=dc -usb -device usb-tablet,port=2 -smp 4,maxcpus=4 -serial pty -display none -nodefaults -trace enable=xen_platform_log -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -S -parallel null -qmp unix:/var/run/xen/qmp-libxl-13,server,nowait -qmp unix:/var/run/xen/qmp-event-13,server,nowait -device xen-platform,addr=3,device-id=0x0002 -device nvme,serial=nvme0,id=nvme0,addr=7 -drive id=disk0,if=none,file=/dev/sm/backend/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/0d16619f-bcb7-47ea-9ecc-46159e4ff4ad,media=disk,auto-read-only=off,format=raw -device nvme-ns,drive=disk0,bus=nvme0,nsid=1 -drive id=disk1,if=none,file=/dev/sm/backend/ff9dc099-c34f-d3ac-3ac4-19ed74480a4b/310dc526-765c-455c-848d-610bb7ae6cd1,media=disk,auto-read-only=off,format=raw -device nvme-ns,drive=disk1,bus=nvme0,nsid=2 -device e1000,netdev=tapnet0,mac=da:ff:e4:1f:38:61,addr=4,rombar=0 -netdev tap,id=tapnet0,fd=8 -device VGA,vgamem_mb=8,addr=2,romfile= -vnc-clipboard-socket-fd 4 -chardev stdio,id=ovmf -device isa-debugcon,chardev=ovmf,iobase=0x402 -xen-domid-restrict -chroot /var/xen/qemu/root-13 -runas 65548:1004
                Jul 11 10:40:53 hst100 qemu-dm-13[137268]: Moving to cgroup slice 'vm.slice'
                Jul 11 10:40:53 hst100 qemu-dm-13[137268]: core dump limit: 67108864
                Jul 11 10:40:53 hst100 qemu-dm-13[137268]: qemu-dm-13: -chardev socket,id=chrtpm,path=/var/lib/xcp/run/swtpm-root-13/swtpm-sock: Failed to connect socket /var/lib/xcp/run/swtpm-root-13/swtpm-sock: Connection refused
                Jul 11 10:40:53 hst100 /opt/xensource/libexec/xcp-clipboardd[137266]: poll failed because revents=0x11 (qemu socket)
                Jul 11 10:40:59 hst100 cleanup.py[137060]: All output goes to log
                Jul 11 10:40:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:41:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:41:29 hst100 cleanup.py[137582]: All output goes to log
                Jul 11 10:41:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:41:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:41:59 hst100 cleanup.py[137875]: All output goes to log
                Jul 11 10:41:59 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:42:22 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                Jul 11 10:42:23 hst100 sparse_dd: [debug||0 ||sparse_dd] progress 20%
                Jul 11 10:42:29 hst100 cleanup.py[138151]: All output goes to log
                Jul 11 10:42:29 hst100 systemd[1]: Started Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d.
                Jul 11 10:42:52 hst100 systemd[1]: Starting Garbage Collector for SR d4b22411-592f-7ada-0597-68dbcb56ee4d...
                
                • dinhngtu Vates 🪐 XCP-ng Team @McHenry

                  @McHenry Do they work if you turn off Secure Boot? There's a procedure to enable Secure Boot; see https://docs.xcp-ng.org/guides/guest-UEFI-Secure-Boot/.

                  Do you have space left on your Dom0 disk?
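                  For reference, both checks can be done from the host CLI; a minimal sketch (the VM UUID is a placeholder, and platform:secureboot is the setting described in the Secure Boot guide linked above):

                      # temporarily disable Secure Boot for one VM (while it is shut down)
                      xe vm-param-set uuid=<vm-uuid> platform:secureboot=false
                      # check free space on the Dom0 root filesystem
                      df -h /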

                  • McHenry @dinhngtu

                    @dinhngtu

                    [attached image: 8d60db1d-5996-4aad-a956-7ea64ff719b4-image.png]

                    • dinhngtu Vates 🪐 XCP-ng Team @McHenry

                      @McHenry /dev/md127p1 (the root partition) looks pretty full. Do you store anything big in there (ISOs...)?
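                      A rough way to see what is taking the space, restricted to the root filesystem only:

                          du -xh / 2>/dev/null | sort -h | tail -n 20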

                      • McHenry @dinhngtu

                        @dinhngtu

                        Is that the 18G disk? I thought that was my ISOs disk.

                        [attached image: 8ddbf487-8c07-4f59-9f43-6a6dc019609c-image.png]

                        • McHenry @McHenry

                          Safe to delete these *.gz files?

                          [attached image: 5a216fe0-2d5a-44cd-9469-c1aeb1538ce2-image.png]
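                          If those are rotated logs, a quick check before deleting anything is to confirm which filesystem they live on and how much they actually take, assuming they sit under /var/log:

                              df -h / /var/log
                              du -sh /var/log/*.gz 2>/dev/null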

                          • dinhngtu Vates 🪐 XCP-ng Team @McHenry

                            @McHenry That's your Dom0 partition, which stores the XCP-ng operating system. Don't store the ISOs there (which your local ISO SR is doing); you should mount an ISO SR using NFS instead.

                            /var/log shouldn't be an issue as it's on a separate partition. (I misread the df output.)
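                            For reference, an NFS ISO SR can be created from the host CLI along these lines (the server name and export path are placeholders):

                                xe sr-create name-label="NFS ISO library" type=iso content-type=iso shared=true \
                                    device-config:location=nfsserver:/export/isos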

                              • McHenry @dinhngtu

                              @dinhngtu

                              Wow, it worked!

                                • McHenry @McHenry

                                I deleted a few ISOs and the VM now boots.

                                    So the issue was that I was storing ISOs in the root partition and it was full?

                                  • dinhngtu Vates 🪐 XCP-ng Team @McHenry

                                  @McHenry Yes, that's the cause of your issue.

                                    • McHenry @dinhngtu

                                    @dinhngtu

                                     Thank you so much. If you want me, I'll be at the pub.

                                     • olivierlambert marked this topic as a question
                                     • olivierlambert marked this topic as solved