XCP-ng

    Posts

    • RE: XOSTOR hyperconvergence preview

      @gb-123 said in XOSTOR hyperconvergence preview:

      @ronan-a

      VMs would be using LUKS encryption.

      So if only the VDI is replicated and, hypothetically, I lose the master node or whichever node is actually running the VM, then will I have to create the VM again using the replicated disk? Or would it be something like DRBD, where there are actually 2 VMs running in Active/Passive mode with an automatic switchover? Or would it be that one VM is running and the second gets automatically started when the first goes down?

      Sorry for the noob questions. I just wanted to be sure of the implementation.

      The VM metadata is stored at the pool level, meaning you wouldn't have to re-create the VM if its current host fails. However, memory isn't replicated in the cluster (and can't be), except during a live migration, which temporarily copies the VM's memory to the new host so it can be moved.

      DRBD only replicates the VDI, in other words the disk data, across the active Linstor members. If the VM is stopped, or is terminated because of a host failure, you should be able to start it back up on another host in your pool. By default, though, this requires manual intervention to start the VM, and you will have to enter your encryption password since it will be a cold boot.

      If you want the VM to self-start automatically in case of failure, you can use the HA feature of XCP-ng. This wouldn't solve your issue of having to enter your encryption password since, as explained earlier, memory isn't replicated and the VM would cold boot from the replicated VDI. Also, keep in mind that enabling HA adds maintenance complexity and might not be worth it.
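
      For reference, enabling HA and flagging a VM for automatic restart looks roughly like this with the xe CLI (a sketch; the UUIDs are placeholders):

      # Enable HA on the pool, using a shared SR for the heartbeat:
      xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>
      # Ask XAPI to restart this VM on another host after a failure:
      xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart order=1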

      posted in XOSTOR
      Maelstrom96
    • RE: Three-node Networking for XOSTOR

      @T3CCH What you might be looking for: https://xcp-ng.org/docs/networking.html#full-mesh-network

      posted in XOSTOR
      Maelstrom96
    • RE: XOSTOR hyperconvergence preview

      @ronan-a said in XOSTOR hyperconvergence preview:

      @Maelstrom96 We must update our documentation for that. This will probably require executing commands manually during an upgrade.

      Any news on that? We're still pretty much blocked until that's figured out.

      Also, any news on when it will be officially released?

      posted in XOSTOR
      Maelstrom96
    • RE: XOSTOR hyperconvergence preview

      @ronan-a I've checked the commit history and saw that the breaking change seems to be related to the renaming of the KV store. I also just noticed that you renamed the volume namespace. Are there any other breaking changes that would require deleting the SR in order to update the sm package?

      I've written a Python script that copies all the old KV data to the new KV name, renaming the volume keys along the way, and was wondering if that would be sufficient.
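
      The gist of it, as a minimal sketch (assuming python-linstor's dict-like KV interface; the store names and key prefixes below are placeholders, not the real ones from the sm commits):

      import linstor

      # Placeholder names; the real old/new KV store names and volume key
      # prefixes come from the sm commit history.
      OLD_KV, NEW_KV = "xcp-volume-old", "xcp-volume-new"
      OLD_PREFIX, NEW_PREFIX = "volume/", "xcp/volume/"

      # linstor.KV is a dict-like view of a LINSTOR key-value store;
      # by default it talks to the controller on localhost.
      old_kv = linstor.KV(OLD_KV, namespace="/")
      new_kv = linstor.KV(NEW_KV, namespace="/")

      for key, value in old_kv.items():
          if key.startswith(OLD_PREFIX):
              # Rename volume entries to the new key scheme.
              new_kv[NEW_PREFIX + key[len(OLD_PREFIX):]] = value
          else:
              # Copy everything else verbatim.
              new_kv[key] = value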

      Thanks,

      posted in XOSTOR
      Maelstrom96
    • RE: XOSTOR hyperconvergence preview

      Hi @ronan-a ,

      So as we mentioned at some point, we're using a K8s cluster that connects to Linstor directly. It's actually going surprisingly well; we've even deployed it in production, with contingency plans in case of failure, and it's been rock solid so far.

      We're working on setting up Velero to automatically back up all of our K8s cluster metadata along with the PVs for easy Disaster Recovery, but we've hit an unfortunate blocker. Here is what we get from Velero when attempting the backup/snapshot:

      error:
          message: 'Failed to check and update snapshot content: failed to take snapshot
            of the volume pvc-3602bca1-5b92-4fc7-96af-ce77f35e802c: "rpc error: code = Internal
            desc = failed to create snapshot: error creating S3 backup: Message: ''LVM_THIN
            based backup shipping requires at least version 2.24 for setsid from util_linux''
            next error: Message: ''LVM_THIN based backup shipping requires support for thin_send_recv''
            next error: Message: ''Backup shipping of resource ''pvc-3602bca1-5b92-4fc7-96af-ce77f35e802c''
            cannot be started since there is no node available that supports backup shipping.''"'
      

      It looks like we can't actually run a backup when using thin volumes. We've checked, and the current version of setsid on XCP-ng is 2.23.2:

      [12:57 ovbh-pprod-xen12 ~]# setsid --v
      setsid from util-linux 2.23.2
      

      We know that updating a package directly is a pretty bad idea, so I'm wondering if you have an idea of what we could do to solve this, or whether this will be fixed in a future XCP-ng update?
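
      For anyone else hitting this, both prerequisites can be checked on each host with something like the following (assuming the thin-send-recv package provides a thin_send binary, which I haven't verified):

      # Run on every XCP-ng host in the pool:
      setsid --version                     # LINSTOR wants util-linux >= 2.24 here
      command -v thin_send >/dev/null || echo "thin_send_recv not installed"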

      Thanks in advance for your time!

      P.S.: We're working on a full post about how we went about deploying our K8s Linstor CSI setup, in case anyone else is interested.

      posted in XOSTOR
      Maelstrom96
    • RE: XOSTOR hyperconvergence preview

      @olivierlambert
      I just checked the sm repository, and it looks like it wouldn't be that complicated to add a new sm-config parameter and pass it down to volume creation. Do you accept PRs/contributions on that repository? We're really interested in this feature, and I think I can take the time to write the code to handle it.
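
      From the user's side, the idea would be something like this (purely hypothetical sm-config key; nothing like it exists in sm today):

      # Create a VDI with a per-volume replication override (hypothetical key):
      xe vdi-create sr-uuid=<xostor-sr-uuid> name-label=db-disk \
        virtual-size=50GiB sm-config:replication=1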

      posted in XOSTOR
      Maelstrom96
    • RE: XOSTOR hyperconvergence preview

      After reading the sm LinstorSR file, I figured out that the host names need to exactly match the host names in the XCP-ng pool. I thought I had tried that and that it failed the same way, but after retrying with all valid host names, the SR was set up correctly.

      Something I've also noticed in the code: there doesn't seem to be a way to deploy a secondary SR connected to the same Linstor controller with a different replication factor. For some VMs with built-in software replication/HA, like DBs, it might be preferable to set replication=1 for the VDI.
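
      For illustration, the kind of thing I mean is a second SR pointing at the same controller but with redundancy=1, roughly along the lines of the preview's sr-create invocation (a sketch; the host names here must exactly match the pool's host names):

      xe sr-create type=linstor name-label=XOSTOR-r1 shared=true \
        device-config:hosts=host-a,host-b,host-c \
        device-config:group-name=linstor_group/thin_device \
        device-config:redundancy=1 \
        device-config:provisioning=thin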

      posted in XOSTOR
      Maelstrom96
    • RE: Kioxia CM7 PCIe pass-through crash

      @dinhngtu The only output in the hypervisor.log file is what I sent earlier.

      Here is daemon.log:

      Oct 10 16:30:01 lab-pprod-xen03 systemd[1]: Started Session c18 of user root.
      Oct 10 16:30:01 lab-pprod-xen03 systemd[1]: Starting Session c18 of user root.
      Oct 10 16:30:01 lab-pprod-xen03 systemd[1]: Started Session c19 of user root.
      Oct 10 16:30:01 lab-pprod-xen03 systemd[1]: Starting Session c19 of user root.
      Oct 10 16:30:13 lab-pprod-xen03 tapdisk[20267]: received 'sring disconnect' message (uuid = 0)
      Oct 10 16:30:13 lab-pprod-xen03 tapdisk[20267]: disconnecting domid=6, devid=768
      Oct 10 16:30:13 lab-pprod-xen03 tapdisk[20267]: sending 'sring disconnect rsp' message (uuid = 0)
      Oct 10 16:30:13 lab-pprod-xen03 systemd[1]: Stopping transient unit for varstored-6...
      Oct 10 16:30:13 lab-pprod-xen03 systemd[1]: Stopped transient unit for varstored-6.
      Oct 10 16:30:13 lab-pprod-xen03 qemu-dm-6[20398]: qemu-dm-6: terminating on signal 15 from pid 2169 (/usr/sbin/xenopsd-xc)
      Oct 10 16:30:14 lab-pprod-xen03 /opt/xensource/libexec/xcp-clipboardd[20392]: poll failed because revents=0x11 (qemu socket)
      Oct 10 16:30:14 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: received 'close' message (uuid = 0)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: nbd: NBD server pause(0x198d410)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: nbd: NBD server pause(0x198d610)
      Oct 10 16:30:14 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif6.0
      Oct 10 16:30:14 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:14 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif6.0
      Oct 10 16:30:14 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:14 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif6.1
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: nbd: NBD server free(0x198d410)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: nbd: NBD server free(0x198d610)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: gaps written/skipped: 444/0
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd: b: 25600, a: 2686, f: 2658, n: 11023552
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: closed image /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd (0 users, state: 0x00000000, ty$
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: sending 'close response' message (uuid = 0)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: received 'detach' message (uuid = 0)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: sending 'detach response' message (uuid = 0)
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: tapdisk-log: closing after 0 errors
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: tapdisk-syslog: 32 messages, 2739 bytes, xmits: 33, failed: 0, dropped: 0
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: tapdisk-control: draining 1 connections
      Oct 10 16:30:14 lab-pprod-xen03 tapdisk[20267]: tapdisk-control: done
      Oct 10 16:30:16 lab-pprod-xen03 tapback[20277]: backend.c:1246 domain removed, exit
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Command line: -controloutfd 8 -controlinfd 9 -mode hvm_build -image /usr/libexec/xen/boot/hvmloader -domid 7 -store_port 5 -store_d$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Domain Properties: Type HVM, hap 1
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Determined the following parameters from xenstore:
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: vcpu/number:4 vcpu/weight:256 vcpu/cap:0
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: nx: 1, pae 1, cores-per-socket 0, x86-fip-width 0, nested 0
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: apic: 1 acpi: 1 acpi_s4: 0 acpi_s3: 0 tsc_mode: 0 hpet: 1
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: nomigrate 0, timeoffset 0 mmio_hole_size 0
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: viridian: 0, time_ref_count: 0, reference_tsc: 0 hcall_remote_tlb_flush: 0 apic_assist: 0 crash_ctl: 0 stimer: 0 hcall_ipi: 0
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: vcpu/0/affinity:1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: vcpu/1/affinity:1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: vcpu/2/affinity:1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: vcpu/3/affinity:1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_malloc_filemap    : 631 kB
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_module_file: filename="/usr/share/ipxe/ipxe.bin"
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_malloc_filemap    : 132 kB
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_boot_xen_init: ver 4.17, caps xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_parse_image: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: loader probe failed
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: loader probe OK
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail: ELF: phdr: paddr=0x100000 memsz=0x57e24
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail: ELF: memory: 0x100000 -> 0x157e24
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:f1:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:f1:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:f3:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:f3:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:21:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:f3:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:21:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:21:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:64:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:64:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:63:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:63:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:23:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:23:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting RMRRs for device '0000:22:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Getting total MMIO space occupied for device '0000:22:00.0'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Calculated provisional MMIO hole size as 0x20000000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Loaded OVMF from /usr/share/edk2/OVMF-release.fd
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_mem_init: mem 8184 MB, pages 0x1ff800 pages, 4k each
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_mem_init: 0x1ff800 pages
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_boot_mem_init: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: range: start=0x0 end=0xe0000000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: range: start=0x100000000 end=0x21f800000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail: PHYSICAL MEMORY ALLOCATION:
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail:   4KB PAGES: 0x0000000000000200
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail:   2MB PAGES: 0x00000000000003fb
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail:   1GB PAGES: 0x0000000000000006
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Final lower MMIO hole size is 0x20000000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_build_image: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x58 at 0x7f6d1170f000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x157e24  (pfn 0x100 + 0x58 pages)
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: xc: detail: ELF: phdr 0 at 0x7f6d0fb1e000 -> 0x7f6d0fb6f200
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x158+0x200 at 0x7f6d0f976000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_alloc_segment:   System Firmware module : 0x158000 -> 0x358000  (pfn 0x158 + 0x200 pages)
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x358+0x22 at 0x7f6d116ed000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_alloc_segment:   module0      : 0x358000 -> 0x379200  (pfn 0x358 + 0x22 pages)
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x37a+0x1 at 0x7f6d118cd000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x37a000 -> 0x37a878  (pfn 0x37a + 0x1 pages)
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x37b000
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_boot_image: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: domain builder memory footprint
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:    allocated
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:       malloc             : 18525 bytes
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:       anon mmap          : 0 bytes
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:    mapped
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:       file mmap          : 764 kB
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail:       domU mmap          : 2540 kB
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: Adding module 0 guest_addr 358000 len 135680
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: vcpu_hvm: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_set_gnttab_entry: d7 gnt[0] -> d0 0xfefff
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_set_gnttab_entry: d7 gnt[1] -> d0 0xfeffc
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Parsing '178bfbff-f6fa3203-2e500800-040001f3-0000000f-219c07a9-0040060c-00000000-311ed005-00000010-00000000-18000064-00000000-00000$
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_release: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Writing to control: 'result:1044476 1044479#012'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: domainbuilder: detail: xc_dom_release: called
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: Writing to control: 'result:1044476 1044479#012'
      Oct 10 16:30:16 lab-pprod-xen03 xenguest-7-build[47808]: All done
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "vif7.0" in table Interface
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "vif7.1" in table Interface
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif7.0
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif7.1
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi0 vif7.0 -- set interface vif7.0 "external-ids:\"xs-vm-uuid\"=\"ab6fa81f-59d2-$
      Oct 10 16:30:17 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi3 vif7.1 -- set interface vif7.1 "external-ids:\"xs-vm-uuid\"=\"ab6fa81f-59d2-$
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: tapdisk-control: init, 10 x 4k buffers
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: I/O queue driver: lio
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: I/O queue driver: lio
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: tapdisk-log: started, level 0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: Tapdisk running, control on /var/run/blktap-control/ctl48097
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdclient48097'
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: received 'attach' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: sending 'attach response' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: received 'open' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd version: tap 0x00010003, b: 25600, a: 2686, $
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: opened image /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd (1 users, state: 0x00000001, ty$
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: VBD CHAIN:
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd: type:vhd(4) storage:ext(2)
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: bdev: capacity=104857600 sector_size=512/512 flags=0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver48097.0'
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: registering for unix_listening_fd
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd-old48097.0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: Set up local unix domain socket on path '/var/run/blktap-control/nbdserver-new48097.0'
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: registering for unix_listening_fd
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: nbd: Successfully started NBD server on /var/run/blktap-control/nbd48097.0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: sending 'open response' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 tapback[48107]: tapback.c:445 slave tapback daemon started, only serving domain 7
      Oct 10 16:30:18 lab-pprod-xen03 tapback[48107]: backend.c:406 768 physical_device_changed
      Oct 10 16:30:18 lab-pprod-xen03 tapback[48107]: backend.c:406 768 physical_device_changed
      Oct 10 16:30:18 lab-pprod-xen03 tapback[48107]: backend.c:492 768 found tapdisk[48097], for 254:0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: received 'disk info' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: VBD 0 got disk info: sectors=104857600 sector size=512, info=0
      Oct 10 16:30:18 lab-pprod-xen03 tapdisk[48097]: sending 'disk info rsp' message (uuid = 0)
      Oct 10 16:30:18 lab-pprod-xen03 systemd[1]: Started transient unit for varstored-7.
      Oct 10 16:30:18 lab-pprod-xen03 systemd[1]: Starting transient unit for varstored-7...
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --domain = '7'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --chroot = '/var/run/xen/varstored-root-7'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --depriv = '(null)'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --uid = '65542'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --gid = '998'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --backend = 'xapidb'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --arg = 'socket:/xapi-depriv-socket'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --pidfile = '/var/run/xen/varstored-7.pid'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --arg = 'uuid:ab6fa81f-59d2-8bb1-fdf8-35969838ec7a'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --arg = 'save:/efi-vars-save.dat'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: 4 vCPU(s)
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: main: --arg = 'save:/efi-vars-save.dat'
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: 4 vCPU(s)
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: ioservid = 0
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: iopage = 0x7f5b175d1000
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: VCPU0: 7 -> 356
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: VCPU1: 8 -> 357
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: VCPU2: 9 -> 358
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: varstored_initialize: VCPU3: 10 -> 359
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: load_one_auth_data: Auth file '/var/lib/varstored/dbx.auth' is missing!
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: load_one_auth_data: Auth file '/var/lib/varstored/db.auth' is missing!
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: load_one_auth_data: Auth file '/var/lib/varstored/KEK.auth' is missing!
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: initialize_settings: Secure boot enable: false
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: initialize_settings: Authenticated variables: enforcing
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: IO request not ready
      Oct 10 16:30:18 lab-pprod-xen03 varstored-7[48166]: message repeated 3 times: [ IO request not ready]
      Oct 10 16:30:18 lab-pprod-xen03 forkexecd: [ info||0 ||forkexecd] qemu-dm-7[48182]: Arguments: 7 --syslog -std-vga -videoram 8 -vnc unix:/var/run/xen/vnc-7,lock-key-sync=off -acpi -priv -m$
      Oct 10 16:30:18 lab-pprod-xen03 forkexecd: [ info||0 ||forkexecd] qemu-dm-7[48182]: Exec: /usr/lib64/xen/bin/qemu-system-i386 qemu-dm-7 -machine pc-i440fx-2.10,accel=xen,max-ram-below-4g=3$
      Oct 10 16:30:18 lab-pprod-xen03 qemu-dm-7[48225]: Moving to cgroup slice 'vm.slice'
      Oct 10 16:30:18 lab-pprod-xen03 qemu-dm-7[48225]: core dump limit: 67108864
      Oct 10 16:30:18 lab-pprod-xen03 qemu-dm-7[48225]: char device redirected to /dev/pts/2 (label serial0)
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "tap7.0" in table Interface
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|db_ctl_base|ERR|no row "tap7.1" in table Interface
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap7.0
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap7.1
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi0 tap7.0 -- set interface tap7.0 "external-ids:\"xs-vm-uuid\"=\"ab6fa81f-59d2-$
      Oct 10 16:30:18 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 add-port xapi3 tap7.1 -- set interface tap7.1 "external-ids:\"xs-vm-uuid\"=\"ab6fa81f-59d2-$
      Oct 10 16:30:19 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: create new mem mapping failed! (err: 1)
      Oct 10 16:30:33 lab-pprod-xen03 qemu-dm-7[48225]: Detected Xen version 4.17
      Oct 10 16:30:34 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed! (err: 1)
      Oct 10 16:30:35 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: create new mem mapping failed! (err: 1)
      Oct 10 16:30:35 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed! (err: 1)
      Oct 10 16:30:36 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: create new mem mapping failed! (err: 1)
      Oct 10 16:30:37 lab-pprod-xen03 tapdisk[48097]: received 'sring connect' message (uuid = 0)
      Oct 10 16:30:37 lab-pprod-xen03 tapdisk[48097]: connecting VBD 0 domid=7, devid=768, pool (null), evt 16, poll duration 1000, poll idle threshold 50
      Oct 10 16:30:37 lab-pprod-xen03 tapdisk[48097]: ring 0xbed010 connected
      Oct 10 16:30:37 lab-pprod-xen03 tapdisk[48097]: sending 'sring connect rsp' message (uuid = 0)
      Oct 10 16:30:37 lab-pprod-xen03 qemu-dm-7[48225]: XenPvBlk: New disk with 104857600 sectors of 512 bytes
      Oct 10 16:30:38 lab-pprod-xen03 qemu-dm-7[48225]: About to call StartImage (0xDEC16D18)
      Oct 10 16:30:40 lab-pprod-xen03 qemu-dm-7[48225]: ExitBootServices -> (0xDEC16D18, 0xD9D)
      Oct 10 16:30:40 lab-pprod-xen03 tapdisk[48097]: received 'sring disconnect' message (uuid = 0)
      Oct 10 16:30:40 lab-pprod-xen03 tapdisk[48097]: disconnecting domid=7, devid=768
      Oct 10 16:30:40 lab-pprod-xen03 tapdisk[48097]: sending 'sring disconnect rsp' message (uuid = 0)
      Oct 10 16:30:40 lab-pprod-xen03 qemu-dm-7[48225]: ExitBootServices <- (Success)
      Oct 10 16:30:41 lab-pprod-xen03 qemu-dm-7[48225]: SetVirtualAddressMap -> (0x4B0, 0x30, 0x1)
      Oct 10 16:30:41 lab-pprod-xen03 qemu-dm-7[48225]: SetVirtualAddressMap <- (Success)
      Oct 10 16:30:41 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:41 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap7.1
      Oct 10 16:30:41 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:41 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap7.0
      Oct 10 16:30:41 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed! (err: 1)
      Oct 10 16:30:41 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port tap7.0
      Oct 10 16:30:41 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed! (err: 1)
      Oct 10 16:30:41 lab-pprod-xen03 qemu-dm-7[48225]: [00:08.0] xen_pt_region_update: Error: create new mem mapping failed! (err: 1)
      Oct 10 16:30:43 lab-pprod-xen03 tapback[48107]: frontend.c:216 768 front-end supports persistent grants but we don't
      Oct 10 16:30:43 lab-pprod-xen03 tapdisk[48097]: received 'sring connect' message (uuid = 0)
      Oct 10 16:30:43 lab-pprod-xen03 tapdisk[48097]: connecting VBD 0 domid=7, devid=768, pool (null), evt 49, poll duration 1000, poll idle threshold 50
      Oct 10 16:30:43 lab-pprod-xen03 tapdisk[48097]: ring 0xbee810 connected
      Oct 10 16:30:43 lab-pprod-xen03 tapdisk[48097]: sending 'sring connect rsp' message (uuid = 0)
      Oct 10 16:30:47 lab-pprod-xen03 systemd[1]: Stopping transient unit for varstored-7...
      Oct 10 16:30:47 lab-pprod-xen03 systemd[1]: Stopped transient unit for varstored-7.
      Oct 10 16:30:47 lab-pprod-xen03 qemu-dm-7[48225]: qemu-dm-7: terminating on signal 15 from pid 2169 (/usr/sbin/xenopsd-xc)
      Oct 10 16:30:47 lab-pprod-xen03 /opt/xensource/libexec/xcp-clipboardd[48221]: poll failed because revents=0x11 (qemu socket)
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: received 'sring disconnect' message (uuid = 0)
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: disconnecting domid=7, devid=768
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: sending 'sring disconnect rsp' message (uuid = 0)
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: received 'close' message (uuid = 0)
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: nbd: NBD server pause(0xbfe410)
      Oct 10 16:30:47 lab-pprod-xen03 tapdisk[48097]: nbd: NBD server pause(0xbfe610)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: nbd: NBD server free(0xbfe410)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: nbd: NBD server free(0xbfe610)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: gaps written/skipped: 2/0
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd: b: 25600, a: 2686, f: 2658, n: 11023552
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: closed image /var/run/sr-mount/5bd2dee1-cfb7-be70-0326-3f9070c4ca2d/721646c6-7a3f-4909-bde8-70dac75f5361.vhd (0 users, state: 0x00000000, ty$
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: sending 'close response' message (uuid = 0)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: received 'detach' message (uuid = 0)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: sending 'detach response' message (uuid = 0)
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: tapdisk-log: closing after 0 errors
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: tapdisk-syslog: 32 messages, 2735 bytes, xmits: 33, failed: 0, dropped: 0
      Oct 10 16:30:48 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:48 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif7.1
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: tapdisk-control: draining 1 connections
      Oct 10 16:30:48 lab-pprod-xen03 tapdisk[48097]: tapdisk-control: done
      Oct 10 16:30:48 lab-pprod-xen03 ovs-ofctl: ovs|00001|ofp_port|WARN|Negative value -1 is not a valid port number.
      Oct 10 16:30:48 lab-pprod-xen03 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=30 -- --if-exists del-port vif7.0
      Oct 10 16:30:49 lab-pprod-xen03 tapback[48107]: backend.c:1246 domain removed, exit
      

      @olivierlambert SKU is KCMYXRUG15T3

      posted in Compute
      Maelstrom96