Best posts made by bc-23
-
RE: RunX: tech preview
@ronan-a I have seen there are updated packages, thanks
After the update I'm able to start the container/VM
-
RE: Ceph (qemu-dp) in XCP-ng 7.6
I'm looking forward to the SMAPIv3 integration. I already took a look at the concept and I really like the separation of volume and datapath plugins. I think this will make a lot of things easier in the future.
Latest posts made by bc-23
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
Thanks for the hint, I will look into it.
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
It is a 64-bit PV guest.
The reason we are still using PV guests is that our install environment adds boot parameters for the installer, which has been quite a nice way to manage the installation. Running as an HVM guest works without issue.
But since PV guests are no longer supported, it seems I should now start migrating our environment to HVM guests.
May I ask what would be a good way to pass install information into an HVM guest? At the moment I am thinking about using the xenstore to provide information such as the IP address to use, and creating a custom Debian installer that reads this information from the xenstore.
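A minimal sketch of that xenstore idea (the vm-data/* key names and the values here are just my own placeholders, not anything XCP-ng prescribes):
# on the host: attach install parameters to the VM before starting it
xe vm-param-set uuid=<VM-UUID> xenstore-data:vm-data/ip=192.0.2.10
xe vm-param-set uuid=<VM-UUID> xenstore-data:vm-data/preseed-url=http://installer.example/preseed.cfg
# inside the guest installer (requires the xenstore utilities, e.g. xenstore-read):
IP="$(xenstore-read vm-data/ip)"
PRESEED="$(xenstore-read vm-data/preseed-url)"
The xenstore-data entries are written into the domain's xenstore when the VM starts, so the installer could pick them up early in boot.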
I also know about cloud-init, but from what I have read so far I don't think it would fit our environment. Thanks,
-
RE: XCP-ng 8.3 betas and RCs feedback 🚀
Hi,
I have an issue starting a PV VM on a freshly installed XCP-ng 8.3 server.
The VM was created from a template I exported from our XCP-ng 8.2 cluster and imported into the new 8.3 server.
The template creates an empty PV VM containing the PV boot information to do a network installation. The error message I get is:
xenopsd internal error: VM = fb7977de-aa28-273b-7e07-90a8c8639559; domid = 9; Bootloader.Bad_error
In xensource.log I don't see much more information:
Sep 28 08:20:12 X xapi: [error||26203 |Async.VM.start R:5c82647ea60e|xenops] Re-raising as INTERNAL_ERROR [ xenopsd internal error: VM = fb7977de-aa28-273b-7e07-90a8c8639559; domid = 9; Bootloader.Bad_error ]
Sep 28 08:20:12 X xapi: [error||26203 ||backtrace] Async.VM.start R:5c82647ea60e failed with exception Server_error(INTERNAL_ERROR, [ xenopsd internal error: VM = fb7977de-aa28-273b-7e07-90a8c8639559; domid = 9; Bootloader.Bad_error ])
Sep 28 08:20:12 X xapi: [error||26203 ||backtrace] Raised Server_error(INTERNAL_ERROR, [ xenopsd internal error: VM = fb7977de-aa28-273b-7e07-90a8c8639559; domid = 9; Bootloader.Bad_error ])
Sep 28 08:20:12 X xapi: [error||26203 ||backtrace] 1/39 xenopsd-xc Raised at file ocaml/xenopsd/xc/xenops_server_xen.ml, line 2201
Sep 28 08:20:12 X xapi: [error||26203 ||backtrace] 2/39 xenopsd-xc Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
...
I skipped the remaining 36 lines of the backtrace, as it only seems to be the OCaml stack trace and doesn't seem to contain any additional relevant information.
When I compare two newly created VMs based on the PV template in the 8.2 and 8.3 environments, they look identical.
The PV-related parameters from vm-param-list look like this on both VMs:
xe vm-param-list uuid=<UUID> | grep PV
PV-kernel ( RW):
PV-ramdisk ( RW):
PV-args ( RW): preseed/url=<install specific information>
PV-legacy-args ( RW):
PV-bootloader ( RW): eliloader
PV-bootloader-args ( RW):
PV-drivers-version (MRO): <not in database>
PV-drivers-up-to-date ( RO) [DEPRECATED]: <not in database>
PV-drivers-detected ( RO): <not in database>
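For completeness, the comparison was basically a dump-and-diff of the full parameter lists (a trivial sketch; the file names are arbitrary):
xe vm-param-list uuid=<UUID-ON-8.2> > /tmp/pv-vm-8.2.txt   # run on the 8.2 host
xe vm-param-list uuid=<UUID-ON-8.3> > /tmp/pv-vm-8.3.txt   # run on the 8.3 host
diff /tmp/pv-vm-8.2.txt /tmp/pv-vm-8.3.txt                 # after copying both files to one machine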
I see a difference in the bios-strings parameter, which is empty in 8.2 but contains the following in 8.3:
bios-strings (MRO): bios-vendor: Xen; bios-version: ; system-manufacturer: Xen; system-product-name: HVM domU; system-version: ; system-serial-number: ; baseboard-manufacturer: ; baseboard-product-name: ; baseboard-version: ; baseboard-serial-number: ; baseboard-asset-tag: ; baseboard-location-in-chassis: ; enclosure-asset-tag: ; hp-rombios: ; oem-1: Xen; oem-2: MS_VM_CERT/SHA1/bdbeb6e0a816d43fa6d3fe8aaef04c2bad9d3e3d
Do you have a hint as to what could cause this error, or where I could find additional details? The error message itself does not tell me much.
Thanks.
-
RE: RunX: tech preview
@ronan-a I have seen there are updated packages, thanks
After the update I'm able to start the container/VM
-
RE: RunX: tech preview
@ronan-a The server is still running on 8.2
[11:21 fraxcp04 ~]# rpm -qa | grep xenops
xenopsd-0.150.5.1-1.1.xcpng8.2.x86_64
xenopsd-xc-0.150.5.1-1.1.xcpng8.2.x86_64
xenopsd-cli-0.150.5.1-1.1.xcpng8.2.x86_64
Are the patches for this version?
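In case it matters, this is roughly how I would check where the installed build comes from and whether a newer one is offered (a sketch; the repo name pattern is a guess and may differ):
yum info xenopsd | grep -i -E 'version|from repo'        # repo the installed package came from
yum list available 'xenopsd*' --enablerepo='xcp-ng-*'    # any newer builds visible to this host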
-
RE: RunX: tech preview
Hi,
I have started to play around with this feature. I think it's a great idea
At the moment I'm running into an issue on container start:
message: xenopsd internal error: Could not find BlockDevice, File, or Nbd implementation: {"implementations":[["XenDisk",{"backend_type":"9pfs","extra":{},"params":"vdi:80a85063-9b59-4fda-82c9-017be0fe967a share_dir none ///srv/runx-sr/1"}]]}
I have created the SR as described above:
uuid ( RO) : 968d0b84-213e-a269-3a7a-355cd54f1a1c
name-label ( RW): runx-sr
name-description ( RW):
host ( RO): fraxcp04
allowed-operations (SRO): VDI.introduce; unplug; plug; PBD.create; update; PBD.destroy; VDI.resize; VDI.clone; scan; VDI.snapshot; VDI.create; VDI.destroy; VDI.set_on_boot
current-operations (SRO):
VDIs (SRO): 80a85063-9b59-4fda-82c9-017be0fe967a
PBDs (SRO): 0d4ca926-5906-b137-a192-8b55c5b2acb6
virtual-allocation ( RO): 0
physical-utilisation ( RO): -1
physical-size ( RO): -1
type ( RO): fsp
content-type ( RO):
shared ( RW): false
introduced-by ( RO): <not in database>
is-tools-sr ( RO): false
other-config (MRW):
sm-config (MRO):
blobs ( RO):
local-cache-enabled ( RO): false
tags (SRW):
clustered ( RO): false

# xe pbd-param-list uuid=0d4ca926-5906-b137-a192-8b55c5b2acb6
uuid ( RO) : 0d4ca926-5906-b137-a192-8b55c5b2acb6
host ( RO) [DEPRECATED]: a6ec002d-b7c3-47d1-a9f2-18614565dd6c
host-uuid ( RO): a6ec002d-b7c3-47d1-a9f2-18614565dd6c
host-name-label ( RO): fraxcp04
sr-uuid ( RO): 968d0b84-213e-a269-3a7a-355cd54f1a1c
sr-name-label ( RO): runx-sr
device-config (MRO): file-uri: /srv/runx-sr
currently-attached ( RO): true
other-config (MRW): storage_driver_domain: OpaqueRef:a194af9f-fd9e-4cb1-a99f-3ee8ad54b624
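For reference, the SR was created more or less like this (reconstructed from the parameters above, so treat it as a sketch rather than the exact commands):
mkdir -p /srv/runx-sr
xe sr-create host-uuid=a6ec002d-b7c3-47d1-a9f2-18614565dd6c name-label=runx-sr type=fsp device-config:file-uri=/srv/runx-sr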
I also see that a symlink 1 is created in /srv/runx-sr, pointing to an overlay image. The VM is left in the paused state after the error above.
The template I used was an old Debian PV template, from which I removed the PV-bootloader and the install-* attributes from other-config. What template would you recommend using?
Any idea what could cause the error above?
Thanks,
Florian
-
RE: Ceph (qemu-dp) in XCP-ng 7.6
I'm looking forward to the SMAPIv3 integration. I already took a look at the concept and I really like the separation of volume and datapath plugins. I think this will make a lot of things easier in the future.
-
RE: Ceph (qemu-dp) in XCP-ng 7.6
Thanks for the update. Then I will go with an LVMoRBDSR as a workaround for the moment. Maybe I'll find some time to build a development environment and do some tests with the smapiv3-changes branch.
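If it ends up being a plain LVM SR on top of a mapped RBD device, the rough idea would be (only a sketch; it assumes ceph-common and the rbd kernel module are usable in dom0, and the pool/image names are placeholders):
rbd create xcp-pool/xcp-lvm --size 1024000        # image size in MB
rbd map xcp-pool/xcp-lvm                          # appears as /dev/rbd/xcp-pool/xcp-lvm
xe sr-create host-uuid=<HOST-UUID> name-label=ceph-lvm type=lvm content-type=user device-config:device=/dev/rbd/xcp-pool/xcp-lvm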
-
Ceph (qemu-dp) in XCP-ng 7.6
Hi,
over the last week I have tried to understand how the RBDSR plugin works (or, at the moment, does not work) in XCP-ng together with qemu-dp.
I see how the plugin works together with Ceph and how it communicates with qemu-dp, but I don't understand the current state of the qemu-dp package.
At the moment I'm able to create a disk image in Ceph, but starting a VM with this image results in a QEMU error saying the rbd protocol is unknown.
Maybe someone can explain to me the current state of the qemu-dp package in XCP-ng 7.6? I see that there is a version in the xcp-ng-extras repo, which I have installed at the moment, but it seems that it doesn't support the rbd protocol. For 7.5 there was an extra qemu-dp package built for Ceph; should this be the same package as the one in the 7.6 extras repo?
I also see that there is work being done in the smapiv3-changes branch of this repo:
https://github.com/xcp-ng/qemu-dp/tree/smapiv3-changes
As there is a file block/rbd.c, I would assume this is a qemu-dp version which supports the rbd protocol; is this correct?
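One way to check whether an installed qemu-dp build actually has RBD support compiled in would be to look at its linked libraries (a sketch; I'm not sure of the exact binary path, so it is taken from the package file list):
rpm -ql qemu-dp | grep '/bin/'          # locate the qemu-dp binary shipped by the package
ldd <path-from-above> | grep -i rbd     # librbd linkage indicates RBD support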
It would be great if someone could give me an explanation of how qemu-dp is handled in XCP-ng and what your plans for it are.