@mdraugh sure.
As a workaround you can create a simple table with MAC-IP-VM fields, and a simple script to deploy a new VM with the first "free" MAC. Yes, it sounds like developing your own XO, but I believe it should help a lot.
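A minimal sketch of what I mean (the CSV path, its layout, the template and network names are all just assumptions, adjust them to your setup):
# mac-table.csv has lines like "aa:bb:cc:dd:ee:01,10.0.0.11," (MAC,IP,VM; empty VM field = free)
MAC=$(awk -F, '$3 == "" {print $1; exit}' /root/mac-table.csv)
VM_UUID=$(xe vm-install template="Other install media" new-name-label="new-vm")
NET_UUID=$(xe network-list bridge=xenbr0 --minimal)
xe vif-create vm-uuid=$VM_UUID network-uuid=$NET_UUID device=0 mac=$MAC
# then write the VM name back into the third column so that MAC is marked as used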
It is not a problem at all.
You can always set the MAC manually, or create a VIF with a specific MAC via XAPI or the CLI.
Link to the XAPI VIF class: https://xapi-project.github.io/xen-api/classes/vif.html
check the create method.
CLI command:
xe vif-create vm-uuid=<VM UUID> network-uuid=<NETWORK UUID> device=<ETHERNET INTERFACE NUMBER> mac=<MAC ADDRESS>
The device number can be in the range 0-15, and the MAC must be in the format XX:XX:XX:XX:XX:XX.
Afterwards you need to "activate" the new VIF with:
xe vif-plug uuid=<VIF UUID>
If the VM is not running xe-guest-tools, you have to switch the VM off and power it on again to activate the new VIF.
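For example (the VM name, network bridge and MAC below are just placeholders):
xe vif-create vm-uuid=$(xe vm-list name-label="my-vm" --minimal) \
    network-uuid=$(xe network-list bridge=xenbr0 --minimal) \
    device=1 mac=a2:34:56:78:9a:bc
xe vif-plug uuid=<UUID printed by vif-create>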
@Mark-C Thank you!
Could you please tell me how you scale your storage with iSCSI? What hardware/software are you using for storage? Is it possible to add storage nodes on the fly, or do you have to deploy a new storage cluster every time you grow?
@olivierlambert NFS and iSCSI have a single point of failure. Yes, it is possible to deploy multipath iSCSI, but it is too complicated. I like Ceph RBD because it does not have a single point of failure.
So I'm looking for something similar.
From my point of view XOSTOR is a good idea, but in some cases there is no need to use all nodes as XCP-ng hosts. For example, you do not need a large amount of RAM or a fast modern CPU for storage cluster nodes.
I think the best solution in my case will be to deploy the XOSTOR controller in the XCP-ng cluster, connected to a separate storage cluster.
At first glance, I assume it should be possible to connect the storage cluster to XCP-ng with this command:
linstor resource create node1 test1 --diskless
So the basic idea is to use the XCP-ng nodes as linstor-controller/linstor-satellite and the "storage" nodes as linstor-satellite only.
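Something like this is what I have in mind (the node names, IP, volume group and size are made up, and I have not tested it with XOSTOR, so treat it as a rough sketch):
# register a dedicated storage box as a satellite-only node
linstor node create storage1 192.168.0.21 --node-type satellite
# back it with an LVM volume group
linstor storage-pool create lvm storage1 pool_hdd vg_hdd
# define a resource and place its data on the storage node
linstor resource-definition create test1
linstor volume-definition create test1 100G
linstor resource create storage1 test1 --storage-pool pool_hdd
# attach it disklessly on the XCP-ng host, which then accesses the data over the network
linstor resource create xcp1 test1 --diskless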
Hi!
I'm looking for a new storage cluster for XCP-ng, because Ceph RBD performance is very poor.
The main question now is: is it possible to build the XOSTOR (LINSTOR) cluster separately from XCP-ng and connect it over Ethernet?
There is no information about such a scenario in this article.
So I would like to have a "compute" cluster of XCP-ng nodes with fast local NVMe disks plus a dedicated storage cluster with a large number of HDDs connected via Ethernet.
And the second question is about scaling.
How can this storage cluster be scaled? Is it possible to add storage nodes online without interrupting clients (VMs)?
Thank you
Maybe netdata will cover everything?
There are no default alerts, but you can easily create them yourself.
Also, it is very easy to deploy a "parent" netdata node and stream metrics to it from all hosts (maybe this part could be integrated into the XOA free version?).
You do not need a Netdata Cloud account for this solution.
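A rough sketch of the streaming setup (the parent IP and the API key are placeholders, and the stream.conf path can differ depending on how netdata was installed):
# on every child host: send metrics to the parent
cat >> /etc/netdata/stream.conf <<'EOF'
[stream]
    enabled = yes
    destination = 192.168.0.10:19999
    api key = 11111111-2222-3333-4444-555555555555
EOF
systemctl restart netdata

# on the parent: accept streams with that API key
cat >> /etc/netdata/stream.conf <<'EOF'
[11111111-2222-3333-4444-555555555555]
    enabled = yes
EOF
systemctl restart netdata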
@RS sorry, I have no idea how to fix it
You can always try to use XAPI, or the xe CLI tool.
@RS A few years ago I faced the same errors on all XVA backups, and after that I switched to VDI backups.
I did not find the root of the problem, but I know how to "repair" a broken backup.
Check the XVA backup with the command:
vhd-util check -n name_of_backup.xva
If you get something like Checksum : 0x0|0xffffffff (Bad!), the solution is:
# install build dependencies for xva-img
apt-get install libssl-dev g++ cmake
add-apt-repository ppa:ubuntu-toolchain-r/test && apt-get update && apt-get install -y gcc-7
# to compile, execute the following commands in the xva-img source folder
cmake ./
make install
# unpack the XVA archive (it is a plain tar) and export the first disk as a raw image
mkdir my-virtual-machine
tar -xf name_of_backup.xva -C my-virtual-machine
chmod -R 755 my-virtual-machine
xva-img -p disk-export my-virtual-machine/Ref\:1/ disk.raw
# convert the raw image to VHD, copy it onto the SR and verify it
apt install qemu-utils
qemu-img convert -f raw -O vpc disk.raw restored.vhd
cp restored.vhd /run/sr-mount/{sr-uuid}/
vhd-util read -p -n restored.vhd
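If you then want XCP-ng to pick the restored disk up as a VDI on a file-based SR, something like this should work (the rename-to-UUID step is my assumption, so test it before relying on it):
NEW_UUID=$(uuidgen)
mv /run/sr-mount/{sr-uuid}/restored.vhd /run/sr-mount/{sr-uuid}/$NEW_UUID.vhd
xe sr-scan uuid={sr-uuid}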
@KPS said in Simulating network cable disconnect:
This is an old thread, but to keep it in one place: is there any option to start a VM with disconnected vif?
As a workaround/trick you can use the locking mode.
Set it to "locked" and do not allow any IPs, so all traffic will be dropped on this VIF. It will behave like "disabled". After that you can set the locking mode back to "unlocked" to allow any traffic.
@mohammadm
I'm talking about vGPU now, not passthrough.
@austinw no licenses, but a lot of trouble...
Instructions
Set the "other-config:auto_poweron=true" parameter on both target VM and resource pool:
#xe pool-param-set uuid=<pool_UUID> other-config:auto_poweron=true
#xe vm-param-set uuid=<vm_UUID> other-config:auto_poweron=true
great news! Thank you @michael-manley
Just use official AMD drivers
https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-x2
It works fine, but sometimes Dom-0 "loses" the graphics adapter and you have to restart the whole server...
I did find a solution, and AMD stopped supporting it...
Some time later I faced a problem where VMs (Linux and Windows) could not correctly start the GPU (AMD MxGPU).
In the Windows device manager there was error #43.
I solved this error without a host reboot by reloading the gim module.
Hope this will help somebody else.
# unload the MxGPU driver modules and load them again
rmmod gim gim_api
modprobe -a gim gim_api
In my case the problem was a bad connection.
I reassembled the server, cleaned the PCI-e (GPU) contacts, and now it is stable.
But it would be nice to have some tool to control and monitor AMD GPUs.
@hani it began asking for a license after one day, but without throttling.
I have switched to AMD GPUs
@Dani Strange. Is it executable? Did you try to follow my instructions step by step to make vGPU work?