Posts made by mohammadm
-
RE: How to kubernetes on xcp-ng (csi?)
So all of you have connected the storage directly to the VMs?
I'm trying to do it on iSCSI and NFS storage.
-
RE: How to kubernetes on xcp-ng (csi?)
@olivierlambert Nope, still the same error.
I don't think our SonicWall (in the datacenter) is blocking anything, since at home I am using UniFi.
-
RE: How to kubernetes on xcp-ng (csi?)
@olivierlambert said in How to kubernetes on xcp-ng (csi?):
Still a network problem (Cloud init can't reach something, no route to host)
This is the same error I used to get in my homelab when manually installing Ubuntu and trying to deploy K3s with RancherOS and Longhorn.
This attempt is in our datacenter, not my homelab. I'll do another setup with DHCP.
-
RE: How to kubernetes on xcp-ng (csi?)
@olivierlambert said in How to kubernetes on xcp-ng (csi?):
Is it working correctly now?
Currently it is stuck on this.
[FAILED] Failed to start Execute cloud user/final scripts.
cp-1 login:
I did not specify login credentials.
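For anyone debugging the same hang: cloud-init's own logs on the VM usually name the exact user script that failed. A quick check from the console — these are the standard cloud-init log locations, with fallbacks so the commands don't error out on images that lack them:

```shell
# Show which cloud-init stage failed (the fallback covers images
# where cloud-init is missing or still running).
cloud-init status --long 2>/dev/null || echo "cloud-init not available or still running"

# The final-stage output log usually contains the failing script's stderr.
tail -n 50 /var/log/cloud-init-output.log 2>/dev/null \
  || echo "no cloud-init output log found"
```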
-
RE: How to kubernetes on xcp-ng (csi?)
@olivierlambert said in How to kubernetes on xcp-ng (csi?):
You have a network issue (well, a DNS one) inside your VM, are you using the right network?
I feel so dumb. When creating a VM, the top network in the list is usually the correct one. For the Kubernetes recipe, I had to scroll all the way down and select the correct network.
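In case someone else hits this: you can confirm which network each of a VM's interfaces actually landed on from dom0 with the xe CLI. The VM name below is a placeholder:

```shell
# Run in dom0. Shows each VIF's device number and the network it is
# attached to. "my-k8s-vm" is a placeholder for your VM's name-label.
command -v xe >/dev/null || { echo "xe CLI not found -- run this in dom0"; exit 0; }
xe vif-list vm-name-label="my-k8s-vm" params=device,network-name-label
```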
-
RE: How to kubernetes on xcp-ng (csi?)
@Theoi-Meteoroi said in How to kubernetes on xcp-ng (csi?):
I've been using this with NVMe on 3 Dell 7920 boxen with PCI passthru.
https://github.com/piraeusdatastore/piraeus-operator
It worked well enough that I installed the rest of the NVMe slots to have 7TB per node. I pin the master kubernetes nodes each to a physical node, I use 3 so I can roll updates and patches. The masters serve the storage out to containers - so the workers are basically "storage-less". Those worker nodes can move around. All the networking is 10G with 4 interfaces, so I have one specifically as the backend for this.
Just one note on handing devices to the operator - I use raw NVMe disk.
There can't be any partition or PV on the device. I put a PV on, then erase it, so the disk is wiped. Then the operator finds the disk usable and initializes it. It tries not to use a disk that seems to be in use already. I also played a bit with XOSTOR, but on spinning rust. It's really robust with the DRBD backend once you get used to working with it. Figuring out object relationships will have you maybe drinking more than usual.
Did you use the built-in Recipes to create the Kubernetes cluster? I tried NVMe, iSCSI, SSD, and NFS share. All the same thing.
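Side note on the "no partition or PV on the device" step quoted above: instead of creating a PV and erasing it, wipefs can clear the signatures in one go. This is a sketch, not the quoted poster's exact procedure; the device path is an example and the operation is destructive:

```shell
# DESTRUCTIVE: clears filesystem/RAID/partition-table signatures so the
# operator sees the disk as unused. /dev/nvme0n1 is an example device
# name -- double-check it before running. Requires root.
DEV=/dev/nvme0n1
if [ -b "$DEV" ] && [ -w "$DEV" ]; then
  wipefs --all "$DEV"
  blkdiscard "$DEV" 2>/dev/null || true   # optional: TRIM the whole namespace
else
  echo "$DEV is not a writable block device on this machine"
fi
```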
-
RE: How to kubernetes on xcp-ng (csi?)
On console I am getting "Failed to start Execute cloud user/final scripts."
Suddenly it has an IP address, but the installation has failed.
-
RE: How to kubernetes on xcp-ng (csi?)
@olivierlambert said in How to kubernetes on xcp-ng (csi?):
Can you try on latest release channel?
Same thing, again an APIPA IP.
Trying to log in on the machine, is it admin : admin?
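For context, a 169.254.x.x (APIPA) address means no DHCP answer arrived and no static config was applied. If the recipe is being fed a cloud-init network config, a minimal static setup in the v1 network-config schema looks roughly like this — interface name, addresses and gateway are illustrative only:

```yaml
# cloud-init network-config, version 1 schema.
# eth0, 192.168.1.50/24 and 192.168.1.1 are example values.
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.168.1.50/24
        gateway: 192.168.1.1
        dns_nameservers:
          - 192.168.1.1
```

Whether and how cloud-init applied this is recorded in /var/log/cloud-init.log inside the VM.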
-
RE: How to kubernetes on xcp-ng (csi?)
Trying to build a cluster from the hub, but it is giving me "Err: http://deb.debian.org/debian bullseye/main amd64 ... ... Temporary failure resolving deb.debian.org".
Probably because the VM gets a 169.254.0.2 APIPA IP. Both a static IP and DHCP give me the same issue.
-
RE: SMB ISO share can't upload
@Tristis-Oris said in SMB ISO share can't upload:
@mohammadm looks like Truenas related issue. Check the account permissions.
if it stop working after XO update, try to rollback previous commit.
User – xcp-ng
Read | Write | Execute
For now I upload the ISOs through a Windows SMB client. The share is still accessible from XOA; only uploading is not working.
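Since the underlying failure in this topic is mount error(13) from mount.cifs, it can help to reproduce the SR's mount by hand on the XCP-ng host, outside the SR layer. Host, share, user, password and mountpoint below are all placeholders, and vers= is worth varying because SMB protocol and auth defaults differ between versions:

```shell
# Reproduce the SR's CIFS mount by hand (run as root on the XCP-ng host).
# //truenas/isos, xcp-ng and CHANGEME are placeholders for illustration.
command -v mount.cifs >/dev/null || { echo "cifs-utils not installed on this host"; exit 0; }
mkdir -p /mnt/isotest
mount -t cifs //truenas/isos /mnt/isotest \
  -o username=xcp-ng,password=CHANGEME,vers=3.0 \
  || echo "mount failed: try vers=2.1, or re-check the share user's permissions"
```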
-
RE: SMB ISO share can't upload
I do believe it is a permission issue on the Linux side, because connecting to the SMB share from a Windows device does work.
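One way to separate Linux-side auth from share permissions is to poke the share with smbclient from any Linux machine. Host, share and user below are placeholders based on this thread; drop -N in real use to be prompted for the password:

```shell
# Test SMB access from Linux; "truenas", "isos" and "xcp-ng" are placeholders.
command -v smbclient >/dev/null || { echo "smbclient not installed"; exit 0; }
smbclient -L //truenas -N || echo "listing shares failed (host/auth?)"
smbclient //truenas/isos -U xcp-ng -N -c 'ls' \
  || echo "share access failed (check the account's permissions)"
```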
-
SMB ISO share can't upload
I have an SMB TrueNAS SCALE share as an ISO repository. This used to work, but since this week I have been getting errors while trying to upload ISOs.
The error I am suddenly getting is:
"Failed to fetch SR_BACKEND_FAILURE_222(, Could not mount the directory specified in Device Configuration [opterr=mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)], )"
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
@JamesG This would indeed be awesome! I would prefer going the Intel route. Any contacts there @olivierlambert ?
-
RE: self signed cert
I'm also getting the "self-signed certificate" error since the last update.
I get it while trying to import/upload an ISO.
From source, commit e52b1
8.3 beta
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
@splastunov said in nVidia Tesla P4 for vgpu and Plex encoding:
@mohammadm
I'm talking now about vGPU, not passthrough:
- old drivers
- no way to monitor GPU load
- sometimes the GPU in Dom0 stops responding, and the only thing that can be done to solve this is to reboot the entire server with all the virtual machines on it
- etc.... I do not remember all the troubles I had with it
I installed the FirePro S7150x2 yesterday without any issues; it's been about 24 hours and so far so good. I do agree that I miss the nvidia-smi command for a better overview.
Why is the support regarding vGPU so bad and mostly outdated?
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
@splastunov said in nVidia Tesla P4 for vgpu and Plex encoding:
@austinw no licenses, but a lot of troubles.....
Curious, what troubles?
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
@austinw said in nVidia Tesla P4 for vgpu and Plex encoding:
@splastunov Do the AMD GPU's not require a license?
Nope. These work easily out of the box. I installed the GPU in one of our servers yesterday.
-
RE: ESXi -> XCP-ng Homelab
@austinw Since 80% is SSD, it does not generate that much heat, to be honest. All the machines have their original Supermicro fans, and I tweaked some settings in the IPMI. The average temperature of the CPUs and RAM is around 45-48°C without the fans making much noise.
-
ESXi -> XCP-ng Homelab
Recently we made the switch at the office from ESXi to XCP-ng. Of course, I had to do the same thing in my homelab.
XCP01 Main
Supermicro SC216BE1C-R920LPB
Supermicro X11DPi-N(T)
2x Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz
16x 32GB Samsung DDR4 ECC RAM
1x Nvidia NVS 510 Passthrough
1x 256GB Samsung NVMe OS boot
2x 4TB Samsung SSD PM883 local ZFS
48x 2TB Samsung SSD PM883 RAIDZ2 (24 internal, 24 in JBOD)
Avago Fusion-MPT 12GSAS SAS3008 Passthrough
2x 920W Platinum+ SQ PSU (PWS-920P-SQ)

JBOD01
Supermicro SC216BE1C-R920LPB JBOD
24x 2TB Samsung SSD PM883 RAIDZ2
2x 920W Platinum+ SQ PSU (PWS-920P-SQ)

XCP02 Test
Supermicro SC216BE1C-R920LPB
Supermicro X11DPi-N(T)
2x Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz
16x 32GB Samsung DDR4 ECC RAM
1x Nvidia NVS 510 Passthrough
1x 256GB Samsung NVMe OS boot
2x 1TB Samsung SSD PM883 local ZFS
24x 1TB Samsung SSD PM883 RAIDZ2
Avago Fusion-MPT 12GSAS SAS3008 Passthrough
2x 920W Platinum+ SQ PSU (PWS-920P-SQ)

XCP03 XOA & Docker
Dell Precision Tower 3420
Intel(R) Xeon(R) CPU E3-1245 v5 @ 3.50GHz
2x 16GB Samsung DDR4 ECC RAM
1x 256GB Samsung NVMe OS boot
2x 1TB Samsung SSD PM883 local RAID1
4x 1Gb NIC

SYN01 Xpenology
Supermicro SC216BE1C-R920LPB
Supermicro X11DPi-N(T)
2x Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz
16x 16GB Kingston DDR4 ECC RAM
1x 16GB USB3 OS boot
8x 10TB HGST 7200RPM 4K HDD
Avago Fusion-MPT 12GSAS SAS3008 PCI-Express
2x 920W Platinum+ SQ PSU (PWS-920P-SQ)

Networking
UDM-SE (6TB HGST)
USW-24-PoE
3x UAP-nanoHD
3x USW Flex
G4 Doorbell
2x G4 Bullet
G3 Instant

What am I running?
Windows Servers
Ubuntu Servers
TrueNAS SCALE
Docker Swarm

Next Steps
Adding SSDs as cache to my Xpenology
Virtualizing the Xpenology machine onto XCP02, but somehow Xpenology and XCP-ng are not good friends yet.