Posts
-
RE: Losing Time on multiple VM machines
@Otis2772 Where is your time synchronization coming from? Time differences can be caused by a slew of things, including power settings (VM or not), CPU availability, network connectivity, etc. Are they all the same OS or different OSes?
-
RE: Snapshot created and then deleted automatically
On a 1.78 TB storage repository, you don't have nearly enough disk space to be snapshotting an 832 GB disk. You will need to increase the size of your storage repository sufficiently to avoid such issues. While a snapshot should only take twice the amount of space of the disk you are snapshotting, overhead and other activity on the storage repository mean you should probably aim to have at least twice the size of the disk you are snapshotting actually FREE on the storage repository. E.g., to snapshot a 1 TB disk, you should have 2 TB of free space on the storage repository.
Note: I am aware of all the nuances of snapshot trees, etc. - it's just not worth getting into here when the simple recommendation above prevents issues such as these.
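The rule of thumb above is easy to script as a pre-flight check. A minimal sketch, using the numbers from this thread as placeholders; on a real host you would read the actual values with `xe sr-param-list` / `xe vdi-param-list` instead of hard-coding them:

```shell
# Back-of-the-envelope "is it safe to snapshot?" check.
# All three sizes below are hypothetical examples, not real readings.
DISK_GIB=832          # size of the disk being snapshotted
SR_TOTAL_GIB=1822     # ~1.78 TB storage repository
SR_USED_GIB=1100      # assumed current utilisation

FREE_GIB=$((SR_TOTAL_GIB - SR_USED_GIB))
NEEDED_GIB=$((DISK_GIB * 2))   # rule of thumb: 2x the disk size, free

if [ "$FREE_GIB" -ge "$NEEDED_GIB" ]; then
    echo "OK: ${FREE_GIB} GiB free >= ${NEEDED_GIB} GiB needed"
else
    echo "NOT SAFE: ${FREE_GIB} GiB free < ${NEEDED_GIB} GiB needed"
fi
```

With the example numbers above, the check fails (722 GiB free vs. 1664 GiB needed), which is exactly the situation described in the original post.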
-
RE: Logs Partition Full
This often occurs when storage issues (whether readily apparent or not), which may or may not be related to intermittent networking, cause the log files to fill up. We can't say what it is without seeing logs; but, FYI, that is the reason I have most often seen this occur. Check your /var/log directory.
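A quick way to triage this from dom0 is to see which files under /var/log are the largest and how full the partition actually is. A minimal sketch using standard tools:

```shell
# Show the ten largest items under /var/log, then the partition usage.
# LOG_DIR is overridable so the same snippet works on any box.
LOG_DIR=${LOG_DIR:-/var/log}
du -sh "$LOG_DIR"/* 2>/dev/null | sort -rh | head -n 10
df -h "$LOG_DIR"
```

Whichever daemon's log tops the list is usually the one being flooded by the underlying storage or network problem.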
-
RE: Hiding hypervisor from guest to prevent Nvidia Code 43
@slavD I'm in agreement with you. I wanted to replace my multiple gaming rigs with multiple VMs with GPUs (since I already run the server); but, I spent at least a month messing around with various options and never came up with a stable, reliable, and "not overly complicated" way to do so without drastically changing my approach to hardware - the easiest being going with AMD GPUs and increasing my power consumption over 9000%. It's why I've ultimately paused that project for now until some better solution comes along, or until I can "empty" my existing host and repurpose it with a solution that will work (probably unRAID despite my concerns with other parts of it).
-
RE: Hiding hypervisor from guest to prevent Nvidia Code 43
There is still no guarantee that even modified drivers will work, however, due to the way NVIDIA drivers function. A lot of people have spent a lot of time getting it to work - with some success; but, everything I've read points to this being something you'll have to screw around with and "maintain" regularly to keep it working. If you're expecting it to be a smooth, easy, and "permanent" process, I would recommend not even starting to try it on Xen (Server or XCP-ng).
If you REALLY need to do this with VMs, KVM and unRAID both have built-in capabilities to do this - which seem to work a bit better than others. VMware is supposed to have a similar feature; but, I've not gotten it to work. I will tell you that, in messing around with this on all the hypervisors, even on KVM and unRAID, some card/driver combinations just will not work, and you'll honestly waste more time screwing around with it than just spinning up a dedicated box or grabbing an AMD card. I'm not an AMD guy, and I didn't choose that route myself; but, they at least do not have this "limitation" in their driver stack.
Don't think NVIDIA will come around and fix it, either - their forums are rife with this topic; and, they've pretty much said "we don't have any plans to fix this bug [that we actually introduced on purpose]".
-
RE: AMD Radeon Vega M GH Passthrough
@imad2nsi This is starting to remind me of my 4-week foray into messing around with unRAID, coming out of it with the conclusion that it was more trouble than it was worth most of the time (with the various types of hardware I tried it on).
-
RE: XCP-ng Center Notifications - Updates
@fibrewire If you're looking for a free way to do so, you can install Xen Orchestra from the sources and do it through that. I honestly don't miss the updates portion of XCP-ng Center (even after having used it for years in XenCenter). It was very clunky and time-consuming; and, I ultimately ended up scripting a lot of it locally on hosts instead. This is obviously much easier now that it can be done through a simple "yum" command instead of having to pull all the separate packages and install them like XenServer does.
XOA will obviously provide that feature through the paid versions as well (if you've already got it). I'm not big on pushing people to XOA from XCP-ng Center, as I still use XCP-ng Center way more than XOA (at least until they push out the new version with the tree view so I can give it a try); but, that's the easy answer to your question.
-
RE: Add Host to pool, is this non-destructive to local VM's?
Yes. That would be a way of doing so. If you can afford the downtime, you could also use a backup solution (XOA, Alike DR, etc.) to back up the VMs from both hosts, build the pool from scratch using both hosts in a clean state, and then restore into the new pool (this way you get a chance not only to start fresh, but to implement any big "lessons learned" or "new ideas" you've got on your list).
-
RE: Add Host to pool, is this non-destructive to local VM's?
That is correct. My recommendation would be to migrate the VMs from one host to the other, then wipe that host and bring it into the pool (or just bring it into the pool). I'm not sure it wipes the actual data of those VMs off the local storage repository; but, it will certainly clear out the metadata of all the VMs on that host. As far as I'm aware, you can't recover that metadata into a pool.
-
RE: XCP-ng Center: Future
@Appollonius @borzel They've been saying that it'll be relevant for 20 years, though.
-
RE: XEN Orchestra Snapshot Space for Backups
@olivierlambert I'll give you that one if all I need is a simple NFS share for something like ISOs. I've got a virtualization stack with a demand of upwards of 10,000 IOPS, and, every time I've tried to push to NFS, performance drops like a rock. My XPenology setup (with SSD cache) worked quite well. We'll see how things develop here in a couple/few weeks as I dive back in with this appliance rebuild.
-
RE: XEN Orchestra Snapshot Space for Backups
@fohdeesha I'm going to be honest with you. I've had terrible luck with FreeNAS more times than I can count. It's always been a mix of hardware compatibility issues, appearance of reliability, a not-so-great interface, and a slew of others. I've found unRAID to suffer from similar things. I know a lot of people who use it; and, online, it's obviously widely used by a very large audience - it just doesn't seem to work well for me. I just recently spent at least a couple days trying to use it in front of a fiber-channel array to no avail.
I am about to drain and rebuild an x86-based storage appliance (currently running XPenology) in the next few weeks, so I may go ahead and try it on this piece of hardware and see where it gets me. I loved OpenFiler back in the day, and had pretty good luck with Nexenta for a while; but, one is just asking to be compromised using it in 2020, and the other doesn't scale well with modern storage without spending a fortune. XPenology works great, and gets you all the ease of Synology; but, it is very finicky about hardware. I ultimately had to settle for a solution that I really don't like; but, it works for now (Windows Server on VMware, with the FC array passed through as a raw disk).
-
RE: XEN Orchestra Snapshot Space for Backups
@fohdeesha My experience is the exact opposite - both regarding issues and performance. That said, part of the difference you observe in support tickets could be that the Pro users submitting a lot of iSCSI-related tickets are users who don't have the knowledge or experience to be deploying iSCSI in the first place, and should stick to NFS anyway. Those submitting NFS tickets are likely the above users, plus us standard iSCSI guys who are trying to accommodate a business need with NFS and find it very finicky and frustrating to set up in various types of environments.
I'm not sure what hardware you guys are running that we get so much "NFS is amazing, get rid of your iSCSI!"; a lot of us aren't running the latest and greatest, and iSCSI is a considerably more efficient type of storage setup.
I digress.
On topic, my experience is that backups take somewhere between 2x and 3x the storage space initially consumed by the VM (most of this being the snapshot itself); and, I'm curious about the VSS stuff you guys are working through. Have you guys reached out to someone like Quadric to ask how they're doing their VSS? (officially supported inside Xen, XCP-ng, and Hyper-V environments)
-
RE: Citrix Hypervisor 8.1 released
@GHW @olivierlambert The Access Control Lists in the DVSC instance were easily one of its most powerful and useful features. They essentially turned the DVS into a virtual switch capable of actual port-level security. If it had full-featured routing, it would have been a very nifty solution for layer 3 in the virtual network space without having to run an entirely different appliance to achieve that capability. Perhaps that is something the SDN could do at some point? (become a layer-3-capable switch with security access control)
-
RE: XO to manage KVM?
@beagle The simple answer here, which has been implied by others, is...
You DO NOT, EVER, install other "services" on your hypervisor in a production environment - no matter what your cost, convenience, etc. "desires" are.
That's just not how this is all supposed to work; and, any technical guy worth his job position will know right away to not only recommend against setting it up like that, but, will adamantly decline being "forced" to do so - consequences be damned.
What is even worse, is you mentioned a 20 TB requirement for file server storage. There is literally only one solution for this...
A DEDICATED file server - whether that be a VM with enough attached storage (VDIs using workarounds, iSCSI/NFS/SMB direct attached), or you build a back-end NAS system that either passes off the data directly, or utilizes a front-end file server "proxy".
You also claim a need for high-performance - this automatically dismisses any "cheap" solution that aligns with some of your concepts above. Do it right...the first time.
There is a huge difference between "finding a creative solution" and "ignorantly (intentionally or not) pursuing an unrealistic and ill-advised idea".
-
RE: Modular system from multiple touch devices
This all seems very over the top for what you're trying to achieve - since there are much simpler solutions out there. BTW, a much better approach than Samsung Dex was Microsoft's Continuum. It, unfortunately, did not catch on in the consumer space (nor will Dex); and, thus, has been relegated to niche business use.
I see where you're going with the end concept; but, it is highly unlikely (if not impossible) that such an approach would ever be broadly accepted and implemented in any typical user environment - it's essentially the same management/implementation problem as existing solutions - you're just moving the burden to another portion of the process with no real advantage.
-
XCP-ng and NVIDIA GPUs
Now, I'm sure many of us are aware of NVIDIA's "bug" that prevents the vGPU pass-through of standard GeForce cards; and, despite being a well-discussed topic in the community beyond just these forums, there is little information on how to solve that issue for a broad range of environments. This is, technically, a niche issue; but, there are a considerable number of companies out there who don't have the budget to go purchase GRID GPUs, high-end Quadros, or Teslas.
My question is... has anyone had any success with "hacking" this under XCP and/or does the XCP team have any out-of-the-box ideas that might "work around" this limitation similar to the way KVM did? While I understand that much of this is limited to the type of virtualization Xen performs in regard to VMs, one can hope that someone comes up with a creative way to get around it. I know for sure that there is a driver version before 330 something where they weren't performing this machine check in the NVIDIA driver; but, that automatically eliminates several models of GPUs as candidates.
I know for a fact that both the 1030 and 1060 will not pass-through in vGPU mode to Windows VMs. I'm still messing around with a couple Linux VMs to try and circumvent it. I am also looking into doing manual PCI pass-through and device hide through GRUB (reportedly has had spotty success) instead of using the interface.
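For anyone wanting to try the manual hide route alongside me, here is a sketch of the dom0 commands involved. The PCI address and VM UUID are placeholders (find yours with `lspci` and `xe vm-list`), and the script only prints the commands rather than executing them, so it is safe to review anywhere before running for real on a host:

```shell
# Manual PCI hide + passthrough sketch for XCP-ng dom0.
# 0000:01:00.0 is an ASSUMED GPU address; <vm-uuid> is a placeholder.
PCI="0000:01:00.0"

# Tell dom0's kernel to leave the device alone (xen-pciback), then reboot.
HIDE="/opt/xensource/libexec/xen-cmdline --set-dom0 \"xen-pciback.hide=($PCI)\""

# After the reboot, attach the freed device to the target VM.
ATTACH="xe vm-param-set uuid=<vm-uuid> other-config:pci=0/$PCI"

echo "$HIDE"
echo "# reboot the host so dom0 releases the device, then:"
echo "$ATTACH"
```

Note this only gets the device to the guest; it does nothing about the NVIDIA driver's hypervisor check itself, which is the part with reportedly spotty success.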
On a related, but secondary, note - to those who have run into this issue and invested in alternative GPUs - what are you using and how successful has it been?
-
RE: Great projects have great documentation. Is XCP-ng a great project?
@stormi You can attach the ISO from XenServer 7.x to your VM (which you'll obviously have to obtain from either the Citrix site or from a XenServer 7.x install), and then install them from there. Once they're installed, Windows Update should update them accordingly from XOA.