Posts made by tjkreidl
-
RE: Warm migration stuck at 0%
@jdias14 Make sure you have at least double the required disk space available on the target.
-
RE: Restoring from backup error: self-signed certificate
@utmoab Strange, but I've seen some tasks "stuck" that could not be cancelled for some reason or another. A reboot is a drastic measure, but unfortunately sometimes the only recourse.
-
RE: Restoring from backup error: self-signed certificate
@KS Did you try running "xe task-list" to identify the process and then "xe task-cancel force=true uuid=(UUID-of-process)"?
-
RE: Intel iGPU passthough
@xerxist Good news! Some setups will boot with one mechanism but not the other, and in some cases either one will work.
It's also important to keep the BIOS up-to-date.
-
RE: Intel iGPU passthough
@xerxist I honestly do not know, but it seems it cannot hurt to try.
-
RE: Intel iGPU passthough
@bullerwins Why would the driver have to be world writable (permissions 777)? That seems like a security risk, unless the /dev directory itself isn't world writable. Still, a world-writable driver area seems very strange.
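To make the concern concrete, here is a quick way to check whether a node is world-writable, sketched with a temp file as a stand-in (the actual device path depends on the driver, so no real /dev path is assumed here):

```shell
# Stand-in for the driver's /dev node; mktemp avoids touching real devices.
f=$(mktemp)

chmod 777 "$f"
stat -c '%a' "$f"    # 777: the last digit includes the world-write bit

# A tighter scheme often used for /dev nodes: owner/group access only,
# with users granted access via group membership rather than mode 777.
chmod 660 "$f"
stat -c '%a' "$f"    # 660

rm -f "$f"
```

If the driver genuinely refuses to work without 777, that is worth raising with the vendor rather than accepting.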
-
RE: "Orphan VDIs" and "VDIs attached to Control Domain" Safe to delete all?
@jweez Something similar to this might be useful. I would think it would work on XCP-ng, as well. https://raw.githubusercontent.com/deepix/shell-fu/master/orphan_vdi.bash
-
RE: Ideas to reduce delta-backup-size - especially for Windows VMs
@KPS Perhaps consider App-V, MSIX, or some third-party application virtualization option for a chunk of the applications, which would reduce the size of the VM images? Also, perhaps the log verbosity could be reduced?
-
RE: Seeking community insight/review of my first Homelab design (includes some open technical questions)
@olivierlambert @joehays Easiest backup to a remote location IMO is to NFS-mount a drive from some other system and back up to it. That way, it can always be exported to anywhere needed, even if your server(s) are destroyed and you have to start over from scratch.
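As a sketch of that setup, the mount on the backup host could be made persistent with an /etc/fstab entry like this (the server name "backup-nas" and the export path are hypothetical examples, not from this thread):

```
# Hypothetical NFS server "backup-nas" exporting /exports/backups;
# _netdev defers mounting until the network is up.
backup-nas:/exports/backups  /mnt/remote-backup  nfs  defaults,_netdev  0 0
```

Anything written under /mnt/remote-backup then lives off-host, so it survives even a total loss of the server(s).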
-
RE: Accessing XCP host outside of private network
@mauzilla We had all our servers on private 10.x networks, heavily firewalled, and used a VPN with fixed, individually assigned addresses that were the only ones allowed to access those hosts. It's not worth the security risk to leave your servers open to the world on public addresses.
-
RE: GPU pass through - suggestion for suitable hardware
@planedrop At least with the higher-end NVIDIA GPU boards, you need special licensing to even tap into the Quadro features. Note that this is not to be confused with Quadro boards like the P5000 and P6000! They sure don't make it easy to figure out what is supported with what boards.
As to RDP, we ran RDP successfully on a Dell server with Windows 2012 servers some time back. I think it was a PXXXX board of some sort, but I don't recall which specifically.
-
RE: GPU pass through - suggestion for suitable hardware
@Forza Just make sure whatever GPU you get either doesn't require a software license, or factor the licensing into your choice. In some cases, you will require software licensing even for passthrough deployment. Also, make sure your server has an adequate power supply as well as ventilation to handle the GPU card.
-
RE: Intel X550T 2.5G Not Working
@livegrenier 2.5 Gb/s is unusual. I have used X5xx-series NICs before, but only at 1 or 10 Gb/s.
-
RE: Socket/core configuration in VM
@robyt It depends on (1) licensing, if any, since some licenses go by cores vs. sockets, and (2) NUMA/vNUMA considerations, since performance can depend on how the vCPUs get allocated across sockets or kept within a single socket. The best way, IMO, is to try the options and test with benchmarks. See, for example, this article (and the previous two in the series), as well as articles by Frank Denneman and others: https://blogs.mycugc.org/2019/04/30/a-tale-of-two-servers-part-3-the-influence-of-numa-cpus-and-sockets-cores-persocket-plus-other-vm-settings-on-apps-and-gpu-performance/
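For reference, the topology the guest sees can be adjusted from the xe CLI; a sketch, where the UUID and the counts are placeholders you would substitute (the VM should be halted when changing VCPUs-max):

```
# Give the VM 8 vCPUs, presented as 2 sockets x 4 cores each
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=4
```

Re-run your benchmarks after each topology change, since per-socket licensing limits and NUMA placement can both shift the results.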
-
RE: Backup size larger than it should be
@olivierlambert Ah, for SSD drives, yes, trimming is a whole separate thing.
-
RE: Backup size larger than it should be
@florent I am perplexed, because thin provisioning should allow for coalescing. My suggestion would still be to try moving the VM to another SR and back again.
-
RE: Backup size larger than it should be
@kamil-v4 Not to another host, to another SR! The VM has to be moved to a different storage device. The host is irrelevant if within the same pool.
-
RE: Backup size larger than it should be
@kamil-v4 If you have another SR with space available, one option would be to move the VM to it and then back again, which should in principle reclaim the space. On the SR that you move the VM to, it should show up coalesced. Are you sure you don't have any hidden snapshots or such that might be contributing to the bloated space?
-
RE: Backup size larger than it should be
@kamil-v4 Perhaps the space has not yet been recovered? How full is that SR? It should be under roughly 90% full for space recovery to be able to take place. Also, have you tried manually running an sr-scan, which should trigger a coalesce process?
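The manual scan can be kicked off from the CLI like this (the SR name-label here is just an example; substitute your own):

```
# Look up the SR's UUID, then rescan it
xe sr-list name-label="Local storage" --minimal
xe sr-scan uuid=<sr-uuid>
```

The coalesce itself runs in the background afterwards, so give it some time before re-checking the SR's usage.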
-
RE: The system gives only the base frequency
@NikFer This is very likely a BIOS setting. My experience has been with Dell hardware, so I cannot speak specifically to SuperMicro configurations, but see if this might help or check one of the SuperMicro forums: https://www.supermicro.com/support/faqs/faq.cfm?faq=21555