XCP-ng

    nikade

    @nikade

    Top contributor

    Started using Xen on top of Debian around 2004; virtualization always interested me because you can dramatically increase density and make better use of your hardware.
    In 2010 I was hired to insource a smaller infrastructure, and to ease management and use something more "enterprise"-like we started using XenServer.

    120
    Reputation
    3.2k
    Profile views
    251
    Posts
    0
    Followers
    0
    Following
    Location Stockholm, Sweden


    Best posts made by nikade

    • RE: XCP-ng 8.2.1 (maintenance update) - ready for testing

      Hi everyone,

      I just installed 8.2.1 on a Dell R630 and the installer was very smooth.
      We chose to do a fresh installation since the host was running XS 7.2, and with that came the opportunity to leverage EXT and thin provisioning, which seems to work just as it should.

      We're also mounting an NFS SR for VM disks, which works fine as well. I'll have to wait and see, but hopefully the problem with /var/log/snmpd.log is resolved now and there'll be no more alerts regarding disk usage 🙂

      posted in News
    • RE: Veeam and XCP-ng

      We're using Veeam for our VMware platform and XOA for our XCP platform, and I have to say I prefer XOA. Why, one may ask?
      Well, it's dead simple, it's not bloated, and you have a ton of options when it comes to configuration, destinations, scheduling and so on.

      Veeam, on the other hand, is A LOT faster: we're using a 10G link to our backup site and we're seeing speeds over 7Gbit/s when backing up our VMware platform, and I know this is something Vates is working hard on improving in XOA.
      Veeam also has application-aware backups, which is a big deal when you're running MSSQL inside your VMs - I don't think there are any plans from Vates to support this, and it might be a big deal for customers coming from the VMware and Veeam side.

      posted in XCP-ng
    • RE: XOA: backup Active Directory vm

      According to Microsoft you need to use their built-in backup feature, or software that supports AD and VSS, which will tell the VM OS that it is about to be backed up.
      Unless you do this there may be corruption of the AD databases, according to Microsoft.

      ALTHOUGH we've been backing up our AD servers with XOA snapshots (both normal and incremental backups) and have had only 1 issue since we started using XOA in 2016.
      Since that issue we also use a guest agent (from Ahsay CBS) that makes a Windows System State backup and a Windows System Backup.

      More info about that can be found here:

      https://wiki.ahsay.com/doku.php?id=public:version_9:client:9447_system_state_backup_vs_system_backup
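
      Microsoft's built-in System State backup mentioned above can be run from an elevated prompt on the AD server; a minimal sketch, assuming a dedicated backup volume E: (the target volume is an assumption):

```shell
# Windows System State backup with the built-in wbadmin tool
# (run elevated on the domain controller; E: is an assumed backup target)
wbadmin start systemstatebackup -backupTarget:E: -quiet
```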

      posted in Backup
    • RE: Join our great support team!

      A great opportunity for anyone who is into XCP, XOA and open-source software to join a great company 🙂

      posted in News
    • RE: XCP-ng 8.2.1 (maintenance update) - ready for testing

      @olivierlambert said in XCP-ng 8.2.1 (maintenance update) - ready for testing:

      Thanks a lot @nikade for your feedback! (also I love your avatar!)

      Yeah, I gotta have my suit on my tux 😉

      I wanted to mention that I've tried iSCSI with multipathing as well, and it works fine.
      So far everything we need in our general production environment seems to be working as it should.

      posted in News
    • RE: Sdn controller and physical network

      @blackliner said in Sdn controller and physical network:

      @nikade How do you "pair" the XCP-ng SDN with your routing setup?

      You can't/don't - you'll have to set up each private network on the VyOS router and then have the VM private network routed through it manually.

      For example, if you have private network 1 with subnet 192.168.1.0/24, you'd add this network to the VyOS router and assign 192.168.1.1/24 on the router.
      Then set 192.168.1.1 as the default gateway in your VMs which use this network.

      Then you'll set up OSPF or BGP on the VyOS router manually with your upstream border/core router or firewall. If the subnet is a private subnet, you'll also need to set up NAT somewhere before traffic from 192.168.1.0/24 reaches the internet.
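
      The steps above can be sketched in VyOS configuration mode; a minimal sketch, assuming eth1 faces the private VM network and eth0 is the uplink (interface names and the OSPF area are assumptions, adapt to your topology):

```shell
# Gateway address for private network 1 on the VM-facing interface
set interfaces ethernet eth1 address '192.168.1.1/24'

# Advertise the subnet upstream via OSPF (BGP would work similarly)
set protocols ospf area 0 network '192.168.1.0/24'

# Source NAT so 192.168.1.0/24 can reach the internet via the uplink
set nat source rule 100 source address '192.168.1.0/24'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'

commit
save
```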

      posted in Advanced features
    • RE: What should i expect from VM migration performance from Xen-ng ?

      @Greg_E said in What should i expect from VM migration performance from Xen-ng ?:

      @nikade

      I've spent a bunch of time trying to find some dark magic to make the VDI migration faster, so far nothing. My VM (memory) migration is fast enough that I'm not concerned right now, and I don't have any testing to show for it.

      Currently migrating the test VDI from storage1 to storage2 (again) and getting an average of 400/400mbps (lower case m and b). If I do three VDI at once, I can get over a gigabit and sometimes close to 2 gigabit.

      It's either SMAPIv1 or it is a file "block" size issue, bigger blocks can get me benchmarks up to 600MBps to almost 700MBps (capital M and B) on my slow storage over a 10gbps network. Testing this with XCP-NG 8.3 release to see if anything changed from the Beta, so far all is the same. Also all testing done with thin provisioned file shares (SMB and NFS). If I could get half my maximum tests for the VDI migration, I'd be happy. In fact I'm extremely pleased that my storage can go as fast as it is showing, it's all old stuff on SATA.

      I have a whole thread on this testing if you want to read more.

      migrate-benchmark.png

      You can see the migrate which was 400/400 and then the benchmark across the ethernet interface of my Truenas, this example was migrate from SMB to NFS, and benchmark on the NFS. Settings for that NFS are in the thread mentioned and certainly my fastest non-real world performance to date.

      That's impressive!
      We're not seeing speeds as high as you are; we have 3 different storages, mostly doing NFS tho. We're still running 8.2.0, but I don't really think it matters, as the issue is most likely tied to SMAPIv1.

      We also noted that it goes a bit faster when doing 3-4 VDIs in parallel, but the individual speed per migration is about the same.

      posted in Advanced features
    • RE: XO - Restore Health Check

      @planedrop said in XO - Restore Health Check:

      I can confirm this is the case for me too, not a huge deal, but would be kinda nice if it could keep track of the name.

      Thanks for verifying, it is probably an easy fix for the Vates ppl 🙂

      posted in Advanced features
    • XO - Restore Health Check

      Hi everyone,

      Has anyone else tried the Restore Health Check feature in XO to test their backups?
      I've used it a couple of times in my lab now and it works great, but one thing that bugs me is that the name of the tested VM is no longer displayed after the test is finished, because the VM was removed:

      7edb57c5-f1b7-47a5-bd58-f9ecde6f93db-bild.png

      Instead of listing the name of the VM, it just says "VM not found!"

      We're using XO from sources:

      Xen Orchestra, commit 8b7e1
      Master, commit 587da

      posted in Advanced features
    • RE: After installing updates: 0 bytes free, Control domain memory = 0B

      @Dataslak said in After installing updates: 0 bytes free, Control domain memory = 0B:

      @nikade @olivierlambert @stormi @Danp @yann

      Just wanted to say to you all:

      Thank you for your contributions and kind helpful assistance which has helped me through this crisis.

      I would have been in deep trouble without you. I respect your expertise, and appreciate deeply that you are working so hard to help us dumb users. I have learned a lot, and hope one day to become skilled enough to at least help other new users on this forum.

      Best wishes
      Aslak

      Happy everything worked out - this is what this community is all about.
      I've gotten a lot of help and given some too; it's all about helping out with the things that you can.
      With time you'll be able to help out more and more 🙂

      posted in XCP-ng

    Latest posts made by nikade

    • RE: 10gb backup only managing about 80Mb

      @tjkreidl I think the issue is that he's got no 10G switch, hence the direct connection 🙂
      But you live and you learn - best would be to pick up a cheap 10G switch and make it right!

      posted in Backup
    • RE: 10gb backup only managing about 80Mb

      @utopianfish I see, that explains a lot.

      posted in Backup
    • RE: 10gb backup only managing about 80Mb

      @acebmxer said in 10gb backup only managing about 80Mb:

      I could be wrong, but in the VMware world the management interface didn't transfer much data, if at all. It was only used to communicate with vSphere and/or the host. So no need to waste a 10gb port on something that only sees KBs worth of data.

      Our previous server had 2x 1gb NICs for management, 1x 10gb NIC for network, 2x 10gb NICs for storage and 1x 10gb NIC for vMotion.

      Tbh I do the same on our VMware hosts: 2x10G or 2x25G, with the management as a VLAN interface on that vSwitch, as well as the VLANs used for storage, VM traffic and so on.

      I find it much easier to keep the racks clean if we only have 2 connections from each host rather than 4, since it adds up really fast and makes the rack impossible to keep nice and clean when you have 15-20 machines in it + storage + switches + firewalls and all the inter-connections with other racks, IP transit and so on.

      Edit:
      Except for vSAN hosts, where the vSAN traffic needs at least 1 dedicated interface - those are the only exception.

      posted in Backup
    • RE: 10gb backup only managing about 80Mb

      @utopianfish said in 10gb backup only managing about 80Mb:

      @nikade i think the problem is it's using the mgmt interface to do the backup... it's not touching the 10GB NICs. When I set it under Pools/Advanced/Backup to use the 10gb NIC as default, the job fails... setting it back to none, the job is successful with a speed of 80 MiB/s... so it's using the 1GB mgmt NIC. How do I get the backups to use the dedicated 10gb link then?

      May I ask why your management interface is not on the 10G NIC? There is absolutely no downside to that kind of setup.

      We used this setup for 7 years on our Dell R630's without any issues at all. We had 2x10G NICs in our hosts and put the management interface on top of bond0 as a native VLAN.
      Then we just added our VLANs on top of bond0 and voilà - all your interfaces benefit from the 10G NICs.
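
      A sketch of that layout with the xe CLI; the UUIDs in angle brackets are placeholders you'd look up on your own pool (network names are assumptions):

```shell
# Create a network for the bond, then bond the two 10G PIFs onto it
xe network-create name-label=bond0-net
xe bond-create network-uuid=<bond-net-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid>

# Move the management interface onto the bond's PIF (untagged/native VLAN)
xe host-management-reconfigure pif-uuid=<bond-pif-uuid>

# Add a tagged VLAN (e.g. storage on VLAN 20) on top of the bond
xe network-create name-label=storage-vlan20
xe vlan-create pif-uuid=<bond-pif-uuid> vlan=20 network-uuid=<vlan-net-uuid>
```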

      posted in Backup
    • RE: 10gb backup only managing about 80Mb

      @olivierlambert said in 10gb backup only managing about 80Mb:

      I would have asked the same question 😄

      Great minds and all that, you know 😉

      @utopianfish check if you have any kind of power options regarding "power saving" or "performance" modes you can change in the BIOS. That could make a big difference as well.

      posted in Backup
    • RE: 10gb backup only managing about 80Mb

      @utopianfish said in 10gb backup only managing about 80Mb:

      @olivierlambert ok, here's a bit from the log: Start: 2025-09-03 12:00
      End: 2025-09-03 12:00
      Duration: a few seconds
      Size: 624 MiB
      Speed: 61.63 MiB/s

      Start: 2025-09-03 12:00
      End: 2025-09-03 12:00

      so other jobs are showing anywhere between 25 and about 80 MiB/s

      What CPU are you using? We saw about the same speeds on our older Intel Xeons at 2.4 GHz, and when we switched to newer Intel Xeon Golds at 3 GHz the speeds increased quite a bit - we're now seeing around 110-160 MiB/s after migrating the XO VM.

      posted in Backup
    • RE: Pre-Setup for Migration of 75+ VM's from Proxmox VE to XCP-ng

      Welcome to the community @cichy!
      Just out of curiosity, why are you migrating from Proxmox to XCP-ng? Are you ex-VMware?
      We used both VMware and XCP-ng for a long time, and XCP-ng was the obvious alternative for workloads that we didn't want in our VMware environment, mostly because of the shared storage support and the general similarities.

      posted in Migrate to XCP-ng
    • RE: Windows Server not listening to radius port after vmware migration

      @acebmxer said in Windows Server not listening to radius port after vmware migration:

      After migrating our Windows server that hosts our Duo Proxy manager, we're having an issue.

      [info] Testing section 'radius_client' with configuration:
      [info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'}
      [error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812
      [debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host

      After the migration I did have to reset the IP address, and I did install the Xen tools via Windows Update.

      Any suggestions? I'm thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well...

      On the VM that runs the Duo Auth Proxy service, check whether the service is actually listening on the external IP or just on 127.0.0.1.
      If it's only listening on 127.0.0.1, you can try to repair the Duo Auth Proxy service - take a snapshot before doing so.

      Also, if you're using encrypted passwords in your Duo Auth Proxy configuration, you'll probably need to re-encrypt them - just a heads up, since I had to do that after migrating one of ours.

      Edit:
      Do you have the "interface" option specified in your Duo Auth Proxy configuration?
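
      A quick way to check what the service is bound to, from an elevated prompt on the Windows VM (port 1812 taken from the log above):

```shell
# Show listeners on the RADIUS port; a 127.0.0.1:1812 entry means the
# proxy is bound to loopback only, 0.0.0.0:1812 means all interfaces
netstat -ano | findstr :1812
```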

      posted in XCP-ng
    • RE: High availability - host failure number

      Think of it like this:

      If you have 4 hosts, each host is at most 25% of the total capacity - how much of that do you want to reserve in case of a failed host?
      Personally, I'd set the number to 1 host (25%), because that means I'm able to use 3 hosts while the 4th host's resources are reserved in case of a failure.
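
      The arithmetic above can be sketched as follows (a toy calculation, not an XCP-ng command):

```shell
# With N hosts and F tolerated host failures, (N-F)/N of pool
# capacity stays usable and F/N is reserved for failover.
hosts=4
tolerate=1
usable_pct=$(( (hosts - tolerate) * 100 / hosts ))
echo "usable=${usable_pct}% reserved=$(( 100 - usable_pct ))%"   # usable=75% reserved=25%
```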

      posted in Compute
    • RE: Sdn controller and physical network

      @blackliner said in Sdn controller and physical network:

      @nikade How do you "pair" the XCP-ng SDN with your routing setup?

      You can't/don't - you'll have to set up each private network on the VyOS router and then have the VM private network routed through it manually.

      For example, if you have private network 1 with subnet 192.168.1.0/24, you'd add this network to the VyOS router and assign 192.168.1.1/24 on the router.
      Then set 192.168.1.1 as the default gateway in your VMs which use this network.

      Then you'll set up OSPF or BGP on the VyOS router manually with your upstream border/core router or firewall. If the subnet is a private subnet, you'll also need to set up NAT somewhere before traffic from 192.168.1.0/24 reaches the internet.

      posted in Advanced features