XCP-ng

    Posts by olivierlambert

    • XCP-ng issue 1: closed!

      This is not just a powerful symbol: by closing issue n°1, we are really reaffirming our independence and our capacity to build everything without using any binary provided by Citrix.

      It's really great news for XCP-ng's future!

      https://github.com/xcp-ng/xcp/issues/1

      🎊 🎊 🎊

      (GitHub: olivierlambert created issue #1, "Build components from the sources", in xcp-ng/xcp; it is now closed)

      posted in News
    • XCP-ng team is growing

      Since last week, we have a new team member 🙂 @ronan-a is now part of the Vates XCP-ng team, with @stormi and myself.

      @r1 and @johnelse are still with us as external contributors.

      @ronan-a, feel free to introduce yourself here (and tell us about the stuff you are working on).

      posted in News
    • 2 weeks break for me

      Hey everyone,

      Just to let you know I'm finally taking a 2-week break, and it's really needed 😆 (thanks Broadcom for the extra work these last months 👼)

      I will make sure that everyone with admin powers reviews the registration waiting list often. For the rest, it's up to you, the community, to demonstrate you can help each other without me around. I don't doubt it, but I wanted to tell you why you won't see me for the next 2 weeks 😉

      Enjoy and see you then!

      posted in News
    • New XCP-ng documentation!

      https://xcp-ng.org/docs/

      It's far from complete, but it's a great first step in improving XCP-ng's existing documentation and also its visibility/credibility!

      Everyone can still contribute (link at the bottom of each page), even if it's a bit more difficult than the wiki.

      Let me know what you think about it 🙂

      What about the wiki? We'll probably continue to keep very specific content there, and "promote" content to the docs when it fits enough people.

      posted in News
    • 100,000 unique downloads for XCP-ng

      https://xcp-ng.org/blog/2020/04/17/100-000-downloads-for-xcp-ng/

      So you've probably spotted the news; feel free to comment here 😄

      posted in News
    • RE: French government initiative to support

      Maybe it wasn't clear? The French government won't do anything in the project; they just support it as a research and development project boosting innovation, and indirectly contributing to creating more jobs that can't easily be replicated elsewhere.

      There's no "control" on how we decide to do the R&D, as long as we put resources into it as we said.

      XCP-ng is a free and community-driven project, and everybody who wants to contribute can do so. There's no "link" between us and any government.

      posted in News
    • RE: New XCP-ng "theme" in shell

      What about this one?

      4.png

      posted in News
    • Merging XO forum here

      To avoid splitting the community, we'll merge the XO forum into this category. It doesn't seem possible (or at least not trivial) to import the previous XO threads here, so there will probably be a transition period until we redirect the XO forum URLs to this category directly.

      posted in Xen Orchestra
    • RE: EOL: XCP-ng Center has come to an end (New Maintainer!)

      Thanks for all your work @borzel! We are doing our best to improve our capacity to quickly review and structure external contributions to XO Lite: we are currently working on a dedicated CONTRIBUTING.md with links to the Figma UX design and more, so everything can be developed in the open, by everyone.

      posted in News
    • RunX: tech preview

      RunX tech preview

      What is RunX?

      See https://xcp-ng.org/blog/2021/09/14/runx-next-generation-secured-containers/ for more details.

      Do I need it?

      If you want to test a higher level of security for your containers, yes. Or if you like to play with shiny new tech and report bugs 😉

      How to test it

      ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠
      THIS IS A TECH PREVIEW
      ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠ ⚠

      It's not meant to run in production. Play with it in your lab, not in production. You have been warned 😛

      Follow the next section.

      Installation

      1. Create the repo, install the packages and restart the toolstack.

      Create the file /etc/yum.repos.d/xcp-ng-runx.repo with:

      [xcp-ng-runx]
      name=XCP-ng runx Repository
      baseurl=http://mirrors.xcp-ng.org/8/8.2/runx/x86_64/ http://updates.xcp-ng.org/8/8.2/runx/x86_64/
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-xcpng
      

      Then, install all the packages:

      yum install --enablerepo=epel -y qemu-dp xenopsd xenopsd-cli xenopsd-xc xcp-ng-xapi-storage runx
      yum install --enablerepo="base,extras" docker podman
      xe-toolstack-restart
      

      2. Create a 9p SR

      mkdir -p /root/runx-sr
      xe sr-create type=fsp name-label=runx-sr device-config:file-uri=/root/runx-sr
      

      3. Create a VM template

      You must create a VM template with the desired amount of RAM and the number of cores to use. Make sure the VM type is PV (a rough sketch with xe follows).
      It's not necessary to add a disk to this VM.
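
      If it helps, here's a rough sketch of how such a template could be prepared with xe (the base template, names and values below are just examples to adapt to your setup):

      # List existing templates to pick a base (names/values below are examples)
      xe template-list params=uuid,name-label
      # Clone one into a dedicated RunX template
      xe vm-clone uuid=<BASE_TEMPLATE_UUID> new-name-label=runx-template
      # Set the desired RAM and number of cores on the clone
      xe vm-memory-limits-set uuid=<RUNX_TEMPLATE_UUID> static-min=1GiB dynamic-min=1GiB dynamic-max=1GiB static-max=1GiB
      xe vm-param-set uuid=<RUNX_TEMPLATE_UUID> VCPUs-max=2
      xe vm-param-set uuid=<RUNX_TEMPLATE_UUID> VCPUs-at-startup=2
      # Make sure it's a PV template
      xe vm-param-set uuid=<RUNX_TEMPLATE_UUID> HVM-boot-policy=""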

      4. Modify the runx config

      Open /etc/runx.conf and modify these variables: SR_UUID and TEMPLATE_UUID (see the sketch below).
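
      For example, to find the two UUIDs (the runx.conf lines below are only illustrative; check the real file for its exact syntax):

      # UUID of the fsp SR created earlier
      xe sr-list name-label=runx-sr --minimal
      # UUID of your PV template
      xe template-list params=uuid,name-label
      # Then set them in /etc/runx.conf, which should roughly look like:
      #   SR_UUID=<SR UUID from above>
      #   TEMPLATE_UUID=<template UUID from above>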

      5. Start a VM.

      With Podman

      Using podman:

      podman pull archlinux
      podman container create --name archlinux archlinux
      podman start archlinux
      

      With Docker

      With docker:

      • Use the previous commands, replacing podman with docker.
      • You must also start the Docker daemon with an empty bridge, using docker -b none (see the sketch below).
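
      As a rough sketch of the Docker variant (how the daemon is launched may differ on your dom0; --bridge=none is the long form of the -b none flag mentioned above):

      # Start the Docker daemon without a default bridge (illustrative; adapt if docker is managed by systemd)
      dockerd --bridge=none &
      # Then the same workflow as with podman
      docker pull archlinux
      docker container create --name archlinux archlinux
      docker start archlinux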

      Feedback

      Please test and report any problems 🙂

      posted in News
    • XO Hub Template: what do you want next?

      So far we have 4 templates available:

      • Debian 10 cloudinit ready/disk growable/DHCP
      • CentOS 8 cloudinit ready/disk growable/DHCP
      • PfSense 2.4
      • Alpine Linux 3.10

      Would you like other OSes? Or even apps preinstalled in those OSes? Let us know!

      xohub.png

      posted in News
    • RE: [DEPRECATED] SMAPIv3 - Feedback & Bug reports

      Wait for a blog post coming next week 😉

      posted in Development
    • RE: Accessing Citrix VM Tools

      See https://www.xenserver.com/downloads

      posted in Compute
    • XOSTOR hyperconvergence preview

      XOSTOR - Tech preview

      ⚠ Installation script is compatible with XCP-ng 8.2 and 8.3 ⚠

      ⚠ UPDATE to sm-2.30.7-1.3.0.linstor.7 ⚠

      Please read this: https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview/224?_=1679390249707

      ⚠ UPDATE from an older version (before sm-2.30.7-1.3.0.linstor.3) ⚠

      Please read this: https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview/177?_=1667938000897


      XOSTOR is a "disaggregated hyperconvergence storage solution". In plain English: you can assemble local storage of multiple hosts into one "fake" shared storage.

      The key to getting fast hyperconvergence is to take a different approach. We used GlusterFS for XOSAN, and it wasn't really fast for small random blocks (due to the nature of a global filesystem). With XOSTOR, there's a catch: unlike traditional hyperconvergence, it won't create a global clustered and shared filesystem. This time, when you create a VM disk, it creates a "resource" that is replicated "n" times across multiple hosts (e.g. 2 or 3 times).

      So in the end, the number of resources depends on the number of VM disks (and snapshots).

      The technology we use is not invented from scratch: we are using LINSTOR from LINBIT, itself based on DRBD. See https://linbit.com/linstor/

      For you, it will be (ideally) transparent.

      Ultimate goals

      Our first goal here is to validate the technology at scale. If it works as we expect, then we'll add a complete automated solution and UI on top of it, and sell pro support for people who want a "turnkey" supported solution (à la XOA).

      The manual/shell script installation as described here is meant to stay fully open/accessible with community support 🙂

      Now I'm letting @ronan-a write the rest of this message 🙂 Thanks a lot for your hard work 😉

      How-it-works-Isometric-Deck-5Integrations-VersionV2-1024x554.png


      ⚠ Important ⚠

      Although we have been testing this technology intensively for the last 2 years (!), it was really HARD to integrate it cleanly into SMAPIv1 (the legacy storage stack of XCP-ng), especially when you have to test all the potential corner cases.

      The goal of this tech preview is to scale our testing to a LOT of users.

      Right now, this version should be installed on pools with 3 or 4 hosts. We plan to release another test release in one month to remove this limitation. Also, in order to ensure data integrity, using at least 3 hosts is strongly recommended.

      How to install XOSTOR on your pool?

      1. Download installation script

      First, make sure you have at least one free disk (or more) on each host of your pool.
      Then you can download the installation script using this command:

      wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/052b3dfff9c06b1765e51d8de72c90f2f90f475b/gistfile1.txt -O install && chmod +x install
      

      2. Install

      Then, on each host, you must execute the script with the disks to use, for example with a single disk:

      ./install --disks /dev/sdb
      

      If you have several disks you can use them all, BUT for optimal use, the total disk capacity should be the same on each host:

      ./install --disks /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3
      

      By default, thick provisioning is used; you can use thin instead:

      ./install --disks /dev/sdb --thin
      

      Note: You can use the --force flag if you already have a VG or PV on your hosts that must be overridden:

      ./install --disks /dev/sdb --thin --force
      

      3. Verify config

      With thin option

      On each host, lsblk must return an output similar to this:

      > lsblk
      NAME                                                                              MAJ:MIN  RM   SIZE RO TYPE  MOUNTPOINT
      ...
      sdb                                                                                 8:16    0   1.8T  0 disk
      └─36848f690df82210028c2364008358dd7                                               253:0     0   1.8T  0 mpath
        β”œβ”€linstor_group-thin_device_tmeta                                               253:1     0   120M  0 lvm
        β”‚ └─linstor_group-thin_device-tpool                                             253:3     0   1.8T  0 lvm
        └─linstor_group-thin_device_tdata                                               253:2     0   1.8T  0 lvm
          └─linstor_group-thin_device-tpool                                             253:3     0   1.8T  0 lvm
      ...
      

      With thick option

      No LVM volume is created; only a new volume group must now be present, visible with the vgs command:

      > vgs
        VG                                                 #PV #LV #SN Attr   VSize   VFree  
        ...
        linstor_group                                        1   0   0 wz--n- 931.51g 931.51g
      

      And you must have linstor versions of sm and xha:

      > rpm -qa | grep -E "^(sm|xha)-.*linstor.*"
      sm-2.30.4-1.1.0.linstor.8.xcpng8.2.x86_64
      xha-10.1.0-2.2.0.linstor.1.xcpng8.2.x86_64
      

      4. Finally you can create the SR:

      If you use thick provisioning:

      xe sr-create type=linstor name-label=<SR_NAME> host-uuid=<MASTER_UUID> device-config:group-name=linstor_group device-config:redundancy=<REDUNDANCY> shared=true device-config:provisioning=thick
      

      Otherwise with thin provisioning:

      xe sr-create type=linstor name-label=<SR_NAME> host-uuid=<MASTER_UUID> device-config:group-name=linstor_group/thin_device device-config:redundancy=<REDUNDANCY> shared=true device-config:provisioning=thin
      

      So for example if you have 4 hosts, a thin config and you want a replication of 3 for each disk:

      xe sr-create type=linstor name-label=XOSTOR host-uuid=bc3cd3af-3f09-48cf-ae55-515ba21930f5 device-config:group-name=linstor_group/thin_device device-config:redundancy=3 shared=true device-config:provisioning=thin
      
      

      5. Verification

      After that, you should have an XOSTOR SR visible in XOA with all PBDs attached.

      6. Update

      If you want to update LINSTOR and the other packages, you can execute the install script on each host like this:

      ./install --update-only
      

      F.A.Q.

      How is the SR capacity calculated? 🤔

      If you can't create a VDI greater than the size displayed in the XO SR view, don't worry:

      • There are two important things to remember: the maximum size of a VDI that can be created is not necessarily equal to the capacity of the SR, and the SR capacity in the XOSTOR context is the maximum size that can be used to store all VDI data.
      • Exception: if the replication count is equal to the number of hosts, the SR capacity is equal to the max VDI size, i.e. the capacity of the smallest disk in the pool.

      We use this formula to compute the SR capacity:

      sr_capacity = smallest_host_disk_capacity * host_count / replication_count
      

      For example, if you have a pool of 3 hosts with a replication count of 2 and a 200 GiB disk on each host, the capacity of the SR is 300 GiB according to this formula (a quick check is sketched after these notes). Notes:

      • You can't create a VDI greater than 200 GiB, because the replication is not block-based but volume-based.
      • If you create a volume of 200 GiB, 400 of the 600 GiB are physically used, and the remaining space can't be used because it becomes impossible to replicate it on two different disks.
      • If you create 3 volumes of 100 GiB, the SR becomes completely full. In this case you have 300 GiB of unique data and 300 GiB of replicated data.
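
      As a quick sanity check of the formula with these numbers (plain shell arithmetic, nothing XOSTOR-specific):

      # smallest disk = 200 GiB, 3 hosts, replication count = 2
      echo $(( 200 * 3 / 2 ))   # prints 300 (GiB of SR capacity)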

      How to properly destroy the SR after an SR.forget call?

      If you used a command like SR.forget, the SR is not actually removed properly. To remove it completely, you can execute these commands:

      # Create new UUID for the SR to reintroduce it.
      uuidgen
      
      # Reintroduce the SR.
      xe sr-introduce uuid=<UUID_of_uuidgen> type=linstor shared=true name-label="XOSTOR" content-type=user
      
      # Get host list to recreate PBD
      xe host-list
      ...
      
      # For each host, you must execute a `xe pbd-create` call.
      # Don't forget to use the correct SR/host UUIDs, and device-config parameters.
      xe pbd-create host-uuid=<host_uuid> sr-uuid=<UUID_of_uuidgen> device-config:provisioning=thick device-config:redundancy=<redundancy> device-config:group-name=<group_name>
      
      # After this point you can now destroy the SR properly using xe or XOA.
      

      Node auto-eviction: how to restore?

      If a node is no longer active for 60 minutes (by default), it's automatically evicted. This behavior can be changed.
      There is an advantage to using auto-evict: if there are enough nodes in your cluster, LINSTOR will create new replicas of your disks.

      See: https://linbit.com/blog/linstors-auto-evict/

      Now if you want to re-add your node, it's not automatic. You can use a LINSTOR command to remove it: linstor node lost. Then you can recreate it. If there was no disk issue and it was just a network problem or similar, simply run linstor node restore (see the sketch below).
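
      For reference, a minimal sketch of those two commands (the node name is an example; run them wherever the linstor client can reach the controller):

      # Forget the evicted node in the LINSTOR database (before recreating it)
      linstor node lost <NODE_NAME>
      # Or, if the node itself is fine (e.g. a temporary network issue), restore it instead
      linstor node restore <NODE_NAME>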

      How to use a specific storage network?

      You can run a few specific LINSTOR commands to configure new NICs to use. By default, the XAPI management interface is used.

      For more info: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-managing_network_interface_cards

      In case of failure of the preferred NIC, the default interface is used.
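
      To give an idea of what this looks like, here is a sketch based on the LINSTOR guide linked above (the interface name, IP and the PrefNic property are things to double-check against your LINSTOR version):

      # Declare a dedicated storage NIC on a node (name and IP are examples)
      linstor node interface create <NODE_NAME> storage_nic 192.168.100.10
      # Tell LINSTOR to prefer that NIC for this node's storage pool traffic
      linstor storage-pool set-property <NODE_NAME> <POOL_NAME> PrefNic storage_nic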

      How to replace drives?

      Take a look at the official documentation: https://kb.linbit.com/how-do-i-replace-a-failed-d

      XAPI plugin: linstor-manager

      It's possible to perform low-level tasks using the linstor-manager plugin.

      It can be executed using the following command:

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=<FUNCTION> args:<ARG_NAME_1>=<VALUE_1> args:<ARG_NAME_2>=<VALUE_2> ...
      

      Many functions are not documented here and are reserved for internal use by the smapi driver (LinstorSR).

      For each command, HOST_UUID is the UUID of a host in your pool, master or not.

      Add a new host to an existing LINSTOR SR

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=addHost args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      This command creates a new PBD on the SR and a new node in the LINSTOR database. It also starts what's necessary for the driver.
      After running this command, it's up to you to set up a new storage pool in the LINSTOR database with the same name as the one used by the other nodes.
      So again, use pvcreate/vgcreate and then a basic linstor storage-pool create (see the example below).
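
      For example, something along these lines (device and names are illustrative; the volume group and storage pool names must match what the existing nodes use):

      # Prepare the local disk on the new host
      pvcreate /dev/sdb
      vgcreate linstor_group /dev/sdb
      # Declare the storage pool in LINSTOR, reusing the pool name of the other nodes
      linstor storage-pool create lvm <NEW_NODE_NAME> <EXISTING_POOL_NAME> linstor_group
      # (thin setups use "lvmthin" with a volume_group/thin_volume argument instead)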

      Remove a host from an existing LINSTOR SR

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=removeHost args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      Check if the linstor controller is currently running on a specific host

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=hasControllerRunning
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=hasControllerRunning
      False
      

      Check if a DRBD volume is currently used by a process on a specific host

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=getDrbdOpeners args:resourceName=<RES_NAME> args:volume=0
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=getDrbdOpeners args:resourceName=xcp-volume-a10809db-bb40-43bd-9dee-22d70d781c45 args:volume=0
      {}
      

      List DRBD volumes

      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=listDrbdVolumes args:groupName=<THIN_OR_THICK_POOL_NAME>
      

      Example:

      xe host-call-plugin host-uuid=ddcd3461-7052-4f5e-932c-e1ed75c192d6 plugin=linstor-manager fn=listDrbdVolumes args:groupName=linstor_group/thin_device
      {"linstor_group": [1000, 1005, 1001, 1007, 1006]}
      

      Force destruction of DRBD volumes

      Warning: In principle, the volumes created by the smapi driver (LinstorSR) must be destroyed using the XAPI or XOA. Only use these functions if you know what you are doing; otherwise, forget them.

      # To destroy one volume:
      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=destroyDrbdVolume args:minor=<MINOR>
      
      # To destroy all volumes:
      xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=destroyDrbdVolumes args:groupName=<THIN_OR_THICK_POOL_NAME>
      
      posted in XOSTOR
    • Our future backup code: test it!

      A big change is coming!
      As we prepare to add qcow2 support for backups, we took the opportunity to redesign major parts of the backup engine. The result? A much more flexible and abstracted system that can better handle various scenarios like V2V, qcow2, VHD, and more.

      We're also moving from traditional streams to Node generators, adding backup throttling, and laying the groundwork for future improvements.

      🚧 It's not production-ready yet, but that’s where you come in!
      If you're working from sources, you can test it by switching to the branch: feat_generator_backups. Bug reports and feedback are more than welcome!

      More testing from you means we can put it in production sooner 🙂

      For reference, the PR is here: https://github.com/vatesfr/xen-orchestra/pull/8432

      Adding @florent directly to the loop because he's the guy to talk to 😉

      (GitHub: fbeauchamp opened PR #8432, "feat(backups): use generator instead of streams for backup and replication", in vatesfr/xen-orchestra; now closed)

      posted in Backup
    • XCP-ng & XO at Vates

      I've always wanted to provide a bit of feedback on what's powering Vates' IT.

      You might be aware that we are relying on almost 100% self-hosted and Open Source software, see:

      https://vates.tech/blog/our-self-hosting-journey-with-open-source/

      And all of that is running in a colo!

      Previous infrastructure

      Over the past decade, we've bounced around various hosting providers - first, it was Scaleway in France, then Hetzner in Germany (and before that, OVH).

      While these providers offered some killer hardware deals, it's been a real hassle when we've wanted to craft our own infrastructure or make customized storage choices. Plus, if you ever ventured beyond their "basic offers," you'd quickly watch those costs soar.

      To complicate matters further, we were managing different sites across some not-so-friendly networks. This led us down the rabbit hole of using the XOA SDN controller to carve out a super-private management network for all our VMs and hosts. It did the job, but it also added an extra layer of complexity to our setup.

      New Goal: Keep it Simple and Flexible

      After years of cobbling together virtualized platforms, there's one piece of advice we can't stress enough: keep things simple and straightforward.

      So, we set out on a mission to revamp our infrastructure, aiming for more flexibility while keeping things resilient and easy to handle. The idea was to rely on budget-friendly (but blazing-fast) off-the-shelf hardware that we could swap out without breaking a sweat.

      Our Choice: Rock DC Lyon

      4 years ago, we zeroed in on the Rock DC Lyon as the perfect spot for our new setup. We started by tucking our lab gear into a rack, and it's been smooth sailing since. This DC is a 2019 model and pretty darn impressive. If you want the nitty-gritty, you can check out this link, though it's in French (but has some sweet pictures!).

      Hardware: EPYC at the Heart of It

      Now that we had the liberty to pick our own hardware, here's what we rolled with:

      Network

      We went for the trusty and tested Ruckus hardware: 2x Ruckus ICX6610 units neatly stacked in an HA configuration (with 4x40G interconnects).

      ICX6610-24-PE

      These babies have been rock-solid in our lab setups and now in production. With the HA pair setup, all our critical production hosts and storage platforms are linked via 10GbE to each switch, bonded. If one switch decides to ghost on us, no worries, there won't be any traffic casualties.

      These units also come with some premium L3 features like VRFs, which help us keep our lab, production, and management routing tables as separate as church and state. For public production access, we're currently handling routes for 3x IPv4 blocks and one massive IPv6 block that we've divided up as we see fit. They also throw in nifty stuff like OpenFlow SDN support, which lets us play around with OpenFlow in the XCP-ng development sphere.

      One sweet perk of using commodity hardware like this is its availability and price. We can stash cold spares right in our rack in case something decides to bail on us.

      Storage

      Staying true to the KISS principle, we designated a dedicated storage host, running TrueNAS.

      dc2-small

      Meet the host - a Dell R730, packing 128GiB RAM and 2x Xeon @ 2.4GHz CPUs, all hustling to keep TrueNAS running smoothly. One of the smartest moves we made here was opting for NFS instead of iSCSI. We've had our fair share of iSCSI-related support tickets (around 20% or more) due to misconfigurations and storage appliance hiccups. NFS, on the other hand, is a simpler protocol and gives us thin provisioning. Plus, as you can see from the numbers below, it doesn't drag down performance on modern hardware.

      Our main storage repository is a mirrored pair of 2TiB Samsung 970 EVO Plus NVMe drives (for VM systems and databases), and our "slow" storage rolls with 4x 1TiB SAS HDDs (for NextCloud files and such). This whole setup connects to our backbone via a redundant pair of 10GbE connections from Intel series NICs.

      It's a budget-friendly configuration (around 450USD/400EUR per NVMe drive) that delivers some serious firepower in just one VM disk:

      • 140k IOPS in read and 115k in write for 4k blocks
      • Chewing through 10G for read/write in 4M blocks (that's 1000 MiB/s)

      And the icing on the cake? It scales even better with multiple VMs!

      Compute nodes

      Since we're tech enthusiasts who can't resist playing with hardware, we couldn't pass up the AMD EPYC CPUs. Let's be honest, we were also tired of constantly pushing updates to fix Intel CPU vulnerabilities.

      So, we ordered 3x Dell R6515 units, each flaunting an EPYC 7302P chip with a generous 128GiB RAM. The pricing was right on the money, and this hardware packs quite a punch for our needs.

      We tossed in an Intel X710-BM2 NIC (dual 10G) in each node and hooked up 2x 10G DACs per node to the Ruckus switch pair (one DAC per switch).

      These nodes also came with iDRAC9 Enterprise licenses for super-easy remote management, including an HTML5-based remote KVM - no more Java woes!

      dc4-small

      XCP-ng configuration

      In this straightforward setup, we lean on a redundant bond (2x10G) for all networks - management, storage, and public. Each network lives in its cozy VLAN, allowing us to logically separate them (and throw in a dedicated virtual interface in our VM for the public network). This step also plays a part in isolating our management network from our public routing.

      The rest of the setup is pretty much par for the course: 2x NFS SR (one for NVMe drives, the other for HDDs).

      And when it comes to our not-so-public networks, the only way in is through a jump box armed with a Wireguard VPN.

      Backup

      Of course, we're on the Xen Orchestra bandwagon, using it to back up all our VMs with Delta backups, all the way to another FreeNAS machine.

      Note that we're cruising at some impressive speeds for these backups, thanks to our entire backbone running on 10GbE, thin provisioning, and a simplified setup.

      We have 2 backup types:

      1. Incremental backup every 6 hours to another TrueNAS machine, hosted in a different rack & different room, in the same DC (10G speed).
      2. A nightly incremental replication to another DC, which is 400km away, allowing a full & fast recovery of our production in case the entire DC is destroyed.

      Migration

      With shiny new infrastructure in place, there's still the matter of our production humming away on our old hardware. Switching from Intel to AMD meant live migrating VMs wasn't an option. Plus, an offline migration was going to take a while, especially when we had to replicate the entire disk first.

      So, how do we minimize downtime during our migration? We call it "warm data migration."

      dc6-small

      See this blog post for more details: https://xen-orchestra.com/blog/warm-migration-with-xen-orchestra/

      And right after that, we could boot the replicated VMs on our AMD hosts (and change their IP configuration, because we have new public IP blocks from a different provider).

      New Levels of Performance

      Apart from the blazing-fast backup performance (40 minutes for daily backups? Now it's just 5 minutes!), our VMs are feeling the love with reduced load averages. It makes sense, given the speedier storage and the mighty CPUs at their disposal.

      This performance boost is particularly noticeable for some VM load averages:

      lessloadavg-1

      And don't even get us started on CPU usage:

      lesscpu

      For our Koji Build system handling XCP-ng packages, kernel packaging used to take a solid hour. Now, it's clocking in at less than 20 minutes, using the same VM specs (same core count). Plus, since we had some extra cores to play with, we added 2 more, and the build time dropped even further (less than 15 minutes).

      In a Nutshell

      This migration project unfolded in three phases:

      1. Setting up the hardware in the DC (and plugging everything in)
      2. Installing and configuring the software on the new hardware (XCP-ng setup, storage, and more)
      3. The Migration

      And the result? Total flexibility in our infrastructure, no outside shackles on our internal network, speedier VMs, quicker backups, and an ROI of less than a year! The actual time spent on these three phases was quite manageable, clocking in at less than three days of actual work (perhaps a tad more when you factor in planning). Most of the legwork revolved around migrating IP/DNS settings for the VMs themselves.

      Oh, and one more thing: AMD EPYC CPUs are seriously impressive - great performance, both single and multi-threaded, all while sipping power responsibly. Plus, they don't break the bank!

      posted in Share your setup!
    • VMware migration tool: we need your feedback!

      VMware migration tool

      The release blog post with more details:

      https://xen-orchestra.com/blog/xen-orchestra-5-79

      needyou.jpg

      Hello there! We will very soon announce the first preview of our VMware migration tool, using only the VMware API (via Xen Orchestra).

      We did some tests, but we need broader feedback from VMware users in our community.

      What to test

      1. The overall process
      2. Versions or required VMware components to make it work
      3. Tools removal or not before doing the transfer
      4. Linux and Windows guests

      How to test

      1. Get on the right branch (XO from the sources, latest commit on master) or the latest release channel (XOA, latest channel)
      2. Use xo-cli (see below)
      xo-cli vm.importFromEsxi host=<VSPHERE_IP> user=<VSPHERE_USER> password=<VSPHERE_PWD> sslVerify=<true|false> vm=<VSPHERE_VM_ID> sr=<SR_UUID> network=<NETWORK_UUID>
      

      Answers we need

      Right now, we are facing various challenges. If you have some VMware experience, we'd like your point of view.

      1. How to access the VM disk in VMDK format directly via the API? (not the raw disk)
      posted in Migrate to XCP-ng
    • RE: [WARNING] XCP-ng Center shows wrong CITRIX updates for XCP-ng Servers - DO NOT APPLY - Fix released
      1. Why not use XO from the sources?
      2. Fast clone shouldn't be restricted to Enterprise; that sounds more like a bug than anything else. In short, XOA Free should cover all features of XCP-ng Center.
      3. We also have possible plans for XO Lite, an "XCP-ng Center"-like UI embedded directly in the host.
      posted in News
    • RE: New XCP-ng "theme" in shell

      A less yellowish terminal, using fewer different colors while keeping the variations from the logo:

      5.png

      posted in News
    • New XCP-ng "theme" in shell

      For XCP-ng 8, we thought it might be interesting to have a dedicated visual identity beyond the boot splash: the prompt and the "ncurses" display.

      A new prompt

      • removed the current user (because it's an "appliance", with only root)
      • added the current time, very handy to know how long your command took (e.g. an export)
      • using XCP-ng colors

      New vs old:

      newprompt.png

      Obviously, we have a fallback for terminals without color support.

      It also works on white-background terminals:

      promptclear.png

      A new console theme (xsconsole)

      New vs old:

      newxsconsole.png

      There is also a fallback on terminals without color support (unchanged)

      You might think it's a minor thing, but in the end, that's how we show XCP-ng is not just a clone of CH/XS 🙂

      Obviously, you can still configure this easily (especially the prompt, since you can override it; see the sketch below).
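
      For example, a minimal override sketch in root's ~/.bashrc (the variable XCP-ng's theme actually uses may differ; this is just the standard bash way):

      # Plain prompt with the current time, no colors
      export PS1='[\t \h \W]# '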

      What do you think?

      posted in News