XCP-ng Forum

Posts by Pilow
    • RE: Migrating Powered Off VM Results In VDI_CBT_ENABLED Failure Error

      @planedrop does your VM have a snapshot?

      posted in Xen Orchestra
      P
      Pilow
    • RE: Full backup - new long-retention options

      @Bastien-Nollet by default, which day of the week/month/year is selected for the LTR point?

      You said we cannot choose it for now, but could you give us the actual recipe?

      posted in Backup
    • RE: Full backup - new long-retention options

      @Bastien-Nollet here is the config of the BACKUP job:
      95912a70-912d-49fb-8d68-1277beaa9407-image.png

      to the remote DC1-REMOTE01

      You would expect 10 points after a month+ of forever-incremental backups?
      7 daily, 2 weekly, and 1 monthly.

      But I only get 7 points on this remote (the last 7 days).

      I have a mirror copy job (14-point retention, no LTR) to a remote called DC2-REMOTE01-COPYDC1, and there I get my exact 14 points.

      b944fdef-693a-4de6-bfaa-9c758cf8a0e2-image.png 680c5ed9-df13-4a32-9703-8fd36464bc2c-image.png

      So I guess I do not benefit from LTR?
      Is that because I didn't put anything in DAYS? Or because my retention is > 1?
      Backups are executed daily by SEQUENCE.
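For what it's worth, here is how I understand GFS-style LTR counting — a sketch under my own assumptions (the newest backup of each week/month is the LTR point; this is not Xen Orchestra's actual algorithm). Note that the criteria overlap, so fewer than 7+2+1 distinct points can remain:

```python
from datetime import date, timedelta

# Sketch of GFS-style long-term retention: keep the newest `daily` points,
# the newest point of each of the last `weekly` ISO weeks, and of the last
# `monthly` months. My reading of LTR, not Xen Orchestra's implementation.
def ltr_keep(points, daily=7, weekly=2, monthly=1):
    points = sorted(points, reverse=True)              # newest first
    keep = set(points[:daily])                         # daily retention
    weeks, months = {}, {}
    for p in points:
        weeks.setdefault(tuple(p.isocalendar())[:2], p)   # newest point per ISO week
        months.setdefault((p.year, p.month), p)           # newest point per month
    keep |= set(list(weeks.values())[:weekly])
    keep |= set(list(months.values())[:monthly])
    return sorted(keep)

# 40 consecutive daily backups ending Sunday 2024-03-10
days = [date(2024, 3, 10) - timedelta(d) for d in range(40)]
print(len(ltr_keep(days)))  # 8 -> the criteria overlap, so fewer than 10 distinct points remain
```

Under these assumptions the newest weekly and monthly points coincide with the daily window, which is one way the displayed point count can come out lower than daily+weekly+monthly.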

      posted in Backup
    • RE: Full backup - new long-retention options

      @Bastien-Nollet where did you see his backup executing once a week? I see it every day, with a full on Sunday?

      I'm discovering a new way of doing LTR.

      Should we keep the retention at 1 and manage the LT points in the form below? Days included?
      As of now, I set a retention of 7 days and then only put numbers in weeks/months, but the results were unexpected... 😕

      posted in Backup
    • RE: Is supermicro IPMI data display planned?

      @olivierlambert HPE DL360 G11
      didn't manage to format it as markdown 😕

```json
[{"name": "UID", "value": "0x00", "event": "ok"}, {"name": "SysHealth_Stat", "value": "0x00", "event": "ok"}, {"name": "01-Inlet Ambient", "value": "18 degrees C", "event": "ok"}, {"name": "02-CPU 1 PkgTmp", "value": "63 degrees C", "event": "ok"}, {"name": "03-CPU 2 PkgTmp", "value": "64 degrees C", "event": "ok"}, {"name": "04-P1 DIMM 1-8", "value": "43 degrees C", "event": "ok"}, {"name": "05-P1 PMM 1-8", "value": "disabled", "event": "ns"}, {"name": "06-P1 DIMM 9-16", "value": "43 degrees C", "event": "ok"}, {"name": "07-P1 PMM 9-16", "value": "disabled", "event": "ns"}, {"name": "08-P2 DIMM 1-8", "value": "44 degrees C", "event": "ok"}, {"name": "09-P2 PMM 1-8", "value": "disabled", "event": "ns"}, {"name": "10-P2 DIMM 9-16", "value": "44 degrees C", "event": "ok"}, {"name": "11-P2 PMM 9-16", "value": "disabled", "event": "ns"}, {"name": "12-VR P1", "value": "53 degrees C", "event": "ok"}, {"name": "13-VR P2", "value": "54 degrees C", "event": "ok"}, {"name": "14-HD Max", "value": "disabled", "event": "ns"}, {"name": "15-AHCI HD Max", "value": "disabled", "event": "ns"}, {"name": "16-Exp Bay Drive", "value": "disabled", "event": "ns"}, {"name": "18-Stor Batt", "value": "disabled", "event": "ns"}, {"name": "21-Chipset", "value": "56 degrees C", "event": "ok"}, {"name": "22-BMC", "value": "72 degrees C", "event": "ok"}, {"name": "23-P/S 1 Inlet", "value": "33 degrees C", "event": "ok"}, {"name": "24-P/S 1", "value": "40 degrees C", "event": "ok"}, {"name": "25-P/S 2 Inlet", "value": "42 degrees C", "event": "ok"}, {"name": "26-P/S 2", "value": "43 degrees C", "event": "ok"}, {"name": "28-OCP 1", "value": "disabled", "event": "ns"}, {"name": "30-OCP 2", "value": "disabled", "event": "ns"}, {"name": "36-PCI 3", "value": "disabled", "event": "ns"}, {"name": "49-Board Inlet", "value": "26 degrees C", "event": "ok"}, {"name": "50-Sys Exhaust 2", "value": "47 degrees C", "event": "ok"}, {"name": "51-Battery Zone", "value": "44 degrees C", "event": "ok"},
{"name": "52-Sys Exhaust 1", "value": "48 degrees C", "event": "ok"}, {"name": "53-P/S 2 Zone", "value": "46 degrees C", "event": "ok"}, {"name": "Fan 1", "value": "0x00", "event": "ok"}, {"name": "Fan 1 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 1 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 2", "value": "0x00", "event": "ok"}, {"name": "Fan 2 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 2 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 3", "value": "0x00", "event": "ok"}, {"name": "Fan 3 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 3 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 4", "value": "0x00", "event": "ok"}, {"name": "Fan 4 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 4 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 5", "value": "0x00", "event": "ok"}, {"name": "Fan 5 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 5 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 6", "value": "0x00", "event": "ok"}, {"name": "Fan 6 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 6 Presence", "value": "0x00", "event": "ok"}, {"name": "Fan 7", "value": "0x00", "event": "ok"}, {"name": "Fan 7 DutyCycle", "value": "17.36 percent", "event": "ok"}, {"name": "Fan 7 Presence", "value": "0x00", "event": "ok"}, {"name": "Power Supply 1", "value": "0x00", "event": "ok"}, {"name": "PS 1 Input", "value": "180 Watts", "event": "ok"}, {"name": "Power Supply 2", "value": "0x00", "event": "ok"}, {"name": "PS 2 Input", "value": "160 Watts", "event": "ok"}, {"name": "Power Meter", "value": "340 Watts", "event": "ok"}, {"name": "Fans", "value": "0x00", "event": "ok"}, {"name": "Power Supplies", "value": "0x00", "event": "ok"}, {"name": "Memory Status", "value": "0x00", "event": "ok"}, {"name": "Megacell Status", "value": "Not Readable", "event": "ns"}, {"name": "Intrusion", "value": "Not Readable",
"event": "ns"}, {"name": "CPU Utilization", "value": "2 unspecified", "event": "ok"}, {"name": "PS 1 Output", "value": "160 Watts", "event": "ok"}, {"name": "PS_Volt_Out_01", "value": "12 Volts", "event": "ok"}, {"name": "PS_Volt_In_01", "value": "228 Volts", "event": "ok"}, {"name": "PS_Curr_Out_01", "value": "14 Amps", "event": "ok"}, {"name": "PS_Curr_In_01", "value": "0.70 Amps", "event": "ok"}, {"name": "PS 2 Output", "value": "150 Watts", "event": "ok"}, {"name": "PS_Volt_Out_02", "value": "12 Volts", "event": "ok"}, {"name": "PS_Volt_In_02", "value": "227 Volts", "event": "ok"}, {"name": "PS_Curr_Out_02", "value": "13 Amps", "event": "ok"}, {"name": "PS_Curr_In_02", "value": "0.70 Amps", "event": "ok"}, {"name": "17.1-ExpBayBoot-", "value": "41 degrees C", "event": "ok"}, {"name": "17.2-ExpBayBoot-", "value": "26 degrees C", "event": "ok"}, {"name": "32.1-PCI 1-I/O m", "value": "84 degrees C", "event": "ok"}, {"name": "32.2-PCI 1-I/O m", "value": "57 degrees C", "event": "ok"}, {"name": "32.3-PCI 1-I/O m", "value": "59 degrees C", "event": "ok"}, {"name": "34.1-PCI 2-I/O m", "value": "81 degrees C", "event": "ok"}, {"name": "34.2-PCI 2-I/O m", "value": "56 degrees C", "event": "ok"}, {"name": "34.3-PCI 2-I/O m", "value": "57 degrees C", "event": "ok"}, {"name": "NIC_Link_14P1", "value": "Not Readable", "event": "ns"}, {"name": "NIC_Link_14P2", "value": "Not Readable", "event": "ns"}, {"name": "NIC_Link_14P3", "value": "Not Readable", "event": "ns"}, {"name": "NIC_Link_14P4", "value": "Not Readable", "event": "ns"}, {"name": "NIC_Link_01P1", "value": "0x00", "event": "ok"}, {"name": "NIC_Link_01P2", "value": "0x00", "event": "ok"}, {"name": "NIC_Link_02P1", "value": "0x00", "event": "ok"}, {"name": "NIC_Link_02P2", "value": "0x00", "event": "ok"}, {"name": "CPU_Stat_C1", "value": "0x00", "event": "ok"}, {"name": "CPU_Stat_C2", "value": "0x00", "event": "ok"}]
```
      
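For anyone wanting to eyeball that dump, here is a small sketch that parses it (the field names "name", "value", and "event" come from the output above; the helper itself is hypothetical, not part of any XO or IPMI tooling):

```python
import json

# Summarize the readable temperature sensors from an IPMI sensor dump
# shaped like the one pasted above.
def temperature_summary(raw: str) -> dict:
    sensors = json.loads(raw)
    temps = {s["name"]: int(s["value"].split()[0])
             for s in sensors
             if s["event"] == "ok" and "degrees C" in s["value"]}
    return {"count": len(temps), "hottest": max(temps, key=temps.get)}

# Tiny excerpt of the dump above, for illustration
sample = ('[{"name": "22-BMC", "value": "72 degrees C", "event": "ok"},'
          ' {"name": "05-P1 PMM 1-8", "value": "disabled", "event": "ns"},'
          ' {"name": "02-CPU 1 PkgTmp", "value": "63 degrees C", "event": "ok"}]')
print(temperature_summary(sample))  # {'count': 2, 'hottest': '22-BMC'}
```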
      posted in Xen Orchestra
    • RE: Full backup - new long-retention options

      @Bastien-Nollet said in Full backup - new long-retention options:

      Yes, a backup is kept if it matches one of the retention criteria, either the schedule's retention or the LTR. (the backup is not duplicated, we just check for both criteria to know if we should keep the backup or not)

      Could we have an option to choose which day of the month is kept as the monthly LTR point?
      And for the weekly point, which weekday?
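A sketch of my reading of that answer (parameter names invented; not Xen Orchestra's actual code) — a point survives if either criterion matches:

```python
# A backup point is kept if it satisfies the schedule's retention OR any
# LTR rule -- my reading of the quoted answer, not Xen Orchestra's code.
def should_keep(age_days, schedule_retention=7,
                is_weekly_point=False, is_monthly_point=False,
                weekly=2, monthly=1):
    in_schedule = age_days < schedule_retention
    in_ltr = ((is_weekly_point and age_days < weekly * 7)
              or (is_monthly_point and age_days < monthly * 31))
    return in_schedule or in_ltr

print(should_keep(3))                         # True: within daily retention
print(should_keep(10, is_weekly_point=True))  # True: kept only by weekly LTR
print(should_keep(10))                        # False: matches no criterion
```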

      posted in Backup
    • RE: Is supermicro IPMI data display planned?

      @sluflyer06 and I wish HPE would be added too 😃

      posted in Xen Orchestra
    • RE: Racked today, entire hosting solution based on Vates stack

      @Davidj-0 I'll ping back here whenever the app is ready to be tested.
      Access will be via MS O365 auth.

      posted in Share your setup!
    • RE: Job canceled to protect the VDI chain

      @olivierlambert so for a high number of VMs, there is a point where you should be careful not to run DR too frequently, so that coalesce can proceed.
      With 50 VMs in a DR job every hour, if coalesce takes 2 min per VM, it won't have finished when the next DR starts?

      I'm looking for the edge case.
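The arithmetic behind that edge case, as a tiny sketch (the numbers are the hypothetical ones from the post, not measurements):

```python
# Back-of-envelope check for the DR-vs-coalesce race described above.
def coalesce_overruns(vm_count, coalesce_min_per_vm, dr_interval_min):
    total = vm_count * coalesce_min_per_vm   # worst case: coalesces run serially
    return total > dr_interval_min           # True -> next DR starts before it ends

print(coalesce_overruns(50, 2, 60))  # True: 100 min of coalesce vs a 60 min interval
```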

      posted in Backup
    • RE: Job canceled to protect the VDI chain

      @olivierlambert if it's at the XCP-ng level, is there a ratio beyond which storage access performance should be especially monitored?

      I guess that 500 VMs having their snapshots coalesced on 1 host is handled differently than 500 VMs across 50 hosts.

      But if all the hosts are on the same shared storage... is there a performance constraint?

      posted in Backup
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade yup. Break it with normal usage, and try to abuse it, before it goes into production mode.
      Having feedback on what is working and what is not would be valuable for us... totally pro bono though 😄

      Bonus for you: you could spawn VMs in the Indian Ocean ^^'

      More seriously, if you're into it: we need some experienced XCP-ng users to test it before we open the tests to totally non-experienced users too.

      The app is not quite ready for open beta, still in alpha I guess, but I'll tell you ASAP.

      posted in Share your setup!
    • RE: VM association with shared storage

      @olivierlambert on VMware we had affinity and anti-affinity rules to group or spread VMs.

      Would be cool to have that managed by tags?
      tag VMGROUP1: an affinity rule to keep app and database VMs on the same host
      tag VMGROUP2: an anti-affinity rule to spread domain controllers across different hosts

      Doable?
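The tag idea could work roughly like this — a sketch of a placement checker under an invented tag convention (none of these names are existing XO features):

```python
from itertools import combinations

# Hypothetical tag convention (not an existing Xen Orchestra feature):
#   "affinity:GROUP"      -> VMs in GROUP should share a host
#   "anti-affinity:GROUP" -> VMs in GROUP should be on distinct hosts
def violations(placement, tags):
    """placement: vm -> host, tags: vm -> set of tags."""
    bad = []
    for a, b in combinations(placement, 2):
        shared = tags[a] & tags[b]
        if any(t.startswith("affinity:") for t in shared) and placement[a] != placement[b]:
            bad.append((a, b, "should share a host"))
        if any(t.startswith("anti-affinity:") for t in shared) and placement[a] == placement[b]:
            bad.append((a, b, "should be on different hosts"))
    return bad

placement = {"app1": "host1", "db1": "host2", "dc1": "host1", "dc2": "host1"}
tags = {"app1": {"affinity:VMGROUP1"}, "db1": {"affinity:VMGROUP1"},
        "dc1": {"anti-affinity:VMGROUP2"}, "dc2": {"anti-affinity:VMGROUP2"}}
print(violations(placement, tags))  # flags app1/db1 (split) and dc1/dc2 (colocated)
```

A scheduler could use the same predicate in reverse: reject any migration or placement that would add a violation.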

      posted in Management
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade the customer base is just below what we need to zero out the costs... we need to communicate well and sell the new infrastructure now; it's a challenge 🙂

      Rebuild, yes, and even gain more through automation, because with the Vates stack it's easier to stand things up. I'm a big fan.

      The other challenge is converting the existing VMware VMs.
      Our current V2V path is a Veeam backup from the old datacenter to the new datacenter's Veeam Cloud Connect repository. Then we have a nested ESXi in an XCP-ng pool to restore to, and we import from there 😄

      We had Veeam replication options with VMware; we still need to get the equivalent in place with XO proxies and XO backup, to replicate from on-prem clients to our datacenters.

      Perhaps when the Veeam integration is out of beta, we could have replication plans like we had on VMware?

      As soon as our hosting stack is done we'll need beta testers, if you're up for helping us test the automations.

      posted in Share your setup!
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade we had a similar setup on a VMware solution, OVH bare metal hosted in France.

      But you know... Broadcom 😕

      The main company is Toolbox; we decided to migrate on-prem and cloud clients to a fully Vates-based stack, locally hosted on the island this time, and to split the hosting off into Cloudbox, a sister company of Toolbox.

      Many clients do not want to be hosted off the island because of the latency. Going from 250 ms to 10 ms is quite an upgrade for some situations.

      And disaster recovery of 10 TB of VM infrastructure from OVH to Réunion gives you a high RTO; many clients had their external backups on our OVH servers. From days down to hours now, if needed.

      posted in Share your setup!
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade still early dev, but here is what is actually working:
      3e876fc7-84b5-400e-878c-4caa8f583007-image.png

      • defining a tenant VLAN; the IP subnet is calculated from the VLAN
      • creating interfaces & a CARP VIP on the Netgate 8300 MAX cluster
      • creating bandwidth limiters & default rules for the tenant
      • adding the VLAN to the switch clusters
      • creating the XCP-ng networks on the production pool in XOA
      • pushing documentation of the new tenant to NetBox

      4 clicks! 🙂

      Work in progress: a DHCP server/OpenVPN server per tenant, and a dedicated outbound NAT IP from an available pool per tenant.
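The "IP subnet is calculated from the VLAN" step can be sketched like this (the VLAN range and addressing plan here are made-up assumptions, not the actual scheme):

```python
from ipaddress import IPv4Network

# Hypothetical convention: tenant VLANs 2000-2999 map deterministically
# onto 10.x.y.0/24 subnets, so the subnet never has to be chosen by hand.
def tenant_subnet(vlan_id: int) -> IPv4Network:
    if not 2000 <= vlan_id <= 2999:
        raise ValueError("tenant VLANs are assumed to live in 2000-2999")
    offset = vlan_id - 2000
    return IPv4Network(f"10.{offset // 256}.{offset % 256}.0/24")

print(tenant_subnet(2042))  # 10.0.42.0/24
print(tenant_subnet(2300))  # 10.1.44.0/24
```

Deriving the subnet from the VLAN ID keeps every step of the provisioning chain (firewall, switches, XCP-ng networks, NetBox) consistent from a single input.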

      9577f7f8-ff68-47c0-8092-5ff2217ae3a0-image.png

      And an XO-like interface for resellers to manage their clients (this is a global admin view; all internal, client, and reseller tenants are visible):
      • pushing VMs into their reserved VLAN
      • starting/stopping VMs
      • view-only access to their backup logs (not possible with XOA ACLs/self-service resources without being an admin)
      • a reseller can manage its own tenant and its clients' tenants; firewall rules are made so that the reseller can access all of its client tenants (if it wants to set up its own monitoring, for example, or mutualised services for its clients)

      Work in progress: replicating XOA self-service-like options, but with custom granularity. VM deployment with Pulumi is mostly finished; we need to better manage which templates are available to each client/reseller.
      There will be a global admin view for us, a reseller view for the reseller tenant + its clients' tenants, and a client view on each client's own tenant.

      Spinning up a tenant from zero-to-ping in less than 5 minutes is the goal!

      posted in Share your setup!
    • RE: XOSTOR / XOA down

      @bdenv-r are you trying XOSTOR on 8.2 or 8.3?

      I had many performance problems, and tap-drive locking of my VDIs, with XOSTOR on 8.2.
      I would like to know whether XOSTOR on 8.3 still has those issues.

      posted in XOSTOR
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade I will share some automation screenshots of our current developments as soon as they are proofed.
      We're building on top of the APIs; it's all custom.

      posted in Share your setup!
    • RE: Racked today, entire hosting solution based on Vates stack

      @olivierlambert and hurricane season from November to March 😁

      Ha, we have an active volcano on the island too 🔥

      posted in Share your setup!
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade so our 10G WDM is ten times your price (but redundancy included :')

      Check here for a cool map:
      https://www.submarinecablemap.com/

      posted in Share your setup!
    • RE: Racked today, entire hosting solution based on Vates stack

      @nikade there are many local datacenter operators (ZEOP/OMEGA1/SFR/IDOM/CANAL+/FREE).

      I chose SFR because they also have connectivity up to Mayotte (look it up too 🙂 ), where we have clients that will benefit from our hosting solution on Réunion Island.

      Many submarine cables reach us (the oldest is SAFE: South Africa - Far East, toward Asia), and some new submarine cables run to Africa.

      Fiber connectivity exists, but it is not cheap 😕
      For the 2×10Gb paths between the nodes, count on about €3K/month (no internet, just data).
      100Mb symmetric internet connectivity from the datacenter, with good SLAs: €500/month.

      A real challenge to be in the middle of an ocean.

      posted in Share your setup!