XCP-ng

    Posts

    • RE: SR_BACKEND_FAILURE_46 The VDI is not available not detached cleanly

      @john.manning

      Happy to hear that. I guess it was my fault - I edited the naming of the snippet's pseudo variables:

      OLDVBDUUID = xe vdi-param-get uuid=VDIUUID param-name=vbd-uuids
      VDIUSERDEVICE = xe vbd-param-get param-name=userdevice uuid=OLDVBDUUID
      

      Maybe that way it's clearer for others who might find this helpful.

      posted in Management
      icecoke
    • RE: SR_BACKEND_FAILURE_46 The VDI is not available not detached cleanly

      @john.manning

      I'm not familiar with XOSTOR, but that should make no difference.

      The UUID for the vbd-param-get command must be a VBD UUID, as it is the VBD that 'has' the logical position in the VM, not the VDI. Look at this example:

      [root@xenserver72-b1 ~]# xe vm-disk-list vm=xxxxxx is-a-snapshot=false | grep -a1 'VDI\:' | grep 'uuid' | awk '{print $(NF)}'
      847c79d8-1dac-4b7c-bc5f-9fd2714dfa66
      40d02eb9-e099-4ebf-8ad7-96f689b33990
      [root@xenserver72-b1 ~]# xe vdi-param-get uuid=40d02eb9-e099-4ebf-8ad7-96f689b33990 param-name=vbd-uuids
      98bb67cb-2b66-b10d-6016-edd8e24896cb
      [root@xenserver72-b1 ~]# xe vbd-param-get param-name=userdevice uuid=98bb67cb-2b66-b10d-6016-edd8e24896cb
      0
      [root@xenserver72-b1 ~]# xe vdi-param-get uuid=847c79d8-1dac-4b7c-bc5f-9fd2714dfa66 param-name=vbd-uuids
      ffab52fe-ba57-f5b3-2614-5ce01171ea79
      [root@xenserver72-b1 ~]# xe vbd-param-get param-name=userdevice uuid=ffab52fe-ba57-f5b3-2614-5ce01171ea79
      4
      

      The first step is to get all VDIs of the named VM; then, for the vdi-param-get command, I use these VDI UUIDs to get the VBD UUID of each.
      Then - from vbd-param-get with the VBD UUID - you can get the logical userdevice position of the VBD->VDI connection.
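      The whole lookup chain (VM -> VDIs -> VBDs -> userdevice) can be sketched as one small shell loop; "myvm" is a placeholder for your VM's name-label, and the function only does read-only lookups:

```shell
#!/bin/sh
# Sketch of the lookup chain above: VM -> VDIs -> VBDs -> userdevice.
# "myvm" is a placeholder; run this on the pool master.
list_userdevices() {
  VMNAME="$1"
  for VDIUUID in $(xe vm-disk-list vm="$VMNAME" is-a-snapshot=false \
      | grep -A1 'VDI:' | grep 'uuid' | awk '{print $(NF)}'); do
    VBDUUID=$(xe vdi-param-get uuid="$VDIUUID" param-name=vbd-uuids)
    USERDEVICE=$(xe vbd-param-get uuid="$VBDUUID" param-name=userdevice)
    echo "VDI $VDIUUID -> VBD $VBDUUID -> userdevice $USERDEVICE"
  done
}

# Only call it where the xe CLI actually exists:
if command -v xe >/dev/null 2>&1; then
  list_userdevices "myvm"
fi
```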

      posted in Management
      icecoke
    • RE: SR_BACKEND_FAILURE_46 The VDI is not available not detached cleanly

      @john.manning

      This is a well-known problem. The solution:

      • shut down the VM (it must be in the halted state)
      • find your VDI UUIDs
      xe vm-disk-list vm=VMNAMEORUUID is-a-snapshot=false | grep -a1 'VDI\:' | grep 'uuid' | awk '{print $(NF)}'
      
      • write down for all VDIs (in addition to your VMUUID):
      VDIUUID
      VDILABEL = xe vdi-param-get param-name=name-label uuid=VDIUUID
      VDIDESCR = xe vdi-param-get param-name=name-description uuid=VDIUUID
      SRUUID = xe vdi-param-get param-name=sr-uuid uuid=VDIUUID (if you have different SRs)
      OLDVBDUUID = xe vdi-param-get uuid=VDIUUID param-name=vbd-uuids
      VDIUSERDEVICE = xe vbd-param-get param-name=userdevice uuid=OLDVBDUUID
      

      I have a script doing some checks at this point, like getting current-operations etc., to prevent working on running/connected VMs/VDIs. If you are sure the VM is offline, everything is OK. Sometimes things like a xe-toolstack-restart are needed if the host has been running for a long time 🙂

      • now 'forget' all VDIs: (call for each VDI)
      xe vdi-forget uuid=VDIUUID
      
      • this can take some time AFTER the call above. Check that the following responds with 0 once you have done the above call for all of your VM's devices:
      xe vbd-list vm-uuid=VMUUID | grep 'empty ( RO): false' | wc -l
      
      • now rescan all SRs where the VM had a VDI (maybe you have more than one SR):
      xe sr-scan uuid=SRUUID
      
      • now re-add all devices to their former positions (VDIUSERDEVICE):
      xe vbd-create vm-uuid=VMUUID device=VDIUSERDEVICE vdi-uuid=VDIUUID
      xe vdi-param-set name-label='VDILABEL' name-description='VDIDESCR' uuid=VDIUUID
      
      • If you got no errors (hopefully), start your VM:
      xe vm-start uuid=VMUUID
      

      This can easily be scripted in shell or Perl. We get 'The VDI is not available' errors and the like from time to time, so such a script exists on all pool masters, to be run if needed. Especially if you have several drives and several VDI names/descriptions, it's nice not to do it step by step 😉
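      The steps above can be sketched as one shell function. This is an untested template under the assumptions of this thread (run on the pool master, VM halted, one VBD per VDI) - verify every step on a non-production VM first:

```shell
#!/bin/sh
# Untested template of the forget/rescan/re-add procedure above.
# Run on the pool master with the VM halted; assumes one VBD per VDI.
fix_vdi_chain() {
  VMUUID="$1"
  INFO=/tmp/vdi-info.$$

  # 1. collect per-VDI info BEFORE forgetting anything
  VDIS=$(xe vm-disk-list vm="$VMUUID" is-a-snapshot=false \
         | grep -A1 'VDI:' | grep 'uuid' | awk '{print $(NF)}')
  for VDI in $VDIS; do
    LABEL=$(xe vdi-param-get uuid="$VDI" param-name=name-label)
    DESCR=$(xe vdi-param-get uuid="$VDI" param-name=name-description)
    SR=$(xe vdi-param-get uuid="$VDI" param-name=sr-uuid)
    VBD=$(xe vdi-param-get uuid="$VDI" param-name=vbd-uuids)
    DEV=$(xe vbd-param-get uuid="$VBD" param-name=userdevice)
    echo "$VDI|$LABEL|$DESCR|$SR|$DEV"
  done > "$INFO"

  # 2. forget every VDI
  for VDI in $VDIS; do xe vdi-forget uuid="$VDI"; done

  # 3. wait until no non-empty VBDs remain on the VM
  while [ "$(xe vbd-list vm-uuid="$VMUUID" | grep -c 'empty ( RO): false')" -ne 0 ]; do
    sleep 2
  done

  # 4. rescan every SR that held one of the VDIs
  cut -d'|' -f4 "$INFO" | sort -u | while read SR; do
    xe sr-scan uuid="$SR"
  done

  # 5. re-create each VBD at its old userdevice and restore the names
  while IFS='|' read VDI LABEL DESCR SR DEV; do
    xe vbd-create vm-uuid="$VMUUID" device="$DEV" vdi-uuid="$VDI"
    xe vdi-param-set uuid="$VDI" name-label="$LABEL" name-description="$DESCR"
  done < "$INFO"

  rm -f "$INFO"
  xe vm-start uuid="$VMUUID"
}

# Usage (on the pool master): fix_vdi_chain <VMUUID>
```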

      posted in Management
      icecoke
    • RE: always getting "unhealthy VDI chain"

      @Danp - thank you. Yes, I did, but that did not help in this case.
      Found the solution:

      Both machines are on a host server which does NOT have all networks attached. So the real reason must be a network problem - which makes this a quite misleading error message 🙂

      After moving both VMs to the master of this pool (which is connected to the same network where the XOA resides), everything went fine. Same storage, same VHDs.

      posted in Xen Orchestra
      icecoke
    • always getting "unhealthy VDI chain"

      I have a delta backup job in XOA which backs up 28 VMs. 26 of them never have any problems; 2 of them (different storages, different XenServer hosts) always (or at least nearly always, as some manual retries succeeded) get this error:

      Reason: (unhealthy VDI chain) Job canceled to protect the VDI chain

      When I rerun the job for these two machines, it sometimes gets through. When it runs automatically, it never gets through.

      How can I get more detailed information about the real reason? Which logs can show more?

      Interesting: I even made a copy of the VDI to cut that chain and used the copy (not a fast copy or clone) as storage, but the same error occurs. As far as I can see, there is no VDI chain - no parents or children. So where could the problem lie?

      Any help is welcome!
      Jimmy

      posted in Xen Orchestra
      icecoke
    • RE: Delta Backup transferred thru slow management network...

      @darkbeldin Any additional idea why this still happens after the suggested change?

      Many thanks!

      posted in Xen Orchestra
      icecoke
    • RE: Delta Backup transferred thru slow management network...

      @olivierlambert I'm sorry to say it, but it seems that this change (defining the default migration network for all pools) made no difference.
      Is it certain that this also defines the network used by delta backup jobs?
      Is it necessary to define the delta backup job completely anew (maybe it saved some information about the migration networks from the first delta backup job run)?
      Any input is welcome!

      posted in Xen Orchestra
      icecoke
    • RE: Delta Backup transferred thru slow management network...

      @darkbeldin

      That sounds really promising! I will change and test this.
      Many thanks!!!

      posted in Xen Orchestra
      icecoke
    • RE: Delta Backup transferred thru slow management network...

      @darkbeldin Hi!

      The XOA VM is already able to communicate through the 10Gbit network, but it doesn't, as it obviously prefers the management network for this purpose over the dedicated storage network of the dom0 hosting the given VM to back up.

      So the 'problem' seems to be that the download from the host goes through the management network if one exists - despite the existence of a dedicated storage network on the same dom0. That is the point where one should be able to select the network the snapshot is transferred through, at least if there is more than one network in the given pool/dom0.

      Can this be achieved?

      posted in Xen Orchestra
      icecoke
    • Delta Backup transferred thru slow management network...

      Hi there,

      we have the following, physical setup:

      public net 1.2.3.xxx
      management net 192.168.0.xxx
      storage net 10.0.0.xxx

      all storages are connected to 10.0.0.xxx only (so the remote target is, too)
      all XenServer hosts are connected with one NIC to the public net
      all XenServer hosts are connected with one NIC to the storage net
      SOME XenServer hosts have their management network on the NIC with the management net
      SOME XenServer hosts have their management network on the NIC with the storage net (so no dedicated storage/management NIC)

      It is planned to separate the storage net completely from the management net, but at the moment some hosts have the old setup (management on the storage network/NIC) and some have the new one (management network on a dedicated NIC, storage on a dedicated NIC).

      The XOA VM is connected to both networks.

      In most situations it was helpful to separate storage and management, even though the management network has just 1 Gbit/s and not 10 Gbit/s like the storage network. According to the Citrix Xen forum discussions, management traffic suffers from the kind of traffic the storages generate, so we separated them. We got faster responses when controlling VMs etc. after that. So far, so good.

      The big problem is now:

      XOA transfers the complete backup data of VMs on hosts with dedicated management NICs through those NICs, even if the target storage is only connected to the storage network. VMs on hosts with a shared management/storage NIC on the storage network are transferred over that network, at nearly 10 times the speed (as this is a 10 Gbit/s network).

      This is terribly slow for the setup with the dedicated management NIC, and - as the target is on the storage network (which makes sense in all environments) - I wonder why this can't be controlled by the user, if XOA does not always try to use the target network on its own.

      When I migrate VMs across pools, Xen offers a selection of the network, and - as all VMs/hosts are connected to the fast storage network - it only makes sense to transfer the data over the storage network, as this data is not control information, but only a stream of storage data...

      Is there a way to control/change this for backups? Do I need to remove the management NIC from the XOA VM, so that XOA MUST connect to all hosts over the storage network? That in turn would slow down the API commands used to control hosts and VMs again, which was the reason for a dedicated management network in the first place.

      Any solutions/workarounds on this?

      Kind regards,
      jimmy

      posted in Xen Orchestra
      icecoke
    • RE: Continuous Replication - smart selection by source SR?

      @olivierlambert You are absolutely right: tags are enough. My fault. We will code some cronjobs that find the current SR, set the right tag, and that's it. If the cronjob runs before the CR job runs, we should be fine.
      If you implement the other idea: I guess (as you can specify only one backup target atm) an option to choose 'store all VDIs in the target SR' or 'store only VDIs from the source SR in the target SR' would be great, but if less work should be done: just store all VDIs in the target SR - no matter where the source VDIs are lying around 🙂
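      Such a cronjob could be sketched roughly as below. It tags every VM that has a VDI on a given SR; the tag-setting syntax for set-typed params is written from memory, so verify it with `xe help vm-param-add` on your host before relying on it:

```shell
#!/bin/sh
# Sketch of the cronjob idea above: tag every VM that has a VDI on a
# given SR, so a smart-selection CR job can pick those VMs up by tag.
# ASSUMPTION: tag syntax (param-name=tags param-key=...) - verify with
# `xe help vm-param-add` on your host.
tag_vms_on_sr() {
  SRUUID="$1"
  TAG="sr-$SRUUID"
  # every VDI on the SR -> its VBDs -> the owning VM
  for VDI in $(xe vdi-list sr-uuid="$SRUUID" --minimal | tr ',' ' '); do
    for VBD in $(xe vbd-list vdi-uuid="$VDI" --minimal | tr ',' ' '); do
      VM=$(xe vbd-param-get uuid="$VBD" param-name=vm-uuid)
      xe vm-param-add uuid="$VM" param-name=tags param-key="$TAG"
    done
  done
}

# Run from cron on the pool master shortly before the CR job starts:
# tag_vms_on_sr <SRUUID>
```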

      Anyway: great work! Keep on!

      posted in Xen Orchestra
      icecoke
    • Continuous Replication - smart selection by source SR?

      Hi everyone!

      I wonder if it is possible to create a backup job in XOA that uses the great Continuous Replication and automatically (smart-)selects all VMs hosted on a specific SR to be replicated to another specific target SR.

      That way, a perfect disaster recovery could be easily managed.

      Getting/using a similar storage will be a common task for everyone managing a cloud infrastructure. I assume it should be quite easy to code, as you already offer smart selection by power state, tags, etc.
      I know it is possible to have VDIs on different SRs, but there are always exceptions 🙂 Nevertheless, selecting VMs by SR would be a great thing for CR in my view.

      Is it possible that you will offer this in the future? Or is it already possible and I just missed the feature?

      Many thanks in advance!
      Jimmy

      posted in Xen Orchestra
      icecoke
    • Complete uninstall of XOSANv1?

      Hi,
      is it possible (and if so, how) to do a complete uninstall of XOSAN (version 1.16) from a XenServer 7.2 host?
      Didn't find any documentation about that 😞

      Many thanks in advance!
      Jimmy

      posted in Xen Orchestra
      icecoke