    Posts made by _danielgurgel

    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert said in XCP-ng 8.2.0 RC now available!:

      So the IQN was correctly "saved" during the upgrade, it sounds correct to me. Is that host in a pool? Can you check for the other hosts?

      All the others are correct, with guests running on them.
      I'll do some testing by removing the iSCSI connections altogether and replugging them on the host that has been updated.

      If you have any tips on how to solve it, I'd appreciate your help.
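
      For reference, a minimal sketch of that replug test using the standard xe CLI (the <host uuid> and <sr uuid> values are placeholders for our environment):

      ```
      # Find the SR's PBD on the upgraded host, then unplug and replug it.
      PBD=$(xe pbd-list host-uuid=<host uuid> sr-uuid=<sr uuid> params=uuid --minimal)
      xe pbd-unplug uuid=$PBD
      xe pbd-plug uuid=$PBD
      ```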

      posted in News
    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert said in XCP-ng 8.2.0 RC now available!:

      xe host-list uuid=<host uuid> params=other-config

      [13:01 CLI220 ~]# xe host-list uuid=e1d9e417-961b-43f4-8750-39300e197692 params=other-config
      other-config (MRW)    : agent_start_time: 1604332891.; boot_time: 1604323953.; iscsi_iqn: iqn.2020-11.com.svm:cli219; rpm_patch_installation_time: 1604320091.898; last_blob_sync_time: 1570986114.22; multipathing: true; MAINTENANCE_MODE_EVACUATED_VMS_SUSPENDED: ; MAINTENANCE_MODE_EVACUATED_VMS_HALTED: ; MAINTENANCE_MODE_EVACUATED_VMS_MIGRATED: ; multipathhandle: dmp
      
      [13:02 CLI219 ~]# cat /etc/iscsi/initiatorname.iscsi
      InitiatorName=iqn.2020-11.com.svm:cli219
      InitiatorAlias=CLI219
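
      A quick way to run the same comparison across the whole pool, as a hedged sketch (standard xe commands; the initiator file still has to be read on each host):

      ```
      # Print each host's other-config:iscsi_iqn for comparison.
      for uuid in $(xe host-list params=uuid --minimal | tr ',' ' '); do
          echo -n "$uuid: "
          xe host-param-get uuid=$uuid param-name=other-config param-key=iscsi_iqn
      done
      cat /etc/iscsi/initiatorname.iscsi   # run locally on each host
      ```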
      
      posted in News
    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert ISO. Our storage is an IBM v7000.

      posted in News
    • RE: XCP-ng 8.2.0 RC now available!

      Reconnection via iSCSI is failing while upgrading the pool from 8.0 to 8.2 (starting with the master host).

      Nov  2 10:36:59 CLI219 SM: [10134] Warning: vdi_[de]activate present for dummy
      Nov  2 10:37:00 CLI219 SM: [10200] Setting LVM_DEVICE to /dev/disk/by-scsid/36005076d0281000108000000000000d7
      Nov  2 10:37:00 CLI219 SM: [10200] Setting LVM_DEVICE to /dev/disk/by-scsid/36005076d0281000108000000000000d7
      Nov  2 10:37:00 CLI219 SM: [10200] Raising exception [97, Unable to retrieve the host configuration ISCSI IQN parameter]
      Nov  2 10:37:00 CLI219 SM: [10200] ***** LVHD over iSCSI: EXCEPTION <class 'SR.SROSError'>, Unable to retrieve the host configuration ISCSI IQN parameter
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SRCommand.py", line 376, in run
      Nov  2 10:37:00 CLI219 SM: [10200]     sr = driver(cmd, cmd.sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SR.py", line 147, in __init__
      Nov  2 10:37:00 CLI219 SM: [10200]     self.load(sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/LVMoISCSISR", line 86, in load
      Nov  2 10:37:00 CLI219 SM: [10200]     iscsi = BaseISCSI.BaseISCSISR(self.original_srcmd, sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SR.py", line 147, in __init__
      Nov  2 10:37:00 CLI219 SM: [10200]     self.load(sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/BaseISCSI.py", line 150, in load
      Nov  2 10:37:00 CLI219 SM: [10200]     raise xs_errors.XenError('ConfigISCSIIQNMissing')
      Nov  2 10:37:00 CLI219 SM: [10200]
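
      If the upgraded host really did lose other-config:iscsi_iqn, a hedged sketch of restoring it from the local initiator file (<host uuid> and <pbd uuid> are placeholders; verify the value before setting it):

      ```
      # Read the IQN from the initiator file and write it back to other-config.
      IQN=$(sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi)
      xe host-param-set uuid=<host uuid> other-config:iscsi_iqn="$IQN"
      xe pbd-plug uuid=<pbd uuid>   # then retry plugging the SR
      ```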
      
      posted in News
    • CH 8.2

      https://www.citrix.com/blogs/2020/06/25/citrix-hypervisor-8-2-ltsr-is-here/

      posted in News
    • RE: XCP-ng 8.1.0 beta now available!

      @stormi What about this update as well?
      https://support.citrix.com/article/CTX269586

      posted in News
    • RE: Citrix Hypervisor 8.1 released

      @GHW I agree... they failed to achieve the same level of reliability in live migration that VMware has...

      posted in News
    • RE: FIX to XCP-ng

      Before applying the patch, in a new pool (during the CH8-to-XCP8 update process), when moving any VM to the updated host I received the error below.

      After installing the patch on the master host and restarting the toolstack on both the master and the source host, we no longer see the error, and the migration completes successfully.

      The error condition was the failed shutdown of the VM during live migration, so I think the patch that was made available actually solves the problem in question (for VMs both with and without network interfaces).

      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] Async.VM.pool_migrate R:29b4b31d74de failed with exception Server_error(INTERNAL_ERROR, [ xenopsd internal error: Device_common.QMP_Error(135, "{\"error\":{\"class\":\"GenericError\",\"desc\":\"Unable to open /dev/fdset/0: No such file or directory\",\"data\":{}},\"id\":\"qmp-000029-135\"}") ])
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] Raised Server_error(INTERNAL_ERROR, [ xenopsd internal error: Device_common.QMP_Error(135, "{\"error\":{\"class\":\"GenericError\",\"desc\":\"Unable to open /dev/fdset/0: No such file or directory\",\"data\":{}},\"id\":\"qmp-000029-135\"}") ])
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] 1/1 xapi @ SECH82 Raised at file (Thread 946189 has no backtrace table. Was with_backtraces called?, line 0
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace]
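
      The workaround described above, as commands (xe-toolstack-restart is the standard XCP-ng helper; run it on the pool master and on the source host):

      ```
      grep 'QMP_Error' /var/log/xensource.log   # confirm the failure signature
      xe-toolstack-restart                      # restart the toolstack
      ```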
      
      posted in Development
    • RE: FIX to XCP-ng

      @stormi it seems to be the same problem.

      Sorry, it may have been a "placebo effect", but after applying the update we no longer had the error when performing the migration.

      I'm going to continue the tests... including with VMs without a network card, as described in the link.

      posted in Development
    • RE: FIX to XCP-ng

      @stormi After applying the new patch, migrations are no longer failing. We just tested on a new cluster with 20 servers.

      Thank you so much for your help.
      Will this patch be included as an official update?

      posted in Development
    • RE: FIX to XCP-ng

      @olivierlambert Yes, all VMs have their CDs ejected.

      posted in Development
    • RE: FIX to XCP-ng

      @stormi said in FIX to XCP-ng:

      cleaning up VM stat

      Yes, I received the error, see below:

      xensource.log:Nov 27 08:20:32 SECH82 xenopsd-xc: [debug|SECH82|24 |Async.VM.pool_migrate R:8472b9c00966|xenops_server] Caught Xenops_interface.Xenopsd_error([S(Storage_backend_error);[S(SR_BACKEND_FAILURE_46);[S();S(The VDI is not available [opterr=VDI e3222f55-10ce-4d85-b6a8-a7c81f1a5a1d not detached cleanly]);S()]]]): cleaning up VM state
      
      [10:14 SECH82 log]# cat /etc/xensource-inventory
      PRIMARY_DISK='/dev/disk/by-id/scsi-36d0946606f4911002317966d118d6a4f'
      DOM0_VCPUS='16'
      PRODUCT_VERSION='8.0.0'
      DOM0_MEM='8192'
      CONTROL_DOMAIN_UUID='8320160d-3a65-4051-8d55-2e619ad4875f'
      MANAGEMENT_ADDRESS_TYPE='IPv4'
      COMPANY_NAME_SHORT='Open Source'
      PARTITION_LAYOUT='ROOT,BACKUP,LOG,BOOT,SWAP,SR'
      PRODUCT_VERSION_TEXT='8.0'
      INSTALLATION_UUID='a394d22c-94a9-4e83-89c0-fd366b191216'
      PRODUCT_BRAND='XCP-ng'
      BRAND_CONSOLE='XCP-ng Center'
      PRODUCT_VERSION_TEXT_SHORT='8.0'
      MANAGEMENT_INTERFACE='xenbr2'
      PRODUCT_NAME='xenenterprise'
      STUNNEL_LEGACY='true'
      BUILD_NUMBER='release/naples/master/45'
      PLATFORM_VERSION='3.0.0'
      COMPANY_PRODUCT_BRAND='XCP-ng'
      PLATFORM_NAME='XCP'
      BACKUP_PARTITION='/dev/disk/by-id/scsi-36d0946606f4911002317966d118d6a4f-part2'
      BRAND_CONSOLE_URL='https://xcp-ng.org'
      INSTALLATION_DATE='2019-11-27 01:22:15.630698'
      COMPANY_NAME='Open Source'
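
      To follow up on the "not detached cleanly" VDI, a minimal sketch (standard xe commands; the SR uuid comes from the first command's output):

      ```
      # Locate the VDI from the error, then rescan its SR.
      xe vdi-list uuid=e3222f55-10ce-4d85-b6a8-a7c81f1a5a1d params=name-label,sr-uuid
      xe sr-scan uuid=<sr uuid>
      ```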
      
      posted in Development
    • RE: FIX to XCP-ng

      @stormi Described in CA-327906 🙃

      posted in Development
    • RE: FIX to XCP-ng

      I think I'm running into this problem. We are migrating from CH8 to XCP-ng and, during migration, I have noticed the process fail (the migration stalls at 100% and does not finish) or the virtual server shut down unexpectedly.

      Does this bug affect migration after a pool is 100% updated to XCP-ng 8 (with all updates applied)?

      posted in Development
    • FIX to XCP-ng

      Is it possible to apply this fix also in XCP-ng 8?

      Live migration, storage live migration, and VDI migration can fail for VMs that have no attached VIFs. After this failure, the VM hangs in shutdown mode. (CA-327906)

      https://github.com/xapi-project/xenopsd/commit/8c3756b952476ff82f9bcbb9ab11ea027bc5ccbb

      edwintorok committed to xapi-project/xenopsd
      CA-327906: don't fail migration if a xenstore directory is missing
      
      Migration of a VM without VIFs got stuck at:
      ```
      Caught Xs_protocol.Enoent("directory"): cleaning up VM stat
      ```
      
      Some more debugging showed it failed when trying to move this xenstore
      entry, which was missing:
      ```
      /xapi/8e8c58cd-eba4-cde3-e1e5-000000000001
      ```
      
      The intention of this code seems to have been to ignore missing entries,
      so do that by ignoring Enoent on the root of a tree.
      Do not ignore Enoent on entries deep in the tree since that would
      indicate a race condition elsewhere.
      
      Signed-off-by: Edwin Török <edvin.torok@citrix.com>
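
      The missing xenstore path from the commit message can be checked from dom0 with the standard xenstore tools (the uuid below is the example from the commit, not from our pool):

      ```
      # Check whether the entry exists, and list what is actually under /xapi.
      xenstore-exists /xapi/8e8c58cd-eba4-cde3-e1e5-000000000001 && echo present || echo missing
      xenstore-ls /xapi | head
      ```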
      posted in Development
    • RE: Updates announcements and testing

      @stormi After the suggested change, the backup completed with 100% success, with no disks left stuck in the coalesce chain.

      We're migrating another cluster to XCP-ng 8!
      Thanks for the support, quick response, and attention.

      posted in News
    • RE: Updates announcements and testing

      @stormi The strange thing is, I had to turn off the VMs, rescan the disk, and then turn them on again.

      The coalesce process then began on the linked VMs (in production) and completed successfully. The following values were changed in /opt/xensource/sm/cleanup.py:

      LIVE_LEAF_COALESCE_MAX_SIZE = 1024 * 1024 * 1024 # bytes
      LIVE_LEAF_COALESCE_TIMEOUT = 300 # seconds
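
      A hedged sketch of making that edit non-interactively (back up the file first; the constant names are exactly as in cleanup.py above):

      ```
      cp /opt/xensource/sm/cleanup.py /opt/xensource/sm/cleanup.py.bak
      # Raise the leaf-coalesce size limit to 1 GiB and the timeout to 300 s.
      sed -i \
          -e 's/^LIVE_LEAF_COALESCE_MAX_SIZE = .*/LIVE_LEAF_COALESCE_MAX_SIZE = 1024 * 1024 * 1024/' \
          -e 's/^LIVE_LEAF_COALESCE_TIMEOUT = .*/LIVE_LEAF_COALESCE_TIMEOUT = 300/' \
          /opt/xensource/sm/cleanup.py
      ```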
      

      Well, apparently everything is OK... we will see on our next backup whether it is necessary to turn off the VMs for the coalesce to start and complete correctly.

      posted in News
    • RE: Citrix Hypervisor 8.1 released

      @olivierlambert
      Notes:
      • Dynamic Memory Control is deprecated in Citrix Hypervisor 8.1 and will be removed in a future release.

      See the file: https://gofile.io/?c=xRuFy8 (citrix-hypervisor-8.1.pdf)

      posted in News
    • RE: Updates announcements and testing

      @stormi My perception on XCP-ng 8 is as follows:

      • After the backup, all the VDI chains were left with one disk frozen in the leaf tree.
      • Even after pausing the VM and rescanning the disk, the coalesce process does not start.

      On CH 7.1 with the XS71ECU2020 update, the coalesce process completed 100% after pausing the VMs. We will now run the backup again and see whether the coalesce runs 100% again.

      I used the default LIVE_LEAF_COALESCE_TIMEOUT=10.
      The new test will be with LIVE_LEAF_COALESCE_TIMEOUT=300.
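
      A hedged way to kick off and watch the coalesce from dom0 (xe sr-scan triggers the GC; /var/log/SMlog is the standard SM log):

      ```
      xe sr-scan uuid=<sr uuid>   # ask the GC to re-evaluate the chain
      tail -f /var/log/SMlog | grep -iE 'coalesce|gc'
      ```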

      posted in News
    • RE: Citrix Hypervisor 8.1 released

      I was able to download the PDF with the 8.1 documentation. Happy about Xen 4.13 and WLB integrated with XenCenter...

      But I was sad that they didn't make progress with GFS2, and that they removed DMC.

      posted in News