XCP-ng


    Posts made by _danielgurgel

    • RE: Backup Job HTTP connection abruptly closed

      @olivierlambert Even after updating the host from 8.0 to 8.2 (with the latest updates applied) and after the cluster and NFS migration, the problem persists.

      We updated the virtualization agent on the virtual server to the latest version available from Citrix and were able to back it up for a few weeks... but the problem reoccurred, again only on the same server.

      Are there any logs I can paste to help identify this failure?
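
      In case it helps, here is what I can collect, as a sketch (assuming a standard XCP-ng 8.x host and xo-server running under systemd; the unit name may differ on other setups):

      # On the XCP-ng host running the VM:
      tail -n 200 /var/log/xensource.log   # XAPI activity, including export sessions
      tail -n 200 /var/log/SMlog           # storage-manager (NFS SR) operations

      # On the Xen Orchestra VM:
      journalctl -u xo-server -n 200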

      posted in Xen Orchestra
      _danielgurgel
    • RE: Backup Job HTTP connection abruptly closed

      @olivierlambert Is there any difference between a "traditional backup" and the Export VM operation performed by Xen Orchestra?

      Even after moving the virtual server to another cluster, the problem still occurs. However, the Export operation works normally.
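
      As far as I understand, both a full backup and Export VM end up at XAPI's XVA export, so a minimal way to reproduce that path outside Xen Orchestra, as a sketch (UUIDs and the host name are placeholders):

      # Snapshot the running VM, then export the snapshot (roughly what a full backup does):
      SNAP=$(xe vm-snapshot uuid=<vm-uuid> new-name-label=backup-test)
      xe snapshot-export-to-template snapshot-uuid=$SNAP filename=/tmp/test.xva

      # Or hit the XAPI HTTP export handler directly:
      curl -k -u root "https://<pool-master>/export?uuid=<vm-uuid>" -o /tmp/test.xva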

      posted in Xen Orchestra
      _danielgurgel
    • RE: Backup Job HTTP connection abruptly closed

      @olivierlambert But is there any reason why this issue only occurs for this VM? Even after cloning the VM, the problem happens with the clone... even after changing the NFS server, the problem happens... Let's try moving it to a new cluster.

      posted in Xen Orchestra
      _danielgurgel
    • RE: Backup Job HTTP connection abruptly closed

      @_danielgurgel Here is the complete log of the operation.

      vm.copy
      {
        "vm": "54676579-2328-d137-1002-0f32920eab23",
        "sr": "50c59b18-5b5c-2eed-8c82-b8f7fdc8e9b5",
        "name": "VM_NAME"
      }
      {
        "call": {
          "method": "VM.destroy",
          "params": [
            "OpaqueRef:fc032b38-d8d7-43ab-983c-f54bc9dc6f85"
          ]
        },
        "message": "operation timed out",
        "name": "TimeoutError",
        "stack": "TimeoutError: operation timed out
          at Promise.call (/opt/xen-orchestra/node_modules/promise-toolbox/timeout.js:13:16)
          at Xapi._call (/opt/xen-orchestra/packages/xen-api/src/index.js:644:37)
          at /opt/xen-orchestra/packages/xen-api/src/index.js:722:21
          at loopResolver (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:94:23)
          at Promise._execute (/opt/xen-orchestra/node_modules/bluebird/js/release/debuggability.js:384:9)
          at Promise._resolveFromExecutor (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:518:18)
          at new Promise (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:103:10)
          at loop (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:98:12)
          at retry (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:101:10)
          at Xapi._sessionCall (/opt/xen-orchestra/packages/xen-api/src/index.js:713:20)
          at Xapi.call (/opt/xen-orchestra/packages/xen-api/src/index.js:247:14)
          at loopResolver (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:94:23)
          at Promise._execute (/opt/xen-orchestra/node_modules/bluebird/js/release/debuggability.js:384:9)
          at Promise._resolveFromExecutor (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:518:18)
          at new Promise (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:103:10)
          at loop (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:98:12)
          at Xapi.retry (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:101:10)
          at Xapi.call (/opt/xen-orchestra/node_modules/promise-toolbox/retry.js:119:18)
          at Xapi.destroy (/opt/xen-orchestra/@xen-orchestra/xapi/src/vm.js:324:16)
          at Xapi._copyVm (file:///opt/xen-orchestra/packages/xo-server/src/xapi/index.mjs:322:9)
          at Xapi.copyVm (file:///opt/xen-orchestra/packages/xo-server/src/xapi/index.mjs:337:7)
          at Api.callApiMethod (file:///opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.mjs:304:20)"
      }
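
      A minimal triage sketch for the timed-out VM.destroy, assuming dom0 CLI access on the pool master (UUIDs are placeholders):

      # See whether the destroy is stuck as a pending XAPI task:
      xe task-list

      # Cancel the hung task, then retry destroying the leftover copy:
      xe task-cancel uuid=<task-uuid>
      xe vm-destroy uuid=<leftover-vm-uuid>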
      
      posted in Xen Orchestra
      _danielgurgel
    • Backup Job HTTP connection abruptly closed

      We are getting a backup error on only one server in our pool. We've already swapped the NFS storage and done a FULL CLONE of the VM for testing, but it still fails (all other servers back up fine to the same NFS server).

      I have not found anything related to this error, and snapshot operations are working correctly. Any tips for solving this problem?

      transfer 
      Start: Jul 27, 2021, 08:50:02 AM
      End: Jul 27, 2021, 09:39:49 AM
      Duration: an hour
      Error: HTTP connection abruptly closed
      Start: Jul 27, 2021, 08:50:02 AM
      End: Jul 27, 2021, 09:39:49 AM
      Duration: an hour
      Error: HTTP connection abruptly closed
      Start: Jul 27, 2021, 08:49:33 AM
      End: Jul 27, 2021, 09:44:45 AM
      Duration: an hour
      Error: all targets have failed, step: writer.run()
      Type: full
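
      Meanwhile, a host-side check I can run, as a sketch (assuming standard XCP-ng 8.x log locations):

      # Look at XAPI's view of the failed export around the backup window:
      grep -i export /var/log/xensource.log | tail -n 50
      grep -i error /var/log/SMlog | tail -n 50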
      
      posted in Xen Orchestra
      _danielgurgel
    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert said in XCP-ng 8.2.0 RC now available!:

      So the IQN was correctly "saved" during the upgrade, it sounds correct to me. Is that host in a pool? Can you check for the other hosts?

      All the others are correct, with other guests running.
      I'll do some testing by removing the iSCSI connections altogether and replugging them on the host that was updated.

      If you have any tips on how to solve it, I'd appreciate your help.
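
      The replug test I have in mind, as a sketch (assuming dom0 CLI access; UUIDs are placeholders):

      # Find the PBDs connecting the updated host to the iSCSI SR:
      xe pbd-list sr-uuid=<sr-uuid> host-uuid=<host-uuid>

      # Unplug and replug the connection:
      xe pbd-unplug uuid=<pbd-uuid>
      xe pbd-plug uuid=<pbd-uuid>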

      posted in News
      _danielgurgel
    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert said in XCP-ng 8.2.0 RC now available!:

      xe host-list uuid=<host uuid> params=other-config

      [13:01 CLI220 ~]# xe host-list uuid=e1d9e417-961b-43f4-8750-39300e197692 params=other-config
      other-config (MRW)    : agent_start_time: 1604332891.; boot_time: 1604323953.; iscsi_iqn: iqn.2020-11.com.svm:cli219; rpm_patch_installation_time: 1604320091.898; last_blob_sync_time: 1570986114.22; multipathing: true; MAINTENANCE_MODE_EVACUATED_VMS_SUSPENDED: ; MAINTENANCE_MODE_EVACUATED_VMS_HALTED: ; MAINTENANCE_MODE_EVACUATED_VMS_MIGRATED: ; multipathhandle: dmp
      
      [13:02 CLI219 ~]# cat /etc/iscsi/initiatorname.iscsi
      InitiatorName=iqn.2020-11.com.svm:cli219
      InitiatorAlias=CLI219
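
      For the record, the per-host check, as a sketch (the host UUID is a placeholder):

      # Compare XAPI's stored IQN with the open-iscsi initiator name on each host:
      xe host-param-get uuid=<host-uuid> param-name=other-config param-key=iscsi_iqn
      cat /etc/iscsi/initiatorname.iscsi   # run locally on the host itself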
      
      posted in News
      _danielgurgel
    • RE: XCP-ng 8.2.0 RC now available!

      @olivierlambert ISO. Our storage is an IBM V7000.

      posted in News
      _danielgurgel
    • RE: XCP-ng 8.2.0 RC now available!

      Reconnection via iSCSI is failing after upgrading the pool from 8.0 to 8.2 (starting with the master host).

      Nov  2 10:36:59 CLI219 SM: [10134] Warning: vdi_[de]activate present for dummy
      Nov  2 10:37:00 CLI219 SM: [10200] Setting LVM_DEVICE to /dev/disk/by-scsid/36005076d0281000108000000000000d7
      Nov  2 10:37:00 CLI219 SM: [10200] Setting LVM_DEVICE to /dev/disk/by-scsid/36005076d0281000108000000000000d7
      Nov  2 10:37:00 CLI219 SM: [10200] Raising exception [97, Unable to retrieve the host configuration ISCSI IQN parameter]
      Nov  2 10:37:00 CLI219 SM: [10200] ***** LVHD over iSCSI: EXCEPTION <class 'SR.SROSError'>, Unable to retrieve the host configuration ISCSI IQN parameter
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SRCommand.py", line 376, in run
      Nov  2 10:37:00 CLI219 SM: [10200]     sr = driver(cmd, cmd.sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SR.py", line 147, in __init__
      Nov  2 10:37:00 CLI219 SM: [10200]     self.load(sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/LVMoISCSISR", line 86, in load
      Nov  2 10:37:00 CLI219 SM: [10200]     iscsi = BaseISCSI.BaseISCSISR(self.original_srcmd, sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/SR.py", line 147, in __init__
      Nov  2 10:37:00 CLI219 SM: [10200]     self.load(sr_uuid)
      Nov  2 10:37:00 CLI219 SM: [10200]   File "/opt/xensource/sm/BaseISCSI.py", line 150, in load
      Nov  2 10:37:00 CLI219 SM: [10200]     raise xs_errors.XenError('ConfigISCSIIQNMissing')
      Nov  2 10:37:00 CLI219 SM: [10200]
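
      If the upgrade dropped other-config:iscsi_iqn on this host, it should be possible to set it back, as a sketch (the host UUID is a placeholder; verify the correct IQN first):

      xe host-param-set uuid=<host-uuid> other-config:iscsi_iqn=iqn.2020-11.com.svm:cli219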
      
      posted in News
      _danielgurgel
    • CH 8.2

      https://www.citrix.com/blogs/2020/06/25/citrix-hypervisor-8-2-ltsr-is-here/

      posted in News
      _danielgurgel
    • RE: XCP-ng 8.1.0 beta now available!

      @stormi What about this update as well?
      https://support.citrix.com/article/CTX269586

      posted in News
      _danielgurgel
    • RE: Citrix Hypervisor 8.1 released

      @GHW I agree... they failed to achieve the same level of live-migration reliability that VMware has...

      posted in News
      _danielgurgel
    • RE: FIX to XCP-ng

      Before applying the patch, in a new pool (during the CH8-to-XCP-ng 8 update process), moving any VM to the updated host produced the error below.

      After installing the patch on the master host and restarting the toolstack on both the master and the source host, we no longer see the error, and the migration completes successfully.

      The error condition was the failed shutdown of the VM during live migration, so I think the patch actually solves the problem in question (for VMs both with and without network interfaces).

      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] Async.VM.pool_migrate R:29b4b31d74de failed with exception Server_error(INTERNAL_ERROR, [ xenopsd internal error: Device_common.QMP_Error(135, "{\"error\":{\"class\":\"GenericError\",\"desc\":\"Unable to open /dev/fdset/0: No such file or directory\",\"data\":{}},\"id\":\"qmp-000029-135\"}") ])
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] Raised Server_error(INTERNAL_ERROR, [ xenopsd internal error: Device_common.QMP_Error(135, "{\"error\":{\"class\":\"GenericError\",\"desc\":\"Unable to open /dev/fdset/0: No such file or directory\",\"data\":{}},\"id\":\"qmp-000029-135\"}") ])
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace] 1/1 xapi @ SECH82 Raised at file (Thread 946189 has no backtrace table. Was with_backtraces called?, line 0
      Nov 30 12:54:48 SECH82 xapi: [error|SECH82|946189 ||backtrace]
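
      For reference, a toolstack restart can be done without rebooting the host:

      xe-toolstack-restart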
      
      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      @stormi It seems to be the same problem.

      Sorry, it may have been a "placebo effect", but after applying the update we no longer had the error when migrating.

      I'm going to continue the tests... including with VMs without a network card, as described in the link.

      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      @stormi After applying the new patch, migrations are no longer failing. We just tested on a new cluster with 20 servers.

      Thank you so much for your help.
      Will this patch be included as an official update?

      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      @olivierlambert Yes, all VMs have their CDs ejected.

      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      @stormi said in FIX to XCP-ng:

      cleaning up VM stat

      Yes, I received the error; see below:

      xensource.log:Nov 27 08:20:32 SECH82 xenopsd-xc: [debug|SECH82|24 |Async.VM.pool_migrate R:8472b9c00966|xenops_server] Caught Xenops_interface.Xenopsd_error([S(Storage_backend_error);[S(SR_BACKEND_FAILURE_46);[S();S(The VDI is not available [opterr=VDI e3222f55-10ce-4d85-b6a8-a7c81f1a5a1d not detached cleanly]);S()]]]): cleaning up VM state
      
      [10:14 SECH82 log]# cat /etc/xensource-inventory
      PRIMARY_DISK='/dev/disk/by-id/scsi-36d0946606f4911002317966d118d6a4f'
      DOM0_VCPUS='16'
      PRODUCT_VERSION='8.0.0'
      DOM0_MEM='8192'
      CONTROL_DOMAIN_UUID='8320160d-3a65-4051-8d55-2e619ad4875f'
      MANAGEMENT_ADDRESS_TYPE='IPv4'
      COMPANY_NAME_SHORT='Open Source'
      PARTITION_LAYOUT='ROOT,BACKUP,LOG,BOOT,SWAP,SR'
      PRODUCT_VERSION_TEXT='8.0'
      INSTALLATION_UUID='a394d22c-94a9-4e83-89c0-fd366b191216'
      PRODUCT_BRAND='XCP-ng'
      BRAND_CONSOLE='XCP-ng Center'
      PRODUCT_VERSION_TEXT_SHORT='8.0'
      MANAGEMENT_INTERFACE='xenbr2'
      PRODUCT_NAME='xenenterprise'
      STUNNEL_LEGACY='true'
      BUILD_NUMBER='release/naples/master/45'
      PLATFORM_VERSION='3.0.0'
      COMPANY_PRODUCT_BRAND='XCP-ng'
      PLATFORM_NAME='XCP'
      BACKUP_PARTITION='/dev/disk/by-id/scsi-36d0946606f4911002317966d118d6a4f-part2'
      BRAND_CONSOLE_URL='https://xcp-ng.org'
      INSTALLATION_DATE='2019-11-27 01:22:15.630698'
      COMPANY_NAME='Open Source'
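
      A minimal recovery sketch for the "not detached cleanly" VDI state, assuming dom0 CLI access (the SR UUID is a placeholder; the VDI UUID is from the log above):

      # Inspect the flagged VDI, then rescan its SR to refresh the attach state:
      xe vdi-list uuid=e3222f55-10ce-4d85-b6a8-a7c81f1a5a1d params=all
      xe sr-scan uuid=<sr-uuid>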
      
      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      @stormi Described in CA-327906 🙃

      posted in Development
      _danielgurgel
    • RE: FIX to XCP-ng

      I think I'm running into this problem. We are migrating from CH8 to XCP-ng and, during migration, I have noticed either the process failing (the migration stalls at 100% and never finishes) or an unexpected shutdown of the virtual server.

      Does this bug affect migration after a pool is fully updated to XCP-ng 8 (with all updates applied)?

      posted in Development
      _danielgurgel
    • FIX to XCP-ng

      Is it possible to apply this fix also in XCP-ng 8?

      Live migration, storage live migration, and VDI migration can fail for VMs that have no attached
      VIFs. After this failure, the VM hangs in shutdown mode. (CA-327906)

      https://github.com/xapi-project/xenopsd/commit/8c3756b952476ff82f9bcbb9ab11ea027bc5ccbb

      edwintorok committed to xapi-project/xenopsd
      CA-327906: don't fail migration if a xenstore directory is missing
      
      Migration of a VM without VIFs got stuck at:
      ```
      Caught Xs_protocol.Enoent("directory"): cleaning up VM stat
      ```
      
      Some more debugging showed it failed when trying to move this xenstore
      entry, which was missing:
      ```
      /xapi/8e8c58cd-eba4-cde3-e1e5-000000000001
      ```
      
      The intention of this code seems to have been to ignore missing entries,
      so do that by ignoring Enoent on the root of a tree.
      Do not ignore Enoent on entries deep in the tree since that would
      indicate a race condition elsewhere.
      
      Signed-off-by: Edwin Török <edvin.torok@citrix.com>
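
      To check whether a given VM is affected, a quick look from dom0, as a sketch (the xenstore path is the example from the commit message):

      xenstore-ls /xapi | head
      xenstore-exists /xapi/8e8c58cd-eba4-cde3-e1e5-000000000001 && echo present
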
      posted in Development
      _danielgurgel