XCP-ng
    Posts by tc-atwork

    • RE: Veeam and XCP-ng

      @KPS said in Veeam and XCP-ng:

      • Instant restore "pendant" is missing --> makes restore speed MORE critical

      While not entirely the same, depending on why you are restoring, you can restore to the most recent snapshot taken by your scheduled backups, or set up a separate snapshot schedule that runs independently of your backups.
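
      A rough xe-CLI equivalent, in case anyone wants to script this outside the XO UI (UUIDs are placeholders):

      ```
      # take an ad-hoc snapshot before a risky change
      xe vm-snapshot uuid=<vm-uuid> new-name-label=pre-change

      # list the snapshots that exist for the VM
      xe snapshot-list snapshot-of=<vm-uuid> params=uuid,name-label,snapshot-time

      # roll the VM back to a chosen snapshot
      xe snapshot-revert snapshot-uuid=<snapshot-uuid>
      ```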

      To add to your list: XO also can't snapshot (and therefore can't back up) VMs that are configured with USB passthrough, due to an XCP-ng 8.2 limitation. Support has told me this "should be solved" in 8.3, but to me "should" does not mean confirmed, and I'm not aware of any timeframe for a stable 8.3 release. A quick way to check which VMs are affected is sketched below.
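
      Something like this should find them (a sketch, assuming the standard pusb/vusb commands are available on your XCP-ng 8.2 hosts):

      ```
      # VMs that currently have a virtual USB device attached (i.e. the ones XO can't snapshot)
      xe vusb-list params=uuid,vm-uuid,vm-name-label

      # physical USB devices set up for passthrough on the pool
      xe pusb-list
      ```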

      posted in XCP-ng
    • RE: Veeam and XCP-ng

      @DustinB said in Veeam and XCP-ng:

      Vates would have to build (either into or their own) guest utility that supports application awareness, which can be done by using the "Driver Development Kit (DDK) 8.0.0"

      And they should, if they want to compete with VMware and Hyper-V. I think a native, completely Vates-controlled guest utility should be high on the priority list: preferably one that is automatically distributed via Windows Update (admittedly, I don't know what that costs) or automatically installed on Linux systems, controlled by a setting at the VM and/or pool level.

      posted in XCP-ng
    • RE: Veeam and XCP-ng

      @jbamford said in Veeam and XCP-ng:

      there is having something faster and there is something that’s going to be reliable

      How about fast and reliable, which is Veeam.

      I’m not saying that Veeam isn’t going to be reliable but what I am going to say is that you are at more risk of backups not being reliable

      If this is a Production environment I would go with the XCP-ng and XO because if something goes wrong

      That's a pretty hot take: that Veeam, an industry-leading product from a company whose only business is backup (and they are damn good at it), is somehow going to be less reliable than what is, in my opinion, the half-baked solution that exists in XO today. I appreciate the work the team is doing to make it better and better (seriously, I love reading the monthly progress updates), but the fact remains. Additionally, XO needs at least double a VM's provisioned storage in order to properly coalesce snapshots, a requirement Veeam doesn't have.

      While I agree with the answer someone else gave, that this question should really be put to Veeam, I think the discussion here is important because it may drive interest at Vates in working with Veeam. Especially now that a lot of VMware customers who are accustomed to everything Veeam can do are making their way to XO, I'd expect Veeam integration to at least be under consideration.

      posted in XCP-ng
    • RE: XO6/XOLite Power state dropdown suggestion

      @clemencebx Awesome, thanks!

      posted in Xen Orchestra
    • RE: Attempting to migrate VM to different host in a different pool get 'operation failed'

      Yeah, the trouble is that manually migrating from a different hypervisor platform (Hyper-V) through web-browser downloads and uploads, with my laptop as the middleman, is extremely slow, so I just transferred the disks directly using SCP and then imported them using xe commands, roughly as sketched below.
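
      (Sketch with placeholders, not my exact commands; the sizes and the format flag depend on the source disk and your xe version.)

      ```
      # copy the exported VHD straight from the Hyper-V side to the XCP-ng host
      scp nwweb1_disk1.vhd root@<xcp-ng-host>:/tmp/

      # on the host: create an empty VDI at least as large as the source disk's virtual size
      xe vdi-create sr-uuid=<sr-uuid> name-label=nwweb1_disk1 type=user virtual-size=<size>GiB

      # import the VHD contents into that VDI
      xe vdi-import uuid=<new-vdi-uuid> filename=/tmp/nwweb1_disk1.vhd format=vhd

      # attach the imported disk to the VM on a free device slot
      xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<new-vdi-uuid> device=1 mode=RW type=Disk
      ```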

      posted in Xen Orchestra
    • RE: Attempting to migrate VM to different host in a different pool get 'operation failed'

      Okay, yup, that was it. The migration is in progress now. When I manually added these disks to the host I never cleaned up the original disk files after importing from the CLI, so removing them allowed the SR scan to work properly (or something like that). Thanks for the direction @olivierlambert! The cleanup boiled down to the commands sketched below.

      Kind of annoying that totally unrelated disks would cause a migration failure for another VM, but alas. Are there any plans to improve the error reporting in XO?
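
      For anyone finding this later, the cleanup was roughly the following (paths and the SR UUID are taken from the SMLog output posted below; only do this after confirming no VM or VDI still references the files):

      ```
      # delete the stray, hand-copied VHDs that were left sitting in the SR mount path
      rm /var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/nwweb1_disk1.vhd \
         /var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/nwweb1_disk2.vhd

      # rescan so XAPI's view of the SR matches what is actually on disk
      xe sr-scan uuid=54938185-cf60-f996-e2d7-fe8c47c38abb
      ```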

      posted in Xen Orchestra
    • RE: Attempting to migrate VM to different host in a different pool get 'operation failed'

      My assumption is the error is due to these lines:

      Aug 22 10:09:06 H2SPH180034 SM: [26118] ***** sr_scan: EXCEPTION <class 'XenAPI.Failure'>, ['UUID_INVALID', 'VDI', 'nwweb1_disk2']

      Aug 22 10:09:06 H2SPH180034 SM: [26118] Raising exception [40, The SR scan failed [opterr=['UUID_INVALID', 'VDI', 'nwweb1_disk2']]]
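
      In other words, the scan appears to fail when it tries to introduce a VDI whose on-disk filename is not a UUID (the file's location is reused as the new VDI's UUID, and 'nwweb1_disk2' doesn't parse as one). A quick way to spot such files (sketch):

      ```
      # list VHDs in the SR mount whose basename is not a well-formed UUID
      ls /var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/*.vhd \
        | grep -vE '/[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}\.vhd$'
      ```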

      posted in Xen Orchestra
    • RE: Attempting to migrate VM to different host in a different pool get 'operation failed'

      @olivierlambert here is the SMLog output from attempting the migration with the VM shut down. The VM UUID is 06afe27b-045e-d1a9-1d57-aaea2e4394ed and the disk UUID is 5bcc1298-712a-4288-95c8-11b37de30191.

      Aug 22 10:09:04 H2SPH180034 SM: [26008] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:04 H2SPH180034 SM: [26008] lock: acquired /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:04 H2SPH180034 SM: [26008] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:04 H2SPH180034 SM: [26008]   pread SUCCESS
      Aug 22 10:09:05 H2SPH180034 SM: [26008] vdi_attach {'sr_uuid': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'subtask_of': 'DummyRef:|c5a430fd-b0bc-4572-a15e-2505a9a882f6|VDI.attach2', 'vdi_ref': 'OpaqueRef:737c43e7-81e3-40d8-9f3d-7ba4fa7b8d5e', 'vdi_on_boot': 'persist', 'args': ['true'], 'o_direct': False, 'vdi_location': '5bcc1298-712a-4288-95c8-11b37de30191', 'host_ref': 'OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8', 'session_ref': 'OpaqueRef:4b319af9-5119-4f8a-804a-a91f317b4fc2', 'device_config': {'device': '/dev/disk/by-id/scsi-36d09466058006700293b74fd0e7c6873-part3', 'SRmaster': 'true'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:4ecc1a6e-8115-4104-b06b-6b1431b60de7', 'local_cache_sr': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'vdi_uuid': '5bcc1298-712a-4288-95c8-11b37de30191'}
      Aug 22 10:09:05 H2SPH180034 SM: [26008] lock: opening lock file /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:05 H2SPH180034 SM: [26008] <__main__.EXTFileVDI object at 0x7febd9eeb110>
      Aug 22 10:09:05 H2SPH180034 SM: [26008] result: {'params_nbd': 'nbd:unix:/run/blktap-control/nbd/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191', 'o_direct_reason': 'NO_RO_IMAGE', 'params': '/dev/sm/backend/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEjViY2MxMjk4LTcxMmEtNDIgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIDViY2MxMjk4LTcxMmEtNDI4OC05NWM4LTExYjM3ZGUzMDE5MSA=', 'vdi-uuid': '5bcc1298-712a-4288-95c8-11b37de30191', 'mem-pool': '54938185-cf60-f996-e2d7-fe8c47c38abb'}}
      Aug 22 10:09:05 H2SPH180034 SM: [26008] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: acquired /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]   pread SUCCESS
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26037] vdi_activate {'sr_uuid': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'subtask_of': 'DummyRef:|a7064e4e-5a08-4b68-afb9-a4aba04c93d3|VDI.activate', 'vdi_ref': 'OpaqueRef:737c43e7-81e3-40d8-9f3d-7ba4fa7b8d5e', 'vdi_on_boot': 'persist', 'args': ['true'], 'o_direct': False, 'vdi_location': '5bcc1298-712a-4288-95c8-11b37de30191', 'host_ref': 'OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8', 'session_ref': 'OpaqueRef:28ba8e04-21f5-4bbc-b1ab-dafab080dec2', 'device_config': {'device': '/dev/disk/by-id/scsi-36d09466058006700293b74fd0e7c6873-part3', 'SRmaster': 'true'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:4ecc1a6e-8115-4104-b06b-6b1431b60de7', 'local_cache_sr': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'vdi_uuid': '5bcc1298-712a-4288-95c8-11b37de30191'}
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: opening lock file /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:05 H2SPH180034 SM: [26037] blktap2.activate
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: acquired /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:05 H2SPH180034 SM: [26037] Adding tag to: 5bcc1298-712a-4288-95c8-11b37de30191
      Aug 22 10:09:05 H2SPH180034 SM: [26037] Activate lock succeeded
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]   pread SUCCESS
      Aug 22 10:09:05 H2SPH180034 SM: [26037] PhyLink(/dev/sm/phy/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191) -> /var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd
      Aug 22 10:09:05 H2SPH180034 SM: [26037] <EXTSR.EXTFileVDI object at 0x7f727c2cc9d0>
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/tap-ctl', 'allocate']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]  = 0
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/tap-ctl', 'spawn']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]  = 0
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/tap-ctl', 'attach', '-p', '26090', '-m', '2']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]  = 0
      Aug 22 10:09:05 H2SPH180034 SM: [26037] ['/usr/sbin/tap-ctl', 'open', '-p', '26090', '-m', '2', '-a', 'vhd:/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd', '-t', '40']
      Aug 22 10:09:05 H2SPH180034 SM: [26037]  = 0
      Aug 22 10:09:05 H2SPH180034 SM: [26037] Set scheduler to [noop] on [/sys/dev/block/254:2]
      Aug 22 10:09:05 H2SPH180034 SM: [26037] tap.activate: Launched Tapdisk(vhd:/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd, pid=26090, minor=2, state=R)
      Aug 22 10:09:05 H2SPH180034 SM: [26037] DeviceNode(/dev/sm/backend/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191) -> /dev/xen/blktap-2/tapdev2
      Aug 22 10:09:05 H2SPH180034 SM: [26037] NBDLink(/run/blktap-control/nbd/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191) -> /run/blktap-control/nbd26090.2
      Aug 22 10:09:05 H2SPH180034 SM: [26037] lock: released /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:05 H2SPH180034 SM: [26118] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26118] lock: acquired /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:05 H2SPH180034 SM: [26118] sr_scan {'sr_uuid': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'subtask_of': 'DummyRef:|d9d1c923-13d3-4284-a2b0-166f836619c9|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8', 'session_ref': 'OpaqueRef:5c779a32-0bb5-4034-a630-2739e4462614', 'device_config': {'device': '/dev/disk/by-id/scsi-36d09466058006700293b74fd0e7c6873-part3', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:4ecc1a6e-8115-4104-b06b-6b1431b60de7', 'local_cache_sr': '54938185-cf60-f996-e2d7-fe8c47c38abb'}
      Aug 22 10:09:05 H2SPH180034 SM: [26118] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/*.vhd']
      Aug 22 10:09:05 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:05 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/d21f98dc-c385-4d2a-81ae-f2f79c229e5c.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/f0d05834-e6ec-45ee-b5a9-d8036f8e7340.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/a8631a05-efd9-4e56-9b7f-ee7627051e76.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/ea8fcb2d-0dd1-4ce9-b894-ee1445fa6d80.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/e36aeb72-33dc-47ec-8db9-915d5420a12f.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/nwweb1_disk2.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/c8886b66-010f-4288-9139-1e1c19d557dd.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/nwweb1_disk1.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/ec0ce545-2930-48d4-8d57-eb99f35be554.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['vhd-util', 'key', '-p', '-n', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/8a33a5d9-d26f-4c4d-acca-189b0d6abd3e.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ['ls', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb', '-1', '--color=never']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26118] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/running
      Aug 22 10:09:06 H2SPH180034 SM: [26118] lock: tried lock /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/running, acquired: True (exists: True)
      Aug 22 10:09:06 H2SPH180034 SM: [26118] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/running
      Aug 22 10:09:06 H2SPH180034 SM: [26118] Kicking GC
      Aug 22 10:09:06 H2SPH180034 SMGC: [26118] === SR 54938185-cf60-f996-e2d7-fe8c47c38abb: gc ===
      Aug 22 10:09:06 H2SPH180034 SMGC: [26145] Will finish as PID [26146]
      Aug 22 10:09:06 H2SPH180034 SMGC: [26118] New PID [26145]
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/running
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/gc_active
      Aug 22 10:09:06 H2SPH180034 SM: [26118] missing config for vdi: nwweb1_disk2
      Aug 22 10:09:06 H2SPH180034 SM: [26118] missing config for vdi: nwweb1_disk1
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 25685762560 <> 25141539328
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 52819563008 <> 495104
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 64737485312 <> 52703637504
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 32077820416 <> 30558618112
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 46813917696 <> 45536358912
      Aug 22 10:09:06 H2SPH180034 SM: [26118] utilisation 37654450688 <> 87552
      Aug 22 10:09:06 H2SPH180034 SM: [26118] new VDIs on disk: set(['nwweb1_disk2', 'nwweb1_disk1'])
      Aug 22 10:09:06 H2SPH180034 SM: [26118] VDIs changed on disk: ['d21f98dc-c385-4d2a-81ae-f2f79c229e5c', 'a8631a05-efd9-4e56-9b7f-ee7627051e76', 'ea8fcb2d-0dd1-4ce9-b894-ee1445fa6d80', 'e36aeb72-33dc-47ec-8db9-915d5420a12f', 'c8886b66-010f-4288-9139-1e1c19d557dd', 'ec0ce545-2930-48d4-8d57-eb99f35be554']
      Aug 22 10:09:06 H2SPH180034 SM: [26118] Introducing VDI with location=nwweb1_disk2
      Aug 22 10:09:06 H2SPH180034 SM: [26118] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ***** sr_scan: EXCEPTION <class 'XenAPI.Failure'>, ['UUID_INVALID', 'VDI', 'nwweb1_disk2']
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     return self._run_locked(sr)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     rv = self._run(sr, target)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     return sr.scan(self.params['sr_uuid'])
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/FileSR.py", line 216, in scan
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     return super(FileSR, self).scan(sr_uuid)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SR.py", line 346, in scan
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     scanrecord.synchronise()
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SR.py", line 616, in synchronise
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     self.synchronise_new()
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SR.py", line 589, in synchronise_new
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     vdi._db_introduce()
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/VDI.py", line 488, in _db_introduce
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     vdi = self.sr.session.xenapi.VDI.db_introduce(uuid, self.label, self.description, self.sr.sr_ref, ty, self.shareable, self.read_only, {}, self.location, {}, sm_config, self.managed, str(self.size), str(self.utilisation), metadata_of_pool, is_a_snapshot, xmlrpclib.DateTime(snapshot_time), snapshot_of, cbt_enabled)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/usr/lib/python2.7/site-packages/XenAPI.py", line 264, in __call__
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     return self.__send(self.__name, args)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/usr/lib/python2.7/site-packages/XenAPI.py", line 160, in xenapi_request
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     result = _parse_result(getattr(self, methodname)(*full_params))
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/usr/lib/python2.7/site-packages/XenAPI.py", line 238, in _parse_result
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     raise Failure(result['ErrorDescription'])
      Aug 22 10:09:06 H2SPH180034 SM: [26118]
      Aug 22 10:09:06 H2SPH180034 SM: [26118] Raising exception [40, The SR scan failed  [opterr=['UUID_INVALID', 'VDI', 'nwweb1_disk2']]]
      Aug 22 10:09:06 H2SPH180034 SM: [26118] ***** Local EXT3 VHD: EXCEPTION <class 'SR.SROSError'>, The SR scan failed  [opterr=['UUID_INVALID', 'VDI', 'nwweb1_disk2']]
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     ret = cmd.run(sr)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]   File "/opt/xensource/sm/SRCommand.py", line 120, in run
      Aug 22 10:09:06 H2SPH180034 SM: [26118]     raise xs_errors.XenError(excType, opterr=msg)
      Aug 22 10:09:06 H2SPH180034 SM: [26118]
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] Found 0 cache files
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: tried lock /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/gc_active, acquired: True (exists: True)
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: tried lock /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr, acquired: True (exists: True)
      Aug 22 10:09:06 H2SPH180034 SM: [26146] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/*.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26146]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] SR 5493 ('0034.datastore1') (11 VDIs in 11 VHD trees):
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         d21f98dc(50.000G/23.922G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         5bcc1298(120.000G/120.166G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         f0d05834(4.000G/591.161M)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         a8631a05(232.887G/49.192G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         ea8fcb2d(64.000G/60.291G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         e36aeb72(80.000G/29.875G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         nwweb1_d(40.001G/35.009G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         c8886b66(100.000G/43.599G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         nwweb1_d(232.886G/49.098G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         ec0ce545(40.002G/35.068G)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]         8a33a5d9(302.000M/98.192M)
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146]
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] No work, exiting
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] GC process exiting, no work left
      Aug 22 10:09:06 H2SPH180034 SM: [26146] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/gc_active
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] In cleanup
      Aug 22 10:09:06 H2SPH180034 SMGC: [26146] SR 5493 ('0034.datastore1') (11 VDIs in 11 VHD trees): no changes
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: acquired /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26186] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26186]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26186] vdi_deactivate {'sr_uuid': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'subtask_of': 'DummyRef:|d5df22a3-6aa7-40ac-b53e-60c564f8bd03|VDI.deactivate', 'vdi_ref': 'OpaqueRef:737c43e7-81e3-40d8-9f3d-7ba4fa7b8d5e', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '5bcc1298-712a-4288-95c8-11b37de30191', 'host_ref': 'OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8', 'session_ref': 'OpaqueRef:c2ab8621-e768-495d-9cb9-733fcc2a7191', 'device_config': {'device': '/dev/disk/by-id/scsi-36d09466058006700293b74fd0e7c6873-part3', 'SRmaster': 'true'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:4ecc1a6e-8115-4104-b06b-6b1431b60de7', 'local_cache_sr': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'vdi_uuid': '5bcc1298-712a-4288-95c8-11b37de30191'}
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: opening lock file /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:06 H2SPH180034 SM: [26186] blktap2.deactivate
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: acquired /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:06 H2SPH180034 SM: [26186] ['/usr/sbin/tap-ctl', 'close', '-p', '26090', '-m', '2', '-t', '30']
      Aug 22 10:09:06 H2SPH180034 SM: [26186]  = 0
      Aug 22 10:09:06 H2SPH180034 SM: [26186] ['/usr/sbin/tap-ctl', 'detach', '-p', '26090', '-m', '2']
      Aug 22 10:09:06 H2SPH180034 SM: [26186]  = 0
      Aug 22 10:09:06 H2SPH180034 SM: [26186] ['/usr/sbin/tap-ctl', 'free', '-m', '2']
      Aug 22 10:09:06 H2SPH180034 SM: [26186]  = 0
      Aug 22 10:09:06 H2SPH180034 SM: [26186] tap.deactivate: Shut down Tapdisk(vhd:/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd, pid=26090, minor=2, state=R)
      Aug 22 10:09:06 H2SPH180034 SM: [26186] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26186]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26186] Removed host key host_OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8 for 5bcc1298-712a-4288-95c8-11b37de30191
      Aug 22 10:09:06 H2SPH180034 SM: [26186] lock: released /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:06 H2SPH180034 SM: [26231] lock: opening lock file /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26231] lock: acquired /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      Aug 22 10:09:06 H2SPH180034 SM: [26231] ['/usr/sbin/td-util', 'query', 'vhd', '-vpfb', '/var/run/sr-mount/54938185-cf60-f996-e2d7-fe8c47c38abb/5bcc1298-712a-4288-95c8-11b37de30191.vhd']
      Aug 22 10:09:06 H2SPH180034 SM: [26231]   pread SUCCESS
      Aug 22 10:09:06 H2SPH180034 SM: [26231] vdi_detach {'sr_uuid': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'subtask_of': 'DummyRef:|12db307f-14fd-44cc-84c8-34b7bfeb02bb|VDI.detach', 'vdi_ref': 'OpaqueRef:737c43e7-81e3-40d8-9f3d-7ba4fa7b8d5e', 'vdi_on_boot': 'persist', 'args': [], 'o_direct': False, 'vdi_location': '5bcc1298-712a-4288-95c8-11b37de30191', 'host_ref': 'OpaqueRef:e0d2b385-379f-4342-914a-66aa75eb0ec8', 'session_ref': 'OpaqueRef:3c833184-2a57-426e-a86a-3e5452790c81', 'device_config': {'device': '/dev/disk/by-id/scsi-36d09466058006700293b74fd0e7c6873-part3', 'SRmaster': 'true'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:4ecc1a6e-8115-4104-b06b-6b1431b60de7', 'local_cache_sr': '54938185-cf60-f996-e2d7-fe8c47c38abb', 'vdi_uuid': '5bcc1298-712a-4288-95c8-11b37de30191'}
      Aug 22 10:09:06 H2SPH180034 SM: [26231] lock: opening lock file /var/lock/sm/5bcc1298-712a-4288-95c8-11b37de30191/vdi
      Aug 22 10:09:06 H2SPH180034 SM: [26231] lock: released /var/lock/sm/54938185-cf60-f996-e2d7-fe8c47c38abb/sr
      
      posted in Xen Orchestra
    • Attempting to migrate VM to different host in a different pool get 'operation failed'

      Full error output:

      vm.migrate
      {
        "vm": "06afe27b-045e-d1a9-1d57-aaea2e4394ed",
        "mapVifsNetworks": {
          "e92e3c62-6457-5f3f-de12-ba7c54c91cce": "2187cbd6-f957-c58c-c095-36ddda241566"
        },
        "migrationNetwork": "bc68b95e-6b2b-8331-ffc8-3b7f831c9027",
        "sr": "6d5f5f12-80af-dc6e-f67c-b016136a8728",
        "targetHost": "60889d12-d9e1-403c-9997-7ed2889fb9c0"
      }
      {
        "code": 21,
        "data": {
          "objectId": "06afe27b-045e-d1a9-1d57-aaea2e4394ed",
          "code": "SR_BACKEND_FAILURE_40"
        },
        "message": "operation failed",
        "name": "XoError",
        "stack": "XoError: operation failed
          at operationFailed (/usr/local/lib/node_modules/xo-server/node_modules/xo-common/src/api-errors.js:21:32)
          at file:///usr/local/lib/node_modules/xo-server/src/api/vm.mjs:567:15
          at Xo.migrate (file:///usr/local/lib/node_modules/xo-server/src/api/vm.mjs:553:3)
          at Api.#callApiMethod (file:///usr/local/lib/node_modules/xo-server/src/xo-mixins/api.mjs:417:20)"
      }
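
      For context: SR_BACKEND_FAILURE_40 looks like the generic XAPI wrapper around storage-manager error 40 ("The SR scan failed"); the useful detail ends up in /var/log/SMLog on the host, which is where the output in the follow-up posts above came from. Something like:

      ```
      # watch the storage manager log on the host while retrying the migration
      tail -f /var/log/SMLog

      # or pull the recent failures out after the fact
      grep -iE 'EXCEPTION|Raising exception' /var/log/SMLog | tail -n 20
      ```
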
      posted in Xen Orchestra
    • RE: XO6/XOLite Power state dropdown suggestion

      @olivierlambert thanks.

      P.S. I know I'm being a little nitpicky with this suggestion, but honestly I can't wait for XO 6; the redesign overhaul looks amazing, and functionally it looks like a much more cohesive, better user experience.

      posted in Xen Orchestra
    • XO6/XOLite Power state dropdown suggestion

      In the most recent blog post, for XO 5.85, a preview of the VM console view was shown towards the end. I'd like to suggest that the "Change state" button instead show the VM's current power state. It'd be great to be able to tell, from anywhere in that view, whether it's rebooting, suspended, etc.

      So it'd look like this:

      [attached mockup image]

      Hopefully this is the right place to post. Thanks for all your work!

      posted in Xen Orchestra