XCP-ng Forum: Popular topics

• Internal error: Not_found after Vinchin backup
  XCP-ng · 54 posts · 0 votes · 291 views
  Last post by olivierlambert: "You have a rather long chain now, but it should coalesce anyway."

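  For anyone watching a long chain coalesce like this, a minimal way to follow it from dom0 (a sketch, assuming a default XCP-ng install; the SR UUID is a placeholder):

    # Ask the storage manager to rescan the SR so the GC re-evaluates the chain
    xe sr-scan uuid=<sr-uuid>

    # Follow coalesce activity in the storage manager log
    grep -i coalesce /var/log/SMlog | tail -n 20
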
• Xen Orchestra from Sources unreachable after applying XCP-ng patch updates
  Xen Orchestra · 17 posts · 0 votes · 131 views · started by JamfoFL
  Last post: @JamfoFL said in "Xen Orchestra from Sources unreachable after applying XCP-ng patch updates": "@john.c I am able to PING the VM hosting Orchestra with no issues." How about tracing the route to the TCP port with tcptraceroute (Linux) or tracetcp (Windows)? Can you run it against the IP address and port used by Xen Orchestra, from both the local machine and a remote machine? That can help show whether there is an issue in the path to that IP's TCP port.

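  A minimal sketch of that check (the address and port are placeholders; use whatever Xen Orchestra actually listens on, typically 80 or 443):

    # Linux: trace the path to the XO web port over TCP rather than ICMP
    tcptraceroute <xo-ip> 80

    # Windows: equivalent check
    tracetcp <xo-ip>:80
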
• GPU Passthrough
  Management · 15 posts · 0 votes · 139 views
  Last post: @tjkreidl Thanks once again for your help and guidance! I have seen and read many how-to videos and docs; the problem was not the method I was using. I managed to get this working, although I suspect there is a bug in XCP-ng: if a USB keyboard and mouse are passed through along with the GPU, the GPU gets stuck in the D3 state (the classic GPU reset problem); if no vUSB is passed but the GPU is passed through, the GPU works and resets correctly.

• PCI Passthrough INTERNAL_ERROR
  Management · 13 posts · 0 votes · 191 views
  Last post: @andriy.sultanov After running that command, nothing was returned.

• Disk import failed
  Xen Orchestra · Solved · 10 posts · 0 votes · 131 views
  Last post by olivierlambert: "Great news! Thanks for the feedback."

• Possible to reconnect SR automatically?
  Development · 9 posts · 0 votes · 87 views
  Last post: @ronan-a Thanks! I've installed the plugin and configured it. Now to wait until the next power outage (or until I get time this weekend to test it and annoy my family).

• Pool Master
  XCP-ng · 8 posts · 0 votes · 43 views
  Last post: @olivierlambert Dang, OK. I waited a few minutes, then clicked Connect in XOA for that host and it connected. Not sure what to do, really.

• Backup status email template
  Backup · 8 posts · 0 votes · 32 views
  Last post by olivierlambert: "Great!"

• Installation: expecting an RSA key, any plans to support elliptic curve keys?
  Xen Orchestra · 10 posts · 0 votes · 324 views
  Last post: @jivanpal We do not currently have any plans to support elliptic curve keys; this is a very sensitive topic given the different governmental security requirements around the world. Note that Let's Encrypt recommends a dual setup for this exact reason: "Our recommendation is to serve a dual-cert config, offering an RSA certificate by default, and a (much smaller) ECDSA certificate to those clients that indicate support." (https://letsencrypt.org/docs/integration-guide/)

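  For reference, generating the two key types for such a dual-certificate setup looks roughly like this with OpenSSL (file names are placeholders):

    # RSA key: the widely compatible default
    openssl genrsa -out rsa.key 2048

    # ECDSA key on the P-256 curve: much smaller, offered to clients that support it
    openssl ecparam -name prime256v1 -genkey -noout -out ecdsa.key
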
• REST API: Mount CD-ROM to VM
  REST API · 6 posts · 0 votes · 32 views
  Last post by lsouai-vates: "@StephenAOINS This endpoint is not currently present in our REST API swagger, but we do plan to add it to the list of endpoints. We are currently finalizing the migration of existing endpoints; the next step will be adding new ones. We will keep you informed when it is available. Feel free to come back to us if you want to learn more, and follow our blog posts. Have a good day!"

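  Until such an endpoint exists, the same operation is possible from the command line; a minimal sketch using the standard xe CLI (the VM UUID and ISO name are placeholders):

    # Insert an ISO from an ISO SR into the VM's virtual CD drive
    xe vm-cd-insert uuid=<vm-uuid> cd-name=<image.iso>

    # Eject it again when finished
    xe vm-cd-eject uuid=<vm-uuid>
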
• VM backup retry: status failed even though the second attempt succeeded
  Backup · 9 posts · 0 votes · 109 views
  Last post: @Bastien-Nollet Here is the log: 2025-07-09T01_00_00.011Z - backup NG.json.txt (attached).

• WORM Backups with XCP-ng / Xen Orchestra - Seeking Solutions & Experience
  Backup · 6 posts · 0 votes · 62 views
  Last post by lsouai-vates: "@olivierlambert I agree, ping @thomas-dkmt"

• XCP-ng 8.3 PCI Passthrough Trials and Tribulations
  Hardware · 6 posts · 0 votes · 130 views
  Last post: @gb.123 In XO, on the "Advanced" tab for the VM, I added the GPU devices by first adding them both as "Attached PCIs" near the bottom of the page. I also disabled the "VGA" option under "Xen Settings", clicked the "+" next to "GPUs", and added the vGPU type "passthrough () 0x0" that was available in the drop-down list. I don't know whether it matters, but I also set the "Static Max", "Dynamic Min", and "Dynamic Max" memory limits under "VM limits" to the total RAM size I allocated to the VM.

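  The equivalent configuration can also be done from dom0; a rough sketch of the commonly documented xe approach (the PCI addresses and VM UUID are placeholders, and the VM must be halted):

    # Find the PCI addresses of the GPU functions (assuming an NVIDIA card here)
    lspci | grep -i nvidia

    # Attach both functions to the VM; "0/" is the virtual slot index
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0,0/0000:01:00.1
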
• XCP-ng 8.3 updates announcements and testing
  News · 194 posts · 1 vote · 20k views · started by stormi
  Last post: @olivierlambert @stormi Another bug I encountered (I don't know whether it should be mentioned here or opened as an issue on GitHub). It may also be present in previous versions, as this is the first version I have tried it on. Summary: if a USB keyboard and mouse are passed through along with the GPU, the GPU gets stuck in the D3 state on shutdown/restart of the VM (the classic GPU reset problem); if no vUSB is passed but the GPU is passed through, the GPU works and resets correctly on shutdown/restart. I will try the workaround of passing through the whole USB controller to see how it goes, but in my use case that may not be possible for regular usage (I'll just be doing this for testing).

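  A quick way to check both pieces of that from dom0 (a sketch; the PCI address is a placeholder for the GPU address reported by lspci):

    # List USB controllers, in case the whole controller is to be passed through
    lspci | grep -i usb

    # Show the GPU's PCI power-management state (D0 = running, D3 = the stuck case)
    lspci -vvv -s 01:00.0 | grep 'Status: D'
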
• LDAP user auth doesn't work after update to the current version
  Xen Orchestra · 5 posts · 0 votes · 44 views
  Last post: @olivierlambert Thanks for the info! @pdonias Do you have any idea what could be wrong? Cheers, Ringo

• XOA Console not coming up
  Xen Orchestra · 5 posts · 0 votes · 96 views
  Last post: @olivierlambert Tried this:

    XCPNG | ~/_scripts > netstat -tulpn | sed -n '1,2p;/5900\|80/p'
    Active Internet connections (only servers)
    Proto  Recv-Q  Send-Q  Local Address   Foreign Address  State   PID/Program name
    tcp    0       0       127.0.0.1:5900  0.0.0.0:*        LISTEN  1741/vncterm
    tcp    0       0       0.0.0.0:10809   0.0.0.0:*        LISTEN  2896/xapi-nbd
    tcp6   0       0       :::80           :::*             LISTEN  2887/xapi
    udp    0       0       0.0.0.0:780     0.0.0.0:*                1029/rpcbind
    udp6   0       0       :::780          :::*                     1029/rpcbind

  VNC on the XCP-ng box is listening on 127.0.0.1:5900; maybe I should change this to listen on 0.0.0.0? Also, something about the protocol used: the browser's inspect > network view shows the ws:// protocol and a 101 Upgrade response; should it have been vnc? (Apologies if this is a random guess.) Attached is a screenshot of the browser console logs. FYI, 192.168.0.49:3000 is where my XOA VM is running, as a VM on XCP-ng (192.168.0.45), and the logs below are from opening the console of that same VM. [image: 1751716444788-selection_157.png]

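  For context, XO does not talk to vncterm on 5900 directly: the console is relayed by xapi over HTTPS and forwarded to the browser as a websocket, so the ws:// 101 Upgrade is expected. A minimal connectivity check under that assumption (the 192.168.0.45 host address comes from the post):

    # From the XOA VM: confirm the XCP-ng host's HTTPS API, which proxies consoles, is reachable
    curl -vk https://192.168.0.45/ -o /dev/null

    # On the host: confirm xapi is listening on 443 as well as 80
    ss -tlnp | grep -E ':(80|443)\b'
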
• HPE ML350 G11 - Fan at high speed - Agentless Management (AMS)
  Hardware · 8 posts · 0 votes · 831 views
  Last post: @flakpyro said in HPE ML350 G11 - Fan at high speed - Agentless Management (AMS): I am currently running "amsd-3.6.0-1867.26.xenserver8.x86_64.rpm" on my DL360 Gen 11 servers. It's been working well! It's also worth mentioning I had no luck with the newer amsd rpms, as they have dependency errors; I assume that is because the dom0 OS is CentOS 7 based and somewhat dated. I reached out to my HP rep asking whether the xenserver packages will continue to see updates, but never heard back. Having to deal with a daemon like this to keep fan speeds under control will definitely be taken into account in this year's server refreshes, considering Dell and Lenovo do not have this requirement. HPE AMS and hp-ams have historically also been required, together with smartctl, to get drive health data into iLO along with certain health and metrics data. Depending on where you live, you may have options like 2CRSi, Scan, Novatech, or other smaller brands, not just companies like HPE, Dell, and Lenovo.

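  For anyone trying the same package, installing it in dom0 is just a local rpm install; a sketch (the rpm filename is the one quoted above, and the service unit name is assumed to be amsd):

    # Install the HPE agentless management daemon in dom0; yum resolves any dependencies it can
    yum install ./amsd-3.6.0-1867.26.xenserver8.x86_64.rpm

    # Check that the daemon came up (unit name assumed)
    systemctl status amsd
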
• Installing Netdata Cloud on XCP Nodes
  Compute · 4 posts · 0 votes · 28 views
  Last post: @gduperrey Thank you for the reply. The use case is to centralize all of our reporting and monitoring for all of our devices. We are responsible for a number of XCP-ng hosts, various servers, etc., and all of them besides our XCP-ng nodes are monitored for real-time analytics by Netdata. We also use Zabbix for trending and long-term usage, but we would like to move the XCP-ng nodes into our Netdata Cloud rather than having them only in XOA/XCP-ng. I also understand this would be overridden by updates and that you can't support it; I mainly wanted to see if it would "break" anything you already have in place. As a feature request, a built-in way to report to a Netdata Cloud instance would be nice.

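  For context, the generic Netdata installer with a Cloud claim looks roughly like this (a sketch only; running it in dom0 is unsupported, may be overwritten by host updates, and the token and room values are placeholders):

    # Netdata kickstart installer, claiming the node into Netdata Cloud
    wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
    sh /tmp/netdata-kickstart.sh --claim-token <token> --claim-rooms <room-id>
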
• SR Garbage Collection running permanently
  Management · 32 posts · 0 votes · 1k views · started by Tristis Oris
  Last post:
      @ronan-a for example, for ID 3407716 in SMlog for Dom0 master : Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] Setting LVM_DEVICE to /dev/disk/by-scsid/3624a93704071cc78f82b4df4000113ee Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] Setting LVM_DEVICE to /dev/disk/by-scsid/3624a93704071cc78f82b4df4000113ee Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache created for VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89 Jul 9 09:44:45 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893214.155789358' Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] ['/sbin/vgs', '--readonly', 'VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache: will initialize now Jul 9 09:44:45 na-mut-xen-03 SM: [3407716] LVMCache: refreshing Jul 9 09:44:46 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893215.105961535' Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] Entering _checkMetadataVolume Jul 9 09:44:46 na-mut-xen-03 SM: [3407716] vdi_list_changed_blocks {'host_ref': 'OpaqueRef:8bb2bb09-1d56-cb47-6014-91c3eca78529', 'command': 'vdi_list_changed_blocks', 'args': ['OpaqueRef:3a4d6847-eb39-5a96-f2c2-ffcad3da4477'], 'device_config': {'SRmaster': 'true', 'port': '3260', 'SCSIid': '3624a93704071cc78f82b4df4000113ee', 'targetIQN': 'iqn.2010-06.com.purestorage:flasharray.2498b71d53b104d9', 'target': '10.20.0.21', 'multihomelist': '10.20.0.21:3260,10.20.0.22:3260,10.20.1.21:3260,10.20.1.22:3260'}, 'session_ref': '******', 'sr_ref': 'OpaqueRef:78f4538b-4c0b-531c-c523-fd92f8738fc5', 'sr_uuid': '5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89', 'vdi_ref': 'OpaqueRef:05e1857b-f39d-47da-9c06-d0d35ba97f00', 'vdi_location': '9fae176c-2c5f-4fd1-91fd-9bdb32533795', 'vdi_uuid': '9fae176c-2c5f-4fd1-91fd-9bdb32533795', 'subtask_of': 'DummyRef:|dea7f1a8-9116-8be0-6ba8-c3ecac77b442|VDI.list_changed_blocks', 'vdi_on_boot': 'persist', 'vdi_allow_caching': 'false'} Jul 9 09:44:47 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893215.990307086' Jul 9 09:44:47 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893216.783648117' Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] LVMCache: refreshing Jul 9 09:44:48 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893217.178993739' Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:44:48 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:49 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893218.029420281' Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] 
['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog'] Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog', '-c'] Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:49 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog => 1 / '' / '' Jul 9 09:44:50 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893219.268237038' Jul 9 09:44:50 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/9fae176c-2c5f-4fd1-91fd-9bdb32533795.cbtlog'] Jul 9 09:44:50 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:51 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893220.721225461' Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-9fae176c--2c5f--4fd1--91fd--9bdb32533795.cbtlog'] Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:51 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog Jul 9 09:44:51 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893220.765534349' Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] lock: opening lock file /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:44:52 na-mut-xen-03 SM: [3407716] LVMCache: refreshing Jul 9 09:44:53 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893222.038114535' Jul 9 09:44:53 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:44:53 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:54 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893222.921315355' Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-s'] Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:54 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / '' Jul 9 09:44:55 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893224.138058885' Jul 9 09:44:55 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:44:55 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:56 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.053222085' Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog'] Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 
9 09:44:56 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] DEBUG: Processing VDI 54bc2594-5228-411a-a4b2-cc1a7502d9a4 of size 107374182400 Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] LVMCache: refreshing Jul 9 09:44:56 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.076140954' Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:44:56 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:57 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893225.934427314' Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-b'] Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:44:57 na-mut-xen-03 SM: [3407716] fuser /dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / '' Jul 9 09:44:59 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893227.790063374' Jul 9 09:44:59 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:44:59 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:00 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893228.82935698' Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog'] Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] Size of bitmap: 1638400 Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] lock: acquired /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] LVMCache: refreshing Jul 9 09:45:00 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893229.259729804' Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89'] Jul 9 09:45:00 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:01 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893230.644242194' Jul 9 09:45:01 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] ['/usr/sbin/cbt-util', 'get', '-n', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog', '-c'] Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] fuser 
/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog => 1 / '' / '' Jul 9 09:45:02 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893231.488289557' Jul 9 09:45:02 na-mut-xen-03 SM: [3407716] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/54bc2594-5228-411a-a4b2-cc1a7502d9a4.cbtlog'] Jul 9 09:45:03 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:04 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893233.069490829' Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ['/sbin/dmsetup', 'status', 'VG_XenStorage--5301ae76--31fd--9ff0--7d4c--65c8b1ed8f89-54bc2594--5228--411a--a4b2--cc1a7502d9a4.cbtlog'] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] pread SUCCESS Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: released /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:45:04 na-mut-xen-03 fairlock[3949]: /run/fairlock/devicemapper sent '3407716 - 1893233.110222368' Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] Raising exception [460, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated]] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ***** generic exception: vdi_list_changed_blocks: EXCEPTION <class 'xs_errors.SROSError'>, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 113, in run Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] return self._run_locked(sr) Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] rv = self._run(sr, target) Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 333, in _run Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] return target.list_changed_blocks() Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/VDI.py", line 761, in list_changed_blocks Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] "Source and target VDI are unrelated") Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ***** LVHD over iSCSI: EXCEPTION <class 'xs_errors.SROSError'>, Failed to calculate changed blocks for given VDIs. 
[opterr=Source and target VDI are unrelated] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 392, in run Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] ret = cmd.run(sr) Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 113, in run Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] return self._run_locked(sr) Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 163, in _run_locked Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] rv = self._run(sr, target) Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/SRCommand.py", line 333, in _run Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] return target.list_changed_blocks() Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] File "/opt/xensource/sm/VDI.py", line 761, in list_changed_blocks Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] "Source and target VDI are unrelated") Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/9fae176c-2c5f-4fd1-91fd-9bdb32533795/cbtlog Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/54bc2594-5228-411a-a4b2-cc1a7502d9a4/cbtlog Jul 9 09:45:04 na-mut-xen-03 SM: [3407716] lock: closed /var/lock/sm/5301ae76-31fd-9ff0-7d4c-65c8b1ed8f89/sr
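
  The failure above is in vdi_list_changed_blocks (changed block tracking), which ends with "Source and target VDI are unrelated". A minimal way to inspect the CBT state of the two VDIs named in the log (a sketch using the standard xe CLI; the UUIDs are the ones from the log):

    # Check whether CBT is still enabled on the VDIs involved in the failed comparison
    xe vdi-param-get uuid=9fae176c-2c5f-4fd1-91fd-9bdb32533795 param-name=cbt-enabled
    xe vdi-param-get uuid=54bc2594-5228-411a-a4b2-cc1a7502d9a4 param-name=cbt-enabled
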
• Getting Error Creating VM Through REST
  REST API · Solved · 4 posts · 0 votes · 44 views
  Last post by olivierlambert: "Haha, glad it works now, that's what matters anyway."