@Swen Thanks bud! It did the trick!
I ran the interface modify commands on the master only, and it changed all hosts online, with guest VMs running and no downtime at all!
Hello all,
I've got it working just fine in all my lab tests so far, but there's one thing I couldn't find here or in any related post: using a dedicated storage network instead of the management network. Can it be modified? In this lab we have 2 hosts with 2 network cards each: one for management and external VM traffic, and a second that should be exclusive to storage, since it's much faster.
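I wonder if the remote-network parameter of xe vm-migrate would do it — it's supposed to pick which network carries the migration traffic. Just a sketch, untested on my side, and "storage-net" is only a placeholder for whatever the storage network is actually called:

xe network-list name-label=storage-net params=uuid
xe vm-migrate remote-master=x.x.x.x remote-username=root remote-password=xxxxx vm=vm-name remote-network=<storage-network-uuid> live=true

If anyone has confirmed this on a setup like ours, I'd love to hear it.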
@rjt said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
Do you have any shared storage somewhere on your LAN to use?
A three GB thumbdrive shared from another machine is big enough for this VM.
I have an NFS SR that's visible to all servers, but migrating through that storage gives me the same error.
I found what seems to be the correct xe command, but got the same error:
[root@srv-vm02 ~]# xe vm-migrate remote-master=172.16.254.2 remote-username=root remote-password=xxxxxx vm=VM_MEPCAFI host=srv-vm01 live=true vdi:75400759-8829-48a0-b4ee-d8dd56a0facc=652efd1c-e37e-2f33-4ed3-3deffb251ed6
Performing a Storage XenMotion migration. Your VM's VDIs will be migrated with the VM.
Will migrate to remote host: srv-vm01, using remote network: Pool-wide network associated with eth0. Here is the VDI mapping:
VDI 75400759-8829-48a0-b4ee-d8dd56a0facc -> SR 652efd1c-e37e-2f33-4ed3-3deffb251ed6
You tried to call a method with the incorrect number of parameters. The fully-qualified method name that you used, and the number of received and expected parameters are returned.
method: VM.migrate_send
expected: 6
received: 7
Don't really know what to do next...
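The only other thing I can think of is comparing the xapi versions across the pool, since the hosts are on different releases; something like this should print each host's version strings (standard xe, though I'm only guessing it's relevant here):

xe host-list params=name-label,software-version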
@andersonalipio said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
@olivierlambert said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
Can you try with Xen Orchestra? And/or
xe
CLI?
Would the command line be something like this?
xe vm-migrate remote-master=x.x.x.x remote-username=root remote-password=******* vm=vm-name live=true force=true
If yes, should it be executed from the master or the server where the VM lies?
I need to live migrate a VM from slave 1 (XS 7.0) to the upgraded master (XCP 8.0)
That's not it...
I'm trying to find the correct parameters, because in my case I need to migrate a VM from slave1, with its VDI on slave1's LOCAL SR, to the master, moving the VDI to the master's LOCAL SR.
I've found this, but I'm confused about which UUID and which SR I should use, the current ones or the new ones...
xe vm-migrate uuid=[vm's uuid?] host=master live=true host-uuid=[old or new host uuid?] vdi:72ce69ce-6162-4d9d-a82b-1020ba8eeaf0=62cb502e-ffc0-79c2-3b73-9888d9dad60b
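My current understanding of those parameters (unverified, so please correct me): uuid is the VM being moved, host/host-uuid point at the destination host, and each vdi: pair maps a source VDI UUID to a destination SR UUID. The UUIDs could be looked up like this, with <vm-name> and <destination-host> standing in for the real names:

xe vm-list name-label=<vm-name> params=uuid
xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid
xe sr-list host=<destination-host> params=uuid,name-label

and then the mapping would be vdi:<source-vdi-uuid>=<destination-sr-uuid>.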
@olivierlambert said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
Can you try with Xen Orchestra? And/or
xe
CLI?
Would the command line be something like this?
xe vm-migrate remote-master=x.x.x.x remote-username=root remote-password=******* vm=vm-name live=true force=true
If yes, should it be executed from the master or the server where the VM lies?
I need to live migrate a VM from slave 1 (XS 7.0) to the upgraded master (XCP 8.0)
@stormi said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
Logs from the server showing that error message and several lines before and after may help (after testing in XO).
The logs are as follows:
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170042 INET :::80|host.migrate_receive R:b064242009db|audit] Host.migrate_receive: host = '21a1ec6a-2c56-4724-970f-9fd10bbb6455 (srv-vm01)'; network = '57802815-7652-6932-4399-3dd3eeb75934'
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170042 INET :::80|host.migrate_receive R:b064242009db|xapi] Session.create trackid=3cc92cdcc50d6a9cb2742e3a39d5f326 pool=false uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=07817cf1dacc45f05ff61ae2cba31edc
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170043 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:8c3336258b4d created by task R:b064242009db
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|dispatcher] spawning a new thread to handle the current task (trackid=07817cf1dacc45f05ff61ae2cba31edc)
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|audit] VM.migrate_send: VM = '2ae968be-0cf1-e296-48e7-a63e46ce039c (VM_MEPCAFI)'
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|dummytaskhelper] task VM.assert_can_migrate D:0853f11da576 created (trackid=07817cf1dacc45f05ff61ae2cba31edc) by task R:422b7c3ce8b7
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|audit] VM.assert_can_migrate: VM = '2ae968be-0cf1-e296-48e7-a63e46ce039c (VM_MEPCAFI)'
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] This is an intra-pool migration
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|stunnel] stunnel start
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] stunnel pid: 22905 (cached = false) connected to 172.16.254.2:443
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=22905
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|stunnel] stunnel start
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] stunnel pid: 22913 (cached = false) connected to 172.16.254.2:443
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=22913
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|stunnel] stunnel start
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] stunnel pid: 22918 (cached = false) connected to 172.16.254.2:443
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=22918
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1156213 INET :::80||dummytaskhelper] task dispatch:event.from D:0cf95852ef63 created by task D:62ee87bd340d
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] Checking whether VM OpaqueRef:7fc6e9ef-8934-6d37-0cc9-27f03319c16a can run on host OpaqueRef:c5a91627-9b39-1a50-35fb-f90527d490bc
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] host srv-vm01; available_memory = 96405225472; memory_required = 2167406592
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] All fine, VM OpaqueRef:7fc6e9ef-8934-6d37-0cc9-27f03319c16a can run on host OpaqueRef:c5a91627-9b39-1a50-35fb-f90527d490bc!
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|audit] VM.assert_can_migrate_sender: VM = '2ae968be-0cf1-e296-48e7-a63e46ce039c (VM_MEPCAFI)'
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] Session.create trackid=8da662af51992b24914d07ef997127f1 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=07817cf1dacc45f05ff61ae2cba31edc
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170049 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:fa6579267792 created by task D:0853f11da576
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] stunnel pid: 21902 (cached = true) connected to 172.16.254.3:443
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=21902
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xmlrpc_client] stunnel pid: 21902 (cached = true) returned stunnel to cache
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] Session.destroy trackid=8da662af51992b24914d07ef997127f1
Dec 11 12:08:47 srv-vm01 xapi: [ warn|srv-vm01|1170045 |VM.assert_can_migrate D:0853f11da576|xapi] VM.assert_can_migrate_sender is not known by destination, assuming it can ignore this check.
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|audit] Reserve resources for VM OpaqueRef:7fc6e9ef-8934-6d37-0cc9-27f03319c16a on host OpaqueRef:c5a91627-9b39-1a50-35fb-f90527d490bc
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|xapi] Session.create trackid=7c9b755c10d64667754d3318bd756d6a pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=07817cf1dacc45f05ff61ae2cba31edc
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170051 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:ff08228ff92f created by task R:422b7c3ce8b7
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|xmlrpc_client] stunnel pid: 21902 (cached = true) connected to 172.16.254.3:443
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|xmlrpc_client] with_recorded_stunnelpid task_opt=OpaqueRef:422b7c3c-e8b7-4971-a4a2-5ad9238d156f s_pid=21902
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|xmlrpc_client] stunnel pid: 21902 (cached = true) returned stunnel to cache
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|xapi] Session.destroy trackid=7c9b755c10d64667754d3318bd756d6a
Dec 11 12:08:47 srv-vm01 xapi: [ warn|srv-vm01|1170045 |Async.VM.migrate_send R:422b7c3ce8b7|rbac_audit] cannot marshall arguments for the action VM.migrate_send: name and value list lengths don't match. str_names=[session_id,vm,dest,live,vdi_map,vif_map,options,vgpu_map,], xml_values=[S(OpaqueRef:023f9925-7153-4105-89aa-e6c190aea2f7),S(OpaqueRef:7fc6e9ef-8934-6d37-0cc9-27f03319c16a),{SM:S(http://172.16.254.2/services/SM?session_id=OpaqueRef:5e6a5567-25fd-40e8-86d8-1521847b009c);host:S(OpaqueRef:c5a91627-9b39-1a50-35fb-f90527d490bc);xenops:S(http://172.16.254.2/services/xenops?session_id=OpaqueRef:5e6a5567-25fd-40e8-86d8-1521847b009c);session_id:S(OpaqueRef:5e6a5567-25fd-40e8-86d8-1521847b009c);master:S(http://172.16.254.2/)},B(true),{OpaqueRef:b301e08f-b142-58ee-85f3-7b79e5becb2b:S(OpaqueRef:34331f5e-fe77-9b58-5a0f-22775aa7c75e)},{},{force:S(true)},]
Dec 11 12:08:47 srv-vm01 xapi: [error|srv-vm01|1170045 ||backtrace] Async.VM.migrate_send R:422b7c3ce8b7 failed with exception Server_error(MESSAGE_PARAMETER_COUNT_MISMATCH, [ VM.migrate_send; 6; 7 ])
Dec 11 12:08:47 srv-vm01 xapi: [error|srv-vm01|1170045 ||backtrace] Raised Server_error(MESSAGE_PARAMETER_COUNT_MISMATCH, [ VM.migrate_send; 6; 7 ])
Dec 11 12:08:47 srv-vm01 xapi: [error|srv-vm01|1170045 ||backtrace] 1/1 xapi @ srv-vm01 Raised at file (Thread 1170045 has no backtrace table. Was with_backtraces called?, line 0
Dec 11 12:08:47 srv-vm01 xapi: [error|srv-vm01|1170045 ||backtrace]
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170054 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:df15f6998d8f created by task D:331171945fdb
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170054 UNIX /var/lib/xcp/xapi|session.logout D:58fc8603783e|xapi] Session.destroy trackid=54686d2aa75068a16e7bbc9b346e53bf
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170055 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:521bf1375263 created by task D:331171945fdb
Dec 11 12:08:47 srv-vm01 xapi: [ info|srv-vm01|1170055 UNIX /var/lib/xcp/xapi|session.slave_login D:ea496baa7952|xapi] Session.create trackid=d4797774b02461bf58c8fb234fdb46eb pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170056 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:pool.get_all D:75990f7f155b created by task D:ea496baa7952
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1170057 UNIX /var/lib/xcp/xapi||dummytaskhelper] task dispatch:event.from D:18e39dd6f49a created by task D:331171945fdb
Dec 11 12:08:47 srv-vm01 xapi: [debug|srv-vm01|1161880 INET :::80||dummytaskhelper] task dispatch:event.from D:5f3c05ffc517 created by task D:62ee87bd340d
@olivierlambert Hi Olivier, sorry for the delay.
I'm getting the same error in XOA: MESSAGE_PARAMETER_COUNT_MISMATCH(VM.migrate_send, 6, 7)
I'm not sure how to do it from the command line. Can you help me with that?
@olivierlambert said in XS 7.0 to XCP-ng 8 live migration error VM.migrate_send with the incorrect number of parameters:
Can you try with Xen Orchestra? And/or
xe
CLI?
Hi Olivier,
I'm using XCP-ng Center 8.0.1 (build 8.0.1.26) 64-bit.
I left-click on the VM and choose Migrate to Server.
Hello all!
I'm having an issue after upgrading my pool master from XS 7.0 to XCP-ng 8 in a pool with 3 hosts.
I followed the instructions in https://xcp-ng.org/blog/2018/06/11/migrate-from-xenserver-to-xcp-ng/ and it worked as expected for the pool master, but when I needed to migrate running VMs from the second host to the master so I could upgrade that slave, it gave me this error message:
"You tried to call the method VM.migrate_send with the incorrect number of parameters 7. The method expect 6 parameter(s)."
Moving a VM that was shut down worked fine, but migrating a running VM didn't.
I searched around the net and found nothing like this.
Can anyone help?
Thanks in advance.
Best regards,
Anderson Alipio
After testing with XOA and seeing that an update had been issued, I updated my XO from source with
$ git checkout .
$ git pull --ff-only
$ yarn
$ yarn build
and then restarted Xen Orchestra (see the restart note below); it's working fine now.
Thanks a lot!
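P.S. For anyone else doing this: how you restart depends on how xo-server runs on your box, so the service name here is an assumption rather than something from the XO docs:

$ systemctl restart xo-server

If you launched xo-server by hand instead, just stop it and start it again the way you originally did (e.g. yarn start in the xo-server directory).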
Worked well on XOA stable (xo-server 5.42.1, xo-web 5.48.1).
I was just installing XOA to check this. It's about to finish, and I'll post the results here.
Hello all!
I've just installed XO from source (xo-server 5.50.1, xo-web 5.50.2) on CentOS 7 (fully updated) in my test environment, with the latest XCP-ng 8, and I don't know whether my installation has a problem or I'm doing something wrong.
Summing up: I created a self-service resource set for a group and gave it permission to use some templates, SRs, ISO SRs, and networks, with limits on max CPU, max RAM, max disk, etc.
When logged in as a user from that group, he can see the resources assigned to him, the existing VMs, and so on, but when he tries to create a new VM, he can see the template but can't select it: clicking it in the drop-down does nothing, so he can't proceed to the next step of the creation.
Also, no networks are shown, even though the network device is selected in the self-service settings.
I tried setting ACLs on the SR, as both viewer and operator, and no go. The only way it worked was to set an ACL on the pool, but then he can see all the pool's resources, hosts, and VMs instead of only his own.
Am I doing something wrong? I've followed all the tutorials and videos I could find and done exactly the same thing, but my results differ from everyone else's.
Best regards,
Anderson Alipio