XCP-ng
    Pilow
    Topics 30 · Posts 381

    Posts

    • RE: XOA - Memory Usage

      @flakpyro yup, still aggressive on memory consumption.
      I have a task that reboots XOA & the XO proxies every two days to mitigate it.

      8e3474d6-3c71-4b21-bf1c-293fe13c61b8-image.jpeg

      since 6.3 / 6.3.1 it seems more aggressive on the ramp-up (I updated just yesterday; I was still on 6.1.2)

      posted in Xen Orchestra
      P
      Pilow
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      I only modified @florent's code, not the second one you gave me

      posted in Backup
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      @pierrebrunet with your modification on the XO proxies, it seems to be working now!

      20f0ecca-d770-4941-b16e-f14fb7b13781-image.jpeg

      I should have thought about the proxy doing the actual job...

      posted in Backup
    • RE: Backup retention policy and key backup interval

      @Bastien-Nollet why do I have weekly or monthly retention points that are not fulls?
      Couldn't these retention points be merged into fulls when they are tagged?

      Keeping a monthly incremental implies also keeping the previous incrementals and the base full.

      posted in Backup
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      @pierrebrunet said:

      • const scheduleId = this.#props.schedule?.id ?? this.#props.scheduleId

      done:
      a430761c-84dc-4931-9567-3e26297b6c0d-image.jpeg
      but still the same error

                "result": {
                  "message": "Cannot read properties of undefined (reading 'id')",
                  "name": "TypeError",
                  "stack": "TypeError: Cannot read properties of undefined (reading 'id')\n    at IncrementalRemoteWriter._prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:93:39)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:32\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:108:24\n    at Zone.run (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/node-zone/index.js:80:23)\n    at Task.run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:106:23)\n    at IncrementalRemoteWriter.prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:17)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:48\n    at callWriter (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalRemoteVmBackupRunner._callWriters (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalRemoteVmBackupRunner._run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:18)"
                }
              }
            ],
            "end": 1775118782797,
            "result": {
              "message": "Cannot read properties of undefined (reading 'id')",
              "name": "TypeError",
              "stack": "TypeError: Cannot read properties of undefined (reading 'id')\n    at IncrementalRemoteWriter._prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:93:39)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:32\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:108:24\n    at Zone.run (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/node-zone/index.js:80:23)\n    at Task.run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:106:23)\n    at IncrementalRemoteWriter.prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:17)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:48\n    at callWriter (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalRemoteVmBackupRunner._callWriters (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalRemoteVmBackupRunner._run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:18)"
            }
          },
      
      
      posted in Backup
    • RE: Distibuted backups doesn't clean up the deltas in the BRs

      @ph7 try setting RETENTION to 1 in the schedule, since you are using the LTR parameters

      posted in Backup
    • RE: VDI not showing in XO 5 from Source.

      @wgomes yup, thanks for bumping this topic; still having the problem here too

      posted in Management
    • RE: Backup retention policy and key backup interval

      @abudef I've been there, here is my take on the topic.

      In terms of the schedule's RETENTION: if you set it to 21 and don't use FULL BACKUP INTERVAL in the job, you get one full and 20 subsequent deltas. On the 22nd point, a new delta is made and the oldest delta (the one just next to the full) is merged into the full. It's like a forever incremental.
      Set FULL BACKUP INTERVAL to 7 and, if the job runs once a day, it will make a full every 7 days.

      The other way to do it: as you said, two schedules, with a forced full on Sundays.

      The difference between the two? With two schedules you choose exactly which day the full occurs.
      With the first way, the full is "7 days after the chain begins".

      Do not mix these two ways of configuring, or you will get unnecessary fulls in your chain.

      In terms of LTR, @florent and @bastien-nollet recommended this to me:
      if you use LTR (let's say 14 Daily, 4 Weekly, 2 Monthly), keep the SCHEDULE retention at 1;
      the LTR parameters will keep the retention points as intended.
      Try not to mix it with FULL BACKUP INTERVAL.

      LTR is still early: I would like the WEEKLY/MONTHLY/ANNUALLY points to always be fulls, or specifically merged fulls, and this is not the case... 😕
      It is also not clear, since you can't choose it, which day the WEEKLY lands on (Sunday? Monday?) or the MONTHLY (the 1st? the last day of the month?).

      Trial & error mode activated.
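      The "forever incremental" behaviour above can be sketched with a tiny simulation. This is illustrative only (not XO's actual code, and the names are made up): each run adds a delta, and once the chain exceeds RETENTION, the oldest delta is merged into the full.

      ```javascript
      // Hypothetical sketch of a schedule with RETENTION = 21 and no FULL BACKUP INTERVAL.
      function simulateChain(runs, retention) {
        const chain = [] // oldest first; chain[0] is the full
        for (let i = 1; i <= runs; i++) {
          if (chain.length === 0) {
            chain.push('full') // the first run is always a full
          } else {
            chain.push('delta') // every later run is a delta
            if (chain.length > retention) {
              // the oldest delta (the one just next to the full) is merged into the full
              chain.splice(1, 1)
            }
          }
        }
        return chain
      }

      const chain = simulateChain(25, 21)
      console.log(chain.length) // capped at 21 points
      console.log(chain.filter(p => p === 'full').length) // always exactly one full
      ```

      However many runs happen, the chain stays at one full plus (retention - 1) deltas, which is why a long chain never gets a fresh full without FULL BACKUP INTERVAL or a second schedule.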

      posted in Backup
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      I tried both a manual run and relaunching the schedule (to see if the schedule's context was needed to get its id), but to no avail 😕

      schedule or schedule.id seems to be undefined

      posted in Backup
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      @florent I made the modification and restarted xo-server,

      but I still have the problem

      [22:13 01] xoa:_vmRunners$ cat _AbstractRemote.mjs|grep -B5 -A 5 schedule
        constructor({
          config,
          job,
          healthCheckSr,
          remoteAdapters,
          schedule,
          settings,
          sourceRemoteAdapter,
          throttleGenerator,
          throttleStream,
          vmUuid,
      --
          super()
          this.config = config
          this.job = job
          this.remoteAdapters = remoteAdapters
          this._settings = settings
          this.scheduleId = schedule.id
          this.timestamp = undefined
      
          this._healthCheckSr = healthCheckSr
          this._sourceRemoteAdapter = sourceRemoteAdapter
          this._throttleGenerator = throttleGenerator
      --
                adapters: remoteAdapters,
                BackupWriter,
                config,
                healthCheckSr,
                job,
                schedule,
                vmUuid,
                settings,
              })
            )
          } else {
      --
                new BackupWriter({
                  adapter,
                  config,
                  healthCheckSr,
                  job,
                  schedule,
                  vmUuid,
                  remoteId,
                  settings: targetSettings,
                })
              )
      [22:13 01] xoa:_vmRunners$
      
      

      Do I need to modify anything else? I noticed only 2 modified lines in #9667
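      The one-line fix quoted earlier comes down to guarding against an undefined `schedule` with optional chaining and nullish coalescing. A minimal repro, with made-up prop names (assuming, as in this thread, that only `scheduleId` is available when the job runs through a proxy):

      ```javascript
      // Unguarded access: throws TypeError when props.schedule is undefined,
      // exactly like "Cannot read properties of undefined (reading 'id')".
      function unguarded(props) {
        return props.schedule.id
      }

      // Guarded access, same shape as the quoted fix:
      // const scheduleId = this.#props.schedule?.id ?? this.#props.scheduleId
      function guarded(props) {
        return props.schedule?.id ?? props.scheduleId
      }

      const props = { scheduleId: 'sched-123' } // no `schedule` object
      try {
        unguarded(props)
      } catch (e) {
        console.log(e.constructor.name) // "TypeError"
      }
      console.log(guarded(props)) // "sched-123"
      ```

      The grep output above shows `this.scheduleId = schedule.id` in the `_AbstractRemote.mjs` constructor with no such guard, which would explain the crash whenever that runner is constructed without a full `schedule` object.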

      posted in Backup
    • RE: VM Migration | PIF is not attached

      We had the same thing on a pool that needed 97 updates.
      The RPU was going great, but we couldn't migrate VMs post-upgrade.

      It is a pool of 2 hosts; the second host still showed 97 patches to apply, even though the RPU was finished.

      We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.

      posted in Compute
    • RE: Mirror backup broken since XO 6.3.0 release, "Error: Cannot read properties of undefined (reading 'id')"

      @Danp I just upgraded XOA from 6.1.2 to 6.3.1;
      same error

              {
                "data": {
                  "id": "bc54f9a7-ca66-4c7c-9c35-c37a715af458",
                  "type": "remote"
                },
                "id": "1775058349205",
                "message": "export",
                "start": 1775058349205,
                "status": "failure",
                "end": 1775058349205,
                "result": {
                  "message": "Cannot read properties of undefined (reading 'id')",
                  "name": "TypeError",
                  "stack": "TypeError: Cannot read properties of undefined (reading 'id')\n    at IncrementalRemoteWriter._prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:93:39)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:32\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:108:24\n    at Zone.run (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/node-zone/index.js:80:23)\n    at Task.run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:106:23)\n    at IncrementalRemoteWriter.prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:17)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:48\n    at callWriter (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalRemoteVmBackupRunner._callWriters (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalRemoteVmBackupRunner._run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:18)"
                }
              }
            ],
            "end": 1775058349208,
            "result": {
              "message": "Cannot read properties of undefined (reading 'id')",
              "name": "TypeError",
              "stack": "TypeError: Cannot read properties of undefined (reading 'id')\n    at IncrementalRemoteWriter._prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:93:39)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:32\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:108:24\n    at Zone.run (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/node-zone/index.js:80:23)\n    at Task.run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/Task.mjs:106:23)\n    at IncrementalRemoteWriter.prepare (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_writers/IncrementalRemoteWriter.mjs:87:17)\n    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:48\n    at callWriter (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:33:15)\n    at IncrementalRemoteVmBackupRunner._callWriters (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_Abstract.mjs:52:14)\n    at IncrementalRemoteVmBackupRunner._run (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalRemote.mjs:92:18)"
            }
          },
      
      posted in Backup
    • RE: Loss of connection during an action BUG

      @User-cxs maybe an IP conflict, or the VM was taking the master's IP...
      good luck with the rest! 🙂

      posted in Xen Orchestra
    • fell back to full and cannot delete snapshot

      @florent we also have a case with a combo of "fell back to a full" and "can't delete snapshot data", and it indeed does a full,

      but not for all the VDIs of the VM: it makes a full of only one.

      This occurs on VMs with 200/300 GB+ VDIs... imported from VMware VMs.

      What would you recommend?

      It is indeed iSCSI + CBT on the SR storage,
      and iSCSI inside a MinIO VM, used as a remote

      b59d069c-88d5-479e-ac33-285b32aaa412-image.jpeg

      error
      {"code":"SR_BACKEND_FAILURE_202","params":["","General backend error [opterr=Command ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-380c841c-77a3-40a2-24bf-714ac149fae4/VHD-8d5d80dc-17af-4e10-97aa-45eb3b5fb63a'] failed 
      (WARNING: Not using device /dev/disk/by-id/scsi-3600c0ff000fe91f97013b86801000000 for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sdc for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sdd for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sde for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      Failed to find logical volume \"VG_XenStorage-380c841c-77a3-40a2-24bf-714ac149fae4/VHD-8d5d80dc-17af-4e10-97aa-45eb3b5fb63a\"): Input/output error]",""],"task":{"uuid":"260097cd-4ff7-ac73-2a36-1a7a5b8d7963","name_label":"Async.VDI.data_destroy","name_description":"","allowed_operations":[],"current_operations":{},"created":"20260330T14:35:14Z","finished":"20260330T14:35:16Z","status":"failure","resident_on":"OpaqueRef:ffaaef79-ffa0-4c03-cb5e-285db6b8dce0","progress":1,"type":"<none/>","result":"","error_info":["SR_BACKEND_FAILURE_202","","General backend error [opterr=Command ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-380c841c-77a3-40a2-24bf-714ac149fae4/VHD-8d5d80dc-17af-4e10-97aa-45eb3b5fb63a'] failed (WARNING: Not using device /dev/disk/by-id/scsi-3600c0ff000fe91f97013b86801000000 for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sdc for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sdd for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: Not using device /dev/disk/by-scsid/3600c0ff000fe91f97013b86801000000/sde for PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      WARNING: PV ciJKf6-qmAl-3ef6-U3P4-MtTv-d0d2-93Xkhv prefers device /dev/disk/by-id/dm-name-3600c0ff000fe91f97013b86801000000 because device is used by LV.\n 
      Failed to find logical volume \"VG_XenStorage-380c841c-77a3-40a2-24bf-714ac149fae4/VHD-8d5d80dc-17af-4e10-97aa-45eb3b5fb63a\"): Input/output error]",""],"other_config":{},"subtask_of":"OpaqueRef:NULL","subtasks":[],"backtrace":"(((process xapi)(filename ocaml/xapi/xapi_vdi.ml)(line 1058))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 229))((process xapi)(filename ocaml/xapi/rbac.ml)(line 239))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"}}
      vdiRef
      "OpaqueRef:e07a0fdf-15cb-d8d2-bf4b-58433e250644"
      
      posted in Backup
    • RE: Backing up from Replica triggers full backup

      @florent we also have a case with a combo of "fell back to a full" and "can't delete snapshot data", and it indeed does a full,

      but not for all the VDIs of the VM: it makes a full of only one.

      This occurs on VMs with 200/300 GB+ VDIs... imported from VMware VMs.

      What would you recommend?

      posted in Backup
    • RE: Loss of connection during an action BUG

      @User-cxs in that case, with intermittent cuts at the network level (take a closer look at the routing between those 2 subnets; I assumed they were /24s), you can get that erratic behaviour where everything disappears and eventually comes back.

      Monitor a ping from the XOA to the hosts, to see if you get packet loss from time to time.
      When the hosts disappear, try to ping them from the XOA...

      In my opinion, it's a network problem somewhere...

      posted in Xen Orchestra
    • RE: backup mail report says INTERRUPTED but it's not ?

      @MajorP93 what version of XOA are you on?
      I'm still on 6.1.2 because of other small bugs in the newer versions

      posted in Backup
    • RE: Loss of connection during an action BUG

      @User-cxs do the hosts and the XOA all have an IP in the same subnet, and are ETH0/ETH1 in the same network (VLAN)?

      As long as the XOA can ping the hosts, everything should be OK

      posted in Xen Orchestra
    • RE: Loss of connection during an action BUG

      @User-cxs at the very bottom here, in French: https://steeveschwab.wordpress.com/2018/02/05/howto-configuration-de-xenserver-3eme-partie/

      IV. Some useful commands when working with a pool
      Here is a non-exhaustive list of commands you will almost certainly need to administer XenServer. They should preferably be run from the master.

      NB: most XenServer commands start with xe followed by a command name. If you know the beginning of a command name but not its parameters, type xe <command> then press Tab twice to see the parameters you can pass.

      Disable HA: xe pool-ha-disable
      Re-enable HA: xe pool-ha-enable
      List the hosts of a pool: xe host-list (shows the UUID, name-label and description of all hosts)
      Find the UUID of a specific host: xe host-list name-label="the host name"
      Evacuate the VMs from a host: xe host-evacuate host="the host name"
      Promote a slave to master: xe pool-designate-new-master host-uuid="UUID of the host"
      Eject a host from a pool: xe pool-eject host-uuid="UUID of the host"
      (/!\ the VMs present on the ejected host are lost, its configuration is reset, and the host reboots)

      Remove a host that "no longer exists" from a pool: xe host-forget uuid="UUID of the host" --force
      If the pool master is no longer reachable:
      switch to emergency mode: xe pool-emergency-transition-to-master
      reconnect the members: xe pool-recover-slaves
      Join an existing pool: xe pool-join master-address="FQDN or IP of the master" master-username="admin account" master-password="the associated password"
      That wraps up this chapter on pooling XenServer hosts. In the next article I will cover two other important topics: backing up your hosts and updating a pool with the Rolling Pool Upgrade wizard.

      posted in Xen Orchestra
    • RE: Loss of connection during an action BUG

      @User-cxs no, if your master goes down, everything disappears; that's how it is...
      there are methods to transfer the master role to another node afterwards

      posted in Xen Orchestra