XCP-ng

    jsajous26 (@jsajous26)

    Reputation: 2 · Profile views: 3 · Posts: 9 · Followers: 0 · Following: 0


    Best posts made by jsajous26

    • RE: XO-Lite no longer starts

      @yann
      This is not production.
      If my understanding is correct, after some digging, storage access to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

      [10:05 xcp-ng-poc-1 ~]# xe vm-list
      The server could not join the liveset because the HA daemon could not access the heartbeat disk.
      [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
      Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
      [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
      [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
      Executing xe-toolstack-restart
      done.
      [10:07 xcp-ng-poc-1 ~]#
      

      On the storage side:

      [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
      uuid ( RO)                  : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
                   host-uuid ( RO): 0219cb2e-46b8-4657-bfa4-c924b59e373a
                     sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
               device-config (MRO): SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
          currently-attached ( RO): false
      
      
      uuid ( RO)                  : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
                   host-uuid ( RO): cb9a2dc3-cc1d-4467-99eb-6896503b4e11
                     sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
               device-config (MRO): multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
          currently-attached ( RO): false
      
      
      [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
      Error code: SR_BACKEND_FAILURE_47
      Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
      [10:10 xcp-ng-poc-1 ~]#
      

      After that, XO-Lite started correctly again.
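      For completeness, the same recovery can be scripted by re-plugging every PBD of the SR once the iSCSI paths are back. This is a minimal sketch, assuming a standard `xe` CLI where `--minimal` prints comma-separated UUIDs; the SR UUID below is the one from this thread.

```shell
# Re-plug all PBDs of an SR after the underlying iSCSI LUN is reachable again.
replug_sr_pbds() {
  sr_uuid="$1"
  # `xe ... --minimal` prints a comma-separated list of UUIDs
  for pbd in $(xe pbd-list sr-uuid="$sr_uuid" --minimal | tr ',' ' '); do
    xe pbd-plug uuid="$pbd"
  done
}

# Only meaningful on an XCP-ng host where `xe` exists:
command -v xe >/dev/null 2>&1 && replug_sr_pbds 16ec6b11-6110-7a27-4d94-dfcc09f34d15 || true
```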

      posted in French (Français)
      jsajous26
    • XO-Lite no longer starts

      Hello,

      After updating the RPMs via "yum update" and rebooting the server, XO-Lite no longer seems to start. Nothing is listening on port 443.

      [10:02 xcp-ng-poc-1 ~]# netstat -anp
      Active Internet connections (servers and established)
      Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
      tcp        0      0 127.0.0.1:5900          0.0.0.0:*               LISTEN      1375/vncterm
      tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1205/rpcbind
      tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1219/sshd
      tcp        0      0 127.0.0.1:9500          0.0.0.0:*               LISTEN      1375/vncterm
      tcp        0      0 127.0.0.1:8125          0.0.0.0:*               LISTEN      2115/netdata
      tcp        0      0 172.17.1.10:60938       172.17.1.1:3260         ESTABLISHED 2667/iscsid
      tcp        0    272 10.99.153.192:22        10.99.138.165:49677     ESTABLISHED 12856/sshd: root@pt
      tcp        0      0 172.17.0.10:54548       172.17.0.1:3260         ESTABLISHED 2667/iscsid
      tcp6       0      0 :::111                  :::*                    LISTEN      1205/rpcbind
      tcp6       0      0 :::22                   :::*                    LISTEN      1219/sshd
      udp        0      0 127.0.0.1:8125          0.0.0.0:*                           2115/netdata
      udp        0      0 0.0.0.0:111             0.0.0.0:*                           1205/rpcbind
      udp        0      0 127.0.0.1:323           0.0.0.0:*                           1272/chronyd
      udp        0      0 0.0.0.0:940             0.0.0.0:*                           1205/rpcbind
      udp6       0      0 :::111                  :::*                                1205/rpcbind
      udp6       0      0 ::1:323                 :::*                                1272/chronyd
      udp6       0      0 :::940                  :::*                                1205/rpcbind
      Active UNIX domain sockets (servers and established)
      Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
      
      

      How can I restart the service?

      Thank you for your reply.
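      XO-Lite is served by xapi itself on port 443, so a missing 443 listener usually means the toolstack did not start. Below is a minimal first-check sketch, assuming a stock XCP-ng 8.x dom0 (service and log names may differ):

```shell
# Check whether anything listens on 443; if not, xapi is the first suspect.
port_443_listening() {
  ss -ltn 2>/dev/null | grep -q ':443 '
}

if ! port_443_listening; then
  # On the host itself one would then run:
  echo "  systemctl status xapi --no-pager    # is the toolstack up?"
  echo "  tail -n 50 /var/log/xensource.log   # any startup errors?"
  echo "  xe-toolstack-restart                # restart xapi and companions"
fi
```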

      posted in French (Français)
      jsajous26

    Latest posts made by jsajous26

    • RE: Feedback and experience using NetApp storage

      @olivierlambert

      I'm correcting the FIO test, which did not reflect reality. The throughput is actually about 5.5 Gbps.

      throughput-test-job: (groupid=0, jobs=4): err= 0: pid=2452: Fri Feb 20 10:02:44 2026
        read: IOPS=10.3k, BW=646MiB/s (677MB/s)(75.7GiB/120002msec)
          slat (nsec): min=0, max=230384k, avg=183401.87, stdev=705796.53
          clat (usec): min=1227, max=252507, avg=11943.41, stdev=5670.46
           lat (usec): min=1357, max=252514, avg=12126.82, stdev=5722.82
          clat percentiles (msec):
           |  1.00th=[    8],  5.00th=[    9], 10.00th=[   10], 20.00th=[   11],
           | 30.00th=[   11], 40.00th=[   12], 50.00th=[   12], 60.00th=[   12],
           | 70.00th=[   13], 80.00th=[   14], 90.00th=[   15], 95.00th=[   16],
           | 99.00th=[   19], 99.50th=[   20], 99.90th=[   23], 99.95th=[  241],
           | 99.99th=[  247]
         bw (  KiB/s): min=347392, max=848640, per=100.00%, avg=661471.87, stdev=15968.62, samples=956
         iops        : min= 5428, max=13260, avg=10335.50, stdev=249.51, samples=956
        write: IOPS=10.3k, BW=646MiB/s (677MB/s)(75.7GiB/120002msec); 0 zone resets
          slat (nsec): min=0, max=230979k, avg=199173.27, stdev=642430.39
          clat (usec): min=1057, max=257260, avg=12446.58, stdev=5587.92
           lat (usec): min=1349, max=257596, avg=12645.76, stdev=5633.93
          clat percentiles (msec):
           |  1.00th=[    9],  5.00th=[   10], 10.00th=[   11], 20.00th=[   11],
           | 30.00th=[   12], 40.00th=[   12], 50.00th=[   12], 60.00th=[   13],
           | 70.00th=[   13], 80.00th=[   14], 90.00th=[   16], 95.00th=[   17],
           | 99.00th=[   20], 99.50th=[   21], 99.90th=[   24], 99.95th=[   40],
           | 99.99th=[  247]
         bw (  KiB/s): min=339968, max=829184, per=100.00%, avg=661765.89, stdev=15966.89, samples=956
         iops        : min= 5312, max=12956, avg=10340.09, stdev=249.48, samples=956
        lat (msec)   : 2=0.01%, 4=0.01%, 10=10.97%, 20=88.57%, 50=0.40%
        lat (msec)   : 250=0.05%, 500=0.01%
        cpu          : usr=1.26%, sys=13.37%, ctx=2934238, majf=0, minf=51
        IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
           submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
           complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
           issued rwts: total=1239575,1240247,0,0 short=0,0,0,0 dropped=0,0,0,0
           latency   : target=0, window=0, percentile=100.00%, depth=64
      
      Run status group 0 (all jobs):
         READ: bw=646MiB/s (677MB/s), 646MiB/s-646MiB/s (677MB/s-677MB/s), io=75.7GiB (81.2GB), run=120002-120002msec
        WRITE: bw=646MiB/s (677MB/s), 646MiB/s-646MiB/s (677MB/s-677MB/s), io=75.7GiB (81.3GB), run=120002-120002msec
      
      Disk stats (read/write):
          dm-3: ios=1238448/1239080, merge=0/0, ticks=1874753/2464217, in_queue=4338970, util=99.99%, aggrios=2479429/2480865, aggrmerge=0/48, aggrticks=3199165/4352700, aggrin_queue=7551881, aggrutil=81.90%
        xvda: ios=2479429/2480865, merge=0/48, ticks=3199165/4352700, in_queue=7551881, util=81.90%
      
      posted in French (Français)
      jsajous26
    • Feedback and experience using NetApp storage

      Hello,

      We are considering and testing an infrastructure comprising:

      • two hypervisor servers, with NICs supporting 25 Gbps over SFP+ DAC for iSCSI storage, direct-attached (2 ports per hypervisor)
      • a NetApp AFF-A30 array

      After several tests, we reach a throughput of about 2-3 Gbps per VHD:

      --- Sequential read (1MB blocks) ---
      seq-read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
      ...
      fio-3.35
      Starting 4 processes
      seq-read: Laying out IO file (1 file / 1024MiB)
      Jobs: 4 (f=0): [f(4)][100.0%][r=1081MiB/s][r=1081 IOPS][eta 00m:00s]
      seq-read: (groupid=0, jobs=4): err= 0: pid=3279: Thu Jan 29 08:45:04 2026
        read: IOPS=1714, BW=1714MiB/s (1798MB/s)(100GiB/60001msec)
          slat (usec): min=174, max=274642, avg=2118.43, stdev=2254.11
          clat (usec): min=3, max=340205, avg=72275.65, stdev=39502.46
           lat (msec): min=2, max=342, avg=74.39, stdev=39.57
          clat percentiles (msec):
           |  1.00th=[   59],  5.00th=[   61], 10.00th=[   62], 20.00th=[   63],
           | 30.00th=[   64], 40.00th=[   65], 50.00th=[   65], 60.00th=[   66],
           | 70.00th=[   67], 80.00th=[   68], 90.00th=[   71], 95.00th=[   78],
           | 99.00th=[  296], 99.50th=[  300], 99.90th=[  338], 99.95th=[  338],
           | 99.99th=[  338]
         bw (  MiB/s): min=  881, max= 2044, per=100.00%, avg=1718.80, stdev=77.73, samples=476
         iops        : min=  880, max= 2044, avg=1718.15, stdev=77.74, samples=476
        lat (usec)   : 4=0.01%, 10=0.01%
        lat (msec)   : 4=0.01%, 10=0.01%, 20=0.02%, 50=0.05%, 100=96.65%
        lat (msec)   : 500=3.25%
        cpu          : usr=0.14%, sys=21.80%, ctx=554669, majf=0, minf=32823
        IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
           submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
           complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
           issued rwts: total=102868,0,0,0 short=0,0,0,0 dropped=0,0,0,0
           latency   : target=0, window=0, percentile=100.00%, depth=32
      
      Run status group 0 (all jobs):
         READ: bw=1714MiB/s (1798MB/s), 1714MiB/s-1714MiB/s (1798MB/s-1798MB/s), io=100GiB (108GB), run=60001-60001msec
      
      Disk stats (read/write):
          dm-6: ios=170272/127, merge=0/0, ticks=84737/61, in_queue=84798, util=80.03%, aggrios=661787/139, aggrmerge=2511/21, aggrticks=352428/91, aggrin_queue=352520, aggrutil=77.94%
        xvda: ios=661787/139, merge=2511/21, ticks=352428/91, in_queue=352520, util=77.94%
      
      --- Sequential write (1MB blocks) ---
      seq-write: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
      ...
      fio-3.35
      Starting 4 processes
      Jobs: 4 (f=4): [W(4)][100.0%][eta 00m:00s]
      seq-write: (groupid=0, jobs=4): err= 0: pid=3319: Thu Jan 29 08:46:05 2026
        write: IOPS=1140, BW=1141MiB/s (1196MB/s)(68.0GiB/61049msec); 0 zone resets
          slat (usec): min=220, max=8635, avg=1058.26, stdev=1250.23
          clat (usec): min=7, max=3092.6k, avg=108622.83, stdev=431088.98
           lat (usec): min=470, max=3095.8k, avg=109681.09, stdev=431047.23
          clat percentiles (msec):
           |  1.00th=[    9],  5.00th=[   10], 10.00th=[   17], 20.00th=[   24],
           | 30.00th=[   26], 40.00th=[   29], 50.00th=[   31], 60.00th=[   33],
           | 70.00th=[   36], 80.00th=[   41], 90.00th=[   66], 95.00th=[   87],
           | 99.00th=[ 2635], 99.50th=[ 2735], 99.90th=[ 3037], 99.95th=[ 3071],
           | 99.99th=[ 3104]
         bw (  MiB/s): min=   54, max= 5504, per=100.00%, avg=2532.47, stdev=352.22, samples=220
         iops        : min=   54, max= 5504, avg=2532.45, stdev=352.21, samples=220
        lat (usec)   : 10=0.01%, 20=0.01%
        lat (msec)   : 10=6.40%, 20=8.05%, 50=72.33%, 100=9.71%, 250=0.48%
        lat (msec)   : 2000=0.01%, >=2000=3.03%
        cpu          : usr=1.55%, sys=26.90%, ctx=636361, majf=0, minf=48
        IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
           submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
           complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
           issued rwts: total=0,69636,0,0 short=0,0,0,0 dropped=0,0,0,0
           latency   : target=0, window=0, percentile=100.00%, depth=32
      
      Run status group 0 (all jobs):
        WRITE: bw=1141MiB/s (1196MB/s), 1141MiB/s-1141MiB/s (1196MB/s-1196MB/s), io=68.0GiB (73.0GB), run=61049-61049msec
      
      Disk stats (read/write):
          dm-6: ios=145/746281, merge=0/0, ticks=50/505091, in_queue=505141, util=67.51%, aggrios=163/1015432, aggrmerge=0/28896, aggrticks=58/597902, aggrin_queue=597960, aggrutil=56.54%
        xvda: ios=163/1015432, merge=0/28896, ticks=58/597902, in_queue=597960, util=56.54%
      

      Is this a limitation on the hypervisor (Xen) side?

      How would you envision the architecture with one or more NetApp arrays?

      Thank you for your reply,
      Regards.
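      When comparing fio's MiB/s figures with the 25 Gbps links, the conversion is 1 MiB/s = 8 × 1.048576 Mbit/s; applied here, purely as illustrative arithmetic, to the 1714 MiB/s sequential-read result above:

```shell
# Convert MiB/s to Gbps (1 MiB/s = 8.388608 Mbit/s).
mibs=1714
awk -v m="$mibs" 'BEGIN { printf "%.1f Gbps\n", m * 8 * 1.048576 / 1000 }'
# prints "14.4 Gbps"
```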

      posted in French (Français)
      jsajous26
    • RE: Execute pre-freeze and post-thaw

      @olivierlambert said in Execute pre-freeze and post-thaw:

      You did create this thread; I think you would know the purpose of this discussion.

      Have you asked Veeam for a solution? It seems to be more of a Veeam-related question after all. I have no idea what "replication is not possible for this Oracle point via Veeam" means. Remember, we aren't from Veeam here, so you should explain more precisely what you expect from us, since it seems to be a Veeam limitation.

      The problem is not Veeam, but the inability to trigger pre-freeze and post-thaw (quiesce) scripts on a VM before and after a snapshot on XCP-ng.

      posted in Backup
      jsajous26
    • RE: Execute pre-freeze and post-thaw

      Could this feature be integrated into XCP-ng?

      We are looking for a new virtualization solution.
      Oracle is a bottleneck for backups.

      We tested the backup with the Veeam agent, which appears to be successful.
      However, replication is not possible for this Oracle point via Veeam.

      posted in Backup
      jsajous26
    • RE: Execute pre-freeze and post-thaw

      @florent

      Using webhooks, if I understand correctly, requires installing a Node.js server. This server would then call the commands to put Oracle into backup mode.

      This solution is cumbersome because it would require maintaining Node.js.

      Is there no way to directly use Guest Tools to call a script before and after snapshot, like with VMware or QEMU? If not, is this planned?

      posted in Backup
      jsajous26
    • RE: Execute pre-freeze and post-thaw

      I don't quite understand the use of webhooks.
      Do you have a more specific example?

      Veeam does not use XOA for backups; it talks to XCP-ng directly.

      posted in Backup
      jsajous26
    • Execute pre-freeze and post-thaw

      Hello,

      I'm working on testing the XCP-ng solution (PoC).

      Using the latest Veeam backup plugin, I want to run a script on the VM to freeze the Oracle database with ArchiveLog (BEGIN BACKUP and END BACKUP).
      I can't find a way to run /usr/sbin/pre-freeze-script and /usr/sbin/post-thaw-script via Guest Tools.

      Do you have a solution, or a feature that would allow me to run these two scripts?

      Thanks.
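      For illustration, here is what such a hook could look like if the guest tools ever invoked it, mirroring the VMware convention named above. The SID, authentication, and the /tmp path are placeholders, and XCP-ng does not currently call this script:

```shell
# Hypothetical pre-freeze-script content, written to /tmp for illustration.
cat > /tmp/pre-freeze-script <<'EOF'
#!/bin/sh
# Put Oracle into hot-backup mode before the snapshot is taken.
export ORACLE_SID=ORCL          # placeholder SID
sqlplus -S / as sysdba <<'SQL'
ALTER DATABASE BEGIN BACKUP;
EXIT;
SQL
EOF
chmod +x /tmp/pre-freeze-script
# The matching post-thaw-script would run ALTER DATABASE END BACKUP the same way.
```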

      posted in Backup
      jsajous26