Posts made by Anonabhar
-
RE: XCP-ng 8.3 public alpha 🚀
Updated my HP DL360 G7 home lab with no problem at all.. Everything seems to work OK.. 8-)
-
RE: XCP-ng 8.3 public alpha 🚀
@olivierlambert Well.. I have been doing the occasional "yum update" with no new problems. Everything is working great, but I still have that re-scanning of the FS every 30 seconds.. Not impacting anything but just a little weird...
-
RE: Having trouble configuring transport-nagios after latest update
@ilidza I don't use Nagios reporting myself, but I noticed the following in my /var/log/orchestra.log after the last git pull. You might want to see if there is anything in there that looks similar:
2022-12-14T00:00:35.623Z xo:server:handleBackupLog WARN sendToNagios: TypeError: app.sendPassiveCheck is not a function
    at sendToNagios (file:///root/XO/xen-orchestra/packages/xo-server/src/_handleBackupLog.mjs:21:15)
    at handleBackupLog (file:///root/XO/xen-orchestra/packages/xo-server/src/_handleBackupLog.mjs:69:9)
    at file:///root/XO/xen-orchestra/packages/xo-server/src/xo-mixins/backups-ng/index.mjs:317:17
    at ChildProcess.<anonymous> (/root/XO/xen-orchestra/@xen-orchestra/backups/runBackupWorker.js:27:11)
    at ChildProcess.emit (node:events:513:28)
    at ChildProcess.patchedEmit [as emit] (/root/XO/xen-orchestra/@xen-orchestra/log/configure.js:135:17)
    at emit (node:internal/child_process:937:14)
    at processTicksAndRejections (node:internal/process/task_queues:83:21)
-
RE: A lot of not found and error 127 in yarn build
@bern I just built mine on Debian 11 a few days ago...
Make sure sudo is installed and do:
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
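After that, roughly the rest of what I did (this is just my own rough recipe, not the official docs; the build-essential package and the repo URL are the usual ones, adjust as needed):
sudo apt-get install -y nodejs git build-essential   # error 127 in yarn build usually means a build tool is missing
sudo npm install -g yarn                             # or install yarn however you prefer
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra
yarn && yarn build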
-
RE: XCP-ng 8.3 public alpha 🚀
@ronan-a Sure.. I would be happy to... I just had to append a .txt to the end of the file in order to upload it.. Please remove .txt and decompress
-
RE: XCP-ng 8.3 public alpha 🚀
Hi Everyone,
I was wondering if this is a bug or if it is something specific to my setup. But.. I just noticed that a 'rescan' seems to be happening on my lab server every 30 seconds..
I noticed this because I was making a new template for XOCE and was deleting snapshots (watching /var/log/SMlog to wait until the coalescing was complete), and that is when I spotted it.
I will include the log file for viewing.. The log file is trimmed from 0730 to the present so it would fit within the posting limits, but this was happening from the beginning of the rotated logfile..
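If anyone wants to see the same thing, I was just watching the log like this (the grep is only a rough filter for the scan-related entries):
tail -f /var/log/SMlog | grep -i scan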
~Peg
-
RE: XCP-ng 8.3 public alpha 🚀
Oh! Is this a new feature in XCP-ng 8.3????
If so... YEA!!!!
-
RE: XCP-ng 8.3 public alpha 🚀
I was just wondering... Is it just my installation or is there no XenTools ISO in the dropdown for virtual disks?
EDIT: Never mind.. It's an XCP-ng Center thing.. I can mount the ISO from XO (it does not show up in XCP-ng Center in the dropdown list, or even when it is mounted)..
-
RE: XCP-ng 8.3 public alpha 🚀
@stormi Yea.. I thought it was weird as well, but I was happy to see it work.. Here is a screenshot of the XCP-ng screen.. I haven't upgraded Peg02 yet (as I have to get more disks in there tonight in order to migrate things around), but notice the version number on Peg03.
-
RE: XCP-ng 8.3 public alpha 🚀
The good news is that I eventually got 8.3 installed and it is looking good.. I had a few problems, but this was mostly down to something weird with the partition layout.
Every time I installed 8.3 (fresh install) onto a machine, it would refuse to create any "Local Storage". I went around in circles many times and gave up, installing 8.2 (fresh install) instead, and got the same result. What I eventually had to do was "ALT+Right Arrow" until I got to a # prompt, do an lvdisplay to find the VG, and then do an lvremove on the VG plus a wipefs -a -f /dev/sda3 to clear EVERYTHING out. It was somehow picking up that the drives I was using had previously been part of a ZFS pool (btw.. this is really weird, as these drives were used in a different machine, on a different RAID system, etc.. I have no idea how it figured out there was a ZFS pool, but whatever).
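For anyone who hits the same thing, this is roughly what I ran from that # prompt. The /dev/sda3 partition is from my box, and the VG name will differ (on XCP-ng installs they are usually named VG_XenStorage-<something>), so check lvdisplay before removing anything:
lvdisplay                      # note the name of the leftover volume group
lvremove <VG name>             # removes every LV in that VG
wipefs -a -f /dev/sda3         # clears all old LVM/RAID/ZFS signatures from the partition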
Once I did the above, the fresh install (8.2) worked just fine. I joined it to my existing pool, promoted it to pool master, and then did an upgrade to 8.3.
After the upgrade, local storage showed as offline for about 5 minutes, but then it magically kicked itself into life. I am assuming it was doing something in the background as the new pool master.
As a point of note, it looks like XCP-ng Center V20.11.0.3 is still compatible, as I am using that as well as XO to manage the pool with no issues (so far).
-
RE: XCP-ng 8.3 public alpha 🚀
Ohh.. Cool.. If I upgrade my lab machine by ISO, will subsequent updates be done by yum, or will I have to update by ISO again (and again)?
-
RE: VM's going really slow after 3 - 4 weeks
@Berrick Have your virtual machines started to use swap space?
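For example, inside a Linux guest a quick check is something like this (just a rough pointer, not a full diagnosis):
free -m        # the swap line shows how much swap is in use
vmstat 1 5     # the si/so columns show swap-in/swap-out activity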
-
RE: VM, missing disk
@MRisberg Just wondering.. Where is the lock file? I might try that in the future for myself..
-
RE: Snapshot chain too long in NFS SR
@alejandro-anv I have had problems like this on my NFS and iSCSI SRs previously.
One thing that might help is to shut down the VMs before doing a re-scan. If my memory serves me, it does a different kind of coalesce (online vs. offline), and the offline one has a better chance of success.
Also, be aware that there is about a 5 minute delay between the time you re-scan and the time it actually starts to do work on the drives.
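As a side note, you can also kick the re-scan off from the pool master CLI if that is easier; something like this (substitute your own SR UUID):
xe sr-list                     # find the UUID of the SR in question
xe sr-scan uuid=<SR UUID>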
When I have a problem like this, I normally ssh into the pool master and tail the logfile to watch it work, i.e.:
tail -f /var/log/SMlog | grep SMGC
It's boring.. But it gives me a bit of comfort between prayers 8-)
-
RE: ENAMETOOLONG in orchestra.log during merge
@julien-f Hi Julien
Oh ok.. I think I understand now. It looks like the UUIDs referenced inside the ____ files don't exist and must be leftovers from previous or failed merges. Some of them are 9+ months old... Is it safe to just delete these files, or is there any way to tell XO to delete files that have no corresponding UUID (like an auto-cleanup)?
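In case it helps anyone else, this is roughly how I went looking at them. It is the same .queue/clean-vm path that shows up in my error log, with <remote-uuid> standing in for the mount's UUID, and the -mtime cutoff is arbitrary:
cd /run/xo-server/mounts/<remote-uuid>/xo-vm-backups/.queue/clean-vm
ls -la                              # the stale entries are the long runs of underscores
find . -name '_*' -mtime +90 -ls    # list anything older than ~3 months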
-
ENAMETOOLONG in orchestra.log during merge
Hi All,
After upgrading my XO from sources, I am seeing a lot of these errors in the /var/log/orchestra.log
Are these something to worry about? Just concerned that I am storing corrupted backups at this point.
2022-02-03T21:42:02.699Z xo:backups:mergeWorker INFO merging /xo-vm-backups/360d038b-1ed2-fabf-86af-61b5147fa3c0/vdis/c1f16ecc-6233-4843-8d43-20e503880cdd/9d7d0f22-e6c7-4841-870b-a5029f8f5e51/20220126T193321Z.vhd: 269/269 2022-02-03T21:42:04.208Z xo:backups:mergeWorker FATAL ENAMETOOLONG: name too long, rename '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z' -> '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z' { error: [Error: ENAMETOOLONG: name too long, rename '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z' -> '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z'] { errno: -36, code: 'ENAMETOOLONG', syscall: 'rename', path: '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z', dest: '/run/xo-server/mounts/41561065-7ed0-4956-8b9f-33d29d72d7e0/xo-vm-backups/.queue/clean-vm/________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________20211108T200210Z' } }
EDIT: Sorry.. I forgot to add that I am using commit 15d06, xo-server 5.87.0 and xo-web 5.92.0.
Thanks,
~Peg
-
RE: Custom script execution after backup job
@olivierlambert Ah.. I'm being thick.. I didn't have the plugin loaded, so I was wracking my brains trying to find out where the hook would have been called.. Thanks for your help, and back to coding! 8-)
-
RE: Custom script execution after backup job
@olivierlambert Thanks!! I thought there might be something I could tie into. Would you mind pointing me in the direction of any docs for the webhooks? I will start digging through them and see what I can put together.
-
Custom script execution after backup job
Hi Everyone,
I was wondering if there is a way to execute a custom script (bash, etc.) after a backup job has completed. Ideally I would love to be able to execute a script after each VM's backup completes, but I would be happy with just calling a script after all VMs have completed.
What I am trying to achieve is this: we have multiple DCs that we have our VMs running in. There is a 'prime' DC that handles 99% of our traffic and a standby DC that mirrors the configuration. Each night I perform delta backups of the VMs in the prime DC, and over the weekend I perform CR backups to the backup DC.
In times of failure in the prime DC, I would have to change BGP routing to point the prime's IP address to the backup DC and then incrementally start up each VM. We are looking at ways to shorten the time required to start each of the VMs in the backup DC and have a hot-standby VM ready. What we are investigating is the possibility of doing delta and CR backups at the same time between the DCs and leaving a 'copy' of the CR backup running, ready to slot into place in times of failure.
Obviously we would have to copy the CR backup to another VM and start that up, but this is where the automation would be nice. If there were a way to tell an external script that a VM has completed its backup task, we could have the script shut down the current CR-Copy, destroy it, copy the CR backup to a new CR-Copy, and then start it. So, in times of failure, I would only have to worry about changing the physical routing on the networks, not about the delay of starting the VMs.
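For reference, the sort of thing I picture that script doing against the backup pool is just plain xe calls, roughly like this (all the UUIDs and names here are made-up placeholders):
xe vm-shutdown uuid=<current CR-Copy uuid>
xe vm-uninstall uuid=<current CR-Copy uuid> force=true        # destroys the old copy and its disks
xe vm-copy vm=<CR backup uuid> new-name-label=CR-Copy sr-uuid=<local SR uuid>
xe vm-start uuid=<new CR-Copy uuid>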
Anyone have any ideas on how this can be done easily?
Kind Regards,
Peg