Best posts made by Tristis Oris
-
RE: Clearing Failed XO Tasks
@doogie06
Register the CLI: xo-cli --register http://url login
Then remove all tasks: xo-cli rest del tasks
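A minimal sketch of the whole sequence, assuming a recent XO build where the REST collection is named tasks (the URL and login are placeholders; the xo-cli rest get step is only a sanity check I've added, not part of the original tip):
```
# Register this machine against the XO instance (placeholders).
xo-cli --register http://url login

# Optional: inspect the tasks collection before wiping it.
xo-cli rest get tasks

# Remove all tasks, failed ones included.
xo-cli rest del tasks
```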
-
RE: New Rust Xen guest tools
The problem with the broken package on RHEL 8-9 looks resolved now. I've done multiple tests.
-
RE: New Rust Xen guest tools
@DustinB said in New Rust Xen guest tools:
systemctl enable xe
Yep, found them.
Full steps for RHEL:
wget https://gitlab.com/xen-project/xen-guest-agent/-/jobs/6041608360/artifacts/raw/RPMS/x86_64/xen-guest-agent-0.4.0-0.fc37.x86_64.rpm
rpm -i xen-guest-agent*
yum remove -y xe-guest-utilities-latest
systemctl enable xen-guest-agent.service --now
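A quick post-install check, if you want one (plain systemd/rpm commands, not part of the original steps):
```
# Confirm the new agent is running and enabled.
systemctl status xen-guest-agent.service

# Confirm the old tools are really gone.
rpm -q xe-guest-utilities-latest
```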
-
RE: New Rust Xen guest tools
@chrisfonte the new tools will remove the old ones during install.
-
RE: Can not create a storage for shared iso
@ckargitest that was fixed a few weeks ago. Are you using the latest commit?
-
RE: Pool is connected but Unknown pool
@julien-f @olivierlambert
2371109b6fea26c15df28caed132be2108a0d88e
Fixed now, thank you.
-
RE: CBT: the thread to centralize your feedback
Updated to the fix_cbt branch.
CR NBD backup works.
Delta NBD backup works, but just once so far, so we can't be sure yet.
No broken tasks are generated.
Still confused why the CBT toggle is enabled on some VMs:
two similar VMs on the same pool, same storage, same Ubuntu version. One is enabled automatically, the other is not.
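If anyone wants to compare the toggle between two such VMs from dom0, the VDI cbt-enabled flag should show it (standard xe commands; the UUID is a placeholder):
```
# List the CBT state of every VDI on the pool.
xe vdi-list params=uuid,name-label,cbt-enabled

# Disable it explicitly on one VDI if needed.
xe vdi-disable-cbt uuid=<vdi-uuid>
```
-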
RE: Advice on good remote targets?
If you don't have any dedicated storage, it's possible to use literally any PC with NFS/SMB or an external drive; a Windows share works too.
-
RE: two separate process for backup?
Backups are not very fast anyway because of XAPI limitations. I see a cap of about 300-400 Mbit on a 10 Gbit local network.
-
RE: Permissions Make No Sense
I agree, I've got a similar dilemma now.
I need to grant permissions to a single host (GPU computing) plus the ability to create/remove VMs. But it looks like there is no ACL for VM creation; it's available only to global admin/pool admin accounts. So the only way is to set up a standalone XO for each single host? That way I lose the pool storage and VM migration.
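For the per-object permissions that do exist, XO's acl.add method can be scripted through xo-cli; a sketch with placeholder IDs (and it still doesn't cover VM creation, which is the missing piece):
```
# Grant a user the built-in "admin" role on one object (e.g. a VM).
xo-cli acl.add subject=<user-id> object=<vm-id> action=admin
```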
Latest posts made by Tristis Oris
-
RE: DevOps Megathread: what you need and how we can help!
@olivierlambert bookmarked. I have no time for this right now.
-
RE: DevOps Megathread: what you need and how we can help!
The main task: create a base VM image or update an existing one, apply a few system tweaks, and sometimes change the disk volume.
Most of this I do with Ansible, but not VM creation. I tried to get into Terraform/Packer, but there are almost no howtos about them. Also the license scandals and new forks; I'll wait to see who survives.
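For the VM-from-template part, plain xo-cli can already do it; a sketch assuming the vm.create API method with its usual template and name_label parameters (the parameter set varies by XO version, so check xo-cli list-commands on your build first):
```
# Create a VM from an existing template (IDs are placeholders).
xo-cli vm.create name_label='base-image-test' template=<template-id>
```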
-
RE: SR Garbage Collection running permanently
I guess I'm a little confused.
Probably after the fix all the bad snapshots were removed, and now they exist only for halted (archive) VMs. They were backed up only once with such a job:
So without backup tasks, GC for the VDI chains doesn't run. Is it safe to remove them manually, or is it better to run the backup task again? (That is very long and not required.)
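If manual removal turns out to be the way, the usual dom0 route is snapshot-uninstall; a sketch with placeholder UUIDs (this destroys the snapshot's disks, so double-check with xe help snapshot-uninstall before running it):
```
# Find the leftover snapshots, then destroy one by UUID.
xe snapshot-list params=uuid,name-label
xe snapshot-uninstall snapshot-uuid=<snapshot-uuid>
```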
-
RE: SR Garbage Collection running permanently
@tjkreidl But is that the same scan as
?
-
RE: SR Garbage Collection running permanently
@tjkreidl nothing unusual.
I found the same issue on another 8.3 pool, another SR, but never saw related GC tasks. No exceptions in the log.
But there are no problems with the 3rd 8.3 pool. The SR scan doesn't trigger GC; can I run it manually?
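For the record, the GC can be kicked by hand on the host; sr-scan is the safe trigger, and the collector script itself lives under /opt/xensource/sm/ (the cleanup.py flag below is an assumption from memory, so verify with --help before running):
```
# Rescan the SR, which normally kicks SMGC.
xe sr-scan uuid=<sr-uuid>

# Or invoke the garbage collector directly (assumed -u flag; check first).
/opt/xensource/sm/cleanup.py --help
/opt/xensource/sm/cleanup.py -u <sr-uuid>
```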
-
RE: SR Garbage Collection running permanently
@Danp No, I can't find any exceptions.
That's the typical log for now; it repeats many times:
Jan 20 00:02:00 host SM: [2736362] Kicking GC
Jan 20 00:02:00 host SM: [2736362] Kicking SMGC@93d53646-e895-52cf-7c8e-df1d5e84f5e4...
Jan 20 00:02:00 host SM: [2736362] utilisation 40394752 <> 34451456
Jan 20 00:02:00 host SM: [2736362] VDIs changed on disk: ['34fa9f2d-95fa-468e-986c-ade22b92b1f3', '56b94e20-01ae-4da1-99f8-03aa901da64f', 'a75c6e7b-7f8d-4a4b-99$
Jan 20 00:02:00 host SM: [2736362] Updating VDI with location=34fa9f2d-95fa-468e-986c-ade22b92b1f3 uuid=34fa9f2d-95fa-468e-986c-ade22b92b1f3
Jan 20 00:02:00 host SM: [2736362] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:00 host SMGC: [2736466] === SR 93d53646-e895-52cf-7c8e-df1d5e84f5e4: gc ===
Jan 20 00:02:00 host SM: [2736466] lock: opening lock file /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/running
Jan 20 00:02:00 host SMGC: [2736466] Found 0 cache files
Jan 20 00:02:00 host SM: [2736466] lock: tried lock /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr, acquired: True (exists: True)
Jan 20 00:02:00 host SM: [2736466] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/93d53646-e895-52cf-7c8e-df1d5e84f5e4/*.vhd']
Jan 20 00:02:00 host SM: [2736614] lock: opening lock file /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:00 host SM: [2736614] sr_update {'host_ref': 'OpaqueRef:3570b538-189d-6a16-fe61-f6d73cc545dc', 'command': 'sr_update', 'args': [], 'device_config':$
Jan 20 00:02:00 host SM: [2736614] lock: closed /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:01 host SM: [2736466] pread SUCCESS
Jan 20 00:02:01 host SMGC: [2736466] SR 93d5 ('VM Sol flash') (73 VDIs in 39 VHD trees):
Jan 20 00:02:01 host SMGC: [2736466] *70119dcb(50.000G/45.497G?)
Jan 20 00:02:01 host SMGC: [2736466] e6a3e53a(50.000G/107.500K?)
Jan 20 00:02:01 host SM: [2736466] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:01 host SMGC: [2736466] Got sm-config for *70119dcb(50.000G/45.497G?): {'vhd-blocks': 'eJzFlrFuwjAQhk/Kg5R36Ngq5EGQyJSsHTtUPj8WA4M3GDrwBvEEDB2yEaQQ$
Jan 20 00:02:01 host SMGC: [2736466] No work, exiting
Jan 20 00:02:01 host SMGC: [2736466] GC process exiting, no work left
Jan 20 00:02:01 host SM: [2736466] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/gc_active
Jan 20 00:02:01 host SMGC: [2736466] In cleanup
Jan 20 00:02:01 host SMGC: [2736466] SR 93d5 ('VM Sol flash') (73 VDIs in 39 VHD trees): no changes
Jan 20 00:02:01 host SM: [2736466] lock: closed /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/running
@tjkreidl Free space is enough. I never see the GC job running for long, and I've never seen it on the other pools or after the fix either. There is no coalesce in the queue.
xe task-list
shows nothing. Maybe I need to clean up the bad VDIs manually the first time?
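To see whether SMGC actually does anything on the next cycle, tailing the SM log on the pool master is enough (standard XCP-ng log path):
```
# Follow garbage-collector activity live.
tail -f /var/log/SMlog | grep SMGC
```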
-
RE: SR Garbage Collection running permanently
Two days and a few backup cycles later, the snapshot count won't decrease.
-
RE: SR Garbage Collection running permanently
@olivierlambert got it. I'll see what happens in a few days.
-
RE: SR Garbage Collection running permanently
@olivierlambert is there some limit on the number of items removed per run?
-
RE: Manual snapshots retention
@flakpyro true, sometimes that happens and they will never be deleted.
I don't remember exactly, but maybe such snapshots are not visible as orphaned.