
Posts
-
RE: No backups after config restore
bump. Spent some time again remembering why backups don't work after migration.
-
RE: DevOps Megathread: what you need and how we can help!
@olivierlambert bookmarked. have no time for this right now.
-
RE: DevOps Megathread: what you need and how we can help!
Main task - create a base VM image or update an existing one. Apply a few system tweaks, and sometimes change disk volume.
Most of this I do with Ansible, but not VM creation. I tried to get into Terraform/Packer, but there are almost no howtos about them. Also the license scandals and the new forks. I'll wait to see who survives.
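For the record, plain xe from dom0 can cover the base-image part without Terraform/Packer; a rough sketch, with the template name, VM name and disk size as placeholders:
# Clone a new VM from an existing template
xe vm-install template=debian-base new-name-label=app01
# Find the VM's disk UUID, then grow it
xe vm-disk-list vm=app01
xe vdi-resize uuid=<vdi-uuid> disk-size=50GiB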
-
RE: SR Garbage Collection running permanently
I guess I'm a little confused.
Probably after the fix all bad snapshots were removed, and now they exist only for halted (archive) VMs. They got backed up only once with such a job:
so without backup tasks, GC for VDI chains is not running. Is it safe to remove them manually, or better to run the backup task again? (that is very long and not required).
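If it helps, a sketch of how I'd inspect the leftover snapshot VDIs before touching anything (SR and VDI UUIDs are placeholders):
# List snapshot VDIs on the SR
xe vdi-list sr-uuid=<sr-uuid> is-a-snapshot=true params=uuid,name-label
# Only once a VDI is confirmed unneeded
xe vdi-destroy uuid=<vdi-uuid>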
-
RE: SR Garbage Collection running permanently
@tjkreidl but is that the same scan as
?
-
RE: SR Garbage Collection running permanently
@tjkreidl nothing unusual.
I found the same issue on another 8.3 pool, another SR, but never saw the related GC tasks. No exceptions in the log.
But no problems with a 3rd 8.3 pool. The SR scan doesn't trigger GC; can I run it manually?
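From what I understand, either of these should kick it by hand; a sketch, with the SR UUID as a placeholder and the cleanup.py flags as I remember them on a stock XCP-ng host:
# An SR scan normally kicks the GC
xe sr-scan uuid=<sr-uuid>
# Or run the GC/coalesce process directly in dom0
/opt/xensource/sm/cleanup.py -g -u <sr-uuid>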
-
RE: SR Garbage Collection running permanently
@Danp No, I can't find any exceptions.
That's a typical log for now; it repeats a lot of times:
Jan 20 00:02:00 host SM: [2736362] Kicking GC
Jan 20 00:02:00 host SM: [2736362] Kicking SMGC@93d53646-e895-52cf-7c8e-df1d5e84f5e4...
Jan 20 00:02:00 host SM: [2736362] utilisation 40394752 <> 34451456
Jan 20 00:02:00 host SM: [2736362] VDIs changed on disk: ['34fa9f2d-95fa-468e-986c-ade22b92b1f3', '56b94e20-01ae-4da1-99f8-03aa901da64f', 'a75c6e7b-7f8d-4a4b-99$
Jan 20 00:02:00 host SM: [2736362] Updating VDI with location=34fa9f2d-95fa-468e-986c-ade22b92b1f3 uuid=34fa9f2d-95fa-468e-986c-ade22b92b1f3
Jan 20 00:02:00 host SM: [2736362] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:00 host SMGC: [2736466] === SR 93d53646-e895-52cf-7c8e-df1d5e84f5e4: gc ===
Jan 20 00:02:00 host SM: [2736466] lock: opening lock file /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/running
Jan 20 00:02:00 host SMGC: [2736466] Found 0 cache files
Jan 20 00:02:00 host SM: [2736466] lock: tried lock /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr, acquired: True (exists: True)
Jan 20 00:02:00 host SM: [2736466] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/93d53646-e895-52cf-7c8e-df1d5e84f5e4/*.vhd']
Jan 20 00:02:00 host SM: [2736614] lock: opening lock file /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:00 host SM: [2736614] sr_update {'host_ref': 'OpaqueRef:3570b538-189d-6a16-fe61-f6d73cc545dc', 'command': 'sr_update', 'args': [], 'device_config':$
Jan 20 00:02:00 host SM: [2736614] lock: closed /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:01 host SM: [2736466] pread SUCCESS
Jan 20 00:02:01 host SMGC: [2736466] SR 93d5 ('VM Sol flash') (73 VDIs in 39 VHD trees):
Jan 20 00:02:01 host SMGC: [2736466] *70119dcb(50.000G/45.497G?)
Jan 20 00:02:01 host SMGC: [2736466] e6a3e53a(50.000G/107.500K?)
Jan 20 00:02:01 host SM: [2736466] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/sr
Jan 20 00:02:01 host SMGC: [2736466] Got sm-config for *70119dcb(50.000G/45.497G?): {'vhd-blocks': 'eJzFlrFuwjAQhk/Kg5R36Ngq5EGQyJSsHTtUPj8WA4M3GDrwBvEEDB2yEaQQ$
Jan 20 00:02:01 host SMGC: [2736466] No work, exiting
Jan 20 00:02:01 host SMGC: [2736466] GC process exiting, no work left
Jan 20 00:02:01 host SM: [2736466] lock: released /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/gc_active
Jan 20 00:02:01 host SMGC: [2736466] In cleanup
Jan 20 00:02:01 host SMGC: [2736466] SR 93d5 ('VM Sol flash') (73 VDIs in 39 VHD trees): no changes
Jan 20 00:02:01 host SM: [2736466] lock: closed /var/lock/sm/93d53646-e895-52cf-7c8e-df1d5e84f5e4/running
@tjkreidl Free space is enough. I never see the GC job running for long, so I never saw it on the other pools or after the fix. There is no coalesce in the queue.
xe task-list
shows nothing. Maybe I need to clean up the bad VDIs manually the first time?
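Since the GC never shows up in xe task-list, the only way I know to watch it is SMlog; a sketch assuming the stock log path on the host:
# Follow the GC lines in the storage manager log
tail -f /var/log/SMlog | grep SMGC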
-
RE: SR Garbage Collection running permanently
2 days, a few backup cycles, and the snapshot count won't decrease.
-
RE: SR Garbage Collection running permanently
@olivierlambert got it. Will see what happens in a few days.
-
RE: SR Garbage Collection running permanently
@olivierlambert is there some limit on how many items get removed per run?
-
RE: Manual snapshots retention
@flakpyro true, sometimes that happens and they will never be deleted.
I don't remember, but maybe such snapshots are not visible as orphaned.
-
RE: SR Garbage Collection running permanently
@Tristis-Oris GC done, ~5 items removed, ~20 left.
-
RE: SR Garbage Collection running permanently
@Danp after some time the GC task started automatically and has been running for 1 hour already. Still at about 50%.
-
RE: Manual snapshots retention
@olivierlambert well, 30 days is a super maximum for any real need to restore anything.
-
RE: XO tasks - select all button
@olivierlambert as I said, it works now. So nvm)
-
RE: XO tasks - select all button
@DustinB from source. I suppose it was available only for XOA?
-
RE: XO tasks - select all button
@olivierlambert I remember the news, but it never worked for me(
-
RE: SR Garbage Collection running permanently
- Installed the patch, rebooted the pool.
- A GC job started during the restart and got stuck at 0%, so I restarted the toolstack again.
- Now nothing is running, and the bad snapshots haven't disappeared.
Should I wait longer, or?
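For reference, what I'd check for a stuck GC (the lock path is taken from the SMlog output above; SR UUID is a placeholder):
# See if a GC process still holds the SR's gc_active lock
ls -l /var/lock/sm/<sr-uuid>/gc_active
ps aux | grep '[c]leanup.py'
# Restart the toolstack on the host if it looks wedged
xe-toolstack-restart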