SMAPIv3 - Feedback & Bug reports
They removed the Hypervisor tag from their blog, so finding those posts isn't easy anymore, and CH/XS posts get lost among things I don't care about.
Looks a bit like they only want to keep it because VMware Horizon becomes more attractive if Citrix doesn't bring its own HV for XenApp/XenDesktop for free.
Citrix wants to keep its HV/platform because they don't want to rely solely on competitors to run XenApp/XenDesktop. It's not a "product" on its own, but a technical part of their solution.
I can't find anything on this blog
It's all about XA/XD, I'm afraid, but it does state year-end 2019 for the next LTSR.
"Long-Term Service Release News
For those of you on 7.15 LTSR — one of our most popular releases — we have some exciting news about our next LTSR availability. We’re planning our next LTSR for year-end."
Somewhere they said they want to align version numbers, and an XA/XD LTSR only makes sense on a matching CH LTSR.
Hmm, okay, so it's not written down explicitly. We'll see soon enough.
cocoon XCP-ng Center Team 🏚️
Hi, is the "raw-device-plugin" branch already working?
I just upgraded to 8.0 and wanted to try it now, but found out that the latest released package is v1.0.2 and is missing the raw-device plugin.
@ronan-a is still working on it, and he's in the middle of a load of tests/benchmarks.
cocoon XCP-ng Center Team 🏚️
OK, thanks. So I'd better wait a bit before I try anything?
@olivierlambert @ronan-a Has there been any update on this development? I do see an update to master in org.xen.xapi.storage.raw-device from Feb 2020. Is it safer to test now?
SMAPIv3 isn't production ready yet (we aren't really happy with its current state). Also, because we are working on LINSTOR storage with LINBIT, that takes a lot of our storage R&D resources right now. We can't be everywhere, so we have to prioritize…
OK, thanks for the update. I wasn't expecting production ready; I was just curious about the status and whether it was safe to run tests with it, which it sounds like it isn't.
I see movement in the GitHub repo again. Good sign!
Is there anything new with SMAPIv3?
We'll let you know when there's something visible to show. But yes, we're working on it.
I am playing a bit with SMAPIv3. I created a storage of type file in /mnt/ and a VM. After restarting XCP-ng, /mnt/ was not mounted, so the storage was not attached. I mounted it manually, and when starting the VM it throws:
Error code: SR_BACKEND_FAILURE_24
Error parameters: VDIInUse, The VDI is currently in use
I detached the disk from the VM, and I also deleted the VM and created a new one, but it still gives the same error.
Detach and repair do the same.
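Not a fix for the stuck VDIInUse state, but the root cause here looks like the file SR's backing directory not being mounted at boot. Below is a minimal sanity-check sketch (the mount point is a placeholder, and it assumes a matching /etc/fstab entry exists) that could run before the SR's PBD is plugged and VMs are started:

```python
#!/usr/bin/env python3
# Minimal pre-flight check for a file-based SR whose backing directory must
# be mounted before anything touches the SR. The path is a placeholder.
import os
import subprocess
import sys

SR_PATH = "/mnt/sr1"  # hypothetical mount point backing the file SR

def main():
    if not os.path.ismount(SR_PATH):
        # Try to mount it (relies on an /etc/fstab entry for SR_PATH).
        result = subprocess.run(["mount", SR_PATH], capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"{SR_PATH} is not mounted and mounting failed: {result.stderr.strip()}")
    print(f"{SR_PATH} is mounted; safe to (re)plug the SR's PBD and start VMs.")

if __name__ == "__main__":
    main()
```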
Am testing ext4-ng and noted that the VDIs it creates do not match their UUIDs on the filesystem. If I look at the ext4 file structure itself, they are simply labeled 1, 2, 3, etc. (see the sketch after this post).
Good news is I could create a 3 TB VDI on this SR within XOA without having to use the command to force it as raw.
I tried a raw-device SR but still get an error that the driver is not recognized. I'm assuming the plugin still isn't included.
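To see how those numbered files relate back to the VDIs, one approach is to compare each VDI's uuid with its location field, which is the name the storage backend uses on disk. A minimal sketch using the XenAPI Python bindings on the host (the SR UUID is a placeholder to fill in):

```python
#!/usr/bin/env python
# List the VDIs of one SR and show uuid vs. location; location is what the
# storage backend uses as the on-disk name (e.g. "1", "2", "3" here).
import XenAPI

SR_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder: your ext4-ng SR

session = XenAPI.xapi_local()
session.xenapi.login_with_password("root", "")
try:
    sr = session.xenapi.SR.get_by_uuid(SR_UUID)
    for vdi in session.xenapi.SR.get_VDIs(sr):
        uuid = session.xenapi.VDI.get_uuid(vdi)
        location = session.xenapi.VDI.get_location(vdi)
        print("%s -> %s" % (uuid, location))
finally:
    session.xenapi.logout()
```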
tjkreidl Ambassador 📣
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
You need to get rid of SMAPIv1 concepts. If you meant "iSCSI block" support, the answer for right now: no.
It's a brand new approach, so we'll take time to find the best one, to avoid all the mess SMAPIv1 had on block devices (no thin provisioning, race conditions, etc.).
I think the next big "device type" support might be raw (passing a whole disk to the guest without any extra layer).
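To make the quoted "raw" idea concrete: with no image format between the guest and the disk, the backend's job shrinks to handing over the block device and reporting its size. A minimal illustration (not the actual xapi-storage plugin API; the device path is hypothetical):

```python
#!/usr/bin/env python3
# Illustration only (not the xapi-storage plugin API): with a "raw" device
# type there is no image format on top, so the backend just needs the block
# device path and its size.
import os

def raw_device_size(path):
    """Return the size in bytes of a whole block device such as /dev/sdb."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

if __name__ == "__main__":
    device = "/dev/sdb"  # hypothetical disk to pass through untouched
    print("%s: %d bytes" % (device, raw_device_size(device)))
```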
Ages ago (in the 1980s), I experimented with raw disk I/O on VAX systems using QIO calls. Yes, it's fast, but it also doesn't take bad blocks or deteriorating disk sectors into account. I can't recall offhand if there was a way to at least update the bad block list or if you had to start from scratch.
Are there better mechanisms these days to handle such things as read/write errors and re-allocation to good blocks if bad blocks are detected on a running system?
Andrew Top contributor 💪
@tjkreidl In days gone by, drives used to have a bad sector list printed on the case (SMD/MFM/RLL). It would also be stored on the drive for quick reference. When you formatted the drive, the software would use the bad sector list and then add to it during formatting tests. These sectors were "allocated" in the filesystem so they would not be used for normal storage. DOS and Unix support a hidden bad block list for this.
As time progressed, controllers got smarter and bad sector avoidance moved from the OS to the controllers. The systems were able to map out bad blocks into spare sectors or tracks. As the controllers became integrated onto the drives (SCSI, IDE, etc.), the drives mapped out bad sectors automatically, hidden from the OS, and offered a continuous range of good blocks. This is why systems have moved to LBA and don't use Head/Track/Sector addressing.
So data block X is always data block X, even if the drive moved it somewhere else; the OS does not know or care.
This contiguous whole-disk range of good blocks exists today with flash storage and is handled automatically and dynamically by the flash controllers. As flash blocks fail (or just get near failure) and are reallocated, the spare block count decreases. When spare blocks reach zero, most flash drives force a read-only mode and the device has reached end of life. Hard drives also have a limited number of spare blocks. SMART tools can be used to check how healthy a drive is.
So today, RAW drive/storage devices are not really raw; they are managed by the device and storage controller (flash, SATA, SAS, RAID, etc.) to present good blocks. An I/O failure is very bad, as it indicates a true unrecoverable failure and that it's time to replace the drive.
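Following up on the SMART remark above: a small sketch that shells out to smartctl (from smartmontools) and reports the reallocated-sector attribute. The device path is a placeholder and the parsing is deliberately simple:

```python
#!/usr/bin/env python3
# Quick health peek via smartctl (smartmontools): report the reallocated
# sector count, the attribute that grows as a drive remaps failing blocks.
import subprocess

DEVICE = "/dev/sda"  # hypothetical drive to check

def reallocated_sectors(device):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            # The raw value is the last column of the attribute table.
            return int(line.split()[-1])
    return None

if __name__ == "__main__":
    count = reallocated_sectors(DEVICE)
    if count is None:
        print("No Reallocated_Sector_Ct attribute reported (NVMe/SAS drives report differently).")
    else:
        print("%s: %d reallocated sectors" % (DEVICE, count))
```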