XCP-ng
    [DEPRECATED] SMAPIv3 - Feedback & Bug reports

    Development
    75 Posts 20 Posters 29.2k Views 19 Watching
olivierlambert (Vates 🪐 Co-Founder & CEO):

      Hmm okay so it's not written directly. We'll see soon enough 🙂

cocoon (XCP-ng Center Team):

        Hi, is the "raw-device-plugin" branch already working?
        https://github.com/xcp-ng/xcp-ng-xapi-storage/tree/feat/raw-device-plugin

        I just upgraded to 8.0 and wanted to try it now, but found that the latest released package is v1.0.2, which is missing the raw-device plugin.

olivierlambert (Vates 🪐 Co-Founder & CEO):

          @ronan-a is still working on it, and he's in the middle of a load of tests/benchmarks.

cocoon (XCP-ng Center Team):

            OK, thanks. So I'd better wait a bit before trying anything?

olivierlambert (Vates 🪐 Co-Founder & CEO):

              Exactly.

ravenet:

                @olivierlambert @ronan-a Has there been any update on this development? I do see an update to master in org.xen.xapi.storage.raw-device from Feb 2020. Is it safer to test now?

olivierlambert (Vates 🪐 Co-Founder & CEO):

                  SMAPIv3 isn't production ready yet (we aren't really happy with the current state of it). Also, because we are working on LINSTOR storage with LINBIT, it takes a lot of our storage R&D/resources right now. We can't be everywhere, so we have to prioritize…

ravenet:

                    OK, thanks for the update. I wasn't expecting production-ready; I was just curious about the status and whether it was safe to run tests with it, which it sounds like it isn't.

ravenet:

                      I see movement in the GitHub repo again. Good sign!

mdavico:

                        Is there anything new with SMAPIv3?

olivierlambert (Vates 🪐 Co-Founder & CEO):

                          We'll let you know when there's something visible to show. But yes, we're working on it.

mdavico:

                            I'm playing a bit with SMAPIv3. I created an SR of type file in /mnt/ plus a VM. After restarting XCP-ng, /mnt/ was not mounted, so the SR was unavailable. I mounted it manually, but when starting the VM it throws:

                            Error code: SR_BACKEND_FAILURE_24
                            Error parameters: VDIInUse, The VDI is currently in use

                            I detached the disk from the VM, and I also deleted the VM and created a new one, but it still gives the same error.

                            Detach and repair do the same.
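For anyone hitting the same symptom: a hedged sketch of the checks one could run in this situation. The mount point, device, and UUIDs below are placeholders, and the stale-tapdisk theory is an assumption, not a confirmed diagnosis:

```shell
# Make sure the backing filesystem is actually mounted before xapi touches the SR
# (adding it to /etc/fstab avoids the missing mount after reboot)
mount | grep /mnt/sr-backing || mount /dev/sdb1 /mnt/sr-backing   # placeholder device/path

# Re-scan the SR so xapi picks up the now-visible VDIs
xe sr-scan uuid=<sr-uuid>

# Check for a stale tapdisk still holding the VDI open
tap-ctl list

# As a last resort, toggle the SR's PBD to force a clean re-attach
xe pbd-list sr-uuid=<sr-uuid>
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>
```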

ravenet:

                              I'm testing ext4-ng and noticed that the VDIs it creates don't match their UUIDs on the filesystem. If I look at the ext4 file structure itself, they are simply labeled 1, 2, 3, etc.
                              Good news: I could create a 3 TB VDI on this SR within XOA without having to use the command to force it as raw.

                              I tried a raw-device SR but still get an error that the driver is not recognized. I assume the plugin still isn't in.
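To correlate those numeric files with actual VDIs, something like this should work. The SR mount path and UUID are placeholders, and the numeric on-disk names are the plugin's internal keys, so matching by size is a rough heuristic:

```shell
# List the on-disk files of the SR (numeric names are the plugin's internal keys)
ls -lh /run/sr-mount/<sr-uuid>/

# List the VDIs xapi knows about on that SR, with UUIDs and sizes to match against
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size
```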

tjkreidl (Ambassador):

                                @olivierlambert said in SMAPIv3 - Feedback & Bug reports:

                                You need to get rid of SMAPIv1 concepts 😉 If you meant "iSCSI block" support, the answer for right now: no.

                                It's a brand new approach so we'll take time to find the best one, to avoid all the mess that had SMAPIv1 on block devices (non thin, race conditions etc.)

                                I think the next big "device type" support might be raw (passing a whole disk without any extra layer to the guest).

                                Ages ago (in the 1980s), I experimented with raw disk I/O on VAX systems using QIO calls. Yes, it's fast, but it also doesn't take bad blocks or deteriorating disk sectors into account. I can't recall offhand if there was a way to at least update the bad block list or if you had to start from scratch.

                                Are there better mechanisms these days to handle such things as read/write errors and re-allocation to good blocks if bad blocks are detected on a running system?

                                Reference: https://www.tech-insider.org/vms/research/acrobat/7808.pdf

Andrew (Top contributor):

                                  @tjkreidl In days gone by drives used to have a bad sector list printed on the case (SMD/MFM/RLL). It would also be stored on the drive for quick reference. When you formatted the drive the software would use the bad sector list and then add to it during formatting tests. These sectors were "allocated" in the filesystem so they would not be used for normal storage. DOS and unix support a hidden bad block list for this.

                                  As time progressed the controllers got smarter and the bad sector avoidance moved from the OS to the controllers. The systems were able to map out bad blocks into spare sectors or tracks. As the controllers became integrated onto the drives (SCSI, IDE, etc) the drives mapped out bad sectors automatically and hidden from the OS and offered a continuous range of good blocks to the OS. This is why systems have moved to LBA and don't use Head/Track/Sector.

                                  So data block X is always data block X, even if the drive moved it somewhere else… the OS does not know or care.

                                  This contiguous whole disk range of good blocks exists today with flash storage and is automatically and dynamically handled by the flash controllers. As the flash blocks fail (or just get near failure) and get reallocated the spare block count decreases. When spare blocks reach 0 (zero, none) most flash drives force a read-only mode and the device has reached end of life. Hard drives also have a limited number of spare blocks. SMART tools can be used to check how healthy a drive is.

                                  So today RAW drive/storage devices are not really raw but managed by the device and storage controller (flash, SATA, SAS, RAID, etc) to provide good blocks. I/O failure is very bad as it indicates a true unrecoverable failure and time to replace the drive.
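The drive-managed remapping described above can be observed from the OS with smartmontools; a quick sketch (device names are placeholders, and which attributes a drive reports varies by vendor):

```shell
# Overall health verdict from the drive's own self-assessment
smartctl -H /dev/sda

# The attributes that track remapping: reallocated, pending, and uncorrectable sectors
smartctl -A /dev/sda | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

# NVMe devices report spare capacity directly instead of per-sector counters
smartctl -a /dev/nvme0 | grep -i 'available spare'
```

A rising reallocated or pending count is the early-warning signal that the drive is eating into its spare blocks, well before I/O errors surface to the OS.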

tjkreidl (Ambassador):

                                    @Andrew Thank you for that, much appreciated. Although I was aware of this process for SSD drives, I did not know that spinning disks had become that much smarter in the interim (~40 years!). But in any case, raw drives are very powerful if you have decent code to access them and the overhead can be appreciably less than with formatted drives.

Forza:

                                      @olivierlambert Hi. I'm also eager to see how the new v3 is progressing. From my company's point of view, being able to compact VDIs using guest trim/unmap is very valuable, as it minimises storage space usage and improves backup/restore speeds.
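For reference, whether this works end to end can be checked from inside the guest; a sketch, assuming the virtual disk is /dev/xvda and that the datapath advertises discard support at all (which is exactly the open question here):

```shell
# Check whether the block device advertises discard support:
# non-zero DISC-GRAN / DISC-MAX columns mean discard requests will be passed down
lsblk --discard /dev/xvda

# Trim all mounted filesystems that support it, verbosely
fstrim -av

# Alternative: mount with continuous discard instead of periodic fstrim
# mount -o discard /dev/xvda1 /mnt
```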

olivierlambert (Vates 🪐 Co-Founder & CEO):

                                        A big blog post is coming soon. I need to check with @matiasvl about trim passing via raw tapdisk datapath.

swivvle:

                                          Please let us know when we can test that new zfs-ng!

Chmura:

                                            SMAPIv3 looks very exciting. Unfortunately, at the bottom it's still tapdisk, and that has one very serious limitation: no I/O or bandwidth limits ;(
