XCP-ng

    file restore on large backups ends in print_req_error: I/O error's

    Xen Orchestra
    Tackyone

      Hi,

      I'm running XO from sources (2af84) under a Debian stretch VM.

      I've got all the dependencies installed to do 'file restore' from backups - and this works on small backups (e.g. ~8 Gbyte).

      However - a much larger VM I have backed up (~100Gbyte) fails.

      Looking on the system, I can see it's created the loopback device (loop0) and mounted it on the XO VM, but then you start getting:

      [882294.370559] print_req_error: I/O error, dev loop0, sector 8912912
      

      Errors of that type then repeat. Their appearance coincides with using XO to 'browse' through the directory structure of the backup (i.e. traverse the mounted backup filesystem).

      There's not a lot else logged in syslog, e.g. from the point you select a backup to file restore from, I see:

      [878193.892486] fuse init (API version 7.27)
      [878195.968810] loop: module loaded
      [878196.269943] EXT4-fs (loop0): write access unavailable, skipping orphan cleanup
      [878196.269996] EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery,norecovery
      [882233.697709] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
      [882235.713249] EXT4-fs (loop0): write access unavailable, skipping orphan cleanup
      [882235.713284] EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery,norecovery
      [882294.370559] print_req_error: I/O error, dev loop0, sector 8912912
      

      The I/O errors then just recur, and the XO VM is left locked up at this point (kind of understandably).
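      As an aside, the failing sector number can be turned into a byte offset, since these kernel messages always count in 512-byte sectors regardless of the filesystem block size. A quick sanity check of how far into whatever loop0 maps the read dies (this is my own back-of-envelope arithmetic, not anything XO reports):

      ```shell
      # Kernel I/O error messages report sectors in 512-byte units,
      # so the byte offset is simply sector * 512.
      SECTOR=8912912
      echo "$((SECTOR * 512)) bytes into loop0"            # 4563410944
      echo "~$((SECTOR * 512 / 1024 / 1024 / 1024)) GiB"   # ~4
      ```

      So the failure is roughly 4.25 GiB into the mapped device - i.e. well inside it, not at the very start or end.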

      I'm trying to work out what else I can do to investigate this - e.g. is there a way of getting XO to log what it did to create the local loop0 mount, so I can try to replicate it from the CLI?
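      For reference, this is roughly what I'd expect the manual equivalent to look like - a hypothetical reconstruction only, since I don't know XO's exact commands, and all paths and the offset below are made up. The syslog sequence (fuse init, then the loop module, then an ext4 ro/norecovery mount directly on loop0) suggests something like:

      ```shell
      # HYPOTHETICAL sketch - XO's actual commands may differ; paths are invented.

      # 1. Expose the VHD chain as a flat raw image via a FUSE tool
      #    (e.g. vhdimount from libvhdi-utils, which presents it as .../vhdi1).
      vhdimount /path/to/delta-backup.vhd /tmp/vhdi

      # 2. Attach a read-only loop device at the partition's byte offset.
      #    Since ext4 appears directly on loop0, the loop device seems to map
      #    the partition, not the whole disk; OFFSET here is a placeholder.
      losetup --find --show --read-only --offset "$OFFSET" /tmp/vhdi/vhdi1

      # 3. Mount read-only without journal replay - this matches the
      #    "mounted filesystem without journal. Opts: norecovery" syslog line.
      mount -o ro,norecovery /dev/loop0 /mnt/backup-restore
      ```

      If something like this can be reproduced by hand, the failing step could then be isolated outside XO.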

      I've only tried this on a small VM (~8 Gbyte, which works) and a large VM (~100 Gbyte, which fails) so far - so I don't know if there's a size at which it "stops working".

      If anyone can point me in the right direction for getting more info on what XO's done to get to this point, that'd be great.

      Thanks!

      olivierlambert (Vates 🪐 Co-Founder & CEO)

        First, I would try with an XOA on the latest channel to compare the outcome 🙂

        Tackyone @olivierlambert

          @olivierlambert said in file restore on large backups ends in print_req_error: I/O error's:

          First, I would try with an XOA in latest to compare the outcome 🙂

          Hi,

          Ok - moved from 2af84 to what appears to be 'master' at ce2b918a2 (let me know if that's wrong; I spend my life in SVN, not Git).

          This does exactly the same thing - same issue, same syslog error.

          The backup being used for the file restore is a 100G 'MBR' (i.e. legacy) disk split into 4 partitions, the last partition (/var) being ~78 Gbyte in size - that's the one I select to restore from.

          The source is a Delta backup (with a retention of 4).

          P.S. Sorry my log output above doesn't look well formatted - I don't know if this is a Safari thing or not. I did put 'code' markup before/after, but here it looks like it's collapsed down to one line 😞

          olivierlambert

            That's why you should try on XOA, not the sources. That way, we can see if it's an XO bug or an issue with your local source install 🙂

            Tackyone @olivierlambert

              @olivierlambert

              Sorry - I missed the 'XOA', only saw 'latest'... More coffee needed... Off to try.

              Tackyone @olivierlambert

                @olivierlambert said in file restore on large backups ends in print_req_error: I/O error's:

                That's why you should try on XOA, not the sources. This way, we can see if it's a XO bug or an issue on your local source install 🙂

                Ok, the latest XOA (updated) has the same issue - the process dies with loop0 I/O errors logged to the console, same as my 'from sources' version.

                I'm going to do a full restore of the VM, spin it up, and do a full filesystem check on it (well, as good a check as ext3/4 allows) - just to make 100% sure there's no corruption, as that seems a sensible start and is relatively easy to do.

                olivierlambert

                  Yes, that sounds like the next step to take 🙂 Something is wrong during the mount phase. Also check that you have enough RAM in your XOA.

                  Tackyone @olivierlambert

                    @olivierlambert said in file restore on large backups ends in print_req_error: I/O error's:

                    Yes, that sounds the next step to do 🙂 Something is wrong during the mount phase. Also check you have enough RAM in your XOA.

                    Ok, the filesystem checks out, as far as I can see, having:

                    • Done full restore of VM from the same Delta image.
                    • Booted the VM, logged in, and done 'tar cvf /dev/null *' over the filesystem I'm trying to restore from - which ran to completion without error.
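                    If a stricter second opinion is wanted, a forced read-only e2fsck pass checks the filesystem metadata more thoroughly than a tar read of the file data. This is just a suggestion, and the device name is hypothetical:

                    ```shell
                    # Force (-f) a full check and answer "no" (-n) to every fix,
                    # so nothing is modified - safe on the restored VM's partition.
                    # /dev/xvda4 is a guess; substitute the real /var device.
                    e2fsck -f -n /dev/xvda4
                    ```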

                    I upped the RAM for XOA from default 2GiB to 16GiB and re-tried - and same result.

                    I can try yet more RAM if need be, but I'd hope 16GiB is more than enough?

                    The console error on XOA is more detailed than on my install, so I don't know if this helps:

                    blk_update_request: I/O error, dev loop0, sector 8912912 op 0x00:(READ) flags 0x80700 phys_seg 1 prio class 0
                    

                    Looks to be the same issue, just reported differently.

                    You also get (on both systems) kernel chatter about "task blocked" hung-task warnings (understandable, given the state it's in).
