S3 / Wasabi Backup



  • Has anyone successfully gotten S3/Wasabi storage to work?

    I have tried s3fs as a local mount point on the XOA server, and I have also tried faking it through an NFS mount.

    I always get odd behavior and failures. Before I spin my wheels too much, has anyone gotten this to work?


  • XCP-ng Team

    Can you describe more about "odd behavior and failures"?



  • Sure...

    So on the XOA server instance, I have Wasabi mounted in fstab like this:

    s3fs#xoa.mybucket.com /mnt/backupwasabi fuse _netdev,allow_other,use_path_request_style,url=https://s3.wasabisys.com 0 0
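
    (Side note for anyone copying this: the mount also assumes the standard s3fs credentials file is already in place. A minimal sketch, with placeholder keys and the default system-wide path:

    # /etc/passwd-s3fs -- a single line, ACCESS_KEY_ID:SECRET_ACCESS_KEY
    AKIAXXXXXXXXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    s3fs refuses to use the file if it is readable by other users, so chmod 600 it.)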
    

    I often get "operation timed out".

    For example,

     APC Powerchute (vault-1)  
     Snapshot 
    Start: Apr 29, 2020, 01:07:35 PM
    End: Apr 29, 2020, 01:07:48 PM
     Wasabi 
     transfer 
    Start: Apr 29, 2020, 01:07:49 PM
    End: Apr 29, 2020, 02:22:40 PM
    Duration: an hour
    Error: operation timed out
    Start: Apr 29, 2020, 01:07:49 PM
    End: Apr 29, 2020, 02:22:40 PM
    Duration: an hour
    Error: operation timed out
    Start: Apr 29, 2020, 01:07:35 PM
    End: Apr 29, 2020, 02:22:40 PM
    Duration: an hour
    

    Sometimes I get "ENOSPC: no space left on device, write", like this:

     APC Powerchute (vault-1)  
     Snapshot 
    Start: Apr 29, 2020, 12:14:32 PM
    End: Apr 29, 2020, 12:14:36 PM
     Wasabi 
     transfer 
    Start: Apr 29, 2020, 12:14:50 PM
    End: Apr 29, 2020, 12:22:11 PM
    Duration: 7 minutes
    Error: ENOSPC: no space left on device, write
    Start: Apr 29, 2020, 12:14:50 PM
    End: Apr 29, 2020, 12:22:11 PM
    Duration: 7 minutes
    Error: ENOSPC: no space left on device, write
    Start: Apr 29, 2020, 12:14:32 PM
    End: Apr 29, 2020, 12:22:12 PM
    Duration: 8 minutes
    Type: full
    

    Those are the two common errors I get when Wasabi is mounted directly as a local mount point (via s3fs) on the XOA server, with the remote set up as a "local" in the Remotes section.

    I have also tried a whole other Pandora's box of an approach: a VM called "WasabiProxy" that has the bucket mounted locally in /etc/fstab as described above and exposes it as an NFS share, which I then mount into XOA as an NFS remote. I found a forum post where someone used this approach, so I gave it a try (https://mangolassi.it/topic/19264/how-to-use-wasabi-with-xen-orchestra). The export side is sketched below.
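
    For reference, the export on the WasabiProxy VM looks roughly like this (the subnet is a placeholder, and the fsid option is needed because FUSE filesystems don't give the NFS server a stable device ID to export):

    # /etc/exports on WasabiProxy
    /mnt/backupwasabi 192.168.0.0/24(rw,sync,fsid=1,no_subtree_check)

    The XOA side is then just a plain NFS remote pointing at WasabiProxy:/mnt/backupwasabi.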

    However, when I do that, I get two different errors on different runs:

    APC Powerchute (vault-1)  
     Snapshot 
    Start: Apr 28, 2020, 05:56:41 PM
    End: Apr 28, 2020, 05:56:58 PM
     WasabiNFS 
     transfer 
    Start: Apr 28, 2020, 05:56:58 PM
    End: Apr 28, 2020, 05:58:23 PM
    Duration: a minute
    Error: ENOSPC: no space left on device, write
    Start: Apr 28, 2020, 05:56:58 PM
    End: Apr 28, 2020, 05:58:23 PM
    Duration: a minute
    Error: ENOSPC: no space left on device, write
    Start: Apr 28, 2020, 05:56:41 PM
    End: Apr 28, 2020, 05:58:23 PM
    Duration: 2 minutes
    Type: full
    

    AND

     APC Powerchute (vault-1)  
     Snapshot 
    Start: Apr 28, 2020, 05:59:26 PM
    End: Apr 28, 2020, 05:59:44 PM
     WasabiNFS 
     transfer 
    Start: Apr 28, 2020, 05:59:44 PM
    End: Apr 28, 2020, 05:59:44 PM
    Duration: a few seconds
    Error: Unknown system error -116: Unknown system error -116, scandir '/run/xo-server/mounts/a0fbb865-b55d-466c-8933-b3c091e302ff/'
    Start: Apr 28, 2020, 05:59:44 PM
    End: Apr 28, 2020, 05:59:44 PM
    Duration: a few seconds
    Error: Unknown system error -116: Unknown system error -116, scandir '/run/xo-server/mounts/a0fbb865-b55d-466c-8933-b3c091e302ff/'
    Start: Apr 28, 2020, 05:59:26 PM
    End: Apr 28, 2020, 05:59:45 PM
    Duration: a few seconds
    Type: full
    

    The proxy seems like a bit of a wonky workaround to me... I think mounting it as local storage would be better in the long run, but I don't understand what causes it to time out or report the out-of-space error.

    When the out-of-space error occurs, I check that FUSE still has the volume mounted and that something silly hasn't happened, like the mount dropping and the local drive filling up. That doesn't seem to be the case.
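
    Concretely, the checks are along these lines (same paths as the fstab entry above):

    # is the fuse mount still there?
    findmnt /mnt/backupwasabi
    # did the local disk fill up instead?
    df -h /
    # can we still write through the mount?
    touch /mnt/backupwasabi/.write-test && rm /mnt/backupwasabi/.write-test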


  • XCP-ng Team

    And you have enough space? Anyway, it seems related to how Wasabi handles writes… I'm not aware of any similar issue on other kinds of remotes (or even local ones).



  • Looking at the output below, I assume so. Would you concur, or do you think differently?

    Filesystem      Size  Used Avail Use% Mounted on
    devtmpfs        3.8G     0  3.8G   0% /dev
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    tmpfs           3.9G   18M  3.8G   1% /run
    tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
    /dev/xvda1       20G  9.6G   11G  48% /
    s3fs            256T     0  256T   0% /mnt/backupwasabi
    tmpfs           780M     0  780M   0% /run/user/0
    


  • Also, do you know of any other cloud service that does work? S3, Backblaze, etc.?

    We were having luck with a FUSE-mounted SSH folder on an offsite machine in another physical location, but for other reasons we can no longer use that server with XCP-ng (its disk I/O was already too high, and adding this job was maxing it out).

    So we'd like to use a cloud provider instead of having to invest in hardware and maintenance, but worst case, I guess we might have to build an offsite storage server for it.



  • I also mounted Wasabi via s3fs-fuse in fstab, but directly on XCP-ng.

    Most of my VMs were snapshotted, exported (with zstd), and correctly uploaded. However, 2 weren't, and I have this in XCP-ng Center:

    "File System on Control Domain Full","Disk usage for the Control Domain on server 'xcp-<hypervisor>' has reached 100.0%. XCP-ng's performance will be critically affected if this disk becomes full. Log files or other non-essential (user created) files should be removed.","xcp-<hypervisor>","Jul 26, 2020 2:28 AM",""
    

    The local filesystem is not even close to full, same as above.
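
    For anyone else chasing that alert, the quick sanity check from the dom0 console is simply:

    # on the XCP-ng host itself (dom0)
    df -h /
    du -sh /var/log /tmp 2>/dev/null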


  • XCP-ng Team

    @klou this month's release will have native S3 integration in the remotes section.



  • I just saw that! (and am playing with it).

    Quick question: the configurator seems to require a Directory underneath the bucket. What if we want to just write to the root of the bucket?

    (I'm not an S3 guru, so please ignore any ... ignorance).

    But it seems I can put a dummy value in the directory and eliminate it later via "header" edits, though not at the "detail" level.


  • XCP-ng Team

    @nraynaud is the dev behind it. He'll answer 🙂


  • XCP-ng Team

    @klou sorry you'll have to put a directory for now. I have not taken the time to make it optional yet.



  • Update:

    Eliminating the Directory via header edits actually didn't work. The backup/export seems to have completed, but the files don't exist. It may be a Wasabi thing; not the end of the world.
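
    (For anyone debugging the same thing: an easy way to double-check what actually landed in the bucket is a plain AWS CLI listing against Wasabi's endpoint; the bucket name here is a placeholder:

    aws s3 ls s3://my-backup-bucket/ --recursive --endpoint-url https://s3.wasabisys.com
    )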


