Testing ZFS with XCP-ng
-
I am working on the issue right now, trying to nail down exactly what the problem is.
There are 2 cases:
- xe vdi-copy from another SR to a ZFS SR doesn't work
- xe vdi-copy on the same SR doesn't work.
Is that a complete assessment of the issues you found?
A cursory test seems to show that
ssh ${XCP_HOST_UNDER_TEST} sed -i.bak 's/# unbuffered = true/unbuffered = false/' /etc/sparse_dd.conf
solves the issue of intra-ZFS copies, but I am still confirming that I have not done anything else on my test box.
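For anyone reproducing this, a quick way to check that the file was actually changed (just a sanity check I'm suggesting, not part of the test itself):
ssh ${XCP_HOST_UNDER_TEST} grep unbuffered /etc/sparse_dd.conf
# should print "unbuffered = false" after the sed above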
Thanks,
Nicolas.
-
@nraynaud said in Testing ZFS with XCP-ng:
xe vdi-copy from another SR to a ZFS SR doesn't work
xe vdi-copy on the same SR doesn't work.
Yes.
In addition to that, copying from a ZFS SR to another SR is not working either.
-
OK, I really think changing /etc/sparse_dd.conf is the right path.
#!/usr/bin/env bash
# HOW TO create the passthrough:
# xe sr-create name-label="sda passthrough" name-description="Block devices" type=udev content-type=disk device-config:location=/dev/sda host-uuid=77b3f6ad-020b-4e48-b090-74b2a26c4f69

set -ex

MASTER_HOST=root@192.168.100.1
PASSTHROUGH_VDI=a74d267e-bb14-4732-bd80-b9c445199e8a
SNAPSHOT_UUID=19d3758e-eb21-f237-b8f7-6e2f638cc8e0
VM_HOST_UNDER_TEST_UUID=13ec74c2-9b57-a327-962f-1ebd9702eec4
XCP_HOST_UNDER_TEST_UUID=05c61e28-11cf-4131-b645-a0be7637c044
XCP_HOST_UNDER_TEST_IP=192.168.100.151
XCP_HOST_UNDER_TEST=root@${XCP_HOST_UNDER_TEST_IP}
INCEPTION_VM_UUID=a7e37541-fb9a-4392-6b54-60cf7ce3d08a
INCEPTION_VM_IP=192.168.100.32
INCEPTION_VM=root@${INCEPTION_VM_IP}

ssh ${MASTER_HOST} xe snapshot-revert snapshot-uuid=${SNAPSHOT_UUID}
NEW_VBD=`ssh ${MASTER_HOST} xe vbd-create device=1 type=Disk mode=RW vm-uuid=${VM_HOST_UNDER_TEST_UUID} vdi-uuid=${PASSTHROUGH_VDI}`
ssh ${MASTER_HOST} xe vm-start vm=${VM_HOST_UNDER_TEST_UUID}
until ping -c1 ${XCP_HOST_UNDER_TEST_IP} &>/dev/null; do :; done
sleep 20

# try EXT3
ssh ${XCP_HOST_UNDER_TEST} 'mkfs.ext3 /dev/sdb2 && echo /dev/sdb2 /mnt/ext3 ext3 >>/etc/fstab && mkdir -p /mnt/ext3 && mount /dev/sdb2 && df'
SR_EXT3_UUID=`ssh ${XCP_HOST_UNDER_TEST} "xe sr-create host-uuid=${XCP_HOST_UNDER_TEST_UUID} name-label=test-ext3-sr type=file other-config:o_direct=false device-config:location=/mnt/ext3/test-ext3-sr"`
TEST_EXT3_VDI=`ssh ${XCP_HOST_UNDER_TEST} xe vdi-create sr-uuid=${SR_EXT3_UUID} name-label=test-ext3-vdi virtual-size=214748364800`
TEST_VBD=`ssh ${XCP_HOST_UNDER_TEST} xe vbd-create device=1 type=Disk mode=RW vm-uuid=${INCEPTION_VM_UUID} vdi-uuid=${TEST_EXT3_VDI}`
ssh ${XCP_HOST_UNDER_TEST} reboot || true
sleep 20
until ping -c1 ${XCP_HOST_UNDER_TEST_IP} &>/dev/null; do :; done
sleep 20
ssh ${XCP_HOST_UNDER_TEST} xe vm-start vm=${INCEPTION_VM_UUID} on=${XCP_HOST_UNDER_TEST_UUID}
sleep 2
until ping -c1 ${INCEPTION_VM_IP} &>/dev/null; do :; done
sleep 20
ssh ${INCEPTION_VM} echo FROM BENCH
ssh ${INCEPTION_VM} 'apk add gcc zlib-dev libaio libaio-dev make linux-headers git binutils musl-dev; git clone https://github.com/axboe/fio fio; cd fio; ./configure && make && make install'
ssh ${INCEPTION_VM} 'mkfs.ext3 /dev/xvdb && mount /dev/xvdb /mnt;df'
ssh ${INCEPTION_VM} 'cd /mnt;sync;/usr/local/bin/fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=write --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=30 --group_reporting' > ext3_write_result
ssh ${INCEPTION_VM} 'cd /mnt;sync;/usr/local/bin/fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=write --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=30 --group_reporting' >> ext3_write_result
ssh ${INCEPTION_VM} 'cd /mnt;sync;/usr/local/bin/fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=30 --group_reporting' > ext3_read_result
ssh ${INCEPTION_VM} 'cd /mnt;sync;/usr/local/bin/fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512M --numjobs=2 --runtime=30 --group_reporting' >> ext3_read_result
ssh ${XCP_HOST_UNDER_TEST} xe vm-shutdown uuid=${INCEPTION_VM_UUID}

# try ZFS
# install binaries that don't use O_DIRECT
rsync -r zfs ${XCP_HOST_UNDER_TEST}:
scp /Users/nraynaud/dev/xenserver-build-env/blktap-3.5.0-1.12test.x86_64.rpm ${XCP_HOST_UNDER_TEST}:
ssh ${XCP_HOST_UNDER_TEST} yum remove -y blktap
ssh ${XCP_HOST_UNDER_TEST} yum install -y blktap-3.5.0-1.12test.x86_64.rpm
ssh ${XCP_HOST_UNDER_TEST} yum install -y zfs/*.rpm
ssh ${XCP_HOST_UNDER_TEST} depmod -a
ssh ${XCP_HOST_UNDER_TEST} modprobe zfs
ssh ${XCP_HOST_UNDER_TEST} zpool create -f -m /mnt/zfs tank /dev/sdb1
ssh ${XCP_HOST_UNDER_TEST} zfs set sync=disabled tank
ssh ${XCP_HOST_UNDER_TEST} zfs set compression=lz4 tank
ssh ${XCP_HOST_UNDER_TEST} zfs list
SR_ZFS_UUID=`ssh ${XCP_HOST_UNDER_TEST} "xe sr-create host-uuid=${XCP_HOST_UNDER_TEST_UUID} name-label=test-zfs-sr type=file other-config:o_direct=false device-config:location=/mnt/zfs/test-zfs-sr"`
TEST_ZFS_VDI=`ssh ${XCP_HOST_UNDER_TEST} xe vdi-create sr-uuid=${SR_ZFS_UUID} name-label=test-zfs-vdi virtual-size=214748364800`
# this line avoids O_DIRECT in reads
ssh ${XCP_HOST_UNDER_TEST} "sed -i.bak 's/# unbuffered = true/unbuffered = false/' /etc/sparse_dd.conf"
# try various clone situations
ssh ${XCP_HOST_UNDER_TEST} xe vdi-copy uuid=${TEST_ZFS_VDI} sr-uuid=${SR_ZFS_UUID}
ssh ${XCP_HOST_UNDER_TEST} xe vdi-copy uuid=${TEST_ZFS_VDI} sr-uuid=${SR_EXT3_UUID}
ssh ${XCP_HOST_UNDER_TEST} xe vdi-copy uuid=${TEST_EXT3_VDI} sr-uuid=${SR_ZFS_UUID}
This script completes to the end without error.
-
If other people can reproduce my results, I propose changing the parameter directly in the XCP-ng distribution RPM.
-
Your package is experimental, so feel free to add the modification to it.
-
@nraynaud said in Testing ZFS with XCP-ng:
If other people can reproduce my results
with the change in
/etc/sparse_dd.conf
I can copy my VMs:
- EXT3 -> ZFS
- ZFS -> ZFS
- ZFS -> EXT3
Yeah!
Thanks for your work!
By the way, my XCP-ng replication host at work is working just fine with a ZFS SR. All stable, like ZFS should be.
-
Yay!! Thanks for testing
-
I have a clean install of 7.5 and I'm following the guide on the wiki but can't install zfs-test or enable the zfs module:
[root@xcp-ng-endlqfgb ~]# yum install --enablerepo="xcp-ng-extras" zfs-test
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package zfs-test.x86_64 0:0.7.9-1.el7.centos will be installed
--> Processing Dependency: lsscsi for package: zfs-test-0.7.9-1.el7.centos.x86_64
--> Processing Dependency: ksh for package: zfs-test-0.7.9-1.el7.centos.x86_64
--> Processing Dependency: fio for package: zfs-test-0.7.9-1.el7.centos.x86_64
--> Processing Dependency: rng-tools for package: zfs-test-0.7.9-1.el7.centos.x86_64
--> Finished Dependency Resolution
Error: Package: zfs-test-0.7.9-1.el7.centos.x86_64 (xcp-ng-extras)
           Requires: fio
Error: Package: zfs-test-0.7.9-1.el7.centos.x86_64 (xcp-ng-extras)
           Requires: lsscsi
Error: Package: zfs-test-0.7.9-1.el7.centos.x86_64 (xcp-ng-extras)
           Requires: rng-tools
Error: Package: zfs-test-0.7.9-1.el7.centos.x86_64 (xcp-ng-extras)
           Requires: ksh
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@xcp-ng-endlqfgb ~]# zpool create tank /dev/sdb
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
[root@xcp-ng-endlqfgb ~]# /sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.
[root@xcp-ng-endlqfgb ~]#
-
@stormi this is for you
-
I knew we shouldn't have released on a Friday just before my holidays
XCP-ng community, note that I do that for you!
To answer the question, the zfs-test package is not strictly needed, so let's not install it (it pulls in too many dependencies that we would have to add; we can consider making it installable later). We also forgot to add some steps to the procedure; it has been updated now.
If you run:
depmod -a
Then you should be able to run
modprobe zfs
This should bring you at least a step further.
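To double-check that the module actually loaded, a generic check (not an official step of the procedure) would be:
lsmod | grep zfs        # should list zfs and its companion modules if the load worked
modinfo zfs | head -n 3 # shows which module file and version were picked up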
-
@eexodus If it's not working, we can chat to troubleshoot your issue
-
@borzel It would be nice also to improve https://github.com/xcp-ng/xcp/wiki/ZFS-on-XCP-ng-7.5-and-later because I wrote it and I never used zfs!
-
@stormi @borzel Thank you. I was able to create the zpool with those two extra commands. I then created a dataset and created an SR with type=file and device-config:location. I realize ZFS is still a new feature, but is that the recommended way? I'm completely new to Xen, so I am not familiar with creating SRs.
[root@xcp-ng-miqfcsgc] depmod -a
[root@xcp-ng-miqfcsgc] modprobe zfs
[root@xcp-ng-miqfcsgc] zpool create -f tank /dev/sda
[root@xcp-ng-miqfcsgc] zfs create tank/sr
[root@xcp-ng-miqfcsgc] xe sr-create host-uuid=MY_UUID name-label=zfs-sr type=file other-config:o_direct=false device-config:location=/tank/sr/zfs-sr
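One way to confirm the SR was registered (a generic check, using the name-label from the command above):
xe sr-list name-label=zfs-sr params=uuid,name-label,type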
-
@eexodus said in Testing ZFS with XCP-ng:
zpool create -f tank /dev/sda
If you have 4K-sector disks (aka Advanced Format), use
ashift=12
zpool create -o ashift=12 -f tank /dev/sda
For a first round of testing, I would use:
zfs set sync=disabled tank
zfs set compress=lz4 tank
zfs set atime=off tank
The pool/disks can be monitored live with
zpool iostat -v 1
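To double-check the pool afterwards, something like this should work (generic ZFS commands, not specific to XCP-ng):
zdb -C tank | grep ashift   # should report ashift: 12 on 4K disks
zfs get sync,compression,atime tank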
-
@borzel Not sure if this thread is the right place to ask, but what are the minimum and recommended requirements to test ZFS?
-
Well, it depends on the level of performance you want to achieve. A lot of RAM is the key.
-
@olivierlambert said in Testing ZFS with XCP-ng:
A lot of RAM is the key.
And if you want more performance, add more RAM
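If dom0 memory is tight, one knob worth knowing about is the zfs_arc_max module parameter, which caps the ARC; a sketch, with the 2 GiB value purely as an example:
# cap the ZFS ARC at 2 GiB so dom0 keeps some memory for itself (example value)
echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
# takes effect on the next module load / reboot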
-
@stormi said in Testing ZFS with XCP-ng:
@borzel It would be nice also to improve https://github.com/xcp-ng/xcp/wiki/ZFS-on-XCP-ng-7.5-and-later because I wrote it and I never used zfs!
I have no write permissions
-
A good resource with a lot of information on how to really use ZFS, build storage systems, do tuning, etc.: https://www.zfsbuild.com
-
ZFS is a local SR and IMHO does not add many advantages vs a classic EXT SR.
The real value of ZFS is using its built-in replication to a remote ZFS system.
Proxmox has built a very interesting solution around it: async replication of a VM onto another node. I know XOA already offers a similar solution, but it is snapshot based, and everyone knows the performance and size problems with this approach.
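For reference, the built-in replication mentioned above is normally done with zfs send/receive; a minimal sketch, with placeholder host and dataset names:
# initial full replication of the SR dataset to a remote ZFS box
zfs snapshot tank/sr@rep1
zfs send tank/sr@rep1 | ssh backuphost zfs receive -F backup/sr
# later runs only need to send the delta since the previous snapshot
zfs snapshot tank/sr@rep2
zfs send -i tank/sr@rep1 tank/sr@rep2 | ssh backuphost zfs receive backup/sr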