As hinted in my last post, I’ve tried using ZFS on a Fusion Drive (FUD).
You can use the disk that Core Storage creates and run ZFS on it. You can’t set the volume format to ZFS via diskutil, but you can hand the device that Core Storage creates directly to zpool:
TIN>diskutil unmount blub
Volume blub on disk8 unmounted
TIN>diskutil eraseVolume 'ZFS Pool' '%noformat%' /dev/disk8
The specified file system is not supported by Core Storage for use on a Logical Volume
TIN>zpool create foozfs /dev/disk8
TIN>zfs list
NAME     USED   AVAIL  REFER  MOUNTPOINT
foozfs   776Ki  457Gi  464Ki  /Volumes/foozfs
10 Minutes?
So I created data on it again. This time I wrote random data, because ZFS automatically turns runs of zeros in files into holes (sparse files). After the first 120 GByte had been written to the SSD, the data went to the HDD. When I stopped the writing process, Core Storage immediately started copying data between HDD and SSD for about 600 seconds, i.e. 10 minutes.
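The write workload can be sketched like this. A minimal sketch only: the directory and file names are my assumptions, and the sizes are scaled far down for illustration (the original run wrote about 120 GByte before the HDD took over):

```shell
# Sketch of the write workload (names and sizes are assumptions).
# /dev/urandom is used because ZFS stores runs of zeros as holes
# (sparse files) and would otherwise write almost nothing to disk.
for d in 20 21 22; do
  mkdir -p "dir$d"
  for f in 1 2 3; do
    # 1 MByte of incompressible random data per file
    dd if=/dev/urandom of="dir$d/file$f" bs=1048576 count=1 2>/dev/null
  done
done
```

Random data also keeps Core Storage honest: with all-zero files, the filesystem could avoid allocating real blocks and the tiering behaviour would not be measurable.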
Then I accessed the first megabyte of the files in directories 20 to 22 for a few rounds - like in my last post - and stopped that process as well. Core Storage kept on copying data back and forth for about 10 minutes.
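The read pattern itself can be sketched as follows. Again a sketch under assumed names; it includes a stand-in file so the snippet is self-contained:

```shell
# Sketch of the read pattern (directory/file names are assumptions).
# Stand-in data so the snippet runs on its own:
mkdir -p dir20 dir21 dir22
dd if=/dev/urandom of=dir20/sample bs=1048576 count=2 2>/dev/null

for round in 1 2 3; do
  for d in 20 21 22; do
    for f in "dir$d"/*; do
      [ -f "$f" ] || continue
      # Read only the first 1 MByte; the rest of each file stays cold,
      # so Core Storage sees exactly these blocks as recently used.
      dd if="$f" of=/dev/null bs=1048576 count=1 2>/dev/null
    done
  done
done
```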
I wonder whether those 10 minutes are hardcoded somewhere or whether it’s just a coincidence.
And lo and behold - yes, after those 10 minutes I started reading the first megabyte again, and iostat looked like this:
          disk1                 disk7          cpu     load average
    KB/t  tps   MB/s      KB/t  tps  MB/s    us sy id   1m   5m   15m
  124.10 1738 210.58      0.00    0  0.00    15  6 78  1.15 0.98 0.99
  124.46 1731 210.41      0.00    0  0.00    16  6 78  1.15 0.98 0.99
  122.01 2006 239.05      0.00    0  0.00    17  8 75  1.21 1.00 0.99
  104.96 2119 217.15      0.00    0  0.00    16  7 77  1.21 1.00 0.99
  101.27 2105 208.22      0.00    0  0.00    17  8 75  1.21 1.00 0.99
So moving recently accessed data to the SSD is filesystem-agnostic.
Addition 1: Even though ZFS automatically recognizes the zpool on the FUD device after a reboot, it’s not feasible to use ZFS on a FUD: a single ‘zpool scrub’ reads every block in the pool and might thus destroy the access pattern that decides which data should reside on the SSD and which goes to the HDD.