vmserver upgrade

With the change to ZFS for my storageserver, I decided to also move my vmfileserver to ZFS for the same benefits.

But there is more:

I decided to swap the mainboard and CPU for a Celeron I had lying around from an old test server, go with RAID10, add an L2ARC with 2x 120GB Samsung SSDs, and change from NFS to iSCSI, as it natively supports MPIO. So I added an additional 1TB drive (yeah, not optimal with 3 striped mirrors, but the mainboard only has 8 ports) and a total of 24GB of DDR3 RAM (L2ARC consumes roughly 1GB of RAM per 40GB of L2ARC). I put a standard switch between the vmserver and the vmfileserver and connected all NICs to it (6 in total).
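For reference, a setup along those lines could look like the sketch below. This assumes ZFS on Linux with targetcli as the iSCSI target and open-iscsi on the initiator; the pool name "tank", the device paths, the IQN and the IP are made up for illustration, so adjust everything for your own stack:

    # pool as 3 striped mirrors (RAID10-like) across the 6 data disks
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf

    # the two 120GB SSDs as L2ARC (cache) devices; 240GB of L2ARC at
    # ~1GB RAM per 40GB eats roughly 6GB of the 24GB RAM for headers
    zpool add tank cache /dev/sdg /dev/sdh

    # a zvol as backing store for one VM disk, exported over iSCSI
    zfs create -V 500G tank/vmdisk1
    targetcli /backstores/block create name=vmdisk1 dev=/dev/zvol/tank/vmdisk1
    targetcli /iscsi create iqn.2015-01.local.vmfileserver:vmdisk1
    targetcli /iscsi/iqn.2015-01.local.vmfileserver:vmdisk1/tpg1/luns \
        create /backstores/block/vmdisk1

    # on the vmserver: one iSCSI session per NIC, dm-multipath
    # then aggregates the sessions into a single multipath device
    iscsiadm -m iface -I iface-eth1 -o new
    iscsiadm -m iface -I iface-eth1 -o update -n iface.net_ifacename -v eth1
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1
    iscsiadm -m node -l

With one session per NIC, multipath presents the LUN as a single device and spreads I/O across all paths, which is exactly the part NFS can't do natively.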

The only downside is the defining characteristic of a COW filesystem: the copy-on-write itself. It fragments your disks, because every write is placed somewhere else instead of overwriting the previous block. VMs tend to write very small amounts very often, so after a while, reading a file inside the VM means the underlying ZFS filesystem has many extents to traverse before it has all the contents of that file, which can take a while on rotating disks. I've seen this behaviour with raid5/6; with raid1/10 it should be less noticeable, but the future will show. Sadly there is no automatic and easy way to defragment your ZFS pool except copying everything off and recreating it. The alternative is BTRFS, and I will change to it as soon as it has matured enough. It still has the same COW problem, but it supports (online) defragmentation and even an autodefrag mount option (which makes things worse for VMs at the time of writing, but this might change in the future).
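If you want to keep an eye on this, newer ZFS versions expose the pool's free-space fragmentation as a property, and on the BTRFS side defragmentation is a single command. Pool name, paths and device below are placeholders again:

    # ZFS: fragmentation of the pool's free space (not per-file)
    zpool get fragmentation tank
    zpool list -o name,size,alloc,frag,cap

    # BTRFS: online defragmentation, recursively over a mount point...
    btrfs filesystem defragment -r /srv/vms
    # ...or the autodefrag mount option (currently counterproductive for VM images)
    mount -o autodefrag /dev/sdx1 /srv/vms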
