NexentaStor CE – an introduction

I spent some time at Christmas upgrading my home lab in preparation for the new VCAP exams, which are due out in the first quarter of 2012. In particular I needed to improve my shared storage and hoped that I could reuse old h/w instead of buying something new. I’ve been using an Iomega IX2-200 for the last year but its performance is pretty pitiful, so I usually reverted to local storage, which rather defeated the purpose.

I started off having a quick look around at my storage options for home labs.

Why pick Nexenta?

I’d used OpenFiler and FreeNAS before (both are very capable) but with so much choice I didn’t have time to evaluate all the other options (Greg Porter has a few comments comparing OpenFiler vs Nexenta). DataCore’s and StarWind’s solutions rely on Windows rather than being bare metal (which was my preference), and I’ve been hearing positive news about Nexenta more and more recently.

On the technical front the SSD caching and VAAI support make Nexenta stand out from the crowd.

What does Nexenta provide?

  • An easy-to-set-up* GUI-based appliance which runs on standard x86 hardware (either bare metal or as a VM)
  • It uses an advanced filesystem, ZFS, under the hood. ZFS features data integrity (RAID equivalents), compression, deduplication, snapshots and more. Check out this post from Theron Conroy on the features of Nexenta.
  • Nexenta (via ZFS) offers various cache options which increase performance – combined with SSDs it’s potentially a killer feature! There are a growing number of storage solutions offering similar functionality, from Nimble Storage to RAID controllers (LSI in this case), but with Nexenta it’s available on commodity x86 hardware (see the CLI sketch after this list).
  • VAAI support. This got my interest when I first read it – a free software appliance that supports VAAI? That’s got to be worth checking out! Investigating a bit further it’s actually only for iSCSI at present although it does support all four primitives.
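
For anyone curious what’s happening under the hood, here’s a minimal CLI sketch of the kind of ZFS commands involved (the pool and device names are made up for illustration – on NexentaStor you’d normally drive all of this through the web GUI or the NMC shell rather than typing it by hand):

    # Create a mirrored pool (the ZFS equivalent of RAID1) from two whole disks
    zpool create tank mirror c0t0d0 c0t1d0

    # Enable compression and deduplication on the pool's root dataset
    zfs set compression=on tank
    zfs set dedup=on tank

    # Take a point-in-time snapshot
    zfs snapshot tank@before-upgrade

    # Add one SSD as a read cache (L2ARC) and another as a log device (ZIL)
    zpool add tank cache c0t2d0
    zpool add tank log c0t3d0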

What are the different editions of Nexenta?

  • NexentaStor Enterprise (nexenta.com). As the name implies this is the ‘big boys’ version, which you’re not going to run in your home lab unless money is no object, both from a hardware and a licensing perspective. There’s a 45-day trial which is free to download. NOTE: This is not the same as the Community Edition!
  • NexentaStor Community Edition (known as CE, nexentastor.org). This is what I’m running as it’s free to use for test/dev and has an 18TB capacity restriction. Not a problem for my 2TB home lab!
  • Nexenta Core Platform (CLI only, nexenta.org). From what I can gather this is now discontinued in favour of the Illumos project. The nexenta.org site seems to include more documentation than the NexentaStor site, and it’s the more useful resource for base OS and general ZFS queries.

As of Feb 2012 the latest version is 3.1.2, but v4, based on the Illumos build, is due out soon. My testing was completed using v3.1.1.

While learning about the different versions I quickly noticed that the websites and content seem very disorganised, certainly for the Community Edition. It took a while to realise that different domains represented different products, and even then the content is occasionally on the wrong site.

Nexenta was originally based on OpenSolaris but since Oracle’s acquisition of Sun (and their lack of ongoing development for OpenSolaris) Nexenta have been looking for alternatives – hence the Illumos fork. Another of Nexenta’s key selling points is its use of ZFS, which has been around for quite a few years – if you want some background on what ZFS can offer, read this article on ZFS benchmarking at the well-regarded Anandtech site.

It’s also worth mentioning that Tom Howarth has just joined Nexenta – maybe storage and VDI are a good combination? 😉

Installing NexentaStor on bare metal

Having decided to run Nexenta I fell at the first hurdle – finding the download (and I’m not the first). I went to the NexentaStor.org homepage and clicked the green Download button, which took me to a page detailing my options – ISO, VM images etc. I wanted the VM images but scrolling down revealed only the .ISOs. I then tried the ‘Downloads’ tab with no luck, and despite posting in the forums I’ve still been unable to find them! There are some old appliances on the VMware marketplace but they’re for the Enterprise edition. I decided that downloading the .ISO was better than nothing, although the download speed was very slow (10kbps) so I had to leave it running overnight (that could have been Internet or ISP related, although it’s been consistently slow on the three occasions I’ve tried it).

Did I mention that the website seems at best disorganised? 🙁

With my download complete and a newly minted CD in hand I started my ‘bare metal’ install on my whitebox server. I assumed the install would be a ‘next’, ‘next’ process but unfortunately there were a few gotchas;

  • Running the ‘memory burn-in test’ from the GRUB bootloader failed with an error. This happened on both physical servers and a virtual server, so I’m guessing it’s just broken.
  • it wouldn’t install to a USB key or USB HDD; the installer froze whenever the USB device was inserted. This seems to be a long-standing issue (it was an acknowledged bug back in 2008). Some people have managed it without problems, so maybe my USB devices are incompatible in some way (though both run ESXi just fine).
  • the installer froze for ages at the ‘Installing base appliance…’ stage, which is something others have experienced too (see screenshot). I actually aborted my first install assuming it had failed; by luck I wandered away during the second attempt and returned much later to a finished install!
  • during the install a serial number is generated which you need to register via nexenta.com (http://www.nexenta.com/corp/developer-edition-registration). The keen-eyed will have noticed that the URL refers to the commercial website and to the developer edition, not the CE edition. Did I mention that the website’s confusing?

Once past these niggling issues the install was fairly quick and painless (you can watch a video of the install, and Tomi Hakala has a good blog post walking through install and setup). I also made some user errors which the support team were very quick to help sort out, so kudos to them.

NOTE: It’s worth checking your hardware against the Nexenta HCL. There are some specific requirements if you’re using a VM (VT-d support).

Installing NexentaStor in a virtual machine

Next I tried installing NexentaStor as a VM because it offered a few advantages;

  • more flexibility in how you carve up your hard disks. Nexenta requires you to dedicate a whole disk to volumes (or cache) but once you have a virtualisation layer you can easily work around that limitation. In particular this let me split my SSD into multiple virtual disks and assign them accordingly (see the sketch after this list).
  • allows me to run other workloads on the same ESXi host
  • ESXi will boot from USB, leaving the internal HDDs purely for VMs/NexentaStor
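
As an example of that flexibility, this is roughly how I split the SSD up (a sketch from the ESXi shell – the datastore and file names are illustrative, and you can achieve the same via the vSphere client):

    # Create two thin-provisioned virtual disks on the SSD-backed datastore,
    # one destined for Nexenta's read cache (L2ARC) and one for its log device (ZIL)
    vmkfstools -c 40G -d thin /vmfs/volumes/ssd-datastore/nexenta-cache.vmdk
    vmkfstools -c 8G -d thin /vmfs/volumes/ssd-datastore/nexenta-log.vmdk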

With a fresh install of vSphere 5 up and running I then ran through the install process (you can follow guides here and here) using the same ISO image I used for the bare metal install. The process is essentially the same but there are a few key changes to be made afterwards;

  • A single vCPU seems to work better than multiple vCPUs (see the comments on this article, originally about installing VMtools in a Nexenta VM). Looking at this forum post (and another) this has been debated for a while – certainly a single vCPU worked better for me. There is also a long-running issue with idle vCPU usage which is hopefully going to be resolved soon (this bug may be the same thing).

    [Screenshot: editing the /usr/bin/vmware-config-tools.pl script for ESXi 5]
  • VMtools can be installed but it needs tweaking to get it working. Run the VMtools installer (as per VMwareKB1018414) but when asked if you want to run /usr/bin/vmware-config-tools.pl change the default [yes] to no. Using your favourite editor (vi, nano etc.) search for ‘SUNW’ and comment out the block which checks for the SUNWuiu8 package. Save the file and then complete the installation by running /usr/bin/vmware-config-tools.pl (the sequence is sketched after this list; full details in this article).
  • There’s no support for the VMXNET3 vNIC (but plenty of discussion).
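
For reference, the VMtools tweak boils down to the following sequence (a rough sketch – the installer paths are as per the KB article, and the exact block you comment out may differ between tools versions):

    # 1. Run the VMtools installer (per VMwareKB1018414) but answer 'no'
    #    when asked to run the configuration script straight away
    ./vmware-install.pl

    # 2. Comment out the block that aborts when the SUNWuiu8 package is
    #    missing - search for 'SUNW' and prefix each line of the check with #
    vi /usr/bin/vmware-config-tools.pl

    # 3. Complete the installation
    /usr/bin/vmware-config-tools.pl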

Even with the above tweaks I found performance was severely lacking. I posted on the Nexenta forums and also found this thread on the VMware communities about I/O issues in OpenSolaris, but found no resolution. After quite a bit of digging I found the napp-it guide to building a virtual ZFS NAS server, which states:

‘A ZFS storage OS needs real hardware access to disk controller and attached disks for performance and failure handling reasons. We can virtualize the OS itself but we must allow exclusive pass-through access to a disk-controller and disks via vt-d.’

In vSphere parlance this means you need VMDirectPath to make the most of ZFS. Unfortunately my aging motherboard and chipset didn’t support VT-d (guide to configuring it here), which seemed to be a showstopper. Finally (and more out of curiosity) I tried a workaround whereby you present the local disks via RDMs, and to my surprise (given the mantra that RDMs offer no performance benefit) performance improved significantly even though the disk controller is still virtualised.
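
For anyone wanting to try the same workaround, the gist is below (a sketch only – the disk identifier is illustrative and you’ll need to find your own; mapping local disks like this isn’t offered by the GUI, hence the ESXi shell):

    # Find the identifiers of the local disks
    ls /vmfs/devices/disks/

    # Create a physical-compatibility RDM pointer file for a local disk
    # (use -r instead of -z for virtual compatibility mode)
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK \
        /vmfs/volumes/datastore1/nexenta/disk1-rdm.vmdk

    # Then add disk1-rdm.vmdk to the Nexenta VM as an existing hard disk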

In the next article in the series I’ll cover my benchmarking results for both bare metal and virtual configurations along with some configuration issues. I’ll also cover my general thoughts about NexentaStor and whether it’s going to provide the storage for my home lab in the future…

Further reading

Nexenta Planet – News and views on the Nexenta world (official Nexenta blog)

Nexenta’s VMworld recap on their official blog

Thoughts from @VMstorage on Nexenta (plus Pivot3 and Diskeeper)

Twitterers you might want to follow if you’re interested in Nexenta;

4 thoughts on “NexentaStor CE – an introduction”

  1. Datacore and Starwind’s solutions rely on Windows VMs rather than being bare metal (which was my preference)…

    ***

    What you say is not true. Both StarWind and DataCore *can* be installed inside a virtual machine, but the preferred and recommended method is to run them on bare metal to provide better IOPS and MB/s. It’s very easy to check by visiting both companies’ sites. Please fix your article, as you’re providing false information to readers, and people will get confused about whether they should trust the other things you highlight. Thanks!

    Nismo

    1. Thanks for the feedback Nismo.

      You’re right that both can be installed in a VM or on a physical machine, but I’ve just been reading the whitepapers and from what I can see you still need an underlying OS (such as Windows, RedHat Linux etc.) to install either DataCore SANsymphony-V or StarWind’s iSCSI SAN. When I say ‘bare metal’ I mean the storage software is both OS and application combined, in which case neither DataCore’s nor StarWind’s offering is bare metal. That of course has its benefits, as the server isn’t necessarily dedicated to storage, which is typically the case with a bare metal storage appliance. The auto-tiering offered by SANsymphony-V looks interesting so hopefully I’ll get some time to take a closer look and post an article sometime in the future.

      The end user may not care of course – if the performance and features are there it’s a matter of semantics (and possibly licensing in the case of Windows!). Nexenta is after all a modified version of OpenSolaris with an application installed on top – just rolled into a single deployment.

      For those wishing to check for themselves, here are the whitepapers for SANsymphony-V and the whitepapers for StarWind’s iSCSI SAN.

  2. You forgot to mention raw disk RDMs as a way to circumvent the hardware compatibility issue. So long as the controller is supported in vSphere, your SCSI pass-through should provide better disk management and access support. It is important to note that ZFS likes raw disk access for many reasons, but disk management is huge (i.e. what do you do with a failed non-RAID VMFS volume attached to a VSA when using vmdk files, etc.)

    While a single vCPU VSA is OK for mirrors, don’t expect to get the most out of RAIDz configurations that way. ZFS is highly threaded – think inline compression, per-vdev threading, inline dedupe, etc. – and will use what CPU resources you give it within reason. Don’t sweat the high idle CPU, especially if doing device translation (i.e. a virtual SCSI adapter), as most of your CPU is being used by device emulation – not the NexentaStor OS. NexentaStor is very trace-heavy and monitors what’s happening to the disk and volume fairly aggressively. In a VSA application, consider this the cost of data integrity and move on…

    Good write up. Cheers!

    @solori

    1. Thanks for the info and also for your blog posts – I’ve learned much of what I know about Nexenta from them!

      I have mentioned RDMs in the article (just before the Further Reading section) but more as a performance enhancement than a compatibility workaround (though both are valid). As you say, the case for RDMs goes further than performance/compatibility – if you choose to use VMFS there’s an extra layer of complexity and you can’t so easily move/recover your RAIDz datasets.

      I’ve got a follow-up post in the wings which goes into some of my benchmarking, though it won’t be anything you haven’t seen before and I’m sure you’ll have some useful feedback. The high idle CPU is frustrating but until I get newer h/w which supports VMDirectPath I’ve no equipment to test with. For now I’m running bare metal so it’s less of an issue.
