NexentaStor Home Brew Contest - Submissions

Added by Derek WasNexenta over 2 years ago

In celebration of the Third Annual OpenStorage Summit, we have decided to hold the first-ever NexentaStor Home Brew Contest! From the uncommon to the extraordinary to the utilitarian—how do you use NexentaStor? Tell us your story and you could win some cool stuff (to be decided shortly).

There is no defined list of criteria (except one, see below) for winning. (It is, after all, the first time we have done this contest and we have no idea what people will submit!) Tell us about your experience using NexentaStor software: Does it encompass creative, effective, or wacky hardware configurations? Does it solve unique problems? Does it fill a specific business use case? How is it deployed? Tell us your story.

The only mandatory requirement is that you must be using NexentaStor and you should have a compelling story.

Enter early for more chances to win great stuff. We will have fun random door prizes up until the October 25th deadline! Final winners will be selected via secret ballot by the NexentaStor community.

Submit your story (along with any photos, videos, or other visual aids) to this forum thread.

If you are not already a member of this community, click Register in the upper right to sign up. You can then post a reply to this forum post.


Replies

RE: NexentaStor Home Brew Contest - Added by Brenn Oosterbaan about 1 year ago

Hi,

Since nobody has submitted anything yet, I'll get it started :)

Small Introduction:

My name is Brenn Oosterbaan and I work in IT as an engineer on a wide variety of systems and applications, with a specialization in the Storage and Backup area.

Why Nexenta:

About 6 months ago I started looking for a NAS device for home use (shame on me! A storage guy who didn't even have a simple NAS device :) ). These were my requirements:

  • Scale out - starting with 3-5 disks but capable of expanding.
  • Reasonable compute power - be able to par and unrar files as fast as a home computer.
  • 1Gb Ethernet - streaming HD videos to multiple clients.
  • Form Factor - needed to fit in my 'meter cupboard'.

Most of the regular home-use NAS devices maxed out at 4-5 disks, did not have enough compute power to par/unrar a video file within a few minutes, and came in a cube-like casing which did not fit in my meter cupboard. Since off-the-shelf wasn't working for me, I started looking at DIY storage servers.

During this time one of the biggest Dutch IT websites ran an extensive article on building a 'Do It Yourself' ZFS-based storage server for small business/home use. They sold me on the idea of going for a ZFS-based solution. I decided to use Nexenta since it is an enterprise product, and I would very likely encounter it at my work some day.

Hardware Setup:

(photo: the server in its meter cupboard)

As you can see this form factor does fit inside my meter cupboard :)

  • SuperMicro X8SI6-F with a Xeon X3440
  • 16GB ECC Kingston memory
  • 2x OCZ Vortex 32GB SSD - as a mirror (for OS)
  • 6x 2TB Seagate Barracuda - as 2 RAID-Z1 vdevs

Software Setup:

My Nexenta needed to be able to function as my media server, which meant installing some software that was never meant to run on OpenSolaris, let alone NexentaStor... After a few nights of compiling, debugging and patching I managed to get everything working. My Nexenta is now happily running SickBeard, SABnzbd (including par, unrar and yEnc), CouchPotato, MySQL (for my shared XBMC database), and a custom script to auto-download matching subtitles (the idea is sketched below), all running as services. All my home PCs now have their Desktop and other user folders redirected to a share on my Nexenta, which uses auto-snap, giving everyone the possibility to use 'previous versions' to restore their own files. I also removed all hard drives except the OS drives and replaced them with iSCSI LUNs.
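
The subtitle downloader is nothing fancy; here is a minimal sketch of the idea (the share path and the fetch_subtitle helper are hypothetical placeholders, not my actual script):

```python
#!/usr/bin/env python
# Minimal sketch of an auto-subtitle downloader (illustrative, not the
# original script): find video files that lack a matching .srt and fetch one.
import os

MEDIA_ROOT = "/volumes/data/media"      # hypothetical share path
VIDEO_EXTS = (".mkv", ".avi", ".mp4")

def fetch_subtitle(video_path):
    """Placeholder for a lookup against a subtitle service;
    returns subtitle text or None."""
    return None

for dirpath, _, files in os.walk(MEDIA_ROOT):
    for name in files:
        base, ext = os.path.splitext(name)
        if ext.lower() not in VIDEO_EXTS:
            continue
        srt = os.path.join(dirpath, base + ".srt")
        if os.path.exists(srt):
            continue                    # subtitle already present
        sub = fetch_subtitle(os.path.join(dirpath, name))
        if sub:
            with open(srt, "w") as f:
                f.write(sub)
```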

Monitoring and Graphing:

At my work we use Nagios to do all our monitoring and graphing. I especially like the performance graphing, and since I could not find a Nagios plugin for Nexenta I decided to spend a few weekends/evenings brushing up on my coding and writing a Nagios check of my own. I now have Nagios running on one of my own servers and am using it to monitor my home Nexenta (overkill, I agree, but a lot of fun!). The check uses the API to report all errors which the Nexenta runners might encounter, and checks that my pool, folders and syspool are not passing configurable thresholds (like 80% or 90% full). It also uses SNMP and the API to report performance data for disks, snapshots, CPUs, memory and network usage.
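
To give a flavor of how a check like this talks to the appliance, here is a minimal sketch of a volume-usage check; the endpoint, credentials, volume name and object/method names are assumptions about the management API, not a copy of the real check (linked below):

```python
#!/usr/bin/env python3
# Minimal sketch of a Nagios-style volume usage check against the NexentaStor
# management service. Endpoint, credentials and object/method names here are
# assumptions, not a copy of Check_Nexenta.
import base64
import json
import sys
import urllib.request

HOST, USER, PASSWORD = "nexenta.local", "admin", "secret"  # hypothetical
VOLUME = "data"                                            # hypothetical pool
WARN, CRIT = 80, 90                                        # percent-full thresholds

def nms_call(obj, method, params):
    """POST one JSON-RPC style call to the management service."""
    body = json.dumps({"object": obj, "method": method, "params": params})
    req = urllib.request.Request("http://%s:2000/rest/nms" % HOST,
                                 body.encode(),
                                 {"Content-Type": "application/json"})
    token = base64.b64encode(("%s:%s" % (USER, PASSWORD)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# 'capacity' is assumed to come back as a string like "81%"
used = int(str(nms_call("volume", "get_child_prop",
                        [VOLUME, "capacity"])).rstrip("%"))
if used >= CRIT:
    print("CRITICAL - volume '%s' is %d%% full" % (VOLUME, used)); sys.exit(2)
if used >= WARN:
    print("WARNING - volume '%s' is %d%% full" % (VOLUME, used)); sys.exit(1)
print("OK - volume '%s' is %d%% full" % (VOLUME, used)); sys.exit(0)
```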

Apart from these features I did some fun stuff with converting the descriptions of 'known errors' to human-readable descriptions (or appending a default message to unknowns) and making the new or appended description a clickable link to the runners page of the Nexenta GUI (like 'A disk has failed' instead of 'nms-fmacheck: ZFS diagnosis, UUID:'). A toy version of that translation is shown below. I also added support for SNMP extends to my check so I can graph more advanced stuff like disk IO, latency, ARC statistics etc.
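
The translation itself is just a lookup table keyed on the runner name; the error keys and the GUI URL here are illustrative, not the real NMS values:

```python
# Toy version of the 'known errors' translation: map a raw runner message to
# a friendly description plus a clickable link to the appliance GUI.
NEXENTA_GUI = "https://nexenta.local:2000/data/runners"   # hypothetical URL

KNOWN_ERRORS = {
    "nms-fmacheck": "A disk has failed",
    "nms-autosmartcheck": "A disk is reporting SMART errors",
}

def humanize(raw_message):
    runner = raw_message.split(":", 1)[0]
    text = KNOWN_ERRORS.get(runner, "Unknown runner error: " + raw_message)
    return '<a href="%s">%s</a>' % (NEXENTA_GUI, text)

print(humanize("nms-fmacheck: ZFS diagnosis, UUID:"))
```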


The next thing on my to-do list is adding syslog monitoring to my check, mainly to be able to detect a link going down (which is not reported if it is part of a vif), and maybe some other stuff as well.
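
Roughly, that syslog piece could be as simple as scanning the system log for link-state messages; the log path and message pattern here are assumptions about how the OS reports it:

```python
# Rough sketch of syslog-based link monitoring: scan the system log for
# link-down events. The log path and message pattern are assumptions about
# how the OS reports link state changes (reading it may require root).
import re

LOG = "/var/adm/messages"
LINK_DOWN = re.compile(r"link (?:state )?down", re.IGNORECASE)

with open(LOG) as f:
    down_events = [line.strip() for line in f if LINK_DOWN.search(line)]

for event in down_events[-5:]:        # report the most recent few
    print("LINK DOWN:", event)
```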

EDIT: The Nagios check is now available for everyone on GitHub: https://github.com/schubergphilis/Check_Nexenta

Conclusion:

All in all it has been a great experience! My wife likes it a lot because it always works, and I have learned a lot about ZFS, Solaris, SNMP and Nexenta. I encourage everyone who likes to tinker and wants to get some valuable experience with Nexenta to set one up for home use.

RE: NexentaStor Home Brew Contest - Added by Larry Smith about 1 year ago

About Me

I am a Senior Systems Engineer who uses virtualization and enterprise-class storage solutions from other major vendors in the industry on an everyday basis.

The Story

I have been using NexentaStor for over 2 years now in my home lab, and I have to say it is an amazing storage solution. I have used many others over the years (FreeNAS, OpenFiler, NASLite) and I like NexentaStor by far the best. I also use HP LeftHand P4000 iSCSI storage and Fibre Channel IBM SAN storage on a daily basis at work, but the ZFS filesystem is amazing. I now have two NexentaStor CE NAS devices; one is dedicated to just backing up all of my systems around the house.

My Setup consists of the following

It has 12TB of usable storage: one zpool with 7 mirrored vdevs and 1 hot spare (a total of 15 SATA disks), connected to two Supermicro AOC-USAS-L8i SAS controllers with SATA fan-out connectors. All data disks are inserted into three 5-bay SuperMicro front-loading hot-swap drive cages for easy swapping. Each member of each mirror pair alternates between the Supermicro controllers to provide redundancy in the event of a SAS controller failure (the sketch below illustrates the layout). The horsepower comes from a dual-core AMD CPU and 16GB of memory. The OS is mirrored between two 2.5" SATA disks in hot-swap trays that load from the rear of the case inside two PCI slots; this way the OS is not using up two of the hot-swap drive bays. For network connectivity I am using two Intel 1GbE NICs bundled and configured for L2, L3, with multiple VLANs separating NFS and iSCSI traffic.
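
To illustrate that alternating layout: each mirror takes one disk from each controller, so a controller failure degrades every vdev without destroying any of them. Here is a sketch that generates the pool layout (device names are hypothetical):

```python
# Sketch of the controller-alternating mirror layout: each mirror pair takes
# one disk from each SAS controller. Device names are hypothetical.
controller_a = ["c1t%dd0" % i for i in range(7)]   # 7 disks on controller A
controller_b = ["c2t%dd0" % i for i in range(7)]   # 7 disks on controller B
spare = "c2t7d0"

args = []
for a, b in zip(controller_a, controller_b):
    args += ["mirror", a, b]

print("zpool create data " + " ".join(args) + " spare " + spare)
```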

What lead me to NexentaStor CE

I started using ZFS when messing with some of the later versions of FreeNAS a few years back, and I always found myself spending lots of time tweaking and adjusting settings just to get good performance. I finally gave in and gave NexentaStor CE a go, and I was sold from the beginning. After some initial testing I decided on building the current setup I am still using. The ease of increasing capacity as I need to, without doing a complete rebuild, has been great to say the least. I have also been through some disk failures during this time that were relatively painless as well: one was a failed disk in my syspool, and another time or two there were failed disks in my datapool.

How I use this setup

I use this NAS mainly for vSphere testing. I continually test different iSCSI and NFS scenarios for each version of vSphere, starting with 4.x. I have had great success with this solution in each scenario so far, even through the little gotchas of the SCSI UNMAP feature in VAAI. I am running approximately 40-50 virtualized systems in my home lab on this setup. I was using NFS for vSphere 4 and have converted to iSCSI for almost everything over the past year since moving to vSphere 5, mainly because I am testing SDRS (Storage DRS) and storage clusters in vSphere 5.

Conclusion

NexentaStor CE has allowed me to continually test relevant technologies on a day-to-day basis, which benefits me in my home lab and at work. So great job Nexenta, and keep up the good work. I am a big believer for sure.

Below are some pics to check out. I have some videos, but they are too large to upload here. You can also follow my blog at everythingshouldbevirtual.com for other advanced setups that utilize NexentaStor CE. Here is a post I did a while back on my setup for ESXi, iSCSI, MPIO, etc.: http://everythingshouldbevirtual.com/nexentastoresxi53750glacpvdsnfsiscsi-part-1

RE: NexentaStor Home Brew Contest - Added by Marco Broeken about 1 year ago

My Home Brew Nexenta CE Home Lab

I spent some time this weekend upgrading my home lab. I needed to improve my shared storage and hoped that I could reuse old hardware instead of buying something new and expensive. I’ve been using a QNAP TS-459 Pro II Turbo NAS for the last couple of years. Its iSCSI performance is acceptable for 1-5 VMs, but when I needed to build a complete View environment or vCloud Director lab I usually reverted to local storage, which rather defeated the purpose.

I started to look around for storage solutions that could give me loads of IOps with 4+ SATA disks and three SSDs, with options like auto-tiering and VAAI, to get the IOps I wanted and 2TB+ of usable storage. In my professional work I deal daily with these kinds of enterprise storage boxes, and nowadays it is all about software. Would it be possible to build something like that myself?

I sent some tweets around to storage people all around the globe and everybody answered: “try #Nexenta It has Cache and VAAI”. Now this was getting interesting!

Hardware

My QNAP needed some more terabytes to store my music and movies, so I bought 4x Seagate Barracuda 7200 2TB 7200rpm 64MB SATA3 drives to replace the old 4x 1TB drives, freeing those up for reuse.

I’ve got 2x HP XW9400 workstations, equipped with 16GB memory and 2 AMD dual-core CPUs, that I used to run ESXi 5 on.

The workstation has 8 SATA ports onboard but only room for 4 SATA drives inside the system, so I bought a Chieftec SNT-3141 SATA 4-HDD hot-swap bay, removed the CD-ROM and floppy drives, and placed the hot-swap bay in the workstation. This brought me up to 8 usable SATA bays.

I had one OCZ Vertex2 120GB SATA2 SSD lying around (used for the ZFS L2ARC cache) and purchased one OCZ Vertex4 64GB 2.5" SATA3 (for the ZFS intent log, ZIL).

On the network side I added a 4-port Intel gigabit server network card. My management traffic arrives on the mainboard network card, and my iSCSI stack runs on the Intel card. The iSCSI ports are set to use an MTU of 9000 (jumbo frames).

Installing NexentaStor CE on HP XW9400

Read the Nexenta release notes and download the NexentaStor Community Edition 3.1.3 ISO file.

It took me more time than I expected; the installer is very slow. Just wait and it will finish eventually. I also needed to change some options in the BIOS to get rid of some PCI errors.

Once up and running I found out that all onboard interfaces are supported: all network ports were working, and the onboard 8-port SATA controller was visible with all 8 disks attached! This made me more than happy!

Storage Layout

As you can see, my volume volume1 is composed of 5 disks in RAID-Z1, and I’m using the 120GB SSD as the L2ARC read cache and the 64GB SSD as the ZIL write log disk.
Because I don’t have more than 16GB of RAM in the server, I decided not to use the de-dupe functionality of NexentaStor (the rough estimate below shows why).
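
As a sanity check on that decision: the dedup table (DDT) costs on the order of a few hundred bytes of RAM per unique block. A back-of-the-envelope estimate (the pool usage and block size here are ballpark assumptions):

```python
# Back-of-the-envelope dedup RAM estimate: the dedup table (DDT) costs
# roughly ~320 bytes of RAM per unique block. Pool usage and average block
# size below are ballpark assumptions.
pool_used_tb = 8            # data actually stored in the pool
avg_block_kb = 64           # assumed average block size
bytes_per_ddt_entry = 320   # commonly cited ballpark per unique block

blocks = pool_used_tb * 1024**3 / avg_block_kb          # KB / KB
ddt_gb = blocks * bytes_per_ddt_entry / 1024**3

print("~%.1f GB of RAM just for the dedup table" % ddt_gb)   # ~40 GB here
```

With numbers like these, 16GB of RAM disappears fast, so leaving de-dupe off was the safe call.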

VAAI support!

vStorage API for Array Integration (VAAI) provides four different benefits, listed below. In effect, the ESX hypervisor instructs the storage controller to off-load certain tasks and perform them at the storage controller level, leaving I/O and CPU cycles available to the VMs.

  • SCSI write same. Accelerates zero block writes when creating new virtual disks.
  • SCSI ATS. Enables a specific LUN region to be locked instead of the entire LUN when cloning a VM.
  • SCSI block copy. Avoids reading and writing of block data through the ESX host during a block copy operation.
  • SCSI unmap. Enables freed blocks to be returned to the pool for new allocation when no longer used for VM storage.

VAAI support is applicable only with block-based protocols like iSCSI; apart from SCSI unmap, the commands above are all performance related. (A quick way to check what your host actually negotiated is sketched below.)
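
ESXi ships with a Python interpreter, so from the ESXi shell a small wrapper around esxcli can summarize VAAI status per device; treat the output parsing as an assumption, since esxcli formatting varies between versions:

```python
#!/usr/bin/env python
# Quick per-device summary of VAAI status, run from the ESXi shell (which
# ships with Python). Output parsing is an assumption; esxcli formatting
# varies between versions.
import subprocess

out = subprocess.check_output(
    ["esxcli", "storage", "core", "device", "vaai", "status", "get"])

device = None
for line in out.decode().splitlines():
    line = line.strip()
    if line.startswith("naa."):
        device = line                         # start of a new device block
    elif ":" in line and device:
        primitive, status = [p.strip() for p in line.split(":", 1)]
        print("%s  %-20s %s" % (device[:20], primitive, status))
```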

FC

This got me thinking... block level… FC… let’s see if I can get this to work… I jumped up, dug through my old gear and found 2x 4Gb QLogic Fibre Channel cards. I know this is not something everybody has lying around (it’s still OK if you use iSCSI :) ).

I found a good article on enabling FC target mode on Oracle Solaris. I shut down the Nexenta, plugged in a 4Gb FC adapter and configured the ports into target mode. It worked like a charm. (Watch out when updating the Nexenta software; an update will disable FC target mode again.)

Once the adapter was in target mode, you can configure the mappings in the web interface. I needed to map some LUNs to FC and some other LUNs to iSCSI; this can all be done in the web interface.


I created 2 initiator groups for my ESX host to split up my LUNs for testing purposes: one containing the WWN of the FC initiator and one containing the IQN of the iSCSI initiator.

I created 3x 750GB LUNs for all my VMs and one 4GB LUN to add as an RDM for testing purposes, and added all 4 to the Fibre Channel group.

I did the same on iSCSI with one 750GB LUN and one RDM LUN, adding them to the iSCSI initiator group.

On your ESXi 5 host, iSCSI must be configured as defined here so we can use Round Robin, make use of all your network interface cards, and get some real performance!

I presented the three 750GB LUNs to my ESXi 5 server with LUN IDs 0, 1 and 2, while the small 4GB LUN with ID 3 is the RDM LUN. You can also see that VAAI Hardware Acceleration shows as Supported!

On the iSCSI side, make sure all your LUNs are configured as RR (Round Robin); it makes ESXi use all 4 paths to your Nexenta and get maximum bandwidth. (A small script to apply this across devices is sketched below.)
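
Clicking through every device in the GUI gets old, so here is a sketch that applies the Round Robin policy from the ESXi shell; it blindly targets every naa.* device, so filter the list first if you have local disks you want to leave alone:

```python
#!/usr/bin/env python
# Set the Round Robin path policy on every naa.* device, from the ESXi
# shell. Filter the device list first if you have local disks that should
# keep their current policy.
import subprocess

listing = subprocess.check_output(
    ["esxcli", "storage", "nmp", "device", "list"]).decode()

# Device identifiers sit at column 0; property lines are indented.
devices = [line.split()[0] for line in listing.splitlines()
           if line.startswith("naa.")]

for dev in devices:
    subprocess.check_call(["esxcli", "storage", "nmp", "device", "set",
                           "--device", dev, "--psp", "VMW_PSP_RR"])
    print("set VMW_PSP_RR on", dev)
```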

As shown below, all LUNs are mounted with hardware acceleration enabled! This is going to be fun!

Home Brew In the Test

I mounted the two 4GB LUNs (one FC, one iSCSI) to a Windows 2008 R2 server to do the first tests:
[E] is FC and [F] is iSCSI (I also bound the RDM disk to Round Robin).

The first test is awesome, almost 400MB/sec on the 4GB FC LUN.

The second test is what I expected: max 120MB/sec on the iSCSI LUN. It would have made sense to see more like 250MB/sec, because Round Robin should make use of 2x 1Gb links (quick arithmetic below). This is a rather good result as well, but I still get 3x the speed on the FC RDM LUN.
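
For reference, the wire-rate arithmetic behind those expectations, ignoring TCP and iSCSI overhead:

```python
# Theoretical ceiling for N aggregated 1Gb links, ignoring protocol overhead.
def ceiling_mb_per_sec(links, gbps_per_link=1.0):
    return links * gbps_per_link * 1000 / 8   # Gbit/s -> MB/s

print(ceiling_mb_per_sec(1))   # ~125 MB/s -> matches the ~120MB/sec measured
print(ceiling_mb_per_sec(2))   # ~250 MB/s -> what 2-path Round Robin could give
```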

Conclusion

NexentaStor CE has allowed me to use my Home Brew lab to test the latest VMware technologies on a day-to-day basis. I can now start and run more than 40 VMs in my home lab! Great job Nexenta and keep up the good work. I am a big fan!

RE: NexentaStor Home Brew Contest - Added by Derek WasNexenta about 1 year ago

About Me

I started working at Nexenta in July 2012. Prior to that, I worked at NETGEAR on their ReadyNAS storage line for five years. Obviously, as a Nexenta employee, I am exempt from this contest, but I just wanted to share my 'home brew' with the community.

The Story

Seeing as I had a spare ReadyNAS Pro system from my previous employment, I wanted to see how easy it would be to convert it into a NexentaStor system. I run VMs of it on my desktop or in the cloud for demos and such, but thought it would be nice to have a small machine on my desk as well, running on real hardware.

My Setup consists of the following

The ReadyNAS Pro chassis has 6 hot-swap drive bays. The system currently has around 4.5TB of usable storage: one zpool with five 1TB SATA disks in a RAID-Z1 configuration, with a 64GB SSD as a ZIL. The OS is stored on an 80GB USB HDD that I stuffed inside the chassis, to keep the hot-swap bays dedicated to the data volumes. The horsepower of the system is not much by default: it has an Intel E2160 CPU running at 1.8GHz and 1GB of DDR2 RAM. So far I have only doubled the RAM to 2GB, though the motherboard can handle up to 8GB. The CPU can also be upgraded; I've used an E6300 in the past. It has two 1GbE NICs on-board, though I am only using one of them at the moment for initial setup; in the future I will look into teaming them for higher throughput. The four- and six-bay models come with an LCD screen; I will go into the details of customizing that display soon as well.

How I use this setup

Like I said above, performance is not a goal with this system, but just proof of concept so far. I use it in my work cube for CIFS/NFS transfers from my desktop. I may also use iSCSI in the near future for some ESXi datastore action.

Conclusion

Converting the purpose of this box was easier than expected, though having some internal knowledge of the hardware did help make it a smoother transition. For out-of-warranty ReadyNAS users who want to do more with their system, it is a realistic alternative.

I have created a more detailed thread for this system, Home Brew: Running NexentaStor on a ReadyNAS

RE: NexentaStor Home Brew Contest - Added by Bart Braet about 1 year ago

Well, here's a Belgian guy with his Nexenta setup :) .

ESXi - Desktop - HTPC - SAN/NAS all in one

Introduction

For a while now I have been working with ESXi, VMware vSphere and different storage products. At home I noticed that the 2x single SATA disks hosting my VMs were too slow and not scalable. After a while I read about ZFS and looked up some distros that provided ZFS functionality. Nexenta was one of the many that popped up, but it was unique in its kind. But (and I hope I am not the only one with this problem) the girlfriend didn't see much use in having a dedicated machine running "just for IT stuff". So that got me thinking... What if I could combine the following things all in ONE (!!!) machine:

  • A full blown, easy to use and available desktop in the living room
  • An HTPC with Windows 7 and XBMC
  • A SAN/NAS
  • An ESXi

Combining the parts

Finding all of the above at a reasonable price was, to say the least, a challenge. First of all: not all motherboards are able to host an ESXi installation. That was the first limiting factor, and a big one... Second: I needed a motherboard which could accept 32GB of RAM. And above all, it needed to be able to pass through not ONE but TWO video cards AND an HBA by using VT-d or IOMMU. For a while I thought it would be impossible, but after reading some posts at the [H]ardwareforum I found a setup which met all of the following requirements:

  1. 32GB RAM : CHECK!
  2. ESXi compatible: CHECK!
  3. VT-D/IOMMU compatible: CHECK!

I ordered the whole shebang at a couple of Dutch webshops and after a week everything arrived:

  • Samsung Spinpoint F4 EG HD204UI, 2TB
  • Antec Three Hundred Two
  • 3Ware Multi-lane SATA cable CBL-SFF8087OCF-05M
  • IBM ServeRAID M1015 SAS/SATA Controller for System x
  • be quiet! Pure Power L8 530W
  • Patriot Pyro SE 120GB
  • Corsair CMV8GX3M1A1333C9
  • AMD Phenom II X4 955 Black Edition
  • Sapphire HD 4350 256MB DDR2 PCI-E HDMI
  • Gigabyte GA-990FXA-UD3
  • MSI R5450-MD512D3H/LP

Building the All In One

Nothing fancy over here :) . I assembled the server, put the M1015 in first together with the cables and everything, and added the video cards later on:

Storage

First I had to flash the IBM M1015 to IT mode so ZFS can do what it does best: being a rock-solid filesystem. It accomplishes this best when it can address disks directly, without any RAID controller in between (i.e. through a plain HBA). I flashed my M1015 with the IT firmware using an HP desktop at work and hey presto: it became a "dumb" SAS HBA, sweet!

I added one extra disk I had lying around to store the Nexenta VM. I could have used an SSD to lower power consumption, but at the moment I don't have the budget ;) .

I built the server, plugged in the M1015 together with the 3x 2TB disks and the Patriot SSD, and configured Nexenta to present its storage to my ESXi. With the added SSD the thing blew away my old storage, and I installed a couple of Windows 2008 R2 machines in about 7 minutes. Sweeeeeeet :)

The Desktop and HTPC

But now for the "girlfriend" part ;) . I still had to get two VMs working with video cards.

I created a Windows 7 VM and installed Windows 7 from an ISO on the ZFS pool. I gave the VM 4GB of memory and put a reservation on it. Besides that, you also need to fiddle with some parameters in the VMX file, the "pciHole" settings (sketched below). After finding the correct settings I could see ESXi booting and then hanging (oh oh!), but after I gave it some thought I came to the conclusion it is quite normal: the console releases the video card for IOMMU, and you need to wait until the W7 VM claims the card and starts sending video to it. Windows 7 booted successfully, and after some experimenting I found out it is best to install the drivers really "clean" by extracting them and letting Windows look for the driver files itself using Device Manager (tip: don't install the whole package like Catalyst Control Center, or whatever that thing is called). I also needed to disable Flash acceleration, but that's a minor disadvantage considering what I gained with this setup.
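
For reference, the VMX tweak is just a pair of extra key/value lines. The values below are the ones commonly quoted in passthrough threads, not necessarily the exact ones I used, and the datastore path is hypothetical:

```python
# Append the commonly quoted 'pciHole' entries to a VM's .vmx file. Values
# come from community passthrough threads and are assumptions; adjust for
# your VM's memory size. The path is hypothetical.
VMX = "/vmfs/volumes/datastore1/win7/win7.vmx"

pcihole = {
    "pciHole.start": "1200",
    "pciHole.end": "2200",
}

with open(VMX, "a") as f:
    for key, value in pcihole.items():
        f.write('%s = "%s"\n' % (key, value))
```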

By passing through a USB controller I can attach a USB keyboard and mouse and control the Windows 7 VM.

Then I created another VM, repeated the above procedure and I was able to present XBMC over HDMI to my TV.

2012-08-07_21.29.06.jpg - Cards/SAS-SATA cable (1.6 MB)

2012-08-07_21.29.13.jpg - Disks (1.2 MB)

2012-08-07_21.29.26.jpg - Cards/SAS-SATA cable (1.6 MB)

2012-06-11_18.05.17.jpg - Flashed! w00t w00t (1.9 MB)

2012-06-11_18.08.14.jpg - Flashed! w00t w00t (2 MB)

2012-08-07_21.28.11.jpg - The box (1.8 MB)

2012-08-07_21.28.51.jpg - Cards (1.6 MB)

2012-08-07_21.28.58.jpg - Cards/SAS-SATA cable (1.5 MB)

RE: NexentaStor Home Brew Contest - Added by Derek WasNexenta about 1 year ago

Thanks for your submissions guys. Locking this thread now. Will announce the winners in another thread shortly.
