I wanted to love this monster from the moment I read the spec sheets:
- Network attached storage device with 4 or 6 hot-swappable drive bays
- Available empty (so you can bring your own drives, including SSDs)
- iSCSI server with 2 network ports, jumbo frame support, VLANs
- Official VMware ESX/ESXi, Hyper-V, Windows DFS compatibility
- RAID 0/1/5/10
- Cloud support – automatically copy files to Amazon S3, Mozy, or other Iomegas over the Internet
- External USB3 drive support (for leftover USB hard drives)
- Insane number of home media support options including BitTorrent downloading, Time Machine backups for Apples, Bluetooth, automatic uploads to Facebook/YouTube/Flickr, recording server for Axis/Panasonic/D-Link webcams
- Quiet (31dBA max) and low power (based on an Intel Atom CPU and 2GB of RAM)
The full feature list is ridiculous – it’s absolutely everything I needed for my VMware lab and my backups. At around $700 from Amazon for the diskless version, I couldn’t resist. I wouldn’t recommend buying the versions that come populated with hard drives – if you ever need to replace the drives, you’re in for a rough time, and oddly, the bring-your-own-disk version doesn’t suffer from that issue.
Choosing Drives and RAIDs for the PX4 and PX6
As of this writing, the list of officially supported hard drives is pretty short:
- Hitachi Deskstar 2TB hard drive – $115
- Hitachi Deskstar 3TB hard drive – $175
- Micron RealSSD C400 128GB solid state drive – $240
All of the drives in a PX4/PX6 storage pool have to be the same size and speed, and I didn’t really need performance, so I chose to go with four of the Hitachi 3TB drives. With those drives, RAID 5 would give me 9TB of usable storage (three drives’ worth of data plus one drive’s worth of parity), and the possibly-faster RAID 10 would give me 6TB (everything mirrored, so half the raw space). I wouldn’t recommend the 2TB drives for a reason that’ll be clear shortly.
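If you want to sanity-check that math for other drive combinations, here’s a quick back-of-the-napkin calculator – a minimal sketch in Python using the textbook RAID capacity formulas, ignoring filesystem and formatting overhead:

```python
def usable_tb(drives: int, drive_tb: float, raid_level: int) -> float:
    """Textbook usable capacity - ignores filesystem and metadata overhead."""
    if raid_level == 0:                    # striping: no redundancy
        return drives * drive_tb
    if raid_level == 1:                    # mirroring: one copy's worth of space
        return drive_tb
    if raid_level == 5:                    # one drive's worth of parity
        return (drives - 1) * drive_tb
    if raid_level == 10:                   # mirrored stripes: half the raw space
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, four minimum")
        return (drives // 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {raid_level}")

print(usable_tb(4, 3, 5))    # 9.0 - the RAID 5 number above
print(usable_tb(4, 3, 10))   # 6.0 - the RAID 10 number above
```

Note that asking it for a three-drive RAID 10 raises an error – which is exactly what the PX4’s GUI should have done, as you’re about to see.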
PX6 users have six drive bays to work with, so in that environment, it might make sense to go with four magnetic hard drives and two solid state drives. This would allow two tiers of storage: a blazin’ fast SSD mirror, and a bigger/slower RAID 5 pool. With VMware ESX/ESXi’s Storage vMotion, we can move virtual machines back and forth between the two RAID pools without taking the VM down. It’s a fairly inexpensive way to get tiered storage, but with just 128GB in the fast pool, I’m not sure how useful this would be in practice.
Pluggin’ in the N: Network Attached Storage
The PX4-300d comes with the basics: power brick, one Ethernet cable, and the management tools CD. After adding drives and plugging in power and Ethernet, the control panel displays the IP address it fetched from DHCP. Fire up a web browser, go to that address, and you’re going to be impressed with the user interface. You can even test drive the Iomega StorCenter control panel online. (No, that’s not my Iomega, and no, you can’t delete data. I tried.)
For the first minute or two, Iomega impressed the heck out of me. The device could fetch its own firmware upgrades over the Internet, and yes, it was up to date. I went into Drive Management, and sure enough, it offered a really easy GUI to configure RAID levels. I picked RAID 10 and clicked Apply, but the Iomega warned me that I hadn’t chosen any drives yet. I’d missed the subtle little checkboxes on each hard drive in the GUI, so…
I need to stop here and tell you a little about myself: I like to break things.
I didn’t say I like to take things apart and put them back together – oh no. I just like to find new ways to break ’em. I take an evil satisfaction out of knowing somebody, somewhere didn’t test their code.
So I did what I do – I checked three boxes, chose RAID 10, and clicked Apply.
As far as I know, there’s no way to do RAID 10 with 3 hard drives (it requires an even number of drives, four at minimum), but the PX4 happily started initializing the array. I sat there absolutely dumbfounded – this wasn’t a small bug, but a giant, ugly, monstrous bug. I had no idea what the PX4 might be doing to the disks behind the scenes or whether I’d have any redundancy whatsoever.
I wouldn’t be so freaked out about this if Iomega weren’t owned by EMC, one of the biggest, most reliable companies in the storage industry. I had instant flashbacks to the click of death – the nasty sound Zip drives made before they crashed, which resulted in a class action lawsuit against Iomega. Was Iomega back to its old tricks of cutting costs at our peril, or was it up to EMC’s quality standards? I took a deep breath, pretended I never saw it, and deleted the array. I set up a four-drive RAID 10 array, decided to let one bug slide, and pressed on.
While the array initializes, the Iomega’s LCD display and banners across the top of the web UI both warn the user that performance is degraded – very nice touch. You can keep using the storage array during initialization, and even write data to it – again, nice touch. However, the initialization process takes hours – I left mine running overnight – which is why I wouldn’t recommend the 2TB drives. A PX4 with 2TB drives is $1,165, and with the 3TB drives it’s $1,405 – just $240 more for 50% more storage. Yes, you could upgrade the drives later, but it’s going to take a loooong time to swap out each drive one by one and let the array rebuild.
Some Server Changes Require Reboots
As the array initialized, I started wandering through the web control panel setting things up. I changed the name to LittleBlackBox, and I was taken aback when the PX4 asked to reboot itself. Really? Just for a name change? Okay, well, alright, nobody was on the array yet anyway.
Next up – configure remote access. The PX4 will connect to a dynamic domain name service so you can use a simple DNS name to access your data from anywhere. Sounds good, so I set it up and – you guessed it – reboot required.
Next up – configure iSCSI and jumbo frames. This one at least didn’t require a complete NAS reboot, but it did warn that “This may take a few minutes to complete. Anyone currently accessing this device will be disconnected. Do you want to continue?”
The Problem with Rebooting an iSCSI NAS
NAS reboots aren’t a problem for some home users. If iTunes stops working for a few moments, or my pictures stop uploading to the cloud, life goes on.
In a business or virtualization lab environment, it’s a massive problem. VMware stores the virtual hard drives for several running machines on the NAS. In order to reconfigure many settings on the PX4/PX6, I have to:
- Shut down all of my virtual servers
- Shut down VMware
- Restart the Iomega and wait for it to come up
- Start up VMware
- Start up my virtual servers
This is a total showstopper in a corporate lab environment where multiple people might be playing with the device. I simply wouldn’t be able to count on junior sysadmins not changing settings on the Iomega, triggering a reboot, and losing all of my virtual servers. I would have to tightly control security settings on the NAS, but the only security option is a checkbox for administrative privileges: either you’re in, or you’re not.
That NAS is a Monster, M-M-M-Monster
Philosopher L. Gaga once said in a poem entitled “Monster”:
I’ve never seen one like that before
Don’t look at me like that
You amaze me
While she was referring to crazy yet well-endowed gentlemen, I’m sure she would agree that the PX4-300d is just such a monster. Its feature list is the size of a baby’s arm, but sometimes it’s got the business logic of a wiener.
The good news is that it’s got killer qualities for a virtualization lab – or perhaps even a small business’ virtualization needs. The bad news is that it doesn’t have the stability or testing to back it up.
I was heartbroken. I’d really wanted to recommend this storage device to small businesses for their staff to learn virtualization and Storage vMotion, and perhaps even to use as a backup target. With the current version of the firmware, though, I just couldn’t do it. With that in mind, I gave up on testing the network card failover (which does work at first glance), multipathing (which doesn’t – more on that later), the speed differences with jumbo frames, and anything reliability-related. For now, it’s just a good device for geeks to use in their labs and homes, so I’ll focus the rest of the review on that.
Device Features: Shares, Apps, Time Machine, BitTorrent, More
Onboard storage is set up in a hierarchy:
- Drives are grouped together in pools (like 2 drives in a RAID 1 pair or 4 drives in a RAID 5). Pools are a fixed size.
- Pools are carved up into multiple volumes. Volumes are a fixed size.
- Volumes are carved up into multiple shares. Shares are thin provisioned.
- Applications are configured at the share level.
So in my PX4-300d, I have the following setup, sketched in code after this list:
- Storage pool – 4 drives in a RAID 10 for 6TB of usable space
- “Media” volume – a 1TB volume that lives in my storage pool
- “Torrents” share – a thin provisioned share in the Media volume. It starts at zero size, and can grow to whatever size of files I dump in there – up to 1TB. I can access the Torrents share as a folder in Windows or on the Mac.
- The Torrents onboard application added a Download folder in the Torrents share. I can copy any torrent file into this folder, and the Iomega begins downloading it.
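To make that hierarchy concrete, here’s a minimal sketch in Python – the class names and the fixed-versus-thin distinction are mine, modeling what the UI describes, not Iomega’s actual internals:

```python
class Share:
    """Thin provisioned: starts at zero and grows as files land in it."""
    def __init__(self, name: str):
        self.name = name
        self.used_tb = 0.0

class Volume:
    """Fixed size, carved out of a pool; its shares compete for its space."""
    def __init__(self, name: str, size_tb: float):
        self.name = name
        self.size_tb = size_tb
        self.shares = []

class Pool:
    """Fixed size, built from same-size drives at a single RAID level."""
    def __init__(self, raid_level: int, usable_tb: float):
        self.raid_level = raid_level
        self.usable_tb = usable_tb
        self.volumes = []

pool = Pool(raid_level=10, usable_tb=6.0)   # 4 x 3TB drives in RAID 10
media = Volume("Media", size_tb=1.0)        # fixed 1TB carved from the pool
pool.volumes.append(media)
media.shares.append(Share("Torrents"))      # thin: can grow to the full 1TB
```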
The application user interfaces are designed by geeks, for geeks: they’re just good enough to be point-and-click usable, but not good enough that I’d want to walk family members through configuring or troubleshooting them. For example, in the Torrent configuration screen, there’s a Port box. You can manually configure a port or use the automatically supplied one, but you can’t tell if it’s actually working. There’s no “test the router” functionality built in, so I have no idea if uploads are working until I start seeding a torrent. There are edit boxes for Maximum Download Speed and Maximum Upload Speed, but there’s no scale – is it KB/sec or MB/sec?
The apps have just enough capabilities to say they work, but not much more than that. Modern BitTorrent clients all have scheduling setups so that you can transfer more at night and throttle back the sharing during the day – not here.
The Amazon S3 sync app will upload your files to S3, but it won’t delete the ones that get deleted on the NAS, so your storage bill steadily increases. If you use it for, say, SQL Server offsite backups, you’d want to reuse your backup file names to get a 7-day rotation in the cloud. You’d run into a separate issue anyway – S3’s maximum file upload size is 5GB. Any larger files just don’t get uploaded, and you don’t get an alert. There’s no upload/download throttling or time-of-day scheduling here either – your bandwidth will just slow to a crawl as new files are added to S3.
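Since the device won’t alert you, the safest workaround is a pre-flight check before files land in the synced share. Here’s a minimal sketch in Python – the folder path is a hypothetical stand-in for whatever share you point the S3 app at:

```python
import os

SYNC_FOLDER = "/backups"        # hypothetical path to the share the S3 app syncs
S3_MAX_BYTES = 5 * 1024**3      # the 5GB single-upload limit mentioned above

# Walk the sync folder and flag anything too big to ever reach S3.
for root, _dirs, files in os.walk(SYNC_FOLDER):
    for name in files:
        path = os.path.join(root, name)
        size = os.path.getsize(path)
        if size > S3_MAX_BYTES:
            print(f"WON'T UPLOAD: {path} ({size / 1024**3:.1f} GB)")
```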
Time Machine works wonderfully as a backup target, but the user is presented with two cryptic fields: Apple Network Hostname and Ethernet ID. The network hostname comes from your Mac’s System Preferences: go to Sharing, click Edit, and it’s the editable part of the computer name. The Ethernet ID is your Mac’s MAC address: open Terminal, type ifconfig, hit Enter, find en0, and copy the characters to the right of “ether” on the next line. The Iomega doesn’t tell you any of this. The only thing it does tell you is the Time Machine backup folder’s size, a constant 173MB, and even that is wrong. After several days of backups, the folder sizes still showed incorrectly on the Time Machine panel of the Iomega.
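If you’d rather not eyeball terminal output, here’s a minimal sketch in Python that digs the en0 Ethernet ID out of ifconfig for you (assuming a Mac where the wired interface really is en0):

```python
import re
import subprocess

# Run ifconfig for en0 and pull out the MAC address ("Ethernet ID").
output = subprocess.run(["ifconfig", "en0"], capture_output=True, text=True).stdout
match = re.search(r"ether\s+([0-9a-f:]{17})", output)
print("Ethernet ID:", match.group(1) if match else "not found")
```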
But it works.
At least, I think it works.
And that’s what makes this review so hard. When the control panel tells me it’s configuring a RAID 10 array with 3 drives, I know it’s wrong. A basic error like that shakes my confidence in the entire unit. Same thing with the incorrect Time Machine folder sizes, and the wacko configuration screens. If I can’t trust what it’s showing me, how can I trust what’s happening behind the scenes? I’m not sure. This doesn’t bother me too much for home media backups, but it would bother me a lot for running my virtual machines.
Is the Iomega PX4-300d the Best VMware Storage Server?
Most business iSCSI network implementations involve a completely separate logical network for iSCSI – separate IP addresses on a separate subnet. For example, my home network is on 192.168.37.x, so I might use 192.168.47.x for my iSCSI network. The iSCSI traffic is kept away from the regular network.
Most business iSCSI setups also involve redundancy: the storage device is plugged into the network with at least two network patch cables plugged into two different network cards. If either one fails, you’re still able to talk to the storage.
At first glance, the PX4 can handle either one of those requirements, but not both. If you want two separate logical networks, then you probably want to isolate the iSCSI traffic completely. You can configure one of the PX4’s network ports onto your management network (like 192.168.37.x) and plug the other into an iSCSI network. When you do that, you’ve just lost redundancy. If you choose instead to plug both of the PX4-300d’s network jacks into the same network, that network is going to need to see the outside world if you want to use any of the PX4’s home-media-savvy features like BitTorrent.
This is where the good monster comes in again.
This Good Monster Has VLAN Support
This NAS supports VLANs – multiple virtual network subnets attached to the same network card. Imagine an incoming network packet that finds its way to your server with a tiny tag in its header saying, “I’m coming from Virtual LAN #1.” Your operating system reads that tag and routes the packet to Virtual Network Card #1. The next packet might come in saying, “I’m coming from Virtual LAN #2,” and your computer routes it to Virtual Network Card #2 – the one for iSCSI.
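That tiny tag is the 802.1Q header in the Ethernet frame. Purely to illustrate the mechanism – this is not how you’d talk to the Iomega – here’s a minimal sketch in Python of how a receiver pulls the VLAN ID out of a raw frame:

```python
import struct

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of a raw Ethernet frame, or None if untagged."""
    # Bytes 0-11 are the destination and source MAC addresses.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:             # 0x8100 marks an 802.1Q-tagged frame
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF                 # low 12 bits of the tag carry the VLAN ID

# A tagged frame destined for VLAN 47 (a hypothetical iSCSI VLAN):
frame = bytes(12) + struct.pack("!HH", 0x8100, 47) + b"payload"
print(vlan_id(frame))                   # 47
```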
Granted, your one network cable is only so fast, and we can’t stuff 2Gb of traffic into a 1Gb patch cable. However, this does let us segregate network traffic at the switch level so that your chatty iSCSI traffic doesn’t overwhelm your regular network, and your chatty BitTorrent client traffic doesn’t overwhelm your iSCSI storage network. While it is indeed delicious to get your chocolate in my peanut butter, it’s not nearly as delicious to get your BitTorrent in my iSCSI.
VLANs require a savvy operating system like VMware ESX/ESXi and a VLAN-capable switch. I chose the $125 Cisco SLM2008-T because it supports VLANs and jumbo frames, has 8 ports, and it’s cheap. Cisco switches have a reputation for being difficult to administer, but this one comes with a pretty good web control panel. The process of setting up VLANs, jumbo frames, and the ESXi configuration deserves a future blog post of its own, but suffice it to say that between Iomega’s web UI, Cisco’s web UI, and the vSphere Client, you can be up and running with a very real-world-ish lab setup in under an hour – for some values of “running.”
Here comes the bad monster again – users can’t exclude network interfaces from iSCSI use. Every time VMware rescanned the iSCSI adapters, the Iomega happily reported back that iSCSI services were available from every single IP address it owned. I wanted to do iSCSI over just one network subnet, not both – no can do. After pulling my hair out for a few hours, I threw in the towel and switched to NFS. It worked the very first time I tried it, and it came with a nice perk: the StorCenter’s LCD display panel actually reflects the right amount of used space on NFS volumes. iSCSI volumes show up as completely full even if there’s not a single file on ’em. (That’s not the device’s fault, but users won’t know that.)
That’s probably a good lesson for StorCenter users: the features that require zero parameters are easy, and they work like a champ.
How Fast is This Cheap NAS?
Geeks love to tweak parameters. I want to set the stripe size, caching, NIC load balancing, and flux capacitor voltage for every storage device. That’s not what the StorCenter PX4/PX6 is about: it’s just, uh, storage. Just as the other LifeLine apps don’t have much in the way of parameters, the iSCSI app checks just enough boxes to work – but that’s where the fun ends.
Most of my Windows-based iSCSI tests were able to saturate a 1Gbps Ethernet link with reads or with big sequential writes, and small or random writes ran around 10-60 MB/sec. I was surprised, though, that no matter what tricks I played with multipathing or multiple iSCSI volumes striped together, I just couldn’t get this thing to blow past a single NIC’s throughput. The lack of network card performance metrics on the Iomega made it a little tricky to troubleshoot, but after I checked the Cisco switch metrics, I got proof: the iSCSI traffic was only sent back through one network card on the Iomega – specifically, NIC #1.
I can write iSCSI data to both of the PX4’s network cards simultaneously. They just won’t send data back from both simultaneously – at least, not when they’re both on the same subnet. Maybe by getting even fancier with the multipathing setup and putting the two network cards on two different VLANs, in two subnets, I might be able to break past the 100 MB/sec read speed limit. (For more about multipathing, check out my SAN Multipathing series.)
But you know what? 100 MB/sec is fine for me. See, if I really needed speed, I would have filled this thing with SSDs to begin with. Solid state drives excel at random access, and that’s what every Iomega StorCenter NAS is going to end up doing – lots of random access. Multiple VMs hitting the same LUN, BitTorrents transferring, MP3s playing – these little devices are a party in a box. The drives are going to be worked hard, and I bet for most people, the real speed limit will be the random access speed of the drives anyway.
But while troubleshooting the multipathing performance, I ran into the bad side of the monster again. I unplugged one of the Iomega’s two network cables, and I didn’t get any warnings whatsoever. The PX4 didn’t show any errors on the LCD display or the web control panel, and it didn’t even notify me via the built-in email alerts. I had to dig into the network screen to even figure out that one of the cables was unplugged. Upon reconnecting the cable, the Iomega autonegotiated down to 10Mbps (not 100, not gig, but 10) half duplex. No warnings were shown on the control panel for that, either, and to make matters worse, you can’t set the Ethernet speeds. It’s autonegotiate or nothin’.
Bottom Line: It’s a Little Monster Alright.
The Iomega PX4-300d NAS seduced me with its long, strong…feature list, and I’m willing to overlook the wild infestations of bugs for my lab at home. It’s the first NAS I haven’t returned to the shop in less than 72 hours. Like the philosopher said:
Look at him, look at me
That boy is bad, and honestly
He’s a wolf in disguise
But I can’t stop staring in those evil eyes
Who should buy the StorCenter PX4/PX6? Home users who want one quiet, capable box to sit in a closet handling backups, media, BitTorrents, uploading photos to the cloud, and other menial chores. Yes, you could build your own more-capable solution for less money, but it won’t be as point-and-click easy as the Iomega. IT, DevOps, and developer managers should also pick up one of these as a reward for a well-performing team. Throw one of these under a cubicle, and for less than the price of a good workstation, the entire team has an iSCSI sandbox and shared MP3 storage off the domain and away from the corporate IT overlords.
Who shouldn’t buy it? I wouldn’t recommend the PX4/PX6 in a business environment for anything other than a lab – at least, not until versions of the firmware come out that fix the glaring UI bugs and stop routine settings changes from rebooting the NAS. I would be very uncomfortable going into a small business as a consultant, selling them a shared storage setup built on a StorCenter PX4/PX6, and walking away.
Can they fix these issues with firmware updates? Yes, but only if they test the new firmware updates better than they’ve tested so far, and as of August 2011, it’s not looking good. After applying a firmware update to fix the VMware multipathing issue, I received this disturbing email from Iomega:
Thank you for downloading the recent firmware update for your Iomega StorCenter px. After release, we identified an issue with growing or expanding storage pools with the v3.1.10.45882 update. If you have not yet applied the update to your system, please wait. We will be releasing a new version of the firmware that resolves the issue soon. If you applied the update, you should not experience any issues unless you expand or modify the size of a storage pool. If you do experience any issues with a storage pool, please reply to Iomega Technical Support using this incident number….
If you decide to buy it, you can throw me a few coins by buying the Iomega PX4-300d via my Amazon link or the 6-drive version, the PX6. And hey, it’s in stock for Prime members, so you could have it tomorrow – plenty of time to play with it over the holiday weekend. I’m not sayin’, I’m just sayin’.