DroboPro Performance Testing

Recently I was able to purchase and test two Drobo storage appliances: a DroboS with five disks and a DroboPro with eight. All the disks are the same size, speed, and manufacturer; the full specs are listed further below for your consumption.

The DroboS is attached via USB 2.0 to my desktop workstation, a Dell OptiPlex 745 with 4GB of RAM running Windows 7 (64-bit). I have an eSATA card on order and will post more details on that setup at a later date.

The DroboPro is the device I'm most interested in. Depending on the test results, it may end up living in a remote campus building attached to two small ESX hosts running six virtual servers.

Currently that environment is hosted on three MPC DataFrame 120s, which are no longer covered by a manufacturer's warranty thanks to MPC's crash-and-burn bankruptcy. The six virtual servers are two domain controllers, two file servers, a print server, and an application server. I do have two HP/LeftHand NSM 2120 storage modules that could replace the DataFrames, but that may be overkill: the HP units each hold 12 disks and use much more power than our existing MPC units or the Drobo.

Read on for the gory details.

Test Environment

VMware Host:

· HP DL360 G6 (Sixth Generation)

CPU: 1 x Intel Xeon E5530 quad-core 2.4 GHz with Hyper-Threading enabled (8 logical cores)
Memory: 3 x 4GB DDR3 PC3-10600R RDIMMs (12GB total)
Network: Six 1Gbps Ethernet ports total (2 onboard, plus a 4-port PCIe NIC)

· Local Storage:

HP P400 RAID controller with 512MB cache and battery backup unit (for full read/write caching)
4 x 72GB 10k RPM SAS (2 x RAID 1)

· SAN Storage (iSCSI):

2 x LeftHand (HP) NSM 2120 Storage Modules (mirrored but not load balanced)
2 x RAID 6 arrays combined into one storage pool using SAN/iQ

· VMFS volumes:
3 x 1TB (VMware default configuration)
1 x 250GB (VMware default configuration)

DroboPro:

1 x 1Gbps Ethernet connection
Latest firmware (1.4.1)
8 x 500GB 7200RPM SATA
Single-drive failure protection (dual-disk redundancy disabled for testing)

· VMFS volumes:
2 x 2TB (8MB blocks, per Drobo's best practice documentation; see the note on block size below)
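
As an aside on that block size: VMFS-3 ties the maximum size of any single file on a datastore to the block size chosen at format time, so 8MB blocks are what allow virtual disks approaching 2TB. Here's a minimal Python sketch of that lookup (the limits come from VMware's VMFS-3 documentation; the helper itself is just for illustration):

```python
# VMFS-3 ties the largest single file (e.g., a VMDK) a datastore can hold to
# the block size picked when the volume is formatted.
VMFS3_MAX_FILE_GB = {
    1: 256,    # 1 MB blocks -> 256 GB max file
    2: 512,    # 2 MB blocks -> 512 GB max file
    4: 1024,   # 4 MB blocks -> 1 TB max file
    8: 2048,   # 8 MB blocks -> 2 TB max file (minus 512 bytes)
}

def smallest_block_mb(vmdk_gb):
    """Return the smallest VMFS-3 block size (MB) that can hold a VMDK of vmdk_gb."""
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if vmdk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError("Larger than any single VMFS-3 file can be")

print(smallest_block_mb(300))   # -> 2
print(smallest_block_mb(1800))  # -> 8
```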

· Network:

1 x HP 3500yl gigabit Ethernet switch (24-port, 101Gbps backplane)

· Testing Software:

Host OS: vSphere 4.0
VM OS: Windows XP SP3 (1GB RAM, 9GB HDD)
Notes: VMware Tools installed and the latest virtual hardware upgrade applied.
HD Tach 3.0.4.0

Notes:

HD Tach uses base 10 when converting bytes to megabytes and gigabytes. I use base 2 to match what the OS and applications report.
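
For example, here's a minimal Python sketch of that conversion, using the 51.5 MB/s average read figure from the first test below:

```python
# HD Tach reports decimal megabytes (1 MB = 1,000,000 bytes); Windows and most
# applications report binary megabytes (1 MiB = 1,048,576 bytes). This helper
# converts an HD Tach figure into the base-2 value the OS would show.
def hdtach_mb_to_mib(mb_per_s):
    return mb_per_s * 1_000_000 / (1024 * 1024)

print(round(hdtach_mb_to_mib(51.5), 1))  # 51.5 MB/s (HD Tach) -> ~49.1 MiB/s
```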

First Test:

Test VM on LeftHand 1TB VMFS volume.
[Chart: drobo test 1 - LeftHand Throughput]

We start off slow but maintain high throughput with minimal latency, averaging 8.4ms on the random access test. The average read speed is 51.5 MB/s, with bursts above 61 MB/s throughout the test. Throughput ranges from 28.6 MB/s to over 66 MB/s, but mostly stays above 42 MB/s for the entire test.

Second Test:

Test VM on Drobo 2TB VMFS volume.
[Chart: drobo test 1 - DroboPro Throughput]

After migrating the same test machine over to the Drobo, I ran the exact same test. We see much less top end and very low valleys in the throughput range. The test ranges from 10 MB/s at the lowest to a hair over 45 MB/s on the bursts, with throughput averaging 27.56 MB/s and random access latency of over 450ms. I reran the test at a later time and had a better result.

Third Test:

Test VM on Drobo 2TB VMFS volume (#2).
[Chart: drobo test 2 - DroboPro Throughput]

I reran the HD Tach test on the Drobo, this time using 32MB blocks. The random access latency was much better at 14.4ms. Throughput remained about the same, averaging 29.85 MB/s; the small difference is easily attributed to the larger block size.

Fourth Test:

Test VM on Drobo 2TB VMFS volume during a Storage VMotion.
[Chart: drobo test 2 - Drobo and VMotion]

I started a Storage VMotion of a powered-off clone of the test VM. The copy was being svmotioned to the second VMFS volume on the DroboPro and finished during the first third of the test. As you can see, disk access for the running machine was crushed by the svmotion process on the vSphere host.
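
(As an aside for anyone who would rather script this kind of migration than click through the vSphere client: the rough pyVmomi sketch below shows how a relocation like this can be kicked off programmatically. The vCenter host, credentials, and VM/datastore names are hypothetical placeholders, and this is not how I ran the test.)

```python
# Rough pyVmomi sketch of starting a storage relocation from a script.
# The host, credentials, and object names below are hypothetical.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab use only
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "xp-test-clone")    # powered-off clone
target = find_by_name(vim.Datastore, "DroboPro-VMFS-02")  # second Drobo volume

# A RelocateSpec that names only a datastore moves the VM's files to that
# datastore (Storage VMotion when the VM is powered on, a cold relocate when
# it is off) and returns a task object that can be polled for progress.
spec = vim.vm.RelocateSpec(datastore=target)
task = vm.RelocateVM_Task(spec)
print("Relocation task started:", task.info.state)

Disconnect(si)
```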

Fifth Test:

Test VM on HP local storage.
[Chart: XP VM DAS RAID1]

I also wanted a benchmark to prove that the virtual machine and the host are not the bottlenecks in these tests. I'll let the graph speak for itself.

The sustained throughput and burst speeds are more than enough to show that the vSphere host and virtual hardware have opened the floodgates for high-I/O applications.

Final Thoughts:

I'm expecting too much from this SMB storage device. While I point out the performance issues I found, it really isn't a fair comparison: I'm comparing apples to oranges in this environment.

The Pro is really a good unit and has a lot of potential for SMBs or workgroups. It's brain-dead simple to use and manage. The Drobo line is made up of devices that "just work" out of the box: plug one in, slap a pair of hard drives in, and turn it on.

I don't think I can recommend this unit for the original purpose it was purchased for: shared storage for two ESX hosts and 4-6 virtual servers at a remote campus. Even though it is certified by VMware, the tests make it apparent that it would not support more than a few virtual machines, and certainly not any file servers that need a decent storage subsystem to keep users from complaining about slow file access or delays.

This is not to say "don't buy it," because I really do think it's a worthy product if you need a lot of storage that can be upgraded over time with very few technical skills. What other device lets you yank out a 500GB drive, slap in a 1TB drive, and start using it right away? That's the power of these units.

I can think of a dozen situations here at work where this unit would be a perfect fit. Just not this one.

The lack of monitoring (remote or self) also removes it from consideration for our production environment. I brought this question up with Drobo support, and they were very prompt: I received a reply with a follow-up question in less than an hour, and a recommendation 45 minutes after that.

They recommended I install the Drobo Dashboard software on a virtual machine and set up email alerts from within it. The catch is that the Dashboard is an application, not a service, so I would have to remain logged in for it to keep running. That is not an option.

Alternatives:

The DroboPro is not the biggest dog in Drobo's kennel. They recently released the Drobo Elite, which offers a faster processor, a second gigabit NIC, and some additional file system features that allow multiple computers to access it over the network. It costs about double what the Pro did.

However, I was hoping for more performance from an 8-drive unit than I got. It's just not fast enough for my environment. I will post additional test information from my DroboS when I get the eSATA card installed. I'm looking forward to that!

-Update-
I reviewed my test data and methods and wanted a clearer picture of where the DroboPro sits on my storage chart, so I retested it with additional software and tests. Read on to DroboPro Testing, Part 2.

7 thoughts on "DroboPro Performance Testing"

1. I did. I've got it racked and ready for testing. Work and family have kept me exceptionally busy. Stay tuned for the new post by the end of the week. -Update- Yeah, this week was a train wreck. I've got raw data from the test, but then got word that new firmware was going to be released on Monday that cranks up the VMware performance, so I'll be retesting.

2. I would love to see how the Pro and Elite performance characteristics change as additional drives are added, AND what happens when N+2 data protection is enabled for higher spindle counts.

   It would also be interesting to see what happens when swapping drives: how is performance impacted during "data protection," and how do drive size and drive performance changes impact the unit's performance as a whole?

   In essence, I'm looking to find the "sweet spot" for performance vs. capacity in the device I already own.

   1. Yeah, those would be interesting metrics, but I have only one type of drive available, and I started the unit fully loaded to test it for my needs.

      I have a feeling performance scales fairly linearly starting at one drive. Spindle count increases will always reduce latency, but not always throughput.
