Recently I was able to purchase and test two Drobo storage appliances: a DroboS with five disks and a DroboPro with eight. All the disks are the same size, speed, and manufacturer. The specs are listed further below for your consumption.
The DroboS is attached via USB 2.0 to my desktop workstation, a Dell OptiPlex 745 with 4GB RAM running Windows 7 (64-bit). I have an eSATA card on order. I'll be posting more details on this specific setup at a later date.
The DroboPro is the device I'm most interested in. Depending on the test results I get, it may live in a remote campus building attached to two small ESX hosts with six virtual servers on them.
Currently our environment is hosted on three MPC DataFrame 120s. These units are no longer covered by a manufacturer's warranty due to MPC's crash-and-burn bankruptcy. I have two HP/LeftHand 2120 storage modules that could replace them, but that may be overkill: the HP units each have 12 disks and use much more power than our existing MPC units or the Drobo. The six virtual servers are two domain controllers, two file servers, a print server, and an application server.
Read on for the gory details.
· HP DL360 G6 (Sixth Generation)
CPU: 1 x Intel Xeon E5530 Quad Core 2.4 GHz with Hyper threading enabled (8 total cores available)
Memory: 3 x 4GB PC3-10600R RDIMMs DDR3
Network: Six total 1Gbps Ethernet ports (2 on-board, 4 port PCIe NIC)
· Local Storage:
HP P400 Raid Controller with 512MB Cache and Battery Backup Unit (for full read/write caching)
4 x 72GB 10k RPM SAS (2 x RAID1)
· NAS Storage:
2 x LeftHand (HP) NSM 2120 Storage Modules (mirrored but not load balanced)
2 x RAID 6 arrays combined into one storage pool using SAN iQ
· VMFS volumes:
3 x 1TB (vmware default configurations)
1 x 250GB (vmware default configurations)
· DroboPro:
1 x 1Gbps Ethernet connection
Latest firmware (1.4.1)
8 x 500GB 7200RPM SATA
Single drive failure protection (dual disk disabled for testing)
· VMFS volumes:
2 x 2TB (8MB blocks, per best practice documentation from Drobo)
1 x HP 3500 YL gigabit Ethernet switch (24 port, 101Gbps backplane)
· Testing Software:
Host OS: vSphere 4.0
VM OS: Windows XP SP3 (1GB RAM, 9GB HDD)
Notes: VMware tools and latest virtual hardware upgrades applied.
HD Tach
HD Tach uses base 10 for conversion of bytes to megabytes and gigabytes. I use base 2 to reflect the OS and application measurements.
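The conversion is simple enough to sketch, assuming HD Tach's 1 MB = 1,000,000 bytes versus the OS's 1 MB = 1,048,576 bytes (the function name and example value are mine):

```python
# Sketch: convert HD Tach's base-10 MB/s figures to the base-2 MB/s
# figures the OS and applications report.

def decimal_to_binary_mb(mb_per_s_base10: float) -> float:
    """Convert base-10 MB/s (1 MB = 1,000,000 bytes)
    to base-2 MB/s (1 MB = 1,048,576 bytes)."""
    return mb_per_s_base10 * 1_000_000 / 1_048_576

# e.g. a 51.5 MB/s base-10 reading is about 49.11 MB/s in base 2
print(round(decimal_to_binary_mb(51.5), 2))
```

The roughly 5% gap is worth keeping in mind when comparing HD Tach numbers against Windows copy dialogs or application logs.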
Test VM on LeftHand 1TB VMFS volume.
We start off slow but maintain high throughput with minimal latency, averaging 8.4ms on the random access tests. Average read speed is 51.5 MB/s, with bursts above 61 MB/s throughout the test. Test bandwidth ranges from 28.6 MB/s to over 66 MB/s, mostly staying above 42 MB/s for the entire run.
Test VM on Drobo 2TB VMFS volume.
After migrating the same test machine over to the Drobo, I ran the exact same test. We see much less top end and very low valleys in the throughput range. The test ranges from 10 MB/s at the lowest to a hair over 45 MB/s on the bursts, with throughput averaging 27.56 MB/s and random access latency of over 450ms. I reran the test at a later time and had a better result.
Test VM on Drobo 2TB VMFS volume (#2)
I reran the HD Tach test on the Drobo, this time using 32MB blocks. The random access latency was much better at 14.4ms. Throughput remained about the same, averaging 29.85 MB/s; the difference is easily attributed to the larger block size.
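A quick back-of-the-envelope comparison puts the two platforms in perspective (the averages are from the test runs above; the variable and helper names are mine):

```python
# Compare the average HD Tach throughput of the Drobo runs against the
# LeftHand baseline, as a percentage.

lefthand_avg = 51.5     # MB/s, LeftHand 1TB VMFS volume
drobo_avg_run1 = 27.56  # MB/s, Drobo run #1
drobo_avg_run2 = 29.85  # MB/s, Drobo run #2 (32MB test blocks)

def pct_of(part: float, whole: float) -> float:
    """Return part as a percentage of whole."""
    return 100.0 * part / whole

print(f"Drobo run #1: {pct_of(drobo_avg_run1, lefthand_avg):.0f}% of LeftHand")
print(f"Drobo run #2: {pct_of(drobo_avg_run2, lefthand_avg):.0f}% of LeftHand")
```

Either way you slice it, the DroboPro averaged a bit over half the LeftHand's sustained throughput in these tests.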
Test VM on Drobo 2TB VMFS volume during a storage vmotion:
I started a Storage VMotion of a powered-off clone of the test VM. The copy was being svmotioned to the second VMFS volume on the DroboPro and finished during the first third of the test. As you can see, disk access for the running machine was crushed by the svmotion process on the vSphere host.
I also wanted a benchmark in this test to prove that the virtual machine and the host are not the bottlenecks. I'll let the graph speak for itself.
The sustained throughput and burst speeds are more than enough to prove that the vSphere host and virtual hardware have opened up the floodgates for high I/O applications.
I'm expecting too much from this SMB storage device. While I point out the performance issues I found, it really isn't fair: I'm comparing apples to oranges in this environment.
The Pro is really a good unit and has a lot of potential for SMBs or workgroups. It's brain-dead simple to use and manage. The Drobo line of products "just works" out of the box: plug it in, slap a pair of hard drives in, and turn it on.
I don't think I can recommend this unit for the original purpose it was purchased for: shared storage for two ESX hosts and 4-6 virtual servers on a remote campus. Even though it is certified by VMware, it's apparent from the tests that it would not support more than a few virtual machines, and certainly not any file servers that require a decent storage subsystem to keep users from complaining about slow file access or delays.
This is not to say "don't buy it," because I really do think it's a worthy product if you need a lot of storage that can be upgraded over time with very few technical skills. What other device out there lets you yank out a 500GB drive, slap in a 1TB, and start using it right away? That's the power of these units.
I can think of a dozen situations here at work where this unit would be a perfect fit. Just not this one.
The lack of monitoring (remote or self) also removes it from our production environment. I brought this question up with Drobo support, and they were very prompt in their turnaround: I received a reply with a follow-up question in less than an hour, and a recommendation 45 minutes after that.
They recommended I install the Drobo Dashboard software on a virtual machine and set up email alerts from within the software. The catch is that the Dashboard is an application, not a service. I would have to remain logged in for it to be running. This is not an option.
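For what it's worth, here's a minimal sketch of the kind of watchdog I'd rather run from a scheduled task or service: check that the unit answers on its iSCSI port and send an email if it doesn't. Every hostname, address, and port below is a placeholder, not a value from my setup.

```python
# Sketch of an unattended reachability check with email alerting.
# All hosts and addresses are placeholders.

import smtplib
import socket
from email.message import EmailMessage

def host_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(smtp_host: str, sender: str, recipient: str, body: str) -> None:
    """Send a plain-text alert email via the given SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = "DroboPro unreachable"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Example wiring (placeholder addresses), run on a schedule:
#   if not host_reachable("192.0.2.10", 3260):   # iSCSI port
#       send_alert("mail.example.com", "drobo@example.com",
#                  "admin@example.com", "DroboPro did not answer on iSCSI.")
```

This only tells you the box is on the network, not the state of the disk pack, so it's a stopgap rather than a substitute for real Dashboard alerting.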
The DroboPro is not the biggest dog in Drobo's kennel. They recently released the Drobo Elite, which offers a faster processor, a second gigabit NIC, and some additional file system features that allow access from multiple computers on the network. The cost is about double what the Pro was.
However, I was hoping for more performance from an eight-drive unit than I got. It's just not fast enough for my environment. I will post additional test information from my DroboS when I get the eSATA card installed. I'm looking forward to that!
I reviewed my test data and methods and wanted a clearer picture of where the DroboPro was on my chart of storage. So I retested the DroboPro with additional software and tests for additional data. Read on to DroboPro Testing, Part 2.
7 thoughts on “DroboPro Performance Testing”
Did you ever receive the Drobo Elite? I am very interested if there was a performance increase.
I did. I’ve got it racked and ready for testing. Work and family have kept me exceptionally busy. Stay tuned for the new post by the end of the week. -update- Yeah this week was a trainwreck. I’ve got raw data from the test but then got word that new firmware was going to be released on Monday that cranks up the VMware performance so I’ll be retesting.
I would love to see how the Pro and Elite performance characteristics change as additional drives are added AND what happens when N+2 data protection is enabled for higher spindle-counts.
It would also be interesting to see what happens when swapping drives: how is performance impacted during “data protection” as well as how drive size and drive performance changes impact the unit’s performance as a whole.
In essence, I’m looking to find the “sweet spot” for performance vs capacity in the device I already own.
Yeah, those would be interesting metrics, but I have only one type of drive available, and I started the unit fully loaded to test it for my needs.
I have a feeling performance scales linearly starting at one drive. Spindle count increases will always reduce latency, but won't always increase throughput.