
Posts from the ‘Review’ Category

24 May

VMware View 5.1 Deployed

I took this week after classes finished to tear down our View 4.6 cloud, which was hosted on vSphere 4.1 ESX servers, and redeploy it properly: a dedicated vCenter server, an upgrade of the VMware environment to vSphere 5.0 U1, and then a new View 5.1 rollout.

A few quick observations for those planning upgrades: read the installation, administration, and upgrade manuals completely, and take note of any changes or ancillary upgrades you may need to make.

I ran into a couple of hiccups but nothing too painful.

The security server wouldn’t pair with the connection server until we opened the extra ports in our DMZ firewall and enabled IPsec encapsulation. Yes, it’s clearly documented – it just needed to be read. Oh, and while the installer says you can use either the IP or the FQDN of the connection server when installing the security server – don’t use IPs. Use the FQDN, and make sure your security servers can resolve the FQDN of the connection server.
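If I were doing it again, I’d run a quick pre-flight check from the security server before attempting the pairing. Here’s a minimal sketch of the idea in Python – the hostname is hypothetical and the port list is only an example, so confirm the actual port requirements in the View 5.1 installation guide; also note that the IPsec traffic itself is UDP/ESP, which a TCP probe like this doesn’t cover:

```python
# Pre-flight check: can this security server resolve the connection server's
# FQDN and open TCP connections to it? The hostname and port list are examples
# only -- substitute your own and confirm the required ports in the View docs.
import socket

CONNECTION_SERVER_FQDN = "view-cs01.example.edu"   # hypothetical FQDN
PORTS_TO_CHECK = [443, 4001, 8009]                 # example TCP ports only

def check_dns(fqdn):
    """Return the resolved IP, or None if the FQDN doesn't resolve."""
    try:
        ip = socket.gethostbyname(fqdn)
        print(f"DNS OK: {fqdn} -> {ip}")
        return ip
    except socket.gaierror as err:
        print(f"DNS FAILED for {fqdn}: {err}")
        return None

def check_tcp(host, port, timeout=5):
    """Try to open a TCP connection and report whether the port is reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"TCP {port}: reachable")
    except OSError as err:
        print(f"TCP {port}: blocked or closed ({err})")

if check_dns(CONNECTION_SERVER_FQDN):
    for port in PORTS_TO_CHECK:
        check_tcp(CONNECTION_SERVER_FQDN, port)
```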

Make sure you have a good public cert if you’ll be letting anyone outside your organization connect. If it’s internal-only, bone up on running a certificate authority on your network – you should already be deploying internal certs to your servers and workstations anyway.

I’m digging the new features like host caching (2GB of server RAM dedicated to caching storage… Zoom!) and finally an OS X client that does PCoIP and doesn’t require Microsoft’s RDP client.

I just finished deploying new thin client images with the View 5.1 client and the new root CA. The wildcard cert we purchased in February from GeoTrust was great… except the HP thin clients didn’t have GeoTrust’s root cert, so the entire View environment showed as untrusted and the clients simply failed to connect.
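To catch this kind of trust problem before an image hits hundreds of thin clients, one option is to attempt a verified TLS handshake from a machine carrying only the CA bundle the clients will have. A rough sketch of the idea in Python (the server name below is a placeholder, not our real FQDN):

```python
# Attempt a verified TLS handshake against the View server using only the
# roots this machine trusts. If verification fails, the trust store is missing
# the root (or an intermediate) for the server certificate -- the same failure
# our thin clients hit. The hostname is a placeholder.
import socket
import ssl

VIEW_SERVER = "view.example.edu"   # hypothetical connection/security server FQDN
PORT = 443

context = ssl.create_default_context()   # validates against this machine's CA bundle

try:
    with socket.create_connection((VIEW_SERVER, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=VIEW_SERVER) as tls:
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print("Certificate chain verified. Subject CN:", subject.get("commonName"))
except ssl.SSLError as err:
    print("TLS verification failed -- likely a missing root or intermediate:", err)
```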

Tomorrow I start deploying Win 7 desktops…

21 Jun

New VMware vSphere Cluster

This month has been busy… very rewarding, but bizzzz-aye! Here’s our latest project, now come to a close: upgrade our existing VMware infrastructure to new hardware to better utilize our existing licenses. Easy enough, right?

Old environment:

3 x HP DL380 G5 servers, each identically configured as such:

  • 2 x Dual Core Xeon processors (1.6 GHz)
  • 16 GB of DDR2 ECC RAM
  • 1 x 4-port Gigabit NIC
  • 2 x 850 Watt power supplies
  • ESX 3.5 Update 5

Estimated power usage: 1,950 watts at 60% utilization, averaging 650 watts per host – or 17,082 kWh for the year.

New environment:

(Image: inside a fully configured sixth-generation HP ProLiant DL360 G6.)

3 x HP DL360 G6 servers, each identically configured as such:

  • 2 x Quad Core Xeon H5540 processors (2.4 GHz)
  • 54 GB of DDR3 ECC RAM (12 x 4GB + 3 x 2GB)
  • 1 x 4-port Gigabit NIC
  • 2 x 450 Watt power supplies (set for active/passive fault tolerance)
  • ESX 4.0 Update 2

Estimated power usage: 480 watts with the same virtual machine count. We’ve confirmed an average of 240 watts per host across the two running hosts, or 4,208 kWh for the year; the third host is in standby, using less than 1 W of power. (The arithmetic is sketched just after the savings list below.)

We’re currently leveraging vCenter’s Distributed Power Management (DPM) feature to shut down one host, because there are enough resources on the other two to run all of the virtual machines AND still provide high availability. If we lose a host, HA will restart the lost VMs on the remaining host – and power the standby host back up to provide more resources to the cluster. In the meantime, we’re saving the earth and money by doing just a little more configuration. As our virtual environment grows we may reach a point where we can’t keep one host powered off, but until then – why not?
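For the curious, the go/no-go logic behind parking a host boils down to a simple capacity check: can the powered-on hosts carry every VM, and can they ride out losing one of their own until the standby host wakes up? A back-of-the-envelope sketch – the VM memory demand and the headroom target below are made up purely for illustration:

```python
# Back-of-the-envelope check: with one host in standby, can the powered-on
# hosts carry every VM, and can they survive a host failure until DPM wakes
# the standby host? VM demand and headroom are invented for illustration.
HOST_RAM_GB = 54          # per-host RAM from the spec list above
POWERED_ON_HOSTS = 2
TOTAL_VM_RAM_GB = 40      # hypothetical combined active memory of all VMs
HEADROOM = 0.80           # don't plan to run hosts past 80% of RAM

def fits(host_count, demand_gb):
    """True if the demand fits on `host_count` hosts at the headroom target."""
    return demand_gb <= host_count * HOST_RAM_GB * HEADROOM

print("Runs on two hosts day to day:  ", fits(POWERED_ON_HOSTS, TOTAL_VM_RAM_GB))
print("Survives losing one of the two:", fits(POWERED_ON_HOSTS - 1, TOTAL_VM_RAM_GB))
```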

Savings:

  • 3 RU of space.
  • 12,874 kWh of electricity annually
  • $1,205 per year in power costs
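The arithmetic behind those numbers is short enough to show in full. A quick sketch follows; the electricity rate is back-solved from the figures above (roughly $0.094 per kWh), so treat it as an approximation of our local rate, and expect small rounding differences from the totals listed:

```python
# Annual power math for the old and new clusters. The $/kWh rate is an
# approximation back-solved from the stated savings, not a quoted tariff.
HOURS_PER_YEAR = 24 * 365            # 8,760 hours

old_kw = 650 * 3 / 1000              # three G5 hosts averaging 650 W each
new_kw = 240 * 2 / 1000              # two active G6 hosts averaging 240 W each
                                     # (the standby host draws under 1 W)

old_kwh = old_kw * HOURS_PER_YEAR    # ~17,082 kWh
new_kwh = new_kw * HOURS_PER_YEAR    # ~4,205 kWh
saved_kwh = old_kwh - new_kwh        # ~12,877 kWh

RATE_PER_KWH = 0.094                 # assumed local rate in $/kWh
print(f"Old cluster: {old_kwh:8,.0f} kWh/yr")
print(f"New cluster: {new_kwh:8,.0f} kWh/yr")
print(f"Savings:     {saved_kwh:8,.0f} kWh/yr  (~${saved_kwh * RATE_PER_KWH:,.0f})")
```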

Also in this environment, we took the time to standardize our vSphere host build so that adding a host or rebuilding a failed one is just an install disc and a host profile application away from production. Our previous environment was a pre-production proof of concept that got rolled into production without much validation or configuration standardization.

I’m very pleased with the rollout, and now that we have a modern vSphere cluster attached to our previously installed 28TB NetApp 3140 SAN/NAS cluster, we’re ready to rock into the year with Exchange 2010 and SharePoint 2010 virtualization – the estimated savings in power alone looks to be in the range of $3,000-$4,500 per year. The ROI reports write themselves!

30 Jan

DroboPro Performance Testing Part 2

My original post included a quick and dirty test of raw hard drive performance using HD Tach to give me an idea of what I was working with on my new DroboPro. Of course, as with any benchmarking exercise, there are many more metrics that could paint a clearer picture, but at the time I didn’t have the luxury of running them.

After publishing my testing, I felt it was worth going back and gathering more data. It’s obvious from the hit count that this is a topic more than a few people find interesting, and I’m in a unique position as a consumer to test this unit in a solid development environment against some really good equipment that others can identify with.

So, without further ado… here’s what I did.

Read more

26 Jan

DroboPro Performance Testing

Recently I was able to purchase and test two Drobo storage appliances.

A DroboS with five disks and a DroboPro with eight disks. All the disks are the same size, speed, and manufacturer. The specs are listed further below for your consumption.

The DroboS is attached via USB 2.0 to my desktop workstation – a Dell OptiPlex 745 with 4GB RAM and running Windows 7 (64bit). I have an eSATA card on order. I’ll be posting more details on this specific setup at a later date.

The DroboPro is the device I’m most interested in. Depending on the test results that I get – it may live in a remote campus building attached to two small ESX hosts with six virtual servers on them.

Currently our environment is hosted on three MPC DataFrame 120s. These units are no longer covered by a manufacturer’s warranty thanks to MPC’s crash-and-burn bankruptcy. I do have two HP/LeftHand NSM 2120 storage modules that could replace them, but that may be overkill: the HP units each have 12 disks and use much more power than our existing MPC units or the Drobo. The six virtual servers in question are two domain controllers, two file servers, a print server, and an application server.

Read on for the gory details.

Test Environment

VMware Host:

· HP DL360 G6 (Sixth Generation)

CPU: 1 x Intel Xeon E5530 quad core, 2.4 GHz, with Hyper-Threading enabled (8 logical cores available)
Memory: 3 x 4GB DDR3 PC3-10600R RDIMMs
Network: Six total 1Gbps Ethernet ports (2 onboard, plus a 4-port PCIe NIC)

· Local Storage:

HP P400 RAID controller with 512MB cache and battery backup unit (for full read/write caching)
4 x 72GB 10k RPM SAS (2 x RAID1)

· NAS Storage

2 x LeftHand (HP) NSM 2120 Storage Modules (mirrored but not load balanced)
2 x RAID 6 arrays, combined into one storage pool using SAN/iQ

· VMFS volumes:
3 x 1TB (VMware default configuration)
1 x 250GB (VMware default configuration)

DroboPro

1 x 1Gbps Ethernet connection
Latest firmware (1.4.1)
8 x 500GB 7200RPM SATA
Single drive failure protection (dual disk disabled for testing)

· VMFS volumes:
2 x 2TB (8MB blocks, per best practice documentation from Drobo)

· Network:

1 x HP ProCurve 3500yl gigabit Ethernet switch (24-port, 101 Gbps backplane)

· Testing Software:

Host OS: vSphere 4.0
VM OS: Windows XP SP3 (1GB RAM, 9GB HDD)
Notes: VMware tools and latest virtual hardware upgrades applied.
HD Tach 3.0.4.0

Notes:

HD Tach uses base 10 when converting bytes to megabytes and gigabytes; I use base 2 to match how the OS and applications report sizes.
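Since the two bases come up throughout the results, here’s the conversion spelled out: a decimal megabyte is 10^6 bytes and a binary mebibyte is 2^20 bytes, so the same throughput reads about 4.9% higher in HD Tach’s decimal units. The sample value below is illustrative, not one of the actual readings:

```python
# Convert HD Tach's decimal MB/s (10**6 bytes) to binary MiB/s (2**20 bytes).
# The same throughput is numerically ~4.9% higher in decimal units.
def decimal_mb_to_mib(mb_per_s):
    return mb_per_s * 10**6 / 2**20

sample = 50.0   # illustrative reading, not one of the results below
print(f"{sample:.1f} MB/s (HD Tach, base 10) = {decimal_mb_to_mib(sample):.1f} MiB/s (base 2)")
```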

First Test:

Test VM on the LeftHand 1TB VMFS volume.
(Chart: drobo test 1 – LeftHand throughput)

We start off slow but maintain high throughput with minimal latency, averaging 8.4 ms on the random access tests. The average read speed is 51.5 MB/s, with bursts above 61 MB/s throughout the test. Throughput ranges from 28.6 MB/s to over 66 MB/s, but mostly stays above 42 MB/s for the entire test.

Second Test:

Test VM on the Drobo 2TB VMFS volume.
(Chart: drobo test 1 – DroboPro throughput)

After migrating the same test machine over to the Drobo, I ran the exact same test. We see much less top end and some very low valleys in the throughput range. The test runs from 10 MB/s at the lowest to a hair over 45 MB/s on the bursts, with throughput averaging 27.56 MB/s and random access latency of over 450 ms. I reran the test at a later time and had a better result.

Third Test:

Test VM on the Drobo 2TB VMFS volume (#2).
(Chart: drobo test 2 – DroboPro throughput)

I reran the HD Tach test on the Drobo, this time using 32MB blocks. The random access latency was much better at 14.4 ms. Throughput remained about the same, averaging 29.85 MB/s – the difference is easily attributed to the larger block size.

Fourth Test:

Test VM on the Drobo 2TB VMFS volume during a Storage vMotion.
(Chart: drobo test 2 – Drobo during Storage vMotion)

I started a Storage vMotion of a powered-off clone of the test VM. The copy was being migrated to the second VMFS volume on the DroboPro, and it finished during the first third of the test. As you can see, disk access for the running machine was crushed by the vSphere host’s Storage vMotion process.

Fifth Test:

Test VM on HP local storage.
(Chart: XP VM on DAS RAID 1)

I also wanted a baseline to prove that the virtual machine and the host are not the bottlenecks in these tests. I’ll let the graph speak for itself.

The sustained throughput and burst speeds are more than enough to prove that the vSphere host and virtual hardware have opened up the floodgates to high I/O applications.

Final Thoughts:

I’m expecting too much from this SMB storage device. While I point out the performance issues I found, it really isn’t fair – I’m comparing apples to oranges in this environment.

The Pro is really a good unit and has a lot of potential for SMBs or workgroups. It’s brain-dead simple to use and manage. The Drobo line of products “just work” out of the box: plug one in, slap a pair of hard drives in, and turn it on.

I don’t think I can recommend this unit for the original purpose it was purchased for: shared storage for two ESX hosts and 4-6 virtual servers at a remote campus. Even though it is certified by VMware, it’s apparent from the tests that it would not support more than a few virtual machines – and certainly not any file servers that need a decent storage subsystem to keep users from complaining about slow file access or delays.

This is not to say “don’t buy it,” because I really do think it’s a worthy product if you need a lot of storage that can be upgraded over time with very few technical skills. What other device out there lets you yank out a 500GB drive, slap in a 1TB, and start using it right away? That’s the power of these units.

I can think of a dozen situations here at work where this unit would be a perfect fit. Just not this one.

The lack of monitoring (remote or self-monitoring) also rules it out for our production environment. I brought this question up with Drobo support, and they were very prompt: I received a reply with a follow-up question in less than an hour, and a recommendation about 45 minutes after that.

They recommended I install the Drobo Dashboard software on a virtual machine and set up email alerts from within it. The catch is that the Dashboard is an application, not a service – someone would have to remain logged in for it to keep running. That is not an option.

Alternatives:

The DroboPro is not the biggest dog in Drobo’s kennel. They recently released the Drobo Elite, which offers a faster processor, a second gigabit NIC, and some additional features that allow multiple computers to access it over the network. It costs about double what the Pro did.

However, I was hoping for more performance from an 8 drive unit than I got. It’s just not fast enough for my environment. I will post additional test information from my DroboS when I get the eSATA card installed. I’m looking forward to that!

-Update-
I reviewed my test data and methods and wanted a clearer picture of where the DroboPro sits in my lineup of storage, so I retested it with additional software and benchmarks. Read on to DroboPro Performance Testing, Part 2.

12 Jun

Nehalem quietly

I love deliveries. Especially this time of year. UPS, FedEx, AirTrans, carrier pigeon… I don’t care – it’s usually something expensive and always something that is going to make my job easier.

Today, FedEx delivered a pallet of new HP servers and parts for our new campus. The pallet also contained a few parts for servers we had just received last week.

So I’ve got five new sixth-generation HP DL360 servers. HP just released the new line last month with Intel’s new Xeon 5500 processors – I like to think of them as the pro version of the consumer i7 chips: four hyperthreaded cores and an integrated memory controller per chip. And even though it only matches the clock speed of our existing G5 servers, it’s smoking fast.

Opening the little 1U chassis shows a lot of room for expansion, given the amount of gear this unit already carries. It has an onboard RAID controller that can address up to eight 2.5” SAS or SATA drives, an IDE controller for optical media, and a USB port and SD card slot on the motherboard – great for those moronic copy-protection dongles or emergency boot drives and utilities.

I’m not going to sit here and try to sell you a server by just spewing specs… what HP really did to impress me is cut the noise and power usage so drastically that I seriously thought there was something wrong with it.

These servers are usually so loud I can’t build them at my desk – I’ve had to take them to our staging room and build them on a bench there. Not anymore.

I actually had this DL360 G6 installing Windows Server 2008 64-bit from DVD on a bench next to an idle Dell OptiPlex 755. When I put my head between the two to check whether the HP’s fans were actually spinning, the Dell was the louder of the two. I had never actually heard a SAS drive until today… amazing.

After diving into the onboard monitoring, I found out how they keep the fans spinning at 19% while staying cool: 28 onboard temperature sensors watch everything in the box, and if a section gets warmer, only the fans dedicated to that area speed up – and only as much as needed to move enough air to cool it.
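Just to illustrate the idea (this is a toy model, not HP’s actual control loop – the zone names, targets, and scaling are invented):

```python
# Toy illustration of zoned fan control: each zone has its own sensors, and
# only the fans assigned to a warm zone speed up, roughly in proportion to how
# far that zone is over its target. All numbers are invented for illustration.
BASE_SPEED_PCT = 19          # observed idle fan speed
MAX_SPEED_PCT = 100

def fan_speed(zone_temp_c, target_c=40, pct_per_degree=4):
    """Return a fan speed percentage for one zone given its temperature."""
    over = max(0, zone_temp_c - target_c)
    return min(MAX_SPEED_PCT, BASE_SPEED_PCT + over * pct_per_degree)

zones = {"CPU": 47, "memory": 38, "PCI riser": 41}   # hypothetical readings, deg C
for zone, temp in zones.items():
    print(f"{zone:10s} {temp:3d} C -> fans at {fan_speed(temp):.0f}%")
```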

With a single quad-core processor, three 2GB memory DIMMs, four 10,000 RPM hard drives, and a four-port gigabit PCIe NIC, this server pulled only 130 watts across both power supplies at its peak. Idling, it sat at 93 watts. The only time I ever heard the fans was when I started the server up after that near silence.

Yes, I’m that impressed with this new line – I’m looking forward to next year when we upgrade our ESX environment to G6 host servers… maybe I won’t be able to hear the server room from down the hall anymore.
