Caution: Articles are written for technical, not grammatical, accuracy. If poor grammar offends you, proceed with caution ;-)
I attended the second day of the HP Converged Infrastructure Roadshow in NYC last week. Most of the day was spent watching PowerPoints and demos for the HP Matrix stuff and Virtual Connect. Then came lunch. I finished my appetizer and realized that the buffet being set up was for someone else. My appetizer was actually lunch! Thank God there was cheesecake on the way…
There was a session on unified storage, which mostly covered the LeftHand line. At one point, I asked if the data de-dupe was source based or destination based. The “engineer” looked like a deer in the headlights and promptly answered “It’s hash based.” ‘Nuff said… The session covering the G6 servers was OK, but “been there done that.”
Other than the cheesecake, the best part of the day was the final presentation. The last session covered the differences in the various blade servers from several manufacturers. Even though I work for a company that sells HP, EMC and Cisco gear, I believe that x64 servers, from a hardware perspective, are really generic for the most part. Many will argue why their choice is the best, but most people choose a brand based on relationships with their supplier, the manufacturer or the dreaded “preferred vendor” status. Obviously, this was an HP-biased presentation, but some of the math the BladeSystem engineer (I forgot to get his name) presented really makes you think.
Let's start with a typical configuration for VMs. He mentioned that this was a “Gartner recommended” configuration, but I could not find anything about it anywhere online. Even so, it's a pretty fair portrayal of a typical VM.
Typical Virtual Machine Configuration:
- 3-4 GB Memory
- 300 Mbps I/O (0.3Gb total)
- 100 Mbps Ethernet (0.1Gb)
- 200 Mbps Storage (0.2Gb)
Processor count was not discussed, but as you will see, that may not be a big deal since most processors are overpowered for today's applications (I said MOST). IOPS is not a factor in these comparisons either; that is a function of the storage system.
So, let's take a look at the typical server configuration. In this article we are comparing blade servers, but this configuration is typical even for a “2U” rack server. He called this an “eightieth percentile” server, meaning it will meet the requirements of 80% of server workloads.
Typical Server Configuration:
- 2 Sockets
- 4-6 cores per socket
- 12 DIMM slots
- 2 Hot-plug Drives
- 2 LAN on Motherboard (LOM) ports
- 2 Mezzanine Slots (Or PCI-e slots)
Now, say we take this typical server and load it with 4GB or 8GB DIMMs. This is not a real stretch of the imagination. With 4GB DIMMs in the 12 slots, that gives us 48GB of RAM (96GB with 8GB DIMMs). Now it's time for some math:
Calculations for a server with 4GB DIMMs:
- 48GB Total RAM ÷ 3GB Memory per VM = 16 VMs
- 16 VMs ÷ 8 cores = 2 VMs per core
- 16 VMs * 0.3Gb per VM = 4.8 Gb I/O needed (x2 for redundancy)
- 16 VMs * 0.1Gb per VM = 1.6Gb Ethernet needed (x2 for redundancy)
- 16 VMs * 0.2Gb per VM = 3.2Gb Storage needed (x2 for redundancy)
Calculations for a server with 8GB DIMMs:
- 96GB Total RAM ÷ 3GB Memory per VM = 32 VMs
- 32 VMs ÷ 8 cores = 4 VMs per core
- 32 VMs * 0.3Gb per VM = 9.6Gb I/O needed (x2 for redundancy)
- 32 VMs * 0.1Gb per VM = 3.2Gb Ethernet needed (x2 for redundancy)
- 32 VMs * 0.2Gb per VM = 6.4Gb Storage needed (x2 for redundancy)
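
If you want to check the arithmetic yourself, here is a minimal Python sketch of the math above. The 3GB and 0.3Gb per-VM figures are the “typical VM” numbers from the presentation, not measured values:

```python
# Per-VM profile from the "typical VM" configuration above (assumptions, not measurements).
VM_RAM_GB = 3          # memory per VM
VM_IO_GB = 0.3         # total I/O per VM (0.1 Gb Ethernet + 0.2 Gb storage)

def host_math(total_ram_gb, cores=8):
    """Repeat the per-host calculations for a given amount of RAM."""
    vms = total_ram_gb // VM_RAM_GB
    return {
        "VMs": vms,
        "VMs per core": vms / cores,
        "I/O needed (Gb)": vms * VM_IO_GB,    # double for redundancy
        "Ethernet needed (Gb)": vms * 0.1,    # double for redundancy
        "Storage needed (Gb)": vms * 0.2,     # double for redundancy
    }

print(host_math(48))   # 12 slots x 4GB DIMMs
print(host_math(96))   # 12 slots x 8GB DIMMs
```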
Are you with me so far? I see nothing wrong with any of these yet.
Now, we need to look at the different attributes of the blades:
* The IBM LS42 and HP BL490c each have 2 internal non-hot-plug drive slots
The “dings” against each:
- Cisco B200M1 has no LOM and only 1 mezzanine slot
- Cisco B250M1 has no LOM
- Cisco chassis only has one pair of I/O modules
- Cisco chassis only has four power supplies – may cause issues using 3-phase power
- Dell M710 and M905 have only 1GbE LOMs (Allegedly, the chassis midplane connecting the LOMs cannot support 10GbE because it lacks a “back drill.”)
- IBM LS42 has only 1GbE LOMs
- IBM chassis only has four power supplies – may cause issues using 3-phase power
Now, from here, the engineer made comparisons based on loading each blade with 4GB or 8GB DIMMs. Basically, some of the blades would not support a full complement of VMs when loaded with a full complement of DIMMs. What does this mean? Don't rush out and buy blades stuffed with DIMMs, or your memory utilization could be lower than expected. What it really means is that you need to ASSESS your needs and DESIGN an infrastructure based on those needs. It seems to me that it makes more sense to consider this at the design stage so that you can come up with some TCO numbers for each vendor.

So, what I will do is give you a maximum number of VMs per blade and per chassis. We will look at the maximum number of VMs for each blade based on total RAM capability and on total I/O capability; the lower of the two becomes the total possible VMs per blade for the overall configuration. To simplify things, I took the total possible RAM, subtracted 6GB for the hypervisor and overhead, and divided by 3 to come up with the number of 3GB VMs each blade could host. I also took the size specs for each chassis, calculated the maximum possible chassis per rack, and then calculated the number of VMs per rack. The number of chassis per rack does not account for top-of-rack switches. If these are needed, you may lose one chassis per rack, but most of the systems will allow for an end-of-row or core switching configuration.
One thing to remember is that this is a quick calculation. It assumes 6GB of RAM for the hypervisor and overhead, and it is by no means based on numbers from a real assessment. The Cisco B250M1 blade is capped at 66 VMs because of the amount of I/O it can support: 20Gb redundant I/O ÷ 0.3Gb I/O per VM = 66 VMs.
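
To make the methodology concrete, here is a minimal sketch of that sizing logic: the lower of the RAM-limited and I/O-limited counts wins. The 6GB overhead and per-VM figures are the assumptions stated above, not measured values:

```python
# Assumptions from the article, not from a real assessment.
HYPERVISOR_OVERHEAD_GB = 6
VM_RAM_GB = 3
VM_IO_GB = 0.3

def max_vms_per_blade(max_ram_gb, redundant_io_gb):
    """Max VMs = the lower of the RAM-limited and I/O-limited counts."""
    ram_limited = (max_ram_gb - HYPERVISOR_OVERHEAD_GB) // VM_RAM_GB
    io_limited = int(redundant_io_gb / VM_IO_GB)
    return int(min(ram_limited, io_limited))

# Example: Cisco B250M1 with 8GB DIMMs. RAM alone would allow 126 VMs,
# but 20Gb of redundant I/O caps it at 66.
print(max_vms_per_blade(384, 20))   # -> 66
```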
I set out on this journey to take the ideas from an HP engineer and be as fair as I could in my version of this presentation. I did not know what the outcome would be, but I am pleased to find that HP blades offer the highest VM-per-rack numbers.
The final part of the HP presentation dealt with cooling and power comparisons. One thing that I was surprised to hear, but have not confirmed, is that the Cisco blades want to draw more air (in CFM) than one perforated tile will allow. I will not even get into the “CFM per VM” or “Watt per VM” numbers, but they also favored HP blades.
Please, by all means challenge my numbers. But back them up with numbers yourself.
| | Cisco B200M1 | Cisco B250M1 | Dell M710 | Dell M905 | IBM LS42 | HP BL460c | HP BL490c | HP BL685c |
|---|---|---|---|---|---|---|---|---|
| Max RAM with 4GB DIMMs (GB) | 48 | 192 | 72 | 96 | 64 | 48 | 72 | 128 |
| Total VMs possible | 16 | 64 | 24 | 32 | 21 | 16 | 24 | 42 |
| Max RAM with 8GB DIMMs (GB) | 96 | 384 | 144 | 192 | 128 | 96 | 144 | 256 |
| Total VMs possible | 32 | 128 | 48 | 64 | 42 | 32 | 48 | 85 |
| Max total redundant I/O (Gb) | 10 | 20 | 22 | 22 | 22 | 30 | 30 | 60 |
| Total VMs possible | 33 | 66 | 72 | 73 | 73 | 100 | 100 | 200 |
| Max VMs per blade (4GB DIMMs) | 16 | 64 | 24 | 32 | 21 | 16 | 24 | 42 |
| Max VMs per chassis (4GB DIMMs) | 128 | 256 | 192 | 256 | 147 | 256 | 384 | 336 |
| Max VMs per blade (8GB DIMMs) | 32 | 66 | 48 | 64 | 42 | 32 | 48 | 85 |
| Max VMs per chassis (8GB DIMMs) | 256 | 264 | 384 | 512 | 294 | 512 | 768 | 680 |
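
For what it's worth, the per-chassis rows are just the per-blade numbers multiplied by how many of that blade fit in a chassis. Here is a quick sketch using the slot counts the table implies (half-height versus full-height or full-width blades); double-check them against your own chassis before relying on them:

```python
# Blades per chassis as implied by the table (my assumption; verify against your gear).
BLADES_PER_CHASSIS = {
    "Cisco B200M1": 8,   # half-width, UCS 5108
    "Cisco B250M1": 4,   # full-width
    "Dell M710": 8,      # full-height, M1000e
    "Dell M905": 8,      # full-height
    "IBM LS42": 7,       # double-wide, BladeCenter H
    "HP BL460c": 16,     # half-height, c7000
    "HP BL490c": 16,     # half-height
    "HP BL685c": 8,      # full-height
}

# "Max VMs per blade (4GB DIMMs)" row from the table above.
VMS_PER_BLADE_4GB = {"Cisco B200M1": 16, "Cisco B250M1": 64, "Dell M710": 24,
                     "Dell M905": 32, "IBM LS42": 21, "HP BL460c": 16,
                     "HP BL490c": 24, "HP BL685c": 42}

for blade, vms in VMS_PER_BLADE_4GB.items():
    print(f"{blade}: {vms * BLADES_PER_CHASSIS[blade]} VMs per chassis")
```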
Hey Dave, I think your sums are wrong (or I’m thick, which is most likely!) on the VMs/8GB/chassis.
Under the Cisco B250M1, Max VM/blade/8GB DIMMs – row 9, data column 2 – should you have 126 instead of 62? That would make max VMs/chassis 126 * 8 = 1008? Or did I miss something?
Of course this kind of matrix has limited real-world usefulness and is really the toy of technical marketing engineers who are looking to put blinkers over the heads of customers and blind them to the full, big picture. Not that I’m suggesting you are just regurgitating HP’s bad maths and propaganda! 😀
As you say about half-way through your post, there is much more design work needed than these simple matrices, and in fact I think these myopic tables do more harm than good even when they make Cisco look good – Scott Drummond, who I respect enormously, also has the same opinion (remember his debate with Crosby when he said exactly this on stage where “those VMware numbers are too good to be true, so I doubt the methodology”).
Great site!
Cheers
Steve
Actually Steve, after I looked at it again, the number should have been 66. I added a reason under the chart:
The Cisco B250M1 blade is capped at 66 VMs because of the amount of I/O it can support: 20Gb redundant I/O ÷ 0.3Gb I/O per VM = 66 VMs.
You are correct about what I am trying to say. There really is nothing “wrong” with ANY of the blades (and I really dislike the IBM stuff) as long as you properly design your infrastructure. Also remember that with cloud computing it is not all about how many VMs you can squeeze into a space. To me, the orchestration and automation become very important, especially with a large number of VMs crammed into that space. This is where I think Cisco AND HP shine right now.
Dave
I’ve seriously read this article three times and still don’t get what you or HP for that matter is trying to show.
Isn’t virtualization about combining loads? When have you ever seen a load max out all at the same time? (except for badly scheduled backups.) The numbers used by HP don’t seem to match industry standards when I look at the last couple of Cap. Planners I did.
@Dave Convery
When have you ever seen all VMs needing 300Mbps? I have never ever seen this. Even during the backup that hardly happens.
Although I’ve met lots of marketing and consulting folks who talk about 100+ guests per blade, I’ve not met a customer who’s doing it yet though I’m sure they exist somewhere.
Factor in their risk appetite, cluster overhead, capacity planning etc. and these kind of numbers are moot 🙂
Cheers
Steve
@Duncan
You are correct, Duncan. I have done quite a few Cap Planner assessments as well; 300Mbps seems high most of the time, and so does 3GB of RAM. I did say you need to “ASSESS and DESIGN”. This was based on what he claimed was a “Gartner recommended” VM configuration, which is the only piece that I can't confirm. I am very sure that you could put even more VMs on any of these blades. The numbers were for comparison only.
It's good that two powerhouses like Stevie and you are commenting here! I sat through the HP presentation and it was very biased, obviously. I tried to take that and remove at least some of the bias.
@Duncan
@Steve Chambers
Posted some REAL capacity numbers based on recent assessments ->
http://www.dailyhypervisor.com/2009/12/21/is-your-blade-ready-for-virtualization-part-2-real-numbers/
Dave
Dave,
we have discussed similar topics in the past and I know we tend to “disagree” on a number of things… 🙂
As Duncan pointed out, the I/O requirements seem to be based on a typical marketing methodology called “marketing reverse engineering”. That is: “we need to get to this number, so what should the assumptions be?”. This is typical marketing practice (not only from HP, obviously). They had to make this (ridiculous) I/O assumption, otherwise the Cisco B250M1 would look too good (as it can support two tons of RAM).
BTW I really think that these numbers are pointless if you look at what’s ACTUALLY happening in the field (at least as far as I can see).
HP/IBM/Dell/whoever can market their own stuff in their own way, pointing out their own perceived advantages, but at the end of the day it seems there is a common (bad?) deployment pattern in the industry today that makes all these efforts evaporate.
I wrote this a few weeks ago: http://it20.info/blogs/main/archive/2009/12/10/1427.aspx
We agree 100% on this though:
>I believe that x64 servers, from a hardware perspective, are really generic for the most
>part. Many will argue why their choice is the best, but most people choose a brand based
>on relationships with their supplier, the manufacturer or the dreaded “preferred vendor”
>status
Massimo.
P.S. By the way, if I were HP, I would take what Gartner says with a grain of salt. They even predicted that Itanium would take over the world… 🙂
Myself, I love the 495c blade; I'm surprised you didn't mention it. I haven't used HP blades myself yet (we do have a Dell blade chassis with a bunch of M605s), though I am pushing to go HP next year with Virtual Connect and the 495c (or perhaps 12-core Opterons if they are available in time, probably Q2), combined with boot from SAN w/MPIO and ditching the need for disks on the blades.
When I priced the stuff out earlier this year (around July), it came in far cheaper than the Dell gear, mainly because of the added memory slots. I also look forward to using Virtual Connect. The management stuff on the Dell blades left a lot to be desired.
64GB with 4GB DIMMs, 12 CPU cores, and integrated dual-port 10GbE Virtual Connect. On one of my ~100 VM setups, the average is about 1.8GB per VM. I like the CPU core:memory ratio that the 495c offers; I'm not sure I'd want much more RAM with only 12 cores.
One other area where HP did tout their stuff over the competition is backplane performance, which they claimed was in the ~5Tb range; they said folks like IBM have to limit their expansion abilities in some configurations because of their slower backplane. Not that I have plans to drive that much throughput… maybe in my dreams.
Also, it appears you listed the 685c as a 2-socket system when it's in fact a 4-socket system.
Though keep in mind redundancy – http://www.theregister.co.uk/2009/01/09/hp_bladesystem_problem/
@nate
The BL495c is an excellent blade as well. It is basically an AMD version of the BL490c. The chart was already a little too large and I didn't want to list every single blade server. Take a look at ESXi on an SD card as well; I prefer it to boot from SAN.
You’re correct about the BL685c. It is a four socket machine and I will fix the chart.
Take a look at Scott Lowe’s posts on Flex-10. They are very informative. He also has some great stuff on the Cisco UCS.
http://blog.scottlowe.org/2009/07/06/using-vmware-esx-virtual-switch-tagging-with-hp-virtual-connect/
http://blog.scottlowe.org/2009/07/09/using-multiple-vlans-with-hp-virtual-connect-flex-10/
http://blog.scottlowe.org/2009/07/09/follow-up-about-multiple-vlans-virtual-connect-and-flex-10/
http://blog.scottlowe.org/2009/08/28/thinking-out-loud-hp-flex-10-design-considerations/
When it comes to redundancy, consider having two chassis if you can afford it, and then limit the number of hosts per HA cluster. Check out Duncan Epping's posts on this:
http://www.yellow-bricks.com/2009/02/09/blades-and-ha-cluster-design/
http://www.yellow-bricks.com/tag/ha/
Basically, I limit clusters to 8 hosts and only put 4 hosts in each chassis. You can scale it up if you have more chassis.
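As a rough illustration of that trade-off (purely illustrative numbers, not from an assessment):

```python
# Sketch of the chassis-failure reasoning above: an 8-host HA cluster spread
# 4 hosts per chassis must be able to absorb the loss of a whole chassis.
hosts_per_cluster = 8
hosts_per_chassis = 4
vms_per_host = 16          # e.g. the 4GB-DIMM / 48GB host from earlier in the post

# If one chassis fails, half the cluster disappears, so the surviving hosts
# have to be able to run every VM that was on the failed ones.
surviving_hosts = hosts_per_cluster - hosts_per_chassis
usable_vms = surviving_hosts * vms_per_host
print(f"VMs the cluster can run and still survive a chassis failure: {usable_vms}")
```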
Dave
Looks a bit like a fairy tale.
1. CPU count matters. With more than 50% of your VMs at 4 vCPUs, your %RDY times will skyrocket.
2. Memory alone is not very important; what matters is memory used and active memory. VMs with too much active memory (and 2+ vCPUs) will need more than 5 seconds for the last VMotion iteration (downtime – ever heard of it?).
Generally, your calculations would work for an “infrastructure mix”: DNS, DHCP, DCs and other stuff that does not need a lot of resources. However, such machines definitely would not need 1.6Gb of LAN and 3.2Gb of SAN throughput.
Calculations in tables are very rough and may be misleading.
When you configure a blade, you just need to consider the following:
1. At least 4 NICs for availability and security
2. At least 2 FC HBAs
3. Check the workload to compute the best core:RAM proportion (usually 1 core : 4-8GB)
4. Think of the real world. 8GB DIMMs are still expensive, so 2-pCPU blades might provide the necessary computing power for half the price compared with 4-pCPU blades (remember the core:RAM proportion)
5. Mitigate risks – the failure of a big server may disrupt many more VMs. But on the other hand, you may need more than one cluster…
6. Think about vCPU count – do you have 8-vCPU VMs? Do you plan any? How many 4-vCPU VMs do you have?
7. ESX version matters: 4.0 has a better scheduler and an optimized VMotion stack, so ESX 4 may allow you to run more 4-vCPU VMs on a 2-pCPU blade (4+ cores, of course)
8. Pray that your core:RAM proportion will hold on Nehalem (maybe you will need 1 core : 8GB on Nehalem, compared to the 1:4 computed on an older CPU architecture)
Well, that is a short list of things to consider when choosing the “perfect blade”.
And the tables in your post… I still cannot understand where I could use them…
Hmmm…
I’m very sorry, but I cannot resist the temptation…
Maybe you could send those tables to the IPCC; they surely could use them to predict warm winters ;-))))
@Seva
You should check out the follow-up post. It shows real world numbers based on ACTUAL performance assessments ->
http://www.dailyhypervisor.com/2009/12/21/is-your-blade-ready-for-virtualization-part-2-real-numbers/