VMware Workstation 7, VMware Player and Microsoft Virtual PC

A little over a week ago, I was pleasantly surprised by an email from VMware announcing the release of VMware Workstation 7. Since I actively participated in the beta, they gave me a free license key for the new version. That’s reason enough to love it! But, to be honest, I have been using VMware Workstation for quite some time now; I vaguely remember Y2K testing with it back when I was an IT pup. Since I got the fresh copy, I decided to completely redo my laptop with a fresh install of Winders 7 and all of my handy convenience programs (Office, TortoiseSVN, TweetDeck, FeedDemon, Firefox, Pandora, etc.). Since Winders 7 and IE8 have compatibility issues with some things, I decided to create a hybrid of what I did when I ran Ubuntu as the host OS. Since I was starting fresh, I created a Winders 2003 template, then spawned a VM from it to host all of my favorite tools for VMware. I will most likely spawn other VMs from the template for other things, like SAN tools. This gives me purpose-built modules for the job of the day and portability in case the host crashes.
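If you want to script the "spawn a VM from the template" step instead of clicking through it, something like the sketch below works. The paths and VM names are placeholders, and copying the folder then editing the .vmx is just one low-tech way to clone a Workstation VM:

```python
# Rough sketch: "clone" a Workstation template by copying its folder and
# giving the copy its own display name. Paths/names below are placeholders.
import shutil
from pathlib import Path

TEMPLATE_DIR = Path(r"D:\VMs\w2k3-template")        # hypothetical template location
CLONE_DIR    = Path(r"D:\VMs\w2k3-vmware-tools")    # hypothetical new VM

def spawn_from_template(template: Path, clone: Path, new_name: str) -> None:
    """Copy the template directory and rename the resulting VM."""
    shutil.copytree(template, clone)
    for vmx in clone.glob("*.vmx"):
        lines = vmx.read_text().splitlines()
        lines = [f'displayName = "{new_name}"' if l.startswith("displayName") else l
                 for l in lines]
        vmx.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    spawn_from_template(TEMPLATE_DIR, CLONE_DIR, "W2K3 - VMware Tools")
    # On first power-on, Workstation asks whether the VM was moved or copied;
    # answer "I copied it" so the clone gets its own UUID and MAC address.
```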


A Different Take on CEE and FCoE

Last month, I attended a Brocade Net.Ed session that covered Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE) and the idea of server I/O consolidation. If you missed the Net.Ed sessions, you can learn about it at Brocade’s Training Portal. Once you register/log in, click on Self-Paced Training and search or browse for FCoE 101 Introduction to Fibre Channel over Ethernet (FCoE). It’s free. Here is an unabridged report on the Net.Ed session with some of my opinions wrapped in:

Trends

With cloud computing, the consolidation of servers, storage and I/O is becoming popular. Once upon a time, server consolidation ratios were bound by processor count and RAM capacity. With the introduction of servers sporting higher core counts, faster processors and more RAM, the new boundary is I/O. And the I/O stack is answering the call for faster speeds. If you look at the trends, Fibre Channel speed has gone from 1Gb to 2Gb to 4Gb and now 8Gb. Soon, 16Gb FC will be the norm. Ethernet has gone from 10Mb to 100Mb to 1Gb and now 10Gb. The next chapter will bring 40Gb or 100Gb, or both.

Fibre Channel and Ethernet have been in a leapfrog contest since Fibre Channel was introduced, and there are plenty of arguments about which is “better” and why. Remember how iSCSI was going to take over the world of storage I/O? Why? Because people think they can implement it on the cheap. Implemented properly, it may not be that much cheaper than FC. I see too many instances where admins implement iSCSI over their existing network without a thought for available bandwidth, security, I/O, etc. Then they complain that iSCSI sucks because of poor performance. Consolidation magnifies this. To top it off, iSCSI doesn’t help when dealing with things like FICON or the many tape drives that need more throughput than iSCSI can offer.
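To put some rough numbers behind that, here is a back-of-envelope comparison of raw link rates. These are nominal figures only (no protocol, encoding or contention overhead), and the link mix is just an illustration:

```python
# Back-of-envelope throughput ceilings, nominal line rate only.
# Real iSCSI/FC throughput will be lower (encoding, protocol overhead, contention).
LINKS_GBPS = {
    "1GbE shared with LAN traffic": 1,
    "10GbE dedicated":              10,
    "4Gb FC":                       4,
    "8Gb FC":                       8,
}

for name, gbps in LINKS_GBPS.items():
    mb_per_s = gbps * 1000 / 8   # Gb/s -> MB/s (decimal)
    print(f"{name:30s} ~{mb_per_s:6.0f} MB/s ceiling")
```

Run iSCSI over a 1GbE segment that is already carrying normal LAN traffic and it is not hard to see where the performance complaints come from.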

Hardware consolidation is also popular, and sometimes happens during the server consolidation project. Blade servers are becoming more popular for many reasons: less rack space, fewer cables, centralized management, etc. I just LOVE walking into a data center and looking at the spaghetti mess behind the racks! Even with blade servers, the number of cables is still crazy. Some people still have Top of Rack switches, even with blades. More enlightened people have End of Row or Middle of Row switches. But there is still that mess in the back of the rack. I especially love when some genius decides to weave cables through the handles on a power supply…

Consolidate Your I/O

Enter I/O consolidation. Brocade calls it Unified I/O. This is supposed to reduce cabling even more. I say “maybe.” In order to consolidate I/O, different protocols, adapters and switches are necessary. OH MY GAWD! New technology! This means the dreaded “C” word… Change. In a nutshell, it reduces the connections: you go from two to four NICs and two to four FC adapters (HBAs) to two Converged Network Adapters (CNAs). It is supposed to reduce cabling and complexity. It’s supposed to help with OpEx and CapEx by enabling better airflow/cooling and saving money on admin costs and cable costs, blah blah blah… Didn’t we hear this about blades too?
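As a rough illustration of the cabling claim, here is the per-rack math. The adapter counts come from the ranges above; the rack size is an assumption I picked just for the example:

```python
# Illustrative cable count, before and after I/O consolidation.
SERVERS_PER_RACK = 16   # assumption for the example
NICS_PER_SERVER  = 4    # "two to four NICs" -- taking the high end
HBAS_PER_SERVER  = 2    # "two to four" FC adapters -- taking the low end
CNAS_PER_SERVER  = 2    # converged adapters after consolidation

before = SERVERS_PER_RACK * (NICS_PER_SERVER + HBAS_PER_SERVER)   # 96 cables
after  = SERVERS_PER_RACK * CNAS_PER_SERVER                       # 32 cables
print(f"cables before convergence: {before}")
print(f"cables after convergence:  {after}")
print(f"cables saved per rack:     {before - after}")
```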

The Protocols (Alphabet Soup)

In order to make all of this work and become accepted, you need to worry about things like low latency, flow control and lossless quality. That has to be addressed with standards, and the results are CEE and FCoE. The issue is with CEE: not all of the components have been finalized. Things like Priority-based Flow Control (IEEE 802.1Qbb), Enhanced Transmission Selection (IEEE 802.1Qaz) and Congestion Management (IEEE 802.1Qau) are still being worked out. The IETF is also still working on Transparent Interconnection of Lots of Links (TRILL), which will enable Layer 2 multipathing without Spanning Tree (STP).

| Feature/Standard | Benefit |
| --- | --- |
| Priority Flow Control (PFC), IEEE 802.1Qbb | Helps enable a lossless network, allowing storage and networking traffic types to share a common network link |
| Enhanced Transmission Selection (Bandwidth Management), IEEE 802.1Qaz | Enables bandwidth management by assigning bandwidth segments to different traffic flows |
| Congestion Management, IEEE 802.1Qau | Provides end-to-end congestion management for Layer 2 networks |
| Data Center Bridging Exchange Protocol (DCBX) | Provides the management protocol for CEE |
| L2 Multipathing: TRILL in IETF | Recovers bandwidth; multiple active paths; no spanning tree |
| FCoE/FC awareness | Preserves SAN management practices |
Source: Brocade Data Center Convergence Overview Net.Ed Session

My Two Cents

So, without fully functioning CEE, FCoE traffic cannot traverse the network. This stuff is all supposed to be ratified soon, but until these components are ratified, the dream of true FCoE is just a dream. The bridging can’t be done close to the core yet, so people who decide to start using CNAs and Data Center Bridges will need to place the DCBs close to the servers (no hops!) and terminate their FC at the DCB. In the case of Cisco UCS, this is the Top of Rack or End/Middle of Row switch. In the case of an HP chassis, it’s the chassis itself, and they don’t even have this stuff yet.

My question is this: Why adopt a technology that is not completely ratified? Like I said before, all of this requires change. You may be in the middle of a consolidation project and you are looking at I/O consolidation. Do you really want to design your data center infrastructure to support part of a protocol? Are you willing to make changes now and then make new changes in six months to bring the storage closer to the core?

So, let’s assume everything is ratified and you have decided to consolidate your I/O. How many connections do you really save? Based on typical blade chassis configurations, it may be four to eight FC cables. But look at it another way: you are giving up bandwidth. A pair of 10Gb CNAs gives you a total of about 20Gb of bandwidth. A pair of 10GbE adapters plus a pair of 8Gb FC adapters gives you about 36Gb. So, sure, you save a few cables, but you give away bandwidth. When you think about available bandwidth, is a pair of 10Gb CNAs or NICs enough? I remember when 100Mb was plenty. If consolidation is becoming I/O bound, do you want to limit yourself? How about politics? Will your network team and storage team play nice together? Where is the demarcation between SAN and LAN?
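For what it’s worth, the 20Gb versus 36Gb figures are just nominal adapter rates added up; here is the same arithmetic spelled out:

```python
# Per-server aggregate bandwidth: converged vs. separate adapters.
# Nominal adapter rates only; no protocol or encoding overhead considered.
converged = 2 * 10            # two 10Gb CNAs               -> 20 Gb
separate  = 2 * 10 + 2 * 8    # two 10GbE NICs + two 8Gb FC -> 36 Gb
print(f"converged I/O: {converged} Gb aggregate")
print(f"separate I/O:  {separate} Gb aggregate")
print(f"bandwidth given up per server: {separate - converged} Gb")
```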

I first saw the UCS blades almost a year ago, and I was excited about the new technology. Their time is coming soon. The HP blades have impressed me ever since they were introduced, and I don’t see them going away. I have used the IBM and Dell blades. My mother always said that if I didn’t have anything nice to say about something, I shouldn’t say anything at all…

When I look at the server hardware available to me now (HP and Cisco), I see pluses and minuses to both. The UCS blades have no provisions for native FC, so you need to drink the FCoE Kool-Aid or use iSCSI. The HP blades allow for more I/O connections and can support FC, but not FCoE. If you want to make the playing field similar, you should compare UCS to the HP blades with Flex-10; this makes the back-end I/O modules similar. Both act as a sort of matrix that maps internal I/O to external I/O, both will pass VLAN tags for VST, and both will accommodate the Cisco Nexus 1000V dvSwitch. The thing about Flex-10 is that it requires a different management interface if you are already a Cisco shop.

There’s a fast-moving freight train called CHANGE on the track. It never stops. You need to decide when you have the guts to jump on and when you have the guts to jump off.