A Different Take on CEE and FCoE

Caution: These articles are written for technical accuracy, not grammatical accuracy. If poor grammar offends you, proceed with caution ;-)

Last month, I attended a Brocade Net.Ed session that covered Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE) and the idea of server I/O consolidation. If you missed the Net.Ed sessions, you can learn about the material at Brocade’s Training Portal. Once you register/login, click on Self-Paced Training and search or browse for FCoE 101 Introduction to Fibre Channel over Ethernet (FCoE). It’s free. Here is an unabridged report on the Net.Ed session with some of my opinions wrapped in:

Trends

With cloud computing, the consolidation of servers, storage and I/O is becoming popular. Once upon a time, server consolidation ratios were bound by processor and RAM counts. With the introduction of servers with higher core counts, faster processors and larger RAM capacities, the new boundary is becoming I/O. And the I/O stack is answering the call for faster speeds. If you look at the trends, Fibre Channel speed has gone from 1Gb to 2Gb to 4Gb and now 8Gb. Soon, 16Gb FC will be the norm. Ethernet has gone from 10Mb to 100Mb to 1Gb and now 10Gb. The next chapter will bring 40Gb or 100Gb or both.

Fibre Channel and Ethernet have been in a leapfrog contest since Fibre Channel was introduced, and there are plenty of arguments about which is “better” and why. Remember how iSCSI was going to take over the world of storage I/O? Why? Because people think they can implement it on the cheap. Implemented properly, it may not be that much cheaper than FC. I see too many instances where admins implement iSCSI over their existing network without a thought for available bandwidth, security, I/O, etc., and then complain that iSCSI sucks because of poor performance. Consolidation magnifies this. To top it off, iSCSI doesn’t help when dealing with things like FICON or the many tape drives that need faster throughput than what iSCSI can offer.
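To put a rough number on that last point, here is a back-of-the-envelope sketch. The drive speed is my own assumption (an LTO-4 class drive at roughly 120 MB/s native), not a figure from the Net.Ed session:

```python
# Back-of-the-envelope: can a shared 1GbE iSCSI link keep a tape drive streaming?
# Assumption: an LTO-4 class drive at roughly 120 MB/s native (uncompressed).
TAPE_MB_PER_S = 120          # what the drive wants to stream at
LINK_GBPS = 1                # a typical "reuse the existing network" iSCSI link

tape_gbps = TAPE_MB_PER_S * 8 / 1000    # MB/s -> Gb/s (ignoring protocol overhead)
utilization = tape_gbps / LINK_GBPS

print(f"Tape drive needs roughly {tape_gbps:.2f} Gb/s")
print(f"That is {utilization:.0%} of a 1GbE link before any other traffic is added")
```

One drive nearly saturates the link on its own, and that is before anyone else’s traffic shows up.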

Hardware consolidation is also popular, and it sometimes happens during the server consolidation project. Blade servers are becoming more popular for many reasons: less rack space, fewer cables, centralized management, etc. I just LOVE walking into a data center and looking at the spaghetti mess behind the racks! Even with blade servers, the number of cables is still crazy. Some people still use Top of Rack switches, even with blades. More enlightened people use End of Row or Middle of Row switches. But there is still that mess in the back of the rack. I especially love it when some genius decides to weave cables through the handles on a power supply…

Consolidate Your I/O

Enter I/O consolidation. Brocade calls it Unified I/O. This is supposed to reduce cabling even more. I say “maybe.” In order to consolidate I/O, different protocols, adapters and switches are necessary. OH MY GAWD! New technology! This means the dreaded “C” word… Change. In a nutshell, it reduces the connections: you go from two to four NICs and two to four FC adapters down to two Converged Network Adapters (CNAs). It is supposed to reduce cabling and complexity. It’s supposed to help with OpEx and CapEx by enabling better airflow/cooling and saving money on admin costs and cable costs, blah blah blah… Didn’t we hear this about blades too?
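For what it’s worth, here is a rough sketch of the cable math. The server and adapter counts are just example numbers I picked, not anything from the Net.Ed session:

```python
# Rough cable count for one rack of servers, before and after I/O consolidation.
# All counts are assumptions for illustration: 16 servers, 4 NICs + 4 FC adapters
# each before, 2 CNAs each after.
servers = 16
nics_per_server = 4
fc_per_server = 4
cnas_per_server = 2

before = servers * (nics_per_server + fc_per_server)   # 128 cables
after = servers * cnas_per_server                       # 32 cables

print(f"Cables before consolidation: {before}")
print(f"Cables after consolidation:  {after}")
print(f"Reduction: {before - after} cables ({(before - after) / before:.0%})")
```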

The Protocols (Alphabet Soup)

In order to make all of this work and become accepted, you need to worry about things like low latency, flow control and lossless delivery. This has to be addressed with standards, and the results are CEE and FCoE. The issue arises with CEE: not all of the components have been finalized. Things like Priority-based Flow Control (IEEE 802.1Qbb), Enhanced Transmission Selection (IEEE 802.1Qaz) and Congestion Management (IEEE 802.1Qau) are still being finalized. The IETF is also still working on Transparent Interconnection of Lots of Links (TRILL), which will enable Layer 2 multipathing without spanning tree. (There’s a quick sketch of the ETS idea after the table below.)

Feature/Standard and Benefit:

* Priority Flow Control (PFC), IEEE 802.1Qbb: helps enable a lossless network, allowing storage and networking traffic types to share a common network link
* Enhanced Transmission Selection (Bandwidth Management), IEEE 802.1Qaz: enables bandwidth management by assigning bandwidth segments to different traffic flows
* Congestion Management, IEEE 802.1Qau: provides end-to-end congestion management for Layer 2 networks
* Data Center Bridging Exchange Protocol (DCBX): provides the management protocol for CEE
* L2 Multipathing (TRILL, in the IETF): recovers bandwidth via multiple active paths; no spanning tree
* FCoE/FC awareness: preserves SAN management practices

Source: Brocade Data Center Convergence Overview Net.Ed Session
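To make the ETS row a little more concrete, here is a minimal sketch of the idea behind 802.1Qaz: one converged 10Gb link carved into priority groups by weight. The traffic classes and percentages are my own invented example, not anything out of the standard or the Brocade slides:

```python
# Minimal illustration of the Enhanced Transmission Selection idea (IEEE 802.1Qaz):
# one converged 10Gb link divided into priority groups with guaranteed shares.
# The class names and percentages below are invented for illustration only.
LINK_GBPS = 10

ets_weights = {
    "FCoE (storage)":   50,   # percent of the link guaranteed under contention
    "LAN (general IP)": 30,
    "Management/other": 20,
}

for traffic_class, weight in ets_weights.items():
    guaranteed = LINK_GBPS * weight / 100
    print(f"{traffic_class:<18} {weight:>3}% -> {guaranteed:.1f} Gb/s guaranteed")

# The percentages are minimum guarantees when the link is congested; an idle
# class's share can be borrowed by the busy ones, unlike a hard partition.
```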

My Two Cents

So, without fully functioning CEE, FCoE traffic cannot traverse the network. This stuff is all supposed to be ratified soon, but until these components are ratified, the dream of true FCoE is just a dream. The bridging can’t be done close to the core yet, so people who decide to start using CNAs and Data Center Bridges will need to place the DCBs close to the server (no hops!) and terminate their FC at the DCB. In the case of the Cisco UCS, that is the Top of Rack or End/Middle of Row switch. In the case of an HP chassis, it’s the chassis itself, and they don’t even have this stuff yet.

My question is this: Why adopt a technology that is not completely ratified? Like I said before, all of this requires change. You may be in the middle of a consolidation project and you are looking at I/O consolidation. Do you really want to design your data center infrastructure to support part of a protocol? Are you willing to make changes now and then make new changes in six months to bring the storage closer to the core?

So, let’s assume everything is ratified. You have decided to consolidate your I/O. How many connections do you really save? Based on typical blade chassis configurations, it may be four to eight FC cables. But look at it another way: You are losing that bandwidth. A pair of 10Gb CNAs will give you a total of about 20Gb of bandwidth. A pair of 10GbE Adapters and a pair of 8Gb FC adapters gives you about 36Gb. So, sure, you save a few cables. But you give away bandwidth. When you think about available bandwidth, is a pair of 10Gb CNAs or NICs enough? I remember when 100Mb was plenty. If consolidation is becoming I/O bound, do you want to limit yourself?  How about politics? Will your network team and storage team play nice together? Where is the demarcation between SAN and LAN?

I first saw the UCS blades almost a year ago and I was excited about the new technology. Their time is coming soon. The HP blades have impressed me ever since they were introduced. They will never go away. I have used the IBM and Dell blades. My mother always said that if I didn’t have anything nice to say about something, I shouldn’t say anything at all…

When I take a look at the server hardware available to me now (HP and Cisco), I see pluses and minuses to both. The UCS blades have no provisions for native FC, so you need to drink the FCoE Kool-Aid or use iSCSI. The HP blades allow for more I/O connections and can support FC, but not FCoE. If you want to make the playing field similar, you should compare UCS to the HP blades with Flex-10. This makes the back-end I/O modules similar: both act as a sort of matrix mapping internal I/O to external I/O, both will pass VLAN tags for VST, and both will accommodate the Nexus 1000V dvSwitch. The thing about Flex-10 is that it requires a different management interface if you are already a Cisco shop.

There’s a fast moving freight train called CHANGE on the track. It never stops. You need to decide when you have the guts to jump on and when you have the guts to jump off.

4 Replies to “A Different Take on CEE and FCoE”

  1. Dave – nice overview of the landscape. But regarding consolidating I/O, we have customers seeing double-digit reductions in components, costs, etc. by virtualizing I/O, and then using converged networks. And, BTW, this is all using Dell blades.

    Also, I believe the real value in abstracting infrastructure is less about the transport (FCoE, CEE, whatever) and more about the management of the I/O and networking. Why should I care about packets on a wire? Instead, I care about the ability to re-configure, greater flexibility, etc.

  2. Hi Dave,

    Nice article and some good points.

On the question of why people are adopting it now even though the DCB standards are not yet fully ratified… aside from the obvious vendor pushing, there are people who see a need for the consolidation (cable, PCI adapter, switch port…) and *need* to do something about it. You rightly ask “why” deploy when it’s not ratified, but on the other hand, “why” deploy something now that you know will be, or already is being, superseded? Technology moves fast enough without deploying something that is already fast becoming “yesterday”.

Also, let’s remember, the guys who are pushing CEE/DCB and FCoE are the guys who are writing the standards, so the chances of them radically changing from what is currently being offered are slim. Also, and I know this is a cheap shot and not quite the same, but jumbo frames are not and likely never will be an IEEE standard, yet we deploy them now because there is a need.

Just my 2 pennies’ worth

    Nigel
    @nigelpoulton
