This Technology Brief examines the new IEEE Std 802.3bs-2017 standard for 400 Gbps and 200 Gbps Ethernet. The new standard continues the rapid development of Ethernet to accommodate the increasing bandwidth demands of cloud data centers. The current 400 Gbps specifications cover only fiber optic media and have pushed the limits of optical lane speeds and the number of parallel fibers in a link. As with most new Ethernet standards, a number of new transceiver form-factors, connectors, and cables have been developed to accommodate the new 400 Gbps standard. This new era of Ethernet innovation will no doubt cause a surge in new development that will last some time into the future.
The Evolution of Ethernet Standards
The incredible pace of Ethernet development has now surged to 400 Gbps. From 1 Gbps Ethernet in 1997, to 10 Gbps in 2004, 100 Gbps in 2010, and then 4-lane (4×25 Gbps) 100 Gbps in 2014, the next step up to 400 Gbps has taken some time. The IEEE officially ratified its 802.3bs standard for 200 Gbps and 400 Gbps on December 6, 2017. Driven by the demands of ever-increasing Internet traffic through cloud data centers, there will always be a need for more bandwidth, so it can be expected that 800 Gbps or 1600 Gbps Ethernet will not be too far away.
As usual, the IEEE leveraged existing standards to forge a pathway to 400 Gbps. A key characteristic and limitation of current data rates has been the single-lane serial rate that is achievable with current electrical technology. With the latest 100 Gbps Ethernet standard being based on four parallel lanes of 25 Gbps, this proved to be a natural starting point for the 400 Gbps development. However, a method to increase lane data rates to 50 Gbps was clearly needed for 400 Gbps, and with work already ongoing to achieve 100 Gbps optical lanes, using these new achievements would prove a great advantage for 400 Gbps Ethernet.
There were a number of ways to meet the 400 Gbps objectives while still considering the various tradeoffs and requirements of the networking industry. With the 400 Gbps standard limiting the physical media to multimode and single-mode optical fiber, it was apparent that the number of fibers in a link would be a key issue. Multiple parallel fibers are known to be an acceptable solution for short-range links up to 500 m, but not for longer cable lengths (2 to 10 km), where the costs become excessive. For example, a data rate of 400 Gbps built from 16 x 25 Gbps parallel lanes would require 32 fibers per link for transmit and receive. When developing the 400 Gbps specifications, the IEEE task force had to employ a number of technologies and methods to define acceptable, cost-effective solutions for both short-range multimode fiber and long-range single-mode fiber using varying numbers of fibers and lane rates. In addition, a set of 200 Gbps standards based on the 400 Gbps specifications was also defined as a practical migration path to 400 Gbps.
The 400 Gbps and 200 Gbps Standards
With a 50 Gbps lane rate being the fundamental basis of reaching 400 Gbps, the first major decision was to change the signal encoding scheme. Until now, all Ethernet standards have used the simple two-level Non-Return-to-Zero (NRZ) method for encoding a binary data stream into a transmittable electrical signal. To attain a higher lane data rate, an encoding scheme known as 4-Level Pulse Amplitude Modulation (PAM4) was adopted, which effectively doubles the amount of data transmitted in the same amount of time.
If you think of binary data represented by a signal with two voltage levels, one voltage for a “0” and the other voltage for a “1”, then this describes the NRZ encoding method. With PAM4 encoding, the signal has four voltage levels, each of which encodes two binary bits. A method known as “Gray coding” maps each most significant bit (MSB) and least significant bit (LSB) pair in the data stream to one of the four voltage levels, ordered so that adjacent levels differ by only one bit. Gray coding helps to reduce the bit errors caused by voltage amplitude noise. With two data bits mapped to each voltage level, it is easy to see how double the information can be transmitted in the same amount of time.
The IEEE completed its 802.3bs standard (IEEE Std 802.3bs-2017) for 200 Gbps and 400 Gbps Ethernet by combining PAM4 encoding with multiple parallel lanes. The specifications cover multimode and single-mode optical fiber options running from 70 m up to 10 km. The following table summarizes the Ethernet PHY variants for 400 Gbps and 200 Gbps.
| PHY variant | Fiber type | Fibers per direction | Lanes | Reach | Encoding |
|---|---|---|---|---|---|
| 400GBASE-SR16 | MMF | 16 | 16 x 25 Gbps | 70 m (OM3) / 100 m (OM4) | NRZ |
| 400GBASE-DR4 | SMF | 4 | 4 x 100 Gbps | 500 m | PAM4 |
| 400GBASE-FR8 | SMF | 1 | 8 x 50 Gbps (WDM) | 2 km | PAM4 |
| 400GBASE-LR8 | SMF | 1 | 8 x 50 Gbps (WDM) | 10 km | PAM4 |
| 200GBASE-DR4 | SMF | 4 | 4 x 50 Gbps | 500 m | PAM4 |
| 200GBASE-FR4 | SMF | 1 | 4 x 50 Gbps (WDM) | 2 km | PAM4 |
| 200GBASE-LR4 | SMF | 1 | 4 x 50 Gbps (WDM) | 10 km | PAM4 |
The 400GBASE-SR16 specification supports 16 multimode fibers at 25 Gbps using NRZ encoding, which means that the total transmit and receive fibers in a link will be 32. The 400GBASE-DR4 option supports the controversial 100 Gbps lane speed over four single-mode fibers, but can only reach 500 m. The duplex-fiber variants, 400GBASE-FR8 and 400GBASE-LR8, both use wavelength-division multiplexing (WDM) to transmit eight lanes over eight different wavelengths on single-mode fiber for reaches up to 10 km. The 200 Gbps specifications basically follow those for 400 Gbps, but use four lanes of 50 Gbps over one or four single-mode fibers.
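The arithmetic behind the table and the fiber counts quoted above can be checked with a short Python sketch. The per-variant figures are taken directly from the table; the helper names are illustrative, not part of any standard API.

```python
# Sanity-check the PHY variants: lanes x per-lane rate must equal the
# nominal aggregate rate, and parallel-fiber variants need one fiber per
# lane in each direction (WDM variants share a single fiber per direction).
variants = {
    # name: (lanes, gbps_per_lane, fibers_per_direction)
    "400GBASE-SR16": (16, 25, 16),
    "400GBASE-DR4":  (4, 100, 4),
    "400GBASE-FR8":  (8, 50, 1),   # 8 wavelengths on one fiber (WDM)
    "400GBASE-LR8":  (8, 50, 1),
    "200GBASE-DR4":  (4, 50, 4),
    "200GBASE-FR4":  (4, 50, 1),
    "200GBASE-LR4":  (4, 50, 1),
}

for name, (lanes, rate, fibers) in variants.items():
    aggregate = lanes * rate
    link_fibers = 2 * fibers  # transmit + receive
    assert aggregate == int(name[:3]), name
    print(f"{name}: {lanes} x {rate} = {aggregate} Gbps, "
          f"{link_fibers} fibers per link")
```

Running this confirms, for instance, that 400GBASE-SR16 needs 32 fibers per link (16 transmit plus 16 receive), while the WDM variants get by with an ordinary duplex pair.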
Pluggable Modules and Cables
To deploy equipment supporting the new 400 Gbps Ethernet standard, clearly new pluggable modules, connectors, and cables are required. A 400GBASE-SR16 port requires 32-fiber connectors and cables, and the 400GBASE-DR4 variant uses a higher electrical signaling rate of 56 Gbps.
The common 100 Gbps QSFP28 module supports four lanes of 25 Gbps with an electrical signaling rate of 28 Gbps. Four lanes means eight fibers for transmit and receive, which is supported by the 12-fiber MPO (multi-fiber push-on) connectors and cables currently used for 100 Gbps. Clearly, the 12-fiber MPO connectors and cables can also support the four-lane 400GBASE-DR4 and 200GBASE-DR4 variants in the 802.3bs standard. The variants that use only two fibers in a link can utilize the common duplex-LC connectors and cables. That leaves the 32-fiber MPO as a new development specifically for the 400 Gbps Ethernet standard.
The 12-fiber MPO connector arranges the fibers in a single row between two alignment pins. For the 32-fiber MPO there are two rows of 16 fibers, which makes the new connector incompatible with the 12-fiber MPO. The 32-fiber MPO connector is therefore keyed differently from the 12-fiber MPO to avoid transceivers and cables being connected incorrectly.
With the development of faster signaling, more lanes, and more fibers, new transceiver form-factors are required for 400 Gbps Ethernet. There are always challenges and tradeoffs to consider when trying to put more components that dissipate more power into small modules. As with previous Ethernet standards, a number of optical transceiver form-factors have emerged for 400 Gbps Ethernet:
- CFP8 – Large form-factor. Low port density. Good thermal management.
- OSFP – Designed for optimal signal and thermal performance. Non-compatible form-factor.
- QSFP-DD – The “double-density” QSFP. Backward compatible with 100 Gbps and 40 Gbps Ethernet. High port density.
- COBO – On-board optics module, which is not a pluggable transceiver. Highest port density.
Driven by the increasing demands of cloud data centers, the evolution of Ethernet standards has rapidly reached an incredible data rate of 400 Gbps. Using new encoding schemes and higher signaling rates, the 400 Gbps Ethernet standard marks a significant break from past standards and sets a path towards even faster speeds in the future. The new 400 Gbps and 200 Gbps standards include multimode and single-mode optical fiber options running from 70 m up to 10 km. The standards cover various cost-effective short-range and long-range applications and have been supported by a number of new transceiver form-factors, cables, and connectors.
As new 400 Gbps equipment becomes available, you can expect to see it quickly deployed in data center, carrier access, and service provider networks worldwide. The much-needed extra bandwidth will provide a boost as network traffic continues to increase year-on-year. However, Ethernet development will not stop there. New Ethernet working groups are already striving for higher speeds and more economical, compact form-factors that will keep pace with future industry demands.
From 400 Gbps to 800 Gbps
A recently published report by the Dell’Oro Group states that 400 Gbps is expected to comprise 20 percent of data center switching revenue by 2020. According to the research group, higher speeds – 100 Gbps, 200 Gbps, 400 Gbps and 800 Gbps – are all forecast to drive significant growth over the next five years. Perhaps not surprisingly, cloud data centers are expected to play a key role in the upcoming speed jump for networks (from 100 Gbps to 400 Gbps).
Meanwhile, a market report released by the 650 Group confirms that 200 Gbps, 400 Gbps and 800 Gbps will all ship in the next five years – with the latter projected to ramp early next decade.
“The first wave of 200 Gbps and 400 Gbps will hit the market in early 2018 as the Ethernet switch market expands the number of port speeds just three years since the last major set of technology advances,” says Alan Weckel, Founder and Technology Analyst at 650 Group. “Both 200 Gbps and 400 Gbps will emerge off of 50 Gbps SerDes technology announced in 2017. The rapid pace of innovation is led not only by impressive technology improvements, but by Software Defined Networks (SDN) which is enabling the cloud to better utilize the compute and networking resources at their disposal.”
Testing 400 Gbps, eyeing 1.6 Tbps
According to Ronen Isaac of Military Embedded Systems, the industry is currently testing and ratifying technologies that will bring speeds up to 400 Gbps and beyond.
“Between 2018 and 2020, 50 Gbps and 200 Gbps will be tested and adopted. Thousands of 25GbE servers and eventually 50GbE servers in hyper-scale data centers, such as cloud service providers, will drive the need for 400GbE to the metropolitan area networks (MAN) and wide area networks (WANs),” he says. “In the not-too-distant future, testing will begin on 200 Gbps, 800 Gbps, and astonishing data rates of 1 Tbps and 1.6 Tbps, with expected testing and ratification of the standards by the year 2020.”
Ethernet is quickly moving from 40 Gbps to 100 Gbps to 400 Gbps, thereby spurring a number of new SerDes initiatives and developments. Indeed, 25 Gbps SerDes served as the key enabler for 100 Gbps Ethernet, with the industry expected to initially leverage 50 Gbps SerDes for 400 Gbps Ethernet before moving on to 100 Gbps SerDes technology.
Concurrently, SerDes technology is shifting from NRZ to PAM4 as it accelerates from 25 Gbps to 50 Gbps. This has prompted a number of architectural changes, including the replacement of traditional analog front ends with ADC + DSP to help meet performance targets while maintaining a similar power/area envelope.