Today, we have something really fun: a look at the Marvell COLORZ 800. This is a long-range 800G ZR+ optical module that comes in a standard OSFP pluggable form factor, yet it can reach over 500km, and up to 1000km, at 800Gbps. It can even be tuned down to 400Gbps for reaches of up to 2500km. We got a behind-the-scenes look at how this coherent optical technology works at Marvell’s labs and wanted to show you how they make this happen.
For this one, we have a video that you can find here:
As always, we suggest opening this in its own tab, browser, or app for the best viewing experience. We also need to say that Marvell is sponsoring this since George and I had to fly to get inside the lab, and the lab is not something that folks normally get to see. With that, let us get to it.
Going 800Gbps at up to 1000km with the Marvell COLORZ 800
Since we will get into a lot of detail here, let us quickly start with an overview. In datacenters today, the rule is basically: use copper as long as you can, then, when the reach is too far, go optical. If you take a look at something like an NVIDIA GB200 NVL72, the big innovation was the ability to interconnect 72 GPUs and the switches using copper in the rear.

Beyond around 3m of cable length, high-speed signaling and copper simply do not mix due to signal integrity limitations.

While copper can often reach within a rack and to an adjacent rack, optical modules are used to span longer distances. There is, however, a catch. Optical modules use different technologies to span different distances at different speeds.

A short-range optical module operating at 10Gbps or 100Gbps costs a lot less to manufacture than a long-range optical module operating at 400Gbps or 800Gbps, in large part because the technology grows in complexity as speed and reach increase.

Another aspect is the form factors that optical modules come in. CFP-style form factor modules tend to be more common in telecom applications. In data centers, we tend to see small SFP modules for lower-end applications, and larger QSFP and OSFP modules for higher-speed applications. Even common AI infrastructure NICs like the NVIDIA ConnectX-7 400GbE adapter use OSFP. Today we are talking about a Marvell COLORZ III 800G ZR+ OSFP module. OSFP provides a larger module standard with the power, cooling, and, importantly, the space required to handle all of the components. Here is what one looks like:

In simple terms, on one end, the module takes electrical signals from the device.

On the other end, we have the optical transmit and receive sides where the fiber cable plugs in.

While that may sound simple, inside the metal casing is where the magic happens. On one side, electrical signals are turned into optical signals; on the other, optical signals are converted back into electrical ones. We are going to keep the discussion of what happens inside at a fairly high level so many folks can follow along.
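To give a rough sense of the scale involved, here is a back-of-the-envelope sketch of the coherent line-rate math. The baud rate, modulation, and overhead figures below are illustrative assumptions in the right ballpark for this class of module, not Marvell’s published COLORZ 800 specifications:

```python
# Back-of-the-envelope coherent line-rate math. These are illustrative
# assumptions in the right ballpark, not Marvell's published specs.
BAUD_RATE = 120e9        # assumed symbol rate, ~120 GBaud class
BITS_PER_SYMBOL = 4      # 16-QAM carries 4 bits per symbol
POLARIZATIONS = 2        # dual polarization doubles throughput

raw_line_rate = BAUD_RATE * BITS_PER_SYMBOL * POLARIZATIONS
print(f"Raw line rate: {raw_line_rate / 1e9:.0f} Gbps")  # ~960 Gbps

# Part of the raw rate goes to forward error correction (FEC) and
# framing; an assumed ~15% overhead leaves roughly 800 Gbps of payload.
ASSUMED_OVERHEAD = 0.15
payload_gbps = raw_line_rate * (1 - ASSUMED_OVERHEAD) / 1e9
print(f"Approximate payload: {payload_gbps:.0f} Gbps")
```

The takeaway is that a single coherent carrier, packing multiple bits per symbol on two polarizations, can carry what would otherwise take many simpler lasers.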
Next, let us show you what goes inside that OSFP casing and how this all works.
The only things that lab table is missing are:
– DUCT tape
– SuperGlue
– Coffee cup
What a superior Friday read. I could’ve handled more depth, but I’ve been doing this many years.
It’s good to have a Kennedy back in optical networking. You look like I remember your dad when I first met him, back when I was the young buck with a newly minted PhD 20 years ago.
I always wondered if you’d go here.
Cool demo, but I feel that this type of hardware is on the way out in favor of chips that leverage silicon photonics. The advantages are way too great to ignore in terms of power and potential bandwidth. Simpler encoding schemes tend to win in the end, and this is a straightforward means of maintaining simplicity without the need for highly complex DSPs in the transceivers (complex here is relative).
That convergence diagram looks like QAM 16 encoding, at least visually. Hardware exists to do QAM 4096 (see Wi-Fi 7), but at a significantly lower real clock speed. I don’t think there is enough sensitivity to go that high in modulation order, but QAM 16 does appear to be low resolution for what the hardware is capable of. Perhaps with 1000 km of cable in between modules, the constellations are not so clean.
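To put rough numbers on that modulation-order trade-off, here is a minimal sketch. The math is generic constellation geometry, not measurements from the module:

```python
import math

# Bits per symbol for a square M-QAM constellation is log2(M). Packing
# more points into the same transmit power shrinks the spacing between
# neighboring points, which is why higher orders need a cleaner signal.
for m in (4, 16, 64, 4096):
    bits = int(math.log2(m))
    relative_spacing = 1 / (math.sqrt(m) - 1)  # spacing vs. full scale
    print(f"QAM-{m:>4}: {bits:>2} bits/symbol, "
          f"point spacing ~{relative_spacing:.3f} of full scale")
```

Going from QAM 16 to QAM 4096 only triples the bits per symbol while the points sit roughly 20x closer together, which is why the extra sensitivity over 1000 km of fiber is hard to come by.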
The idea of expanding out a traditional data center using these modules is feasible, but a pair of links for a big GPU cluster isn’t going to cut it: each GPU node nowadays is getting 100 Gbit of networking bandwidth, and 800 Gbit would only be a means of linking up two big coherent frames of gear. An order of magnitude more bandwidth would be necessary to really start attempting this. If a company is willing to put down tens of millions of dollars for a remote DC, then spending a few million to lease/run additional fiber between the locations is a straightforward way to get there by using more cables. Optics is fun in that signals can be transformed in transit to an extent (polarization filters as an example) to merge multiple lines together onto a single cable without interference. That is a way to aggregate more bandwidth over existing lines, but whatever is multiplexed on one end has to be demuxed on the other side. This is also ignoring whether such aggregation techniques are already used inside of the module (and my simple polarization example likely wouldn’t work with those described in this article). These techniques also incur some signal loss, which directly impacts their range.
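As a quick illustration of the aggregate-bandwidth point (the pod size below is hypothetical, and the per-node figure just follows the assumption in this comment):

```python
# Rough cross-site link count for a remote GPU pod. Illustrative only:
# the pod size is hypothetical and the per-node bandwidth follows the
# figure assumed in this comment.
nodes = 256                # hypothetical remote pod size
per_node_gbps = 100        # assumed per-node networking bandwidth
link_capacity_gbps = 800   # one 800G coherent link

aggregate_gbps = nodes * per_node_gbps
links_needed = -(-aggregate_gbps // link_capacity_gbps)  # ceiling division
print(f"{aggregate_gbps}G aggregate -> {links_needed} x 800G links")
# 25600G aggregate -> 32 x 800G links
```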
This tech is at its own level, and I could see it being a big deal for long-distance (subsea?) cables where it might be worth investing a lot in gear at each end if it lets you avoid repeaters/retimers along the way. Would welcome the explainers on optics!
It’s also *wild* how far commodity fiber stuff has come. This is a bit of a market fluke, but a ton of used Intel 100G 500m transceivers are out there that you can pick up on eBay for $10. New optics still wouldn’t be much of the cost of a 100G deployment. Even higher-end stuff doesn’t look ridiculous proportionate to what it’s doing, and isn’t single-vendor unobtanium.
DWDM is also incredibly cool! If you’re a giant multi-datacenter operator, maybe it’s simpler to just get 800G everything and be done with it, but conceptually it’s really neat that you can lay a many-strand cable and gradually get to 400G or 1T/strand, one 10 or 25G wavelength at a time.
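For a rough sense of that incremental capacity, here is a sketch using common fixed-grid C-band assumptions (roughly 4.8 THz of usable spectrum at 50 GHz channel spacing; not tied to any specific product):

```python
# Illustrative DWDM capacity math with common fixed-grid assumptions.
C_BAND_GHZ = 4800              # ~4.8 THz of usable C-band spectrum
CHANNEL_SPACING_GHZ = 50       # common ITU fixed-grid channel spacing
channels = C_BAND_GHZ // CHANNEL_SPACING_GHZ  # ~96 wavelengths

for rate in (10, 25, 100):
    print(f"{channels} channels x {rate}G = {channels * rate}G per strand")
```

Even at 10G per wavelength, one strand can approach 1T in aggregate, and each wavelength can be lit independently as demand grows.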
It was only pretty recently I picked up even a vague awareness of what modern networking is able to do, since work hasn’t dealt in physical hardware for a while. Keep the posts on fun network stuff coming!
I don’t understand the mentions of AI; if you visit the OpenZR+ website, it is very clear that the target market for these products is big telecom providers and their transport networks.
Thanks Patrick! Great stuff as always.
One question I’m asking my pluggable optics vendors these days is:
Will your 800G coherent modules work in my 400G router?
Specifically, I’m *really* interested in the extra-long-reach capabilities of the 112GBaud QPSK 400GE encoding that 800ZR+ opens up, & with the host side running at 56GHz per lane it might just fit within the existing 25W QSFP-DD power envelope. Also, the L-Band possibilities are quite interesting.
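To sketch why that QPSK mode is so appealing (back-of-the-envelope math; the framing is my own, not the vendor’s):

```python
# Why 112 GBaud QPSK has headroom for a 400GE payload (illustrative).
BAUD_RATE = 112e9      # symbols per second
BITS_PER_SYMBOL = 2    # QPSK carries 2 bits per symbol
POLARIZATIONS = 2      # dual polarization

raw = BAUD_RATE * BITS_PER_SYMBOL * POLARIZATIONS
print(f"Raw line rate: {raw / 1e9:.0f} Gbps")  # 448 Gbps

# ~48 Gbps above the 400GE payload remains for FEC and framing, and the
# low-order constellation is what buys extra reach versus 16-QAM.
print(f"Headroom over 400GE: {(raw - 400e9) / 1e9:.0f} Gbps")
```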
Even if those particular modes aren’t supported, the economics have changed. If you install more than a handful of these optics in your router, you’ve probably spent more on the optics than on the router, & they likely consume more power & put out more heat than the router itself…