Compute Express Link (CXL): From Nine Members to Thirty-Three
by Dr. Ian Cutress on April 15, 2019 12:30 PM EST
Posted in: Compute Express Link, PCIe 5.0
Last month the CXL 1.0 specification was released: a future cache-coherent interconnect that uses the PCIe 5.0 physical infrastructure but aims to provide a breakthrough in utility as well as cache coherency. At the time, the to-be-formed consortium consisted of Intel and eight other founding members. Since the announcement, membership has grown from that initial nine to thirty-three, including some important names in the industry.
The Future Is In Interconnect
In August 2018, in coverage of AMD’s Infinity Fabric interconnect, I stated that the battle of the future would be on the front of the interconnect. Specifically relating to CPUs at the time, I said:
After core counts, the next battle will be on the interconnect. Low power, scalable, and high performance: process node scaling will mean nothing if the interconnect becomes 90% of the total chip power.
Fast forward to today, and interconnect is still the hot topic when it comes to future design. Not only CPU-to-CPU, but also CPU-to-device and device-to-device: the ubiquity of the interconnect, and the utility that each one offers, is gearing up to be a battlefield. For non-coherent interconnects at a system level, PCIe is still the top player, but the companies involved are looking to cache-coherent options, such as CCIX, Gen-Z, and now CXL.
Compute Express Link, known as CXL, was launched last month. A fanfare was made as the standard had been building inside Intel for almost four years, and it was now set to become an open standard built upon PCIe 5.0 infrastructure, allowing devices using CXL to share the same physical connection interface. The nine initial promoters of the CXL specification included industry heavy hitters: Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft, indicating that CXL is expected to be a big part of the chip-to-chip portfolio for these companies, and it even has the support of the Gen-Z consortium. An official 'CXL Consortium' has not been registered as of yet; however, it is expected to be incorporated this year, with these nine companies at the helm.
Part of the announcement of the CXL 1.0 specification last month was to encourage new participants into the CXL standard. It has been designed as an open standard, and thus companies are welcome to propose adjustments to future versions of the standard, as well as build upon it without any licensing fees. We're expecting the CXL technical specifications to be opened up in due course as the technology is built upon.
One of the key elements to the announcement was the founders. Nine sizeable companies, each with interests in servers and accelerators, is more than the number of founding members when PCIe (5) or USB (7) started. There were some key names missing; however, some of them have now signed up. The full list reads as follows:
- Achronix Semiconductor Corp
- *Alibaba Group
- Ayar Labs
- BlackFore Technologies LTD
- Cadence Design Systems
- *Cisco Systems
- DriveScale Inc.
- Eidetic Communications Inc
- Faraday Technology Company
- Fastwel Group Ltd
- *Hewlett Packard Enterprise
- InterOperability Laboratory
- Mellanox Technologies
- Memsule Inc.
- Microchip Technology Inc
- Mobiveil, Inc
- Norel Systems LTD
- Northwest Logic
- Numascale AS
- PLDA, Inc
- SK Hynix
The new highlights of this list include Arm, Cadence, Lenovo, Mellanox, SK Hynix, and Synopsys. Each of these companies has a high impact factor in the future of computing, whether from a fundamental technical standpoint, implementation, or product line. It is interesting to note that Mellanox is a member but NVIDIA isn't, given that NVIDIA recently announced it is acquiring the company. NVIDIA has its own NVLink technology; however, one suspects Mellanox's product portfolio has to be open to new standards more than NVIDIA's. Having SK Hynix as a member could be interesting for future memory offerings, and Arm as a member means that we could see any of Arm's licensees looking into CXL technologies in the future.
As CXL 1.0 relies on PCIe 5.0, we're going to have to wait until PCIe 5.0 comes to market before we see anything CXL-related. However, a handy diagram from Intel at the CXL launch is a key one to remember regarding potential future CXL support. Intel recently held an Interconnect Day where CXL was explained in more detail. Unfortunately we were unable to attend; however, we do have the slides and the notes, and will be going over them in due course.
Kevin G - Monday, April 15, 2019
I was kind of hoping that Intel would migrate to optical interconnects to move data around. Their silicon photonics technology would lend itself nicely to their chiplet strategy, since those use similar but slightly different processes. Jumping to optical for their socket-to-socket and socket-to-coherent-accelerator links would be a straightforward means of scaling upward, since motherboard traces don't become a limitation. This would also permit external chassis cabling to be leveraged in an optical fashion, allowing designs to really scale upward in terms of socket count. I've only seen IBM and SGI (now HPE) do this in previous systems, and it's something I thought would become far more mainstream.
I do think it is a detriment that Intel is doing their own thing alongside CCIX, Gen-Z and OpenCAPI. CCIX and OpenCAPI are also based off the PCIe PHY, and at least at a high level it looks like it was possible to adhere to both of those and have a PCIe fallback. I was hoping that long term those two standards would merge into one and everyone would live happily ever after. If Intel went optical, they'd have a better justification for doing their own thing. Then again, with a PCIe 5.0 fallback, it isn't going to be the end of the world either, just suboptimal.
The one last thing that stands out is that Mellanox is on that list. They are currently being absorbed by nVidia and I suspect that this plan was in place before that announcement.
rahvin - Monday, April 15, 2019
Given how much Intel was pushing optical interconnects, the fact that they are sticking with copper probably means they can't get the costs in line. IMO optical makes a lot of sense for connections off the board (like Thunderbolt, though even those cables still have copper in the bundle), but optical doesn't make a lot of sense when you are in the same box. It would needlessly add a ton of cost to have a bunch of optical/electrical connections switching back and forth between copper and fiber, especially when you already need the copper for power. Unless they can get the optical-to-copper transfer silicon down to a penny or less, it's just extra cost for little benefit (and there are significant negatives: the copper/fiber converters add latency translating back and forth, and you can't have fiber in a trace on a board).
IMO optical only makes sense in external cables; when it's in the same box or on the same board there's no reason to use it over copper.
Kevin G - Monday, April 15, 2019
Intel's silicon photonics process was supposed to be the next big thing to bring prices of high-speed optical down to consumer levels. The reason optical was being looked at for main buses was that the signal propagation distance between sockets was getting longer (think of how many DIMM slots Cascade Lake-AP can have between sockets) while data rates were expected to increase per pin. That is a losing situation for copper. Even PCIe 4.0 needs signal repeaters to reach one end of the board on consumer systems. For example, several AMD Ryzen boards support PCIe 4.0 data rates only to the first 16x PCIe slot. PCIe 5.0 has even tighter restrictions on length, which is going to be a major issue on server platforms that have lots of lanes. For servers, DIMM slots are often found parallel to PCIe slots for airflow, and they pose a routing restriction as well. Reaching the first slot on a PCIe 5.0 enabled server may require a repeater, for example. This is where costs jump up with copper, as the additional chips are necessary. High-speed interconnects like CXL, CCIX, OpenCAPI etc. have similar restrictions since, well, they use the same PCIe PHY.
This is where I would disagree on pricing. It isn't going to be a penny; rather, optical has to be price competitive with the additional PCIe repeaters as PCIe 4.0 and especially 5.0 come to market. It isn't going to be a simple price comparison either (and for the record, I would still expect copper to win initially here). Rather, it will be a cost-benefit analysis, as optical opens up higher bandwidths over longer distances that copper cannot compete with.
Well, as for Intel: IBM actually has the patents for doing fiber traces in the PCB, and they did demo it, so it works in the lab at least. IBM was expected to migrate in this direction for their ultra-high-end products (mainframe, POWER) but took a bit of a right turn with POWER to cut costs and boost market share as the RISC market started to collapse a few years ago. During that era, they did have a POWER7 chip that leveraged a massive optical interconnect for supercomputing, though within each node it was still copper between processors.
rahvin - Tuesday, April 16, 2019
My main point was that yes, Intel did have big plans for their silicon photonics; the fact that it died on the vine should be proof that they weren't able to get it working like they'd hoped.
mode_13h - Wednesday, April 17, 2019
> For example several AMD Ryzen boards are only supporting PCIe 4.0 data rates only to the first 16x PCIe slot.
AFAIK, those are current-gen boards that weren't really designed for PCIe 4.0, but where it's just being enabled via BIOS update.
SydneyBlue120d - Tuesday, April 16, 2019
What about AMD and Qualcomm?
Smell This - Tuesday, April 16, 2019
That would be GenZ -- CCIX/CAPI
Smell This - Tuesday, April 16, 2019
Funny how AT didn't mention there were twice as many members in the Gen-Z Consortium, which includes ARM, Google, MS, etc., too ...