Nolan Johnson speaks with Arjun Bangre, director of product for high-speed interface IPs for PCI Express and CXL at Rambus. They discuss new developments in CXL, PCI Express, and interoperable IP solutions that Rambus has developed.
Nolan Johnson: Arjun, would you please introduce us to Rambus?
Arjun Bangre: Definitely. We are experts in the memory space: finding innovative solutions for memory and solving some of the memory bottleneck problems in data centers. Last year we announced the CXL Memory Interconnect Initiative. Rambus is very focused on researching innovative solutions to that problem, leveraging the new disruptive technology of Compute Express Link (CXL). We also use these economies of scale to develop IPs for interconnects and advance the industry in whatever application our customers are trying to build, whether that is artificial intelligence/machine learning (AI/ML) accelerators, SmartNICs, or other innovative solutions in the memory space itself.
We make scalable IP solutions which reduce the time to development and time to market for components like an AI/ML accelerator, a CPU or a GPU, or any special-purpose security component on a hardware server solution, so that designers can develop interconnect solutions very quickly and avoid reinventing the wheel each time through a design cycle. All the different components on a server motherboard must talk to each other, and they all need to speak a common language. The industry has converged on certain protocols so that any device you buy from any vendor can be interoperable. In this case, that would be either PCI Express or CXL.
PCI Express offers Gen 6, PCIe Gen 6, which runs at 64 gigatransfers per second (GT/s) per lane; that's a lot of bandwidth. CXL rides on the same electrical layer, but we are currently at CXL 2.0, which is still at 32 GT/s per lane; its advantage is low latency. The time it takes to go from pin to pin on CXL and get the response back is shorter than on PCI Express. Second, CXL allows the two devices on either end of the link to be on a coherent memory domain, establishing coherency between the two components. If someone is building a chip and needs one of these interfaces, either PCI Express or CXL, they buy the IP solution from us. This consists of the SerDes, which is the mixed-signal part, and the digital part, which is the controller. Then they build their own secret sauce of whatever value proposition their device offers.
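To put those per-lane figures in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not anything from Rambus) converting each PCIe generation's transfer rate and line encoding into approximate usable bandwidth. The encoding-efficiency figures come from the public PCI-SIG specifications; the Gen 6 efficiency is treated as ~1.0, ignoring FLIT/FEC overhead for simplicity.

```python
# Back-of-the-envelope PCIe per-lane throughput (illustrative only; ignores
# packet, FLIT, and FEC overheads beyond the basic line encoding).
GENS = {
    # generation: (GT/s per lane, line-encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),     # 8b/10b encoding
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
    6: (64.0, 1.0),       # PAM4 + FLIT mode; ~1.0 here, ignoring FEC/CRC
}

def lane_gbytes_per_s(gen: int) -> float:
    """Approximate usable GB/s for one lane of the given PCIe generation."""
    gt_s, eff = GENS[gen]
    return gt_s * eff / 8  # divide by 8 bits per byte

for gen in GENS:
    per_lane = lane_gbytes_per_s(gen)
    print(f"Gen {gen}: ~{per_lane:.2f} GB/s per lane, "
          f"~{16 * per_lane:.1f} GB/s for a x16 link")
```

Running this shows why each generation roughly doubles throughput: Gen 5 lands near 4 GB/s per lane and Gen 6 near 8 GB/s per lane before protocol overheads.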
We allow these chip designers to pull in their schedule and remove risk for them because they are getting verified, validated IP. This brings economies of scale; the IP is already proven as interoperable because several others are using it and because it is validated for following the specification for either PCIe or CXL, in this case.
Johnson: What are some of the key market drivers for those who are specifying the Rambus products into their designs? What are you hearing and seeing for the assemblers who are then responsible for putting that design together?
Bangre: Regarding market drivers, CXL is finding its feet right now and taking off. That's because there is a problem with compute resources accessing memory. A typical processor has some amount of DDR, which is tightly coupled to it through a DDR interface, and there are only so many DDR channels you can connect. But as you expand into more data-intensive applications, you need more memory. This is where CXL comes in: it gives you another, serial interface that also runs at a high data rate. It's not quite the same as tightly coupled DDR, but it's the next best thing, with low latency. This is what is driving CXL adoption.
In terms of PCIe, it is simply faster data rates because you have more data; that's no surprise. An IDC forecast says, "By 2025, data consumption is going to hit 175 zettabytes." That's more zeros than I have the patience to write out. There's personal usage as well as efficiency and productivity usage where AI/ML comes in, which helps you do nifty things like face recognition for security or other applications. In a nutshell, what is driving new chip technologies is exponential data consumption.
Johnson: What are the challenges for bringing the Rambus product into the actual manufacturing of the subsystem? What would be an assembler’s perspective on that?
Bangre: In terms of assemblers, there are two parts. One, Rambus makes its own chips, and we play in the memory interconnect segment. There's a business unit for memory buffer chips, and another business unit for the interface IPs. The channel modeling, known as IBIS modeling, as well as the PCB materials and all those simulations, takes a lot of signal integrity and power integrity expertise. Typically, this is pushed out by an IP vendor to the actual chipmaker.
Rambus has an excellent applications engineering team for our SerDes who have years of experience doing this. We have a track record of more than 300 SerDes in production. We have the expertise to consult with our customers on these new signal integrity and power integrity challenges.
When it comes to packaging, there are even more challenges when you go to die-to-die solutions, such as HBM on an interposer with 2.5D and 3D packaging, which is a whole different level of complexity.
In terms of assemblers, the question is likely, "Have you modeled your channel characteristics before you actually get the silicon in the lab, and then start building up a board to put the silicon in and bring up validation testing?" We consult on that aspect of the design process early in the stream. That is where it's getting even more complex at these higher data rates.
Johnson: I'm starting to realize that the chip is an application-specific IC, and that you are also doing customer-specific ASICs depending upon what their needs are?
Bangre: We build and offer memory buffer chips that expand memory capacity and performance for data centers and high-performance computing. We also design and license IPs for high-speed interfaces, memory, and hardware-based security applications. These reusable IP solutions accelerate time to market for our customers. Our customers design their chips, and we extend engineering support for the integration and use of our IPs.
Johnson: What do you see over the horizon in this space? What are the challenges that you at Rambus still need to be tackling? What’s under development?
Bangre: Rambus recently announced the availability of PCIe Gen 6 digital controllers that support data rates of 64 gigatransfers per second (GT/s) per lane. Chipmakers are now working on their first Gen 6 products. The market as a whole is gearing up to support the transition from PCIe Gen 5 to Gen 6. PCIe 6.0 devices need to support two modes of modulation: PAM4 and NRZ. The NRZ mode is used for Gen 5 speeds and below, from 32 GT/s all the way down to Gen 1 at 2.5 GT/s. When the data rate changes to Gen 6, which is 64 GT/s, the device's physical layer needs to switch over and use PAM4 encoding. This preserves backward compatibility, since a PCIe Gen 6 device can still speak to a device capable of only PCIe Gen 5 or below. This is a major shift; through PCIe Gen 5, NRZ was the only modulation scheme used at all data rates.
Johnson: Excellent. Arjun. Thank you for taking the time to talk with us.
Bangre: You’re welcome. Thanks.