PCB Design Challenges: Designing With DDR


Longtime signal integrity experts Rick Hartley and Barry Olney join the I-Connect007 Editorial Team for a discussion of DDR and the complications board designers inevitably face when designing with it. If, as Rick and Barry explain, the DDR design process is not much more complicated than that of a typical high-speed board, why does DDR cause design engineers so much grief? Much of it comes down to following the process, running simulation, and not relying on reference designs.

Andy Shaughnessy: I thought maybe one of you could explain why DDR is used so often. What is the advantage of using DDR?

Barry Olney: It’s the fastest technology we have for memory. With each new version there is a dramatic increase in device data rates and bandwidth; for instance, DDR4 has a maximum data rate of 3200 MT/s whereas DDR5 peaks at 6400 MT/s.

Rick Hartley: I have not used DDR5, but it’s not just double data rate now; it’s quadruple.

Olney: Yes, DDR5 has only recently come out. A few of the chip manufacturers in Taiwan have it, but I have not seen it myself yet, so I’m looking forward to it.

Shaughnessy: What does this look like? What does this entail? When someone says, “We’ve got a design for this DDR,” what are we talking about? Is this a physical thing?

Hartley: Like any bus technology, it has data, address, clock, and control lines: all the things most bus technologies have. It uses source-synchronous clocking, meaning the clock is launched with the signals, so you’re not launching a clock and then depending on the signals to arrive at the receiver within a given timeframe. You launch them all together, and if all the data, for example, is set up within a certain timeframe relative to the clock/strobe, everything functions as it should. The same is true for the address clocks. Being source-synchronous makes it, I think, a much easier technology to use. What are your thoughts, Barry?

Olney: It is easier to use, especially with DDR3, DDR4, and DDR5, where you now have write leveling, which synchronizes the clocks to the address, command, and control lines. You must realize that with DDR, there are two sets of clocks: the main clock, and the strobes that trigger the data capture. Basically, the clock and strobe must have the longest delay of all the signals, because the address and data must settle before the data is captured.

Hartley: The key is to route the strobe the longest so that it has the greatest delay and arrives last.

Olney: Exactly, and that’s what a lot of people don’t do. They have the strobe arriving first and then the data last, and you get all the reflections and noise. You don’t know whether it’s going to capture the correct data.


Hartley: The first time I did a DDR2 design was at L3 in 2012, and the next one we did was also DDR2, in 2014. One of our engineers did a timing analysis of the 2014 design. He came up with a number that nobody believed. Everybody in engineering said, “Oh, this can’t be right, because all the app notes say that DDR timing is so critical.” We came up with a number that said, for example, that the address line only needed to be within ±125 mils of some optimal length, which is basically ±3 mm. Everybody said, “This can’t be right.” Yet shortly after that, I ran into an app note from Keysight Technologies, written by an engineer named Chang Fei Yee, who wrote:

“Maximum DDR2 skew of trace length to meet timing margin. In order to be compliant with the JEDEC specification, the maximum skew among all signals shall be less than ±2.5% of the clock period driven by the memory controller. All signals of the SDRAM are directly or indirectly referenced to the clock. For example, in normal FR-4 material with a dielectric constant of approximately four and a differential clock rate of 1.2 GHz, the maximum skew shall be ±125 mils, or ±3 mm.”

This is precisely the number we came up with, even though half the engineers in our department said it couldn’t be right. I intentionally mismatched the length of some of these lines just to keep them within that quarter of an inch distance from one another, and the thing worked perfectly. That was when I first realized how non-critical this really is.
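The ±125 mil figure from the Keysight note can be reproduced with a few lines of arithmetic. This is a sketch, not anything from the interview itself: the ~170 ps/inch propagation delay is an assumed typical value for a stripline in FR-4 (Dk ≈ 4); microstrip and other stackups differ, so treat the helper name and constant as illustrative.

```python
# Sketch of the +/-2.5% skew-budget arithmetic from the Keysight note.
# PROP_DELAY_PS_PER_INCH is an assumed typical FR-4 stripline value.

PROP_DELAY_PS_PER_INCH = 170.0  # assumption: ~170 ps/inch in FR-4 stripline

def max_skew_mils(clock_hz, skew_fraction=0.025):
    """Convert a fraction-of-clock-period skew budget to a trace-length tolerance."""
    period_ps = 1e12 / clock_hz              # clock period in picoseconds
    skew_ps = skew_fraction * period_ps      # allowed skew in picoseconds
    inches = skew_ps / PROP_DELAY_PS_PER_INCH
    return inches * 1000.0                   # 1 inch = 1000 mils

print(round(max_skew_mils(1.2e9)))  # ~123 mils, roughly the +/-125 mils cited
```

With a 1.2 GHz clock, the period is about 833 ps, 2.5% of that is about 21 ps, and 21 ps of FR-4 stripline is roughly 123 mils of trace, matching the ±125 mil budget in the quote.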

Olney: I have a table on that, Rick. For 166 MHz, the interconnect margin is 155 picoseconds.

Hartley: Yes, which is huge!

Olney: That relates to about 75 mils.

Hartley: I know, it’s crazy. And people route these things within ±5 mils, or ±0.1 mm.

Olney: I guess I’m one of those crazy people because I’m a PCB designer, and the way I see it, everything has to be done properly. Whether it’s a really fast DDR4 or a slow 200 MHz DDR, I route them all the same. I route them all to within 10 picoseconds of delay, and it doesn’t take any extra time to do that. People say, “Why should I have it so accurate?” Well, if you just do it out of habit every time, then your design works perfectly. If it is 125 mils out, it might work on the test bench, but if you put it in the field and temperature cycle it, it will fall over. To make it reliable and efficient, you need to have it spot on, and it takes little extra effort to do it that way.
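For a sense of scale, Barry's 10 ps delay-matching target can be converted to a trace-length tolerance. This is an illustrative sketch only, again assuming ~170 ps/inch for an FR-4 stripline (an assumed typical value; actual stackups vary).

```python
# Hypothetical helper: convert a routing delay tolerance to trace length,
# assuming ~170 ps/inch FR-4 stripline propagation delay (an assumption).

PROP_DELAY_PS_PER_INCH = 170.0

def delay_to_length_mils(delay_ps):
    """Trace length (in mils) corresponding to a given delay in picoseconds."""
    return delay_ps / PROP_DELAY_PS_PER_INCH * 1000.0

print(round(delay_to_length_mils(10)))  # ~59 mils, i.e. about 1.5 mm
```

Under that assumption, matching to 10 ps means holding trace lengths to within roughly 60 mils of each other, far tighter than the ±125 mil budget discussed above, which is Barry's point about routing by habit.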

To read the entire conversation, see the May 2021 issue of Design007 Magazine.


Copyright © 2023 I-Connect007 | IPC Publishing Group Inc. All rights reserved.