Todd Westerhoff Discusses His New Position and Much More

At DesignCon, I met with our old friend Todd Westerhoff, a veteran signal integrity engineer who has joined Mentor, a Siemens Business, since we last spoke. We discussed his new job responsibilities, his drive to get more designers and engineers to use SI tools, and the increasing value of cost-reduced design techniques versus overdesigning PCBs.

Andy Shaughnessy: Good to see you, Todd. Can you tell us about your new job responsibilities and what you’re working on?

Todd Westerhoff: Thank you, Andy. I joined Mentor in August of 2018 as the product marketing manager for HyperLynx. HyperLynx is an analysis product family that includes signal integrity, power integrity, electromagnetic modeling, and design rule check capabilities. I’m the product manager for that and also some of the analog mixed-signal analysis capabilities we have for board-level designs.

If you went down to the show floor at DesignCon 2019, you’d see we’re talking about three themes. We’re looking at the design cycle and figuring out how we can design things more quickly and effectively. We’re looking at a thing we call power-aware simulation, which looks at the effect of a power distribution network and how it interacts with high-speed signals. Finally, we’re looking at compliance risks and ways that you can automate different types of design rule checks, and all of those things are about how we can do more simulation earlier in the design cycle and try to design quickly and more effectively.

I think we all know that one of the things that happens is that if you make a mistake during design and let it propagate forward in the design cycle, the cost of finding it and correcting it continues to increase the farther that you let it go. When I got out of college (a long time ago), I worked for GenRad, which built automated board test equipment. The same concept applied: finding and fixing problems becomes more expensive as they propagate downstream. In those days, finding a fault at board test might have cost $10. Not finding that fault at board test and finding it during system integration might have raised the cost to $100, and it went up another factor of 10 every time that flaw got closer to its end application—unless you put it into space, in which case it went up a lot more.
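The factor-of-10 escalation Todd describes can be sketched as a quick back-of-the-envelope model. This is a minimal illustration only: the stage names and the $10 base cost come from his example, while the clean power-of-ten scaling is an assumption of the sketch, not a formula from any standard.

```python
# Illustrative cost-of-escape model: each stage a defect survives
# multiplies the cost of finding and fixing it by roughly 10x.
BASE_COST = 10  # dollars to catch the fault at board test (from the example)
STAGES = ["board test", "system integration", "field deployment"]

def escape_cost(stage: str) -> int:
    """Estimated cost of first finding the fault at the given stage."""
    return BASE_COST * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage}: ${escape_cost(stage):,}")
```

Run against the example's numbers, the model reproduces the $10 board-test fault growing to $100 at system integration and $1,000 in the field.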

This problem isn’t limited to manufacturing; it’s about design too. If I’m capturing a schematic, and I make a simple mistake but don’t catch it, then I have to finish the schematic and simulate it before I have the next opportunity to find and fix it. I run my simulation and don’t get the results I expect, so I have to find the bug by backtracking to where the problem occurred in my system simulation model and fix it. But that can take considerable time and effort (read: money), so we’re trying to understand how we can find errors earlier and make the design process more efficient.

Take safety compliance testing, for example: I design a board, and it has to meet certain safety compliance requirements, such as the requirements for protecting against electric shock. Creepage refers to current finding its way over the surface of the board to somewhere you don’t want it to go, and the rules around something like creepage are amazingly complicated. When you look at current jumping from conductor to conductor, you have all of these factors that you multiply distances by, because you de-rate things based on environmental conditions and so on, so it gets complicated and tedious very quickly.

Shaughnessy: It sounds very chaotic.

Westerhoff: When you first look at it, particularly from an outsider’s point of view, it seems sort of arbitrary, but when you look at all the de-rating factors and the reasons behind them, it’s very well spelled out. It’s a very methodical process, but it’s also very hard to do, so it’s one of these things that’s perfectly suited for automation. Because if I design a product, design the boards, put it all together, then go to a testing lab to get it certified and it fails, that’s a huge hit. I have to rework boards, rebuild prototypes, schedule time with the lab to redo the testing, and pay the associated costs, all while delaying my initial product shipment.
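To see why this kind of table-driven de-rating suits automation, here is a toy sketch of a creepage rule check. The base distances and de-rating multipliers below are invented for the example; real values come from standards such as IEC 60664-1 and depend on working voltage, pollution degree, and material group in ways this sketch does not attempt to capture.

```python
# Toy illustration of an automated creepage check. The tables below are
# made up for the sketch; real creepage rules (e.g., IEC 60664-1) are far
# more detailed.
BASE_CREEPAGE_MM = {100: 0.71, 200: 1.4, 400: 2.8}   # working voltage (V) -> mm
POLLUTION_FACTOR = {1: 1.0, 2: 1.25, 3: 1.6}         # environmental de-rating

def required_creepage(voltage: int, pollution_degree: int) -> float:
    """Minimum conductor-to-conductor surface distance after de-rating."""
    return BASE_CREEPAGE_MM[voltage] * POLLUTION_FACTOR[pollution_degree]

def check_creepage(measured_mm: float, voltage: int, pollution_degree: int) -> bool:
    """True if the layout's measured creepage distance passes the rule."""
    return measured_mm >= required_creepage(voltage, pollution_degree)
```

The methodical, look-it-up-and-multiply character of the real rules is exactly what makes them tedious for a human and trivial for a DRC tool.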

Whether it’s compliance-based stuff like IEC electrical shock safety standards, or a whole collection of rules of thumb about high-speed signals, EMI, or PI, there’s a whole class of things that you can define as best design practices, codify, and analyze, which is what HyperLynx DRC does. It lets people find problems earlier in the design cycle that would otherwise be more expensive and time-consuming to find through simulation or physical prototypes.

Another interesting topic we’ve been examining is something we call “Model-Free Analysis.” It’s based on our observations of an unpleasant but pervasive truth: designers don’t use simulation as much as they should. I’ve been doing signal integrity for 20 years now, and we have always talked about all the reasons why people should simulate boards and perform signal integrity analysis, but the fact is most people don’t.

So, why is that? What’s that all about? A lot of times, people say they can’t get models in time or at all, they can’t get a model they trust, or they can’t understand how to use the model. People might also say they don’t have the SI resources, or that the people in their company who can do signal integrity are few and far between. So, we took a step back and looked at what kinds of engineering analysis people could do that you would normally think of as simulation-based, but without a vendor-specific silicon model.

We asked, “Could we come up with ways of creating DDR4 and DDR5 models that wouldn’t necessarily be specific to a manufacturer, but would still allow designers to perform useful analysis?” And the answer, to our surprise, was yes. We came up with ways to create these models for DDR4 and DDR5 that allow you to do design analysis and trade-offs without getting into vendor- or model-specific simulation accuracy issues.

We got a lot further than we thought we were going to get. We created some automated processes that allow us to take a board layout and auto-generate models for the controllers and the DRAMs. Then we ran simulations just to see how those results compared to the actual models, and they were way closer than we thought they were going to be.

Shaughnessy: And it’s something that a typical designer or an engineer could use, so you don’t have to be an SI engineer to do it.

Westerhoff: Exactly! We call it the SI/PI expert bottleneck. If you take 100 engineers, how many of them can really do signal integrity at the level that we’re talking about at DesignCon? What happens most of the time is you have this large community of people trying to do system design, and then there are a couple of people, often a shared resource, trying to service everybody, and they’re overbooked. As a result, most of the actual design work doesn’t get analyzed due to lack of resources or lack of access to the expertise, and now it’s 112 gigabits/sec that we’re talking about. The more we push the state of the art, the smaller the percentage of the population that can actually do the kind of simulation needed to make design trade-offs and catch common problems.

And so we asked, “What happens if we look at the broad base of designs that aren’t the tip-of-the-spear type stuff?” As it turns out, a lot of those designs are being overdesigned. Designers new to signal integrity ask, “How do I route a high-speed signal?” And they’re told to design a cross-section and a clean reference path, making sure they have a ground plane under every high-speed signal. Which is great if you can afford it, but what about all the cheap, high-performance devices that permeate our daily lives? If I have something I want to put in a watch, or I have a Raspberry Pi, where I’m going to try to build a board that I can sell as part of a $35 product, I need to minimize my layer count to control costs. I can’t add enough ground planes to control the return path for every signal, so I’m going to do something else.

If I’m going to build the board and sell it for low cost, I’m going to have to figure out ways to selectively bend the traditional rules and know how far I can push them before the design won’t work. We’re living in the world of consumer electronics and high volume. Think about the Raspberry Pi, with over 19 million manufactured, and do the math. If I had figured out a way to save a penny on every Raspberry Pi ever built, I’d have $190,000.

And where does that leave us, from a design analysis point of view? Well, you have the tip of the spear at 112 gigabits/sec, but we’re also at the point where we have electronics on our wrists. So what do we do? The term “shift left” means bringing analysis and verification forward in the design cycle, placing it in the hands of users who have traditionally relegated these tasks to others, allowing them to find mistakes and correct them before they go downstream. In the particular case of low-cost design, it also means bringing sophisticated SI/PI/3D EM modeling capabilities within the reach of the average system designer, which is exactly what HyperLynx is known for.

It’s critical to understand the assumptions that are “baked in” to the tools and processes we use as designers. Take any SI tool, put down a driver, a couple of transmission lines, and a receiver; simulate it and get a waveform. Well, what assumptions did the analysis make? The truth is, if you just put down drivers, receivers, and transmission lines, every SI tool assumes that your PDN, driver, and receiver are perfect. The driver’s power rail might be de-rated for corner, but you’re assuming that power is rock solid, DC to daylight, and that your via and trace return paths are perfect.

And for all of the baseline SI modeling and simulation we do, we assume that PDNs and return paths are perfect, but they’re not. When we start accounting for the interactions between PDNs and high-speed signals, we’ve moved into the realm of power-aware simulation, which only a few SI tools can handle. Here’s my point: as I try to make my designs lower in cost, more profitable, and higher in volume, a clean return path for all signals is something I probably can’t afford. Power-aware simulation lets me say, “I’m not going to do this design by the book. I’m going to degrade the signal quality, chop up the planes, and do other things, because they let me make the design cheaper, and then quantify through simulation whether that’s going to cause the design to fail or not.” That’s a big deal.

Shaughnessy: It sounds like the SI and EMC tools have been geared toward people working on these super high-speed, sexy things on the one end, but as you said, there’s all of the stuff like the watch board and anti-theft tags in clothing, etc.

Westerhoff: Yes, and that’s a really interesting problem. Technology evolves rapidly, but we keep doing design analysis the way we always did it. I think we have operated on the assumption that EDA needed to address the absolute state of the art, and if we could handle the most complicated problem, then everything was going to be okay, and everybody could use it. But the actual evidence is to the contrary, in the sense that we have a broad collection of people doing electronic design who aren’t doing the kinds of simulation we think they should, or that they could benefit from, largely because those tools are geared toward the super high-end users.

Who is going to take the time to understand these needs and put the tool chains together with their associated methodologies? What do we do for the broader base, and how can we make people more effective? What happens is that when you tune products and technologies for state-of-the-art, well, you’ve tuned them for state-of-the-art. At the end of the day, if I’m going to model the most complicated electromagnetic problem conceivable, it’s all about performance and capacity, and if it’s hard to use, too bad, but the users addressing those problems are experts, and they will make it work pretty much no matter what. However, that’s not what you need to do in a broader base; you need to tune the product differently, and I think that’s how we got to where we are.

Shaughnessy: Is that one of the things that you’re going to be working on now as the HyperLynx manager?

Westerhoff: Yes, it’s my new thing. I’ve been opening up my electronics at home. I had an Ethernet switch that died, I bought a new one, and opened up the old and the new. These were equivalent boards for the same product under the same model number, but when I looked at the two different boards, they were completely different because one was cost-reduced. I think it’s a really interesting problem, and it’s not just cost reduction.

We’ve reached levels of performance even in our cheapest consumer products where we need to be able to do some fairly sophisticated analysis to make it work. My current favorite: I can buy a five-port gigabit Ethernet switch (with a lifetime warranty) for $15 on Amazon, but if I go to Five Guys, a burger, Coke, and fries will cost me $14. Think about how extreme the cost savings on the PCB inside that metal case, with all that other stuff, has to be. Remember: it’s a gigabit switch; you have to get the signal integrity right. So, how do we figure out how to make this extremely inexpensive board that’s still going to pass a gigabit signal well enough?

Shaughnessy: I like the whole idea about realizing that there’s a market for SI tools to be used on products that are not super high-end.

Westerhoff: One last thought: I think many people get the wrong idea about the goal of signal integrity. They seem to think the goal is to make the signal as “clean” as possible, but that’s not it at all. The goal is to have a signal that’s clean enough to work, and no more. If I make the signal any cleaner than it minimally needs to be, and I increase the cost of manufacturing by a penny doing it, then I’ve wasted money. So, the real mastery is not about making signals clean; it’s about how cheap I can make the design, and how dirty I can let the signals get, while still ensuring that the design will work reliably in volume. That is what masters of signal integrity do.

Shaughnessy: It has been great talking with you, Todd. Thanks for your time.

Westerhoff: Thank you, Andy.





Copyright © 2021 I-Connect007. All rights reserved.