From DesignCon: Bringing the Power of Simulation to a Wider Audience


I ran into Siemens EDA Product Marketing Manager Todd Westerhoff during DesignCon 2021 and asked him for an update on his company’s simulation and analysis tools. We discussed Siemens’ ongoing efforts to bring the power of simulation to more PCB designers and run simulation earlier in the design process, as well as the challenges Siemens faces in developing tools that appeal to both signal integrity experts and mainstream hardware design engineers.

Andy Shaughnessy: How are you doing? Long time, no see.

Todd Westerhoff: Hey, Andy—great to see you in person, for a change. How are you?

Shaughnessy: Good. Glad to be at a trade show again. So, what’s new in simulation?

Westerhoff: The question we’re always asking is, “How do we bring analysis to a wider group of people?” When you look closely at signal integrity, it’s amazing how many people don’t analyze their design in any quantitative way to validate it after layout. What’s your first thought, if I ask, “How do you analyze a serial link once you’ve laid it out?”

Shaughnessy: I would start with a model, I guess.

Westerhoff: Precisely! Most people would say, “We’ll get an IBIS-AMI model from the silicon vendor and simulate our design based on the actual silicon IP and equalization settings.” But the challenge is that running meaningful simulations with AMI models still requires someone who runs signal integrity analysis for a living—a dedicated specialist. AMI models come in multiple flavors, with different levels of completeness and accuracy. So, once you get a model, you have to figure out whether the model is complete (for example, whether it includes jitter), how to set up simulations, how to configure the model-specific inputs, and how to interpret the results to determine whether the serial link passes or fails. AMI may be a modeling standard, but the process of using AMI models to determine a link’s operating margin changes from vendor to vendor and model to model.

I can use the same simulator with three different models and end up with three different analytical methodologies that I need to follow—all while trying to accomplish the same thing, which is to figure out whether my link will work. Thus, if you’re not deeply versed in AMI model technology, you’re going to have a tough time using AMI models productively—and there’s a big difference between simply running a simulation and running simulations that produce results you can trust enough to make design decisions.

My previous job was based on IBIS-AMI, so that didn’t matter to me. IBIS-AMI was our main focus morning, noon, and night; we knew how to assess models and make them work. But now that I’m serving a broader audience and taking a deeper look at how people deploy simulation technology, I’m seeing that people are still struggling, all these years later, with AMI models—with obtaining and understanding AMI models, understanding their limits, running them, and understanding the results.

I saw several papers today at DesignCon where people were contrasting IBIS-AMI-based simulation against compliance analysis, discussing which method to use and when. Both methods assess serial link performance, but they differ in scope and analytical method. IBIS-AMI simulation predicts the behavior of the end-to-end link based on actual silicon IP and device settings—in other words, it tries to predict link behavior as accurately as possible, at the cost of increased modeling and analytical complexity. By contrast, compliance analysis looks at the behavior of the passive (unpowered) channel only. In essence, compliance analysis assumes that the Tx and Rx IP will meet or exceed the requirements of the associated protocol spec, then predicts whether the channel will pass or fail in that context.

Compliance analysis works because protocol specifications identify requirements for the Tx/Rx IP and their associated packaging. So, if your passive channel meets the requirements of the spec, and your Tx/Rx are both compliant, the end-to-end link should work. The main advantage of compliance analysis is that it’s more predictable: the Tx and Rx IP are based on the standard, so they’re no longer variables in the analysis. Compliance analysis can be run when IBIS-AMI models aren’t available—a big bonus. The main drawback to compliance analysis is that it’s conservative, so it can fail a channel that would work with Tx/Rx IP that exceeds the spec.

Because compliance analysis isn’t vendor model-dependent, it’s accessible to a broader range of users, since the analysis process can be made much simpler. I’ve been around IBIS-AMI since its inception, but compliance analysis has become my favorite way to get a handle on channel performance, because it’s faster, easier, and more predictable. I don’t have to wonder whether I’ll be able to run compliance analysis on a channel, and I know how to interpret the results, because that doesn’t change.

There’s one point we should clear up here: When I talk about compliance analysis, most people think of channel operating margin, or COM. But compliance analysis is really a category of analysis, of which COM is only one method, used by some protocols and not by others.

Shaughnessy: That’s true. What are the other methods in this category?

Westerhoff: The methods follow the protocols. Some use frequency-domain masks to judge channel behavior; some run a time-domain simulation and measure against an eye mask; some use COM; some use a variant of COM called JCOM; and others use standalone tools like Seasim. Each of these different methods requires a particular set of input data, uses a particular analytical method, and produces a particular set of outputs. If you look at compliance analysis as a category with all its different input data, methods, and outputs, it begins to look a little bit like IBIS-AMI, in the sense that there are a lot of process details to master. The difference, of course, is that the method stays constant for a particular protocol, independent of the vendor silicon you’re using.
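To make the frequency-domain-mask idea above concrete, here is a minimal sketch of that style of check: a channel’s insertion loss is compared against a spec-defined limit line at every measured frequency, and the worst-case margin determines pass/fail. The mask numbers, function names, and channel data here are invented for illustration; they do not come from any real protocol spec or from HyperLynx.

```python
# Toy illustration of a frequency-domain mask check, one of the simpler
# compliance-style methods. All numbers below are hypothetical.

def mask_limit_db(freq_ghz):
    """Hypothetical spec mask: maximum allowed insertion loss (dB, negative)
    as a function of frequency -- here a 1 dB/GHz line floored at -30 dB."""
    return max(-30.0, -1.0 * freq_ghz)

def channel_passes(s21_db_by_freq):
    """Pass if the measured insertion loss stays above (less lossy than)
    the mask at every frequency point; also report worst-case margin in dB."""
    worst_margin = min(loss - mask_limit_db(f) for f, loss in s21_db_by_freq)
    return worst_margin >= 0.0, worst_margin

# Synthetic channel data: (frequency in GHz, |S21| insertion loss in dB)
channel = [(1.0, -0.8), (4.0, -3.5), (8.0, -7.2), (12.0, -11.9)]
ok, margin = channel_passes(channel)
print(f"pass={ok}, worst-case margin={margin:.1f} dB")
```

Real compliance methods are considerably more involved (COM, for instance, combines loss, crosstalk, and noise into a single figure of merit), but the pattern—fixed inputs, a fixed spec-defined limit, and a pass/fail verdict with margin—is the same.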

In HyperLynx we took this a step further by bringing all the different protocols and methods together under a single wizard that takes care of the details of generating input data, running analysis, formatting results, and assessing pass/fail status. No matter which protocol you’re running, for both pre- and post-layout analysis, the steps you follow and the report that gets produced are the same.

This means that you get all the protocol-specific analytical complexity you need to perform a detailed assessment of whether your channel is protocol compliant, without the associated process complexity. You don’t need to figure out how to extract and format data for analysis; the wizard does that for you. You don’t need to set up analysis parameters for your protocol, because the wizard does that too. When the analysis completes, you get the information you need most: an itemized listing of which channels passed, which channels failed, and by how much. 

From a designer’s point of view, this becomes kind of a magical thing; suddenly you have an analysis tool that you know you can count on to help assess your design choices. You don’t need IBIS-AMI models, and you don’t need to be a simulation guru to use it. It’s reliable and predictable, which goes a long way. It may be conservative, but you can factor that into your decision making. If you’ve got a channel that just barely passes and you know that your vendor’s Tx and Rx exceed the spec, you know that your design will probably have better margins than compliance analysis is reporting.

I think of compliance analysis as my base-level analysis; it’s the thing I always do first. If my margins are good enough and I’m confident about my Tx/Rx IP, my analysis is probably done, and I’m ready for fab-out. If my margins are tight or I have other reasons to be concerned, then I’ll probably run IBIS-AMI simulation afterward to try to get a better handle on actual design margins. Even then, I’m making many of my design decisions and running much of my initial post-route analysis using compliance analysis, because it’s much easier and faster, and it gives me a good handle on how my design is performing.

Shaughnessy: Is compliance analysis something that typical hardware design engineers could run themselves, or would you need a dedicated signal integrity specialist?

Westerhoff: Yes, hardware engineers can run it themselves, and that’s exactly the point. We need to give more people within the design community the ability to run simulations and make design decisions for themselves, because we’re bottlenecking on the signal integrity experts we have. If I need a signal integrity specialist every time I want to evaluate a design tradeoff or validate a section of routing, I really only have two choices: 1) wait until a specialist becomes available; or 2) use my best judgment to make a design choice and hope for the best. Neither is a particularly good option.

We need to keep specialists who do signal integrity for a living working on advancing the state of the art. They create new methodologies that combine tools on the fly; these are the people who work at the top of the analytical pyramid and can make anything work. The problem is, there are too few of these people and way too many links and layouts that need to be verified. There are serial links in everything today—in your computer, your phone, and even your watch. How are we going to do any kind of design, optimization, and verification for all those designs with this small community of signal integrity experts?

Shaughnessy: Several companies don’t seem to have signal integrity people on staff at all.

Westerhoff: That’s true. So, the question becomes, “What can we bring to a broader engineering community that lets them make informed design decisions and increase their chance of first-pass design success?”

Shaughnessy: These are the people in the middle of your hypothetical analysis pyramid, I guess?

Westerhoff: Exactly. And what we see is that compliance analysis just works. It’s simpler, it’s reliable, and I can factor its conservative nature into my decision making. So, if I’ve got the ability to run compliance analysis as a hardware design engineer, there’s a lot of work I can do myself without needing a specialist involved. If the problem becomes too complex or the margins are too tight, that’s when I need a specialist to back me up.

Shaughnessy: This all fits in with your efforts to front-load the design process by using simulation earlier in the design cycle.

Westerhoff: Absolutely. We want to define the analysis flow and automate as much as possible to provide access to a wider engineering audience. We want to relieve “expert bottleneck” pressure, both to let experts focus on the tough problems and to give hardware engineers the tools they need to do more analysis on their own.

We need to provide better verification options to designers who are laying out boards to vendor guidelines. The questions I ask are: “Once you’ve laid out that design, how do you verify it before fab-out? Do you inspect it visually and hope for the best? Do you retain an external consultant to do analysis for you, which means every change is going to be another spin and more money? Do you compete for the time and attention of an internal signal integrity expert to do the analysis for you?” Truthfully, none of those is a particularly good option.

Shaughnessy: They’re all expensive options.

Westerhoff: Yes. They’re time-consuming and expensive, whereas we’re usually trying to bring things to market quickly and on-budget. Can automated compliance analysis enable a broader group of designers to make better design decisions on their own? We think so.

Shaughnessy: What do you think of the conference so far, overall?

Westerhoff: We’re at the halfway point as we sit here. Attendance is obviously light, but the papers are good. I’ve been surprised at how many papers are contrasting IBIS-AMI vs. compliance-based approaches, and how many people are still trying to understand the differences.

Shaughnessy: Have you been out on the show floor much?

Westerhoff: Briefly—attendance was pretty thin. I saw a lot of measurement equipment and components, but not much else.

Shaughnessy: I’m just glad to see people at this point, and to get on an airplane.

Westerhoff: I wholeheartedly agree. The mood at the show, low attendance or not, is quite positive. I think people who are here came looking to make the best out of a difficult situation, and it seems to be working.

Shaughnessy: Well, great to see you again, Todd.

Westerhoff: Thanks. Great talking to you, Andy.


Copyright © 2022 I-Connect007 | IPC Publishing Group Inc. All rights reserved.