Since the beginning of PCB design, physical prototypes have played a central role in the verification and ultimate approval of a new product for market. The challenge with this approach is simple: most designs require more than one prototype spin to achieve approval.
A Lifecycle Insights survey¹ turned up an industry average of 2.8 respins per project at a cost of $46,000 per spin. Certainly, there are variables, such as design size and complexity, the number of boards in a system, bare vs. fully assembled, and the target industry or application; all of these affect the cost and number of typical spins. As design complexity increases, so do the unknowns; even if a team designs conservatively, it is bound to trip over a few new spin-inducing issues. This is so common that project managers often “bake in” three to four respins to project schedules; you could call that mitigating risk, or planning to fail.
The promise of the digital transformation of the electronics design process is “zero-spin,” going directly from design into volume production. This requires that every existing check performed on a physical prototype has a digital equivalent, or, better yet, constraints synthesized from requirements that ensure a correct-by-design result. The reality today is that confidence in digital verification isn’t high enough for anyone to bet the farm on zero-spin; most consider a single, fully tested prototype pass the holy grail.
The heart of digital verification is the digital twin, a model of the design with enough fidelity to ensure that checks or simulations catch the errors that would otherwise surface during physical testing. A model of the electronics system is hierarchically constructed of many smaller models representing the enclosure, the environment, the multiple boards, and each board’s materials/stackups, components, and interconnects. Since models are necessary for an array of checks, they must also be multi-discipline. For instance, a PCB component model could be used to verify signal integrity (SI), power integrity (PI), thermal, form/fit, vibration, and more.
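The hierarchical, multi-discipline structure described above can be sketched as a simple tree in which every node carries models for one or more disciplines. This is a minimal illustration only; the node names, model references, and `TwinNode` type are invented for this sketch, not taken from any real tool or standard:

```python
from dataclasses import dataclass, field

# A minimal sketch of a hierarchical digital twin: a system is composed
# of smaller models (enclosure, boards, components), and each node can
# carry models for several disciplines (SI, thermal, stackup, ...).
# All names and model references here are illustrative.

@dataclass
class TwinNode:
    name: str
    models: dict = field(default_factory=dict)   # discipline -> model reference
    children: list = field(default_factory=list)

    def disciplines(self):
        """Collect every discipline modeled anywhere in this subtree."""
        found = set(self.models)
        for child in self.children:
            found |= child.disciplines()
        return found

# Build a tiny system twin: one board carrying one component.
u1 = TwinNode("U1 (SoC)", models={"SI": "IBIS file", "thermal": "BCI-ROM"})
board = TwinNode("main_board", models={"stackup": "6-layer FR-4"}, children=[u1])
system = TwinNode("product", models={"enclosure": "STEP model"}, children=[board])

print(sorted(system.disciplines()))
# ['SI', 'enclosure', 'stackup', 'thermal']
```

The point of the structure is that a single component node can feed many different checks, which is exactly the multi-discipline requirement the column describes.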
Today, while there are more models for PCB components than any other part of the electronics system, they are still far from ideal: there are multiple formats (e.g., IBIS, VHDL-AMS, BCI-ROM); there are many variants, each targeted at a single type of analysis; and quality varies widely. Yet, there is hope. Device vendors realize the value of models to help sell their product, so some are introducing models when they launch a new device. And there are industry efforts to standardize on neutral formats to represent multi-discipline models. In the meantime, creation and/or validation of models is often left to each engineering team that wants to utilize them.
If digital twins (models) are the fuel, then the simulators that consume them are the engine. These engines typically perform a single function (e.g., SI analysis, or even more specifically, DDRx or SerDes analysis). Significant energy has been spent over the last two decades to improve the accuracy of these simulations so that they mirror what’s seen in the real world with the physical product. Of course, it’s a moving target, as new signaling protocols are introduced along with new devices, materials, and manufacturing processes. Since the results of these simulations are typically one-dimensional, it’s left to the engineer to manage trade-offs (e.g., between performance and manufacturability, or SI and thermal performance).
Today, the state of the art is multi-physics simulators (e.g., SI and PI, PI and thermal, thermal and vibration) coupled with AI-enabled technologies capable of identifying the ideal solution given a set of target operating parameters. Cloud-based compute farms promise scalability beyond today’s world where, even within one discipline like SI, only small portions of the system model can be analyzed at a time. Because there are often far too many manual steps, with opportunities for error and wasted time, there are efforts to streamline the digital thread from the core design database and authoring environment into these simulators. Yet we’re still a ways off from the holy grail: a single, multi-dimensional model of the product consumed by a single, multi-dimensional simulator capable of emulating all real-world conditions.
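The trade-off management left to the engineer can be illustrated with a toy Pareto filter over candidate design points, each scored on two disciplines at once. The candidate names, metrics, and numbers below are invented for illustration; in practice they would come from SI and thermal simulation results:

```python
# Toy sketch of the multi-discipline trade-off left to the engineer:
# keep only design candidates that are Pareto-optimal across two metrics.
# All candidate names and numbers are invented for illustration.

candidates = {
    # name: (SI eye margin in mV, higher is better; max temp in C, lower is better)
    "wide_traces":  (120, 95),
    "dense_route":  (140, 110),
    "extra_copper": (110, 80),
    "baseline":     (100, 100),
}

def dominates(a, b):
    """True if point a is at least as good as b on both metrics and better on one."""
    (si_a, t_a), (si_b, t_b) = a, b
    return si_a >= si_b and t_a <= t_b and (si_a, t_a) != (si_b, t_b)

pareto = sorted(name for name, m in candidates.items()
                if not any(dominates(other, m)
                           for o, other in candidates.items() if o != name))
print(pareto)  # ['dense_route', 'extra_copper', 'wide_traces']
```

The “baseline” point drops out because another candidate beats it on both metrics; the remaining three are genuine trade-offs, which is precisely where an AI-enabled optimizer, or an engineer, must pick among competing goods.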
I should note that, while many simulators have been validated against a physical prototype, the real target is the product in use in its end market. That environment doesn’t have test fixtures that artificially send signals, nor does it have mechanical jigs that hold a single board in a HALT chamber. Simulators can test corner cases on a digital twin—something that’s impractical with physical prototypes (even if you build thousands of boards, you can’t ensure that they represent every possible operating condition). And simulators offer the promise of automated multi-domain verification vs. the one-at-a-time manual testing processes for physical prototypes.
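Exhaustive corner coverage on a digital twin amounts to sweeping the model over every combination of operating conditions, something no finite set of physical boards can guarantee. A sketch, using an invented timing model, invented voltage/temperature corners, and an assumed 100 ps margin limit:

```python
import itertools

# Sketch of exhaustive corner-case coverage on a digital twin: sweep a
# toy timing model over every supply-voltage/temperature combination.
# The model, corner values, and 100 ps limit are invented for illustration.

voltages = [0.9, 1.0, 1.1]   # supply corners (V)
temps = [-40, 25, 85, 125]   # ambient corners (C)

def setup_margin_ps(v, t):
    """Toy model: margin shrinks at low voltage and high temperature."""
    return 150 + 200 * (v - 1.0) - 0.5 * t

failures = [(v, t) for v, t in itertools.product(voltages, temps)
            if setup_margin_ps(v, t) < 100]
print(f"{len(failures)} failing corner(s): {failures}")
```

Even this trivial two-variable sweep flags failures only at the low-voltage/high-temperature extremes, corners a small batch of lab prototypes might never happen to exercise.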
Over the past couple of decades, verification has been slowly shifting earlier (to the left) in the design process. It moved first from a physical prototype-only approach to complex, hard-to-use simulators applied at the end of the design process by domain specialists. The second shift has been from the end of design into the core authoring stage, enabled by tighter integration with the central digital twin of the design. While not replacing the specialists, this shift enabled design authors to make smarter decisions earlier, minimizing iterations with the specialists at the end of the design process. The third shift, currently in gestation, leverages AI to generatively synthesize designs that achieve optimal performance from a set of high-level product requirements. This generative technology has already been applied to mechanical design, enabling additive manufacturing of optimal structures. Lest I cause anyone to fear for their jobs, I should note that electronics systems represent a far more complex, multi-domain, multi-discipline problem than purely mechanical structures.
An Aberdeen survey asked about processes for design verification. It classified the use of simulation by design engineers early in the design process as best-in-class for the following reasons:
- It enabled product innovation and optimization: virtual prototypes and virtual testing reduced physical prototypes by 27%, allowing teams to explore hundreds of design iterations and to focus on the designs with the highest potential.
- It improved time to market. Best-in-class designers shortened their development time by 29%, six times the rate of improvement of all others. Best-in-class organizations also met their time-to-market targets 76% of the time, a 17% higher rate than all others.
- It reduced product costs; 71% of best-in-class designers met their product cost targets.
- It achieved higher product quality: 77% of best-in-class firms met their product quality targets. Best-in-class products were also more likely to work right the first time and less likely to require rework; best-in-class teams reduced ECOs after release to manufacturing by 21%.
While I’ve pointed out plenty of obstacles to complete digital verification with an optimal digital twin, there are still many steps an engineering team can take today to leverage existing technology and minimize physical prototypes. Organizations incorporating discrete simulators and checkers (e.g., for SI, PI, thermal, vibration, or manufacturability) are reporting first-pass success. Applying them earlier in the design process has improved engineering decision-making and minimized iterations with specialists. Fundamentally, leveraging a digital twin enables easier, faster verification, ensuring higher-quality products without costly, time-consuming respins.
1. “Accelerating development with pervasive simulation,” a white paper presented by Siemens.
This column originally appeared in the December 2021 issue of Design007 Magazine.