One of the most important steps when getting a product from the design stage to manufacturing is new product introduction (NPI). In this paper, I will discuss a more efficient flow to the NPI phase, identifying ways to streamline the NPI flow to improve both time to market and cost savings, especially concentrating on the Process Preparation tool. I will review a typical NPI flow that has been compiled through our experience of talking with many customers over many years. It will be unusual for any one customer to match the typical NPI flow, but most customers will have a number of areas where they see correlation to the typical NPI flow with their own organization and therefore have opportunities for improvement as part of overall best practice. I will then provide solutions to the challenges faced in the typical NPI flow and the overall benefit of these changes with the introduction of the best practice NPI flow.
Before we start looking at NPI challenges, let’s first review the high-level business objectives described by the Aberdeen Group (Figure 1), a market research firm specializing in the electronics market. They found three main drivers for new products:
- Improving time-to-market--being able to beat the competition to market with a new product.
- Reducing the cost of the product--enabling a more aggressive price that can drive market share, or a higher margin that contributes more to profit.
- Improving product quality--a means to reduce field failures, which are the most costly to resolve.
A top-quality design process is necessary for successful innovation of new products, but it is not sufficient on its own. The end product has to be manufactured, not only designed: on time (and faster than the competition), and at or below cost (and with lower total cost than the last project). Success is achieved when the product hits the market, not when an output from CAD is generated (Figure 2). The critical NPI stage we address here is the handoff between design and manufacturing, where the design company needs maximum flexibility from its manufacturing suppliers while keeping time delays and total cost to a minimum. As we evolve the typical NPI, we will conclude with the introduction of a best practice NPI flow that provides the framework to create products that can be produced more efficiently, at higher manufacturing yields, and with added manufacturing flexibility.
Figure 1: Data from The Aberdeen Group reveals the top three business objectives for electronics companies. (Source: The Aberdeen Group, 2010.)
Product Life Cycle
Looking at the entire product life cycle (Figure 3), we see that many decisions made early in the design phase ultimately affect the product during ramp-to-volume and volume production, through the entire life cycle. Having to make changes as we move through the product life cycle causes needless distraction to design engineers, who need to work on next-generation products but instead are interrupted with revisions to existing products that must be addressed.
Figure 2: Success is measured when the product hits the market, not when the design is completed.
Figure 3: The entire product life cycle. According to data from The Aberdeen Group, the average product development requires 2.8 revisions, resulting in three to five weeks of delay for each. (Source: The Aberdeen Group, 2006.)
These “call-backs” may be just questions, and not necessarily design change requests, but they interrupt the focus on new product design and return focus to designs that should have been closed out. These changes occur as we move from phase to phase in the overall product life cycle, whereas in a best practice paradigm they would have been addressed as part of the initial design. These changes can be seen in the 2.8 design spins that are needed, on average, for each revision of a PCB design. Each design spin results in an average three- to five-week delay, plus the associated financial impact.
Why Designers Should Care About NPI Flow
Before we review the typical NPI flow, let’s first consider why PCB designers should even care about this phase of introducing new products into manufacturing. There are a number of areas in both fabrication and assembly and test where activities take place that involve the design data. PCB designers should consider any activity that is related to the product data, in other words, the “what” of the product. This is opposed to the process of building the product, or the “how.”
Let’s consider some of these steps now. Fabricators commonly make changes to the design data that are not related to the process of fabricating the PCB. These issues should be highlighted as part of a complete design for manufacturing (DFM) analysis during the design phase and considered at that point in conjunction with the design performance targets of the product.
During assembly and test, a number of steps re-create the design data from various sources, which can result in different data being used to build the product. Given the different sources and different applications used to work with that data, it is not unusual for inconsistencies to arise across the different disciplines.
Panelization is typically performed in a number of places, including fabrication, assembly, and test. Even if these steps were all performed identically, there is wasted time and effort in performing the same task multiple times.
We should also consider the actual parts being placed on the board, in addition to the PCB itself. If the parts are being created as part of the design flow for complete DFM analysis, then we should leverage that data at multiple points in the subsequent assembly and test applications. This rarely happens because so many different applications are used to work with the data. As the number of libraries multiplies, the effort to create and manage those libraries increases substantially.
Typical NPI Flow
In Figure 4, we have mapped out a typical NPI flow. At a high level there are four main blocks:
1. The actual design environment where the main tasks consist of schematic capture and layout design.
2. Then there is a collaborative task for design review where the resultant layout undergoes a number of design rule checks for fabrication (DFF), assembly (DFA), and test (DFT) in addition to other layout review activities not shown.
3. Next is the actual fabrication process, which in the vast majority of cases is subcontracted to any one of the many board fabricators.
4. Finally, we have the last area for assembly and test, where the bare PCBs are assembled with their components and ultimately into the final product.
Figure 4: Here the entire, typical NPI process is mapped, showing all phases and groups involved.
This last step will happen either at an electronic manufacturing services (EMS) provider or at the original equipment manufacturer (OEM). Although there is a wide use of EMS companies, there is still a significant amount of in-house assembly and test that occurs.
Breaking this flow down further shows the first area that we would like to highlight: the creation of files from design for all downstream consumers of the product-related data. Most PCB design departments will create multiple files at this time to deliver the design intent and enable the fabrication, assembly, and test of the final product. Although not a complete list, the common set of files includes Gerber files (usually 274X, but also 274D with associated aperture files), drill files, and an IPC-356 netlist file. Then there are generic centroid files, usually for pick-and-place equipment; intelligent layout files that can be used for any number of assembly and test processes; and production documentation, even though at this point only cursory information on the target manufacturing line may be available.
Separate from the design files, a bill of materials (BOM) file is usually created from the ERP system that describes the as-built parts making up the assembly. The BOM will typically be merged a number of times in different areas of the process, which in itself can cause errors or inconsistencies in the resultant manufacturing data.
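To make the merge risk concrete, here is a minimal sketch of a BOM-versus-layout consistency check, the kind of mismatch repeated BOM merges can introduce. The field names (`ref_des`, `part_number`) and sample data are assumptions for illustration, not any ERP system's real schema:

```python
# Hypothetical sketch of a BOM-vs-layout consistency check. Field names
# (ref_des, part_number) and the sample data are illustrative assumptions.
def check_bom_against_layout(bom_rows, layout_refdes):
    """Return reference designators present in only one of the two sources."""
    bom_refs = {row["ref_des"] for row in bom_rows}
    return layout_refdes - bom_refs, bom_refs - layout_refdes

bom = [
    {"ref_des": "R1", "part_number": "PN-0001"},
    {"ref_des": "C1", "part_number": "PN-0002"},
]
layout = {"R1", "C1", "U1"}   # U1 placed in layout but missing from BOM

missing_from_bom, missing_from_layout = check_bom_against_layout(bom, layout)
print(missing_from_bom, missing_from_layout)
```

A real check would also compare part numbers and quantities per reference designator; this sketch only shows why a single merge point is easier to keep consistent than several.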
As you can see, many files are created as part of this process, and there are many consumers of this data, which is why, historically, these files have been created. Due to this history, though, some files are still created even though no one uses them today. It is not uncommon for this file creation process to take many hours to complete, even up to a whole day in some cases.
Reviewing the assembly and test step, we see a number of tasks that currently use one or more of the files that have been created. At this point there is a lot of duplication, with the same tasks being performed on different files depending on the use case. This leads to additional work and inconsistencies in the use of the data for each of the separate assembly and test processes. We also see a number of separate part libraries proliferating, depending on the actual equipment in use in the assembly and test environment. It is not unusual to find at least a couple of pick-and-place machine vendor libraries, some of them machine-specific, and at least one or two separate libraries used in the test and inspection areas.
So we immediately see a number of redundant processes, multiple data formats, and part libraries that all need to be created and managed on an ongoing basis. Let’s look into these areas in more detail and see what the steps are to achieve best practice.
The Path to Best Practice NPI (Step 1)
Figure 5 depicts Step 1 of our best practice NPI flow as a result of the concurrent DFM actions. Not only have we simplified the overall flow with concurrent DFM as part of the design process, we have also eliminated the fabrication analysis needed in the fabrication process, thanks to the comprehensive design analysis performed as part of concurrent DFM. Concurrent DFM analysis removes the need for a separate DFM step after layout has been completed, as well as the fabrication analysis step, where changes that impact design performance are commonly made to the layout.
Figure 5: Step 1 of the best practice NPI is illustrated.
Assembly Panel Design (The Old Way)
The typical NPI will normally include a flow such as that in Figure 6 for assembly panel design, where the design environment creates a single-up image and a contract manufacturer assembles the product. This data is delivered to assembly and test, who will determine how the design needs to be panelized for optimized assembly through manufacturing.
Manufacturers usually have some assembly panel guidelines that describe how to panelize a single up board. Depending on the maximum panel size and the actual board dimensions, they will determine how many images can be placed on a single panel. This information will be marked up as a drawing and then sent to the PCB fabricator.
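The core of that panel guideline calculation can be sketched as follows. The guideline parameters (panel size, border reserved for rails, image spacing) and their values are illustrative assumptions, not any manufacturer's actual numbers:

```python
# Hypothetical sketch of the manufacturer's panel-count calculation.
# Guideline values (panel size, border, image spacing) are assumptions.
def images_per_panel(panel_w, panel_h, board_w, board_h,
                     spacing=5.0, border=15.0):
    """How many single-up images fit on one assembly panel (no rotation)."""
    usable_w = panel_w - 2 * border      # panel edge reserved for rails/tooling
    usable_h = panel_h - 2 * border
    # n boards need n*board + (n-1)*spacing of usable width
    cols = int((usable_w + spacing) // (board_w + spacing))
    rows = int((usable_h + spacing) // (board_h + spacing))
    return max(cols, 0) * max(rows, 0)

# e.g., a 100 x 80 mm board on a 350 x 250 mm panel
print(images_per_panel(350, 250, 100, 80))
```

A real guideline would also consider board rotation, fiducials, and tooling-hole placement; this sketch only shows the basic fit calculation that gets marked up as a drawing.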
The PCB fabricator will then create the assembly panel from the drawing provided to them and return manufacturing data back to the EMS to confirm what was done. The assembly and test department will then review the information supplied to them before providing the approval to the PCB fabricator to proceed.
Figure 6: This illustrates the “old way,” which included a lot of redundancy.
The PCB fabricator will then manufacture the required quantity of assembly panels. Assembly and test will then create panelized data that is used to drive the remainder of the NPI process. As you can see from the diagram, there are two fundamental data passes that each take time to occur. These multiple passes each impact the overall time to market of the NPI process.
Assembly Panel Design (The New Way)
In the best practice NPI flow, the PCB design information uses the integrated NPI engineering platform to create panelized design data that is included with the design file. For captive manufacturing, the panelization step can even be integrated into the design flow to further improve the NPI timeline, as shown in Figure 7.
The output from this step is the actual assembly panel design manufacturing data that will be used by the PCB fabricator, as opposed to a marked-up drawing of the panel requirements. This enables the fabricator to directly create the final assembly panels. That same data file can also be delivered to assembly and test, meaning both groups use the same information source. This turns a slow, two-pass flow into an efficient, streamlined single data pass. By integrating the panelization into the design flow, the concurrent DFM process can also be used to check the final panel design and not just the individual design layout.
Figure 7: Here, we have illustrated the “new way” that should become Step 1 of the best practice NPI method.
The Path to Best Practice NPI (Step 2)
By consolidating the multiple panelization steps from fabrication as well as the assembly and test areas, significant simplification is achieved as part of Step 2 in our path to best practice, mapped in Figure 8. The output files now contain the complete panelized design information in a data set that can be passed on to both fabrication and manufacturing. There is now a significant reduction in the opportunity for panel data to get out of synchronization between fabrication and assembly processes compared to our typical NPI flow. Time to market is also improved, as the information flow is a single pass instead of the two passes needed in the traditional panelization flow.
Figure 8: Step 2 of the best practice NPI.
Assembly and Test DFM Problems
Most product designers allow more than one part vendor to be used as a substitute during assembly. Single-sourcing a component leaves no choice when that component is in short supply, impacting the ability to ship a product. Having at least one alternative for each part, where possible, lends more flexibility to part availability during the assembly process. However, although most parts have similar generic packages, different vendors will have different absolute physical dimensions.
Figure 9: The approved vendor list should have substitute parts so that no single-sourced part will hold up production should it not be available.
An approved vendor list (AVL) is used to specify which manufacturers' parts can be used as replacements on a design. These alternate parts are electrically equivalent to each other but can be packaged differently from one vendor to another. It is these physical differences that can lead to issues in assembly, which we want to highlight before the parts are used in manufacturing. The BOM file can typically list the alternate parts, as illustrated in Figure 9: three different manufacturer names, each with its manufacturer part number listed. This highlights the first issue that we need to resolve, namely which vendor package a specific part number is associated with. Rarely does a BOM file, or AVL, list the manufacturer package name, so this mapping is needed to correctly understand the package dimensions.
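As a simple illustration of that mapping problem, consider the sketch below. The internal part number, manufacturer part numbers (MPNs), and package names are all invented placeholders; a real parts library holds far more detail per package:

```python
# Hypothetical sketch: resolving AVL manufacturer part numbers (MPNs) to
# vendor-specific package names. All identifiers here are illustrative.
AVL = {
    # internal part number -> approved alternate MPNs
    "RES-10K-0603": ["MFR_A_10K_0603", "MFR_B_10K_0603", "MFR_C_10K_0603"],
}

MPN_TO_PACKAGE = {
    # MPN -> vendor-specific package name (assumed lookup table);
    # MFR_C's entry is deliberately missing to show the gap
    "MFR_A_10K_0603": "PKG_A_0603",
    "MFR_B_10K_0603": "PKG_B_0603",
}

def packages_for_part(internal_pn):
    """Resolve each approved MPN to its vendor package, flagging gaps."""
    return {mpn: MPN_TO_PACKAGE.get(mpn, "UNKNOWN")
            for mpn in AVL[internal_pn]}

print(packages_for_part("RES-10K-0603"))
```

Any MPN that resolves to "UNKNOWN" is exactly the case the article describes: an approved alternate whose physical dimensions cannot yet be verified against the land pattern.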
Approved Vendor List Analysis
To examine the issues with alternate parts in more detail, Figure 10 shows three similar, but not identical, components from three different manufacturers. These parts are electrically the same but have slightly different physical dimensions. In this specific case, the overall body dimensions are different, as are the pin contact areas. As the PCB uses a single land pattern for all three components, as well as a single stencil for paste deposition, we need to be careful that each unique combination will deliver the required yield targets during assembly.
Figure 10: Three similar parts, but with slight variance in physical dimensions.
Considering the pin contact areas of each package in the AVL allows the unique packages to be evaluated to determine if they will work with the design layout and not just the electrical properties and price information that are usually considered.
We clearly see in Figure 11 what happens when each package is superimposed on top of the copper land pattern. Components B and C should work correctly with respect to the copper, but component A will overlap the edge of the copper. There are two possible paths to take here: either enlarge the copper, which may impact the solder joint for components B and C, or remove component A from the AVL so that only two parts remain.
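The Figure 11 check reduces to a simple geometric comparison, sketched below. All dimensions (in mm) are invented for illustration and are not taken from any datasheet:

```python
# Minimal sketch of the Figure 11 check: does each vendor's pin contact
# area stay within the shared copper land pattern? All dimensions (mm)
# are invented for illustration, not taken from any datasheet.
def pin_fits_pad(pin_w, pin_l, pad_w, pad_l):
    """True if a centered pin contact area fits within the pad."""
    return pin_w <= pad_w and pin_l <= pad_l

PAD_W, PAD_L = 0.9, 1.1          # shared land pattern dimensions
VENDOR_PINS = {
    "A": (1.0, 1.0),             # too wide: overlaps the copper edge
    "B": (0.8, 1.0),
    "C": (0.85, 0.95),
}

for vendor, (w, l) in VENDOR_PINS.items():
    status = "fits" if pin_fits_pad(w, l, PAD_W, PAD_L) else "overlaps pad edge"
    print(vendor, status)
```

With these illustrative numbers, vendor A fails while B and C pass, mirroring the article's example; a production check would also evaluate pad overhang margins per IPC land pattern rules rather than a simple fit test.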
The Valor Parts Library (VPL) provides mapping from the manufacturer-specific part information to the corresponding package. This enables these types of checks to be performed quickly and easily as part of the design process, ensuring that all alternate components will provide optimum results regardless of which one is available during assembly.
Figure 11: Even though they may be equivalent electrically, these three components have subtle physical differences.
This same VPL data can also be used after the initial design has been completed, when additional alternates may be needed. Instead of selecting a part based only on nominal package size, cost, and electrical characteristics, vendor-specific package information can also be considered to confirm that yield targets are not impacted by the new alternate. In addition to checking the pin contact areas, the true vendor-specific body outlines can be analyzed to ensure that correct part separations are maintained regardless of specific part selection. Optimally, this part data is defined once but leveraged multiple times in the flow, reducing the number of part libraries that the typical NPI flow proliferates.
The Path to Best Practice NPI (Step 3)
With Step 3 of our best practice NPI flow, shown in Figure 12, we now provide the ability to manage alternate part selection as part of the AVL based on physical information about the package, not just electrical properties. This improves overall product reliability: parts that would otherwise have been selected are kept off the AVL, leaving those components that will provide the performance required for both the design and manufacturing of the product. The AVL information can also be passed into assembly and test through ODB++ to further streamline the process from design to manufacturing. Another challenge is the sheer number of files being created from design to support all areas of PCB manufacturing.
Figure 12: Step 3 is mapped in this figure.
Output File Creation
We have identified that the process of creating a design data pack to convey the necessary information to both assembly and fabrication is one where opportunities exist to achieve a best practice NPI flow. A recurring theme from designers is that this output file creation process takes a lot of their time, typically increasing with the number of variants that exist for each design. It is not unusual to hear that this step can take upwards of a day to complete for each design, due to the sheer volume of files and steps involved. When examined further, these multiple files are created largely for historical reasons: the package of files has always been delivered that way to manufacturing, and the fabrication and assembly people don't care if extra files are in the zip; they just ignore what they don't need. Traditionally, Gerber, drill, and IPC-356 netlist files have been used to fabricate the bare PCBs. However, for assembly and test purposes these files are less than ideal. They have to be significantly processed during assembly and test process engineering to be usable by the software applications used in these disciplines. So either a lot of effort is put into using these formats, or numerous other files are used instead. Regardless of the path taken during NPI, it is certainly not best practice, as we either have significant amounts of work to make the files usable or a lot of duplicate processes that create inconsistencies from process to process.
So the question then is: what is the best practice solution?
Data Output Automation
Shown in Figure 13 is a list of all the different formats that proliferate in the NPI environment. Not all of these are created in every NPI flow, but they give an idea of the breadth of file formats that abound in this area. Given the requirement for a single format that can deliver design data to fabrication as well as assembly and test, there is only one proven intelligent format in use today, and that is ODB++. ODB++ has proven itself for over 15 years as the preferred format for fabrication over unintelligent files such as Gerber, drill, and IPC-356 netlist. ODB++ also contains all the necessary data for assembly and test programming. With a single export from the design tool, all the necessary data is put together in one package that can then be delivered to both fabrication and assembly environments.
Figure 13: The ODB++ data format provides a single, intelligent output file, replacing all the obsolete data files shown.
The Path to Best Practice NPI (Step 4)
Step 4 of our Best Practice NPI flow shows the adoption of ODB++, where we have a single file format that delivers both fabrication ready and assembly ready data, as shown in Figure 14. This significantly reduces the time taken to create the design file package for the designer as well as delivering a single data set for use by fabrication, assembly and test. The latter significantly reduces the possibility of mistakes when multiple files are being used to drive different parts of the flow.
Furthermore the elimination of working with Gerber and its associated drill files in assembly and test reduces the overall time taken as well as eliminating accuracy issues inherent in this process.
Figure 14: This illustrates Step 4 of the best practice NPI, which includes incorporation of the ODB++ data transfer format.
We have also moved from the creation of multiple files as part of the design sign-off process to an efficient creation of a single format for all our manufacturing needs.
Simplified Data Preparation Solution
This leads us to one of the most common themes we see in the assembly and test area: the use of multiple applications to process the input files and BOM. This is typically done using vendor-specific software, as it was delivered with the machine and is perceived to be “free.” In reality, multiple people performing data preparation in different applications makes the process inefficient and prone to errors, both of which carry real costs.
Now that we hand off from design into manufacturing through a single data format, there is a further desire to streamline the processing of that data across assembly and test. Performing typical data preparation tasks in a single place reduces the chance of mistakes and requires fewer people to process the data set.
From the designer’s perspective, the reduction of files being handed off to manufacturing provides the vehicle for more efficient data preparation. A lot of what we hear from customers is that design creates the same files because no one complains, and manufacturing doesn’t question the files being sent to them because that’s what they always get. This creates a situation where nothing changes for the better.
The Path to Best Practice NPI (Step 5)
Step 5 of our path to best practice, shown in Figure 15, sees another significant change arising from consolidating the data preparation tasks of multiple applications into one, as can be seen from the updated NPI flow. Now we can leverage the ODB++ data and associated BOM in a single place and then create the application-specific formats needed for each of the subsequent tasks.
Figure 15: Step 5 of the best practice NPI flow is shown here.
Single Programming and Part Library
The next step in our progression to a best practice NPI flow covers the consolidation of all the part and package libraries used after design layout. These add up to a lot of additional effort to create and maintain what amounts to the same data in multiple different formats for similar purposes. Consider just a simple case of running pick-and-place vendor software and test programming. Each time a new part arrives, it has to be created and maintained in two places, by two people. Now consider when a couple of pick-and-place vendors are used, and both in-circuit test and flying probe testers are used for test. This is not an unusual situation, yet we now have four part and package libraries that need to be maintained for every new part that comes down from design. The data will usually cover physical information such as the size of the part, the number of pins, and the type and size of those pins. Then there are electrical properties that need to be considered, such as component type, value, and tolerance.
The requirement for our optimized NPI flow is to create the part once and then transform that data as needed for all subsequent uses. Having a common part library would also enable a common programming environment for SMT, test, inspection, documentation, and stencil design. Remember the Valor Parts Library mentioned earlier for physical part data? That library can be used as the basis for SMT machine libraries, test libraries, and inspection libraries, as well as for stencil design.
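The define-once, transform-many idea can be sketched as a single canonical part record exported to machine-specific library entries. The record fields and both output formats below are assumptions for illustration, not any vendor's actual library format:

```python
# Sketch of define-once, transform-many: one canonical part record
# exported to assumed machine-specific library formats. Field names and
# both output formats are illustrative, not any vendor's real format.
CANONICAL_PART = {
    "name": "CAP_0402_100N",
    "body_mm": (1.0, 0.5),     # physical data: body length x width
    "pins": 2,
    "type": "capacitor",       # electrical properties
    "value": "100nF",
    "tolerance": "10%",
}

def to_pick_and_place(rec):
    """Emit a physical-data line for an assumed pick-and-place library."""
    length, width = rec["body_mm"]
    return f"{rec['name']},{length},{width},{rec['pins']}"

def to_test_library(rec):
    """Emit an electrical-data line for an assumed test-programming library."""
    return f"{rec['name']};{rec['type']};{rec['value']};{rec['tolerance']}"

print(to_pick_and_place(CANONICAL_PART))
print(to_test_library(CANONICAL_PART))
```

The design point is that each new part is entered once; adding a third machine type means adding one more export function, not a fourth hand-maintained library.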
The Path to Best Practice NPI (Step 6)
The final step in our path to best practice NPI, shown in Figure 16, sees the multiple part libraries replaced with a single part library for assembly and test data. The ODB++ data is merged with the BOM file and part library information to become the basis for all processes in manufacturing: stencil design, multi-vendor SMT programming, test and inspection programming, and assembly documentation.
Figure 16: This figure illustrates Step 6, the final step in the best practice NPI flow.
We now see a clear description of the product NPI or “what” and the process NPI or “how.” Designers can take control of what the product is and manufacturing is given the freedom to determine how the product is made. This further allows the product to be manufactured in different facilities around the world as requirements dictate.
The Six Steps of Best Practice NPI
In summary, we see all the areas that have been optimized as part of our best practice NPI revolution. First was a concurrent DFM process that works in conjunction with the design process to both avoid issues in the first place as well as allowing other issues to be found earlier and hence to be fixed faster.
Next was the optimized panel creation flow, either through the design tools themselves or working in conjunction with a contract manufacturer to reduce the time and chance for error on assembly panels.
For AVL management, being able to consider manufacturer-specific package information as well as the electrical characteristics of the part allows better part selection. Parts that are not optimized for the design footprints in use should be removed from the AVL, resulting in improved solder joints during manufacturing.
Next was the change from the creation of many files to data output automation to be used for fabrication, assembly, and test through the adoption of ODB++, the only proven format to address this requirement.
Then we have the final two areas where we are able to adopt a single ODB++, BOM, Library merge process with a single assembly and test application as well as a single part and package library to support this application. The result of all these changes is reduced errors, consistent data and ultimately improved yields throughout the assembly and test process. The first four steps form our product NPI phase and the last two steps form the process NPI phase.
Figure 17: The six steps of best practice NPI.
For more information, visit www.mentor.com/valor.