The risk of reducing testing is tied to accurate, timely, and complete traceability build records.
Traceability has two faces. On one hand, it enables serious market defects to be quantified and analyzed, saving significant cost, business loss, and professional embarrassment. On the other, the process of collecting traceability data on the shop floor, to the standard required for those data to be useful, is often perceived as a burden on manufacturing, and certainly not as Lean.
This second view may be moot, however, because increasingly traceability data collection is automated. Software-based, “best-practice” production management tools include traceability data capture as part of their operation. As we identify more value that can be obtained from traceability, especially that which supports the application of Lean thinking within manufacturing, the perception of the balance of cost versus benefit starts to change. Let’s consider the example of testing, illustrating how the use of traceability as a Lean tool can cut the cost of testing by as much as half.
As a concept, the principle of manufacturing test is not Lean. The only reason it exists is to catch mistakes made in some form or another. The purely Lean approach would be to abolish test and focus instead on “doing it right the first time.” For some small and simple consumer electronics, such as USB flash drives, no tests are performed in manufacturing until just before final packing and shipping. The USB drive either works at that point and it is shipped, or it is scrapped. If the defect rate is small enough and the final test simple enough to be run inline, this can make real economic sense. For everyone else, however, testing remains a necessary evil because most electronic products are far more complex than can be confirmed by a single test at the end of manufacturing.
Even with the sum total of all the testing that takes place during the manufacture of a typical product, eliminating the risk of failure in the market is not guaranteed. Design-for-test (DfT) software can be used to analyze any PCB layout, and it almost invariably reveals tracks or nodes that may be untestable by in-circuit test (ICT), AOI or other related technologies. Complete functional testing of the many thousands of potential use-cases of the product is also impractical. And testing in the factory will not reveal other issues: for example, a solder connection that functions in the factory, so the product passes every test, but fails shortly after reaching the market, once stressed by temperature change and mechanical shock during normal use.
The emphasis for quality has tended toward the control of the manufacturing processes, even though a testing strategy and implementation in the factory will always remain vital and is often mandated. Depending on the product, however, the cost of performing the many tests in production on every product made can often represent a doubling of the cost of manufacture, excluding materials. Reducing the cost of test in the factory, without increasing any risk of defects in the market, could save a lot of money.
The Lean approach to reducing the cost and burden of test processes would be to eliminate testing where it is not necessary. This is easier said than done when considering complex products made up of many key components and subassemblies. The principle is quite simple, however. For example, let’s look at a production work order of 1,000 products. Partway through the work order, PCB “A” arrives at a certain test process and passes the test. PCB “B” is next to arrive. Instead of simply starting to test B, let’s ask why B needs to be tested at all. Logic would suggest that if B had been following A through all the production processes, and no changes whatsoever were made to any of the processes, then B would not have been manufactured any differently from A. Since A passed the test, why wouldn’t B? The same logic could then be extended to C and D and so forth.
Practically, though, we know from experience that at some stage a product will fail the test. Logically, there must be a cause, and logically again, that cause must relate to something that changed in or around the manufacturing processes. We would need to understand the complete production history of each individual product to find out what the change was: not just what materials were used, but also the status of each manufacturing process at the time the product was made and details of any event that happened around those processes. We then may be able to understand the difference in the way the two products were built that caused the test failure.
If a complete build record were available for each product up to the point the test is executed, analysis of the data could quantify any risk arising from the differences between the two products, which could then influence the decision about whether a test is required. The build record required would consist of the fully detailed material and process traceability data collected as part of the preceding operations.
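The comparison described above can be sketched in code. This is a minimal, hypothetical illustration: the record fields (material batches, process events) and the two-rule decision are assumptions for the sketch, not a real production-software API.

```python
# Sketch: decide whether a board needs testing by comparing its
# traceability build record with the last board that passed the test.
# Field names and rules are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class BuildRecord:
    """Simplified traceability build record for one PCB."""
    serial: str
    material_batches: dict = field(default_factory=dict)  # feeder position -> batch id
    process_events: list = field(default_factory=list)    # e.g. ["printer_clean"]

def needs_test(candidate: BuildRecord, last_passed: BuildRecord) -> bool:
    """Return True if anything relevant changed since the last passing board."""
    if candidate.material_batches != last_passed.material_batches:
        return True  # any reel/batch change is a test trigger
    if candidate.process_events:
        return True  # any exceptional event (cleaning cycle, error stop, ...)
    return False

a = BuildRecord("A-0001", {"SMT1-F12": "batch-77"})
b = BuildRecord("B-0002", {"SMT1-F12": "batch-77"})
print(needs_test(b, a))  # identical build record -> False, test can be skipped
```

In a real system the comparison would span many more dimensions, but the shape of the decision is the same: no relevant difference in the build record, no mandatory test.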
In the past, traceability data were predominantly collected manually, and record-keeping was usually fraught with errors, omissions, and delays. More recently, as computerized systems have become a standard part of the manufacturing process, a great deal of the required data is captured automatically. The complete traceability build record, however, is the product of many types of data captured from many different machines and processes, which must then be assembled into a single record.
Advanced manufacturing software solutions available today, linked directly to all shop-floor processes, can collect these data in real time. Such manufacturing software solutions combine production event data with materials, quality, and planning data, for example. The availability of this information can make automated decisions for testing very effective. Clearly, the more available and complete the information, the lower the risk, and the greater the opportunity to avoid needless testing while avoiding significant risk of additional defects.
Here are some examples of how the analysis of product-build records can be used:
Materials. If any material has been changed – for example, a reel replaced on the SMT machine when exhausted – then there should be a requirement to test. Where material verification was performed at the machine, the likelihood of a setup mistake with allowable replenishment material is virtually zero. Even so, the new material could come from a different batch from the same supplier, or carry the same internal part number but come from a different supplier with slightly different physical attributes. Setting up the new material on a feeder also brings risk: the feeder setup, or the condition and position of the first material pickup, could increase variation in placement positioning. In virtually all cases, then, a material change should trigger a test of the product. Materials considered as test triggers should also include the PCBs themselves, which often come in discrete labeled packets, each of which may come from a different production batch. Solder paste and other production consumables should also be considered.
Exceptions in the flow. We imagine production lines as continuous operations, but in reality many exceptional things can happen to disrupt the flow. The screen printer may go through a cleaning cycle. The SMT placement machine may stop for some error, such as a blocked output conveyor potentially as a result of a bottleneck in the reflow oven, a routine maintenance check, or simply a failed part pickup sequence. These exceptions should be a trigger to require subsequent test. Any symptom captured from a prior test or inspection, such as AOI, even if the result was ultimately not a defect, should also trigger the need for test, as well as any manual “touches,” such as those required, ironically, for post-setup quality assurance.
Drift. With all mechanical and automated systems, a certain amount of drift will accumulate over the lifecycle of the various mechanical components involved. Typically in the SMT world, we see increasing variation in x-y positioning and rotation of placements as nozzles wear or with feeder usage. This is not necessarily a fault of the machines but rather a fundamental issue of automation. The effect of the ongoing work performed by machines against the possibility of wear should be continually assessed. For example, rules could be defined on the assumption that measurable drift may accumulate over every 10 PCBs produced, so that no more than 10 consecutive PCBs skip the test process.
Manual operations. The drift from manual operations is far higher and much more difficult to assess than from automated operations. A manual operator putting a part such as a large electrolytic capacitor onto a PCB could easily create a number of defects with just one action. The part could be the wrong one if the operator were also putting similar-looking parts into different locations. The part could be reversed, a major issue for any polarized materials. There may be a bent lead, so the part looks as though it is inserted, but only one of the two pins actually made it into the hole correctly. Humans can easily make “one-off” defects seemingly at random through distraction as concentration inevitably lapses. Good industrial engineering practices can reduce the risk of these simple mistakes, for example by separating similar parts across different assembly operators and providing good instruction. Other mistakes are inevitable, however. One effective solution is to provide a level of “self-check” within the assembly line, where a second operator confirms the presence and, where necessary, the condition of materials assembled by a previous operator. This common industrial engineering practice can drastically reduce the incidence of random errors at manual processes. In general, however, it remains a challenge to reach a level of risk low enough to justify avoiding testing where manual processes are involved.
Subassemblies. Most subassemblies will have been through a test process before arriving at final assembly. The traceability build record for each subassembly should be queried for any exceptions in production, such as a test failure, even if the subassembly ultimately passed test. There is also the additional risk of damage to the subassembly during transportation between lines, as well as the potential effects of holding the subassembly as stock; the first subassembly unit taken from a box, for example, could trigger a test of the main assembly into which it is combined. Each of these issues, and others identified case by case, can represent a risk that triggers mandatory testing.
All these examples depend on the various conditions and events within the production operation before the point the test is to be performed on a product, the rules for which can vary enormously between different equipment sets, the stage in manufacturing of the test process under consideration, and different industrial engineering practices across industry segments and product profiles. With so many sources of variation, the decision to skip any testing may seem risky. Statistics have shown, however, that reducing the testing of individual products where such risk analysis has been performed has not significantly increased the number of defects found later in the production process, or more importantly, in the market. Clearly, the level of risk is proportional to the effectiveness of the analysis, which in turn is based on the availability of accurate, timely and complete traceability build records of products.
Lean test strategy decisions do not have to be limited to whether or not to test. Several test processes, including AOI and ICT, offer a choice of test levels. For example, a full test can be run where a major difference is detected, including for the first product tested in a work order; it may take a long time to execute but will be thorough and complete. Alternatively, a reduced test profile may be selected that executes much faster but covers only specific areas, such as risk areas related to manual assembly that cannot otherwise be determined.
The key is knowing the detailed traceability, knowing the materials, how they were placed, and the history of the process used to place them. A full traceability build record available in an automated and timely way can significantly reduce test requirements, reducing test-related costs by up to 50%. Applying Lean to test, then, provides the ability to remove those tests that are not required, supported by the availability of complete traceability information, without risk of increasing quality issues.
Michael Ford is marketing development manager, Mentor Graphics (mentor.com); michael_ford@mentor.com. His column runs bimonthly.