Tech Tips

The best, most practical design means limiting the size of the study.

In a departure from the usual range of electronics manufacturing topics, the EMPF would like readers to consider the importance of preparation in designed experiments (DOE) used to qualify a manufacturing process. The topic is relevant because the EMPF's projects, from materials R&D to manufacturing process optimization, routinely require appropriate experimental designs to obtain statistically significant data.

There are three important DOE rules of engagement:

Plan the experiment with a realistic goal in mind. One experiment may not reveal all the pertinent information needed to optimize the manufacturing process. Figure 1 reflects a process to build a PWB that may contain as many as 200 sub-steps. A full-blown experiment incorporating all of those processes at once is impractical. Breaking the individual areas into manageable experimental units is a more practical way to assess where the greatest variability occurs in the manufacturing process, as the run counts sketched below illustrate. The type of experiment used will also depend on the type of data required and the stage of development the manufacturing process has reached.

Fig. 1. PWB manufacturing process flow (as many as 200 sub-steps).
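To make the scale concrete, the short Python sketch below (illustrative only; the factor counts are hypothetical) shows how quickly a two-level full factorial grows with the number of factors, and why treating all 200 sub-steps as factors in one experiment is out of the question.

```python
# Illustrative only: run counts for a two-level full factorial in k factors.
# Treating each of the 200 sub-steps cited above as a single two-level
# factor would imply 2**200 runs -- far beyond any practical budget.
for k in (3, 5, 10, 20, 200):
    print(f"{k:>3} factors -> {2**k:,} runs")
```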

One possible approach is a screening experiment across a number of processes that are related or sequentially adjacent to each other. An optimization should then occur at each process to determine relationships properly, without introducing excess variability that would mask the true variability between processes. Finally, a nested ANOVA (analysis of variance) can be used to determine where variability occurs in the larger manufacturing flow. Whatever experimental methodology is used, many smaller experiments will yield more information and prevent the costly mistakes frequently encountered in one grand experiment.
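As a sketch of the screening idea, the Python snippet below constructs a two-level fractional factorial by hand. The factor names are hypothetical stand-ins, not a prescribed EMPF design, and the generator D = ABC is just one common choice.

```python
from itertools import product

# A minimal sketch of a two-level fractional factorial screening design.
# Factor names are hypothetical stand-ins for adjacent process settings.
base_factors = ["paste_type", "panel_finish", "oven_atmosphere"]

# Full factorial over the three base factors: 2**3 = 8 runs.
runs = [dict(zip(base_factors, levels)) for levels in product((-1, +1), repeat=3)]

# Fold a fourth factor in via the generator D = ABC, halving the 16 runs a
# full 2**4 design would need. The price is confounding: the fourth factor
# is aliased with the three-way interaction of the base factors.
for run in runs:
    run["reflow_peak"] = (run["paste_type"]
                          * run["panel_finish"]
                          * run["oven_atmosphere"])

for i, run in enumerate(runs, 1):
    print(i, run)
```

Eight runs thus screen four factors; a screening design like this flags which processes deserve their own follow-up optimization experiment.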

Define the constraining factors and block where appropriate. Experiments are expensive, and it is not always possible to run the experimental unit in a totally random fashion. A recent example occurred when cost and time precluded running a solder paste experiment with four different paste types, two different panel finishes, four atmospheric oven conditions, and two peak reflow temperatures in full randomization. Even with a fractional factorial design, randomizing each board was prohibitive. In cases such as these, and they are more common than not, it is advisable to incorporate a blocking scheme that can be used to monitor the effects of process sequence and bias on the experimental results. This may require more than one block, for example, noting the day the experimental runs were conducted. It could be that all runs with high peak reflow temperatures and a specific solder paste were processed on Day 1. If those blocks are not incorporated into the data, any effects from those conditions will be confounded and unreliable. It must be proven that on Day 1 there were no anomalies that would skew the results; for instance, that the oven temperature was recorded accurately and the solder paste was not mishandled or given any extraordinary treatment the other pastes would not receive.
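One way such a blocking scheme might be recorded, assuming day-of-processing is the block, is sketched below in plain Python; the run labels and day grouping are hypothetical.

```python
import random

# A minimal sketch of randomizing runs within blocks, assuming runs are
# grouped by processing day as described above. Run labels are hypothetical.
random.seed(7)  # fixed seed so the run sheet is reproducible

blocks = {
    "Day 1": ["run_01", "run_02", "run_03", "run_04"],
    "Day 2": ["run_05", "run_06", "run_07", "run_08"],
}

# Randomize order *within* each block; the block label travels with each
# run so day-to-day effects can later be separated from factor effects.
run_sheet = []
for day, day_runs in blocks.items():
    order = day_runs[:]
    random.shuffle(order)
    run_sheet.extend((day, r) for r in order)

for day, r in run_sheet:
    print(day, r)
```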

Replicate. It is better to replicate experimental runs than to expand the experimental factors and levels with no replication. Replicating runs generally increases the statistical significance of the data. A rule of thumb is to run 25% of the experimental runs as replicates: randomize the sequence of the experiment (this can be done effectively in Excel) and replicate the first 25% of the runs in the sequence. Note that replicates are a representation of the experimental unit, not siblings. A sibling is an experimental unit with the same factors run concurrently under the same conditions; a true replicate is run as a randomized, independent event with the same experimental levels and conditions.
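A minimal sketch of that 25% rule of thumb, using Python's random module in place of Excel (the run labels are hypothetical), might look like this:

```python
import random

# A minimal sketch of the 25% replication rule of thumb: randomize the
# run order, then replicate the first quarter of the sequence.
random.seed(42)  # fixed seed so the run sheet is reproducible

runs = [f"run_{i:02d}" for i in range(1, 17)]  # 16 planned runs
random.shuffle(runs)

n_reps = max(1, len(runs) // 4)   # 25% of the runs, at least one
replicates = runs[:n_reps]        # first 25% of the randomized order

# Replicates are appended as independent runs and the whole sheet is
# re-randomized, so each replicate is a separate, independent event
# rather than a back-to-back "sibling" of its original run.
run_sheet = runs + replicates
random.shuffle(run_sheet)
print(run_sheet)
```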

Experimental design can be a costly and inconclusive process if the proper precautions are not taken to ensure a statistically sound chance of detecting true variability or causation. When DOE is properly implemented, process improvements can be instituted that will decrease cost and improve product reliability.

The American Competitiveness Institute (aciusa.org) is a scientific research corporation dedicated to the advancement of electronics manufacturing processes and materials for the Department of Defense and industry. This column appears monthly.
