
Screen Printing

The effect of various print and reflow factors on broadband printing.

Last November, we introduced the concept of broadband (dynamic) printing, which in simple terms is solder paste deposition for components of multiple (and dramatically varied) sizes on a single board.

We recently finished a study that investigated the effect of various print and reflow factors. Step one focused on the printing process and included the following printing factors: stencil technology, stencil thickness and board finish. Step two focused on the effect of reflow environment, stencil technologies and pad finish on solder joint integrity. I’ll summarize the study and findings over the next couple of columns.

The test vehicle (TV) (Figure 1) was divided into four quadrants with the same pad layout in each quadrant. The top half of the board was a “step and repeat,” while the bottom half was the “mirror image” of the top half. The board layout was created to understand the interaction among pad orientation, pad location, and board and stencil stretch. Each quadrant incorporated a range of commercially available components and packages that included miniature (01005 and 0201 passives) and larger components (BGA-256, QFP-180, etc.).

Fig. 1

As the objective was to overprint the larger components to provide a higher volume of paste, the larger components’ aperture size was systematically increased from 100% to 160% (based on the component type) of the pad size. Table 1 shows the aperture layout for these components, and Figure 2 shows a typical schematic of the QFP-160 and TSOP-32 aperture size distribution.

Table 1

 

 Fig. 2

Printing DOE. A full factorial design with two factors, at two levels, was performed to optimize paste transfer efficiency. This DOE was blocked over two different stencil technologies to assess the effect of stencil technologies on large and small pad sizes. Table 2 shows each stencil’s actual measured thickness and taper.

Table 2

Factors used were:

Variable:

  • Stencil thickness – 0.003" and 0.004".
  • Board finish – OSP and ENIG.

Fixed:

  • Paste type – SAC305, Type IV.
  • Print speed – fixed for each stencil type.
  • Print pressure – fixed for each stencil type.
  • Separation method – fixed for each stencil type.

Blocked:

  • Stencil technology – Electroformed and stainless steel laser cut.

Table 3 shows the standard order design table. Before each DOE block was started, print parameters were optimized. Print optimization started with printing a board at the paste manufacturer’s recommended settings, followed by visual inspection to verify the board-stencil alignment. The offsets were then input into the printer, and the same board was cleaned and printed again. This process was repeated until the paste was completely deposited on the pads. The initial settings for print speed, print pressure and separation speed were then modified to obtain optimum print quality. The criteria for optimum print quality were deposit aperture fill, deposit shape, and a clean sweep of the stencil after every stroke. Figure 3 shows a typical acceptable print setup.

Table 3

Fig. 3
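For readers who want to reproduce the layout of such a standard-order design table, the minimal Python sketch below generates the blocked 2 x 2 full factorial described above. The factor names and levels come from the lists in this column; the run labels, ordering and three-replicate expansion are illustrative assumptions, not the study’s actual worksheet.

    # Minimal sketch of the blocked 2 x 2 full factorial layout described above.
    # Factor names and levels mirror the text; the run order, replicate expansion
    # and any within-block randomization used in the actual study are assumptions.
    from itertools import product

    stencil_technologies = ["Electroformed", "Laser-cut stainless steel"]  # blocking factor
    stencil_thicknesses = ['0.003"', '0.004"']                             # variable factor 1
    board_finishes = ["OSP", "ENIG"]                                       # variable factor 2
    replicates_per_cell = 3                                                # three boards per treatment combination

    design = []
    for block in stencil_technologies:  # each block is printed with one stencil
        for thickness, finish in product(stencil_thicknesses, board_finishes):
            for rep in range(1, replicates_per_cell + 1):
                design.append({"block": block,
                               "stencil_thickness": thickness,
                               "board_finish": finish,
                               "replicate": rep})

    # Standard-order listing; the real DOE would typically randomize runs within each block.
    for run_no, run in enumerate(design, start=1):
        print(run_no, run)

In practice the same table would be generated, and the run order randomized, directly in JMP, which was used for the data analysis in this study.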

The responses considered for this study were the transfer efficiency (TE), paste height, absolute paste volume and bridging. Transfer efficiency is defined as:
        
TE (%) = (measured paste volume / theoretical aperture volume) × 100
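As a worked illustration of this definition, the short Python sketch below computes TE for a single rectangular aperture. The aperture dimensions and measured volume are hypothetical, and any consistent unit system may be used.

    def transfer_efficiency(measured_volume, aperture_length, aperture_width, stencil_thickness):
        """Transfer efficiency (%) = measured paste volume / theoretical aperture volume x 100.

        All dimensions must share the same units (e.g., mils); the theoretical
        volume of a rectangular aperture is length x width x stencil thickness.
        """
        theoretical_volume = aperture_length * aperture_width * stencil_thickness
        return measured_volume / theoretical_volume * 100.0

    # Hypothetical example: a 20 x 20 mil aperture in a 4 mil thick stencil with
    # 1,280 cubic mils of measured paste gives TE = 80%.
    print(transfer_efficiency(1280.0, 20.0, 20.0, 4.0))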

A “repeat” noise strategy was adopted for this study to capture run-to-run variation. Three boards per treatment combination were printed with the “rear to front” squeegee direction only. This strategy was adopted to minimize noise effects of squeegee stroke direction on print quality. All data analysis was conducted using JMP statistical software.

Next time, we’ll discuss the findings.

Rita Mohanty, Ph.D., is director of advanced development at Speedline Technologies (speedlinetech.com); rmohanty@speedlinetech.com.


A new platform employed sophisticated statistical tools, eschewing traditional trial-and-error paste formulation.

No-clean solder pastes use rosins or resins to boost flux activity and to ensure electrical reliability. Rosins are derived from natural sources, such as trees or plants. Resins are either highly refined rosins or completely synthesized chemistries with no basis in nature whatsoever. Rosins present problems because they are naturally occurring substances, and they are subject to a great deal of variation in their makeup and performance. They can contain numerous chemical compounds in varying amounts, depending on their country of origin, area of the country, source plant type – even changes to growing conditions. On the other hand, resins, which are manufactured under controlled conditions, provide far more consistent performance. Synthetic resins have rapidly evolved over the past 10 years as a result of increased polymer research. They promised the potential to eliminate many paste performance variations assemblers have grown to accept, but their implementation required formulators to step away from existing knowledge and experience to embark on a new learning curve.

That’s where the Six Sigma methodologies came into play. Although these principles have been used in the industry for years, for the most part they have been applied to PWB fabrication or assembly processes – not to product development, and certainly not to soldering materials themselves. The recent development of a new SnPb solder paste, however, combined modern raw materials and the application of Six Sigma principles.
The project was a unique endeavor that combined the analytical skills of Black Belts in electronics design and assembly with the expertise of flux and paste formulators. Both parties combined statistical and scientific knowledge to find new ways to apply the proven methods.

Identifying Deficiencies

One of the first steps in defining the path of the project was identifying deficiencies in currently available SnPb solder pastes. These perceived deficiencies include:

  • Voiding in BGA components.
  • Inability to run fast print speeds, sometimes gating production throughput.
  • Reflow defects such as solder beading and balling, tombstoning, and wetting to oxidized copper pads.

These areas were targeted as major performance characteristics of the current solder paste formulation that required improvement, but experimental analysis was not solely limited to this group of outputs. A total of 27 performance characteristics were considered in the final analysis. The multiple inputs and multiple outputs associated with this project required a series of analyses to map the entire system. The primary Six Sigma tools used in the co-development effort included:

  • Houses of Quality, used to establish the Voice of the Customer and translate customer requirements into product design requirements.
  • Measurement Systems Evaluations, to define variability in the testing. Gage repeatability and reproducibility (GR&R) studies were used to verify certain partially subjective quantification methods such as solder bead count. Standard laboratory measurement techniques with GR&R and calibration records were not requalified.
  • Response Optimization, to determine performance of each raw ingredient. Analytical methods that included high-performance liquid chromatography and gas chromatography were used to identify cause-and-effect relationships between raw material components and paste performance. (This correlation process was the cornerstone of understanding the mechanisms at work, shortcutting and outperforming the traditional trial-and-error methods.)
  • Regression analysis and mixture DoEs, to determine optimum overall performance. Because the chemical compounds used in solder paste can sometimes have strong interactions with each other, mixture DoEs were preferred over traditional factorial designs to better characterize those interactions.
  • Balanced Scorecards and Desirability Curves, which allowed the research team to reduce over 20 outputs to a single factor or variable (desirability) to optimize overall performance.
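As a rough illustration of the desirability idea in the last item above, the Python sketch below maps individual responses onto 0-to-1 scores and combines them with a geometric mean, in the style of classical desirability functions. The response names, limits and the specific combining rule are assumptions for illustration, not the scoring system actually used in this project.

    import math

    def larger_is_better(y, y_min, y_max):
        """Map a response to a 0-1 desirability, assuming larger values are better."""
        if y <= y_min:
            return 0.0
        if y >= y_max:
            return 1.0
        return (y - y_min) / (y_max - y_min)

    def overall_desirability(desirabilities):
        """Combine individual 0-1 scores into a single number via the geometric mean."""
        if any(d == 0.0 for d in desirabilities):
            return 0.0  # any unacceptable property rejects the whole formulation
        return math.exp(sum(math.log(d) for d in desirabilities) / len(desirabilities))

    # Hypothetical normalized scores for three of the 27 characteristics
    # (e.g., voiding, print speed, wetting):
    scores = [larger_is_better(0.7, 0.0, 1.0),
              larger_is_better(0.9, 0.0, 1.0),
              larger_is_better(0.5, 0.0, 1.0)]
    print(overall_desirability(scores))  # ~0.68

The geometric mean is a common choice because a single unacceptable property drives the overall score toward zero rather than being averaged away.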

Results

Balancing solder paste properties can be a daunting task. Improving one property, such as voiding, can often impair other key properties, such as wetting. Overall optimization of performance is an intricate balancing act. The trial-and-error process has the potential to become an infinite loop, and eventually compromises must be struck to define a final product. Stacked bar graphs accelerate the process by helping to compile the overall scores for individual properties – the overall desirability – to aid the optimization process.

Fig. 1

Figure 1 shows a sample stacked bar graph comparing different formulations. This tool helps provide an overall score, but it is not the ultimate decision tool. The product development team must still be on guard for individual properties performing below minimum acceptable levels. In the ranking system that was applied, a perfect score would equal 28. Normalizing those numbers to a percentage basis, the new formulation showed a 42% improvement in overall performance over the previous formulation and a 21% improvement over a competitive material.
Voiding. Improvement in voiding properties was a key goal of the development process. The final product demonstrated significantly less voiding than the previous paste (Figure 2).

Fig. 2

Print speed. Another key area for desired performance improvement was the squeegee speed used in printing. The previous paste formulation demonstrated a maximum print speed of 2"/sec. (50 mm/sec.). Four inches/sec. (100 mm/sec.) was required of the new formula. Figure 3 shows three different types of paste prints at 100 mm/sec.

Fig. 3

Reflow performance. Reflow considerations included solder balls, solder beads, tombstones, and wetting to OSP. Tests for these properties were performed on the Benchmarker II test vehicle. Figure 4 shows the difference in reflow performance between the existing and new formulations. The new formulation reduced solder balls by 72% and solder beads by 67%. The study did not produce enough tombstones to provide statistically significant results. The previous formulation generated one tombstone defect, and the new formulation generated none.

Fig. 4

To gauge wetting performance, the test vehicle used four different patterns (Figure 5). Results from each test are scored. The more points that a paste formulation scored, the better it wetted.

Fig. 5

Figure 6 shows results of the wetting test. The use of modern ingredients to improve soldering is obvious in the results. The previous formulation only scored points on the simplest test pattern, and scored no points in the more challenging patterns. The new formulation substantially outperformed the previous formulation in all four test categories.

Fig. 6

The target characteristics defined by the customer included BGA voiding, print speed and reflow properties. In the cases of print speed and solder balls, actual values are reported; in the case of voiding and wetting, outputs were normalized by assigning points on a performance scale. Table 1 summarizes the outputs.
The original goals also included improvements to compatibility with direct pressure print heads, resistance to high humidity and resistance to hot slump. Performance in all major categories met or exceeded the original project’s goals.

Table 1

Conclusions

The project changed the way the customer and solder paste supplier worked with each other. In the past, the customer would conduct periodic benchmarking. A list of desired attributes would be provided to the supplier, which would then choose what they felt was the best candidate based on the list. Unfortunately, this meant the solder paste supplier would independently determine the tradeoffs prior to submitting candidates to the customer for testing. The customer would test the product to determine its overall acceptability, and very little technical dialogue regarding performance benefits or tradeoffs occurred. The best candidates for the job could have possibly gone untested and unrecognized in this scenario, because the true needs of the product were not communicated on a functional basis. Furthermore, natural variation could not be captured or understood by the customer because its benchmarking occurred at a single point in time.

A major benefit for the supplier was correct use of experimental techniques and the ability to understand the outputs of solder paste formulations as functions of their inputs. Prior to this project, formulators were more familiar with factorial designs of experiments, and had limited exposure to mixture designs. Only through the customer-supplier interaction were the real benefits of mixture designs appreciated and maximized.

Both the supplier and customer immediately began realizing the mutual benefits of understanding the Y = f(X) relationships spanning from the flux’s raw material stage through the assembly process and on to the final solder joint. Dialogue regarding potential production issues and their resolutions was catalyzed by the mutual understanding of how multiple inputs (Xs) affected multiple outputs (Ys). Furthermore, the supplier began applying the methodologies from the SnPb co-development effort to the next generation of Pb-free solder paste products.

Even an assembler that lacks the resources to support a full Design for Six Sigma co-development effort can realize cost savings by updating its process chemistries. Most of today’s no-clean SnPb solder pastes are based on formulations developed in the 1990s. A decade of advances in raw material technologies has opened new pathways to improved material stability, with direct results in greater process stability. Upgrading a process chemistry, such as solder paste, can reduce the costs of defects and rework, improve assembly line utilization, and reduce expensive false failures at test.

Bibliography

H. Sanftleben, L. Lewis, K. Stark, M. Young and M. Skrzat, “The Value of Joint Customer and Supplier Quality Function Deployment (QFD) and Design for Six Sigma (DFSS) Toolset Applications,” SAE World Congress & Exhibition, April 2008.

Richard Lathrop, “The Digital Solder Paste,” IPC Apex Proceedings, April 2009.

Derek Moyer is an applications and customer support engineer with Heraeus (heraeus.com); derrick.moyer@heraeus.com. Brian Bauer, Steve Ratner, Martin Lopez, John McMaster, Frank Murch and Mike Skrzat are with Heraeus.

 Focus on Business

Five great things an EMS program manager can do in a recession.

I’ve said repeatedly that program management is the most difficult job in electronics manufacturing services. Reason: Program managers are on the front lines of the fray and often have responsibility for both customer satisfaction and account profitability. When times are good, they must find resources to support customer upside demand. When times are bad, they often are blamed when accounts miss forecasts, slow pay, end up with obsolete inventory liability or disengage. In some cases, program managers legitimately carry a share of the blame for a poorly performing account. This month, we look at ways to make the best of a challenging business environment. If you are doing all these things, congratulations. If not, it is a great time to start.

1. Dust off the contracts. Ideally, the contract is a living document used to govern commitments from the customer and EMS provider. The best contracts provide strong frameworks for payment terms, inventory liability and reconciliation, termination liability, workmanship standards, product acceptance terms and dispute resolution. In reality, sometimes this useful tool sits in a drawer and is only pulled out at project termination. Customers take advantage of this in economic downturns by stretching their payables, pushing out inventory reconciliations, rejecting a higher quantity of goods and attempting to terminate on their preferred terms.
When a contract gathers dust, it will be harder to enforce because the customer has grown used to doing business on its own terms. However, enforced or not, a contract provides at least a basic framework to begin negotiations on changes in terms that are more mutually agreeable than those a customer is choosing to dictate on its own. PMs who choose to negotiate changes in terms, versus accepting customer behavior changes as inevitable, generally get better terms. The reality is customers who push the envelope dictating new terms find a certain percentage of their suppliers don’t push back. They then negotiate with those who do. Don’t be part of the crowd that assumes negotiation is not possible.

If you’ve never gotten a contract signed, these are times that illustrate why it really is worth the time and effort to negotiate contracts. Your current customers may be past the point where a contract is obtainable, but don’t continue the mistake with new customers.

2. Get everything in writing and organize it well. Most PMs understand the value of documenting key decisions in writing. In a business downturn, this discipline is more important than ever. While some customer representatives are ethical in both good times and bad, others can develop a survivor mentality that helps them justify bad behavior as acceptable if it helps achieve their employer’s goals. In short, the Golden Rule becomes, “If you are stupid enough to allow your company to be taken advantage of, it is your fault, not mine.” This behavior can be exacerbated when customer team members or their management change, because new team members may not feel bound by predecessors’ commitments. In the end, your documentation of events may be your company’s only defense in recovering monies the customer legitimately owes. Keep good notes on discussions in meetings and phone calls, and distribute a copy of key points, agreements and action items to all participants after the meeting or call. Organize this documentation by customer and work with IT to ensure it is appropriately backed up. Getting things in writing is meaningless if the notes can’t be found when needed as a reference. Remember the adage, “There are two kinds of people in the world: those who lose data and those who will lose data.” Be part of the group who has a backup and knows how to retrieve it.

3. Develop a roadmap for each account. Accounts seldom go bad all at once. Instead, they develop liabilities and bad behavior over time. When viewed over time, it can be difficult to recognize a vulnerable or failing account until it is too late to course correct. Account business plans help address that issue by providing benchmarks of account status. A good account plan lists relationships in the account, known competitors and their account share, short- and long-term goals for the account, performance metrics and trends, potential new opportunities and potential risks.

The value of having this information in writing and reviewing it quarterly is that it is very obvious when account dynamics start to change. For example, if the customer’s key team for the account is changing, it is a signal to start building relationships with new team members. If account revenue is slipping, it is a signal to determine the root cause. Back in my corporate EMS days, I once found business was moving to a dual-sourced competitor that my company was outperforming in quality, simply because the competitor was charging penalties if revenue dropped below forecast and my company was not. When we began charging as well, the business moved back because we were the better-performing supplier. Watching risk trends over time may also drive proactive resolutions that solve issues before they grow or age beyond the point that they are easily resolved.

4. Hold quarterly review meetings. Besides meeting your customer’s project status update goals, quarterly review meetings are a tool for building relationships and assessing account viability face-to-face. They also provide a venue to discuss new capabilities, explore larger business opportunities and discuss sensitive account issues with a larger team than you may be talking to on a day-to-day basis. Regularly scheduled and well-planned meetings ensure key issues are being formally discussed in a timely manner. The meeting also can be a timeline trigger for reaching agreements on issues that require higher levels of management involvement.

5. Update your résumé and your network. This last point is particularly true if you haven’t been good at items 1 to 4, but in this economy even great PMs can find themselves looking for a job unexpectedly. EMS companies fail, consolidate or have large layoffs driven by business loss. When that happens, the employees who can be hurt the most are those who were fully committed to the job and worked at a single employer for a long period of time, because they are often the least prepared to market themselves in a highly competitive job market. The biggest value of keeping an up-to-date résumé is that it makes it easy to track your most relevant accomplishments. Track record is ultimately what new employers will look for, and those who maintain current résumés usually have a good itemized list of results to share. It is also a good sanity check on your performance. If you don’t feel you have good accomplishments to list, perhaps that is a wakeup call to volunteer for a challenging project or become more proactive managing existing projects. While no one is indispensable, those who pull more than their weight tend to have more job security than those who simply fulfill minimum requirements.

The value of having a large network is that it can both keep you abreast of broader industry trends and provide options should you be unexpectedly unemployed. Signing up to LinkedIn the day after you get a pink slip and then contacting people you didn’t feel were worth keeping in touch with for years isn’t the way to get the best referrals into new opportunities. In this type of environment, résumés sent to competitors may simply sit in a large pile in HR. Referrals to a hiring manager are often a key step in moving from HR to an actual interview process, and when two equal candidates are considered, the one with references known to someone at the hiring company may have the edge. A good network broadens the pool of potential references.
This is the time when PMs can distinguish themselves. Be the standard by which others measure their performance.

Susan Mucha is president of Powell-Mucha Consulting Inc. (smucha@powell-muchaconsulting.com). Her book, Find It. Book It. Grow It. A Robust Process for Account Acquisition in Electronics Manufacturing Services, is available through barnesandnoble.com, amazon.com, IPC and SMTA.

Caveat Lector

Saying the electronics manufacturing services industry is now “out of the hypergrowth phase,” iSuppli (isuppli.com) in mid-June forecast the sector would grow an anemic 1.1% compounded annually between 2009 and 2013. The research firm’s forecast for 2009 is even more grim: a drop of 12.3%, down from a prior estimate of a 9.9% dip.

In presenting the forecast, Adam Pick, who heads the firm’s EMS/ODM research group, cited the uncertainties of the recession, government stimulus plan, looming mortgage resets, prime mortgage foreclosures, and continuing decisions by OEMs to pull assembly in-house. Hope that industrial OEMs would lead the next wave of outsourcing has also diminished. “We don’t see it,” was his grim assertion.

Pick also took issue with those who claim the industry has turned up. “First-quarter EMS sales shrunk 16%. The rate of deterioration slowed, causing many to suggest we have entered the trough. … [However,] guidance is still negative, and how could we hit the bottom if guidance is still negative?”

While I don’t question the data points, in my opinion there are at least a couple of things wrong with the conclusions. For starters, the emphasis is simply on revenue growth. As such, much of Pick’s data are tied to the performance of less than 10 publicly traded EMS companies. Extrapolating the health of an industry of at least 2000 firms worldwide based on the results of 10 is risky.

Two, the size of the market itself. iSuppli shows Foxconn as a $55 billion company. Yet keep in mind that the Taiwanese firm makes motherboards, bare boards, connectors and countless other products. By contrast, Circuits Assembly’s analysis of Foxconn’s actual EMS and ODM sales puts the number considerably lower: $42.3 billion. Then there’s the issue of Foxconn’s wild growth rates – which some say are not real – which have driven much of the broader sector’s ramp over the past 10 years. Hon Hai Group (Foxconn is its trade name) has done nothing to quell the craziness. Recall that in early 2008 the company forecast sales to increase 30% to $81.3 billion. Take away Foxconn and the entire scope of the market changes.

I would argue the more important – and long neglected – data point is profit growth. And that’s why I like what some of the former crew of Technology Forecasters is doing. Now under new management and renamed InForum (inforuminc.com), the new leaders’ expertise is supply chain management, and the group is looking to add the rest of the electronics supply chain to its OEM-EMS roots. As InForum president Kathleen Geraghty told us in June, all sizes of electronics companies’ supply chains have some global dimension to them. “We want to bring in some extended views and [place] a great emphasis on globalizing” the reports and data.

To be sure, some of the old hands are still on deck. Matt Chanoff remains the head of economics. TFI founder Pam Gordon is contributing from the environmental perspective. Charlie Barnhart and Jennifer Read are in the mix as well.

But when InForum looked at the demographics of its members, it realized the majority are OEMs and EMS firms. As InForum’s Eric Miscoll notes, “The way the forum always worked is, get the OEMs, and everyone else will be there.” But members sensed the group became EMS-centric. In response, Geraghty wants to increase the balance of component manufacturers. “We see them at the center of what we are able to do as an industry. The capacity and efficiency of the industry begins with that node. So if we are trying to tackle that issue and facilitate discussions that allow us to move forward, that segment needs to be a part, both from a technology and operations aspect.”

As an industry, our logistics operations have made real progress since the last downturn. On the latest AMR Research Top 25 Supply Chain list (amrresearch.com/content/view.aspx?compURI=tcm:7-43469), half – including Apple, Dell, Cisco, Nokia and IBM – come from high tech. And there’s no doubt the industry on the whole has learned its lesson when it comes to cash management and debt-to-asset ratios.

But there’s one other input to this equation, and not to pick on iSuppli too much, it is one that InForum seems to understand better. Even in the run-up from 2003 to the first half of ’08, EMS companies struggled with margins. In a downturn, it’s easy to say there’s excess capacity. But the EMS side needs to shed capacity now – and repress the urge to add it as things turn up – if it wants to make the relationships with OEMs a long-term, profitable undertaking.

And when the day is done, aren’t profits the most important metric?


Evidence shows tight specifications on the two response variables for the temperature profiles can do the trick.

An initial study was conducted to determine if just three thermocouples could verify a PCB’s profile during reflow. To do so, a thermal recorder with the capability to measure up to 20 channels was used to thoroughly map a test vehicle. Three of the 20 channels were used to record the temperatures on the leading edge of the PWB laminate, while the other 17 were distributed around the PCB in critical and noncritical locations. Two techniques were evaluated: “characterization” and “verification.”

In the characterization technique, a T/C is applied to a low-mass component (L), an intermediate/sensitive component (S) and a high-mass component (H). In the verification technique, the three T/Cs (L1-L3) are placed along the top, leading edge surface of the PCB. The characterization technique can be represented graphically (Figure 1). This figure is representative; component locations conceivably could be anywhere on the PCB. Similarly, Figure 2 demonstrates locations of the three T/Cs in a verification technique. Here, the locations – left, center and right – are fixed.

Fig. 1

 Fig. 2

The logic behind the placement/evaluation of these three locations is that the components with the smallest thermal mass – i.e., the most thermally sensitive components on the board – will not see profiles significantly different from these leading-edge locations.

Experimental

Thermocoupling the TV. The test vehicle (TV) is a desktop motherboard. The board measures 9.5" x 9.5". One goal of this project is to permit the profiling to be done in a quick fashion; a “lick-and-stick” approach was taken for connecting the T/Cs to the TV. The T/Cs, in other words, were attached solely via Kapton tape either on the surface of the board, or on or under components of interest. A more thorough R&D type of T/C attach – for example, drilling into a BGA sphere – was intentionally avoided to better reflect current practice in many line-side situations. Figure 3 shows a schematic of the 20 T/C locations; a photographic image of the TV appears in the Appendix.

Fig. 3

Because a major focus of this work is to determine if three T/Cs located along the top, leading edge of the PCB are sufficient for verifying product temperatures (profiles), the other 17 T/Cs available (via use of the 20-channel M.O.L.E. Temperature Recorder) were attached to various locations on the TV.

T/Cs used for “characterization” technique. Locations/components identified for the characterization technique were:

  • Location 18 – Identified as a component with low thermal mass (L).
  • Location 8 – Identified as a component with intermediate/sensitive thermal mass (S).
  • Location 19 – Identified as a component with large thermal mass (H). This location is arguably the most thermally sensitive on the assembly with the narrowest process window for acceptably reliable assembly.

T/Cs used for “verification” technique. The three T/Cs placed along the top, leading edge of the PCB are those identified for the “verification” technique as described (Figure 2). They are numbered locations 1, 2 and 3.

Reflow profiles. To perform this initial study, four different Pb-free reflow profiles were planned: a baseline profile and three other profiles that represent changes to the stability of the oven. A reflow oven with six heating zones and one cooling zone was used. A brief description of each of the profile settings is provided below; diagrams of each of the temperature profiles are in the literature.1

Profile 1: This profile is referred to as the baseline. It was selected as it provided a good Pb-free reflow profile for three critical components (4, 5 and 19). Locations 4, 5 and 19 are considered the most critical of the components, as they correspond to the largest thermal masses on the board: the corner joints of a large ball grid array and the processor socket joints. For this profile and for comparison purposes for the other profiles, the zone temperature settings (top zones and bottom zones set to be the same) are shown in the Appendix; the belt speed for Profile 1 is 30 cm/min.

Profile 2: This profile is identical to Profile 1 in zone temperature settings; belt speed is 35 cm/min.

Profile 3: This profile is identical to Profile 1 in zone temperature settings; belt speed is 40 cm/min.

Profile 4: This profile is identical to Profile 1 except for one significant exception. The reflow zone (6) temperature was set much lower than it was in the other profiles. It was set at 220˚C, whereas the reflow zone temperature was 260˚C for all other profiles.

The logic for changing the belt speed with regard to Profiles 2 and 3 was to determine if the three top leading edge T/Cs alone would be sufficient to depict a change in temperature recordings with regard to the response variables of peak temperature and time above liquidus (TAL). Profile 4 was an attempt to simulate a bad temperature zone without, for example, actually shutting down a fan and compromising the oven.
Procedure. The following steps were followed for each of the four profiles:

  1. Set machine parameters (zone settings and belt speed).
  2. Permit oven to stabilize.
  3. Set mole to record.
  4. Place TV on center of the conveyor belt and allow assembly to go through oven.
  5. Upon exiting oven, stop recording on the mole.
  6. Set TV aside to allow it to cool/stabilize – while keeping the oven running at the current profile. (This is done to mimic a production environment.)
  7. Disconnect mole from T/Cs.
  8. Connect mole to computer and download the temperature readings.
  9. After TV has cooled/stabilized for a one-hour period, reattach mole to the T/Cs.
  10. Repeat Steps 3 through 9 until a total of five replicates have been accumulated for the current profile.

Results

As two studies are conducted, one for characterization and one for verification, the results will be separated accordingly. Each study consists of a group of three T/Cs.

For each experiment/technique (characterization and verification), the data were analyzed first by looking at each individual T/C in the group, and second, by looking at the grouped behavior of the three T/Cs. In evaluating the profiles, the following two response variables are of interest:

  • Peak temperature.
  • Time above 217˚C; aka TAL.
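For reference, both responses can be extracted directly from a thermocouple trace, as sketched below in Python. The synthetic profile and one-second sampling are illustrative only; profiler software typically reports these values directly.

    import numpy as np

    def peak_and_tal(time_s, temp_c, liquidus_c=217.0):
        """Return (peak temperature, time above liquidus) for one thermocouple trace.

        time_s : sample times in seconds (monotonically increasing)
        temp_c : temperatures in degrees C, same length as time_s
        TAL is estimated by summing the sampling intervals during which the
        reading exceeds the liquidus temperature (217 C for SAC alloys).
        """
        time_s = np.asarray(time_s, dtype=float)
        temp_c = np.asarray(temp_c, dtype=float)
        peak = float(temp_c.max())
        above = temp_c[:-1] > liquidus_c              # state at the start of each interval
        tal = float(np.sum(np.diff(time_s)[above]))
        return peak, tal

    # Hypothetical trace sampled at 1 s intervals (bell-shaped synthetic profile):
    t = np.arange(0.0, 300.0, 1.0)
    profile = 25.0 + 215.0 * np.exp(-((t - 200.0) ** 2) / (2 * 40.0 ** 2))
    print(peak_and_tal(t, profile))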

In analyzing the response variables, a Oneway Analysis of Variance (Oneway ANOVA) was conducted. The purpose was to determine if reflow profile has an effect on the response variables. A Tukey-Kramer multiple comparisons test (MCT) was also conducted. The MCT was used to determine if differences in the performance (readings) of the T/Cs between the different profiles are significant. The MCT was conducted at an α = 5% value (i.e., providing 95% confidence).1
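The study’s analysis was performed in JMP; as a generic illustration of the same workflow, the Python sketch below runs a one-way ANOVA and a Tukey-style multiple comparison on hypothetical peak-temperature readings (five replicates per profile).

    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical peak-temperature readings for one T/C, five replicates per profile.
    data = pd.DataFrame({
        "profile": ["RP1"] * 5 + ["RP2"] * 5 + ["RP3"] * 5 + ["RP4"] * 5,
        "peak_c": [256.3, 255.9, 256.0, 256.4, 255.9,   # RP1 (baseline)
                   253.8, 253.5, 253.7, 253.4, 253.8,   # RP2
                   252.1, 251.9, 252.0, 252.2, 251.8,   # RP3
                   242.3, 242.1, 242.0, 242.4, 242.1],  # RP4
    })

    # One-way ANOVA: does reflow profile affect peak temperature for this T/C?
    groups = [g["peak_c"].values for _, g in data.groupby("profile")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    # Tukey-Kramer multiple comparisons at alpha = 0.05 (95% confidence)
    print(pairwise_tukeyhsd(data["peak_c"], data["profile"], alpha=0.05))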

Before the results are summarized, it is important the reader have a fundamental understanding of how the data were analyzed.

Consider the following discussion with regard to peak temperature and T/C 1 – one of the T/Cs used in the verification experiment. In comparing the baseline (Profile 1) to Profiles 2 and 3, which have the same zone temperature settings but increasingly quicker belt speeds, we would expect peak temperature to decrease from the baseline to Profile 2 to Profile 3. We should also expect that, in comparing the baseline to Profile 4 (where the belt speed is the same as the baseline, but the reflow zone (6) is significantly reduced), peak temperature will also decrease. Given this discussion, Figure 4 is a graph for T/C 1 (L1) and peak temperature.1

Fig. 4

Figure 4 indicates the mean values of peak temperature (designated by the horizontal lines in each diamond figure) have decreased, as expected, from baseline (256.1˚C) to all other profiles, respectively (253.64˚, 252.00˚, and 242.18˚C). The circles on the right-most portion of the figure indicate the significance of the differences across the different profiles. For example, the circle related to the baseline (Profile 1) is separate from the circle of Profile 2. Since they do not overlap, the Peak Temperature of Profile 1 for T/C 1 can be said to be significantly different from that of all the other profiles.

A similar understanding/analysis can be made with the TAL data appearing in Santos et al. (2008). We should expect the TAL, in moving to Profiles 2 and 3, as compared to the baseline, will decrease. Profile 4, however, is a little more interesting. As Profile 4 differs in only one zone (6), and because that setting is still high (220˚C) while the liquidus value is below it (217˚C), there may not be as large a difference in TAL when comparing the baseline to Profile 4 as when comparing the baseline to Profiles 2 and 3. All this is evidenced in Figure 5.

Fig. 5

Figure 5 indicates significant differences between the mean TAL values for the baseline, as compared to all other profiles. (The circle for the baseline does not intersect with any others.) The mean values of TAL for T/C 1 for Profiles 1-4, respectively, can be found in Santos et al.1 and are 142.12, 119.58, 101.46, and 124.26 sec.
Now that an understanding of how some of the data/graphs were analyzed has been established, the summary results of the two experiments are presented, beginning with the characterization experiment.

Characterization experiment summary results. The three characterization T/Cs are locations 18, 8 and 19. These represent a low thermal mass component (L), an intermediate/sensitive component (S) and a high thermal mass component (H).

Tables 1 and 2 in the Appendix present the mean peak temperature values and mean TAL values for each of the components in the characterization group. The values are presented for each of the reflow profiles (RP1-RP4). In addition, percent changes in moving from the baseline (RP1) to each of the other profiles (RP2, RP3, or RP4) are noted.

Table 1

Table 2

Verification experiment summary results. The three verification T/Cs are those numbered 1-3 and are the three located on the laminate across the top, leading edge of the TV. The white paper1 provides the following for each of these T/Cs: an ANOVA analysis for peak temperature and an ANOVA analysis for TAL. The white paper also provides an ANOVA analysis for peak temperature for the combined (1, 2 and 3) thermocouples, as well as an ANOVA analysis for TAL for the combined (1, 2 and 3) thermocouples.

Tables 3 and 4 in the Appendix present the mean peak temperature values and mean TAL values for each of the components in the verification group. The values are presented for each of the reflow profiles (RP1-RP4). In addition, percent changes in moving from the baseline (RP1) to each of the other profiles (RP2, RP3, or RP4) are noted.

 Table 3

Table 4

Conclusions

The two response variables of importance in this work are peak temperature and time above liquidus. Pb-free guidelines for these two variables are typically listed as:

  • Peak temperature: min. 235˚C, max. 260˚C.
  • TAL: 60-120 sec.
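A profile’s compliance with these windows is easy to check mechanically, as the Python sketch below shows. The guideline limits are those listed above; the example pairs the peak temperature and TAL reported earlier for T/C 1 under the baseline profile purely for illustration.

    PEAK_MIN_C, PEAK_MAX_C = 235.0, 260.0  # typical Pb-free peak temperature window
    TAL_MIN_S, TAL_MAX_S = 60.0, 120.0     # typical Pb-free time-above-liquidus window

    def within_pb_free_guidelines(peak_c, tal_s):
        """True when peak temperature and TAL both fall inside the guideline windows."""
        return (PEAK_MIN_C <= peak_c <= PEAK_MAX_C) and (TAL_MIN_S <= tal_s <= TAL_MAX_S)

    # Baseline T/C 1 means from the verification group: peak is in spec, but the
    # TAL exceeds 120 sec., so the nominal check fails even though peak is acceptable.
    print(within_pb_free_guidelines(256.1, 142.12))  # False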

In looking at the two groups (characterization and verification groups), Tables 5 and 6 in the Appendix of this paper provide the mean values of Peak Temperature and TAL.

Table 5

Table 6

Response variables and baseline profile. Concerning the baseline profile (RP1), we see that regardless of group, mean peak temperature does not exceed the 260˚C specification, as desired. For TAL, the characterization group does not exceed 120 sec., which is also desired. However, the verification group does exceed the 120 sec. threshold. We offer that this is not necessarily a bad situation. The reader should keep in mind the baseline profile was developed while considering the three locations of highest thermal mass (locations 4, 5, and 19). To get those three locations to temperature and for sustained (60-120 sec.) duration, it is not surprising that three thermocouples simply placed along the leading edge of the substrate have a TAL that exceeds 120 sec. Further, and to restate, the mean peak temperature does not exceed 260˚C in this group; nor does the mean peak temperature of any individual thermocouple in this group exceed this value (Table 3).

To further support that this is not necessarily a bad situation, consider one of the most thermally sensitive components on the TV as measured by T/C 18 (see Tables 1 and 2). T/C 18’s mean peak temperature is comfortably below 260˚C, and its TAL is only slightly above 120 sec.

Effect of changing from baseline profile on the response variables. Even a casual evaluation of Tables 5 and 6 reveals that when changing to Profiles 2 or 3 – where the belt speed is increasingly quickened – both the characterization group and the verification group see decreases in peak temperature and TAL. These results are expected. In changing to Profile 4 – which simulates a bad reflow zone – both the characterization and verification groups also see decreases in peak temperature and TAL. Again, these results are expected, but it is even more important that the data support these expectations.

This work represents but a subset of a 35+ page white paper1 that contains a wealth of additional statistical analysis, graphs, and tables. Readers are invited to contact the authors for a copy.

Interesting ending observation. In conducting this study and focusing on only three T/Cs, evidence emerged (from the performance of T/C 3 alone – Table 3) that the reflow oven used in this experiment may need to be serviced soon. An evaluation of T/Cs 14 and 15, which travel in roughly the same plane as T/C 3, also shows (though the data are not presented herein) peak temperatures that are not statistically separable between Profile 1 (baseline) and Profile 2, just as was observed for T/C 3.

Acknowledgments

We would like to extend our sincere appreciation to Unovis Solutions for allowing Larry Harvilchuck to participate in this study. Thanks also go out to Ashok Pachamuthu, graduate research assistant and student lab manager of Binghamton University’s surface mount assembly laboratories. The authors would also like to thank ECD Inc. for use of the 20-channel temperature profiler in this study. Finally, the authors would like to thank the Integrated Electronics Engineering Center (IEEC) and the S3IP Center at Binghamton University.

References

1. D.L. Santos, A. Ramasubramanian and L. Harvilchuck, “On the Use of 3 Thermocouples to Verify a Printed Circuit Board Profile During the Reflow Operation,” White Paper Technical Document, Department of Systems Science and Industrial Engineering, Binghamton University, Binghamton, NY, September 2008.

Ed: This article was originally published in the Proceedings of the 2009 SMTA Pan Pacific Microelectronics Symposium, with minor additions herein, including to the title, and is published with permission.

Dr. Daryl L. Santos is a professor in the Systems Science and Industrial Engineering (SSIE) Department at Binghamton (NY) University (ssie.binghamton.edu); santos@binghamton.edu. Arun Ramasubramanian was at Binghamton (NY) University. Laurence A. Harvilchuck is a process research engineer at Unovis Solutions (unovis-solutions.com).


The iNEMI Roadmap spells out the plan for lowest cost per watt.

Photovoltaic technology is poised to play an important role in the development of renewable energy sources. The largest volume application for photovoltaic (PV) cells is expected to be in power generation for commercial, industrial and domestic use. Although the past few years have seen increased deployment, fueled in part by skyrocketing oil prices, PV today accounts for less than 0.1% of power generation.

The major barrier to widespread adoption of PV cells is the cost of each cell compared to the energy it can generate, which is a function of material cost, manufacturing cost and cell efficiency. The good news is that – since this is not a mature technology – there is plenty of room for improvement of these factors for all the technologies currently used.

Those countries seeing the most significant growth in PV are the ones that have used preemptive initiatives to make solar power attractive to the consumer before its cost reaches parity with more conventional forms of power. Japan, for example, had a program dubbed “One Million Rooftops,” an initiative to have PV panels mounted on the roofs of one million Japanese homes. Once this goal was reached, however, the initiative was disbanded, and the Japanese PV market fell into a lull.

Today, Germany and Spain are hotbeds of activity. The programs in these countries are based on feed-in tariffs, under which those who install PV systems are permitted to return electricity to the grid and receive a guaranteed payment. This sort of initiative has met with far greater acceptance because the payback is more solid than what was offered in Japan.

Most of the attention on today’s PV development is focused on the conversion technology – the semiconductor that actually produces the electricity. The efficiency of the final PV panel is a function of the technology used. This portion of the system accounts for about half the expense and is the part of the supply chain where innovation is most likely to occur. The other half (or more) of the cost goes to assembly of PV cells into modules (10 to 15%) and installation costs (35 to 40%), which are predominantly labor.

Two main semiconductor technologies are currently used for direct solar conversion to electricity:

  • Crystalline silicon – monocrystalline and polycrystalline.
  • Thin films – amorphous silicon (aSi or a-Si), cadmium telluride (CdTe) and copper indium gallium selenide (CIGS).

In addition, promising work is being done with organic materials and dye-sensitized cells, but these technologies are still in development. It is unclear at this time which technology (or technologies) will take leadership of the market in the future, and it is possible that we will see new technologies displace all of these technologies over the next 15 years. The market is too young to tell.

Cell Efficiency

Perhaps the most important issue to be addressed for PV cells is their efficiency. The higher the cell’s efficiency, the more cost-effective it becomes, since more power can be generated from a given area of active material. Significant research is underway to improve the efficiency of PV cells (Figure 1). Multijunction cells (the purple line) are clearly the most efficient. These devices use multiple P/N junctions (positively and negatively doped semiconductors) to generate electricity from different wavelengths of light (Figure 2). In these cells, secondary junctions scavenge energy that passes through the first junction, usually taking advantage of the spectral sensitivities of two complementary solar cell technologies. Production costs for multijunction cells are currently high, but it is reasonable to expect them to fall over time.

 Fig. 2

Below multijunction cells fall the more conventional silicon technologies: monocrystalline, showing the highest efficiency, followed by multicrystalline. The efficiency of monocrystalline silicon cells can be increased through the use of concentrators (reflectors or lenses), shown by the dotted line.

Even lower in efficiency are the thin-film technologies (green lines), which promise to be far less costly to produce than any technology manufactured on an expensive purified silicon substrate. Should the lower cost offset the loss in efficiency, thin film PV cells would likely be used for cost-sensitive products, with silicon-based technologies reserved for niche markets where higher cost per watt is acceptable in light of space limitations.

The red lines denote research on organic and dye-sensitized cells. Since these technologies are expected to harness volume-printing techniques, they are likely to be the lowest cost to produce, possibly resulting in a very low cost per watt. It will be a number of years before we can clearly see whether this technology can compete favorably against its inorganic competition.

Today, higher efficiency comes at a higher cost. For example, high-efficiency multijunction technology costs more than monocrystalline, which, in turn, is more expensive than polycrystalline. Even less expensive than polycrystalline are thin films, and organic cells are the least expensive (but also least efficient). Given these tradeoffs between efficiency and cost, cost per watt is the most rational way to compare the technologies. Ongoing development for all of these technologies is focusing on materials and processes that will reduce the costs and/or improve efficiencies.
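The cost-per-watt comparison reduces to simple arithmetic, sketched below in Python. The module prices, areas and efficiencies are hypothetical, and the rated power assumes the standard test-condition irradiance of 1000 W per square meter.

    def cost_per_watt(module_cost_usd, area_m2, efficiency):
        """Cost per peak watt = module cost / (area x 1000 W/m^2 x efficiency)."""
        peak_watts = area_m2 * 1000.0 * efficiency
        return module_cost_usd / peak_watts

    # Hypothetical comparison: a 15%-efficient crystalline module versus a cheaper
    # 9%-efficient thin-film module of the same 1.5 m^2 area.
    print(cost_per_watt(500.0, 1.5, 0.15))  # ~2.22 $/W
    print(cost_per_watt(300.0, 1.5, 0.09))  # ~2.22 $/W

In this made-up example the cheaper, less efficient module lands at the same cost per watt as the crystalline one, which is exactly the tradeoff the metric is meant to expose.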

Silicon. An estimated 90% of solar cells in production today are made of silicon. Silicon has the advantage of being better understood than other materials, and volumes are significantly higher. The process used for crystalline PV cells is a subset of that used for standard semiconductor processing, so manufacturing efficiencies are high.
Monocrystalline wafers are manufactured via the same crystal-pulling technique used to make wafers for standard semiconductor processes. This approach is somewhat more expensive than the molding approach used for polycrystalline wafers. However, polycrystalline wafers have lower peak efficiency; thus there is a cost/efficiency tradeoff between these two approaches.

Thin films. A recent silicon shortage drove development of thinner wafers and amorphous silicon cells. It also helped encourage the development of alternative thin film materials, which are attractive for two reasons:
  • They can be manufactured in unusual shapes (curves), on flexible substrates, and by continuous rather than batch processes (roll-to-roll).
  • They reduce the industry’s dependence on bulk silicon.

Thin film cells consist of active semiconductor material that has been deposited on a passive substrate – most commonly, glass, stainless steel or plastic. This differs from the approach used with crystalline silicon cells in which an active region is formed into a substrate of bulk semiconductor material. The amount of active material can be significantly reduced if it is not used as a substrate to provide mechanical support.

The leading thin-film PV technologies include amorphous silicon (aSi or a-Si), cadmium telluride (CdTe) and copper indium gallium diselenide (CIGS). To date, CdTe and CIGS exhibit the highest efficiencies of any single-junction thin film materials.

Organic PV devices. Countless alternative PV technologies are being researched. One that holds promise as a favorable alternative to existing technologies is the organic PV cell, a new thin film technology. This is a cell whose active element is an organic material, as opposed to the inorganic materials used in silicon, CIGS, and CdTe cells.

Organic cells are attractive for three reasons:

  • They can be manufactured by low-cost screen-printing processes or even inkjet printers.
  • They can be printed onto flexible substrates, permitting use of very inexpensive materials and simplified handling.
  • They can be used to make a lightweight power source for portable products.

Organic photovoltaics had a slow start since early experiments showed efficiencies below 0.1%. The use of nanostructured material cells led to more efficient charge separation, and efficiencies are currently in the 3% to 5% range. Work in this area is still primarily research-based at universities and institutes and a few pioneering startup companies.

Applications. Applications range from grid-connected to completely distributed electricity generation. Grid-connected could be residential (on rooftops of homes) to commercial (rooftops of retail stores and office buildings) to utility scale (public utilities using PV).

Different technologies have better suitability to different latitudes and climates. For example, concentrated solar is best in direct sunlight, close to the equator, using tracking technology. Thin films (silicon or others) are better suited for northern latitudes and somewhat cloudy climates. Single crystal silicon does well in mid latitudes.
A new class of applications is emerging – building-integrated photovoltaics (BIPV) – for which flexible substrates are well suited. With BIPV, special photovoltaic materials replace conventional building materials in parts of a building – such as the roof, skylights or facade – as an alternative to traditional PV modules mounted above the roof on racks (Figures 3 and 4). These applications are expected to grow significantly in the next several years. NanoMarkets projects the market for BIPV will exceed $4 billion in revenues by 2013 and surpass $8 billion in 2015.1

Fig. 3

Fig. 4

Long-Term Challenges

Photovoltaics have been a low-volume technology over their history, but we anticipate significant changes over the next 15 years, as the cumulative volume of solar modules increases significantly. This somewhat clouds the picture, making it difficult to predict what kinds of challenges we may face in the next decade and beyond.
We expect some of the more mundane issues to be worked out – issues like materials shortages and the quantity of material required to manufacture a PV cell. These issues are typically related to commodity-like behavior and to supply/demand behaviors. Several other portions of the system will be fine-tuned to provide more efficient operation; for example, inverters and other control electronics will be incrementally improved.

It is not clear whether today’s standard silicon, CdTe and CIGS cells will yield to other technologies, or which of these three existing technologies might take the lead. Efficiency will not be the key measure of acceptance. Cost per watt will determine which technology wins and which others lose. As manufacturing volume increases, we will be able to discern which of these technologies proves to have a clear cost/watt advantage.

Unlike semiconductors, there appear to be no technical barriers to the acceptance of new PV technologies. Photovoltaics will be governed more by economics than by any performance specifications, and the economics of these devices depend on little more than manufacturing cost and efficiency.

References

1. NanoMarkets, “Building Integrated Photovoltaics Markets: 2008,” July 2008.

Alain Harrus is a partner with Crosslink Capital (crosslinkcapital.com) and co-chair of the Photovoltaics chapter of the 2009 iNEMI Roadmap; aharrus@crosslinkcapital.com. Jim Handy is a director at Objective Analysis (objective-analysis.com) and co-chair of the Photovoltaics chapter.
