
Caveat Lector

Many industry business leaders are a bit shaken over the idea of a Democrat in the White House, and the notion that President Obama may lead broad and deep cuts to the country’s Defense Department budget is unsavory to the new president’s advocates within our industry, and downright unsettling to the most cynical.

For some time, military electronics has made up roughly 15% of the national defense budget. The US defense budget last year was around $715 billion – coincidentally in the neighborhood of the just-signed (and somewhat dubiously named) Recovery Bill ($787 billion). (For the record, I’m using figures from the National Defense Budget Estimates for Fiscal Year 2008 Greenbook, supplemented with additional US outlays for the Global War on Terror. The URL is defenselink.mil/comptroller/defbudget/fy2008/fy2008_greenbook.pdf.) That’s a pretty impressive neighborhood. Moreover, military spending has, to no small degree, rescued many US-based OEMs and assemblers from the telecom debacle of 2001-02.

Back in my standards-writing days, the task groups for soldering and workmanship were filled with DoD primes and subcontractors. (To a large degree, they still are.) The rooms reverberated with complaints aimed with Tomahawk missile-like precision at the government procurement agencies over how onerous they were supposedly making the processes and related documentation. The rare consumer OEM who would wander in would inevitably get scared off, and a not-insignificant number of companies determined the (lower) margins and concurrent paperwork that went with supplying the Pentagon were hardly worth the trouble, especially when the computer and telecom markets were exploding in growth.

But as an old friend of mine who at the time was in a sensitive procurement role would confide, those who stuck it out did so because supplying to the government meant two things: cash flow even in bad economic times, and the promise of future business.

Fast-forward to 2009, and company after company is aggressively seeking ITAR registration and going after military contracts like US Rangers with Osama Bin Laden in their sights. For many executives, Defense programs are the difference between running a high-tech company and managing a McDonald’s. So it is understandable that even to those least skeptical of the Obama administration, the prospect of the military buffet line suddenly being yanked away is jarring.

But I would offer this: At a minimum, we’ve learned President Obama is a student of history. Much of his presidential campaign and post-election strategy has echoes of great visionaries who preceded him, like Lincoln and Kennedy. Although Obama is tied for obvious historical reasons to the former, insofar as our industry is concerned, the latter may be more instructive. For it was Kennedy who recognized that the Soviet Union represented a clear and present danger to the US, and that future battles would be fought not necessarily with boots on the ground but in the skies, and even in space. He impressed on the nation the idea that putting a man on the moon would be the single feat that could catapult America into the lead in the technology race.

But clearly the race was never about planting a US flag on the moon. It was about conceiving and building the instruments needed to ensure military superiority. To do that, Kennedy first had to inspire a somewhat trepid citizenry toward the pursuit of engineering and basic science research. He did so by wisely making space – not Europe – the next battleground.

You need not be a student of history to know we still revel in the advancements of that time. One of the best-selling items during the past holiday season was the Garmin GPS. That novel idea dates to the 1950s, when a Raytheon scientist proposed it for a missile guidance system. And it’s hard to board a plane without seeing an array of passengers outfitted with noise-canceling headphones, another NASA development. I have confidence, given President Obama’s previous nods to history and his penchant for listening to his advisors, that he will recognize these facts.

This time, the financial disaster facing many companies in electronics is not of our making. Nevertheless, like Major Kong’s long trip to Earth on the atomic bomb in Dr. Strangelove, we are in for a bumpy ride. It says here, the new president won’t make it worse – let alone irreparable – by cutting the very funding that makes our nation secure and our technology the most advanced in the history of the world.

The Defects Database

Lead contamination can affect reflow profiles.

Secondary reflow of Pb-free solder joints is not an uncommon defect found in manufacture. X-ray inspection of a lifted J-lead (Figure 1) was part of an investigation to pinpoint the possible cause of open joints. In this case, a known amount (9 to 10%) of lead was present on the lead plating in these Pb-free joints.

Fig. 1

Board temperature control during wave soldering is very important to avoid secondary reflow on the topside of the board. This can occur when a board passes over the solder wave and heat is transferred by conduction through the bulk of the board. This issue can be eliminated through correct topside profiling, which in turn mandates use of a process control tool to monitor changes in contact time and temperature.

It is understood some companies have experienced problems over the past couple years with secondary reflow of Pb-free joints during wave soldering. This has led to separation of the termination from the bulk of the joint. In the past, the same phenomenon has occurred with traditional SnPb alloys when the top surface of the board reached or exceeded 180°C. NPL research has examined the level of lead contamination and its impact on reliability. It has been demonstrated that small amounts of lead will lower the reflow temperature of Pb-free alloys (217°-221°C), leading to secondary reflow at SnPb reflow temperatures (183°-184°C).

It is important to check components for lead contamination to reduce the possibility of secondary reflow due to a low temperature phase and to meet RoHS requirements. Technical articles on this subject are available to SMTA members on its website (smta.org).

These are typical defects shown in the National Physical Laboratory’s interactive assembly and soldering defects database. The database (defectsdatabase.npl.co.uk), available to all Circuits Assembly readers, allows engineers to search and view countless defects and solutions, or to submit defects online.

Dr. Davide Di Maio is with the National Physical Laboratory Industry and Innovation division (npl.co.uk); defectsdatabase@npl.co.uk.

Global Sourcing

In outsourcing, Envy – or imitation – is neither fun nor useful.

The fourth of the Seven Deadly Sins is one of our favorites. In fact, it was the inspiration for this series of columns. Envy is unique in that it is the only sin that lacks a pleasurable angle. It is in no way fun – just mean-spirited and sad. Envy can be summed up by this quip from H.L. Mencken: “Happiness is making $10/hour more than your brother-in-law.” When you rejoice in another’s misery, you are at the bottom of the barrel.

Envy comes from the Latin Invidia. In outsourcing terms, we call it “imitation.” In the outsourcing relationship, we see the effects of Envy in a corporate culture that fails to innovate, and falls into the trap of imitating competitors rather than taking risks with truly new products and services.

While many consultants, ourselves included, encourage the right kind of benchmarking, when Envy is in charge of an organization, the concept can be self-defeating. Why only study successful companies? Success is time- and market-sensitive. It is by definition subjective because things change with time, and studying a company at one point in time is akin to viewing a snapshot, not the whole picture. For a while, Dell was a benchmarker’s dream because of a business model that seemed to enable it to spin gold out of straw: Dell invoiced customers before paying suppliers. Its supply chain was the answer. For years, every business conference included a speaker from Dell so the rest of the industry could “benchmark” its success. But maybe the company was just in the right place at the right time. It has since fallen out of favor. Studying something as subjective as success is a type of philosophy, not business analysis.

There is more to be learned from studying failures, and more that can be applied to future business. Failures are the result of fundamental errors. We should now all be carefully studying Bernie Madoff and his Ponzi scheme. How could all those smart people have been so stupid? Shouldn’t they have known it wasn’t real, that no investment manager can deliver those returns? What exactly did he say to make people believe the unbelievable? When you study failure, you can find the root cause: the objective mistakes that brought the enterprise down. Why is GM teetering on complete failure? It is possible to retrace the history of the organization and see in hindsight what went wrong. Sometimes failure is the result of bad decision-making, misguided strategy, greed, incompetence, bad luck, and/or bad timing. It is usually a lot more obvious in retrospect. There is really not a lot of ambiguity about failure: It is objective and actionable. It is much more difficult to attribute specific causes to successful strategies.

Envy is the engine of the marketing world: “Don’t hate me because I’m beautiful” (but really, I’m fine if you do). When the electronics industry outsourced manufacturing, it was to focus on core competencies of marketing and product development. But true innovation is a lot riskier, and now it seems that in many electronics organizations, marketing is king.

Take, for example, the iPhone. Every cellphone introduced in the past few years is an iPhone knockoff. Is it really the perfect cellphone? Some people don’t like it. In many arenas of electronics (arguably the greatest innovation of all time), products quickly become commodities because of the flood of knockoffs. True innovation, and the necessary accountability that goes along with it, is rare.

Let’s do some math. Fortune 500 companies generate about $26 trillion in annual revenue, if you add it all up. If they spend 5 to 6% on R&D, that’s an investment of roughly $1.3 trillion to $1.6 trillion a year. Over 10 years, that’s $13 trillion to $16 trillion. Forget the stimulus package – corporations have been spending trillions on research and development for a long time. What is the result? The innovation curve of the first few decades of electronics was steeper than today’s, especially in hardware. Is outsourcing the cause? Is it more difficult to develop innovative electronics products when manufacturing is divorced from design? Watching films like The Right Stuff and Apollo 13 reminds us of the courage and engineering prowess of the early space pioneers, who left our planet in spacecraft that had the computing power of a calculator. Today, we seem focused on advertising innovation: inspiring commercials for endless consumer knockoffs fueled by Envy.

Who said necessity is the mother of invention? That’s the gift from this financial meltdown: the chance to start over again. We can perhaps rethink what is necessary and focus our innovation engine on leveraging the power of the electronics revolution for good. We won’t think only about what we could sell, but develop innovative products and services that address important challenges like energy and the environment. Even in the consumer space, let’s get serious. New ringtones are not innovation. Why must we have a cellphone and a Bluetooth earpiece? We should have one device that fits comfortably in the ear, or on a pair of sunglasses. How about a practical home robot?

The good news is, if you accept the premise that you can learn more from failure than success, it’s a great time to be a student! Much of what we believed to be true has been proven wrong. So it’s time to forget about benchmarking. It’s time to stop copying and start innovating. Rather than follow the lemmings to sub-Saharan Africa in pursuit of the lowest labor cost, remember there are better ways.

Jennifer Read is cofounder of Charlie Barnhart and Associates (charliebarnhart.com); jennifer@charliebarnhart.com.

Pb-Free Lessons

Excess copper dissolution remains a big problem for assemblers.

No one, save for a few equipment suppliers, likes rework, but it’s an ugly fact of life. At best, it is performed properly and costs only time and money. At worst, it results in a product that leaves the factory with a latent defect destined to fail in service. In between, all kinds of things can go wrong that result in more extensive repairs or even scrap.

There has always been an inherent risk to rework. Lifted pads, ripped barrels, torn traces – they all come with the territory. Pb-free ratchets up that risk a few notches, especially with plated through-hole components. Pb-free PTH rework provides numerous opportunities to ship product with hidden defects caused by excess copper dissolution. We’ve seen plenty of evidence that a PTH solder fillet can, from the outside, be visually acceptable, but on the inside lack the knee to connect the barrel to the annular ring. We’ve seen thin spots in barrel plating consumed by solder, permitting laminate to outgas and create giant voids inside joints that have good-looking top- and bottom-side fillets. And we’ve seen 0.005" traces reduced 60 to 96% by 30 sec. of exposure to flowing Pb-free solders. As an industry, we don’t really know the full ramifications of these hidden defects, and I don’t think we’re going to find out anytime soon.

With the potential issues that can arise from Pb-free PTH rework, we should ask why we perform the risky operation in the first place. If a component is defective or a pin bent, then the process clearly is unavoidable. But if it is performed because of insufficient hole fill during the primary attach process, we need to revisit the basis for the rework. A missing topside solder fillet is not a condition that should necessitate repair. Even IPC’s Class 3 workmanship standards require only 75% hole fill on signal connections and 50% on ground plane connections. So why do so many assemblers rework PTH devices that lack topside fillets? Because QA wants to see them. For 50 years, we have validated SnPb wave processes by looking at topside solder fillets. Intellectually, most assemblers know they are not necessary for performance, but emotionally, they want to see them. This is a classic example of the road to hell being paved with good intentions: The person who requests topside solder fillets thinks they will make the joint stronger and more reliable, not realizing the joint can actually wind up weaker and less reliable if too much copper is dissolved in the process.

I don’t think we should stop at simply reconsidering our stipulations for topside solder fillets. I believe we should rethink hole fill specifications in general. If a 50-pin connector has 49 pins that meet workmanship standards and one that falls a little short, do we risk all 50 joints for the sake of that one? Which scenario renders the connector less reliable: one barrel of 50 that is short on solder fill, or 50 barrels that have all endured rework? This is where the decision-making process can become a bit imprecise. To avoid ambiguity on the shop floor, quality metrics need to be based on specific, measurable quantities that have no room for interpretation. But in the case of hole fill – especially on PWBs 0.093" or thicker – strict enforcement of rigid metrics has the potential to do more harm than good. Perhaps a better direction for production personnel is to seek the advice of engineering or quality professionals, or relegate the assembly in question to the material review team for disposition.

When determining the rework candidacy of an assembly, a number of factors should be considered. They include the end-product, its performance requirements, the degree of the shortfall and some specifics of the rework process to which it will be subjected. The process itself is a critical factor in the decision, as mini-wave soldering often can be much harsher on the PWB than the original primary attachment process. Consider the number of solder contact cycles associated with a mini-wave repair: The first cycle melts the existing solder (often without preheat) so the connector can be removed; the second cycle re-melts residual solder in the barrels so the holes can be cleared; the third one re-solders the connector (again often without benefit of preheat), and subsequent cycles may be applied to remove bridges. The cumulative exposure times add up quickly. They can exceed 60 sec. on a minimally challenging design and 120 sec. on a demanding one. Compare that with the typical 3-9 sec. contact time on a wave-solder machine and it’s no wonder the copper is disappearing.
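The cumulative-exposure arithmetic above can be tallied in a quick back-of-the-envelope script. The per-cycle contact times below are illustrative assumptions (the column gives only the 60 to 120 sec. totals), not measured values:

```python
# Back-of-the-envelope tally of cumulative solder contact during a
# mini-wave PTH repair vs. a single pass over a wave-solder machine.
# Per-cycle times are illustrative assumptions, not measured values.

miniwave_cycles_sec = {
    "melt existing solder, remove component": 25,
    "re-melt residual solder, clear barrels": 20,
    "re-solder replacement component": 25,
    "touch-up to remove bridges": 15,
}

rework_total = sum(miniwave_cycles_sec.values())  # 85 sec. for this example
wave_pass = 9  # upper end of the typical 3-9 sec. wave contact time

print(f"mini-wave rework total: {rework_total} sec.")
print(f"ratio vs. one wave pass: {rework_total / wave_pass:.1f}x")
```

Even with these modest per-cycle estimates, the copper sees roughly an order of magnitude more flowing-solder exposure in rework than in the original wave pass.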

Numerous factors influence erosion rates; they include alloy type, solder temperature and flow rate, dwell time, and the quality of the copper on the PWB. The copper’s plating process has been identified as a considerable factor in dissolution rates. Its influence has only recently been quantified, and the reasons why are not yet fully understood; but even if they were, they would likely be out of the assembler’s control anyway. Probably the single most influential factor the assembler can control is the dwell time on the wave. There are a couple of ways to effectively do that:

Use preheat. Although preheaters are available for miniwaves, many processes still don’t use them. Instead, they rely on the flowing solder as the sole heat source for the process. That practice might have been okay in the world of SnPb, but not so with Pb-free solders. Heating a board to liquidus temperature from room temperature requires three times more solder exposure than heating it from a preconditioned 300°F/150°C. The entire time the flowing solder is transferring the heat, it’s simultaneously washing away copper from the bottom of the PWB. Three times as much solder contact for at least two cycles: the removal and the reattachment. That’s an awful lot of extra dwell time the copper might not be able to tolerate.

Perform component removal and barrel clearing with hot air. This process, popularized by Bob Farrell at Benchmark Electronics and Greg Morose of the University of Massachusetts at Lowell, exposes the PWB to flowing solder only when it is actually needed – for the purpose of attaching the component.1 The process artfully avoids unnecessary dwell time and therefore limits risk of excessive copper loss. It uses a BGA rework system to melt solder in the barrels with forced convection. When the solder is fully liquefied, the component is removed manually. To clear the barrels, the board is preheated and the barrels vacuumed by the automatic site redressing system on the same BGA rework station. This process has shown great success, even on large components like 240-pin DIMM connectors. Farrell provides two caveats when employing this method: Always preheat the entire PWB assembly, and use only automated site redressing equipment, if possible. Similar to BGA redressing concerns, handheld solder vacuum pens can do a great deal of damage to a PWB’s surface if they fall into the wrong hands.

Limiting the dwell time is probably the easiest way to combat copper erosion during rework, but not the only way. Alloy selection can have a profound effect on erosion performance. Research indicates alloys with lower (or no) silver content and/or small amounts of nickel dissolve the PWB copper more slowly than SAC 305 or 405, which were selected as SnPb replacements based primarily on SMT considerations. Due to the many wave solder-related issues identified with the 305/405 family, like copper dissolution, shrinkage tears, pot corrosion and material cost, alternative alloy options for wave soldering continue to proliferate, and at a relatively rapid rate. None of these newer alloys boasts as much reliability data as SAC 305/405 systems, and early products could not meet the SAC 305 benchmark for debridging and hole fill, but that’s not necessarily the case anymore. Some recent market entrants show a lot of promise in rework and primary attach processes, and the overall performance of alternative alloys will get better as researchers continue to improve them.

Other tactics to mitigate copper erosion include optimizing the solder’s temperature and flow dynamics. These methods probably will have less impact on erosion rates than limiting dwell times or switching alloys, but in a process that has the potential to ruin an assembly in 30 sec. or less, every little bit of optimization helps. It’s scary to think that while we are fixing one defect we might be creating another, and it’s even scarier when we consider the new problems are often undetectable. Some industry leaders would like to see development of a standard test that can check a copper’s resistance to dissolution, similar to the solder float test but adapted to address the current issues. It would provide assemblers with some assurance the PWB could withstand a certain minimum level of exposure.

Unfortunately, we can’t avoid the need for rework altogether, but we can rethink our criteria and perform it only when it is more likely to help a product’s performance than hurt it. We still have a lot to learn about the new material sets and manufacturing processes. What we do know is that when it comes to PTH rework, our best defense against excess copper dissolution is limiting the board’s exposure to the flowing solder. In short, we need to fix the defect, not dwell on it.

References

1. R. Farrell and G. Morose, “Assembly Rework and Lead-Free Impact,” CALCE Symposium on Part Reprocessing, Tin Whisker Mitigation, and Assembly Rework, November 2008.

Chrys Shea has 20 years’ experience in electronics manufacturing and is founder of Shea Engineering; chrys@sheaengineering.com.

Process Doctor

Contamination can artificially decrease readings.

Albert Einstein wrote, “To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science.”

Let’s consider the great physicist’s observation in the context of cleaning agent bath measuring techniques. Prior to the introduction of modern aqueous cleaning technologies in the 1990s, users mostly were limited to titration. The pH value of a product is one important variable, but by itself it does not reveal the true nature and state of the cleaning bath in question. The refractive index measuring technique was introduced in North America in the late 1990s, as “modern” aqueous products started to gain traction. It allowed users to combine pH measurements with refractive index measurements and assess the level of organic and alkaline ingredients in their product. Companies even started to introduce automated concentration monitoring systems based on the refractive index measurement. But let’s take a moment to recount the limitations of the refractive index measurement.

Per Wikipedia:

The refractive index of a medium is a measure of how much the speed of light (or other waves such as sound waves) is reduced inside the medium. For example, a typical soda-lime glass has a refractive index of 1.5, which means that inside the glass, light travels at 1/1.5 = 0.67 times the speed of light in a vacuum. Two common properties of glass and other transparent materials are directly related to their refractive index. First, light rays change direction when they cross the interface from air to the material, an effect used in lenses. Second, light partially reflects from surfaces that have a refractive index different from that of their surroundings.

The refractive index, n, of a medium is defined as the ratio of the phase velocity, c, of a wave phenomenon such as light or sound in a reference medium to the phase velocity, vp, in the medium itself (Figure 1).

Fig. 1

Since the refractive index is a fundamental physical property of a substance, it is often used to identify a particular substance, to confirm its purity or to measure its concentration. In our industry, the refractive index is used to measure the concentration of a solute in an aqueous solution. A refractometer is the manual instrument used to measure the refractive index. When talking about a solution of sugar, the refractive index can be used to determine the sugar content.
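In practice, a refractometer reading is converted to a concentration through a calibration curve measured on the clean cleaning agent. Here is a minimal sketch of that conversion; the calibration points are invented for illustration (real curves come from the chemistry supplier):

```python
# Sketch: converting a refractive index reading to a concentration by
# interpolating against a calibration curve for the clean cleaning agent.
# The calibration points below are made up for illustration.

from bisect import bisect_left

# (refractive index, concentration %) pairs for the uncontaminated agent
calibration = [(1.3330, 0.0), (1.3360, 5.0), (1.3390, 10.0),
               (1.3420, 15.0), (1.3450, 20.0)]

def concentration_from_index(n: float) -> float:
    """Linear interpolation on the calibration curve."""
    xs = [x for x, _ in calibration]
    i = min(max(bisect_left(xs, n), 1), len(xs) - 1)
    (x0, y0), (x1, y1) = calibration[i - 1], calibration[i]
    return y0 + (y1 - y0) * (n - x0) / (x1 - x0)

print(concentration_from_index(1.3405))  # midway between the 10% and 15% points
```

The weakness the column describes is exactly here: the curve is valid only for the clean agent, so any flux carried into the bath shifts the measured index off the curve.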

But what happens if the sugar solution were to become contaminated with flux residues? A recent study discovered that every cleaning agent used in electronics is affected by the contamination (i.e., flux) it removes. And I mean all of them! In extreme cases, the perceived concentration deviates from the actual concentration by more than 15%! For example, the operator reads a 12% concentration, which seems fairly close to the specified bath concentration of 14%, and adds concentrated cleaning solution to make up the difference. The operator must be certain the cleaning process is in full compliance, as most Western companies produce highly reliable products. They cannot afford any inaccuracy at this crucial point of the production process: The boards either are about to be shipped to the customer or will be undergoing a conformal coating step.

Remember, the refractive index reading is prone to deviation. Because the operator was not using a freshly prepared cleaning agent, what is thought to be a 12% concentration may actually be a 5% one. The cleaning agent is then not working at full strength, possibly resulting in board failure due to contamination-induced electrochemical migration. And there is no way of knowing, unless the operator sends the contaminated sample to the cleaning agent manufacturer for more elaborate analytical tests, such as gas chromatography (GC). A GC is much too expensive for regular bath maintenance purposes, so alternative methods are indeed required. The contamination also can decrease the operator’s reading artificially: The actual concentration is 20%, but the contaminated solution reads 14%. The result is a significantly higher concentration than intended and higher chemistry costs. Each contaminant added to the cleaning agent affects the refractive index differently, and cumulatively these effects degrade the accuracy of the bath concentration reading.
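The first scenario (reading 12%, actual 5%) can be worked through as simple dosing arithmetic. The bath volume, the 100% concentrate strength and the additive-volume mass balance below are all assumptions for illustration:

```python
# Sketch of the dosing error: the operator doses to close the *perceived*
# gap, but the bath started far below the reading. Bath volume and
# concentrate strength (100%) are assumptions for illustration.

def topped_up_concentration(actual_pct, reading_pct, target_pct, bath_l=100.0):
    """Bath concentration after dosing to move the reading to target_pct."""
    # Concentrate needed so (perceived solute + add)/(bath + add) = target:
    add_l = bath_l * (target_pct - reading_pct) / (100.0 - target_pct)
    solute_l = bath_l * actual_pct / 100.0 + add_l  # true solute after dosing
    return 100.0 * solute_l / (bath_l + add_l)

# Reads 12%, target 14%, but the bath is really at 5%:
result = topped_up_concentration(actual_pct=5, reading_pct=12, target_pct=14)
print(round(result, 1))  # still far short of the 14% spec

# Reads 14% ("on target") while the bath is really at 20%: nothing is
# added, and the bath silently runs rich, wasting chemistry.
```

Under these assumptions the top-up leaves the bath at only about 7%, half the specified strength, even though the refractometer now reads on target.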

We are glad to report that this issue now has been resolved, and new products are once again eliminating problems that once seemed impossible to overcome. Let’s marvel at science and how it allows all of us to become more efficient at what we do.

Harald Wack, Ph.D., is president of Zestron (zestron.com); h.wack@zestronusa.com.

Reflow Soldering

The SnCuNi alloy (SCN) has been used in wave soldering applications because it achieves acceptable soldering results and reacts more slowly with copper (used in PCBs and components) and iron (used as a base material in wave soldering equipment). The lack of precious metals like silver makes it less expensive, and the cosmetic appearance of the final joints is similar to that of SnPb joints; but these benefits are offset by its higher melting point of 227°C in reflow applications. The higher melting point may necessitate reflow profiles with higher peak temperatures and/or longer time above liquidus (TAL) than those of SAC 305 to obtain complete and homogeneous mixing of the paste deposits with the component lead/bump. This raises concerns about possible damage to heat-sensitive components and about joint reliability.

A study was carried out to develop reflow processes for SCN solder paste using SAC 305- and SCN-bumped BGA-CSP components. Assembly characterization was performed using cross-sectional analysis, vibration testing and thermal cycling. The objective was to characterize the performance of pure SCN joints and compare them with pure SAC 305 solder joints and mixed SCN paste/SAC 305 sphere solder joints. This was accomplished by designing reflow soldering profiles that matched the peak temperatures and TAL (above 217°C) of profiles optimized for typical SAC 305 assemblies.

The test vehicle was a 0.062" thick, four-layer FR-4 PWB with Cu-OSP surface finish and non-solder mask-defined pads. Each board was populated with sixteen 256-I/O BGA-CSP components. The design of experiments included different peak temperatures (238°C and 248°C) and TAL (50 and 75 sec.). The corresponding TAL above 227°C was 30 and 50 sec., respectively. Levels for each factor were based on the current SAC 305 process window and recommendations from the SCN solder paste supplier. All boards were reflowed in air, and a small batch of pure SCN joints was reflowed in nitrogen (<100 ppm).
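For readers reproducing this kind of profiling work, TAL is simply the time a thermocouple trace spends above the alloy’s liquidus. A minimal sketch follows; the 1 Hz sampling and the synthetic trace are assumptions for illustration, not the study’s data:

```python
# Computing time above liquidus (TAL) from a sampled reflow profile.
# The trace below is synthetic; real profiles come from a thermocouple
# logger at some fixed sample rate.

def time_above(profile, threshold_c, dt_s=1.0):
    """Seconds the profile spends at or above threshold_c (dt_s sampling)."""
    return dt_s * sum(1 for t in profile if t >= threshold_c)

# Synthetic 1 Hz trace: 1 C/s ramp to 240 C, short peak, 2 C/s cooldown.
profile = ([25 + i for i in range(216)]      # ramp 25 -> 240 C
           + [240] * 10                      # hold at peak
           + [240 - 2 * i for i in range(60)])  # cool down

tal_sac = time_above(profile, 217)  # above SAC 305 liquidus
tal_scn = time_above(profile, 227)  # above SnCuNi liquidus
print(tal_sac, tal_scn)  # 46.0 sec above 217 C, 31.0 sec above 227 C
```

The same TAL-above-217°C profile can thus be reported against the 227°C liquidus as well, which is how the 50/75 sec. settings map to 30/50 sec. in the study.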

An automatic x-ray inspection program was used to inspect voiding. All solder joints from 12 boards with 16 components each were tested. The percentage of the single largest void and the overall voiding per solder ball were recorded. Results showed pure SCN exhibits fewer and smaller voids than the other two metal systems. The overall void sizes were insignificant, however: on average less than 3.5% for the SAC and mixed systems and less than 1.2% for SCN systems. These voids passed IPC-A-610D, which sets acceptance criteria for Classes 1, 2 and 3 at a maximum 25% of the ball x-ray image area.
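The 25% acceptance criterion lends itself to a trivial automated check. A sketch with invented measurements:

```python
# IPC-A-610D-style voiding check: a BGA ball fails if its void area
# exceeds 25% of the ball's x-ray image area. Measurements are invented.

VOID_LIMIT_PCT = 25.0

ball_void_pct = {"U1-A1": 3.2, "U1-A2": 0.9, "U1-B1": 26.1, "U1-B2": 1.4}

failures = {ball: pct for ball, pct in ball_void_pct.items()
            if pct > VOID_LIMIT_PCT}
print(failures)  # only balls over the 25% limit
```

In the study all three alloy systems averaged well under this limit, so voiding was not a differentiator.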

Good solder joint formation and collapse were observed on SCN and mixed assemblies when reflowed at 238°C and a TAL of 75 sec. Microstructure analysis showed the main difference between these two systems was the presence of Ag3Sn intermetallic in the mixed joints. Another difference was the thickness of the intermetallic between the PCB and joint: a 60% thicker intermetallic was observed in the mixed system. This might be an indication that the nickel content inhibits the growth of CuSn intermetallic.1

Vibration testing was performed on 12 boards reflowed in air. The goal was to excite the first resonance (bending movement) at relatively low amplitude to induce high cycle fatigue failures. The boards were mounted with standoffs at the four corners to an electrodynamic shaker. Failure data were divided into four groups depending on component location on the board because they experienced different stress levels. Figure 1 shows a schematic of the board with its groups. Two failure modes were observed: pad cratering (groups 1 and 2) and solder fatigue (groups 3 and 4).

Fig. 1

Results showed SAC, SCN, and mixed assemblies performed similarly in each group. The lowest cycles to fail were observed in groups 1 and 2, followed by groups 3 and 4. Further testing is planned to compare alloys in drop testing where the strain rate and stress levels are much higher.

Thermal cycling was performed over a temperature range of 0° to 100°C, with a dwell time of 10 minutes and a ramp rate of 10°C/s. The test was stopped at 1,686 cycles, when more than 50% failures were observed for each board. Table 1 shows the characteristic life (N63) and early failures (N01) for each case. SAC systems had, on average, the best characteristic life, followed by SCN (150 fewer cycles) and mixed (191 fewer cycles) systems. The data favor the 238°C peak temperature, and there was no significant difference between the two TAL settings.

Table 1

Early failures, which correspond to 1% of the failure data, showed a different trend. In this case, SAC systems had the highest number of cycles-to-failure, followed by mixed (90 fewer cycles) and SCN (204 fewer cycles) systems. An improvement in early failures was observed when nitrogen was used with SCN joints, resulting in behavior similar to SAC systems.
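For reference, N63 and N01 are Weibull quantiles: the characteristic life eta is by definition the 63.2% point of the distribution, and N01 is the 1% point. A sketch with a hypothetical fit (the eta and beta values below are assumptions, not results from this study):

```python
# N63 and N01 as quantiles of a two-parameter Weibull distribution.
# eta (characteristic life) and beta (shape) below are hypothetical.

import math

def weibull_n(p, eta, beta):
    """Cycle count by which a fraction p of the population has failed."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

eta, beta = 1500.0, 6.0  # hypothetical fit, not from the study

n63 = weibull_n(1 - 1 / math.e, eta, beta)  # the 63.2% point equals eta
n01 = weibull_n(0.01, eta, beta)            # early-failure (1%) point
print(round(n63), round(n01))
```

This is why a system can rank well on N63 yet poorly on N01: the shape parameter controls how far the early-failure tail sits below the characteristic life.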

SCN shows promise as a replacement for SAC alloys in some reflow applications, with performance in mechanical and thermal testing comparable to SAC systems. Thermal cycling results suggest the appropriate process window for the SCN system should have a peak temperature of 238°C and a TAL of 50 sec. Thus, a typical SAC profile can be used to assemble pure SCN. At this temperature, heat-sensitive materials suitable for Pb-free applications can be used without any problem. The use of a single alloy in wave and reflow processes will benefit the end-user by reducing complexity and cost.

Mixed assemblies, which are mainly SAC alloy (ratio sphere/paste = 3.18), were affected by the content of silver and nickel, which results in a decrease in the characteristic life when compared to pure SAC, but were comparable to SCN assemblies. In general, all three systems performed similarly within an appropriate process window, but more experiments are needed to support this conclusion.

Reference

1. F. Song, J. Lo, J. Lam, T. Jiang and S.W.R. Lee, “A Comprehensive Parallel Study on the Board Level Reliability of SAC, SACX, and SCN Solders,” Electronic Components and Technology Conference, May 2008.

Ursula Marquez de Tino is a process and research engineer at Vitronics Soltec, based in the Unovis SMT Lab (vitronics-soltec.com); umarquez@vsww.com.
