
Identify and solve printer-related solder defects.

As noted last month, bridging is solder connecting, or in most cases misconnecting, two or more adjacent pads, forming an unintended conductive path. This month, we look at instances caused by the screen printer, and how to mitigate them.

Problem: Poor gasketing (paste oozes beneath the stencil during printing, increasing the chance of wet solder paste bridges).
Recommendations:

  • Zero print gap between the stencil and PCB.
  • Check for paste smear underneath the stencil.
  • Verify stencil tension is sufficient.

Problem: A misaligned print makes it harder for the paste to pull back to the pads during the molten stage, increasing the potential for bridging.
Recommendation:

  • Ensure print accuracy and consistency for both print strokes.

Problem: Smearing and bridging on the board printed immediately after a stencil cleaning operation.
Recommendations:

  • Verify stencil is dry after cleaning and before next print.
  • Standard cleaning mode is wet/vacuum/dry.

Problem: Poor print definition with dog ears, especially on fine-pitch components.   
Recommendations:

  • Check board support.
  • Adjust separation speed to achieve minimum dog ears. (Different paste chemistry requires different separation speed to minimize dog ears.)

Problem: Dented squeegee blades could result in uneven print pressure.
Recommendation:

  • Check squeegee blade condition.

Paul Lotosky is global director – customer technical support at Cookson Electronics (cooksonelectronics.com); plotosky@cooksonelectronics.com. His column appears monthly.

Excess flux can disturb PoP placement.

This month we feature one recent process control issue submitted to the database.

Figure 1 shows the bottom of a package-on-package (PoP) device, a small BGA, prior to placement on the top of the bottom package. The liquid flux should only be on the surface of the balls. In this case, however, the part has been drowned, with flux covering all the balls and the package surface. Excess flux can result in component float during reflow, and depending on the solvent in the flux, the part could even jump off the board during reflow. Excess flux should be avoided; it shows poor control, may cause problems during cleaning and will affect underfill application (if conducted).



Flux or dip paste is used to assemble PoP components during a typical SMT process. As a guide, the package balls should be dipped to a depth of 50% of the ball height. In this example, the flux tray clearly was not flat, the amount of flux on the tray was uneven, or the doctor (control) blade height was incorrectly set.
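The 50%-of-ball-height guideline above is simple arithmetic, but it is worth encoding as a process check. A minimal sketch (the function name and units are illustrative, not from any standard):

```python
def flux_dip_depth_um(ball_height_um: float, fraction: float = 0.5) -> float:
    """Return the target flux-dip depth in micrometers.

    fraction defaults to 0.5, i.e. the 50%-of-ball-height guideline
    described above for PoP flux or dip-paste application.
    """
    if ball_height_um <= 0:
        raise ValueError("ball height must be positive")
    return ball_height_um * fraction

# Example: 300 um package balls -> 150 um target dip depth.
print(flux_dip_depth_um(300))
```

Comparing the measured flux film height in the tray against this target (across several points on the tray) would also catch the unevenness described above.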

These are typical defects shown in the National Physical Laboratory’s interactive assembly and soldering defects database. The database (http://defectsdatabase.npl.co.uk), available to all this publication’s readers, allows engineers to search and view countless defects and solutions, or to submit defects online. 

Dr. Davide Di Maio is with the National Physical Laboratory Industry and Innovation division (npl.co.uk); defectsdatabase@npl.co.uk. His column appears monthly.


From Fremont to Tunisia, AsteelFlash aims to be a worldwide Tier 2 EMS.

It was among the largest EMS mergers of the decade, but the timing – just a few days before Christmas 2007 – may well have distracted from the enormity of the event. On Dec. 20, Asteel announced its acquisition of Flash Electronics in an all-stock deal, forming an EMS company with more than $600 million in annual sales.

Today, Paris-based AsteelFlash (asteelflash.com) has 12 manufacturing locations on four continents, and continues to make the occasional strategic acquisition, having acquired MRP in the UK in 2008. Georges Garic, corporate vice president and general manager of America and Asia, spoke with Circuits Assembly’s Mike Buetow in July about the company’s strategic approach – and its atypical footprint.

CA: Asteel has 12 sites worldwide, but compared to some of the large EMS companies in the world, they are not in all the usual places. Do you see that changing at all over the next two years?
GG: Asteel and Flash Electronics created AFG in February 2008. Our strategy is to become a worldwide Tier 2. We want to differentiate ourselves by the proximity, the footprint, the technology and our supply chain.

We are planning to acquire something in Mexico, on the East Coast in the US, and in some other countries in Europe. The proximity [to customers] is important.

CA: In some company literature, Asteel presents itself as a high-mix, low-volume company.
GG: When we analyzed the market, we saw all the Tier 1s merging and becoming bigger, and customers were coming to us and complaining about service problems for the high-mix, low-volume markets. When it’s high-volume, low-mix, it’s easy to run with the Tier 1s. When it’s high-mix, it’s another story.

CA: Could you break down the percent of revenues from low-mix, high-volume vs. high-mix, low-volume?
GG: Today, high-mix, low-volume is about 60% to 70% of revenue.

CA: What is a typical lot size?
GG: One hundred to 200 boards would be a big batch.

CA: Break down Asteel’s end-markets by sales percentage.
GG: We are trying to balance the markets: networking infrastructure, storage, industrial, RF, and lighting.

CA: Why are those ideal markets for the company today?
GG: It’s linked to our history and footprint. California is heavily in networking and storage. In Europe, it’s heavily industrial. So due to the history of the two companies, we are strong in those areas.

Other areas we are trying to develop are military/aero – we are now ITAR registered in our Fremont, CA, plant – and medical, so we can expand. We also are working on solar, which we group under industrial.

CA: Among your sites are two in Tunisia. Are there differences to manufacturing in North Africa vs. other locations?
GG: Tunisia is to Europe what Mexico is to North America. It is stable and low-cost. There are a lot of companies there from Europe: Spain, Germany, France, Italy. Among EMS companies however, none from the Tier 1 is there. After us, there are companies in the $20 million to $50 million range. There is a clear stability of costs that you don’t have in Eastern Europe. We saw all the costs in Eastern Europe skyrocket. Tunisia has stability in the wage rate and the exchange rate of the dinar to the euro. And it is one hour from Europe by plane.

CA: As of 2008, you had sales of $608 million. Where does that stand today?
GG: At the end of this year, we will be ahead of our 2008 revenue.

CA: Have you encountered problems with parts availability?
GG: Everyone has the problem. We have heat on lead times and prices. I don’t know when the end to this crisis will be. It started in the fourth quarter (2009); I think we will suffer through the end of this year. But we don’t see any more huge expansions of lead times.

Our backlog is clearly increased. I think there is a potential small bubble effect of this backlog. We are spending a lot of energy and have some negative heat because of prices.

Changes to the design by non-designers usually result in unforeseen failures. 

A February article by Jack Olson and Mike Tucker titled “PCB Data Preparation” (http://pcdandf.com/cms/magazine/209/6996) spurred me to elaborate on a few items from the designer’s perspective.

It is true that often a designer does not want the fabricator to modify the PCB data file. We have a note similar to the one mentioned in the article:

DATA MAY NOT BE MODIFIED WITHOUT WRITTEN APPROVAL

I believe our version is a little more practical. We call it Note 23. We put it on every PCB print, always as Note 23. Note 23 reads as follows:

MODIFICATION TO COPPER WITHIN THE PCB OUTLINE IS NOT ALLOWED WITHOUT WRITTEN PERMISSION FROM MOREY ENGINEERING, EXCEPT WHERE NOTED OTHERWISE ON PRINT. MANUFACTURER MAY MAKE ADJUSTMENTS TO COMPENSATE FOR MANUFACTURING PROCESS, BUT THE FINAL PCB IS REQUIRED TO REFLECT THE ASSOCIATED GERBER FILE DESIGN +/-0.001 IN. FOR ETCHED FEATURES WITHIN THE PCB OUTLINE.

This makes it clear to the fabricator that it can smooth geometry, perform edge compensation, etc., to make the fabrication process reliable. It is also clear that the designer requires the copper in the final product to match the Gerber files.
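The check Note 23 implies is easy to express: a measured etched feature must fall within ±0.001 in. of its Gerber dimension. A trivial sketch (the function name is hypothetical):

```python
def within_note23(gerber_in: float, measured_in: float,
                  tol_in: float = 0.001) -> bool:
    """Check a measured etched-feature dimension (inches) against its
    Gerber dimension, per the +/-0.001 in. tolerance in Note 23."""
    return abs(measured_in - gerber_in) <= tol_in

# An 8 mil trace measured at 8.5 mil passes; measured at 10 mil it fails.
print(within_note23(0.008, 0.0085))
print(within_note23(0.008, 0.010))
```

Running such a check against cross-section or AOI measurements gives the designer objective evidence that the final copper matches the Gerber data.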

The real motivation behind Note 23 is simple: to send any given PCB design to more than one fabricator and get the same result. Most companies probably always fabricate the PCB for a given project with the same fabricator for consistency, but what happens if the fabricator goes out of business or is purchased by a competitor, or begins having quality issues? You may be forced to change your PCB fabricator, a messy business for many reasons. It’s even worse when boards from a different fabricator do not function the same way. If the fabricator follows the designer’s print and the PCB does not function properly, the responsibility lies with the designer for not having a complete and accurate description of their own design.

How can you verify you actually get what you asked for? It is true, as Olson and Tucker mention, that a designer should establish a good relationship with their PCB fabricator, which includes working to communicate one’s own needs and process, and to understand the fabricator’s needs and process. Note 23 is another step in the right direction. If the fabricator finds an error, the last thing you as the designer want them to do is to fix it without telling you. Given the chance to fix the problem, the designer will eliminate the possibility that another fabricator fixes the problem differently (resulting in a different final PCB) or, worse, fails to identify the problem and builds nonfunctional PCBs.

Another move is to work with the board fabricator to get it to provide individual layers of the PCB any time it changes the fabrication files. What would you do with individual layers? From experience, I can say that most of the time, if everything goes well, they will collect dust. If you do need them, however, you will not be able to get them after the fact.

Individual layers permit observation of differences in the copper in the PCB from different fabricators or from one design revision to the next. Let’s say the customer is having field failures and cannot determine the problem. It will go to its contract manufacturer for proof of the integrity of every component in their product.

Generally for the PCB, all the EMS firm can do is provide the first-article inspection (a piece of paper), and cross-section the PCB and verify the layer stackup, neither of which can reveal whether the copper on each layer inside the board was fabricated per the design. From a contract manufacturer’s perspective, the individual PCB layers can help the customer determine if the board was fabricated per its design.
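A lightweight way to act on those archived individual layers is to fingerprint each file, so deliveries from different fabricators or design revisions can be compared quickly. A minimal sketch, assuming the layers are stored as files with a .gbr extension (the extension, directory layout, and helper names are assumptions):

```python
import hashlib
import pathlib

def layer_fingerprints(directory: str) -> dict:
    """Map each layer file name in a directory to a SHA-256 digest
    of its contents."""
    fingerprints = {}
    for path in sorted(pathlib.Path(directory).glob("*.gbr")):
        fingerprints[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return fingerprints

def changed_layers(old: dict, new: dict) -> list:
    """Return the names of layers present in both sets whose
    contents differ."""
    return [name for name in sorted(old) if name in new and old[name] != new[name]]
```

A mismatch only flags that a layer changed; visually comparing the two files (or a Gerber diff tool) then shows where the copper differs.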

Another big tradeoff is controlled impedance versus designed impedance. Controlled impedance is the practice of specifying that certain traces on a PCB must meet a target impedance, leaving it to the fabricator to adjust the design as necessary to achieve the specified result. (The adjustment typically involves trace widths and PCB material thicknesses.) Designed impedance is when the designer specifies the PCB stackup and trace widths to achieve the desired impedance.

As an engineer, controlled impedance can be scary. You have given control of critical PCB parameters to the fabricator. The fabricator could change the manufacturing parameters on the fly without notification based on its currently available material. The advantage is that you (could) receive a lower price, but at the cost of losing control over the design. This could be OK, provided the design is simple and can permit this variation. Complex designs usually mean less design headroom and lower tolerance for change.

Let’s say you have a design that incorporates several fine-pitch BGAs connected by an address/data bus that requires impedance A, and you have a few RF traces that require impedance B that route to antennae fabricated right on the PCB. The PCB complexity is now at six to eight layers, and the designer has the option to specify each “important” trace on the PCB and its impedance (controlled impedance), or specify the trace widths and PCB stackup (designed impedance). With controlled impedance, the fabricator would be responsible for the impedance and would measure it to verify and put the data on the first-article inspection. With designed impedance PCBs, the designer could ask the fabricator to measure the impedance as reference only, and include the data on the first-article inspection. The latter permits more design control with the same check and balance in place to ensure proper impedance has been realized.
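The designed-impedance route assumes the designer can estimate trace impedance from the stackup up front. As an illustration only, the widely cited IPC-2141 approximation for surface microstrip can be sketched as follows; the function name and example dimensions are illustrative, the formula is valid only over a limited range, and a field solver (plus the fabricator's measurements) is what real designs rely on:

```python
import math

def microstrip_z0(h_mil: float, w_mil: float, t_mil: float, er: float) -> float:
    """Approximate characteristic impedance (ohms) of a surface microstrip
    per the IPC-2141 formula: Z0 = 87/sqrt(er + 1.41) * ln(5.98h / (0.8w + t)).

    h = dielectric height, w = trace width, t = trace thickness (all mils),
    er = dielectric constant. Valid only for roughly 0.1 < w/h < 2.0.
    """
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# Example: 5 mil dielectric, 8 mil trace, 1.4 mil (1 oz) copper, FR-4 (er ~ 4.2)
z = microstrip_z0(5, 8, 1.4, 4.2)  # roughly 50 ohms for this common geometry
```

Running such an estimate for each controlled net, then asking the fabricator to measure and report actual impedance on the first-article inspection, gives the check-and-balance described above.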

The point here is that a good designer will take full responsibility for the design. No one, for any reason, should alter that design except the designer. The designer knows why everything is the way it is. Changing a design without the designer will usually result in unforeseen failures. Example: The PCB manufacturer removes nonfunctional pads around a via on innerlayers because they are electrically nonfunctional. Result: The structural integrity of the via is compromised when the end-product is subjected to stress testing. How would the fabricator know that the little bit of extra structural integrity of the via is needed for the product to pass HALT (highly accelerated life testing)?

We work on products every day that are successfully subjected to thermal shock rates of 50°C per minute temperature change, between -40° and +85°C, while experiencing random vibration levels of 30G. The best solution in this situation is Note 23. This forces the fabricator to consult with the designer. Feedback from the fabricator then gets incorporated directly into the design. This process makes the design better and hopefully minimizes any room for error. The designer must be in total control of the design.

It is great to have a fabricator that can provide its “secret sauce” to make the design work, but be ready to get locked into that supplier, because another fabricator’s recipe will be different. As a designer, is that a risk you want to take? Is that a risk you want to take with your product? Is that a risk you want to take with your business? 

Jeff Champa is department head, development engineering at Morey Corp. (moreycorp.com); jchampa@moreycorp.com.

Parylene coating is tough to see under UV light.

This month we feature an issue recently submitted to the database. It purports to be missing conformal coating, but is it?

Actually, this is a satisfactory board assembly after coating. This board assembly has a Parylene coating, which is around 15 µm in thickness on all surfaces of the board and components. Based on the process, coverage and thickness are virtually guaranteed. It is difficult to see the coating due to the process and the type of material used, so it’s not a defect. Most conformal coatings are fairly easy to inspect visually with UV light; however, one of the best coatings in terms of performance is difficult to see. This coating is more expensive than most, and the process is normally subcontracted.

It is sometimes possible to see the coating under higher magnification on the side of the components close to the board surface, normally when a thicker coating has been applied. Inspection also can be easier if a QC label is placed on the surface of the board prior to coating, or if it’s possible to compare the solder mask surface with and without coating.

These are typical defects shown in the National Physical Laboratory’s interactive assembly and soldering defects database. The database (http://defectsdatabase.npl.co.uk), available to all this publication’s readers, allows engineers to search and view countless defects and solutions, or to submit defects online.

Dr. Davide Di Maio is with the National Physical Laboratory Industry and Innovation division (npl.co.uk); defectsdatabase@npl.co.uk. His column appears monthly.

Continuous improvement relies on a balance between well-defined strategy and effective execution.

Pick up any industry trade magazine or attend any conference and you invariably are exposed to the current slate of “hot topics”: Consolidation. Merger. Acquisition. Vertical integration. NADCAP, AS9100, ISO 13485 certification. New materials, new processes, new equipment. Reliability and quality. Low-cost geography. Quick turn. Intellectual property.

All are valid and valuable strategic elements. Yet, it has been said that vision – or in this case, strategy – without action is nothing more than dreams. But when you thoughtfully merge vision and action, the result is real, meaningful progress. So in today’s wild and changing environment, how do we turn these important strategic initiatives into world-class progress by effectively merging this vision and urgent action? What is the execution plan?

This author believes there are three key components to this vision/action merger: the growth of an effective Lean Six Sigma methodology forming the foundation of a culture of continuous improvement, the embrace of a vital and vibrant employee development program, and the presence of an effective leadership structure and system.

The various Lean methods and Six Sigma problem-solving techniques have been well documented, and need not be discussed here in great detail. But to any degreed engineer, a large part of the Six Sigma toolkit is what was originally taught to us as “The Scientific Method.” The Scientific Method was the cornerstone of every science class in high school and college. So, if these methods are so common and so well known, what’s the big deal? It is that the methods are not followed, due to a fundamental lack of problem-solving discipline and the quest to put out the latest fire. Often, these fires are only superficially solved, left smoldering, waiting to flare again, since the true root cause was never identified and subsequently extinguished. Human nature is to deal with what will get you shot today, and a problem that went away will not get you shot (at least not today). That being the case, the Six Sigma structure and methodology are genuinely valuable because they drive a set of problem-solving methods and disciplines that value true root cause analysis, and thus prevent problem recurrence. These tools and techniques foster a data-based decision-making culture via broad use of statistical process control, well-designed experiments, automated statistical analysis tools, and the use of Ishikawa fishbone diagrams and the “Five Whys” for root cause analysis. Once a problem is solved, the proper controls are put in place, and that specific problem should never happen again. You never want to buy the same real estate twice.

Lean methodology is also critical, especially in manufacturing settings where a constantly changing product mix will quickly obsolete run rules aimed at maximizing efficiency. The variable product mix issues are amplified by new product introduction and their aggressive schedules. As such, setup reduction and lot size optimization are constant challenges and critically important, perfect applications of the Lean tools, including manufacturing associate-driven kaizen events. Lean methods, including 5S and work cell standardization, will always help improve factory efficiency, optimize overall throughput, and set the stage for effective Six Sigma tool use.

One of the beneficial improvements driven by Lean implementation is cycle time reduction. Cycle time reduction obviously is desirable to customers; shorter lead times are regularly used as a competitive tool in the sales force arsenal. But the operations team loves cycle time reduction too. A shorter build cycle time means more cycles of learning available per quarter, which accelerate iterative yield learning. Similarly, in the event of the unavoidable process problem, shorter cycle times reduce mean time to detection, and minimize the amount of work in process that may be affected. (Of course, reduced cycle times also are attractive to the CFO, for cash preservation reasons.)

Many of these initiatives start with a flourish, but then settle into a business-as-usual approach, especially if the initiative is treated as a program, rather than a new way of life. To cement these tools in the organization, one needs a comprehensive training system to proliferate these tools, with a situational delivery method to accommodate different incoming skill levels. A haphazard education system that results in a cultural backslide will be exceptionally discouraging to those who enthusiastically volunteered a few short months ago. Manufacturing teammates want to do a better job and want to be more involved, and if education and problem-solving participation are dangled in front of them without a sustainable system for 100% inclusion, the result will be far worse than before the start of the initiative. I’ll reiterate: this is not a program, but a process aimed at sustainable, continuous improvement. The employee development program cannot simply offer single-ended Lean or Six Sigma training. A truly effective program will address a wide variety of topics within the three skill sets: job content skills, transferable skills and self-management skills. All three must be collectively addressed with the individual’s personal readiness taken into account, and the resultant training must be immediately put to use on the production floor so that the skills are not merely theoretical in nature, but have been reduced to practice in a meaningful way. Employee development is a long-term investment too often sacrificed during tough financial times. This is a mistake.

Strong, vibrant leadership is the last key component, the glue that holds all this together, as well as a catalyst to make the organization more effective. Great leaders show their mettle under fire. There is a marked difference between leading and managing. Managers, by definition, manage, or react to situations after they arise. In stark contrast, leaders exercise a more predictive, anticipatory approach to their responsibilities, which always has the team moving forward in sync with the overall organizational strategy. All too often, people are put into leadership positions because they had historically been a great doer, a great problem-solver. But are they a great problem avoider? The leader is truly a catalyst, employing an effective organizational structure that fosters timely, penalty-free communication and has interlocked and cascaded SMART goals: specific, measurable, attainable, realistic and timely. Since people respect what you inspect, these goals link directly to a handful of bellwether metrics, from which the pulse of the operation can always be taken.

Many strategies can be incorporated into an organization’s operating plan. Execution and the tools to enable execution are the missing link. The proper balance between a well-defined strategy and an effective execution plan will form the foundation for a culture of continuous improvement. One without the other is counterproductive. Once this culture takes root, however, and the entire organization’s efforts are synchronized and catalyzed, the result is unstoppable.

James Fuller is vice president, development at Endicott Interconnect Technologies (eitny.com); jfuller@eitny.com.
