
2013 Articles

Why one solvents expert thinks we have the method for approving substances backwards.

The popular solvent known as 1-bromopropane (aka n-propyl bromide, or nPB for short) came on the industrial market in 1996. Its main uses are metal cleaning, adhesive dilution, garment dry-cleaning, and – most relevant to readers – defluxing electronics assemblies.

When it was first marketed, there were no data on its toxicity, carcinogenicity, mutagenicity, and so on. The makers initially proposed a limit of 500ppm of the substance in the air breathed by workers, but this was rapidly dropped to 200ppm. This was the Occupational Exposure Limit (OEL), and it was only a recommendation; one small French producer recommended 10ppm. The US National Toxicology Program (NTP) started work on the substance in the early 2000s, but that work is still ongoing, and OSHA cannot issue a legally binding Permissible Exposure Limit (PEL) until the NTP has finished. In the early 2000s, anecdotal reports began to emerge of neurological problems in some workers exposed to the substance. The makers funded animal tests, and most of them dropped their recommendations to 100ppm in the mid-2000s after mice were found to show delayed nerve responses in the central nervous system and reduced sperm motility. Later results indicated that both the neurotoxicity and the reproductive toxicity were worse than first thought, and most makers reduced their recommended OEL to 25ppm, although one recommended 100ppm. These figures remain in effect today.

From the practical point of view, 100ppm is fairly easy to control. For defluxing in our industry, about 25ppm is the best achievable with ordinary high-quality open-top machines. So-called “zero-emission” machines are available, but they cost three to five times more than conventional ones; used correctly, they can generally hold average exposure to the low single digits.

In Europe, nPB was classed as a VOC (it is less strictly regulated as such in the US), and the VOC Directive limited its use to negligible amounts. The REACH program has proposed classifying it as a reproductive toxin, but the proposal has not yet been evaluated. In China, where vast amounts are made and used, a limit is still pending.

About two years ago, the NTP issued an interim warning that animal tests on both rats and mice indicated a strong probability of carcinogenicity and mutagenicity. The American Conference of Governmental Industrial Hygienists (ACGIH) issued a Threshold Limit Value (TLV), similar to the OEL, of 10ppm. The ACGIH is a highly respected NGO, and its recommendations are based on known science, but they have no force of law. Even so, many companies that use chemicals try to take notice of its recommendations.

The ACGIH has recently put the cat among the pigeons by giving advance notice of a change in the TLV for nPB. The proposed draft value is 0.1ppm, an unprecedented two full orders of magnitude below the current TLV (acgih.org/store/ProductDetail.cfm?id=2151). I am not yet aware of the reasoning behind this, but it must be serious.

Needless to say, 0.1ppm is impossible to achieve under industrial conditions, so if this value were to become a legal limit, nPB would be condemned to death as an industrial solvent for defluxing and similar uses. (It is also used in small quantities as a feedstock for the pharmaceutical industry in the manufacture of psychotropic drugs such as diazepam.)

Why this rant? Simply because I suggest we have it wrong, all wrong. Workers were initially exposed to levels of nPB now deemed potentially dangerous, up to 5,000 times higher than the proposed “safe” limit. Even today, one maker recommends an OEL 1,000 times the proposed value. This is far from the first time workers have been told that such-and-such a chemical is safe, only to die prematurely from the effects of exposure 10, 20 or 30 years later; I know of cases, one involving a close friend, where people have died in their 50s and 60s from organ failures resulting from chemical exposure as young adults.

Yes, we say that chemicals are innocent until proven guilty. This is wrong; I suggest they should be considered guilty until proven innocent. An arbitrary tight limit should be placed on new chemicals, based on computer modeling and analogical comparison with similar known substances. This limit can then be relaxed if tests and experience show it is safe to do so.

Brian Ellis is a retired consultant and longtime vice-chairman of the IPC Cleaning Handbook task group. He is author of Cleaning and Contamination of Electronics Components and Assemblies, among several other books and technical papers. He was a member of the Solvents Technical Options Committee of UNEP, which advises the Parties to the Montreal Protocol, from 1989 to 2004; co-chaired the nPB Task Force; and was Senior Solvents Consultant to the UNEP Multilateral Fund and to the Swiss Federal Office for the Environment. He founded Protonique SA and Protonique Ltd. and developed the Contaminometer, the Insulohmeter, the APL series of aqueous cleaners and driers, and other equipment; bne@bnellis.eu.

Used properly, standards can be effective tools in a static control program. 

Economic pressures, device densities, new technologies, and an increased reliance on outsourcing are just a few of the ongoing indicators of change in electronics production. And ESD control has a key role in coping with some of these changes facing the industry.

In meeting the complex challenge of reducing ESD losses, standards play a growing role in reducing marketplace confusion in the manufacture, evaluation, and selection of static control products, and in the implementation of static control programs. Standards help ensure lot-to-lot consistency for static control products and provide a means of objective evaluation and comparison among competing ESD control products. They also reduce conflicts between users and suppliers of ESD control products and are used in developing, implementing, auditing, and certifying ESD control programs.

Standards, however, should be viewed simply as tools, and like any tool, their effectiveness depends on how well they are used. Properly used to drive a nail, a hammer fastens siding firmly to the frame of a house; a swollen thumb is vivid testimony to the hammer’s misuse. Current industry conditions demand that all available tools be put to good use. Wasting time and money is out of the question. The following guidelines will help you use standards effectively.

First, evaluate and select standards appropriate to your specific applications. In the US, use of standards is generally voluntary, although it may be written into contracts or purchasing agreements between buyer and seller. In much of the rest of the world, the use of standards is usually compulsory and often has the force of law. When given the option to use, or not use, a standard, manufacturers should weigh the decision carefully; good decisions rest on good information and an analysis of need. Use the standards applicable to your manufacturing environment, and confusion will be greatly reduced while efficiency improves.

Second, understand the difference between a standard and a test method, and use each accordingly. The term “standard” is often used to refer to all standards-related documents, but significant differences exist in the different types of documents. A true standard is a precise statement of a set of requirements to be satisfied by a material, product, system or process: the electrical resistance of a worksurface, for example.

Usually, a standards document also specifies the procedures (test methods) for determining whether each of the requirements is satisfied. For instance, you can specify that the worksurfaces in your facility meet the requirements outlined in a standard.

A test method is a definitive procedure for the identification, measurement and evaluation of one or more qualities, characteristics or properties of a material, product, system or process. For example, you can specify that the worksurfaces have a specific resistance level when measured according to the test method, but you cannot specify that the worksurfaces meet the requirements of the test method, because the test method normally doesn’t contain the actual specification for the product being tested.

You’ll avoid a lot of conflicts in purchasing and in program implementation when you use the right type of document for the specific application.

Third, understand what the standard covers and what it doesn’t. Usually this type of information is found in the scope of the document. If the scope says the document covers worksurfaces with a resistance to ground of 10⁶–10⁹Ω, then the document is not applicable for materials requiring 10¹⁰Ω or more. If the scope indicates the document applies to facilities that manufacture explosives, then it may not be applicable to facilities that manufacture semiconductor devices. A thorough understanding of the document’s scope will prevent the purchase of incompatible products and the implementation of a faulty manufacturing procedure.
Fourth, learn the detailed provisions of the standard. If the document specifies resistance, do not select a material based on its resistivity. If the test method specifies applying 100V when measuring resistance, a meter that applies 500V will need to be replaced.
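The scope and provisions checks above can be thought of as a simple two-part test: is the material within the document’s scope, and was the measurement taken under the conditions the test method requires? Here is a minimal sketch in Python; the limits (10⁶–10⁹Ω) and the 100V test voltage are the illustrative figures from this column, not values from any particular ESD standard.

```python
# Illustrative sketch of the "scope" and "provisions" checks described above.
# The numeric limits are the example figures from this column, not a real standard.

SCOPE_MIN_OHMS = 1e6          # lower bound of the document's scope (example)
SCOPE_MAX_OHMS = 1e9          # upper bound of the document's scope (example)
REQUIRED_TEST_VOLTAGE = 100   # volts, as specified by the test method (example)

def within_scope(resistance_ohms: float) -> bool:
    """Is this material covered by the standard's scope at all?"""
    return SCOPE_MIN_OHMS <= resistance_ohms <= SCOPE_MAX_OHMS

def measurement_valid(applied_voltage: float) -> bool:
    """Was the reading taken under the conditions the test method specifies?"""
    return applied_voltage == REQUIRED_TEST_VOLTAGE

# A 5e8 ohm worksurface measured at 100V passes both checks...
print(within_scope(5e8) and measurement_valid(100))   # True
# ...a material requiring 1e10 ohms is simply outside the document's scope...
print(within_scope(1e10))                             # False
# ...and a reading taken with a 500V meter does not satisfy the test method.
print(measurement_valid(500))                         # False
```

The point of separating the two functions is the same distinction the column draws: scope tells you whether the document applies at all, while the test method’s provisions tell you whether a given measurement is even comparable to the requirement.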

Finally, be specific and clear in all communications. If a potential material supplier provides confusing specifications, ask questions and get clarification. If evaluating materials using a specific test method, reference that test method correctly in program documentation. And, always make sure to include appropriate staff in communications: vendors, customers, production managers, and the purchasing department.

Used properly, standards can be important and effective tools in your static control program.

This column is written by The ESD Association (esda.org); info@esda.org.

LEDs
“Understanding Power LED Lifetime Analysis”

Author: Philips Lumileds Lighting
Abstract: When designing LED-based lighting systems, engineers need to understand LED lumen maintenance and mortality in similar terms to those used when designing with conventional light sources. However, comparable data have been nearly impossible to find. In addition, designers need extra information to predict the lifetime of LEDs under a variety of operating conditions. A number of techniques to predict LED lifetimes have been proposed, but have not been sufficient to generate the clear and unambiguous data that lighting engineers can use easily. A new tool from Philips simplifies the process, allowing full flexibility in design options. This one tool provides information for making decisions about product lifetimes, driver constraints, number of LEDs required, and thermal management. (Company white paper)

Signal Integrity

“Bandwidth Tests Reveal Shrinking Eye Diagrams and Signal Integrity Problems”

Authors: Tim Caffee and Eric Johnson; ejohnson@asset-intertech.com.
Abstract: Each new generation of a high-speed bus typically runs at a higher signal frequency, but this decreases the margin for error on the bus, making it more sensitive to disruptions from jitter, inter-symbol interference (ISI), crosstalk and other factors. To avoid potential problems on high-speed buses like DDR3, PCI Express, Intel QPI, Serial ATA, USB and others, bus performance must be validated during each phase of a system’s lifecycle, including design/development, manufacturing and as an installed system in the field. Unfortunately, effectively and economically validating the signal integrity on a high-speed bus has become more difficult, as the limitations of legacy probe-based test equipment such as oscilloscopes have become more obvious in recent years. Non-intrusive software-driven test methods based on embedded instrumentation are providing alternative validation solutions that are more cost-effective and deliver observed signal integrity data. (Company white paper, November 2012)

Solder Joint Reliability

“The Role of Pd in Sn-Ag-Cu Solder Interconnect Mechanical Shock Performance”

Authors: Tae-Kyu Lee, Ph.D., Bite Zhou, Thomas R. Bieler, Chien-Fu Tseng and Jeng-Gong Duh; taeklee@cisco.com.
Abstract: The mechanical stability of solder joints with Pd added to SnAgCu alloy with different aging conditions was investigated in a high-G level shock environment. A test vehicle with three different strain and shock level conditions in one board was used to identify the joint stability and failure modes. The results revealed that Pd provided stability at the package-side interface with an overall shock performance improvement of over 65% compared with the SnAgCu alloy without Pd. A dependency on the pad structure was also identified. However, the strengthening mechanism was only observed in the non-solder mask-defined pad design, whereas the solder mask-defined pad design boards showed no improvement in shock performance with Pd-added solders. The effects of Sn grain orientation on shock performance, interconnect stability, and crack propagation path with and without Pd are discussed. SAC 305 + Pd solder joints showed more grain refinements, recrystallization, and especially mechanical twin deformation during the shock test, which provides a partial explanation for the ability of SAC 305 + Pd to absorb more shock-induced energy through active deformation compared with SAC 305. (Journal of Electronic Materials, December 2012)

“A Mechanistically Justified Model for Life of SnAgCu Solder Joints in Thermal Cycling”

Authors: Peter Borgesen, Ph.D., Linlin Yang, Awni Qasaimeh, Babak Arfaei, Liang Yin, Michael Meilunas, and Martin Anselm; pborgese@binghamton.edu.
Abstract: We have shown the life of a SAC solder joint in a typical BGA or CSP assembly in thermal cycling to scale with the time to completion of a network of high angle grain boundaries across the high strain region of the joint. This provides for a scientifically credible materials science-based model. In-depth studies did, however, show this to require significant temperature variations. Isothermal cycling may also lead to recrystallization, albeit at a much lower level depending on alloy, processes, and cycling parameters, but a quantitative model would need to be completely different. The question therefore arises as to how large a cycling temperature range is required for our model to apply. We present results indicating that repeated cycling between 20˚ and 60˚C should be sufficient; i.e., the model should permit extrapolation of accelerated test results to realistic service conditions. Many practical applications involve a combination of thermal excursions and mechanical cycling, and there is little doubt that thermal cycling-induced recrystallization will tend to lead to much faster crack growth through the solder in subsequent vibration, etc. (SMTA Pan Pac Symposium, January 2013)

This column provides abstracts from recent industry conferences and company white papers. Our goal is to provide an added opportunity for readers to keep abreast of technology and business trends.

