
Since its launch in 1990, Tessera has assumed a leading role in next-generation packaging technologies. This month, Craig Mitchell, senior vice president of the company’s Interconnect, Components and Materials (ICM) division, himself an inventor on some 32 patents, talks with editor-in-chief Mike Buetow.

CA: Packaging today consumes much of the total cost of the end-product. Does Tessera try to drive end-products through its packaging inventions, or do you take your cues from the form factor?

CM: Traditionally Tessera has focused on chip-scale packaging and multichip modules. Going back to our founding, we recognized the importance of understanding SMT requirements and how packages are assembled to boards. We see continued migration to finer pitch. Some of the pushback stems from the impact on board cost and assembly yields. We are seeing some volume at 0.4 mm pitch and believe we will get to 0.3 mm product late next year in cellphones and some other handheld devices.

Cellphones are a key driver for semiconductor package technology today. Desktop PCs used to push semiconductor package technology from a performance perspective, but now cellphones are pushing it. Consider: one cellphone could have a baseband processor, a multimedia processor and an application processor. That’s three different processors, each with a few hundred to several hundred I/O. It’s forcing the industry to evaluate finer pitch, because manufacturers don’t want to increase the [physical] area of a given product.

CA: Does Tessera concentrate on non-handheld markets?

CM: We are concerned with high volume, because we want to help enable our licensees to do high volume. So we focus on a select few markets. Cellphones ship over 1 billion units each year – that’s high volume. Notebooks ship about 150 million units, but there could be four to eight DRAMs per notebook, which drives up unit volume. We develop prototypes and then work with licensees to introduce the technologies.

CA: Do you get pushback on your designs from others in the supply chain?

CM: We do prototype samples and get feedback from board fabricators. We get some feedback [from them] on cost and yield. We are also talking to packaging licensees, who get information back from their customers. And we get feedback from OEMs on cost levers.

CA: What is Tessera’s involvement on the various industry roadmaps for packaging?

CM: We have been involved in some of the various roadmaps, such as ITRS and IPC, and try to stay in touch with those. We’re not doing anything with iNEMI at this time. Roadmaps are a guide. They are not the gospel, if you will, but a general direction of where the industry is headed.

CA: How do you set your roadmap?

CM: It’s through direct involvement with semiconductor manufacturers and OEMs. I think that’s the same with many companies. It’s the customer that drives us. The intention is to collect information from across the industry and get unbiased info, and to see the issues that people specifically see. But you have to look at what’s happening in the industry and how you can best solve your customers’ challenges.

CA: Generally, the smaller the form factor, the more thermal management comes into play. How does Tessera factor this in?

CM: Depending on the customer, we may or may not have to do some optimization for thermal management. We look at the various materials, minimizing interconnect lengths, reducing copper content, and trying to reduce overall thermal resistance. We do a fair amount of modeling to try to understand the thermal performance. In the notebook space, Tessera has been developing technology to provide very low profile cooling methods. We call this silent air cooling [technology]. This leverages electro-hydrodynamics, and involves taking an electrode and cathode, applying a voltage across the two, ionizing the air molecules, and adding a charge. The air molecules bump into other molecules and provide airflow without any moving parts.

I think thermal management has to be managed at various levels. You have to look throughout the chain and understand how to balance thermal resistances throughout that chain. It may mean allowing some residual copper on the board in order to spread heat, instead of etching it away.

CA: At what point do you involve the EMS/ODMs?

CM: We give the EMS providers an update on where we see things headed and introduce them to our latest and greatest. For example, our MicroPillar Interconnect technology uses a copper pillar formed during the fabrication process. We talk about the impact on board assembly. We actually go through an evaluation, where we supply a test board and package, and [the EMS company] assembles it and gives us feedback on it. The soldering, placement, reflow, the results and yields, and recommendations on how we might improve design rules, be they package, PCB or substrate design rules.

CA: How often does this take place?

CM: In certain cases it’s a standing relationship, maybe a couple times a year. It changes over time. Generally, we try to keep a direct relationship. Sometimes it’s a specific EMS vendor based on whom they are working with. Sometimes we have an OEM that asks us to work with an EMS. The EMS vendors want to stay up to date on the latest and greatest technology, and we want to understand the issues in high-volume manufacturing.

CA: Do material costs influence your choices?

CM: This is very important. What we have learned is that customers pay for “good enough.” You basically have to meet their specification, but you don’t have to go beyond it. The other component is cost. Customers will buy a product with a given size and performance at a given cost. Materials, especially in advanced packaging, are a significant part of that cost. We are looking at how we can improve yields so that very little of the material used is thrown away as part of the process. What is high cost today may not be high cost tomorrow. CSP was that way.

CA: Tessera has been on the forefront of SiP.

CM: We think packaging in general is core to the miniaturization and integration of products. Miniaturization and integration are a balancing act at the system, board and packaging levels: You try to balance in the most effective way at each level. If you can integrate at the chip level at high yield, that can be the most cost-effective process. But you might have to decouple the two. Industry has invested billions upon billions of dollars, and there is great opportunity for packaging interconnects.

CA: Would you have predicted this when Tessera was launched?

CM: [Tessera cofounder] Tom DiStefano predicted circuitry coming off the chip and going into the substrate. And it makes sense. Take more of that surface routing and optimize it in the package instead. You wouldn’t test the chip before you packaged it because the package completed the circuit. I think in the next decade you will see more of that in reality.

CA: What will be the effect on known good die?

CM: I don’t know if it will have a direct impact on KGD, because you still need that. No single solution allows you to integrate all these. You need a 3-D toolbox, and in that toolbox you have multiple tools: through-silicon vias, die stacking, package stacking. All these things are required to drive integration and miniaturization. For board and package assemblers, it is access to this toolkit that will allow them to package the future.

CA: Does the move by certain EMS/ODMs into packaging complicate your relationship?

CM: There’s been a migration of the package assemblers moving into board assembly, starting with module assembly. And the EMS guys are migrating into packaging. Flextronics and Foxconn have semiconductor packaging capability. And there will be some overlap. Each will need access to the other’s capabilities. Does it complicate our relationship? No, because at the end of the day, our objective is to make technology broadly available, with more functionality, more performance, a better price point, and the right reliability.
  

I haven’t seen it all, but sometimes it feels like it.

Like, for example, when Canada issues a national inquiry into the possible ban of five rosin-containing substances from all products manufactured and sold there.

Yes, you read that right: rosin. As in tree sap.

A ban on rosin would make it difficult, if not impossible, for electronics manufacturers to conduct business in Canada. As the IPC trade group notes in comments submitted against the proposal to the Canadian Department of the Environment, rosin is used in the manufacture of more than 75% of electronics products, including defense systems, telecommunications and transportation technologies. A ban would mean more expensive products for consumers there, as manufacturers would have to engineer products specifically for that market. Moreover, you would hear the proverbial great sucking sound as Canadian electronics manufacturers bolted for countries that do not ban the use of rosin.

According to IPC technical director Dr. Greg Munie, rosins are naturally occurring materials that possess irreplaceable chemical and electrical properties that ensure a reliable, safe and long-lasting product. There is no known chemical or combination of chemicals that can provide the same functionality and reliability as rosin. Eliminating rosin would therefore force a change in the composition of soldering flux and solder paste that would ultimately affect the reliability of the final product.

It’s true that rosin-free fluxes have been available and in use for years. But that comes as small comfort to the responsible manufacturers that have invested heavily in air filtration and ventilation systems to protect their employees. We encounter hazardous materials every day – my wife is singlehandedly trying to bail out the economy through her massive purchases of cleaning products containing bleach – but handled correctly and with the appropriate safety gear in place, they do not pose health threats.

Indeed, as one engineer joked, the average Canadian is probably exposed to more rosin sitting around campfires or heating their homes with wood. Perhaps Canada should cut down and export all its sap-producing trees, too.

Trade (no) shows. Productronica takes place this month, wrapping up a trying year for trade show producers. Starting with the tepid attendance at Apex last spring and continuing through the anemic turnouts at IPC Midwest and SMTA International in September and October, respectively, the time to deal with the industry’s apathy toward these gatherings is now. While several companies remain on travel lockdown, the Chicago and San Diego locations were central to large numbers of designers and assemblers, precious few of whom bothered to make the (short) drives.

The attendance for SMTAI’s technical conference remains pretty good. But there was very little traffic on the exhibition floor, a result that mirrored IPC Midwest a few weeks earlier.

We can blame the economy. We can blame the layoffs. We can blame a lot of things. But the industry seers – also known as the media – have been saying for years there are too many shows. With Electronics New England, Electronics West, the myriad Design2Part shows, Apex, Assembly Technology Expo, IPC Midwest, PCB West, SMTAI, and countless vendor days, among many others, the regionalization – and bastardization – is effectively complete. There is simply no reason for a potential attendee to get excited about an event, because when you are practically showered with opportunities, the impact is dramatically lessened. The show producers of these events are going to have to look hard at their bank accounts and reconsider their missions. While I would be surprised if the for-profit companies (of which Circuits Assembly’s parent company, UP Media Group, is one) changed their approach, it’s high time the trade associations get together and get an agreement done that puts some sanity back into the trade show calendar.

Productronica and Nepcon China are the largest electronics assembly equipment/materials shows outside of Japan, but they have no technical conferences to speak of. Meanwhile, the US-based shows are suffering in attendance, but the conference side is much stronger than what is found offshore. The US might not be the ideal site for a blowout exhibition, but the primary goal of a technical conference is to create a forum for the exchange of ideas, and that not-so-minor goal would be enhanced by having a critical mass of engineers in one place.
Put the egos and greed aside, and get it done.

Maximizing the line. On the Circuits Assembly blog over the past month, Dr. Ron Lasky (or “Dr. Ron” to most), has been serving up a scintillating story on line balancing. Most companies way overestimate their actual line utilization rates. If you haven’t seen it, be sure to take a look (http://www.circuitsassembly.com/blog/).

  

The two primary ways to improve solar cell efficiency both demand more repeatable accuracy.

In the July issue of Circuits Assembly, in presenting the direction taken by the 2009 iNEMI Roadmap chapter on Photovoltaics, Alain Harrus and Jim Handy underscore that the principal barrier to widespread adoption of PV technology is its cost compared to the energy it generates. They go on to point out that just over half of an installed PV panel’s cost is in its conversion technology: the cells’ capacity to capture and convert solar energy. It is here innovation is most likely to occur.

Indeed, in recent years, as sunlight has become increasingly attractive as a sustainable alternative energy source, the PV industry has made significant inroads in improved efficiencies, to the point that today, a good solar cell is capable of harvesting up to 18% of the solar energy that hits it. Clearly, there is room for improvement, as Harrus and Handy point out, but to achieve this, we can no longer push the boundaries on existing technologies, as their capabilities have been stretched to their limits. This means the PV industry must develop revolutionary new ways to improve efficiencies.

The industry can work in a number of directions to achieve this: One, as Harrus and Handy outline, lies in improving the cell’s conversion capacity – the efficacy with which it transforms captured sunlight into electricity. Another, perhaps more fundamental, approach is to ensure the light-capturing parts of the cell receive as much sunlight as possible.

Last month, I mentioned the PV industry is investing heavily in reducing the width of the silver conductor grid screen-printed onto the front side of the cell, and that effectively shadows the precious light-capturing real estate below. With current feature widths at 100 to 120 µm, this shadow effect has been reduced as much as it can be using existing PV processes. Finer line work, although it may not sound overly complicated to anyone active in electronics production, requires a step change in technique and approach. This is because, to maintain their conducting capacity, finer lines must stand higher. This is not as simple as it sounds; the finer the line, the lower it stands – thanks to print paste rheology and the physical properties of screen printing stencils.
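The width/height trade-off follows directly from conductor resistance, R = ρL/(wh). A minimal sketch, with illustrative dimensions (the 156 mm finger length and 15 µm line height are assumptions for the example, not figures from this column):

```python
RHO_AG = 1.59e-8  # bulk silver resistivity, ohm*m (printed paste runs higher)

def line_resistance(length_m, width_m, height_m, rho=RHO_AG):
    """Resistance of a rectangular conductor: R = rho * L / (w * h)."""
    return rho * length_m / (width_m * height_m)

# A 156 mm finger at today's ~110 um width, vs. half that width:
baseline = line_resistance(0.156, 110e-6, 15e-6)
fine     = line_resistance(0.156, 55e-6, 15e-6)   # same height
fine_pop = line_resistance(0.156, 55e-6, 30e-6)   # height doubled

print(round(fine / baseline, 2))      # 2.0: halving the width doubles R
print(round(fine_pop / baseline, 2))  # 1.0: doubling the height restores it
```

The numbers make the point plainly: shrinking the line only pays off if the print process can build the height back up.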

One solution is to print the conductors twice over, effectively doubling their height. This is achieved using Print on Print (PoP) technology, and enables us to print features down to 50 to 60 µm wide without compromising their precious conductive capacity. It is clear that here repeatable accuracy is key – as we know well from our experience with this procedure for the semiconductor and biomedical sectors. This is because this degree of resolution calls for excellent wall definition, and therefore, a perfect gasket must be formed between the substrate and the underside of the print screen, calling for alignment accuracies to within an exacting 10 µm.

PoP capability also comes into its own in the selective etching cell structuring technique, whereby an etchant is screen-printed onto the cell’s surface in a grid pattern identical to that of the conductor grid. Upon heat activation in an oven, the etchant removes the cell’s non-reflective top layer and exposes the silicon underneath, so that in the subsequent print pass, the silver grid enters into direct contact with the cell’s active energy-transforming layer.

In a similar vein, the selective emitter process aims to increase efficiencies by printing extra n-type dopant in a pattern mirroring that of the collection grid. The fact that the two patterns must be aligned to within 10 to 12 µm is complicated by the fact that the dopant is invisible. This precludes the use of conventional vision systems, so two small fiducials, to which the two deposition processes must be precisely aligned, are typically etched on the edge of the cell.

With growing emphasis on these new PoP processes, it is clear that, if it is not already, repeatable accuracy is soon to become key for solar cell and module manufacturers, especially if we consider that the PV marketplace is far from mature, and that as panel prices come down, we can expect massive increases in production volumes. Current industry throughput rates, which at 1200 wafers per hour are already fast for those of us used to electronics manufacturing standards, are expected to increase in the very near future to twice or even three times that rate. This will put even more pressure on us to ensure our processes are accurate, and repeatably, reliably so. Sophisticated, powerful metrics such as the Process Capability Index (Cpk), long used by the semiconductor and board assembly sectors, will soon become critical for the PV industry too, as manufacturers strive to maximize end-of-line yields, reliability, and bottom-line performance. 
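Cpk itself is straightforward to compute. A minimal sketch, using hypothetical spec limits and sample data for a print-alignment offset (none of these numbers come from the column):

```python
import statistics

def cpk(samples, lsl, usl):
    """Process Capability Index: the smaller distance from the mean to a
    spec limit, measured in units of three standard deviations."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical alignment offsets (um) against a +/-10 um spec:
offsets = [-2.1, 0.4, 1.8, -0.9, 2.5, -1.2, 0.7, 1.1, -0.3, 0.9]
print(round(cpk(offsets, lsl=-10.0, usl=10.0), 2))  # 2.28: a capable process
```

A Cpk above 1.33 is a common shorthand for a capable process; values near 2 indicate the distribution sits comfortably inside the spec window.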

Darren Brown is business development manager, alternative energies at DEK International (dek.com); dwbrown@dek.com. This column runs periodically.

Smaller test pads are overwhelming the best probing techniques.

The trend of shrinking electronics, coupled with added complexity and capability packed into the board assembly, is a manifestation of Moore’s Law at a level beyond the silicon realm. We see this trend in our daily lives, and it has become more apparent in consumer products like cellphones, notebooks and gaming products, to name just a few.

The implications of this trend on the assembly level are profound if one looks at the test access on the assembled board and how that has changed over the years. Gone are the days when 0.100˝ test pads were in abundance. That evolved to 0.075˝ test pads, which then progressively shrank to 0.050˝, then 0.035˝ and so on. Fixture vendors now get customer requests to accommodate 0.018˝ test pads.

In the past, one could mitigate this trend through more precise probing. Escalating, prohibitive costs are making this increasingly difficult, however, even as the test pads themselves keep shrinking.

Despite this, the manufacturing industry still needs a way to electrically test assemblies amid an environment of reduced access. The good news: There are several ways to go about doing just that on in-circuit testers.

Categorically, the tools available can be placed into five groups:

1. IEEE 1149.x boundary scan tools. Boundary scan is becoming more important as test access diminishes. Complementing its gain in prominence is the fact that dedicating extra die real estate to enable boundary scan is less of a barrier now than it was five years ago. In fact, it may no longer be an issue at all, because this extra investment in silicon is infinitesimal compared to the silicon occupied by the core logic itself.

For those unfamiliar with boundary scan, it can be described simply as follows: a boundary-scan-compliant IC has at each of its pins a boundary scan cell, which, depending on its type, can drive signals, receive signals, or both, if it is a bidirectional cell. These boundary scan cells are controlled through four pins on the IC called the Test Access Port (TAP) and are governed by the IEEE 1149.x standard. In essence, test access to the four TAP pins enables I/O control of the rest of the pins on the IC, which could number in the hundreds or even thousands.

Three boundary scan tools come to mind. First, and chief among them, is that related to the IEEE 1149.1 standard, the most popular form of IEEE 1149.x. Having a “chain” of ICs conforming to this standard permits testing of the interconnect nodes between ICs. This is done by driving a predefined pattern of bits onto the interconnect nodes. A short or open on those nodes will alter this pattern, and as the pattern exits the chain, the defect can be diagnosed. As shown in Figure 1, no physical test access to these interconnect nodes is required.
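The pattern-shift idea can be illustrated with a toy model. Everything here is a simplifying assumption for illustration (the wired-AND short behavior, the open-reads-0 model, the three-net chain), not part of the standard:

```python
def capture(pattern, shorted=None, open_nets=frozenset()):
    """What the receiving boundary scan cells observe for a driven pattern.
    `shorted` is a pair of net indices tied together (wired-AND assumed);
    receivers on `open_nets` float and are modeled as reading 0."""
    seen = list(pattern)
    if shorted:
        a, b = shorted
        seen[a] = seen[b] = pattern[a] & pattern[b]
    for n in open_nets:
        seen[n] = 0
    return seen

# A "walking ones" sequence gives every net a unique signature:
patterns = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for p in patterns:
    if capture(p) != capture(p, shorted=(0, 1)):
        print("pattern", p, "exposes the short between nets 0 and 1")
```

Because each net carries a distinct bit sequence across the pattern set, any short or open changes what at least one receiving cell sees, and the mismatch pinpoints the fault.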


Boundary scan tools can use IEEE 1149.1 not only to test the interconnect nodes, but also between the interconnect nodes and other nodes that have test access, but are not part of the boundary scan chain. This is the advantage of boundary scan on ICT, which has access to a bed-of-nails fixture. However, IEEE 1149.1 is not equipped to handle AC-coupled differential signals. For this type of architecture, use IEEE 1149.6.

One derivative of boundary scan is its use to test non-boundary scan devices. Again, think of the boundary scan cells mentioned earlier, whose I/O capabilities can be controlled. We can use them to drive signals to, or receive signals from, a non-boundary scan IC to perform various digital tests.

2. Vectorless test. Vectorless test is a popular misnomer for a test methodology involving parasitic capacitance coupling. One such solution, known as VTEP, involves a capacitive coupling plate resting on the device under test (DUT) to pick up stimulus signals injected via test pads.

VTEP per se, however, is not a limited-access tool, as it still requires physical test access for the user to inject this stimulus signal. A novel variant of VTEP, on the other hand, is. DriveThru works the same way as VTEP, but the stimulus signal is injected one passive component away from the DUT (Figure 2). This permits the user to test not only the DUT but also the passive component, as it is now part of the signal propagation path. If there is an open in the path (e.g., a missing resistor), the test will fail. The advantage is that the user does not need to assign test pads to both sides of the passive component; just one will do.



3. Boundary scan/vectorless test hybrid. A recent addition to the market is a hybrid between boundary scan and vectorless test. As noted above, vectorless test requires a stimulus signal, typically injected into the DUT through a test pad. This hybrid technique eliminates the need for that test pad; instead, the stimulus signal comes from a boundary scan device.

4. Bead probes. Bead probes are basically small lumps of solder sitting on an exposed part of a signal trace or microvia on the assembly. In the case of a signal trace, they are only as wide as the trace itself. Compared to a traditional 0.035˝ test pad, which may be seven times wider than the trace it serves, bead probes are a boon to board designers: They do not disrupt the layout of signal traces and therefore simplify the design process. Studies also have shown that bead probes do not degrade signal integrity any more than a virgin trace does, even up to 20 GHz. Bead probes provide direct physical test access in places where it would normally be difficult, either due to physical space constraints or because of the high-speed signals the trace is intended to carry.

5. Analysis software. Strictly speaking, this category will not directly provide test coverage in a limited-access environment; rather, it serves as a productivity tool to increase the effectiveness of the limited-access solutions at our disposal. A case study performed on two different assemblies showed that the first, a 3312-node network switch product, had the opportunity to reduce its test pad population by 42.8%, while the analysis on the second, a high-volume consumer product, came up with a reduction of 166 test pads from a total of 479. A reduction in test pads naturally also translates into a reduction in the test probes needed on a fixture, with cost savings in tow.
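The second case study’s figures translate directly into a percentage comparable to the first. A quick check:

```python
# Quick check on the consumer-product case study quoted above.
pads_before = 479
pads_removed = 166
reduction_pct = pads_removed / pads_before * 100
print(round(reduction_pct, 1))  # 34.7% of test pads eliminated
```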

Andrew Tek is a product marketing engineer at Agilent Technologies (agilent.com); Andrew-ck_tek@agilent.com.

Special software permits factory floor adjustments, potentially saving capital investments.

What is virtual manufacturing? The simple answer is a PC-based manufacturing simulation. The more complete answer is that virtual manufacturing is the process of designing a model of a real system and conducting experiments with this model for the purpose of understanding system behavior. Processes must be completely understood before implementation in order to “get it right the first time.” To achieve this, use of a virtual environment is essential for simulating individual manufacturing processes and the total manufacturing system. By driving compatibility between product design and assembly plant processes, virtual tools enable early optimization of cost, quality and time to help achieve integrated products, process and resource design, and affordability.

Moreover, manufacturing systems may be widely distributed geographically and linked in terms of material, information and knowledge flows. Virtual manufacturing is the only method that can encompass product, process, resources and plant to provide flexible and agile production.

Benefits of virtual manufacturing include:
Visualize the material flow through the manufacturing system.
Identify the bottlenecks of the system.
Understand the equipment and manpower utilization.
Optimize the system by virtually adding resources (equipment/manpower) to observe performance responses.
Perform cost analysis per manufacturing process.
Predict and eliminate on-the-job injuries as well as ensure manufacturing feasibility, part by part.
Plan and validate assembly.
Simulate processes.
Virtual manufacturing has also been successfully implemented in the following areas:
Airport operations.
Urban traffic study and development.
Maintenance operations.
National economy study.
Military battle simulation.
Material and warehouse distribution systems.

Figure 1 shows the manufacture of a double-sided assembly, modeled using simulation software.

1. Kitting area. Work orders are created, and PWBs and components are pulled from the stock room and kitted. Components are replenished.

2. Automated area. PWBs are labeled and routed through an automated screen printer. Solder paste is inspected and SMT components placed on the PWB using a chipshooter or a pick-and-place machine. AOI checks the component presence, component value and orientation. Missing components are replaced manually. PWBs are then passed through a reflow oven where solder paste is melted and component attachment takes place. Residual solder flux is removed using cleaning equipment. If components must be attached on the bottom-side, the process is repeated. Assembled SMT components are then inspected under x-ray for solder reflow quality. If any defects (missing solder, voids, shorts) are observed, the PWBs are reworked.

3. Manual area. Non-SMT components and some heavy SMT parts that need special attention are hand-soldered in this area. The PWB panel is depanelized into individual PWBs using a depanelizer. After mounting on a special casing to protect the bottom-side components, connectors and filters are hand-soldered, and the PWB is cleaned. Flying probe and continuity tests are performed on individual PWBs. Failed components are reworked or replaced at this station.

4. Conformal coating. PWBs are conformal coated, cured in the oven, and inspected. Any coating defects found in this area are returned to the manual area for recoating. Good parts are packed and the job order closed out.

Assumptions Before Simulation

During the manufacturing simulation, educated assumptions were made regarding the resource, equipment layout, equipment availability and process flow. Some important assumptions included:

Double-sided, hybrid PWBs will flow through the kitting area, automated area, the first manual area, coating area, and the second manual area. Individual PWBs are on a 2 x 2’ panel.
PWBs are transferred between different areas in batches of 16 PWBs.
Six operators will be needed in these positions: kitting, placement, inspection, assembly, test and coating.
Assembly time used in the simulation is based on previous experience.

Equipment resource was assumed to be dedicated to PWB assembly with no conflicts in resources. It is assumed that machine utilization is 100% with no downtime. Resource utilization (manpower) is 70%.

Processes such as solder paste printability, component placement, solder reflow, and conformal coating are assumed to be 98% defect-free.

Simulation report and analysis. We ran 95 iterations to achieve a 95% confidence level in the simulation model. The time spent in each area and manpower utilization were calculated. The average time to assemble a batch of 16 PWBs was determined to be 26 hr. The most time is spent in the first manual area, where connector hand-soldering and PWB cleaning take place.
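The reported breakdown can be sketched as a back-of-the-envelope model. The per-area hours below are illustrative placeholders chosen to reproduce the ~26 hr batch time and ~50% manual-area share reported above; they are not ACI’s measured data:

```python
# Per-area hours are illustrative assumptions, not measured values.
stage_hours = {
    "kitting": 3.0,
    "automated": 5.0,
    "manual": 13.0,   # connector hand-soldering and cleaning dominate
    "coating": 5.0,
}

total = sum(stage_hours.values())
manual_share = stage_hours["manual"] / total
bottleneck = max(stage_hours, key=stage_hours.get)

print(total)         # 26.0 hours per 16-board batch
print(manual_share)  # 0.5 -> half the flow time sits in the manual area
print(bottleneck)    # manual
```

Even this crude model points to the same conclusion the full simulation reached: attack the manual area first.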

Based on statistical analysis, with 95% confidence we can state that:

The time spent in the manual area accounts for approximately 50% of the total manufacturing time.
The assembly operator is the most utilized resource at 34% utilization, more than twice any other resource.
The system is underutilized, with most of the resources utilized less than 20% of capacity.

It can be concluded that with the current input parameters, the system is underutilized. Steps taken included:

The biggest bottleneck was hand soldering. With operator cross-training and additional hand-soldering stations, this bottleneck was eliminated and manpower utilization improved to 50%.

Equipment layout was modeled using simulation software to optimize product flow with minimal handling, thus saving unnecessary installation and moving costs.

Operator movement around the machines and workbenches was modeled to provide ergonomically designed workcells.

Kanban storage for replenishing components and floor stock was strategically placed to optimize production flow.

The kitting operator was trained and certified to perform conformal coating. After optimizing the production line resource to five operators instead of six, the simulation was recalculated and showed an additional 10% improvement in manpower utilization. 

ACI Technologies Inc. (aciusa.org) is a scientific research corporation dedicated to the advancement of electronics manufacturing processes and materials for the Department of Defense and industry. This column appears monthly.

  

Chemistry cleaning trials should be conducted prior to equipment selection.

With the recent pressure to implement more cost-effective (i.e., lower cost) processes, a number of companies have purchased concentration monitoring or other equipment prior to selecting the actual cleaning agent. In the majority of cases, this approach has led to cleaning processes that did not provide satisfactory or expected results. Once implemented, real production conditions reveal process problems the customer did not foresee during the evaluation period. As a result, the engineer realizes they made a premature decision, and tries to rectify it. What complicates the situation (in most cases) is the fact that the budget has been spent based on information available at the time.

But what should an engineer do once they discover that the process foams, or that the monitoring equipment does not work with all fluxes? Further, what if the overall cost-effectiveness falls short of what was promised? In the short term, most will try to make the selected cleaning agent work, since it already has been qualified and supported. It’s difficult to go to a supervisor and ask for forgiveness once the equipment has been purchased.

We frequently witness this or similar dilemmas, often enough in fact to alert unsuspecting customers and help them avoid such mistakes.

A related scenario of what we have taken to calling premature equipment purchase relates to cleaning with DI water, which is reaching the limits of its cleaning ability. The majority of North American SIA cleaning processes still use DI water for defluxing organic acid residues. This worked well in the past; however, recent studies suggest that water alone cannot completely remove water-soluble Pb-free flux residues. In many cases, the equipment was not purchased with a chemical isolation section, meaning it relies on cascading DI water from back to front. Such equipment cannot be used with a chemical product unless the user is prepared to face the (enormous) chemistry bill from dragged-out product.

Learning from both lessons, we conclude the following: Talking to a chemical service provider will help avoid either scenario. We also recommend testing under production conditions (i.e., for 30 days) prior to making final decisions. While testing new inline equipment under production conditions typically is not feasible, it may be possible for peripheral capital equipment.

A checklist of key aspects to include during process selection is below. They include, but are not limited to:

Selecting the cleaning agent. Cleaning products vary greatly in performance and price. Some carry a low price tag, but might not be sufficient for the application.

Ensure a 100% match between the chemistry and the contaminants. If that match is absent, even the best mechanical assistance won’t help clean the residues.

Process parameters. During equipment selection, pay particular attention to the exposure time variable.

Equipment requirements. Cleaning equipment and peripheral tools do vary and need to be examined in detail to fit the requirements.

Process recommendations. Talk to peers who have implemented similar cleaning processes and have gathered similar data during their own process evaluations.

It is important to keep in mind that the balance between residues and their chemical counterpart is very complex. Traditional surfactants are known to be quickly exhausted and have a limited process window. Their main drawbacks become obvious when cleaning under production conditions. Products are available that can lift off contamination without a chemical reaction, and thus do not deplete easily. This provides a much larger cleaning process window and fewer process hiccups.

Over the years, many suppliers have invited customers to their facilities to provide an “entire” process solution. The offered support ranges from equipment selection to analytical help and other process support services. Clearly, this strategy not only helps the customer save on travel by minimizing vendor visits; it also allows vendors to minimize the customer’s exposure to competitive products.

Customers today are diligent in determining the most cost-effective solution, but at the same time are restricted by reduced travel budgets. That said, some vendors offer complete cleaning process qualification, including automated concentration monitoring and vapor recovery solutions, allowing engineers to study their applications in depth.

Harald Wack, Ph.D., is president of Zestron (zestron.com); h.wack@zestronusa.com.
