

Several key design rules are identified that require change from conventional design points.

Design for manufacturability (DfM) violations appear to be on the rise. Component placement densities are increasing, while printed circuit board sizes are held constant or shrinking. The result is increased component interaction during assembly and rework operations, which can affect first-pass yields, quality levels, and product reliability performance. The primary driving forces behind this trend are increased product functionality in smaller form factors, or added functional content within a common, constrained, pre-existing system form factor.

Adding Pb-free assembly process constraints to these design trends further increases risks for quality issues, lower first-pass yields, and potential reliability issues. The higher temperature processing requirements, smaller process windows and increased mechanical fragility sensitivities associated with Pb-free assembly must be managed with utmost scrutiny when designing and assembling high-density complex hardware configurations.

Given these considerations, the need and opportunity to improve DfM management processes are clear. To ensure the highest quality and reliability levels using new Pb-free-based materials, components, and processes, well-defined, collaborative, up-to-date, and efficient DfM practices are required.1

High-complexity server and storage hardware must be designed to maximize yields, quality and reliability performance. Because many server and storage class PCBs are high-dollar-value assets, they must be designed to enable rework operations, when needed. DfM reviews during early design stages help ensure product functionality requirements are attained, while ensuring the concept design can be properly manufactured and reworked.

With significant learning over the past six years, IBM has studied numerous DfM elements with the intent of determining what (if any) changes are required when migrating to Pb-free PCBs. Collective efforts have led to the identification of critical DfM elements that have indeed changed with the transition to Pb-free assembly, including:

  • Printed circuit boards. Of critical importance is adherence to qualified PCB laminate materials and PCB constructions designed to survive maximum allowable temperatures, coupled with the use of preferred design structures for both ground plane thermal shielding and PTH devices.
  • Temperature-sensitive components (TSC). To protect identified TSCs on a card assembly, special attention to TSC component selection, placement counts, and physical layout is required. As discussed in Pymento et al2 and Grosskopf et al3, TSC temperature and/or time violations can lead to reduced product field life. DfM reviews must therefore address these known TSC risk items.
  • Component keep-out spacings. Minimum keep-out distances around components are required for several reasons, including rework nozzle and tooling needs, heatsink clearances, and minimization of adjacent/mirrored partial reflow during rework operations. Keep-out spacings apply to all component types, including SMT, PTH and compliant pin (CP) package constructions.
  • New SMT pad geometries. SMT package styles continue to be introduced. DfM guidelines must therefore be optimized for new devices, including FC-QFNs, 0201 and 01005 passives, SMT DIMMs, and LGA hybrid sockets (to name a few).
  • PTH component design elements. Conversion to alternate Pb-free alloys, as published in Hamilton et al4 and Hamilton et al5, has led to many new DfM recommendations for PTH components, including finished barrel size, minimum remaining copper thickness, pin protrusion, and wave pallet design features.
  • General PCB design elements. In some cases, design features that help to generally improve the manufacturability of PCBs may be mistakenly excluded from high-complexity design points. Some examples include the use of global fiducials, component polarity markings, and card edge keep-outs. Although considered very basic requirements, these elements may not always be included in designs, creating unnecessary challenges and longer process setup times during new product introduction activities.

Current Practice

OEMs have three primary options to control and manage DfM activities:

OEM ownership of DfM rules. In general, original equipment manufacturers can choose to continue investing resources to maintain internal DfM rules and processes. While some firms continue working in this traditional model, others do not. Some firms have offloaded all DfM responsibility to their contract manufacturers. Others use a collaborative approach, with both firms working together to address DfM issues. It should be noted that if OEM firms decide to maintain ownership of DfM activities, sufficient resources and organizational structure must be allocated.

Industry standards. A second option is to use industry DfM standards. As documented by IPC6, there are five IPC documents relating to the design of PCB assemblies (Table 1). As shown in Table 1, four of the five have not been updated since the EU RoHS Directive took effect in July 2006. Therefore, the majority of the documents do not contain any Pb-free content, nor is their coverage of component and PCB technologies up to date.




The exception here is IPC-7351B. This standard has been revised and does contain a variety of Pb-free additions. This said, there are still many key omissions within the document, including:

  • SMT component design guide focus only.
  • No rules for component keep-outs / spacings.
  • No linkage to temperature-sensitive component risk points via J-STD-075.
  • No PTH component design guidance.
  • No compliant pin design guidance.
  • No SMT connector design guidance.
  • Lack of leading-edge component package styles.

Even with IPC-7351B now available, many contract manufacturers building server and storage hardware products opt for their own DfM rules. So although an industry standard exists in this case, it is not widely used.

Given that many of the industry DfM standards and guidelines are not being maintained, and considering the limitations of IPC-7351B and its low usage rates among contract manufacturers, relying on industry standards to manage DfM activities does not appear to be the optimal solution.

Leveraging contract manufacturing partner protocols. As a third option, OEM firms can leverage their contract manufacturers’ skills and resources. Over the past 10 years, the majority of OEMs across many hardware product segments have outsourced manufacturing operations. With this shift, EMS firms are now primarily responsible for DfM reviews for new product introductions. Essentially, DfM activities that were once the responsibility of the OEM are now those of the EMS. As a result, OEMs are dependent on the performance of contract manufacturer DfM tools, capabilities and protocols.

While this option does help alleviate OEM resource and bandwidth issues, it also increases DfM control risk when an OEM is working with multiple contract manufacturers.

Many contract manufacturers offer DfM services as a differentiator from their competitors. Therefore, DfM guides are usually considered confidential information and generally are not shared openly. For high-complexity hardware such as that found in server and storage systems, multiple card assemblies comprise the system architecture. Figure 1 shows a sample system comprising:

  • 15 PCB assemblies.
  • 3 contract manufacturing locations.
  • 3 different worldwide geographies.
  • 3 different DfM guidelines / protocols (confidential).

In this example, multiple DfM rules and protocols exist and must be confidentially managed by the OEM. Although the final system sold to clients carries an OEM logo, in this case it is built using three different DfM protocols. It is therefore critical that the OEM manage these varied DfM processes with its contract manufacturers well, to protect the quality and reliability levels clients expect.

DfM Parameters for Optimal Pb-free Performance

To date, 21 specific DfM elements have been identified that are considered significant when converting to Pb-free-based assembly processes and design points. While optimal DfM parameters cannot be shared, Table 2 serves as a roster of focus items for the industry. To illustrate the importance of these design elements, the sections below discuss a primary DfM concern – component spacings (keep-outs) – along with a sample application to illustrate associated risks.




As shown in Figure 2, the keep-out area is defined as the additional area around any component in which no other component may be placed. A specified x/y linear distance is added to the component body dimensions to calculate the resultant keep-out area (a simple calculation sketch follows the lists below). Although a typical BGA keep-out example is shown, it is important to note that such keep-outs are required for all component types, including SMT, PTH, and compliant pin technologies. Keep-outs help manage assembly thermal and mechanical exposures affecting final product reliability, including:

SMT components:

  • Local TSC protection during rework.
  • Adjacent partial reflow protection (rework).
  • Mirrored BGA rework challenges.

PTH components:

  • Wave pallet chamfer keep-out designs.
  • SMT and TSC component body solder exposures.
  • Dendritic growth risks with unactivated fluxes (vias).

CP components:

  • Nearby component mechanical force protection during CP assembly and rework operations.
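
To make the keep-out geometry concrete, the following is a minimal sketch of the calculation described above. The dimensions, clearance values, and function name are purely illustrative assumptions; actual keep-out distances depend on package type, rework tooling, and each manufacturer's DfM rules.

```python
# Minimal sketch of a rectangular keep-out calculation (illustrative values only).
# A specified x/y clearance is added around the component body outline.

def keepout_outline(body_x_mm: float, body_y_mm: float,
                    clearance_x_mm: float, clearance_y_mm: float):
    """Return the (x, y) size of the keep-out area around a component body."""
    keepout_x = body_x_mm + 2 * clearance_x_mm  # clearance applied on both sides
    keepout_y = body_y_mm + 2 * clearance_y_mm
    return keepout_x, keepout_y

if __name__ == "__main__":
    # Example: a 35 x 35 mm BGA body with an assumed 3 mm clearance on all sides
    kx, ky = keepout_outline(35.0, 35.0, 3.0, 3.0)
    print(f"Keep-out envelope: {kx:.1f} x {ky:.1f} mm "
          f"({kx * ky:.0f} mm^2 vs. {35.0 * 35.0:.0f} mm^2 body)")
```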

Of equal importance, keep-out dimensions are required for a variety of assembly tools, including, but not restricted to, component placement equipment, heat sink attachment process tools, hot gas rework nozzles, manual hand iron access, and ICT bed-of-nails fixtures.

As stated, with the higher alloy melting and processing temperatures, smaller acceptable process windows, and increased mechanical fragility issues associated with Pb-free assembly, component keep-out areas have become more important. This is especially true during Pb-free rework. While improperly designed keep-out areas may not significantly impact primary attach operations, they can create significant challenges for reliable rework.

There are multiple reasons why products must be reworkable:

  • Maximize first-pass yields and reduce scrap.
  • Obsolete components.
  • Supplier component recalls.
  • Approved vendor listing changes.
  • Product bill of material changes.
  • Field failing components.

If rework is required (for whatever reason), operational efficiency, workmanship levels, quality levels, and reliability performance must be maintained.
If the component keep-out areas needed for rework are not well defined or implemented during design-phase activities, then several consequences during new product development and/or volume production are likely:

  • Inability to rework hardware quickly/efficiently.
  • Increased reliability risks with reworked hardware.
  • Added process development time required for complicated/non-ideal rework processes.

Figures 3, 4 and 5 illustrate keep-out area DfM violations for several component types, all resulting from increased component placement densities.
With the extremely tight spacing shown in Figure 3, the BGA device could not be reworked using a standard hot gas nozzle without special measures to protect the nearby DIMM connector. The solution in this case was the development of a specialty rework process that required nearly three months of development time.



Hand soldering iron rework access is illustrated in Figure 4. Passive or small body lead-frame components placed within connector arrays or too near large SMT connectors pose significant rework access issues and generally lead to lower workmanship and quality levels.

The last example (Figure 5) highlights the need for keep-out areas near PTH components. Because the temperature-sensitive component identified in red sits within the designed PTH wave pallet chamfer opening, the component was originally exposed to molten solder during wave soldering operations. TSC component bodies are generally not rated for Pb-free wave solder pot set point temperatures. Moving the component outside the chamfer area was the solution in this case, requiring an additional circuit card design revision.

SMT component hot gas rework. If sufficient component keep-out areas are not integrated into designs, the ability of rework tooling and/or operators to access target rework components can be jeopardized. If the hardware design cannot be modified, complicated, multiple-component rework processes are often required. If specialty rework processes are implemented, many technical risk elements must be carefully examined, including:

  • Localized PCB damage, including delamination, solder mask/copper peeling, copper dissolution.
  • Nearby temperature-sensitive component exposures.
  • Excessive flux usage potentially leading to corrosion or dendritic growth.
  • Partial solder joint reflow of same side adjacent and/or mirrored devices.

Figure 6 illustrates localized hot gas rework nozzle heat flows and highlights the key risk elements listed above. Figure 7 shows the same heat flow from a top-side view. The closer neighboring components are to the target rework site, the greater the risk that additional defects will be introduced.



The BGA location shown in Figure 8 required hot gas rework. Due to increased system functional requirements, the assembly in this example included extremely tight component spacings. To access the target BGA device, seven nearby components first needed to be removed. As highlighted in the figure, the purple outline represents hot gas nozzle impingement; all components within this perimeter would first need to be removed. Once nearby components were removed, the target BGA device was removed, a new one added, then finally, all new perimeter components were added back, constituting a single rework cycle for the target BGA device.



Reduced throughput, added reliability risks, additional component costs, and additional engineering qualification activity resulted in this case due to the extremely tight component spacings defined in the original design point.

If nearby components were outside a well-defined keep-out area, then BGA rework operations for this case would be much lower risk and considered a business-as-usual process. Since keep-out dimensions were instead very tight, an expensive, slower throughput rework operation was required – adding quality, workmanship, and reliability risks to the process.

DfM priorities. Table 2 lists a significant number of DfM elements requiring action and optimization when migrating hardware to Pb-free constructions. As can be observed from the sample application, the complete DfM list encompasses a wide variety of technology elements, specification requirements, and design considerations. Extending these risk discussions to the other twenty items quickly demonstrates the need for optimized, collaborative DfM business processes between OEMs and contract manufacturers.

Traditionally, design groups tend to integrate design requirements with the following priority:

  1. Signal integrity.
  2. Power requirements.
  3. Cooling.
  4. Electronic card assembly/test manufacturability.

With the transition to Pb-free assembly, it is important to raise the priority given to card manufacturability.

System-level management. Given the diversity of DfM control options, when designing, procuring, and fulfilling complex PCBs for subsequent high-reliability use in system applications, it is extremely important for OEMs to create a consistent DfM management review and communication process that can be used both early and often within the overall design, development, and hardware delivery cycles. This recurring review process is preferably run in three phases, with a closed, interactive feedback loop between the OEM development organization, the OEM procurement organization, and the array of potential contract assembly organizations under consideration. The three review phases preferably consist of the following general elements:

An initial DfM review occurs immediately after general board stack-up, PCB attribute selection, and general component floor planning activities have been defined by the OEM design and development teams. Second and third DfM reviews then occur during the assembler selection cycle and during the overall early hardware release-to-build cycle(s). Of critical importance within the early design cycle review is for the PCB designer to be aware of key high-level design complexity attributes that may limit an assembler’s inherent ability to manufacture a given design, and to convey these critical attributes to the assembly development and procurement organizations. By proceeding in this fashion, special process considerations, limitations, assembly risks, and special equipment needs can be clearly defined well in advance of actual assembly supplier selection.

In general, the scope of considerations includes identification of critical raw PCB attributes that may impact assembly reflow processes, and the general identification of critical components that may limit or restrict the scope of Pb-free SMT assembly operations that are viable for the supplier to implement, while ensuring assembly success and overall assembly reliability.

Using this process flow, subsequent DfM reviews that occur at and beyond supplier selection can then incorporate additional component-specific temperature sensitivity information and further detail on the critical component assembly layout attributes described above, while taking into account the assembly limitations identified in the initial, early design stage DfM evaluations.

Conclusions

As designs become denser, it is important for OEMs to conduct DfM reviews as early as possible in the design cycle to ensure that resultant server and storage class products can be built at the highest quality and reliability levels. Working collaboratively with contract manufacturers helps to balance workloads and ensure adequate reviews have occurred, protecting key technology elements within each card assembly.

Keeping up-to-date with DfM elements affecting Pb-free quality and reliability performance is critical. Simply relying on industry standards is not considered an optimal solution.

Well-defined business processes must be established to efficiently and confidentially work with EMS partners. Since many EMS firms use company-specific DfM tools and protocols, OEMs must strive to minimize DfM variations at a system level.

Finally, additional focus and management of DfM issues will help to ensure the continued safe transition of server and storage class products to Pb-free constructions.

Recommendations

Results from this ongoing work provide insight and lead to the following recommendations:

  • A collaborative DfM process model appears to be the best solution to ensure OEM product manufacturability, maximize yields and throughput, reduce scrap, assure product reliability, and make full use of contract manufacturers’ expertise and feedback.
  • Ensure design and new product engineering teams have strong communications.
  • When OEMs work with multiple contract manufacturers, it is important to minimize DfM protocol variation and implement a common approach and business processes.
  • Holding DfM reviews early in the new product introduction design schedule helps to identify key DfM issues as soon as possible, increasing the likelihood that required design changes are actually implemented; if left too late, some DfM issues can no longer be corrected.
  • Manage DfM consistently and efficiently: use common DfM analysis tools, early concept design reviews and modifications, and engineering specifications and guideline documents.
  • Future work in this area should continue to test quality and reliability impacts when using tighter component spacings.

References

1. Matt Kelly, et al, “Lead-Free Supply Chain Management Systems: Electronic Card Assembly & Test Audit and Technology Qualification,” ICSR Conference Proceedings, May 2011.
2. L. G. Pymento, et al, “Process Development with Temperature Sensitive Components in Server Applications,” IPC Apex Conference Proceedings, April 2008.
3. Curtis Grosskopf, et al, “Component Sensitivity to Post Solder Attach Processes,” ICSR Conference Proceedings, May 2011.
4. Craig Hamilton, et al, “High Complexity Lead-Free Wave and Rework: The Effects of Material, Process and Board Design on Barrel Fill,” SMTA International Proceedings, October 2010.
5. Craig Hamilton, et al, “Does Copper Dissolution Impact Through-Hole Solder Joint Reliability?” SMTA International Proceedings, October 2009.
6. IPC Specification Tree, ipc.org/4.0_Knowledge/4.1_Standards/SpecTree.pdf, October 2011.

Ed.: This article was first presented at the SMTA International Conference on Soldering and Reliability in May 2012 and is published with permission of the authors.

Matt Kelly, P.Eng., MBA, and Mark Hoffmeyer, Ph.D., are senior technical staff members at IBM (ibm.com); mattk@ca.ibm.com or hoffmeyr@us.ibm.com.



Aperture shape and size have measurable impacts on material volume.

While the surface-mount printing process is well defined and established, ongoing demand for expanded product capability in a shrinking device footprint continues to challenge conventional rules. Ensuring robust print deposits at ultra-fine pitch dimensions, printing in tighter side-by-side configurations for high-density products, and accommodating high-mix assemblies that require both small and large deposits are all challenges that must be overcome as the industry migrates toward more advanced products as standard. What’s more, achieving these priorities must be done cost-competitively and at high yield.

Stencil printing capability is dictated by the area ratio rule and, in my view, we are sitting on the proverbial edge of the cliff in terms of the limits of the printing process. To accommodate future technologies, current printing rules will have to be broken.

The area ratio is the central element of a print process that dictates what can and cannot be achieved. Historically, the area ratio has hovered at 0.66, and over time, with better stencil technologies, solder paste formulation advances and improved printing capability, the stretch goal area ratio sits at about 0.5 (a 200µm aperture on a 100µm thick foil). What is critical at these finer dimensions and smaller ratios is a tight tolerance – in other words, the deviation in aperture size that is acceptable. Take our example of a 0.5 area ratio: If the tolerance is +/-10% (the generally accepted standard, incidentally) and aperture size is at the edge of that tolerance with a 190µm aperture on a 100µm thick foil, all of a sudden the true area ratio is now 0.475, which is beyond the edge of the cliff for most processes. Results from testing at our company revealed that a 10% deviation on a relatively large 550µm circular aperture nets 4% less material volume than if the aperture were cut to size. This same scenario on a 150µm or 175µm aperture can result in as much as 15% material volume reduction. It’s a double whammy; not only is it harder to print, but if you can print, the material volume will be less.
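
As a quick check on those numbers, here is a minimal sketch of the underlying arithmetic. It uses the common area ratio definition (aperture opening area divided by aperture wall area); the aperture sizes simply mirror the examples above.

```python
import math

# Area ratio = aperture opening area / aperture wall area.
# For a circle of diameter d on foil thickness t this simplifies to d / (4 * t);
# for a square of side s it simplifies to s / (4 * t).

def area_ratio_circle(diameter_um: float, foil_um: float) -> float:
    opening = math.pi * diameter_um ** 2 / 4.0
    wall = math.pi * diameter_um * foil_um
    return opening / wall

def area_ratio_square(side_um: float, foil_um: float) -> float:
    return (side_um ** 2) / (4.0 * side_um * foil_um)

foil = 100.0  # 100 um thick foil, as in the example above
print(f"200 um circle: {area_ratio_circle(200.0, foil):.3f}")  # 0.500 nominal
print(f"190 um circle: {area_ratio_circle(190.0, foil):.3f}")  # 0.475 at the undersized edge of tolerance
print(f"200 um square: {area_ratio_square(200.0, foil):.3f}")  # 0.500, but with a larger opening area than the circle
```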

Given this, how does the industry move forward? We’ve previously discussed active squeegee technology in this space and its viability for breaking past existing area ratio rule limits and enabling robust transfer efficiency for miniaturized devices. In addition, new work undertaken by our company has shed light on the increasing importance of aperture shape in relation to improved transfer efficiency. Not only do the aforementioned 10% aperture size deviations factor greatly into transfer efficiency capability; so does aperture shape. The testing revealed that square apertures significantly outperform circular apertures, with the greatest impact realized at smaller aperture dimensions. To fully understand the effect of aperture shape on volume, however, standard deviation must be analyzed. Aperture transfer efficiency volume numbers in a process are just numbers, say 75% as an average, and establish nothing in terms of maximum and minimum boundaries. But if the transfer efficiency is 75% with a standard deviation of 10%, then there is a tight band of data behind the process. If, on the other hand, there is 75% transfer efficiency with a standard deviation of 40%, that’s probably not a process I’m going to be shouting about.
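
To illustrate why the spread matters as much as the average, the short sketch below uses two fabricated sets of deposit volume measurements that share roughly the same 75% mean transfer efficiency but have very different standard deviations. The data and volume scale are invented purely for illustration.

```python
from statistics import mean, stdev

# Transfer efficiency (TE) = measured deposit volume / theoretical aperture volume.
# Both fabricated data sets average roughly 75% TE, but their spreads differ widely.

theoretical_volume = 1000.0  # arbitrary units

tight_process = [760, 740, 755, 745, 750, 752, 748]    # consistent deposits
loose_process = [400, 1100, 500, 1000, 750, 900, 600]  # erratic deposits

for name, volumes in (("tight", tight_process), ("loose", loose_process)):
    te = [v / theoretical_volume * 100 for v in volumes]
    print(f"{name}: mean TE = {mean(te):.0f}%, std dev = {stdev(te):.0f}%")
# Only the tight process, with its small standard deviation, is worth shouting about.
```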

With this as the basis, we analyzed the standard deviation in relation to aperture shape and size, as well as the impact of active squeegee technology on transfer efficiency. On the larger aperture sizes (>250µm), standard deviation was approximately 5% on both the circular and square-shaped apertures. But when the area ratio moved below 0.5, significant differences were noticed. The smallest area ratio that could be printed with a standard squeegee and a round aperture was 0.5. With a square aperture and a standard squeegee, the achievable area ratio was 0.47. When active squeegee technology was introduced, good transfer efficiency was realized on circular apertures at an area ratio of 0.45, but on the square apertures, it was a remarkable 0.34. To put it in percentages, 1.2% transfer efficiency was the result on a round 0.34 area ratio aperture with a standard squeegee, while this same aperture had a transfer efficiency of 50% with an active squeegee process implemented. On the square apertures, 7.3% transfer efficiency occurred at a 0.34 area ratio with a standard squeegee and jumped to 60% when active squeegee technology was used. What’s more, this was with a standard deviation of less than 10%, indicating process stability.

As I’ve said before, moving toward more highly miniaturized assemblies is upending many factors of the traditional print process. Ensuring a successful outcome with high yields means taking a holistic approach, incorporating all technologies and best practices available. Aperture shape is but one more piece in this increasingly complex puzzle.

Ed.: The author will present more detail on the findings shared in this column during IPC Apex in San Diego, CA. The session, titled “Printing II,” will take place Feb. 21.

Clive Ashmore is global applied process engineering manager at DEK International (dek.com); cashmore@dek.com. His column appears bimonthly.

Hints for offsetting QFN void formation.

QFNs: Designers love them. Assemblers don’t necessarily share that affection. These very popular devices, first on the electronics scene in the late 90s/early 2000s, have become one of the more popular packages among handheld manufacturers. From a design and manufacturing point of view, the QFN is an excellent package. QFNs are flat, plastic packages with perimeter leads underneath the device (as opposed to leads that extend outside the package) with a large pad in the center. They are low-cost in comparison to other components, allow a high number of I/Os and are an excellent alternative to traditional QFPs, TSOPs and the like.

The QFN’s ability to pack major functionality into a relatively thin package, along with excellent electrical and thermal performance, has driven its popularity. And, since their introduction over a decade ago, these devices have expanded their functional capability. Advancements in QFN design have given rise to dual- and even triple-row perimeter I/O, whereas the first designs had only a single perimeter row of I/O. In addition, QFNs now incorporate other packaging techniques, such as internal die stacking, to increase function.

So, while designers adore the QFN for all of its design latitude, assembly specialists – and to some degree packaging specialists – have many QFN challenges to overcome. At the package level, manufacturing hurdles include issues with wire bonding on polyimide and the effect of the die-to-pad ratio on JEDEC performance. New conductive die attach films, however, are helping overcome the die-to-pad ratio challenge and are also enabling more die per package, as these materials essentially eliminate the fillet associated with paste-based mediums. At the board level, ensuring long-term reliability is the central issue. In truth, what makes the QFN so appealing for component and handheld product designers is at the root of one of its greatest assembly challenges: solder voiding.

The architecture of the QFN makes it inherently more susceptible to void formation. The reduced standoff (which makes these components appealing to designers), combined with the planar surfaces, easily traps escaping solvent vapors and activator residues. While the increased cross-section of the interconnect joints compared with a CSP means that voiding there is typically not a reliability issue, the same cannot be said of the central ground plane, which is often used as a heat sink. The large central solder paste deposit provides only very limited escape routes for the volatile materials generated during the reflow process, resulting in an increased level of voiding.

The parameters that contribute to QFN voiding are many and varied, including the size and design of the device itself, reflow profiles, solder paste capability and multiple other process conditions. And, just as there are multiple factors influencing the formation of QFN voids, there are several approaches to reducing them. One of the more well-established techniques for minimizing voiding is depositing the paste for the center pad in a specific pattern, depending on the size of the QFN. Arguably the most popular of these is the windowpane pattern, in which the pad is divided into between four and 16 equally sized smaller deposits (Figure 1). Although these patterns are widely used, little work has been done on the efficacy of any one design. The ratio of total pad area to deposit area becomes more important as the number of panes increases, and it is crucial not to starve the pad of solder, as this can cause an increase in voiding. This approach can reduce voiding from around 35% for a single large deposit down to below 20%.
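
As a rough illustration of the pad coverage trade-off described above, the following sketch divides a square center pad into an n x n windowpane and computes the fraction of the pad that remains covered by paste. The pad size and web width are hypothetical values, not recommended design rules.

```python
# Rough sketch of windowpane coverage: dividing a square center pad into an
# n x n grid of smaller deposits separated by webs (gaps). Pad size and web
# width are hypothetical; real designs must balance coverage against voiding.

def windowpane_coverage(pad_mm: float, panes_per_side: int, web_mm: float) -> float:
    """Fraction of the pad area covered by paste deposits."""
    n = panes_per_side
    pane_size = (pad_mm - (n - 1) * web_mm) / n   # size of each square deposit
    if pane_size <= 0:
        raise ValueError("web too wide for this pane count")
    return (n * n * pane_size ** 2) / (pad_mm ** 2)

pad = 7.0   # 7 mm center pad (illustrative)
web = 0.3   # 0.3 mm web between deposits (illustrative)
for n in (1, 2, 3, 4):
    print(f"{n}x{n} windowpane: {windowpane_coverage(pad, n, web):.0%} pad coverage")
# Coverage drops as the pane count rises, which is why starving the pad of
# solder becomes a concern with higher pane counts.
```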

The paste pattern is important, but even more critical is a complete understanding of the characteristics and capability of the flux medium formulation within the solder paste and, to some degree, the alloy construction. Partnering with a materials supplier that has expertise in flux development and can test flux performance in-process prior to material commercialization ensures success. Over the past few years, major advances in flux formulation technology have been achieved and this evolution in flux design is benefitting QFNs and other large planar pad devices in terms of overall performance – including the reduction of voids.

The simple fact is that a complete understanding of the role and interaction of the various flux components, and their ultimate impact on the performance of the final paste, is essential to optimize product performance. A supplier’s formulation expertise and application knowledge regarding the acids, bases, rosins, activators, alloys and other flux components enables the development of high-performance fluxes and solder pastes. In fact, newer generation solder pastes have shown as much as a 10% to 15% reduction in baseline QFN voiding compared with older generation pastes, due in large part to the evolution of flux design techniques. That said, each process, each environment and each assembly is unique and therefore requires a customized approach to deliver high yields and high reliability. It’s not only flux formulations that impact voiding; other paste characteristics, such as rheology during reflow, also help voids escape more easily.

There are multiple factors that affect QFN void formation, including the flux, alloy, reflow profile, component type and board metallization, to name a few. Without question, flux is a huge variable when it comes to voids, but it has to be balanced with all of the other factors as well. My advice? Start with robust flux formulations, understand each customer’s unique process conditions, delve into the specific application and then develop a solution that achieves the desired QFN result: fewer voids and higher reliability.

Acknowledgements

The author would like to thank technical service supervisor Jonathan Jiang of Henkel for his valuable input.

Jie Bai is a chemist at Henkel Electronics Group (henkel.com); jie.bai@us.henkel.com.

