
Physics of failure is a technique under the practice of reliability design that leverages knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance. Other definitions of physics of failure include:
* A science-based approach to reliability that uses modeling and simulation to design-in reliability. It helps to understand system performance and reduce decision risk both during design and after the equipment is fielded. This approach models the root causes of failure, such as fatigue, fracture, wear, and corrosion.
* An approach to the design and development of reliable products that prevents failure, based on knowledge of root-cause failure mechanisms. The physics-of-failure (PoF) concept rests on understanding the relationships between the requirements and the physical characteristics of the product (and their variation in the manufacturing processes), and on the reaction of product elements and materials to loads (stressors), their interaction under load, and the resulting influence on fitness for use with respect to the use conditions and time.


Overview

The concept of Physics of Failure, also known as Reliability Physics, involves the use of degradation algorithms that describe how physical, chemical, mechanical, thermal, or electrical mechanisms evolve over time and eventually induce failure. While the concept of Physics of Failure is common in many structural fields, the specific branding evolved from an attempt to better predict the reliability of early generation electronic parts and systems.
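As a minimal sketch of what such a degradation algorithm looks like in practice, the snippet below applies linear (Miner's rule) damage accumulation over a mission profile, using a hypothetical Coffin–Manson-style cycles-to-failure model. All constants and the mission profile are illustrative assumptions, not values from this article.

```python
# Illustrative sketch: linear (Miner's rule) damage accumulation, a common
# way to turn a physics-of-failure degradation model into a time-to-failure
# estimate.  All numbers are hypothetical.

def cycles_to_failure(delta_t_celsius):
    """Hypothetical Coffin-Manson-style model: N_f = C * (dT)^-m."""
    C, m = 1.0e7, 2.0  # illustrative fitting constants
    return C * delta_t_celsius ** -m

def accumulated_damage(mission_profile):
    """mission_profile: list of (delta_T, cycles_per_year) tuples.
    Returns fractional damage per year; failure is expected when the
    accumulated damage reaches 1."""
    return sum(n / cycles_to_failure(dT) for dT, n in mission_profile)

# Daily small temperature swings plus monthly large swings (assumed profile)
profile = [(20.0, 365), (60.0, 12)]
damage_per_year = accumulated_damage(profile)
years_to_failure = 1.0 / damage_per_year
```

The same accumulation structure applies whichever mechanism model supplies `cycles_to_failure`.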


The beginning

Within the electronics industry, the major driver for the implementation of physics of failure was the poor performance of military weapon systems during World War II. During the subsequent decade, the United States Department of Defense funded extensive efforts to improve the reliability of electronics, with the initial work focused on after-the-fact, statistical methodology. Unfortunately, the rapid evolution of electronics, with new designs, new materials, and new manufacturing processes, tended to quickly negate approaches and predictions derived from older technology. In addition, the statistical approach tended to lead to expensive and time-consuming testing. The need for different approaches led to the birth of physics of failure at the Rome Air Development Center (RADC). Under the auspices of the RADC, the first Physics of Failure in Electronics Symposium was held in September 1962. The goal of the program was to relate the fundamental physical and chemical behavior of materials to reliability parameters.


Early history – integrated circuits

The initial focus of physics-of-failure techniques tended to be limited to degradation mechanisms in integrated circuits. This was primarily because the rapid evolution of the technology created a need to capture and predict performance several generations ahead of existing product.

One of the first major successes under predictive physics of failure was a formula developed by James Black of Motorola to describe the behavior of electromigration. Electromigration occurs when collisions of electrons cause metal atoms in a conductor to dislodge and move downstream of the current flow (proportional to current density). Black used this knowledge, in combination with experimental findings, to describe the mean time to failure due to electromigration as

: \text{MTTF} = A J^{-n} e^{\frac{E_\text{a}}{kT}}

where ''A'' is a constant based on the cross-sectional area of the interconnect, ''J'' is the current density, ''E''a is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), ''k'' is the Boltzmann constant, ''T'' is the temperature, and ''n'' is a scaling factor (usually set to 2 according to Black).

Physics of failure is typically designed to predict wearout, or an increasing failure rate, but this initial success by Black focused on predicting behavior during operational life, or a constant failure rate. This is because electromigration in traces can be designed out by following design rules, while electromigration at vias is primarily an interfacial effect, which tends to be defect- or process-driven.

Leveraging this success, additional physics-of-failure based algorithms have been derived for the three other major degradation mechanisms in modern integrated circuits: time-dependent dielectric breakdown (TDDB), hot carrier injection (HCI), and negative bias temperature instability (NBTI); the equations are shown below. More recent work has attempted to aggregate these discrete algorithms into a system-level prediction.

TDDB:
: \tau = \tau_0(T) \exp\left[\frac{G(T)}{\varepsilon_\text{ox}}\right]
where ''τ''0(''T'') = exp(−''E''a/''kT''), ''G''(''T'') = 120 + 5.8/''kT'', and ''ε''ox is the permittivity.

HCI:
: \lambda_\text{HCI} = A_3 \exp\left(-\frac{\beta}{V_\text{D}}\right) \exp\left(-\frac{E_\text{a}}{kT}\right)
where ''λ''HCI is the failure rate of HCI, ''A''3 and ''β'' are empirical fitting parameters, ''V''D is the drain voltage, ''E''a is the activation energy of HCI (typically −0.2 to −0.1 eV), ''k'' is the Boltzmann constant, and ''T'' is the absolute temperature.

NBTI:
: \lambda = A \, \varepsilon_\text{ox}^{m} V_\text{T} \mu_\text{p} \exp\left(-\frac{E_\text{a}}{kT}\right)
where ''A'' is determined empirically by normalizing the above equation, ''m'' = 2.9, ''V''T is the thermal voltage, ''μ''p is the surface mobility constant, ''E''a is the activation energy of NBTI, ''k'' is the Boltzmann constant, and ''T'' is the absolute temperature.
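Equations of this kind are straightforward to evaluate numerically. The sketch below applies Black's equation; the constant ''A'', the current densities, and the temperatures are illustrative assumptions (only ''E''a = 0.7 eV and ''n'' = 2 come from the text), and the expected scaling follows directly: halving the current density at ''n'' = 2 quadruples the MTTF.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(a_const, current_density, temperature_k, e_a=0.7, n=2.0):
    """Black's electromigration equation: MTTF = A * J^-n * exp(Ea / kT).
    e_a = 0.7 eV is the grain-boundary diffusion value for aluminum cited
    in the text; a_const, current_density, and temperature_k are
    illustrative inputs, not vendor data."""
    return a_const * current_density ** -n * math.exp(e_a / (K_BOLTZMANN_EV * temperature_k))

mttf_hot = black_mttf(a_const=1.0e5, current_density=1.0e6, temperature_k=398.0)
mttf_cool = black_mttf(a_const=1.0e5, current_density=1.0e6, temperature_k=358.0)
mttf_half_j = black_mttf(a_const=1.0e5, current_density=0.5e6, temperature_k=398.0)
```

Lower temperature and lower current density both extend the predicted life, which is why derating is a standard electromigration countermeasure.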


Next stage – electronic packaging

The resources and successes with integrated circuits, and a review of some of the drivers of field failures, subsequently motivated the reliability physics community to initiate physics-of-failure investigations into package-level degradation mechanisms. An extensive amount of work was performed to develop algorithms that could accurately predict the reliability of interconnects. Specific interconnects of interest resided at the 1st level (wire bonds, solder bumps, die attach), 2nd level (solder joints), and 3rd level (plated through holes). Just as the integrated circuit community had four major successes with physics of failure at the die level, the component packaging community had four major successes arise from its work in the 1970s and 1980s. These were:

Peck: Predicts time to failure of wire bond / bond pad connections when exposed to elevated temperature / humidity
: \text{TTF} = A_0 (RH)^{-n} f(V) \exp\left(\frac{E_\text{a}}{k_\text{B} T}\right)
where ''A''0 is a constant, ''RH'' is the relative humidity, ''n'' is an empirical exponent, ''f''(''V'') is a voltage function (often cited as voltage squared), ''E''a is the activation energy, ''k''B is the Boltzmann constant, and ''T'' is the absolute temperature.

Engelmaier: Predicts time to failure of solder joints exposed to temperature cycling
: N_\text{f}(50\%) = \frac{1}{2}\left[\frac{\Delta D}{2\varepsilon_\text{f}}\right]^{1/c}, \quad \Delta D = \frac{F \, L_\text{D} \, \Delta\alpha \, \Delta T}{h}
where ''ε''f is a fatigue ductility coefficient, ''c'' is a time- and temperature-dependent constant, ''F'' is an empirical constant, ''L''D is the distance from the neutral point, ''α'' is the coefficient of thermal expansion, Δ''T'' is the change in temperature, and ''h'' is the solder joint thickness.

Steinberg: Predicts time to failure of solder joints exposed to vibration
: Z_0 = \frac{9.8\sqrt{\frac{\pi}{2} \, \text{PSD} \cdot f_n \cdot Q}}{f_n^{2}}, \quad Z_\text{c} = \frac{0.00022 \, B}{c \, h \, r \sqrt{L}}
where ''Z''0 is the maximum displacement, PSD is the power spectral density (''g''²/Hz) at the natural frequency, ''f''''n'' is the natural frequency of the CCA, ''Q'' is the transmissibility (assumed to be the square root of the natural frequency), ''Z''c is the critical displacement (20 million cycles to failure), ''B'' is the length of the PCB edge parallel to the component located at the center of the board, ''c'' is a component packaging constant, ''h'' is the PCB thickness, ''r'' is a relative position factor, and ''L'' is the component length.

IPC-TR-579: Predicts time to failure of plated through holes exposed to temperature cycling
: \sigma = \frac{(\alpha_\text{E} - \alpha_\text{Cu}) \, \Delta T \, A_\text{E} E_\text{E} E_\text{Cu}}{A_\text{E} E_\text{E} + A_\text{Cu} E_\text{Cu}}, \quad \text{for } \sigma \le S_\text{Y}
: N_\text{f}^{-0.6} D_\text{f}^{0.75} + 0.9 \frac{S_\text{u}}{E_\text{Cu}} \left[\frac{\exp(D_\text{f})}{0.36}\right]^{0.1785 \log\frac{10^5}{N_\text{f}}} - \Delta\epsilon = 0
where ''α'' is the coefficient of thermal expansion (CTE), Δ''T'' is the change in temperature, ''E'' is the elastic modulus, ''A'' is the effective cross-sectional area (computed from the board thickness ''h'', hole diameter ''d'', and plating thickness ''t''), the subscripts E and Cu label the corresponding board and copper properties, respectively, ''S''Y is the yield strength and ''S''u the ultimate tensile strength of the plated copper, ''D''f is its ductility, and Δ''ε'' is the strain range.
Each of the equations above combines knowledge of the degradation mechanism with test experience into a first-order equation that allows the design or reliability engineer to predict time-to-failure behavior from information on the design architecture, materials, and environment.
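For example, Peck's model is commonly used to compute the acceleration factor between a temperature/humidity stress test and field conditions. In the sketch below, the exponent ''n'' ≈ 2.66 and ''E''a ≈ 0.79 eV are typical literature assumptions rather than values stated in this article, and the voltage term ''f''(''V'') cancels because the voltage is assumed unchanged between test and field.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def peck_acceleration_factor(rh_test, t_test_k, rh_field, t_field_k,
                             n=2.66, e_a=0.79):
    """Acceleration factor implied by Peck's model,
    AF = TTF_field / TTF_test
       = (RH_test / RH_field)^n * exp((Ea/k) * (1/T_field - 1/T_test)).
    n and e_a are typical literature values used here as assumptions;
    f(V) cancels when the voltage is the same in test and field."""
    humidity_term = (rh_test / rh_field) ** n
    thermal_term = math.exp((e_a / K_B) * (1.0 / t_field_k - 1.0 / t_test_k))
    return humidity_term * thermal_term

# 85 degC / 85 %RH accelerated test vs. an assumed 40 degC / 50 %RH field use
af = peck_acceleration_factor(rh_test=85.0, t_test_k=358.15,
                              rh_field=50.0, t_field_k=313.15)
```

An acceleration factor well above 1 is what makes a few weeks of 85/85 testing representative of years of milder field exposure.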


Recent work

More recent work in the area of physics of failure has focused on predicting the time to failure of new materials (e.g., lead-free solder, high-κ dielectrics) and of software programs, on using the algorithms for prognostic purposes, and on integrating physics-of-failure predictions into system-level reliability calculations.
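One simple way to fold mechanism-level predictions into a system-level figure, under the assumption of independent competing mechanisms with constant failure rates (an assumption for this sketch, not a method prescribed in the text), is to treat the mechanisms as a series system and sum their rates:

```python
# Sketch: combining per-mechanism failure rates into a system-level
# estimate under a series (competing-risk) assumption with constant
# rates.  The rates below are illustrative, not measured values.

failure_rates_per_hour = {
    "electromigration": 2.0e-9,
    "TDDB": 1.0e-9,
    "solder_fatigue": 5.0e-9,
}

lambda_system = sum(failure_rates_per_hour.values())  # series-system rate
mttf_system_hours = 1.0 / lambda_system
```

Wearout mechanisms with increasing hazard rates require a time-dependent treatment instead, which is one reason the aggregation problem remains an active research topic.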


Limitations

There are some limitations to the use of physics of failure in design assessments and reliability prediction. The first is that physics-of-failure algorithms typically assume a 'perfect design'. Attempting to understand the influence of defects can be challenging, and it often limits PoF predictions to end-of-life behavior (as opposed to infant mortality or useful operating life). In addition, some companies have so many use environments (think personal computers) that performing a PoF assessment for each potential combination of temperature, vibration, humidity, power cycling, etc. would be onerous and of potentially limited value.


See also

* List of finite element software packages
* Critical plane analysis
* Maintainability

