This article discusses how technology can be used to drive reliability improvement using condition monitoring techniques for rotating machines. There are no silver bullets when it comes to establishing an effective maintenance program. However, there are ways to simplify the approach and take advantage of knowledge from the past 80 years. There are a couple of good reasons to apply what we have learned to developing your maintenance strategies. The first is to reduce the cost of operations and maintenance by analyzing the operating parameters of equipment to identify and eliminate premature failures. The second is to minimize the need to buy new equipment or make excessive repairs by increasing the mean time between repairs (MTBR).
The most common condition monitoring techniques that guide maintenance tasks are vibration analysis, infrared thermography, lubricant analysis, ultrasound testing, acoustic emission and on-line/off-line motor testing.
There are also other measurable equipment performance conditions related to the process, specifically for pumps. These are useful in assessing whether the equipment is operating properly or within its design parameters. This can help identify early equipment issues that ultimately lead to premature failure. Some of these conditions are flow, inlet and outlet pressure, speed and fluid viscosity.
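One simple way to use these process variables is to compare each reading against the pump's design window. The sketch below illustrates the idea; all of the design values and parameter names are invented for the example, and real limits would come from the pump's datasheet and performance curve.

```python
# Hypothetical design windows for one pump (illustrative values only;
# actual limits come from the manufacturer's datasheet and pump curve).
DESIGN = {
    "flow_gpm": (350.0, 500.0),       # acceptable flow range
    "differential_psi": (60.0, 90.0),  # outlet minus inlet pressure
    "speed_rpm": (1700.0, 1800.0),     # acceptable operating speed
}

def out_of_design(flow_gpm, inlet_psi, outlet_psi, speed_rpm):
    """Return the names of any parameters outside their design window."""
    actual = {
        "flow_gpm": flow_gpm,
        "differential_psi": outlet_psi - inlet_psi,
        "speed_rpm": speed_rpm,
    }
    return [p for p, (lo, hi) in DESIGN.items() if not lo <= actual[p] <= hi]

# Example reading: flow is below the design window, so it is flagged.
issues = out_of_design(flow_gpm=310.0, inlet_psi=12.0,
                       outlet_psi=95.0, speed_rpm=1760.0)
print(issues)
```

A check like this catches early operating problems (running far off the design point) well before they show up as bearing or seal damage.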
Condition monitoring can be traced back to the 1930s, when equipment was not very sophisticated but was generally built ruggedly enough to survive a small nuclear blast. These early monitoring devices focused more on performance than on physical conditions or reliability improvement. Flow meters, pressure transmitters and temperature probes were fairly crude. Vibration monitoring relied more on human senses than on technology. The bigger problem was the time it took to perform any kind of analysis, since everything had to be manually recorded, tracked and analyzed.
The first vibration meters were introduced in the 1950s. They measured the overall or “broad band” level of machine vibration, either in peak-to-peak mils (thousandths of an inch) of displacement or vibration velocity measured in inches per second. The introduction of tunable analog filters allowed these meters to discriminate between different frequency components. This produced a vibration spectrum that could aid in determining the actual source of the vibration.
The introduction of the personal computer in the 1970s, coupled with digital signal processing technology, led to the development of the Fast Fourier Transform (FFT) analyzer. Although these analyzers were very accurate and drastically improved results, they were better suited for laboratory use than field use because of their size and weight.
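The two quantities these instruments report, the overall "broad band" level and the frequency spectrum, can be sketched in a few lines of NumPy. In the example below, the sampling rate, the 30 Hz running-speed tone and the smaller 160 Hz defect tone are all invented for illustration:

```python
import numpy as np

fs = 2048                # sampling rate in Hz (assumed for this example)
t = np.arange(fs) / fs   # one second of samples

# Simulated vibration velocity waveform: a 0.30 in/s component at
# 30 Hz (1x running speed) plus a 0.05 in/s tone at 160 Hz
# (e.g., a bearing defect frequency); values are illustrative.
signal = 0.30 * np.sin(2 * np.pi * 30 * t) + 0.05 * np.sin(2 * np.pi * 160 * t)

# Overall ("broad band") level: the RMS of the time waveform,
# which is what an early vibration meter reported.
overall_rms = np.sqrt(np.mean(signal ** 2))

# Spectrum: FFT magnitude scaled to peak amplitude per component,
# which is what an FFT analyzer adds on top of the overall level.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

dominant = freqs[np.argmax(spectrum)]
print(f"overall RMS: {overall_rms:.3f} in/s, dominant frequency: {dominant:.0f} Hz")
```

The spectrum is what lets an analyst separate a running-speed imbalance from a bearing defect: both raise the overall level, but they appear at different frequencies.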
The 1980s ushered in the microprocessor on a single silicon chip that could analyze copious amounts of data, battery-powered portable digital signal analyzers and sophisticated computer programs to store the data. These new tools streamlined the logistics of vibration data collection and gave us concise user interfaces to display actual results.
In 1800, the astronomer William Herschel discovered an invisible form of electromagnetic radiation whose intensity correlates directly with the temperature of an object. Infrared thermography has come a long way since then. The first thermographic camera grew out of the "infrared line scanner" developed in 1947 through a collaboration between Texas Instruments and the U.S. military; it took one hour to produce a single image. We have progressed from bulky thermal-imaging cameras that cost tens of thousands of dollars to a 1-ounce camera that connects to a smartphone or tablet and can read temperature differences at distances up to 1,000 feet, all for less than $200.
Infrared cameras have been used in many applications, from finding deteriorating bearings to detecting faulty insulation and tank levels. The U.S. Coast Guard actually mounts infrared cameras on helicopters to locate people in the water. Some of the more sophisticated cameras can resolve temperature differences as small as 0.01 degrees at distances up to 1,000 feet. Again, early thermographic analyzers were very bulky and did not lend themselves to field applications.
It is important to put precision skills and a preventive maintenance program alongside your condition monitoring program; condition monitoring does not replace them. Preventive maintenance routines are aimed at eliminating failure modes and improving mean time between failures (MTBF) by performing time-based tasks established from historical failure analysis. Visual inspections, cleaning and lubrication are the cornerstone of a common-sense program and go a long way in keeping costs low and increasing MTBR.
A deeper analysis of failure and failure modes reveals that approximately 90 percent of all failures occur on a random basis. With this in mind, how can we possibly determine how long failure rates remain low on the bathtub curve in Figure 1? It could be six months or six years. It is difficult to determine when the failure rate will begin to increase due to wear using time-based routines. The more accurate way is to employ continuous condition monitoring, a more aggressive maintenance strategy. When wear begins, several changes occur in the machine, including increases in temperature, vibration level, acoustic frequency content and amperage.
If we are monitoring these variables and analyzing the results, we can predict when the failure rate will increase. This leads us to take action at the right time. Before developing these technologies, all we knew was to try to establish a schedule to either replace or rebuild the machine based solely on historical failure time data of similar machines. This resulted in a waste of money and an unnecessary increase in the likelihood of failure by introducing infant mortality each time we rebuilt or replaced the machine.
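In its simplest form, acting on these monitored variables amounts to comparing each reading against an alert limit. The sketch below is a minimal illustration; the variable names and limit values are invented, and real limits would be set from baseline readings and manufacturer specifications.

```python
# Hypothetical alert limits for one machine (illustrative values only;
# real limits come from baselines and manufacturer specifications).
ALERT_LIMITS = {
    "bearing_temp_F": 180.0,   # bearing housing temperature
    "vibration_in_s": 0.30,    # overall vibration velocity
    "motor_amps": 42.0,        # motor current draw
}

def check_readings(readings):
    """Return the names of variables that exceed their alert limit."""
    return [name for name, limit in ALERT_LIMITS.items()
            if readings.get(name, 0.0) > limit]

# Example: temperature and amperage are trending up while vibration
# is still acceptable, so two alarms are raised.
alarms = check_readings({"bearing_temp_F": 192.0,
                         "vibration_in_s": 0.22,
                         "motor_amps": 45.5})
print(alarms)
```

Real systems add trending on top of fixed limits, so the alert fires when a variable starts climbing toward its limit rather than only after it is crossed.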
Figure 1. The bathtub curve
There are many companies that have embraced this technology for reliability improvement and have significantly reduced maintenance costs and increased the MTBR of their equipment.
What We’ve Learned About Technology and Reliability Improvement
Now let's look at how to capitalize on what we have learned over the past 80 years or so by embracing the Industrial Internet of Things (IIoT), taking condition monitoring and reliability improvement to the next level while reducing the cost of the human capital required to analyze the data. IIoT is the integration of the physical and digital worlds through networked sensors, machine learning and big data analysis using cloud computing. Prior to IIoT, the Internet depended on people to interface the physical world with the digital world. Microprocessors gave people the ability to analyze data much faster, but now we can bypass people entirely: take inputs directly from sensors, send the data to the cloud, analyze it there and return the results in the form of actionable information.
Industry experts predict that 50 billion machines will be connected directly to the Internet over the next five years. It’s happening all around us as we speak. Does anyone in your family wear a Fitbit? This is a great example of the kind of technology that we can apply to our industrial machines. The Fitbit sensors can measure the number of steps you take in a day, perform analysis to convert them to distance, calculate how many floors you have climbed, provide a graphic representation of your activity, determine how many calories you have burned, track how much weight you have lost and the progress you have made on your weight loss goals, and measure the amount of sleep you have gotten. It does all this seamlessly. The information is displayed on the device or through a user interface on your smartphone or computer, as shown in Figure 2.
Figure 2. Example of information provided by a Fitbit
Now let’s apply this thinking to a pump installed in your plant with sensors and controls. This pump is controlled by a variable-speed drive. It has accelerometers and temperature probes (RTDs) at each bearing location, a flow meter, and inlet and outlet pressure transmitters. All sensor and drive outputs are connected to a gateway that transmits the data to the cloud. This data is analyzed in the cloud and sent back in real-time through a user interface. It is user-name and password protected with encrypted data.
This monitoring system also generates alerts based on preset triggers to specified personnel via cellphone, email or text message. Or, it creates a work order that defines specific action to be taken and sends it to the appropriate technician. The user interface in Figure 3 displays results from these sensors. This type of system can actually show you a real-time pump curve depicting where the pump is running on its curve. This all occurs without human intervention. Nobody has to travel to the pump, go to a website or take any other action.
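To make the data flow concrete, here is one plausible shape for a telemetry record the gateway might assemble from the pump's sensors before uploading it. The field names, asset ID and all values are purely hypothetical; actual IIoT platforms define their own schemas and ingestion endpoints.

```python
import json
import time

# Hypothetical telemetry record a gateway might build from the pump's
# sensors; every field name and value here is illustrative only.
reading = {
    "asset_id": "pump-101",
    "timestamp": int(time.time()),
    "drive_speed_rpm": 1760,
    "flow_gpm": 420.0,
    "suction_psi": 12.5,
    "discharge_psi": 88.0,
    "bearing_temps_F": {"inboard": 141.0, "outboard": 152.5},
    "vibration_in_s": {"inboard": 0.11, "outboard": 0.19},
}

# The gateway would serialize the record and send it (over TLS, with
# credentials) to the cloud platform's ingestion endpoint for analysis.
payload = json.dumps(reading)
print(payload)
```

With flow, speed and differential pressure in every record, the cloud side has everything it needs to plot the operating point against the pump curve in real time, which is exactly what the user interface in Figure 3 displays.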
Figure 3. User interface with sensor results from a pump for reliability improvement
This is just dipping our toe in the water with this technology and reliability improvement. Cisco's CEO has pegged the entire Internet of Things (IoT) as a $19 trillion market. The IIoT is a significant subset that includes concepts like digital oil fields, advanced manufacturing, grid automation and smart cities. General Electric and Accenture conducted a global research project that took the pulse of the progress, challenges and opportunities of industrial companies around the world.
Approximately 88 percent of those interviewed said big data analytics is among their company's top three priorities, and 53 percent said it is now a board-level decision. These companies are looking at leveraging Internet connectivity and data analytics to achieve business priorities. They are targeting real-world issues like increasing throughput, reducing costs, improving product quality, improving resource efficiency, shortening response times and other valuable outcomes. I have not worked with a customer who was not interested in most, if not all, of these issues.
In conclusion, we would like to leave you with a few thoughts. Historically, the type of continuous monitoring system described in this paper cost tens of thousands of dollars per machine. This made it cost-prohibitive to consider using on a plant-wide basis for reliability improvement. Today, the cost can be driven below $1,000 per machine, which makes it economically feasible for a total plant solution.
The other factor in cost justification is to consider not only the cost of repairing or replacing the machine, but also the loss of manufacturing time, procurement costs and product loss. When all of these costs are included in the calculation, the justification for reliability improvement becomes much easier. If budget is a concern, consider using this system only on your critical equipment. Still skeptical about the value of this approach? Try it on a few of your most critical machines, evaluate the results and decide where to go from there. Basically, it boils down to whether you want to use the Fitbit approach to monitor all your machines.
This article was previously published in the Reliable Plant 2016 Conference Proceedings.
By Tom Dabbs and Rick Zinkl, DXP Enterprises