THE DIGITAL RESOURCE OF PHARMACEUTICAL MANUFACTURING MAGAZINE
Do you have nightmares about emergency shutdowns? Are you worried that some of your important systems and equipment are always on the ragged edge of failure? Welcome to the club. You aren't alone. With the downsizing of operations and the loss of much institutional knowledge in process automation, it is a rare plant that has enough manpower and know-how to keep the place running the way it should.

Downtime. Preventive maintenance. Emergency shutdowns. All these cost big money. Not only do repairs cost more after a failure in equipment or instrumentation, but the real cost is that the plant can't make product. I was once told by a plant manager in the specialty chemicals industry that every hour his plant was down reduced his output by three-quarters of a million dollars. Compared to just one instance of an unplanned shutdown, the cost of maintenance and upgrades is minuscule.

Increasing productivity is the only way we can keep our industrial infrastructure open in the face of significant cost differentials in other parts of the world. Installing plant automation has been the road to increasing productivity for decades. Now the return from additional automation is less, and we are turning to other ways to increase productivity. Clearly, one of the most important ways to increase productivity is to keep the plant operating more of the time. This is called plant availability, and it is critical to keeping your plant open and your workers employed.

Here are ten steps to increasing plant availability and avoiding unnecessary shutdowns. These are important steps, but they aren't necessarily the only ones you can take. If you do take these steps, you will see improved productivity and plant availability immediately, and you will also see some additional steps you need to take.
Each of these steps has a proven ROI, and these articles have been chosen to illustrate how and why to implement each step, and to help you calculate the ROI from doing so. And don't forget to figure out how many unplanned shutdowns you are avoiding and crank those into the ROI value.
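If you want to put numbers on that advice, the arithmetic is simple enough to script. Below is a minimal sketch; the $750,000-per-hour downtime figure comes from the plant manager quoted above, while the program cost, annual savings, shutdown count, and outage length are illustrative assumptions, not data from any plant.

```python
def maintenance_roi(program_cost, annual_savings, avoided_shutdowns,
                    downtime_cost_per_hour, hours_per_shutdown):
    """Annual ROI for a reliability program, counting avoided shutdowns."""
    avoided_loss = avoided_shutdowns * downtime_cost_per_hour * hours_per_shutdown
    total_benefit = annual_savings + avoided_loss
    return (total_benefit - program_cost) / program_cost

# $750,000/hour is the downtime cost quoted above; the rest are assumptions:
roi = maintenance_roi(program_cost=500_000, annual_savings=200_000,
                      avoided_shutdowns=2, downtime_cost_per_hour=750_000,
                      hours_per_shutdown=4)
# Two avoided four-hour shutdowns dominate the result: ROI = 11.4x.
```

Notice how the avoided-shutdown term swamps the direct maintenance savings, which is exactly the point the plant manager was making.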
Center on Reliability
Forget preventive maintenance. Today's uptime requirements call for an entirely different approach
RICH MERRITT, SENIOR TECHNICAL EDITOR
Maintenance has come a long way since the "fix it when it breaks" mentality of the 1940s and the preventive maintenance philosophy of the 1970s and 1980s. Today, the new world of reliability-centered maintenance (RCM) calls for computers, software, and sensors to achieve maximum plant availability and reliability at the most effective cost. A big surprise: preventive maintenance (PM) can actually be bad for certain systems! Not only is it expensive, but it doesn't work well with modern, high-tech equipment. So instead of PM, we are entering a whole new world of condition monitoring, loop analysis, and predictive maintenance. Although RCM often works best with fieldbus architectures and high-level asset management software (because they can more easily obtain and process data), the techniques involved are not beyond the reach of a typical process control user, even those with legacy control systems. That's because it's not how you acquire the data that's important; it's what you do with it.
Military Maintenance
Modern maintenance technology procedures began years ago in the military. The war in Iraq proves beyond a doubt just how effectively these techniques work. I used vibration monitoring and maintenance management [15 years ago] in the propulsion power plants of U.S. Navy vessels, says Robert Rosenbaum, an automation consulting engineer in American Canyon, Calif. The use of portable handheld vibration instrumentation was so successful that the Navy purchased several permanently installed vibration monitoring systems for its fleet. Rosenbaum also says he's familiar with RCM as once used by United Air Lines to prevent failure in commercial aircraft systems. Vibration monitoring is just one part of an RCM program. The overall RCM process includes procedures to determine the functions and performance standards of an asset, what causes it to fail, what happens when it fails, and what can be done to prevent failures. The commercial airline industry was the first to realize the benefits of a maintenance decision-making process. According to a white paper by Aladon, this led to the development of the MSG3 process in the aviation industry; in manufacturing, it's just called RCM. (For a complete description and multiple articles on RCM, go to www.aladon.co.uk.) One of the most startling developments to come out of RCM studies involves preventive maintenance. Many people still believe that the best way to optimize plant availability is to do some kind of proactive maintenance on a routine basis, says Aladon. This assumes one traditional view of failure, where devices fail as they enter a wear-out zone after a certain period of time. This may have been true 30 years ago, but equipment is much more complex these days. Now, we identify six patterns of failure to deal with. According to Aladon, studies on commercial aircraft showed only 4% of failures conform to the well-known bathtub curve.
However, a whopping 68% conformed to a less familiar pattern: high infant mortality followed by random failures. These findings contradict the belief that there is always a connection between reliability and operating age, says Aladon. Nowadays, this is seldom true. Unless there is a dominant age-related failure mode, age limits do little or nothing to improve the reliability of complex items. In fact, scheduled overhauls increase overall failure rates by introducing infant mortality back into otherwise stable systems.
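Aladon's point about scheduled overhauls can be demonstrated with a toy simulation. The sketch below assumes a mixture failure model (a small fraction of infant-mortality units plus constant-hazard random failures); the rates, horizon, and overhaul interval are invented for illustration, not drawn from the aircraft studies.

```python
import random

def lifetime(rng, p_infant=0.1, infant_life=10.0, mean_life=1000.0):
    # Mixture model: a small fraction of units fail early (infant
    # mortality); the rest fail at random with a constant hazard,
    # so age tells you nothing about when they will die.
    if rng.random() < p_infant:
        return rng.expovariate(1.0 / infant_life)
    return rng.expovariate(1.0 / mean_life)

def count_failures(rng, horizon=10_000.0, overhaul_interval=None, runs=500):
    failures = 0
    for _ in range(runs):
        t = 0.0
        while t < horizon:
            life = lifetime(rng)
            if overhaul_interval is not None and life > overhaul_interval:
                t += overhaul_interval   # overhauled while still healthy
            else:
                t += life                # failed in service
                failures += 1
    return failures

rng = random.Random(42)
run_to_failure = count_failures(rng)
rng = random.Random(42)
scheduled_overhauls = count_failures(rng, overhaul_interval=200.0)
# Each overhaul installs a "fresh" unit, re-exposing the plant to
# infant mortality, so the scheduled-overhaul policy fails MORE often.
```

Under these assumptions the overhaul policy produces noticeably more in-service failures than simply running to failure, which is exactly the effect Aladon describes.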
Commercial aircraft and process control systems both use similar systems: pneumatics, electro-hydraulics, servomotors, networks, control valves, pumps, miles of wire and cable, computers, electronic controls, and flow, temperature, level, and pressure sensors. Some process industry data seems to confirm just how random failures can be. The majority of failures in valves and control loops are not predictable, and the probability of failure does not increase with time, says Lane Desborough, manager of loop management services, Honeywell Industry Solutions, Thousand Oaks, Calif. There is little evidence that valve failure can be predicted reliably based on accumulated stem travel alone. If failures of process equipment are random, so much for preventive maintenance. What do we do now?
Three-Pronged Attack
Preventive maintenance isn't completely dead, of course. Rosenbaum, who bought into RCM 15 years ago, still believes in PM. It heads off trouble before it starts, and the return is well worth the money invested. Certain equipment does have a wear-out zone, and prudence dictates that it should be maintained before it breaks. We count valve operation cycles automatically, using our data historian, an OSI PI system, says Don Erb, manager of production planning and information, Ciba Specialty Chemical, McIntosh, Ala. When the cycle count reaches a certain trigger value, the valve is scheduled for maintenance during the next opportunity. Ciba's valve performance is evaluated based on historical data. After we have one fail at a number of cycles, we check valves in similar service next time before they reach the same number, with the objective of service before failure occurs, explains Erb. One major objective of our plant is reliability improvement. Our reliability has been improving over the past year, and although this is certainly not the only program in place, it is contributing. Many process plants have developed similar PM programs for valves, only to find that 30% of the valves taken apart for preventive maintenance have absolutely nothing wrong with them (Chemical Processing, November 2001). As Honeywell's Desborough points out, this is probably because these programs drive maintenance actions based on device usage, not on control loop performance degradation. Therefore, what we need is a better way to determine when assets actually require maintenance. This requires a three-pronged attack:

1. Sensors, tracking systems, or on-board diagnostics on each asset that help identify the presence of a problem.
2. A data acquisition system to collect asset information.
3. Software to analyze the data, determine that a problem exists, and suggest maintenance procedures to correct the situation.

All of the above pieces are readily available on the open market.
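Ciba's cycle-count trigger, including Erb's rule of servicing similar valves before they reach a known failure count, can be sketched in a few lines. Everything here (the class name, the valve tags, and the 80% back-off on the trigger) is an illustrative assumption, not a detail of Ciba's OSI PI implementation.

```python
class ValveCycleMonitor:
    """Cycle-count maintenance trigger, in the spirit of the
    historian-based approach described above."""

    def __init__(self, trigger_cycles):
        self.trigger_cycles = trigger_cycles
        self.cycles = {}

    def record_cycle(self, valve_id):
        # Called each time the historian logs an open/close cycle.
        self.cycles[valve_id] = self.cycles.get(valve_id, 0) + 1

    def due_for_service(self):
        # Valves at or past the trigger get scheduled at the next opportunity.
        return sorted(v for v, n in self.cycles.items()
                      if n >= self.trigger_cycles)

    def record_failure(self, valve_id):
        # After one valve fails at N cycles, check valves in similar
        # service before they reach the same count: pull the trigger
        # below N (80% is an assumed margin). The failed valve is
        # repaired, so its counter is reset.
        failed_at = self.cycles.pop(valve_id, 0)
        self.trigger_cycles = min(self.trigger_cycles, int(failed_at * 0.8))

mon = ValveCycleMonitor(trigger_cycles=10_000)
for _ in range(6_000):
    mon.record_cycle("FV-101")
for _ in range(7_000):
    mon.record_cycle("FV-102")
mon.record_failure("FV-102")   # failed at 7,000 cycles
# Trigger drops to 5,600, so FV-101 (6,000 cycles) is now due for service.
```

The key idea is the feedback loop: each in-service failure tightens the trigger for everything in similar service, aiming for service before failure.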
Plants with fieldbus-based hardware, a frameworks-based control hierarchy, and asset management software already have the infrastructure in place to do RCM. Those with legacy systems can buy the necessary hardware and software and install it on their process. As with all things in this industry, you can get the RCM capability you need by spending anywhere from a few thousand to a few million dollars.
Sensing Problems
In olden days, supervisors would dispatch technicians to the field to check on problems. But not anymore. The days of having instrument technicians run to the field every time there is a problem are long gone, says Rami Mitri, director of asset optimization, New England Controls, Mansfield, Mass. Downsizing and reduced budgets have taken a toll on maintenance operations in many plants, he says. As staff and budgets decrease, equipment problems increase. Many customers neglect to link downsizing to reduced maintenance on critical equipment that can either shut down or delay production. To overcome problems caused by downsizing and budget cuts, Mitri says end users have to adopt new, enabling technologies for maintenance. In many cases, this means being able to identify problems before they occur, so maintenance dollars go further. Several ways exist to determine if a device or system is having problems:

* Manual observation (leaking, making noise, boiling over, etc.).
THE ONLINE RESOURCE OF CONTROL MAGAZINE
* Condition sensing (running hot, vibrating, losing pressure, etc.).
* Internal diagnostics (the device itself detects problems).
* Performance analysis (valve sticking, slow control response, hunting, etc.).

PG&E, the giant utility in California, uses manual techniques to check its gas distribution operations, says Brian Steacy, general manager of DST Controls, Benicia, Calif. DST supplied PG&E with a PDA-based data acquisition system. PG&E opted out of fully automating its data acquisition because it would have been cost-prohibitive and, more importantly, not entirely safe, says Steacy. Much of PG&E's compressor station instrumentation is too far-flung to be hardwired, and many of the thousands of gauges that must be read daily are old, mechanical, or otherwise too costly to match up with transducers or hang on a network. Using a handheld system provides a regular human presence that keeps an eye on things to help avoid disasters, such as leaking compressor lubricant, unusual conditions, and graphic evidence that a cat had strayed into a compressor cooling fan. The fan kept running, so the alarm wasn't triggered, but visual inspection revealed the necessity to shut the fan down for cleaning, repair, and balancing, says Steacy. Wandering cats aside, manual observations are becoming the solution of last resort these days. Therefore, users must seek out ways to detect problems remotely, or predict them based on operating conditions. One of the best ways is via condition monitoring, as explained in Prevent Failure (see Step Three, p. 11). That article explains how vibration analyzers and sophisticated data analysis can predict equipment problems in advance. Condition monitoring, of course, often requires sensors to be installed on equipment to detect the conditions. Fortunately, this is getting much easier for end users. Many devices now come with HART or fieldbus interfaces, both of which can transmit diagnostic information.
Manufacturers also are building diagnostics into various devices, such as power supplies. The S8VS power supply from Omron Electronics, for example, can monitor percent usage and available life remaining. For devices that do not have embedded diagnostics, users can install the necessary sensors on vital assets. It's not something you would want to do on thousands of devices in a typical plant, but condition sensors can be installed on assets of particular interest. If a certain pump, valve, compressor, or similar device is failing and causing problems, it could be fitted with vibration or voltage sensors on a permanent or temporary basis until the problem is diagnosed. For example, Allen-Bradley's MachineAlert relays can be installed in a control panel to monitor phase, current, temperature, and motor rotation in any motor control application. It's also possible to make manual vibration measurements on certain key machines. For example, SKF's MicroVibe portable vibration test and measurement instrument can be used with a PDA; this lets a technician run out into the plant periodically to check critical systems. The Ultraprobe 1000 from UE Systems has its own on-board recording, logging, and application software for ultrasonic condition analysis locally or later at a computer. When buying new or replacement equipment, it's a good idea to seek out devices that have built-in sensors and embedded diagnostics. Investing in assets that can communicate when they require attention, such as maintenance or calibration, is critical to proactive strategies, says Mark Bitto, product manager of asset optimization products at ABB, Wickliffe, Ohio. Intelligent field devices, control systems, workstations, and network hardware all contain a rich set of embedded diagnostic information. Unfortunately, unless the device is enabled to report these health conditions, the information will go unnoticed for long periods of time.
This means all that condition sensing and diagnostic data needs to be acquired for further analysis.
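As a taste of what such analysis looks like once the data is in hand, here is a minimal condition-sensing check: compare the RMS vibration of a sample window against a healthy baseline. The 2x alarm ratio and the synthetic signals are illustrative assumptions, not vendor defaults.

```python
import math

def rms(samples):
    """Root-mean-square of a vibration sample window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def needs_attention(samples, baseline_rms, alarm_ratio=2.0):
    # A simple condition-monitoring rule: alarm when vibration energy
    # climbs to a multiple of the machine's healthy baseline.
    return rms(samples) > alarm_ratio * baseline_rms

# Synthetic signals standing in for accelerometer readings:
healthy = [0.1 * math.sin(0.3 * i) for i in range(200)]
worn = [0.5 * math.sin(0.3 * i) for i in range(200)]
baseline = rms(healthy)
```

Real vibration analyzers go much further (frequency-band analysis, bearing-defect signatures), but even a screen this crude turns raw sensor data into an actionable flag.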
…the technician to make a visual inspection of the meter to determine if it was a transcription error or if the meter is having a problem. If an equipment fault is discovered, the tech can flag it for maintenance. Maintenance departments everywhere are using similar handheld PDAs and laptop computers. Many maintenance departments realize the benefits of automated maintenance technology, but simply can't afford it, so they stick with their manual systems. We are looking at replacing any failed transmitters and new installs with fieldbus transmitters, mainly because of wiring and future advances in information provided, says Matt Smith, process control supervisor, Amalgamated Sugar Co., Twin Falls, Idaho. We looked at Emerson's AMS, but couldn't justify the per-point costs because we have about 2,500 transmitters and 1,000 control elements. We employ 10 instrumentation technicians and are currently implementing an electronic work-order maintenance management system. I guess the bottom line is, we have the labor to do it manually. When we asked end users how they were acquiring data for maintenance purposes, several agreed with Smith, telling us they simply could not afford to install fieldbus instrumentation and asset management software. Few are as lucky as James Loar, engineering group leader at Ciba Specialty Chemicals, Newport, Del. We are in the process of installing a system to monitor reliability of process control and instrumentation, he told us. We are installing a system with Foundation fieldbus, DeviceNet, Profibus, and AS-i. A new corporate standard for control systems forced us into the luxury of having this capability.
In other words, the information is out there, buried inside HART and fieldbus instrumentation, and all you have to do is extract it. That's not always easy, especially in mixed legacy systems. I'm using Foundation fieldbus [FF] in our process plant, and I have problems with the instruments due to the architecture, laments Jorge Cano, process control engineer at Met-Mex Peñoles in Torreón, Coahuila, Mexico. We have a Rockwell ControlLogix PLC, but all interface to FF is with National Instruments FF Configurator software. Cano is using Rosemount instrumentation and Rockwell RSView32 software. The results are bad. The maintenance costs are very high, process improvements are difficult to implement, and failures in our process and equipment occur many times. I have plans to migrate to PlantWeb with our platform. With all due respect to the equipment named, such problems are not rare and are not caused by the equipment. We hear from many engineers that bringing up a fieldbus system of any kind can be a bear. But there has been progress, and perhaps in a year or so better software will be available to let you obtain the necessary maintenance information from HART and fieldbus more easily. Once you obtain the necessary field data, a host of software packages is available to help you interpret data, analyze conditions, predict problems, and recommend solutions. These range from CMMS packages that help schedule maintenance procedures to performance monitoring software that analyzes plant data and looks for loops that are not performing up to snuff. RCM is such a major change from the old, easily understood preventive maintenance techniques, it's no wonder that engineers are reporting mixed results. We use Emerson's AMS on control system equipment, says Joe Pittman, principal safety systems specialist at Lyondell/Equistar Chemical, Channelview, Texas. Other than an automated documentation system, I have seen little benefit on the sensor side.
It has provided benefit on the valve side, with the ability to do valve scanning and define which valves need to be pulled and repaired during turnarounds. Such systems also require a major change in attitude. Syncrude has an AMS server from Emerson in parallel with its Honeywell TDC 3000 system, says Ian Verhappen, instrument engineer at Syncrude in Fort McMurray, Alberta. It has not been integrated with the remainder of the maintenance software system for two reasons: first, bureaucracy; second, buy-in from the maintenance team and their supervisors, who do not understand that these systems require work to get results. As with many engineering projects, the biggest hurdle is not introducing the technology, but rather the culture change required after the fact to use it effectively. In other words, the tools to implement an RCM system are available. You just have to conquer a few minor obstacles, such as fieldbus idiosyncrasies, bureaucracies, old equipment failure theories, politics, and maintenance department mindsets, to make it work.
Prevent Failure
Emerging sensor and analysis technologies let operations personnel foresee and correct problems before equipment goes down
DAN HEBERT, PE, SENIOR TECHNICAL EDITOR
There is an inverse relationship between maintenance and downtime in most process plants: the more time and money spent on maintenance, the less downtime due to unplanned shutdowns. Unfortunately, frequent maintenance performed according to time-based schedules can be prohibitively expensive and can still fail to prevent equipment failures. Condition-based monitoring coupled with sophisticated data analysis tools can allow process plants to adjust the maintenance/downtime relationship in their favor. We use condition monitoring and other tools to help us shift our process operations and maintenance from reactive to predictive, says Leoncio Estevez-Reyes, P.Eng., an engineering specialist in the corporate process information, diagnostics, and control unit for Weyerhaeuser in Federal Way, Wash. Weyerhaeuser uses hardware and software systems from Honeywell (www.honeywell.com/imc) to implement condition monitoring at its pulp and paper plants. Without condition monitoring, users are forced to perform maintenance on an arbitrary and usually incorrect basis. Condition monitoring allows companies to save money by determining repairs based on true need and not on a somewhat arbitrary period of time. It can also help prevent costly process outages by identifying deviations in performance before they get to the point of affecting production, adds Estevez-Reyes. Predictive maintenance based on condition monitoring has been around for a long time, but some fairly recent developments are making these systems easier and cheaper to implement. First among these developments is smart instruments. It is difficult to predict valve failure if the only data available from the valve is open/close status; smart instruments can deliver much more detailed information about a valve, and this information can be used to predict valve failure. Another significant development is low-cost computing and data storage hardware.
The most effective condition monitoring systems analyze large amounts of process data. These data must be collected frequently and stored on non-volatile media. PCs with standard operating systems can now perform these data collection and number-crunching functions, and storage hardware is cheap enough to allow retention of sufficient data for effective analysis. The third development is sophisticated data analysis software that can automatically examine large amounts of collected data and present analyses to plant personnel in a concise fashion. This actionable information can be used to schedule maintenance and prevent unplanned outages. The chief benefits of condition monitoring are lower maintenance expenses and reduced downtime. We have reduced maintenance expenses significantly by saving up to 75% in unnecessary control valve repairs, says Estevez-Reyes. We have focused maintenance during planned shutdowns, identified and prevented process incidents that could have caused downtime, and reduced the occurrence of process alarms by more than 50%. These benefits are real and available, but it is impractical for most users to implement sophisticated and expensive condition monitoring systems on a plant-wide basis from the start. Most users instead should first implement these systems in areas of greatest need, and then expand to other areas if and when the condition monitoring system proves its effectiveness.
Analyze This
In theory, condition monitoring can be used to analyze the operation of virtually any plant component or system. In practice, most condition monitoring systems are used to monitor valves and rotating equipment, because these moving parts are continuously subject to wear and tear. More esoteric areas of condition monitoring can examine fixed components (such as motor and cable insulation) subject to deterioration from forces other than friction.
Every process plant contains a number of control valves, and condition monitoring is a natural fit for predicting valve failure. Most control valves are part of control loops, so it often makes sense to implement valve and control loop monitoring simultaneously. Weyerhaeuser and Honeywell co-developed a control loop monitoring system that surveys the performance of every control loop in the mill on a regular schedule, reports Anthony Swanda, an R&D process control engineer with Weyerhaeuser. If a loop has poor performance and a high probability of a sticky valve, a maintenance work order is automatically generated. Before the condition monitoring system was implemented, maintenance was performed on an unscientific basis. Condition monitoring systems provide a standard and systematic approach to plant maintenance, as opposed to the most persistent operator or superintendent getting the attention, Swanda adds. Also, the information provided by the systems (e.g., total valve movement, degree of stiction) can be quite valuable in quickly diagnosing the problem. Most of the major control system vendors now offer some type of control loop analysis software, and these software tools often identify control valves as the culprit behind poor or degrading loop performance. ExperTune (www.expertune.com) specializes in control loop tuning and analysis software, and its PlantTriage product can run on a standalone or integrated basis to identify the worst-performing control loops in a plant. Control valves are one of the primary points of failure in many unplanned shutdowns, and most condition monitoring systems work well with smart valves and actuators to predict valve malfunctions. The other primary point of failure common to virtually all process plants is rotating equipment. Pumps, generators, and compressors are found throughout most plants, and a host of sophisticated condition monitoring tools can be used to examine these key components.
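The survey-and-work-order idea can be sketched simply: screen each loop's recent control error and flag the outliers. This is only a standard-deviation screen for illustration; the actual Weyerhaeuser/Honeywell system uses far richer stiction diagnostics, and the loop tags and threshold below are invented.

```python
from statistics import pstdev

def survey_loops(loop_errors, stdev_limit):
    """Survey every loop's recent control error (setpoint minus PV)
    and raise a work order for the poor performers."""
    work_orders = []
    for loop, errors in sorted(loop_errors.items()):
        if pstdev(errors) > stdev_limit:
            work_orders.append(f"WO: check {loop} for sticky valve")
    return work_orders

loops = {
    "FIC-204": [0.1, -0.2, 0.1, 0.0, -0.1, 0.2],   # well behaved
    "LIC-310": [2.0, -1.8, 2.2, -2.1, 1.9, -2.0],  # oscillating: stiction?
}
orders = survey_loops(loops, stdev_limit=1.0)
```

The automated work order is what makes this a maintenance tool rather than a report: nobody has to be the most persistent operator to get the loop looked at.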
Combining condition monitoring with expert remote analysis can increase the effectiveness of preventive maintenance and troubleshooting efforts. Tri-State G&T uses a monitoring system from Bently Nevada (www.bently.com) to measure radial vibration, speed, thrust, differential expansion, and shell expansion on three 411 MW General Electric turbines located at its generating plant in Craig, Colo. The plant has some on-site expertise, but assistance from vendor personnel helps to expedite problem resolution. We have used the remote diagnostics inherent to the condition monitoring system twice now on vibration problems, says Gary Crisp, Tri-State senior mechanical engineer. It saves time and money because a field rep does not have to be on-site. Remote condition monitoring also allows several Bently personnel to review the problem at the same time, which results in better and faster response.
According to the process engineering and optimization department, condition monitoring lets them determine the optimum time interval between cleanings for each of the heat exchanger bundles. The condition monitoring software is also used to monitor tray efficiencies in the distillation columns. The plant uses this information to drive turnaround planning.
Redundancy is a requirement for many process control systems. For each part of the process, users need to determine if redundancy is needed, and then decide where and how to implement redundancy schemes. The first item to consider is the necessity of redundant control. Redundancy should be an economically based engineering decision, says Kevin Totherow, president of Sylution Consulting (www.sylution.com). Decision factors are the cost of redundancy, likelihood of failure, cost associated with downtime, recovery time, and cost of maintenance for redundant systems, adds Totherow. According to Totherow, many redundancy decisions are not based on cost/benefit analysis. Companies often make emotional decisions about redundancy. Many of the companies that insist upon redundancy would have much better project ROI as well as lower ongoing costs with non-redundant systems and good recovery plans, observes Totherow. Others echo Totherow's comments. The basic calculation is economic: Is the cost of device failure times its probability of failure greater than or less than the cost of redundancy? asks Ed Bullerdiek, the control group leader with Marathon Ashland Petroleum in Detroit. For Safety Instrumented Systems (SIS), the processes by which you determine this are well known (fault trees, FMEA, LOPA, Markov models), and this basic thinking is easily extended to other systems, concludes Bullerdiek. One difficult cost to quantify is the risk of injury or death. Costs of unsafe conditions should always be presumed exorbitantly high, according to Matt Bothe, a senior automation engineer with CRB Consulting Engineers (www.crbusa.com). Therefore, operations that pose danger to personnel should always apply redundant systems, continues Bothe. Cost-versus-benefit calculations can be complex, but many processes can be analyzed for redundancy without detailed mathematical analysis.
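Bullerdiek's basic calculation is easy to put in code. The probabilities and dollar figures below are illustrative assumptions, not numbers from Marathon Ashland.

```python
def redundancy_justified(p_failure_per_year, cost_of_failure,
                         annual_cost_of_redundancy):
    """Is the cost of device failure times its probability of failure
    greater than the cost of redundancy?"""
    expected_annual_loss = p_failure_per_year * cost_of_failure
    return expected_annual_loss > annual_cost_of_redundancy

# A controller with a 5% annual failure probability protecting against a
# $2M outage justifies $60k/yr of redundancy ($100k expected loss):
critical = redundancy_justified(0.05, 2_000_000, 60_000)
# A 1% failure probability does not ($20k expected loss):
marginal = redundancy_justified(0.01, 2_000_000, 60_000)
```

As Bothe's comment makes clear, this screen applies only to economic losses; where personnel safety is at stake, the cost side should be presumed exorbitantly high and redundancy applied regardless.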
Reliable generation of electricity is a necessity, but some of our sub-processes can handle downtime, reports Dale Evely, PE, an I&C consulting engineer with the Southern Company in Birmingham, Ala. Ash and coal handling as well as sootblowing and water treatment don't need redundancy because of built-in storage capacity, adds Evely. For our primary process, the boiler and steam turbine, we design in dual redundancy of HMIs, controllers, and communication networks. For critical measurements we install redundant field devices and connect those devices to separate I/O cards in separate I/O racks, concludes Evely. Some processes are low risk in terms of hazards, and this can be a key factor in redundancy decisions. Use of redundancy in our consumer goods industry is perhaps lower than in more critical industries, observes James Reizner, a section head with Procter & Gamble in Cincinnati. But on our paper machines, where downtime is very expensive and getting back on line can be a major event, we perform financial redundancy calculations, continues Reizner. Redundancy is often needed when a process must run uninterrupted for a long period of time. This is often the case in biotech, where batches can take months to produce. Another such scenario is test systems. Some of our tests run for thousands of hours, ideally uninterrupted, so redundancy is critical, according to Robert Shaw, PE, an electrical engineer with the QSS Group (www.theqssgroup.co.uk) at the NASA Glenn Research Center in Cleveland. Once a decision is made on which process needs redundancy, the next step is to determine where to apply redundancy in the control system architecture. It is rarely feasible to make an entire process redundant, and there are major differences in cost and benefit depending on where redundancy is applied.
Redundancy Simplified
If redundancy is needed, the next question is where to implement it. For simplicity's sake, let's divide the control system into five areas: HMI/server, controller, I/O, field devices, and communications (see Table). Readers surveyed ranked each of these five areas according to most benefit and least cost. According to our readers, communications yield the best cost/benefit ratio. An almost perfect confirmation of the reader survey comes from one of the industry's leading vendors. Where do customers spend their redundancy dollars? asks Steve Lazok, the technical solutions support manager for Yokogawa Corporation of America (www.us.yokogawa.com). In order: communications, controller, I/O, and HMI. Redundant field devices come into play only for SIS solutions.
Do It Yourself?
In the final analysis, the responsibility for redundancy implementation lies with the end user, but selection of the proper control system can make implementation less difficult. Field device redundancy is specifically designed by the user for each process, but redundancy at the other four levels of the control system can either be an integral part of the control system or a custom add-on. When redundancy is required, most users suggest buying a control system designed for redundancy from the ground up. "Buy a system with redundancy built in and it doesn't cost much in the long run, nor does it take much in the way of resources to support. Buy or inherit a cheap system and try to add redundancy and you will rue the day you were born," says Bullerdiek. Examine the vendor's redundancy scheme at all levels. "The two most important questions to ask are: If it breaks, how do I fix it without taking my process down? And, do I have to program anything in to get the redundancy or associated diagnostics to work?" suggests Bullerdiek. Without diagnostics, redundancy can disappear unbeknownst to the user. "Support and diagnostics on the HMI or communication side are needed, even with rigorous initial testing," observes Kyle Austin, a technical specialist, critical control systems for process information & control, with UOP LLC in Des Plaines, Ill. "If a secondary communications path or hardware option fails, it can often be neglected if critical control is not directly affected, and then the benefit of redundancy is lost," adds Austin. Others echo Bullerdiek's opinions concerning single-source responsibility. If someone is concerned about redundancy, they should look at a DCS or a Triple Modular Redundant (TMR) system, because these systems have hardware-based options that make support much easier.
"Single-vendor solutions eliminate finger pointing and minimize oft-overlooked life-cycle costs," says Robert Burgman, a senior automation engineer with the Pigments Division of Sun Chemical in Muskegon, Mich. Burgman has implemented redundancy both with a DCS and with a PC-based HMI and a PLC controller, and he says the differences are significant. By far the weakest link in an HMI/PLC system is the PC's hard drive. "Unfortunately, our HMI vendor's solution to redundancy is simply duplication, which means that both HMIs poll the PLCs, in effect doubling communications traffic to the PLCs," reports Burgman. According to Burgman, this scheme requires ongoing management and can result in poor performance because duplicate polling can eat up the bandwidth and cripple the highway. His firm has had to resort to warm standby for some HMI/PLC systems, with both HMIs running but with the backup HMI's polling shut off. By contrast, HMI redundancy on Sun Chemical's DCS is seamless. "Our DCS doesn't have this issue because the HMIs talk to the DCS controllers directly via unsolicited communications over a redundant highway. This has proven to require much less support than our PLC/HMI systems. In addition, redundancy is a snap with our Yokogawa DCS because it was designed that way from the ground up. We don't even think about it; it just works," concludes Burgman. Not only can the HMI be a problem with an HMI/PLC redundant system, so can the PLC. "We have never found a PLC redundancy system that works. It has been our experience that the promise of the technology is beyond its performance," says Andrew Rowe, the technical manager of process controls & MIS with the United States Gypsum Company in Chicago.
DCS-type control systems are almost always more expensive than HMI/PLC systems, but in the case of redundancy the cost differential may be an illusion, especially when life-cycle operations and maintenance costs are included. This can be especially true in terms of software and hardware upgrades. "Like many HMI/PLC systems, we also use Windows at the HMI level," reports Bob Hausler, the vice president of system marketing for ABB (www.us.abb.com). "A key difference is that we test all upgrades with the entire control system, including the redundancy features, prior to releasing these upgrades to our clients." Hausler's point is well taken. If a plant has an HMI/PLC system controlling a critical process, it would not be wise to simply accept the latest Microsoft patch. All such changes to the operating system or to any other areas should be thoroughly tested prior to installation on the control system. With a DCS, this testing is done for the user. With an HMI/PLC system, the user has to do the testing. It is clear that DCS and TMR vendors have spent a lot of time and money designing and implementing redundant systems (see Sidebar). It may be wise to take advantage of this expertise if redundancy is needed.
The secondary controller will not have latent flaws detectable only upon switchover, because it is performing exactly the same operations as the primary controller. The secondary controller is synchronized with the primary one, which ensures up-to-the-moment data in the event of a primary controller failure. No matter what control system is selected, it is important to examine all failure points, including power supplies, fuses, and other ancillary components. "No control system is truly redundant if there is a single point of failure, and there usually is such a point somewhere in the system," observes Hausler of ABB. A cooling fan may be a minor component, but its failure could bring down an entire plant if care is not taken when designing redundancy.
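Hausler's single-point-of-failure warning can be made concrete with standard availability arithmetic. This is a hedged sketch: the MTBF and MTTR figures are assumed, and real systems have more failure modes than this simple model captures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of one component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def redundant_pair(a):
    """Two independent units in parallel: the pair fails only if both fail."""
    return 1 - (1 - a) ** 2

a_ctrl = availability(50_000, 8)    # assumed controller MTBF/MTTR
a_pair = redundant_pair(a_ctrl)     # far better than a single controller

# A shared, non-redundant component (say, a single cooling fan) sits in
# series with the redundant pair and caps the whole system's availability.
a_fan = availability(30_000, 4)
a_system = a_pair * a_fan           # dominated by the fan, not the controllers
```

The arithmetic shows why the ancillary components matter: once the controllers are redundant, overall availability is set almost entirely by the weakest non-redundant part in series with them.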
Electric power is expensive, often unreliable, and sometimes dirty. Using too much power, losing power, or running on unclean power can adversely affect the operation of your plant. Accordingly, several companies serving our industry are more than happy to sell you hardware, software, and energy management services that will analyze and optimize your power consumption. Frank Hein, principal engineer at Abbott Laboratories, North Chicago, Ill., has installed an advanced energy optimization system from Pavilion. "With all elements of the program in place, Abbott expects to save at least $273,000 per year, based on the current production rate," he says. However, as Bela Liptak says, before you can control something, you have to measure it. Here's how and why to make power measurements and do your own analysis and optimization. As we can see from Abbott's results, the payoff can be enormous.
4. Longer interruptions can affect critical temperature and humidity tolerances in a process. "Because of the high economic cost of power quality disturbances, many sensitive facilities are beginning to ask electric utilities to guarantee levels of reliability in their energy contracts," says Brown. Although not widespread at this time, such guarantees are already in effect for several car manufacturing facilities in Michigan. If a factory experiences a power quality disturbance that disturbs the production process, it receives a large credit on its energy bill. California companies faced electric supply problems every day in 2001, as the energy crisis there reached epidemic proportions. At Cargill's salt plant in Newark, Calif., interruptions in the power supply caused production delays, leading some longtime customers to switch to alternate suppliers. "Our electricity costs doubled here, even with the incredible energy conservation measures we put in," says Lori Johnson, public affairs manager at Cargill Salt. You need to monitor your incoming electric power to see if you have problems, avoid demand charges, decide what kind of protective and backup equipment to install, and catch your utility with its electric pants down. Mike Powell, president of eLutions (www.elutions.com), says his company's EP Web monitoring and control software was installed in a chocolate manufacturing plant and quickly uncovered a severe problem during training. "The client's energy manager discovered a utility feed that had a 1 MW variance when compared to the other two feeds," Powell explains. "The utility found that the underground feeder was undersized before it entered the facility. Using the EP Web software with the integral billing module, we were able to determine that the higher demand caused by the undersized feeder cost the facility millions of dollars in overcharges. This information was provided to the utility and an adjustment was received."
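A first-pass check for the kind of feeder imbalance the EP Web users found can be as simple as comparing each feed's demand against its peers. The threshold, feeder names, and readings below are hypothetical:

```python
from statistics import median

def flag_imbalanced_feeders(demand_kw, tolerance_kw=500):
    """Return feeders whose demand deviates from the median of all
    feeders by more than tolerance_kw."""
    med = median(demand_kw.values())
    return {name: kw for name, kw in demand_kw.items()
            if abs(kw - med) > tolerance_kw}

# Hypothetical readings: feed_C runs about 1 MW above its peers.
readings = {"feed_A": 3_100, "feed_B": 3_050, "feed_C": 4_100}
suspect = flag_imbalanced_feeders(readings)
```

A real implementation would trend the deviation over time before raising a flag, but even this crude comparison would have surfaced the undersized feeder in the story above.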
Perhaps your utility will help you monitor power. Georgia Power, Atlanta, offers EnergyDirect.com, an online tool that its business customers can use to analyze and track their current and historical energy use and billing information. The standard package is free, and allows businesses to track energy costs; correlate energy usage with production data, changes in operations, or weather; forecast energy costs for the next year; and spot increased energy usage by faulty equipment.
What a Waste
The other side of the energy question is: are you using power efficiently? As John Havener, energy czar at Pavilion Technologies (www.pavtech.com), points out, a modern plant has many energy-consuming operations. "The central utility system can be extremely complex, encompassing boilers, chilled water systems, distilled water systems, compressed air systems, compressor trains, cooling towers, district energy systems, large building HVAC systems, and distributed generation systems," he explains. "The central utility system operators must incorporate a number of factors into their energy management decision, including fuel price, energy demand, energy prices, energy reliability and availability, emissions limits, and corporate profitability." That's a lot of systems to monitor. What uses the most power? "Motors are by far the biggest consumer of electrical power at our site," reports Michael LaRocca, senior process control specialist at Solutia, Sauget, Ill. "Electric heaters for a particular unit operation are second." The Dept. of Energy (www.energy.gov) agrees. "Over 13.5 million electric motors of 1 hp or greater convert electricity into work in U.S. industrial process operations. Industry spends over $33 billion annually for electricity dedicated to electric motor-driven systems," says a 1998 DOE report. "Because nearly 70% of all electricity used in industry is consumed by some type of motor-driven system, increases in the energy efficiency of existing motor systems will lead to dramatic nationwide energy savings." There are dozens of other ways to save energy in a plant. At Cargill Salt, an Energy Team of workers and supervisors was formed to identify conservation measures. Eric Hoegger, refinery project engineer, says changing lighting systems, installing manual on/off light switches, and reducing air pressure throughout the plant helped cut power consumption. "Lighting changes alone saved $24,000 per year," he says.
Other changes included revised work practices and moving loads to off-peak hours. Overall, Cargill reduced power consumption by 42% and shaved its peak demand by 52%. Clearly, if you want to eliminate the energy villains in your plant, you need a plan.
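To see why motor-driven systems dominate the savings discussion, a back-of-the-envelope estimate of one motor's running cost, and the saving from a higher-efficiency replacement, is instructive. Every input below (motor size, load factor, efficiencies, operating hours, tariff) is an assumed figure, not a number from the article:

```python
def annual_motor_cost(hp, load_factor, efficiency, hours_per_year, dollars_per_kwh):
    """Annual electricity cost of a motor (1 hp = 0.746 kW at the shaft)."""
    kw_input = hp * 0.746 * load_factor / efficiency
    return kw_input * hours_per_year * dollars_per_kwh

# 100 hp motor, 75% loaded, 8,000 h/yr, $0.07/kWh (all assumed figures).
old_cost = annual_motor_cost(100, 0.75, 0.90, 8_000, 0.07)  # 90%-efficient motor
new_cost = annual_motor_cost(100, 0.75, 0.95, 8_000, 0.07)  # premium-efficiency swap
annual_savings = old_cost - new_cost
```

Even a five-point efficiency gain on one continuously running 100 hp motor is worth on the order of a couple of thousand dollars a year at these assumed rates; multiplied across hundreds of motors, the DOE's "dramatic nationwide savings" claim is easy to believe.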
What to Monitor
Several companies will be happy to do an energy program for you. They will analyze your plant, make recommendations, and install the necessary equipment. For example, if you hire Rockwell Automation's Energy Consulting Services to optimize your plant, here's what they will do:
THE ONLINE RESOURCE OF CONTROL MAGAZINE
* Tariff Analysis: Using information compiled from energy bills, energy tariffs, and electrical supplier contracts, Rockwell's analysts will identify alternative methods to reduce energy costs, such as aggregating multiple meters or evaluating new supplier, delivery, and tariff options. In other words, they analyze how you are paying for electric service now, and see if they can renegotiate with that supplier or find an alternate source. This may or may not be something you can do yourself.
* Power Quality Studies: Rockwell measures and monitors incoming power to determine the causes of voltage excursions, momentary power losses, phase reversals, and harmonics. They determine the correlation between power quality, premature equipment failures, and the cause and frequency of plant shutdowns. Now this is definitely something you can do yourself. Mark Liemiller, director of marketing at Power Distribution Systems at Siemens (www.sea.siemens.com), says new power monitors are available to make most of the necessary measurements and then put the information in a usable form. "These can be very basic or quite elaborate devices that monitor and display critical power data," says Liemiller.
* Plant Energy Audits: Rockwell determines energy consumption patterns in the plant and identifies equipment and processes that should be modified to reduce power consumption. Again, this is within your grasp. Once you make the necessary power measurements, a host of software is readily available to analyze the data and make recommendations. The approach seems to work. For example, at Chevron's refinery in Richmond, Calif., Rockwell's energy analysis showed that two pumps in the diesel hydrotreater were oversized, sometimes operating at 40% below their best efficiency points. Chevron installed new medium-voltage drives on a 2,250 hp primary feed pump and a 700 hp product pump and saved $330,000 per year.
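The Chevron pump example rests on the pump affinity laws: with a variable-speed drive, shaft power falls roughly with the cube of speed. A minimal sketch, using an assumed turndown speed rather than Chevron's actual operating data:

```python
def vfd_shaft_power_kw(rated_kw, speed_fraction):
    """Pump affinity law: shaft power scales roughly with speed cubed."""
    return rated_kw * speed_fraction ** 3

rated_kw = 2_250 * 0.746                           # the 2,250 hp feed pump, in kW
full_speed = vfd_shaft_power_kw(rated_kw, 1.0)
turned_down = vfd_shaft_power_kw(rated_kw, 0.85)   # assumed 85% speed setpoint
percent_saved = 100 * (1 - 0.85 ** 3)              # power reduction at 85% speed
```

At an assumed 85% speed, the cube law predicts nearly a 39% power reduction, which is why replacing a throttling valve with a drive on an oversized pump can pay back so quickly.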
Monitor Thyself
Engineers have monitored power for decades. "Before the microprocessor revolution, power system engineers recognized three classes of instruments," says Reaz Tajaii, a staff engineer in Power Systems Engineering at Square D (www.squared.com). These include digital fault recorders, which obtain oscillograms of fault currents and voltages to evaluate the effect of voltage sags and to construct sequences of events; transient recorders and oscilloscopes, which capture oscillatory and impulsive voltage transients; and power monitors, which report steady-state currents and voltages and provide basic energy and power calculations. "With microprocessor-based technology, the three classes are slowly merging," he says. "The power monitor is becoming a universal measurement and recording instrument." Power monitors now do much of what digital fault recorders and transient recorders do. In the near future, it is likely that power monitors will take over most of the functionality of these devices. Power meters at substations have changed, too. Now they are called intelligent electronic devices, or IEDs. Modern IEDs measure phase currents, voltages, power factor, frequency, harmonics, and other data, according to Mike Coleman, North American manager for GE Multilin, a division of GE Industrial Systems (www.geindustrial.com). He says the cost has dropped dramatically, too. "Ten years ago, a fully loaded system with metering equipment, oscillography, and protective relays for a feeder would have cost $20,000 to $30,000. Today, a state-of-the-art IED will do all these functions for a fraction of that money," says Coleman. Not only that, an IED can communicate all this information over an Ethernet connection. "Data from an IED connected to the corporate network can be accessed to see the energy consumption of a particular feeder, or an engineer can access the IED to check currents and voltages or analyze an event," explains Coleman.
Soft Switching Technologies (www.softswitch.com) says its I-Grid power monitoring system can be installed for less than $300 per monitor. A web-based monitor plugs into a 120 or 240 VAC outlet and a phone line. It monitors the power line and records voltage profiles for sags, swells, brownouts, overvoltages, and outages, and sends power quality information through the Internet to the monitor owner and to a central server for display on the I-Grid web site (www.i-grid.com). "I-Sense monitors can be installed in front of production equipment or processes within manufacturing facilities, at the service entrance, or on utility substations," says William Brumsickle, SoftSwitching director of technologies. Power equipment manufacturers have been embedding power monitoring functions and communications capabilities into panelboards, transformers, and circuit breakers for several years. Such smart power distribution equipment can monitor its own health and communicate with any device that wants the data. Rick Grove, staff project engineer at IMC Phosphates, Mulberry, Fla., uses such equipment to monitor total incoming power. "GE Multilin 750 relays, which are used to trip incoming breakers, are connected by Modbus to Allen-Bradley PLCs, and then to MSSQL via RSSQL, and then to web servers," says Grove. This means that Grove is getting the data he needs from circuit breakers, then passing it through the system all the way to a web server for display and analysis. Metering potential transformers and current transformers are
mounted within the metal-clad switchgear, and are the inputs to the Multilin 750 relays. If you have a DCS in your plant, you might be able to hook directly into such equipment. Honeywell, for example, can plug right into third-party multifunction power meters, relays, and motor monitors. "We make nodes such as the Communications Link Module that interface to these meters via Modbus," says Mark Converti, manager of power generation marketing at Honeywell (www.honeywell.com). "The link feeds directly into Honeywell's Power Monitoring and Control, Power Emergency Load Shed, and Tie Line Control software packages. Power system faults are initially detected at the power meters. Later, the cause of the fault and any harmful effects can be determined by uploading the waveform data captured by the meters to our Plant History database." It might be a good idea to check with your control system vendor to see what type of similar power monitoring packages they offer. You may be able to tie directly into your existing switchgear and panelboards to get the power quality information you need. If you can't get a free ride from existing power monitors or smart switchgear, you may have to install the necessary equipment yourself. "Most of the time, the measurement is done at the switchgear or distribution center," advises Jay Park, Power Rich System group product manager at ABB. Each feeder supplies power to different sections or operations of a plant. Power quality meters and/or IEDs from companies such as Siemens, Square D, Landis+Gyr, General Electric, and others are reasonably easy to install at key distribution points. In almost all cases, they have communications capabilities and software support. "All markets use the same devices for data-gathering," says Park, implying that integration is easy. Portable tools and analyzers can be used to perform power system surveys, track down problems, or monitor units temporarily. They also can be installed permanently.
Like the IEDs, the portable units come with software support and communications capabilities.
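Much of the integration described above comes down to reading registers from a meter over Modbus and decoding them into engineering values. The sketch below shows only the decoding step; the two-registers-per-float, high-word-first big-endian layout is a common convention but varies by vendor, so treat these register values as assumptions:

```python
import struct

def decode_float32(reg_hi, reg_lo):
    """Combine two 16-bit Modbus holding registers into an IEEE-754 float
    (high word first, big-endian: a common but not universal convention)."""
    return struct.unpack(">f", struct.pack(">HH", reg_hi, reg_lo))[0]

# Hypothetical raw register pairs as read from a power meter.
raw = {
    "volts": (0x43E6, 0x8000),   # decodes to 461.0
    "amps":  (0x41C8, 0x0000),   # decodes to 25.0
    "pf":    (0x3F66, 0x6666),   # decodes to ~0.9
}
values = {name: decode_float32(hi, lo) for name, (hi, lo) in raw.items()}
```

The vendor's register map (which addresses hold volts, amps, power factor, and in what word order) is the piece you must look up per device; getting the word order wrong produces wildly wrong but plausible-looking numbers.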
Modern control systems have brought us many benefits, but along with the benefits have come problems. One of the major benefits has been the increased information available to the operator, while one of the problems is what to do with all that information. Computer-based control systems also have increased the level of abstraction of the process. The operator has more and more information, but with a smaller and smaller window to look through, resulting in a higher and higher level of abstraction. Increased complexity and sophistication, increased automation, control concentration and separation, and additional layers of control have further increased the level of abstraction. Some systems are so abstract they approach the complexity of a video game. To compensate for this abstraction, control systems have provided additional operator interface functions, and system designers have increased the number of alarms and alerts to help keep the operator informed. Alarms increase the amount of information going directly to the operator but they often are a source of operator overload and confusion. In older control systems, hardwired panels were used to provide alarm annunciation. The panels were large but limited in capacity, and so by their very nature tended to limit the number of alarms. In modern control systems, alarms are generally software-driven and are essentially free for existing process variables. Little incentive to limit their creation has led to a laissez-faire attitude toward alarms. We can configure a new alarm at the flick of a finger, and there has been a lot of flicking going on. Also, regulations from OSHA and the EPA as well as voluntary programs such as ISO 9000 and 14000 have led to the addition of alarms, sometimes with little consideration of the effect on alarm loads at the system level. 
Some notable examples of alarms causing problems include the Three Mile Island accident in 1979, where important alarms were missed; the Texaco refinery explosion at Milford Haven in 1994, where, in the 10 minutes prior to the explosion, two operators had to respond to 275 alarms, peaking at three per second; and the recent Esso Longford gas plant explosion in Australia, where some experts concluded that operators routinely ignored alarms leading up to the explosion because, in the past, ignoring them had no negative impact.
Alarming Growth
Alarm growth is a natural outcome of the increased information load and abstraction of the modern control system. However, if alarms are not dealt with in a disciplined manner, uncontrolled alarm growth can result, which can lead to out-of-control alarm systems. If your alarm system has one or more of these characteristics, it may be out of control:
1. Many alarms during abnormal situations.
2. Many alarms on during normal operation.
3. High alarm loading rates (alarms per unit time, alarms per operator, alarms per event, etc.).
4. Incidents or near-incidents where operators missed key data provided (or not) by the alarm system.
5. A large number of high-priority alarms.
6. Alarms that are on for long periods of time.
7. Alarms going off and on regularly or intermittently (chattering or transient).
8. Lost count of the number of alarms.
9. Lost track of alarm setpoints or why they were set there in the first place.
10. Don't know which alarms are safety, operational, environmental, informational, etc.
11. Operators don't know what particular alarms mean, or there may be inappropriate alarms.
12. Operators don't know what to do when a particular alarm occurs.
13. Don't know when the alarms were last tested.
14. Alarms that are not useful, or even confusing or obscuring.
15. A large number of defeated alarms.
16. No procedure or policy on alarm creation; i.e., anyone can create an alarm or change the limits on his or her own authority.
17. Alarm documentation is out of date or nonexistent.
18. No written procedures or policies on alarms.
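Several of the symptoms above, notably high alarm loading rates (item 3) and chattering alarms (item 7), can be quantified directly from the alarm event log. A minimal sketch, assuming the log is available as (tag, timestamp-in-seconds) pairs:

```python
from collections import Counter

def alarm_rate_per_10min(events, window_start, window_end):
    """Average alarms per 10 minutes over [window_start, window_end) seconds."""
    n = sum(1 for _, t in events if window_start <= t < window_end)
    ten_minute_blocks = (window_end - window_start) / 600
    return n / ten_minute_blocks

def chattering_tags(events, max_per_hour=15):
    """Tags annunciating more than max_per_hour times; assumes the event
    list covers roughly one hour of operation."""
    counts = Counter(tag for tag, _ in events)
    return {tag for tag, n in counts.items() if n > max_per_hour}
```

Running metrics like these before and after rationalization also gives the benchmarking data the later steps in this article call for.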
* Alarm maintenance and testing.
* Alarm rationalization for existing systems and as part of continuous improvement.
It should be noted that the control system equipment's capabilities, as well as any third-party add-ons you have or will have, can affect what you can do with an alarm system, and thus can impact alarm management procedures, practices, and alarm rationalization.
Rationalization Is Key
Alarm rationalization is the systematic process of optimizing the alarm database for the safe and efficient operation of the facility. This process normally results in a reduction in the total number of alarms, the prioritization of alarms, the validation of alarm parameters, and the evaluation of alarm organization, presentation, and functionality. Rationalization also can, in some cases, identify the need for new alarms or for changes in the process, equipment, or instrumentation. It can be done to fine-tune an existing good alarm system, but it is more commonly done where the alarm system has gotten out of control. Note that alarm rationalization is not a one-shot process. The forces of chaos are out there looking for any opportunity to take control of any complex system, and alarm systems are no exception. Over time, people will come and go, the process will change, operating philosophies will change, marketing will stick its nose in things, and the hardware system will change, improve, or degrade. All these are opportunities for changes, or lack of changes, to the alarm system, and they indicate the need for periodic alarm rationalization. Training, procedures, procedural controls, and auditing are some of the tools used to maintain an optimum alarm system for effective and safe operation of a plant.

How to Rationalize Alarms

Alarm rationalization is a structured process that generally involves an approach similar to that of a HAZOP team, with representatives from operations, maintenance, engineering, and safety. It is important to have operator input on this team. It is also important to have an organized plan to perform the alarm rationalization, with an established procedure and practices. While alarm rationalization will vary from company to company and plant to plant, the methodology generally consists of eight basic steps (Figure 1). These steps are presented serially but in fact can overlap or run in parallel in some cases.
Operators and operating staff interviews also should be used as metrics. These people are on the front line and have good firsthand information on how the plant is operating.
Data gathered on each alarm in the initial benchmarking step is used here to characterize individual alarm performance and its correlation with other alarms, the process, and the equipment.
5. Prioritize Alarms
Here you determine the importance or significance of the alarm through a ranking scheme. This is normally done by risk analysis to determine how important it is that the operator detect the alarm and perform the expected action when it occurs. Prioritization helps ensure the operator knows the importance of the alarm itself, as well as its importance relative to other alarms. The prioritization scheme is generally limited by the control system's capabilities and any third-party alarm management software on the system. The number of priority levels should be kept to a minimum to minimize operator confusion. The number of alarms prioritized in each category (high, medium, low) generally can be visualized as a triangle with EEMUA Guideline 191 alarm proportions (see sidebar, "Few Solid Guidelines"). Informational alerts should be kept out of the prioritization scheme if possible.
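The EEMUA 191 triangle (commonly quoted as roughly 80% low, 15% medium, 5% high priority) gives a simple sanity check on a configured alarm database. A sketch, with the target shares and tolerance treated as adjustable assumptions:

```python
def priority_distribution(priorities):
    """Fraction of configured alarms at each priority level."""
    total = len(priorities)
    return {p: priorities.count(p) / total for p in ("low", "medium", "high")}

def off_target_priorities(dist, targets=None, slack=0.10):
    """Priorities whose share deviates from the target by more than slack.
    Default targets follow the approximate EEMUA 191 80/15/5 split."""
    targets = targets or {"low": 0.80, "medium": 0.15, "high": 0.05}
    return {p for p, t in targets.items() if abs(dist.get(p, 0.0) - t) > slack}
```

A database where most alarms are configured high priority will fail this check immediately, which is symptom 5 from the out-of-control list earlier in the article.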
References:
1. Johannes Koene & Hiranmayee Vedam, "Alarm Management and Rationalization," Third Annual Conference on Loss Prevention, 2000. 2. A. Nochur, H. Vedam, & J. Koene, "Alarm Performance Metrics," IFAC 2001, www.asmconsortium.com. 3. Edward Marszal, "The Longford Gas Plant Explosion: Could Alarm Management Have Prevented This Accident?" Exida, 2003, www.exida.com. 4. W.H. Smith, C.R. Howard, & A.G. Foord, "Alarms Management--Priority, Floods, Tears, or Gain?" 4-sight Consulting, 2003, http://4-sightconsulting.co.uk.
5. Yoshitaka Yuki & Kimikazu Takahashi, "Event Analysis Based on Causal Relation of Events, Alarms, and Operator Actions," ISA, 1999. 6. E.H. Bristol, "Improved Process Control Alarm Operations," ISA, 1999. 7. Yoshitaka Yuki & Jim Parks, "Alarm and Event Analysis for Detecting Productivity Bottlenecks," ISA, 1999. 8. "Use Critical Condition Management to Improve Your Bottom Line," ARC Strategies, ARC Advisory Group, April 2002. 9. Donald Campbell Brown & Manus O'Donnell, "Too Much of a Good Thing? Alarm Management Experience in BP Oil, Part I: Generic Problems With DCS Alarm Systems," www.asmconsortium.com. 10. Dick Perry, "Alarm Systems and Their Role in Abnormal Situation Management, Part II of IV," Instrument and Controls, SAIMC, July 2000, www.instrumentation.co.za. 11. C.T. Mattiasson, "The Alarm System From the Operator Perspective," ASM Consortium, www.asmconsortium.com. 12. D. Shook, "Alarm Management White Paper," Matrikon.
Suppliers Exist
There are a number of suppliers, and the market is growing, for alarm management and rationalization products and related services. The key is to find the products, services, and experience that most closely match your in-house capabilities and philosophy. Some of the companies that supply alarm management and rationalization products and/or services (in alphabetical order) are Control Arts (www.controlartsinc.com), Exida (www.exida.com), Honeywell (www.assetmax.com), Matrikon (www.matrikon.com), Process Automation Services (www.pas.com), Process Systems Consultants (www.prosysinc.com), and TiPS (www.tipsweb.com). There also are some companies that provide expert systems for improving the operator decision process and detecting developing problems before the alarm stage, thus reducing the alarm load. Examples are Gensym (www.gensym.com) and Nexus Engineering (http://www.nexusengineering.com). Many DCS and SCADA systems and vendors provide alarm management features or products. It generally is more cost-effective to use whatever features your system provides rather than a third-party add-on. However, some of these are somewhat primitive and don't provide the necessary functionality by themselves, though they are generally improving. There also are some add-on alarm products on the market that enhance a control system's basic alarm capabilities by providing online alarm management (alarm controls, alarm parameter management, alarm documentation, alarm system auditing, change control, etc.), alarm filtering, cause-and-effect analysis, alarm patterns, and dynamic reconfiguration of the alarm system for varying operating conditions.
Dealing with legacy equipment is occupying many engineers these days as they struggle to bring ancient control systems into the 21st century and meet the ever-increasing hunger of information technology (IT) software. Everybody wants that data: accountants, bean counters, upper management, plant engineers, process engineers, operations engineers, maintenance people, and even the governor of New York. Some of the software packages you must feed include process historians, asset management, ERP, MES, LIMS, CMMS, SCM, and SPC, to name a few. The Catch-22 is that many of these old systems, despite their inadequacies, are still performing incredibly well the jobs they were designed to do. It's hard to discard or even alter an old system that's in the midst of setting a reliability record. In this article, we'll look at a few ways that may help you extract data from your legacy systems.
Honeywell, for example, claims that it still supports the very first TDC 2000 from 1975. David Novak, control system engineer at BASF in Monaca, Pa., is a Honeywell TDC 3000 user, and has been constantly upgrading his systems to keep up with modern technology. "Most of our changes are expansions, not total upgrades," he explains. "With very few exceptions, the control system has all the functionality of a new system." If your old Honeywell DCS has been kept up to snuff, connecting it to a modern IT system appears to be a piece of cake. "A modern system can easily accommodate an integration strategy, keeping the existing application logic and user interface of the legacy system," says Garry Lee, information management & analysis product manager for Honeywell Industry Solutions (www.acs.honeywell.com). Honeywell's recommended solution is to use its Experion PKS data historian, which can deal with Honeywell's networks, plus other industry standards such as Modbus, 4-20 mA, and RS-232/422 serial communications. Foxboro/Invensys (www.foxboro.com) recently announced a series of I/A software modules that work on legacy systems up to 15 years old. Foxboro also encourages its users to migrate their legacy systems over to fieldbus. "In situations where the existing system is already doing a good job controlling the plant, users should seriously consider upgrading to Foundation fieldbus (FF)," says David Shepard, vice president at Invensys. "It allows migration to be performed on a maintenance budget, and preserves the existing investment in hardware, software, training, and intellectual property." Shepard says automation vendors are split on the fieldbus issue. While some vendors are telling customers that they have to replace their systems to take advantage of fieldbus, other vendors provide a migration path to FF from existing systems. Each approach--bulldozing and migration--has its advantages and disadvantages.
Upgrading to fieldbus can be a bit expensive, especially if all you want is to get a process data value out of a field instrument. Walter Driedger, senior process control engineer at Calgary, Alberta-based Colt Engineering (www.colteng.com), says the cost premium is about $500 to $2,000 per point. "A standard 4-20 mA transmitter costs about $800. The same with FF is about $1,300," explains Driedger. "To get a valve with FF you have to get an FF positioner instead of a simple I/P. That adds about $1,000 to the cost. A simple switching valve has two limit switches and a solenoid. To connect these to an FF module, such as TopWorx, adds about $2,000 to the cost." Emerson Process Management recently announced a migration path that not only upgrades old Fisher-Rosemount systems, it upgrades everybody else's systems, too. The path migrates legacy systems to Emerson's DeltaV platform, and leaves the original I/O intact. Emerson claims that this upgrade is often less expensive and faster to implement than upgrading to the latest version of a legacy system. Emerson says it supports old Fisher Provox and RS3, Bailey Infi90 and Net 90, Honeywell TDC 2000/3000, GSI D/3, GE Genius I/O, Siemens Teleperm, Moore APACS, Taylor Mod 300, Yokogawa, and Foxboro Spec 200, Spectrum, and I/A control systems. The Siemens/Moore Products APACS+ is fairly easy to upgrade. At Cytec, to add S88 capability, Ward says they expanded their existing system with the help of Siemens and systems integrator Avid Solutions (www.avidsolutionsinc.com) and then installed Siemens' ProcessSuite Batch Manager. "We were able to transition the plant to the batch manager system with minimal production outages, using a series of short partial shutdowns followed by a one-week cutover," reports Ward. Once that was accomplished, they installed OSIsoft's PI historian. This permits operators and engineers to gain access to current and historical data.
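Driedger's per-point figures can be tallied into a rough upgrade-premium estimate. The sketch below uses only the prices quoted in the article; the point mix passed to the function is hypothetical:

```python
# Per-point figures quoted by Driedger in the article.
COST_4_20MA_TRANSMITTER = 800    # standard 4-20 mA transmitter
COST_FF_TRANSMITTER = 1300       # same transmitter with Foundation fieldbus
PREMIUM_FF_POSITIONER = 1000     # FF positioner instead of a simple I/P
PREMIUM_FF_SWITCH_MODULE = 2000  # FF module (e.g., TopWorx) for a switching valve

def ff_upgrade_premium(transmitters, control_valves, switching_valves):
    """Total extra cost of choosing FF over conventional 4-20 mA wiring
    for a given (hypothetical) mix of points."""
    return (transmitters * (COST_FF_TRANSMITTER - COST_4_20MA_TRANSMITTER)
            + control_valves * PREMIUM_FF_POSITIONER
            + switching_valves * PREMIUM_FF_SWITCH_MODULE)
```

For example, ten transmitters, four control valves, and two switching valves carry a premium of $13,000, which is consistent with Driedger's $500 to $2,000 per point.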
The historian data is also accessed remotely from network and dial-up connections. Over on the PLC side of the control business, similar support exists. "We have migration paths for products that were developed 20 years ago," says Nelson of Schneider. This is important, because changing out a PLC processor is much easier than replacing an entire process control system, so control engineers upgrade PLC systems all the time. Going from an old PLC to a new one could cause you to lose all your application programming effort if the supplier hasn't provided a migration path.
"A typical data model for users these days is to have a data historian connected to each legacy device," says Michael Paulonis, technical associate at Eastman Chemical, Kingsport, Tenn. "User-written applications will use the data historians' application program interfaces [APIs]. This removes the need for the user to be proficient in all operating systems and legacy APIs." Process historian vendors have to be proficient in control system APIs, or they wouldn't sell any software. OSIsoft, for example, has been selling process historian software for 20 years. Over that time, the company has developed interfaces to just about every control system on the market. "We have more than 350 interfaces," says Kennedy. Arcom Control Systems, Stillwell, Kan., and IBM UK, Hursley, England, have collaborated on an MQSeries telemetry integration solution (as IBM calls it), which connects to legacy control and SCADA systems. Arcom supplies its Director Series hardware and software interfaces to various legacy systems, and IBM UK provides its WebSphere enterprise software, which has links to SAP, Oracle, and similar IT packages. Each Director module has protocol drivers that access data from field devices. Drivers available include HART, Modbus, TCP/IP, terminal server, UDP, PPP, Telnet, and other systems primarily used in the oil & gas, electric utility, water, and telecom industries. InStep Software, Chicago, says it has installed its eDNA process historian on just about everything. The company seems to relish tackling tough and obscure systems. Anthony Maurer, a partner at InStep, says the company can reach legacy systems with file-based transfers, serial interfaces, parallel interfaces, TCP/IP socket reads, DMA, and printer device sniffing techniques. InStep has been interfacing to control systems for 15 years, starting with nukes. "VMS-based systems are the easiest," says Maurer, "and dedicated nuclear-grade plant process computers are the hardest."
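The data model Paulonis describes can be pictured as a thin layer: applications call the historian, and only the historian talks to the legacy systems. The sketch below is a purely hypothetical interface, not any vendor's actual API, meant to show why user code no longer needs to know the legacy protocols:

```python
class HistorianClient:
    """Hypothetical historian wrapper illustrating the data model above:
    applications query the historian's API and never touch the legacy
    control-system APIs underneath. Nothing here is a real vendor interface."""

    def __init__(self):
        self._points = {}  # tag -> list of (timestamp, value) samples

    def record(self, tag, timestamp, value):
        """Store one sample (in a real historian, fed by a legacy interface)."""
        self._points.setdefault(tag, []).append((timestamp, value))

    def snapshot(self, tag):
        """Most recent value for a tag, or None if the tag is unknown."""
        history = self._points.get(tag)
        return history[-1][1] if history else None

    def history(self, tag, start, end):
        """All samples for a tag with start <= timestamp <= end."""
        return [(t, v) for (t, v) in self._points.get(tag, [])
                if start <= t <= end]
```

An application asking for `snapshot("TI101")` neither knows nor cares whether the tag originates in a 1975 SCADA system or a modern DCS; that isolation is the whole point of the historian layer.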
Although process historians do make your task much easier, we've heard that this convenience and process expertise comes at a very steep price, up to $1,000 per data point.
If you don't have programming talent in house, you may want to bring in a systems integrator who knows your particular system and knows how to write C code and APIs. Of course, if you have clever people around, who needs outside experts? George Lister, computer technician at U.S. Gypsum, Sweetwater, Texas, has a sweet and simple solution. "Most legacy systems have a serial port. The trick is to figure out what data you want to export, format it for a serial stream, and then output the data," he says. Sounds easy. But how do you output it? Port the legacy data to a printer port as if it were a print command. Format the print command in a form that will be accepted as a serial stream. Lister connected his Foxboro Fox III SCADA system (circa 1975) to a Unix system that supported multiple serial ports. "We configured the serial port on the Unix box to accept the serial streamed data and imported it into a database record. We used Foxbase for Unix software. The database software writes the serial streamed data into a record delimited by spaces. Once we got the data into the database, we could do anything we wanted with it." Walt Anderson at PCS Nitrogen wound up replacing his legacy Foxboro system with another legacy system. In 2001, PCS Nitrogen closed down two nitrogen plants in Iowa and Nebraska, both of which had three-year-old Moore APACS+ systems. So Anderson shipped them down to Georgia to replace the aging Foxboro Spec 200 controllers. "In 2001, we upgraded 1970s technology to 1990s technology," quips Anderson. He also installed a Wonderware HMI/SCADA system to replace the old Foxboro Spectrum supervisory computer. "We used the Internet to find specialized data conversion instrumentation," he reports. The system had been modified in the 1980s with a Transmation temperature monitor, Fischer & Porter MicroDCI single-loop controllers, and a Tensa Unix box with optimization software.
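The receiving end of Lister's trick, turning a space-delimited serial stream into database records, can be sketched in a few lines. The field names and line layout below are hypothetical; Lister's Foxbase setup would differ in its particulars:

```python
def parse_serial_record(line, fields):
    """Split one space-delimited line from a legacy serial stream into a
    named record, converting numeric fields where possible. Field names
    and line layout are illustrative assumptions."""
    values = line.split()
    if len(values) != len(fields):
        raise ValueError(f"expected {len(fields)} fields, got {len(values)}")
    record = {}
    for name, raw in zip(fields, values):
        try:
            record[name] = float(raw)  # numeric where possible
        except ValueError:
            record[name] = raw         # otherwise keep the text as-is
    return record
```

A line such as `"TI101 72.5 OK"` with the assumed fields `["tag", "value", "status"]` becomes a record ready for a database insert; in a live setup the lines would be read from the serial port the legacy system is "printing" to.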
The operators were not entirely happy at the prospect of losing their familiar Spec 200 analog controller faceplates, so Anderson installed thin clients on the Wonderware system. "We formatted thin client displays so they looked exactly like a Spec 200 faceplate," says Anderson, "and installed thin clients in place of Spec 200s." With a touchscreen, the operators could adjust the loops the same way they did on the old hardware.
"CitectSCADA comes with over 170 drivers bundled with the base product at no extra charge," reports Richard Bailey, product marketing manager, Citect. "More and more customers are extending the life of their legacy systems with add-on tools. Software modules connect seamlessly to the legacy system and allow bidirectional data transfer, including migrating legacy data into a format and location that is useful." The beauty of an HMI/SCADA system or process historian is that once it's installed, you have access to all the data in the legacy control system, so you don't have to work with additional I/O hardware or software drivers. But if you need to work with the actual I/O, there is help available from hardware vendors such as Sixnet, Opto 22, and Lantronix. "An Opto system can be added to the existing control architecture without affecting any of the core processes," says David Crump of Opto 22 (www.opto22.com). "The specific machines, equipment, and system components that are involved almost without exception have attachment capabilities to our hardware. And that attachment can be direct, via serial communication, or via analog, digital, or serial modules." This worked at the Callaway Golf Co. ball manufacturing plant in Carlsbad, Calif. Callaway recently installed a modern control system in a brand new plant, but neglected to provide any data acquisition capability. Although it was a state-of-the-art control system, it wasn't open, so getting data posed a problem. "The Snap Ultimate I/O system monitors thermocouples, pressure and conductivity sensors, and other equipment used in the production of golf balls," says Crump. "We didn't modify any of the actual production processes, but came in under the control layer, at the instrumentation level, strictly for the purposes of capturing data." Such add-on I/O hardware is available from a host of companies. Unless you like contacting every vendor in the world, your best bet might be to prowl the Internet.
At some point, you have to ask yourself if it's worth the time, trouble, and expense of working with a legacy system. You might be postponing the inevitable, because all systems die eventually. However, if funding and other problems dictate that you work with what you have, remember that data in any legacy system can be accessed. It just takes time and money.
The impact of HART Communication on the process automation industry is immeasurable. No other field communication technology comes close in size, scope of installation, or overall effectiveness. HART is the industry's most cost-effective, easy-to-use, and low-risk communication solution, and a key enabler for asset management and process improvement. HART is best known for ease in digital process instrument calibration, but the modern HART device has many more capabilities that are increasingly useful to the end user. Most process automation suppliers now offer control system interfaces, remote I/O systems, and PC-based software applications that leverage the intelligence of HART-smart field devices to deliver continuous, real-time device diagnostics, multi-variable process information, and much more. Real-time HART integration into DCS architectures enables users to get the full benefit from intelligent devices, making HART Communication an important part of plant applications for control, safety, and asset productivity. Continuous, intelligent communication between the field device and control system allows problems with the device, its connection to the process, or inaccuracies in the 4-20 mA control signal to be detected automatically within seconds, all of which enables proactive action to avoid process disruptions and unplanned shutdowns.
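One way such a system can catch an inaccurate 4-20 mA signal is to compare the value implied by the loop current against the process variable the device reports digitally. The sketch below shows that cross-check in its simplest form; the 1% tolerance is an assumption for illustration, not a figure from the HART specification:

```python
def analog_pv(current_ma, range_lo, range_hi):
    """Engineering value implied by a 4-20 mA loop current
    over a linear range (4 mA -> range_lo, 20 mA -> range_hi)."""
    return range_lo + (current_ma - 4.0) / 16.0 * (range_hi - range_lo)

def loop_current_mismatch(current_ma, digital_pv, range_lo, range_hi,
                          tolerance_pct=1.0):
    """Flag a discrepancy between the analog 4-20 mA signal and the PV
    the device reports digitally. Tolerance (percent of span) is an
    illustrative assumption."""
    implied = analog_pv(current_ma, range_lo, range_hi)
    span = range_hi - range_lo
    return abs(implied - digital_pv) > tolerance_pct / 100.0 * span
```

On a 0-200 degree range, a 12 mA signal implies 100 degrees; if the device digitally reports 110 degrees, the check flags a loop problem worth investigating before it disrupts control.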
The term Safety Instrumented Function, or SIF, is becoming common in the world of safety instrumented systems (SISs). It is one of the increasing number of S-words--SIS, SIL, SRS, SLC, etc.--that are coming into our safety system terminology. The definition of a SIF as provided in IEC standard 61511, "Functional safety: Safety Instrumented Systems for the process industry sector," leaves a bit to be desired as a practical definition, and the application of the term leaves many people confused. IEC 61511 defines a safety instrumented function as "a safety function with a specified safety integrity level which is necessary to achieve functional safety." A safety instrumented function can be either a safety instrumented protection function or a safety instrumented control function. A safety function is further defined in 61511 as "a function to be implemented by a SIS, other technology safety-related system, or external risk reduction facilities, which is intended to achieve or maintain a safe state for the process, with respect to a specific hazardous event." The standard, however, uses the terms SIS and SIF somewhat interchangeably in places. From this definition we can also see that there are two types of safety instrumented functions. The first is a safety instrumented protection function, which is a safety instrumented function operating in the demand mode. The second is a safety instrumented control function, which is a safety instrumented function operating in the continuous mode. Let us look at some other definitions of SIF that may make things a bit clearer. In their book, Safety Integrity Level Selection: Systematic Methods Including Layer of Protection Analysis, Ed Marszal, PE, and Eric Scharpf describe it as "a function that is a single set of actions that protects against a single specific hazard."
The term SIF often refers to the equipment that carries out the single set of actions in response to the single hazard, as well as to the particular set of actions itself. From these sources we might define the SIF as an identified safety function that provides a defined level of risk reduction or safety integrity level (SIL) for a specific hazard by automatic action using instrumentation. A SIF is made up of sensors, logic solver, and final elements that act in concert to detect a hazard and bring the process to a safe state. Another view of a SIF is that of an instrument safety loop that performs a safety function which provides a defined level of protection (SIL) against a specific hazard by automatic means and which brings the process to a safe state.
What a SIF Is
Both these definitions define the key properties of a SIF. Some examples of SIFs are:
* High pressure in a vessel opens a vent valve: The specific hazard is overpressure of the vessel. The high pressure is detected by a pressure-sensing instrument, and logic (PLC, relay, hardwired, etc.) opens a vent valve, bringing the system to a safe state.
* High temperature in a furnace that can cause tube rupture shuts off firing to the furnace: The specific hazard is tube rupture. Instrumentation automatically causes a main fuel trip that removes the heat, bringing the system to a safe state.
* Flame-out in an incinerator that can lead to a release of toxic gas causes process gas feed to be shut off: The specific hazard is a flame-out. The automatic instrument protective action is to close the process gas feed to the incinerator, which stops any toxic gas release, bringing the system to a safe state.
* Flame-out in an incinerator that could cause fuel gas accumulation and explosion causes a main fuel gas trip: The specific hazard is a flame-out. The automatic instrument protection action is a main fuel gas trip, which cuts off the fuel and prevents fuel gas accumulation, bringing the system to a safe state.
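The examples above share one shape: a sensor detects the specific hazard, a logic solver decides, and a final element brings the process to a safe state. Purely as an illustration of that demand-mode logic (trip points and output names are hypothetical, and real SIF logic runs in a certified logic solver, not application code), the first and last examples might be sketched as:

```python
def sif_high_pressure_vent(pressure, trip_point):
    """Demand-mode logic for the first example above: high vessel
    pressure opens a vent valve. Returns the commanded state of the
    final element. Trip point and state names are illustrative."""
    return "VENT_OPEN" if pressure >= trip_point else "VENT_CLOSED"

def sif_flame_out_fuel_trip(flame_detected):
    """Logic for the incinerator examples: loss of flame trips the
    main fuel gas valve. De-energize-to-trip convention is assumed:
    no flame means fuel off."""
    return "FUEL_ON" if flame_detected else "FUEL_TRIPPED"
```

Each function answers to exactly one hazard with one defined set of actions, which is precisely what distinguishes a SIF from the SIS that hosts it: one logic solver typically executes many such functions.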
However, while you have a single hazard (and generally a single consequence) associated with a SIF, you can have multiple initiating causes, each with its own frequency of occurrence. For example, overpressure of a vessel due to loss of cooling (with a consequence of vessel rupture and fire/explosion) could be caused by loss of cooling water supply, loss of cooling water pump(s), temperature control loop failure, plugging of tubes, etc. Each of these initiating causes can have a different frequency of occurrence, and thus a different risk (consequence x frequency), for the same SIF. When determining the target SIL of a SIF with multiple initiating-cause scenarios, the highest SIL of all the scenarios is normally used. In cases where there are a large number of causes or multiple scenarios with the same or similar SIL (risk), a look at the overall risk may be warranted and may result in a higher SIL for the SIF. Fault tree analysis or other quantitative methods are sometimes used for this purpose.

William L. (Bill) Mostia Jr., PE, League City, Texas, has more than 25 years' experience applying safety, instrumentation, and control systems in process facilities. He may be reached at wmostia@msn.com.
17. Definition of all the interfaces between the SIF and any other systems (including the basic process control system and operators).
18. A description of the modes of operation (normal and abnormal) of the plant that affect the SIF, and its response or operational mode for these modes of operation (startup, reduced rates, high rates, shutdown, different product grades, known upsets, etc.).
19. Application software safety requirements pertinent to the SIF.
20. Requirements for maintenance, testing, or operational overrides/inhibits/bypasses, including how they will be initiated, how they will be monitored while in place, and how they are cleared.
21. Identification of any action necessary to achieve or maintain a safe state in the event of fault(s) being detected in the SIF. Any such action shall be determined taking account of all relevant human factors, procedures, training, etc.
22. The mean-time-to-repair (or restoration) that is feasible for the SIF, taking into account the in-house maintenance capabilities, procedures and practices, spare parts availability, etc. If the required maintenance is out of house, then capability, travel time, location, spares holding location, service contracts, environmental constraints, etc., must be considered.
23. Maximum allowable spurious trip rate. This should also consider whether there are any safety issues with spurious trips, such as the potential hazards involved in restarting the SIF.
24. For SIFs that have multiple final elements affecting different process functions (different equipment, valves that isolate or vent different process streams, etc.), identify any possible dangerous combinations of output states (where not all the final elements operate properly) that need to be avoided.
25. Identify the extremes of all environmental and abuse conditions likely to be encountered by the SIF. This may require consideration of temperature, humidity, contaminants, grounding, electromagnetic interference/radio frequency interference (EMI/RFI), shock/vibration, electrostatic discharge, electrical area classification, flooding, lightning, human factors, and other related factors.
26. Definition of the requirements for the SIF necessary to survive a major accident event, e.g., the time a valve must remain operational during a fire.
I was brought over to Millennium Inorganic Chemicals from Equistar in 1998 to manage new capital projects. Market demand for titanium dioxide was expected to outpace installed capacity by 2005 due to the closure of outdated and noncompetitive lines, and Millennium wanted to know what could be done to meet this demand while maximizing shareholder value. The traditional--and assumed--answer was to build new plants and additions. You know the routine: Sales & Marketing tells management X% more of this-and-that product is needed. Management then tells finance to tell manufacturing to spend big money to build or expand facilities to meet those demands. But as many know, things often start to go off the track at that point. Eyes grow big in manufacturing: "Oh, wow, we're so lucky. We get to build the next great facility. We've got an unlimited budget. This is tremendous. We're going to keep our jobs going for the next 20 years." New plants take money (a lot of money), people, and time to build, and even more time to pay back. Therefore, they don't necessarily maximize profit. To maximize profit, a company must maximize the value added for each incremental investment dollar spent. In other words, a sound business case must be made. That became my first task. The idea was to follow the usual build formula, where finance and manufacturing negotiate with sales to determine how much product can be moved at a range of prices. We then developed price/output/profit curves. Such curves are rarely linear, and profits don't always rise with volume. Once we agreed on the point of maximum projected profit, my job was to give sales that level of production as cost-effectively as I could, using our existing capacity. If we then found that we were still capacity-constrained, our next step was line expansion, and finally, a new plant.
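Picking the point of maximum projected profit from a price/output/profit curve is a simple table lookup once the curve is agreed on. The sketch below uses entirely invented numbers to illustrate the point made above, that profit does not always rise with volume:

```python
def max_profit_point(curve):
    """Return the row of a price/output/profit table with the highest
    projected profit. The curve data is hypothetical, not Millennium's."""
    return max(curve, key=lambda row: row["profit"])

# Invented example: past a certain volume, falling prices erode profit.
curve = [
    {"output_kt": 80,  "price_per_ton": 2400, "profit": 30_000_000},
    {"output_kt": 100, "price_per_ton": 2200, "profit": 42_000_000},
    {"output_kt": 120, "price_per_ton": 1900, "profit": 38_000_000},
]
```

Here the profit-maximizing output is 100 kt, not the largest volume: pushing to 120 kt would move more product but earn less, which is exactly why the build-more reflex needs a business case behind it.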
So we set out to find if a larger, hidden plant lay within the Ashtabula TiO2 facility. Two intertwined avenues were explored: process reliability and its evil twin, process variability. I define process reliability broadly here as the percent of time plant assets are available for their intended purpose at full design capacity; downtime or poor performance due to plant problems or constraints is addressed, while shutdowns or slowdowns ordered by management are excepted.
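That broad definition turns into a simple calculation once you decide what counts against the plant. The sketch below is one reasonable reading of it, in which management-ordered outages are removed from both the lost time and the time base; the exclusion rule and the numbers are assumptions, not Millennium's actual accounting:

```python
def process_reliability(period_hours, downtime_hours, ordered_outage_hours):
    """Percent of time assets were available at full design capacity.
    Shutdowns or slowdowns ordered by management are excepted, so they
    are excluded from the time base rather than counted as lost time."""
    base = period_hours - ordered_outage_hours
    return 100.0 * (base - downtime_hours) / base
```

For a 720-hour month with 20 hours of unplanned downtime and 48 hours of management-ordered curtailment, reliability comes out to about 97%, and every point recovered is "hidden plant" capacity gained without pouring concrete.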
The former, highly cyclic product output also resulted in an average of five two-hour flue-pond downtime episodes per month, totaling $75,000 in maintenance charges. These failures, which were the highest-cost repetitive maintenance items in the plant, have essentially been eliminated.
Whats Next?
The TiO2 process is complex and requires a lot of operator input. Eventually we would like push-button operation when starting cold or transitioning from one product to another. The operator would just key in what product and how much, and hit Go. The application of process automation is obviously the key. We sought a single platform that can integrate well with PLCs, laboratory information management systems (LIMS), and other analytical instruments, and also provide advanced control techniques, predictive maintenance, exhaustive data collection and processing, seamless communications with the MES and ERP levels, etc. We investigated various DCS and PLC turnkey automation platforms to replace the Ashtabula plant's Emerson RS3 DCS. Central to this effort was a lifecycle, total-cost-of-ownership bid analysis. Bids often are close in price and difficult to compare. One bid may be heavy on I&E, another on the mechanical aspects. Therefore, the bids had to be conditioned to make sure we were comparing apples to apples. Also important was the age of the candidate platforms: Buying soon-to-be-outdated technology would be a disaster. We also looked at the bidders' grasp of schedule and commitment, the people issues, and résumés. We asked for references from former projects of a similar nature, suggestions on improving project performance beyond our ideas, and information about project and subcontractor management and staffing approaches and experience. We looked at the migration path and cutover from the existing system, ability to perform not only in the U.S. but in Timbuktu for future projects, local support worldwide after commissioning, etc. Each bidder had to allow our evaluation team to attend, gratis, a weeklong training course on their automation software. We know that buying and using software often costs more than hardware over the long run. Some bidders balked, saying they charge $3,000 a head for training. We suggested they build it into their prices.
I've yet to see a software presentation that didn't look terrific, but we had to get an idea how much time would be spent learning, programming, updating, and modifying the software and integrating it with other equipment and systems. Our team conducted after-hours research during each course, trying to duplicate existing Millennium DCS graphics and configurations. Ease-of-use here varied widely among bidders.
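Bid conditioning of the sort described above amounts to normalizing each proposal to one lifecycle number before comparing. The sketch below is a deliberately simplified illustration of that idea; the cost categories, the 10-year horizon, and every figure are assumptions made for this example, not Millennium's actual conditioning method:

```python
def conditioned_tco(bid, years=10):
    """Lifecycle total cost of ownership for one conditioned bid:
    one-time costs plus recurring costs over an assumed horizon.
    All categories and the 10-year horizon are illustrative."""
    one_time = bid["hardware"] + bid["software"] + bid["installation"]
    recurring = years * (bid["support_per_year"] + bid["training_per_year"])
    return one_time + recurring
```

A bid that looks cheapest on purchase price can easily lose on this basis: hypothetically, a $1.8M system with $100k/year of support and training costs $2.8M over ten years, so a pricier bid with lower recurring costs may condition out ahead, which matches the article's observation that software often costs more than hardware over the long run.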
The 12 people on the evaluation team unanimously chose Emerson's PlantWeb digital plant architecture. Primary reasons for the choice were ease of use and robust connectivity to field equipment, to other systems, and to Enterprise Resource Planning (ERP) and IT networks. PlantWeb hardware and software were not the least expensive, but bid-conditioning demonstrated that they provided the greatest added value, lowest total cost of ownership, and most promising future viability. The PlantWeb selection paves the way for us to look at connecting the process to our SAP ERP system in the hope of optimizing the supply chain and gaining even more efficiencies. The goal of supply chain management is to relay information to the right people in a manner and at a speed that facilitates the decision-making process for buying raw materials and making and delivering products. We quickly discovered we needed a detailed plan to implement an ERP-to-process connection, which required that we flow-chart a base communications layer comprising two pieces: the PlantWeb architecture and an existing, OSI PI-based, custom LIMS. The PlantWeb-LIMS base layer connects to a manufacturing execution system (MES) layer. This layer consists of the DeltaV system's PC-based historian, engineering, and application workstations. The application station stores such items as advanced control, batch records, process recipes, planning and scheduling tables, etc. It is also the station that connects to the ERP layer. Information must flow bidirectionally and seamlessly among the three levels. We are working on developing periodic forecasts, material resource plans, process orders, customer inputs, and material consumption, confirmation, and finished goods reports. As yet, our supply chain does not extend to vendors. In the future, customer orders will automatically feed into the system and could affect production in minutes.
If the Ashtabula TiO2 plant continues to run at 100% first-pass prime without grade transition losses, it's possible to optimize the grade mix by dialing in the quality. If a customer changes its requirements, we will learn of it instantly to better meet its just-in-time delivery requirements. This is slated as a service to those customers who make up the majority of our business. Supply chain optimization will allow us to be so close to those customers that they'll have no incentive to look to any other vendor. Today, we are continuing to reveal hidden plants throughout Millennium facilities worldwide. Some 75% of our savings still come from working in the basement: efforts like correcting a scrub salt valve. The rest (25%) results from adding and replacing automation and instrumentation. We're not seeing supply chain savings yet, but we're developing the capability. Eventually, we expect half of savings will come from the supply chain side, the other half from the automation side, with both locked intimately together.
Two Spyro steam crackers from Technip/KTI are the heart of the BASF AG Ludwigshafen integrated chemicals site in Germany. BASF operates the crackers mainly for captive use, as logistically they are at the end of the pipeline. The crackers produce in excess of 610,000 metric tons of ethylene and other products, mainly for internal consumption. The company aims for 365-days-per-year operation with an extended cracker shutdown every five years. Feed characterization in the form of gas chromatography has been used for proper unit operation for a number of years. With a measurement time of more than one hour, the existing gas chromatographs (GCs) were slow to respond and had high maintenance overhead. More frequent analysis and lower-maintenance technology were required to allow constraints to be filled, reduce coil outlet temperature and severity variability, optimize the yields of the most valuable products, and, in time, allow greater feed variability and flexibility. A feasibility study investigated a number of options. A process magnetic resonance analyzer (MRA) system was chosen due to its measurement linearity, fast-track project execution, and high availability. BASF purchases naphtha of variable feed specification, depending on strategic or tactical requirements. More than 140 naphtha types are regularly used. With tank stratification and regular feed tank changes, good monitoring of feed quality is essential to minimize lost production during feed transitions and to maintain stable operation in the case of a slug of high-variability naphtha potentially violating a constraint. Heavy naphthas could violate the coil outlet temperature limit, causing excess coking or tube wall damage. A light naphtha could exceed the downstream compressor loading limit. Working in partnership with BASF, Invensys Process Systems GmbH supplied a feedstock analyzer system that has been successfully extended to both crackers, including naphtha recycle streams.
The measurements cover some 29 components plus four calculations from C4-C11, including paraffins, isoparaffins, naphthenes, and aromatics.
The installation included integration of feed characterization from the analytical and optimization software Process MRA, plus ROMeo optimization software, to improve the operational efficiency and reliability of this site-critical operation. Availability has been high (greater than 98%), with prediction models updated remotely (typically once per quarter) as part of an inclusive three-year support agreement.
Project Execution
Since September 2000, the process MRA system has been functioning online, providing feed-forward stream characterization to the steam cracker reactor model. Implementation of the process MRA took place over a period of about 22 weeks. During the first two weeks, BASF was provided with an MRA that measured a starter set of some 100 naphtha samples spanning the expected operational range. Fewer than 50 were incorporated into the final model. In the next 16 weeks, an online system was installed and online model development commenced. Another 75 samples were gathered prior to the decision to run the validation phase. During the last four weeks, a validation phase was conducted and the unit was accepted and transitioned to operations. The system was installed and commissioned without any disruption to production and within the operational constraints of the local maintenance, engineering, and laboratory staff. Since the sampling requirements are straightforward, the system was integrated into the established shelter for other analytical equipment associated with the crackers. No water removal is required, and the filtering was set at 100 microns to prevent valve seat damage only (the sample passes through a relatively wide-bore 6 mm tube). For multistream sampling, an inline heater is preferred to clamp stream-to-stream temperatures to within +/- 5°C.