Then it's time for the swap_user_roles script :p
Forum Post: RE: Configure Process History View
Forum Post: Content Based Analysis during PAS modernization
Are there any examples of where analytics were applied to a legacy PAS configuration prior to modernization? Specifically, was the legacy configuration migrated via bruteforce or was an effort made to understand the content of the configuration? Were similar loops identified? Were custom loops identified? I am seeking input as to what analysis steps are taken to translate a legacy configuration to DeltaV.
Forum Post: Reliability in Power
The 2014 Strategic Directions report for US Power Generation was recently published and reliability is again the number 1 concern among utilities. The report discusses how online monitoring of critical assets can be used to help companies prioritize capital spending. Has your facility considered doing this? If so, how did you go about developing an implementation plan and measures for success?
A copy of this report can be found here: http://bv.com/reports
Blog Post: Improving Technology Transfer by Earlier Adoption of Standards and Software Platforms
In 2011, the U.S. Food & Drug Administration (FDA) published their Guidance for Industry – Process Validation: General Principles and Practices. A PharmManufacturing.com article, A Framework for Technology Transfer to Satisfy the Requirements of the New Process Validation Guidance: Part 2 shared the impact on the Life Sciences industry:
In the life of any drug product, the technology transfer of a process is a complex matter, made more complicated by the new definition of the Process Validation (PV) guidance issued by FDA in January 2011.

Zuwei Jin
Senior Life Sciences Consultant
Zuwei believes that early adoption of MES and the ISA-88 (S88) standard addressing batch process control and ISA-95 (S95) standard addressing the enterprise and control system interface provide a framework for implementing not only the technical but also the regulatory and business processes required to support the tech transfer and expansion throughout the drug development cycle. It is a key element in introducing a scalable, modular, hardware independent platform for drug development and data management.
He feels that early adoption of an MES platform provides a structure for consistent engineering and business practices at early stages of the drug development cycle which allows processes to be defined on unified framework. This platform also provides guidance for user requirement specification (URS) and allows more effective communication between the end-users, suppliers, and engineering companies.
As part of the design and engineering effort, incorporation of the MES software platform into the framework leads to a consistent engineering and documentation practice in equipment qualification and commissioning. Even more importantly, the practice benefits the next phase and expansion tremendously as the technology transfer can be carried out on the same framework.
Zuwei notes that it makes for a much easier process of transferring technology from site to site and/or country to country for similar reasons. Adopting the MES platform and following the S95 standard helps improve speed to the market (better project time), reinforce regulatory compliance (better project quality), enforce consistent technical and business practice (smoother workflow), and increase efficiency and cost effectiveness (better labor utilization).
Many pharmaceutical and biotech manufacturers have adopted this approach in their new facilities and have improved their technology transfer practice through the introduction of the S88 and S95 standards. Coupled with the adoption of an MES platform in the early phase, this combination is particularly attractive for Greenfield projects, due to the importance of compliance expertise and time to market. This approach will likely receive even more attention in Asia where many more Greenfield projects exist and the demand for speed to market and regulatory compliance are great.
You can connect and interact with other pharmaceutical, biotech, and manufacturing execution system experts in the Life Sciences and Operations Management tracks of the Emerson Exchange 365 community.
Forum Post: RE: Avoiding Plugged Pressure Taps
Installing a transmitter that has remote diaphragm seals is an easy alternative solution to using impulse piping. They are available in a variety of process connection sizes and styles, and are commonly used in applications similar to what is described in this post (high temperature, viscous). http://www2.emersonprocess.com/en-US/brands/rosemount/Level/Differential-Pressure-Level/1199-Remote-Seals/Pages/index.aspx
Another solution is to use a Rosemount 3051S transmitter with Advanced Diagnostics. This transmitter has the unique capability to detect the presence of plugged impulse lines using Statistical Process Monitoring technology. http://www2.emersonprocess.com/en-US/brands/rosemount/Pressure/Pressure-Transmitters/3051S-Advanced-Diagnostics/spm/Pages/index.aspx
Forum Post: RE: 2 wire inductive sensor connect to SOE card
Hi,
I think it is better to use an isolator (barrier) to reshape the signal and solve the problem.
Blog Post: Five Pieces of Advice Women Engineers Should Keep in Mind
Forum Post: RE: OPC license release
Doesn't sound like a good idea to have an autonomous program changing DCS process parameters, especially if you are going to entrust it to a VBA application... just a thought!
Sent from my Windows Phone
[collapse] From: Youssef.El-Bahtimy
Sent: 22/08/2014 17:21
To: DeltaV@community.emerson.com
Subject: RE: [EE365 DeltaV Track] OPC license release
If you sequentially step your client through the process of removing items, deleting groups, then disconnecting, do you see the number of points count decrease through diagnostics?
Blog Post: Improving Reliability in Power Generation: A Competitive Advantage

Douglas Morris
Director of Marketing, Mining & Power Industries
Recently, the consulting arm of Black & Veatch published its annual strategic directions report for the US utility industry. In 2014 “reliability” was again identified as the top industry concern. This report discusses how technology will play an important role for utilities as they look to improve upon asset reliability.
The industry has always had some play with this discipline; in fact, most plants had staffs dedicated to the practice of reliability. As utilities cut back staffing over time, though, many of these departments disappeared and the focus was suddenly absent. When most fossil plants ran as originally intended, this didn’t pose a large problem. Times have changed and now with the growing number of renewables along with gas plants being cycled on a regular basis, former baseload plants are increasingly running in load following mode, subjecting these units to greater thermal cycling and more stress on mechanical equipment.
As the B&V report states, technology can be the tool that helps utilities achieve better reliability. Per the report:
…new data collection and performance monitoring technologies will assist utility operators in better understanding potential points for failure and managing risk by improving visibility into asset condition and performance.
There are already sites that have used technology to improve plant reliability and they are reaping the benefits.
Tucson Electric Power (TEP), Springerville Generating Station, is one such utility. Gary Gardner of TEP wrote an article published on reliabilityweb.com which states:
TEP relies on technology with high resolution, accurate data collection and advanced diagnostics capabilities.
In 2012, Gary and his predictive maintenance team, along with the use of advanced technology, helped the company avoid more than $1M in maintenance and replacement costs.
So as utilities embrace the recent rebirth of reliability, many will likely follow the path of TEP. Those that do and invest in proper technology for condition monitoring will reap the rewards of increased plant availability and reduced operating and maintenance (O&M) costs.
From Jim: You can connect and interact with other Power industry and reliability professionals in the Power and Asset Optimization, Maintenance and Reliability tracks of the Emerson Exchange 365 community
Forum Post: electromagnetic flow transmitter error
Hi
I am configuring a new electromagnetic flow transmitter (an 8732C transmitter with an 8705 flow tube). When I enable the “Empty Pipe” feature, I receive an empty pipe message on the display and also an “electrodes circuit open” message in the HART communicator. Is this message normal? I am working on it in the workshop and the flow tube is empty; the message also appears when I run the self-test.
Thanks
Mohsen
Forum Post: RE: Can't configure Flowserve Logix 3400MD Positioner with AMS 11.5 / Delta V 11.3
I got some fresh news about this post.
We called Flowserve and they sent us their local representative. He came and performed a hard reset on the fieldbus circuit board. After that we tried to commission the device in DeltaV and succeeded; now we can calibrate the valve from AMS 11.5. The valve is under observation, and so far it looks good. I'll share more details about it.
Flowserve's representative performed a HARD RESET of the positioner's Fieldbus circuits. After that we commissioned the 3400MD positioner in our DCS with negative results; the valve didn't respond to the DCS open and close commands.
Delta V and AMS Versions
The positioner is recognized by the DeltaV System (Flowserve LX3400MD Rev 1) as follows
- When we use the positioner's Analog Output (AO) function block, we get an error in the block diagram: the BKCAL_OUT signal of the AO block is flagged as bad quality with a red “X”, and the valve doesn't respond to the DCS command signal; the valve doesn't move.
- When we check the positioner's AO function block mode, we see that the target mode is “Cascade” but the actual mode is “Auto”.
- We tried to change the actual AO block mode to “Cascade”, but the change doesn't take effect and it remains in “Auto” mode.
The Positioner’s AO Block Parameters
Forum Post: 3rd Party OPC Server, connecting from Application Station via OPC Mirror
I have all the relevant user accounts created (CPX_OPC which is used for running OPCMirror on DV, CIMUSER as the 3rd party OPC Server runas account) in both the remote OPC Server (Cimplicity and a workgroup) and DV app station (domain), but I can't get the unsolicited callbacks to work, I get an advise fail, and event logs (at the DV application station) lists a failed logon (cimuser).
UNC works both ways and I can read/browse OPC points from the Cimplicity OPC server using OPCWatchit, but I do receive an advise fail error. OPC Mirror reports the pipe as active, but monitoring the pipe items gives stale data.
Obviously this is related to the failing logon from the remote OPC server, preventing the OPC group advise from working.
SID and anonymous account translation is enabled in group policy, but still I get failed logons.
What am I missing?
Forum Post: Conflict Resolution
My neighbor and good friend Hayden Hayden has just written a book entitled "Conscious Choosing for Flow" and I would like to recommend it to the community. It isn't a technical book, but rather a book on ways to deal with conflict. Hayden is a successful entrepreneur and currently a coach for executives. His thesis and the subtitle of the book is "Transforming Conflict into Creativity." He says his book is targeted at business managers and HR professionals, but I think his message can be more broadly used than that. Whenever people interact, there is a good chance there will be some conflict somewhere along the way. Rather than looking to conflict management or negotiation, he offers a third way that he describes as conflict transformation. He believes and describes how you can consciously choose to turn any conflict into something positive, dynamic, and creative. His approach is built around STAR... Stop, Think, Act, Review, which is not too far from the DMAIC approach that many engineers know and use. And not just dry reading, it includes exercises you can use to explore or validate the concepts as they are presented. It can be as useful to your personal life as it is to your professional life. His book is available on Amazon in paperback and kindle versions.
Forum Post: How good is your level control?
Is it good enough? Is it too good? Do you even know? Should you care?
Well yes, you probably should care. Most level processes are non-self-regulating or integrating processes. Everything you probably learned about tuning PID self-regulating loops like flow, pressure, and temperature does not work quite the same on integrating processes. So it is quite common for level loops to be tuned "by the seat of the pants" or trial and error. Furthermore, most level loops are tuned to achieve good setpoint response and yet most level loops have one setpoint (typically 50% of the tank height) and rarely is the setpoint ever changed. It is usually more important to consider the response to load disturbances. Even if and sometimes especially when the level is tightly controlled, regardless of how it was tuned, it is likely that the underlying disturbance and resulting variability is amplified rather than attenuated. That is never a good thing.
Control loops are intended to control processes with more gradual (low frequency) disturbances. They are not the right tool for attenuating high frequency variability. That is one reason we have surge tanks, which can attenuate high frequency variability in inflow or outflow. Yet level controls on many surge tanks are tuned to prevent almost any deviation in level. If the inflow varies, then the outflow will vary the same. This essentially conflicts with the intended purpose of the surge tank. And the truth is that all tanks are surge tanks. Some may be undersized and others may be oversized, but they are all essentially "wide spots" in the pipe. To take advantage of the surge capacity, it is necessary to know the potential variability or worst case disturbance of the wild flow and the allowable limits on the level. Then we can tune the level control to respond to the worst case while keeping the level in bounds.
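One practical way to answer "will the level stay in bounds for the worst-case disturbance?" is simply to simulate the tank. Here is a minimal Python sketch of an integrating level process under PI control hit by a step inflow disturbance; the process gain, tuning values, and step size are made-up numbers for illustration, not recommendations:

```python
def simulate_surge_tank(Kp=0.001, Kc=4.0, Ti=500.0, dF=10.0,
                        dt=1.0, t_end=3600.0):
    """Integrating level process under PI control, step inflow disturbance.

    Kp: level change (%/s) per % of flow imbalance; dF: disturbance size (%).
    Returns (max level deviation from setpoint, final level), both in %.
    """
    sp = 50.0
    level, out, integral = sp, sp, 0.0   # start at steady state
    max_dev, t = 0.0, 0.0
    while t < t_end:
        # process: level integrates the inflow/outflow imbalance
        level += Kp * ((sp + dF) - out) * dt
        # PI controller manipulating the outflow (reverse acting, bias = sp)
        e = sp - level
        integral += e * dt / Ti
        out = sp - Kc * (e + integral)
        max_dev = max(max_dev, abs(level - sp))
        t += dt
    return max_dev, level

max_dev, final_level = simulate_surge_tank()
```

With these assumed numbers the loop arrests the disturbance with a peak deviation of a percent or two and returns the level to setpoint, while the outflow moves only gradually. Trying your own worst-case disturbance and level limits in a sketch like this is a quick check before committing tuning to the plant.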
You need a tuning methodology. The one Emerson's control performance consultants use is lambda tuning. The premise is that for any linear process with a feedback control loop, the control loop can be tuned to provide a first order closed loop response using the right PID tuning constants. The information required to tune the process is the process gain, the process dead-time, and the process time constants. Lambda is the closed loop time constant and defines the speed of response of the loop under control. Interacting self-regulating loops can be dynamically decoupled by making the lambda of one loop sufficiently larger than the other. There is a minimum lambda that can be defined to avoid unstable or oscillatory response under closed loop control. But in the context of level control, the selection of lambda defines the speed of response which is related to the arrest time and deviation for a disturbance. Lambda tuning of integrating processes reduces the variability of the manipulated flow and takes maximum advantage of the surge capacity in the vessel without risking loss of containment.
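The commonly published lambda tuning rules for an integrating process fit in a few lines. The sketch below illustrates those rules (reset time Ti = 2λ + dead time, with the controller gain derived from Ti, the process gain, and λ); the numeric example is an assumption for illustration, so verify the formulas against your own tuning references before applying them:

```python
def lambda_tune_integrating(Kp, theta, lam):
    """Return PI settings (controller gain Kc, reset time Ti in seconds)
    from the standard lambda rules for an integrating process.

    Kp    -- integrating process gain, (%/s) per % of controller output
    theta -- process dead time, s
    lam   -- desired closed-loop time constant (lambda), s
    """
    Ti = 2.0 * lam + theta                 # reset time
    Kc = Ti / (Kp * (lam + theta) ** 2)    # controller gain
    return Kc, Ti

# Example: a slow surge-tank level loop (illustrative numbers)
Kc, Ti = lambda_tune_integrating(Kp=0.002, theta=5.0, lam=300.0)
```

Note how a larger lambda gives a smaller controller gain and a longer reset time, which is exactly the "use the surge capacity" behavior described above.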
I would like to offer some examples I have seen often enough to mention. One is the base or bottom level control in a distillation process. It is quite common to see the base level controller tuned for very tight, aggressive control. The result is that the bottom flow can be quite variable, and in the extreme the bottom flow can oscillate between high flow and no flow as fast as the control valve can move. This is obviously not good for the control valve, but it can be detrimental to the process as well. The bottom of the column is at a high temperature, and it is often beneficial to recover some of the heat before that stream is sent to the next step in the process. If the heat recovery is used to preheat the column feed, for example, you can see how that will introduce variability into the feed of the column and be disruptive. I have found this to be quite common on fractionator columns in refinery crude units. It would be much better to reduce and minimize the variability in bottom product flow, even if the level varies a bit in the base of the column.
Another distillation example is seen at the top of the column. It isn't too common to control the level in the reflux accumulator by manipulating the reflux flow, but it is sometimes necessary to use that configuration. Sometimes the distillate product flow, which is being used for composition control, will be fed forward into the reflux flow loop to improve level control, in a way analogous to 3-element steam drum level control. Regardless, a poorly tuned level control will create variability in the reflux flow, which obviously has an immediate effect on composition and temperatures at the top of the column. Lambda tuning, with as large a lambda value as can be tolerated, will minimize the variability created by variable reflux flow. As long as the reflux accumulator level stays within limits, the rest of the control loops can be successful.
Another process which is usually characterized as integrating is pressure control of a gas where there is no phase change. Just as liquid volume is the integral of liquid flow, pressure is the integral of gas flow. Sometimes the disturbances are greater and/or the vessel pressure limits are tighter, equivalent to an undersized surge tank, but the process is inherently integrating and analogous to liquid level control. The same techniques and formulas apply. In one example I saw a few years ago, a distillation tower was being fed directly from a reactor effluent. The feed flow was cascaded to the reactor pressure control. The pressure controller was tuned too aggressively and that resulted in a variable feed flow to the column. This limited the ability of the column controls to achieve good composition control, as the product quality variable had exactly the same frequency as the feed flow. We could dampen the amplitude of the product quality variability, but could not eliminate it until we re-tuned the pressure controls using lambda tuning.
Sometimes even lambda tuning alone is not sufficient to achieve satisfactory control. In this case, feed forward makes a lot of sense. Steam drum level control is often implemented with 3-element control, in which steam flow is essentially a feed forward signal to the boiler feed water flow and the level control trims the feed forward. It is never a good thing to boil a steam drum dry or to get water into the steam header, and that is why steam drum controls are often designed with 3-element drum level control. The alternative would be to have a larger steam drum, but as a pressure vessel, the cost of increasing the size of the steam drum is much higher than implementing a straightforward control strategy. In another example where level was difficult to control, the problem was dead time. Dead time in any loop is the hardest dynamic element to overcome. In this case, a hopper was being fed granular solids and there was a rotary drum used to provide mixing. No matter how fast the different feeds were changed, they had to move through the rotary drum, which ran at constant speed. This introduced a significant amount of dead time into the loop. The contents of the hopper were fed at a controlled rate to the next process. The level in the hopper had an important effect on the density and packing of the material on the hopper bottom conveyor, which affected the downstream process. Even worse, if the level in the hopper is too high, the material can bridge and there will suddenly be no feed on the hopper bottom conveyor. We could have resolved this with feed forward control using PID to trim the level. But in this case, we used Predict MPC control and configured the feed flow as a disturbance (feed forward) variable, which worked quite well.
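The 3-element idea above can be sketched in a few lines: the steam flow becomes the base feedwater demand (mass balance feedforward) and the level PI controller only trims it. This is an illustrative sketch, not DeltaV configuration; the class name, signal names, and scaling are assumptions:

```python
class ThreeElementLevel:
    """Minimal 3-element drum level sketch: feedforward plus PI trim.

    All signals are in % of range; Kc and Ti are the trim PI tuning,
    dt is the execution period in seconds.
    """
    def __init__(self, Kc, Ti, dt):
        self.Kc, self.Ti, self.dt = Kc, Ti, dt
        self.integral = 0.0

    def update(self, level_sp, level_pv, steam_flow):
        # PI trim on the level error
        e = level_sp - level_pv
        self.integral += e * self.dt / self.Ti
        trim = self.Kc * (e + self.integral)
        # Feedforward: feedwater demand tracks steam flow (mass balance),
        # so the trim only corrects for shrink/swell and meter error.
        return steam_flow + trim
```

Usage: with the level at setpoint, the controller output simply follows the steam flow; the PI trim only works on the residual imbalance, which is why the drum can be kept in bounds without aggressive level tuning.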
So to evaluate your level control, you should look at the level behavior, but you must also look at the behavior of the manipulated flow. If you want to learn more about Lambda tuning and integrating processes, there is usually a workshop and/or short course discussing it at Emerson Exchange. If you want help, please ask for the assistance of one of Emerson's Control Performance Consultants. And Emerson's Education Center offers courses in Modern Loop Tuning and Control Engineering that provide all the information you will need to tune up your level controllers to achieve "best" control performance. And as always, your comments and feedback are appreciated. I like to learn new things, too.
Blog Post: Emerson Ultrasonic Leak Detectors Receive DNV Approval for Marine Applications
Forum Post: Multi-point temperature applications
Many plants have process units where multipoint temperature sensor arrays are used to capture temperature profiles to detect hot-spots, or where multiple single temperature points are within close proximity. Multi-input temperature transmitters are ideal for applications where there are many temperature measurements clustered together. Applications include:
- High resolution temperature profiles of tanks using multipoint temperature sensor arrays for computation of density to calculate volume and mass of the product.
- High resolution reactor temperature profiles using multipoint temperature sensor arrays to identify hot-spots and channeling to prevent product or catalyst damage, and control reaction efficiency.
- Column temperature profile with sensors at every tray to optimize separation and product quality.
- Multiple points throughout a furnace to determine how efficiently the furnace uses energy, helping to reduce operating costs.
- Motor winding temperatures to ensure they are operating within specifications, thus extending service life and preventing unnecessary downtime.
- Bearing temperature on critical compressors, pumps, fans, agitators, and conveyor belts etc. to alert when they exceed suggested operating temperatures to prevent potential damage, cascading into shutdowns of larger processing equipment.
- Heat exchanger efficiency by measuring inlet and outlet temperatures for steam and product to detect degradation due to fouling to determine if cleaning is needed.
- Boiler tube surface temperature to detect slagging or soot deposits hampering heat transfer and predicting fatigue to prevent boiler shutdowns due to tube ruptures, improving efficiency and plant availability.
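As an aside on the heat exchanger item above, the fouling check is simple arithmetic once the inlet and outlet temperatures and a flow are measured: compute the duty and the UA via the log-mean temperature difference and trend UA against a clean baseline. A minimal counter-current sketch follows; the baseline UA and the 20% threshold are assumed values for illustration:

```python
import math

def exchanger_UA(m_dot, cp, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Return (duty in W, UA in W/K) for a counter-current exchanger.

    m_dot: hot-side mass flow (kg/s), cp: specific heat (J/kg-K),
    temperatures in the same units (e.g. degrees C).
    """
    Q = m_dot * cp * (t_hot_in - t_hot_out)   # duty from the hot side
    dT1 = t_hot_in - t_cold_out               # terminal temperature differences
    dT2 = t_hot_out - t_cold_in
    lmtd = (dT1 - dT2) / math.log(dT1 / dT2) if dT1 != dT2 else dT1
    return Q, Q / lmtd

Q, UA = exchanger_UA(2.0, 4180.0, 90.0, 60.0, 20.0, 45.0)
clean_UA = 9000.0   # assumed baseline from commissioning data
if UA < 0.8 * clean_UA:
    print("UA down more than 20% from baseline -- consider cleaning")
```

With continuously measured temperatures from a multi-input transmitter, this kind of calculation can run online instead of waiting for a manual survey.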
To condition these sensor signals, in the past you had to choose between accuracy, using many single-point measurement transmitters, or low cost, using control system temperature input cards or temperature multiplexers. However, multi-input temperature transmitters provide both the precision of field-mounted transmitters and economy, communicating wirelessly or over a single pair of wires from the multi-input temperature transmitter to the junction box. The transmitter is two-wire loop powered, so no separate electrical power is required. The solution can be intrinsically safe, non-incendive, and flame/explosion-proof, making it suitable for all hazardous areas. All sensor signals are carried on the same two wires or over the air.
Some reactors and heat exchangers around plants may not be continuously monitored, relying on manual data collection because they were never instrumented due to the high cost of temperature input cards and compensation wires, or single point transmitters, wiring, and analog input cards. Modern plants are now built with multi-input temperature transmitters at lower cost, and existing plants can be modernized with multi-input temperature transmitters where measurements are missing.
Around-the-clock automatic device diagnostics monitoring alerts personnel to problems like sensor failures.
The right temperature is important for the operation of many processes. The wrong temperature will impact plant throughput, quality, and yield. Temperature is also important for maintenance, as high temperature is a leading indicator of problems in motors and machinery. If left unattended, improper temperatures can result in plant downtime and maintenance costs. Deploying transmitters to cover these missing measurements therefore makes sense.
A single gateway can be used to integrate hundreds of temperature points into an existing control system.
Read about one such modernization case here:
http://www2.emersonprocess.com/siteadmincenter/PM%20Central%20Web%20Documents/QBRExxonMobil3feb.pdf
What other applications are there where there are multiple temperature points in close proximity of each other such that it would make sense to use temperature transmitters with 4 or 8 inputs?
Forum Post: RE: OPC license release
Hi,
is the tag count increasing or the connection amounts?