Energy pipeline operator installs control system, gains production system visibility

Oil and Gas: Information-enabled control architecture provides production intelligence and system visibility in Cybertrol Engineering’s software implementation. This includes a wide view of historical information, health monitoring and verification for critical assets, such as 5,000 hp motors, and role-based access to performance data.

By Ben Durbin December 12, 2014

As oil and gas production, driven by innovation and new production techniques, expands from Alaska down to the Gulf of Mexico, pipeline companies need greater visibility and asset monitoring, backed by system-wide historical data, to ensure product quality and reliability. New control system architectures are helping.

As current production and future reserves continue to increase, predictions of when the U.S. will become a net energy exporter have fallen to within a decade.

To be an energy exporter, U.S. oil and gas companies need to get products to international markets via road, rail, or pipeline. American oil and natural gas have been shipped by pipeline for more than 75 years. Nearly 500,000 miles of pipeline, regulated by the U.S. Dept. of Transportation, traverse the U.S., linking to additional mileage in Canada.

For pipeline companies, the basic mechanics of how a pipeline functions have not changed drastically in 25 years: a pipe directs the flow of product, which is kept in motion by pump stations spread along the line. New control and information system techniques have focused on ensuring pipeline and product integrity and maximizing flow profitability.

Thirst for comprehensive data

For one major North American transporter of liquid and natural gas, a lack of access to reliable and comprehensive data from its pipeline network threatened performance, profitability, and even pipeline and product integrity.

With more than 20 lines and 150 pump stations spread across nine states monitored from a centralized supervisory control and data acquisition (SCADA) system in Wisconsin, the pipeline had multiple data sources that were isolated and difficult to access from the central system. The company had already started upgrading its control infrastructure from programmable logic controllers (PLCs) to a programmable automation controller (PAC) architecture to improve multidiscipline control and help producers connect and share actionable data between control and business systems.

As the upgrades continued, a large amount of data was still stored in local pump station PACs, human-machine interface (HMI) alarm logs, and on-site chart recorders. This information was accessible only on location and only by the engineering team. According to the company’s control systems manager, the small amount of data collected through the enterprise-level historian system in Canada “wasn’t very reliable for analysis because so few data points were collected, and the minimum interval between collections was too long. We were getting a data point every 10 seconds, and more valuable updates only came in hourly.”

All of these data sources also created multiple versions of the truth. When information differed, maintenance personnel didn’t know if they should defer to HMI logs and alarms with a one-second scan rate or the local high-speed chart recorder to confirm flow rate and pressure metrics.

In many cases, data was also incomplete. Data collection to the remote enterprise-level historian was dependent on network connectivity, so power and network failures would leave many metrics uncollected. The SCADA system was only watching information pertinent to control center operations.

Additionally, production managers had little insight into the overall performance of the pipeline or opportunities for improvement. Should a leak occur, they were not confident they had the depth of reliable data necessary to determine the exact cause.

Management, engineering, and maintenance needed a complete history of information from station controllers. They wanted access to data on pump temperatures, amps, volts, vibration, and gas levels. They needed to ensure there were no gaps in historical data tracking due to a network or power outage. They needed all of this at a one-second scan rate or better, and they needed data contextualized for each role: maintenance, production, field technicians, and engineers.

Wider view of historical data

To help unlock production data, the company worked with Cybertrol Engineering, a provider of control, process, and information solutions. Cybertrol developed a production intelligence strategy for the pipeline network based on an information software suite for improved integration with varied information sources.

The new system drastically increased the scalability and reliability of information collection with machine-level historian software coupled with site-level software. More than 170 historian modules have been installed in pipeline pump stations, terminals, remote device monitoring stations, and tank farms. These modules feed information to up to four historian servers to provide an enterprise-wide view of the historical data.

Situated within the PAC chassis, the historian modules collect data in real time directly at the source. With a 10-millisecond scan rate and the capability to function as stand-alone historians, the local modules collect more than 1,200 data points and send 400 of these to site-level historian servers every few seconds. During a network or power lapse, the local historian modules store time-stamped data and push it out to the site servers as soon as the connection is regained.
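This store-and-forward behavior is what closes the data gaps that plagued the old enterprise historian. A minimal sketch of the pattern, in Python with hypothetical names (the actual modules are vendor firmware, not user code), could look like this:

```python
import time
from collections import deque

class StoreAndForwardBuffer:
    """Sketch of a local historian's store-and-forward behavior:
    samples are time-stamped at collection, queued locally, and
    flushed upstream whenever the network connection is available."""

    def __init__(self, send_to_site, max_samples=1_000_000):
        self.send_to_site = send_to_site         # callable that pushes a batch to the site server
        self.buffer = deque(maxlen=max_samples)  # bounded local queue

    def record(self, tag, value):
        # Time-stamp at the source so history stays accurate across outages.
        self.buffer.append((time.time(), tag, value))

    def flush(self, connected):
        # Push everything buffered as soon as connectivity is regained.
        while connected and self.buffer:
            batch = [self.buffer.popleft()
                     for _ in range(min(500, len(self.buffer)))]
            self.send_to_site(batch)
```

The essential point is that time stamps are assigned at collection, not at transmission, so a network or power lapse delays delivery but never corrupts the historical record.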

To visualize the vast new array of data the company was able to access, Cybertrol implemented enterprise manufacturing intelligence (EMI) software that connects to data sources within the control system and in the historians to create a unified production model (UPM), a single virtual data resource. The data remains distributed at the source but is collected in real time. The system’s role-specific dashboards provide pipeline personnel across functional areas with a contextual view of production sites for more responsive and informed decision making.

To get the system up and running quickly, Cybertrol developed code that directed the EMI UPM to build itself programmatically for each object at each pumping station, terminal, and tank farm added. If the company had to manually bind over 10,000 data tags to the individual UPM attributes, the solution would still be in development. Instead, Cybertrol engineers leveraged the company’s standardized PAC code and tags for each pumping station controller, and developed a system that could create UPM objects and bind object attributes to historian and controller tags automatically.
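A simplified illustration of that approach, in Python with hypothetical tag names (the real system binds historian and controller tags inside the EMI’s object model), shows how a standardized naming convention makes the model buildable by code:

```python
# Hypothetical sketch: because every station controller follows one tag
# standard (e.g. "PS12.Pump03.MotorTemp"), model objects and their
# attribute-to-tag bindings can be generated rather than entered by hand.

STANDARD_ATTRIBUTES = {"MotorTemp", "Amps", "Volts", "Vibration", "GasLevel"}

def build_upm_objects(historian_tags):
    """Group tags by station and pump, emitting one model object per
    pump with each attribute bound to its matching historian tag."""
    model = {}
    for tag in historian_tags:
        station, pump, attribute = tag.split(".")  # relies on the tag standard
        if attribute in STANDARD_ATTRIBUTES:
            model.setdefault(f"{station}.{pump}", {})[attribute] = tag
    return model

tags = ["PS12.Pump03.MotorTemp", "PS12.Pump03.Amps", "PS12.Pump04.Volts"]
print(build_upm_objects(tags))
# {'PS12.Pump03': {'MotorTemp': ..., 'Amps': ...}, 'PS12.Pump04': {'Volts': ...}}
```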

Building the UPM programmatically saved vast amounts of engineering hours and dollars and got the system up and running quickly. It also increases scalability and reduces ongoing software maintenance. When a new pump station is brought into the system, the local historian modules can be configured in less than an hour; the UPM then recognizes the new tags and adds objects as necessary, propagating the change to reporting across all systems.

Motor performance verification

The new system provides the pipeline network with a better representation of operating conditions. Leveraging local historian modules “allows us to record any data we want, in real time, forward this to centralized servers, and view a comprehensive picture of system performance at any given moment” using the EMI, said the pipeline’s control systems manager. “This has been an excellent tool for troubleshooting and analyzing various conditions and sequences to see where we can make improvements. It gives us the insight we need to take action.”

For example, when a field technician called and said he had a motor running hot, a senior engineering technologist with the company had him pull up the EMI portal in a Web browser and navigate to a trend showing the motor’s resistance temperature detector (RTD) readings over the last six months. The trend report showed no real change in the temperatures. The engineering technologist added, “That might not sound like a big deal, but our pumps are anywhere up to 5,000 hp, so to pull one out for maintenance can run in the tens of thousands of dollars. Demonstrating there was not a problem prevented unnecessary maintenance and goes straight to our bottom line.”
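The check the technologist ran amounts to a simple trend comparison. A hedged sketch, assuming a hypothetical historian query function (the EMI portal does this through its trend displays, not user code), captures the idea:

```python
from datetime import datetime, timedelta
from statistics import mean

def rtd_trend_stable(fetch_history, tag, months=6, drift_limit_c=5.0):
    """fetch_history(tag, start, end) -> list of (timestamp, deg_C) stands
    in for whatever historian query API is actually available. Returns
    True when average winding temperature has not drifted."""
    end = datetime.now()
    start = end - timedelta(days=30 * months)
    samples = fetch_history(tag, start, end)
    if len(samples) < 2:
        raise ValueError("not enough history to compare")
    half = len(samples) // 2
    early = mean(value for _, value in samples[:half])
    recent = mean(value for _, value in samples[half:])
    # A flat trend means the "hot" report does not justify pulling the motor.
    return abs(recent - early) <= drift_limit_c
```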

Role-based secure access

The permissions system within the EMI software allows each group of users to create its own reports and trend analyses and share them with predefined group members. The pipeline company is getting used to operating with data it never imagined would be so easy to access.
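In outline, this is group-scoped report ownership. A toy sketch, with hypothetical group and user names (the actual EMI product has its own security configuration), might be:

```python
# Hypothetical group-scoped sharing: a report is visible only to members
# of the groups its author chose to share it with.
GROUP_MEMBERS = {
    "maintenance": {"asmith", "bjones"},
    "engineering": {"cnguyen"},
    "production":  {"dlee"},
}

def can_view(user, report_shared_with):
    """True if the user belongs to any group the report is shared with."""
    return any(user in GROUP_MEMBERS.get(group, set())
               for group in report_shared_with)

print(can_view("bjones", {"maintenance", "production"}))  # True
print(can_view("cnguyen", {"production"}))                # False
```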

Next on the agenda, Cybertrol is helping the pipeline operator add third-party historian data to the EMI UPM, along with data from business systems such as Microsoft SQL Server databases. This will help the company mine additional efficiencies and provide a clear view into operations at all times, from anywhere, to prevent issues from arising in the first place and safeguard system integrity.
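As a rough illustration of what pulling business-system data might involve, the sketch below queries a SQL Server table with pyodbc; every server, database, table, and column name here is assumed for illustration, not taken from the actual project:

```python
import pyodbc  # requires an installed ODBC driver for SQL Server

# All connection details and schema names below are hypothetical.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=biz-sql;DATABASE=Operations;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT StationId, ProductCode, ScheduledFlow "
    "FROM dbo.ShipmentSchedule "
    "WHERE ShipDate = CAST(GETDATE() AS date)"
)
for station_id, product_code, scheduled_flow in cursor.fetchall():
    # Business context like this can then be joined against historian
    # data inside the unified production model.
    print(station_id, product_code, scheduled_flow)
```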

– Ben Durbin is president, Cybertrol Engineering; edited by Mark T. Hoske, content manager, Control Engineering, mhoske@cfemedia.com.

Key concepts

  • Energy pipeline operator installs control system for production system visibility.
  • Information-enabled control architecture provides production intelligence and system visibility in Cybertrol Engineering’s software implementation.
  • Historical information, health monitoring, and verification for critical assets are provided with role-based access to performance data.

Consider this

While an old architecture may still operate, what visibility, monitoring, and analytics are you missing by not upgrading to modern systems?

ONLINE extra

www.rockwellautomation.com/rockwellautomation/industries/oil-gas/ 

www.cybertrol.com 

See other Control Engineering oil and gas engineering articles, products and news. 

Cybertrol Engineering is a CSIA member as of 3/2/2015

Original content can be found at Oil and Gas Engineering.