Basic steps to take when applying analytics processing

Upstream oil & gas operations can be improved using data analytics.

By Michael Risse January 29, 2019

Reducing the break-even price of U.S. shale oil production requires the intelligent application of what Goldman Sachs refers to as “brawn, brains, and bytes.” Any discussion of bytes must address the use of data analytics to accelerate the insights engineers and other experts can draw from Big Data.

These insights can improve operations, increase safety, and cut costs. According to McKinsey, these types of improvements represent a $50 billion opportunity in upstream oil & gas, including increasingly important shale oil.

The brawn stage of shale oil innovation started around 2013, including longer horizontal wells and fracking with more sand and horsepower. The brains stage included better horizontal well placement and targeted, optimized fracking. These innovations were responsible for reducing break-even pricing from $70 per barrel in 2013 to $50 per barrel in 2017. At the same time, production in key U.S. shale plays rose from 2.4 million barrels per day (MMBPD) in 2013 to 4.6 MMBPD in 2017.

Further reducing the break-even price to $45 per barrel and increasing production to about 7.7 MMBPD will require more brains, but it also will rely heavily on bytes.

These bytes can improve operations in several areas. This article will discuss two of them, production monitoring and preventive maintenance, but let’s first look at how data is gathered and stored in preparation for analysis.

Collecting data

Production monitoring and preventive maintenance each require acquisition of data from sensors—wired and wireless. Discrete sensors indicate whether an item of equipment, such as a pump, is on or off. They also are commonly used to indicate open/closed status, as with a valve.

Typical analog sensors measure pressure, temperature, flow, and density—parameters of considerable interest to shale producers. Analytical analog sensors are used more sparingly, most often to measure the chemical composition of oil.

Sensors can be wired or wireless. Traditional wired sensors work well in many applications, but as the name implies they have a drawback—the requirement to connect them via cabling and wiring. This is particularly problematic for retrofit applications at existing sites.

Discrete sensors transmit their on/off or open/closed status to monitoring systems via a single pair of wires. Smart discrete sensors transmit not only status, but also sensor condition, via a digital communications link.

Wired analog sensors also are either standard or smart. Standard analog sensors transmit a single process variable, for example a pressure reading for a pressure sensor, to a monitoring system, usually via a 4-20 mA signal.

Smart analog sensors transmit a wealth of data, up to 40 parameters for a sophisticated sensor such as a mass flow meter. For example, a typical Coriolis mass flow meter will transmit mass flow as its process variable, plus density and temperature. Diagnostic data indicates the meter’s condition, shows when it was last calibrated, and indicates when it should be calibrated again.

Wireless sensors were introduced about a decade ago, and both discrete and analog versions are smart. For industrial applications, the two main wireless protocols are ISA100 and WirelessHART. Although wireless is relatively new, there are well over 30,000 WirelessHART networks worldwide, with more than 10 billion operating hours.

Sensors and networks collect data, which must then be stored and often shared, tasks that advances in technology have made easier.

Storing and sharing data

Not long ago, storing vast amounts of data generated by a shale drilling site was expensive. Costs have come way down, for both on-premises and cloud storage.

On-premises storage is typically on a server-class PC connected to the monitoring PC via a hardwired Ethernet connection. The server-class PC hosts one of the many popular time-series databases, such as OSIsoft Pi. Unlike relational databases, time-series databases store huge amounts of real-time data efficiently.

Data stored on-premises often is needed at central locations, such as a control center, and may be transmitted via many different means, including cellular and satellite networks.

In like manner, data may go directly from a local PC-based monitoring system to the cloud, which has many advantages over on-premises data storage. Costs per unit of storage are lower, and storage can scale as required. Once in the cloud, data is accessed worldwide via any Internet connection.

Accessing either on-premises or cloud-based data remotely presents some security issues, which, while not insurmountable, are outside the scope of this article.

Now that data has been collected, stored, and shared, it can be analyzed to improve operations.

Improve and implement

Many oil & gas companies are overwhelmed by the sheer volume of data collected. Despite claims by some suppliers to the contrary, it’s not possible to simply turn AI or machine learning software loose on data and get useful information. Instead, exploiting data analytics must follow a multi-step process, shown in Figure 1 and described below.

Connecting to data is easier when using data analytics software with secure, pre-built connectors to the databases used. When evaluating data analytics software offerings, make sure pre-built connectors link to existing and anticipated databases. Automatic linking to databases allows Google-like searches for parameters and time periods of interest. Otherwise, custom code must be written to link the analytics software to the database, an expensive and time-consuming task.
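As a rough illustration of the alternative, the sketch below shows what custom connector code might look like: it pulls a single tag from a hypothetical time-series database REST endpoint into a pandas DataFrame. The endpoint URL, tag name, and response format are assumptions, not any real product’s API; a pre-built connector removes the need to write and maintain this kind of glue code for every database.

```python
# Minimal sketch of custom "glue" code to pull one tag from a time-series
# database over a hypothetical REST API. The host, path, query parameters,
# and JSON layout are assumptions for illustration only.
import requests
import pandas as pd

def fetch_tag(tag: str, start: str, end: str) -> pd.DataFrame:
    """Query a hypothetical historian endpoint and return a time-indexed frame."""
    resp = requests.get(
        "https://historian.example.com/api/tags/data",   # assumed endpoint
        params={"tag": tag, "start": start, "end": end},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: [{"timestamp": "...", "value": ...}, ...]
    df = pd.DataFrame(resp.json())
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df.set_index("timestamp").sort_index()

# Example call (requires a real server at the assumed address):
# casing_pressure = fetch_tag("WELL01.CASING_PRESSURE", "2019-01-01", "2019-01-08")
```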

Data cleansing requires aligning data sources on the same time scale and validating data quality. Doing so can consume up to 50% of the time required for gaining insights, depending on the nature of the existing data. Data analytics software should come with built-in data cleansing tools. These tools should be specific to the process industries and usable by a process engineer with a limited background in signal processing methods such as spike detection, low-pass filtering, and managing intermittent bad values in data sets.
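To make these steps concrete, here is a minimal sketch, using pandas on synthetic data, of the kind of cleansing a process engineer would otherwise do by hand: removing spikes with a rolling-median test, interpolating across intermittent bad values, applying a simple low-pass (rolling mean) filter, and aligning two signals on a common time scale. The signal names, sample rates, and thresholds are illustrative assumptions.

```python
# Sketch of basic time-series cleansing with pandas; tag names, data, and
# thresholds are illustrative assumptions, not values from the article.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Two signals sampled at different rates, with a spike and some dropouts.
idx_fast = pd.date_range("2019-01-01", periods=3600, freq="1s")
idx_slow = pd.date_range("2019-01-01", periods=360, freq="10s")
pressure = pd.Series(250 + rng.normal(0, 2, len(idx_fast)), index=idx_fast)
temperature = pd.Series(80 + rng.normal(0, 0.5, len(idx_slow)), index=idx_slow)
pressure.iloc[500] = 900            # spike
pressure.iloc[1000:1010] = np.nan   # intermittent bad values

# 1. Spike detection: flag points far from a rolling median, then drop them.
spikes = (pressure - pressure.rolling("60s").median()).abs() > 20
pressure = pressure.mask(spikes)

# 2. Fill intermittent bad values, then apply a simple low-pass (rolling mean) filter.
pressure = pressure.interpolate(limit=30).rolling("30s").mean()

# 3. Align both signals on the same one-minute time scale.
frame = pd.DataFrame({
    "pressure_psi": pressure.resample("1min").mean(),
    "temperature_f": temperature.resample("1min").mean(),
})
print(frame.head())
```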

Capturing context relates each data point to others. A relational database does this upon setup and creation, with each data point’s relationships to others defined. With time-series databases, each data point is time-stamped, but without relationships established among data points. Capturing context adds the relationships for each data set as it’s pulled from the database into the data analytics software. Once again, tool use must be intuitive for process engineers, with no assistance required from data scientists or IT experts.
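As an illustration, the sketch below (pandas, synthetic data) adds context to raw time-stamped readings in two ways: joining each tag to an asset table that records which well and equipment item it belongs to, and labeling every sample with the operating mode in effect at that time. The tables and names are hypothetical.

```python
# Sketch of adding context to time-stamped data; all names and tables are hypothetical.
import pandas as pd

# Raw time-series samples as they might come out of a historian.
samples = pd.DataFrame({
    "timestamp": pd.date_range("2019-01-01", periods=6, freq="1h"),
    "tag": ["WELL01.PUMP_PRESS"] * 6,
    "value": [250, 252, 180, 178, 251, 253],
})

# Asset context: which well and equipment item each tag belongs to.
assets = pd.DataFrame({
    "tag": ["WELL01.PUMP_PRESS"],
    "well": ["Well 01"],
    "equipment": ["Transfer pump"],
})

# Operating-mode context: what the unit was doing over each time interval.
modes = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-01-01 00:00", "2019-01-01 02:00", "2019-01-01 04:00"]),
    "mode": ["Normal", "Turndown", "Normal"],
})

# Relate each sample to its asset, then to the mode in effect at that time.
context = samples.merge(assets, on="tag", how="left")
context = pd.merge_asof(context.sort_values("timestamp"), modes, on="timestamp")
print(context)
```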

Today’s most popular data analytics tool is the spreadsheet, but analyzing time-series data with this general-purpose tool is time-consuming and requires expertise with macros, pivot tables, and other arcane spreadsheet functions. Furthermore, the data volumes spreadsheets can handle typically limit the types of analysis possible. Software designed for analysis of time-series process data is needed. The software should support subject-matter experts (SMEs) with visual representations of the data of interest, allowing direct interaction with the data using an iterative procedure (Figure 2). SMEs can then rapidly perform calculations on the data, search for patterns, analyze different operating modes, and so forth.
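The sketch below hints at the kind of analysis that is painful in a spreadsheet but straightforward with purpose-built tooling: segmenting a synthetic flow signal into operating modes (running versus idle) and summarizing each segment. The signal, threshold, and labels are assumptions made for illustration.

```python
# Sketch of segmenting a time series into operating modes and summarizing each one.
# The signal, on/off threshold, and mode labels are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2019-01-01", periods=1440, freq="1min")

# Synthetic flow: the pump runs for stretches, then idles near zero.
running = np.sin(np.arange(len(idx)) / 120.0) > 0
flow = pd.Series(
    np.where(running, 120 + rng.normal(0, 5, len(idx)), rng.normal(0, 1, len(idx))),
    index=idx,
)

# Label each sample, then number contiguous runs of the same mode.
mode = pd.Series(np.where(flow > 10, "running", "idle"), index=idx)  # assumed threshold
segment = (mode != mode.shift()).cumsum()

summary = (
    pd.DataFrame({"flow_bpd": flow, "mode": mode, "segment": segment})
    .groupby(["segment", "mode"])["flow_bpd"]
    .agg(["count", "mean", "max"])
)
print(summary.head(10))
```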

Capture and collaborate capabilities give SMEs the means to share results with colleagues. This not only brings multiple minds to bear on a problem, but also supports knowledge transfer. Annotated captured results allow others to follow the trail that generated the original insights.

Extensibility provides the flexibility to use a data analytics solution anytime and anywhere. A browser-based interface means the look and feel are the same whether on an office PC or a tablet in the field.

“Run-at-scale” means the data analytics software works with the largest data sets to solve the most complex problems. In extreme cases, the software runs on multiple servers to harness the processing power and local data storage needed. This capability will be more important in the future as deployment data volumes and problem complexities grow.

Finally, the SME may want to establish monitoring applications for alerting stakeholders to specific operating conditions, providing early warning and driving faster corrective action.
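A monitoring application can be as simple as a rule that runs on a schedule, checks recent data against a limit, and notifies stakeholders when the limit is exceeded. The sketch below shows the idea on synthetic data; the tag name, limit, and notification channel (a log message standing in for an email or text) are assumptions.

```python
# Sketch of a simple scheduled monitoring rule; tag name, limit, and
# notification channel are illustrative assumptions.
import logging
import numpy as np
import pandas as pd

logging.basicConfig(level=logging.INFO)

def check_limit(series: pd.Series, limit: float, tag: str) -> bool:
    """Alert if the average of the last 15 minutes of data exceeds the limit."""
    recent = series[series.index >= series.index.max() - pd.Timedelta(minutes=15)]
    exceeded = bool(recent.mean() > limit)
    if exceeded:
        logging.warning("ALERT: %s averaged %.1f over the last 15 min (limit %.1f)",
                        tag, recent.mean(), limit)
    return exceeded

# Synthetic discharge-pressure data trending upward toward the limit.
idx = pd.date_range("2019-01-01", periods=120, freq="1min")
pressure = pd.Series(np.linspace(300, 360, len(idx)), index=idx)
check_limit(pressure, limit=350, tag="COMP01.DISCHARGE_PRESS")
```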

Detailed use case

Pioneer Energy is a service provider and original equipment manufacturer solving gas-processing challenges in the oilfield with gas capture and processing units for tank vapors and flare gas.

Pioneer operates and monitors these geographically dispersed units from its headquarters in Lakewood, Colo., analyzing the results to deliver continuous improvement.

Pioneer’s FlareCatcher system is powered by a natural gas generator housed inside a trailer. Fuel gas for the generator can be any of FlareCatcher’s refined energy products, representing only about 5% of the total energy of the gas processed by the equipment.

Pioneer has systems installed in the Western United States. Future sites could be anywhere in the world with cellular or satellite connectivity. Alternatively, a local radio network could get the data to a network hub.

Well-site data is sent to a local data center with built-in redundancies in power and networking services. Pioneer has data centers in Denver and Dallas. It is investigating virtualization to add dynamic scaling and load balancing to improve field data gathering.

Analog data is transmitted at one-second intervals and discrete data is transmitted as it changes, but Pioneer had no sophisticated data analysis tools. If engineers found themselves with free time, they manually loaded historical data into a Microsoft Excel spreadsheet to calculate a few basic metrics. But Excel is not suitable for calculations of reasonable complexity, so much of the data gathered was not exploited for value.

Pioneer selected Seeq’s advanced analytics application because it matched what the company envisioned. It has a graph database, time-series optimization, and a clean browser-based interface, as well as advanced data analytics and information-sharing capabilities. The decision was easy after seeing the visual pattern search tool demonstrated.

The solution enables Pioneer to optimize the data stream. Simple computations performed at the edge determine what data is streamed to headquarters for analysis, and what is archived locally.
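One common way to decide what to stream and what to archive is exception (deadband) reporting: a reading is transmitted only when it differs from the last transmitted value by more than a deadband, while everything is kept locally. The sketch below illustrates the generic pattern; it is not Pioneer’s actual edge logic, and the deadband value is an assumption.

```python
# Sketch of deadband (report-by-exception) filtering at the edge.
# Generic pattern on synthetic data; not Pioneer's actual implementation.
import numpy as np

def deadband_filter(values, deadband):
    """Return indices of readings that would be streamed to headquarters."""
    streamed = [0]                      # always send the first reading
    last_sent = values[0]
    for i, v in enumerate(values[1:], start=1):
        if abs(v - last_sent) > deadband:
            streamed.append(i)
            last_sent = v
    return streamed

rng = np.random.default_rng(2)
readings = 100 + np.cumsum(rng.normal(0, 0.2, 1000))    # slowly drifting signal
sent = deadband_filter(readings, deadband=1.0)          # assumed deadband
print(f"Streamed {len(sent)} of {len(readings)} readings "
      f"({100 * len(sent) / len(readings):.0f}%); the rest are archived locally.")
```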

The system analyzes historical data to define rules for operating parameters. In a continuous improvement cycle, all data has potential value if unlocked and leveraged. Seeq is the environment for experimentation and learning. Visual feedback allows engineers to analyze complex data in a reasonable amount of time.

For example, Pioneer’s refrigeration systems are very sensitive to changing operational conditions. Seeq allows Pioneer to isolate these effects, identify their causes, and develop simple operational rules to extend the life of its capital investment.

Pioneer delivers value by operating systems remotely. If the software identifies a problem with field equipment, corrective action can be taken quickly. For instance, Pioneer uses air-cooled cascade refrigeration systems. During hot days, discharge temperatures and pressures can rise to elevated levels, leading to hardware failure. By detecting this condition, the system allows operators to intervene by reducing system throughput.
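Catching this kind of condition usually means looking for an excursion that persists, not a single high sample. The sketch below flags periods where a synthetic discharge temperature stays above a limit for more than a set duration; the limit and duration are assumptions chosen for illustration, not Pioneer’s actual settings.

```python
# Sketch of detecting sustained excursions; limit and minimum duration are assumptions.
import numpy as np
import pandas as pd

idx = pd.date_range("2019-07-01", periods=720, freq="1min")
# Synthetic discharge temperature that climbs on a hot afternoon.
temp = pd.Series(180 + 25 * np.exp(-((np.arange(720) - 400) / 120.0) ** 2), index=idx)

LIMIT_F = 195                          # assumed discharge temperature limit
MIN_DURATION = pd.Timedelta(minutes=30)

above = temp > LIMIT_F
segment = (above != above.shift()).cumsum()
for _, grp in temp[above].groupby(segment[above]):
    duration = grp.index.max() - grp.index.min()
    if duration >= MIN_DURATION:
        print(f"Sustained excursion: {grp.index.min()} to {grp.index.max()} "
              f"({duration}), peak {grp.max():.1f} °F")
```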

All well-site data is streamed to a centralized, secure data center where the server resides. The interface is available via a web proxy server. Pioneer technicians and engineers can access the data anywhere there is a network connection, including at the well site, given a cellular hot spot.

The analytics software installation improves operational intelligence, shedding light on otherwise complex processes. The challenge now is deciding what mystery to tackle next.

Use cases in brief

Optimizing oil collection

Problem: A company collected oil from scattered well sites, but never optimized pick-up routes to match the erratic nature of production. Internal analysis efforts using level data from sites proved fruitless.

Solution: Using programmatic analysis, production data from the sites is monitored by watching the rate of change, with results used to predict the optimum time to send a truck to a given location. Truck calls are more efficient, and reports are generated automatically.
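A rough version of the underlying calculation: fit the recent rate of change of a tank level and extrapolate to estimate when the tank will reach its pickup threshold. The sketch below uses synthetic level data; the capacity threshold and fill rate are assumptions, not figures from this use case.

```python
# Sketch of predicting pickup time from a tank level's rate of change.
# Pickup threshold and synthetic fill rate are illustrative assumptions.
import numpy as np
import pandas as pd

idx = pd.date_range("2019-01-01", periods=48, freq="1h")
level_bbl = pd.Series(
    120 + 3.5 * np.arange(48) + np.random.default_rng(3).normal(0, 2, 48),
    index=idx,
)

PICKUP_LEVEL = 400   # send a truck before the tank reaches this level (bbl)

# Fit the rate of change over the most recent 24 hours.
recent = level_bbl.iloc[-24:]
hours = (recent.index - recent.index[0]) / pd.Timedelta(hours=1)
rate, intercept = np.polyfit(hours, recent.values, 1)   # bbl per hour

hours_to_pickup = (PICKUP_LEVEL - recent.iloc[-1]) / rate
pickup_time = recent.index[-1] + pd.Timedelta(hours=float(hours_to_pickup))
print(f"Fill rate ~ {rate:.1f} bbl/h; schedule pickup before {pickup_time}")
```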

Well pump analysis

Problem: Flow assurance engineers watching out for undesirable pumping conditions had difficulty analyzing well production data from a large group of sites. A mathematical model could perform the calculations but typically took an entire day, delaying corrective action.

Solution: With improvements to the model, the same calculations can be done in about 30 minutes. This makes it far easier to identify problem situations and evaluate the effectiveness of corrective measures, improving production overall.

Well performance analysis

Problem: Flow assurance engineers knew specific attributes of crude oil from a given well were predictors of equipment performance issues such as clogging, fouling, and corrosion—but could not develop adequate mathematical models for accurate predictions.

Solution: Using data from a large group of wells, the solution ties oil characteristics to equipment performance, helping operations and maintenance departments recognize when and how a change in the crude produced is likely to cause equipment performance problems.
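In its simplest form, this amounts to fitting a relationship between a measured crude property and an observed performance indicator, then using the fit to flag incoming crude likely to cause trouble. The sketch below fits a straight line between a hypothetical asphaltene content and a fouling rate on synthetic data; the variables and values are illustrative, not the actual model behind this use case.

```python
# Sketch of relating a crude property to an equipment performance indicator.
# Variables and values are hypothetical; a real model would use many wells and properties.
import numpy as np

rng = np.random.default_rng(4)
asphaltene_pct = rng.uniform(0.5, 4.0, 200)                       # crude characteristic
fouling_rate = 0.8 * asphaltene_pct + rng.normal(0, 0.3, 200)     # observed performance

slope, intercept = np.polyfit(asphaltene_pct, fouling_rate, 1)

def predicted_fouling(pct: float) -> float:
    """Predict fouling rate for an incoming crude from its asphaltene content."""
    return slope * pct + intercept

# Flag incoming crude expected to exceed an assumed fouling-rate limit.
for pct in (1.0, 3.5):
    risk = "HIGH" if predicted_fouling(pct) > 2.0 else "normal"
    print(f"{pct:.1f}% asphaltenes -> predicted fouling {predicted_fouling(pct):.2f} ({risk})")
```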

Rotating equipment evaluation

Problem: Even with all the diagnostic sensors applied to large rotating equipment installations, users had difficulty getting useful information beyond the most basic alarms. Performing the required sophisticated analysis proved elusive with conventional tools.

Solution: Using process analytics, analysis zeroed in on root causes quickly and effectively, eliminating the problematic first-principles models and false positives common with less sophisticated analytical approaches. It is now far easier to determine optimal operating conditions and avoid outages.

Original content can be found at Oil and Gas Engineering.


Author Bio: Michael Risse is the CMO and vice president at Seeq Corporation, a company building advanced analytics applications for engineers and analysts that accelerate insights into industrial process data. He was formerly a consultant with big data platform and application companies, and prior to that worked with Microsoft for 20 years. Michael is a graduate of the University of Wisconsin at Madison, and he lives in Seattle.