Exclusive Content
20 Aug 2015

Safety Case in the Gulf of Mexico: Method and Benefits for Old and New Facilities

The purpose of the Bureau of Safety and Environmental Enforcement (BSEE) Safety and Environmental Management Systems (SEMS) is to enhance safety of operations in the Gulf of Mexico (GOM). One of the principal SEMS objectives is to encourage the use of performance-based operating practices. However, the current US regulatory framework for GOM operations does not provide adequate tools to focus on specific risks associated with a facility. The adoption of the safety-case regime would steer operations toward this goal.

Introduction
This paper discusses the application of the safety-case concept and how the operator can demonstrate that the major safety and environmental hazards have been identified, that the associated risks have been estimated, and that these risks are managed to achieve a target level of safety. Throughout the safety-case road map, the identification of safety-critical elements (SCEs) and associated performance standards represents one of the cornerstones of asset-integrity-management (AIM) strategy.

The paper discusses how application of the safety-case regime for existing facilities would highlight particular risks that may have been misjudged, taking into account the current state of installations and the actual operational procedures in place. For new facilities, the introduction of the safety case at the early stages of design would ease the integration of the overall risk-management (RM) plan at each level of organization.

General Safety-Case Approach
The safety-case approach is referred to generally as part of an objective-based (or goal-setting) regime. Such regimes are based on the principle that legislation sets the broad safety goals to be attained and the operator of the facility develops the most appropriate methods of achieving those goals. A basic tenet is the premise that the ongoing management of safety is the responsibility of the operator and not the regulator. The term “safety case” arises from the Health and Safety Executive in the UK, where the safety-case regime was implemented after the Piper Alpha accident in 1988. Most of the performance-based regulations have adopted elements of the safety-case approach. Moreover, many operators have included safety-case components as part of their companies’ requirements and have integrated them in their general management system.

Fig. 1

The safety-case regime is a documented demonstration that the operator has identified all major safety and environmental hazards, estimated the associated risks, and shown how all of these risks are managed to achieve a stringent target level of safety, including a demonstration of how the safety-management system in place ensures that the controls are applied effectively (Fig. 1). The safety case is a standalone document, supported by a set of subsidiary documents, that presents a coherent argument demonstrating that the risks are managed to be as low as reasonably practicable (ALARP). Fig. 1 presents the general principle of the safety-case development process.

Current RM Regime in GOM
All leasing and operations in the GOM portion of the outer continental shelf are governed by laws and regulations intended to ensure safe operations and preservation of the environment while balancing the US need for energy development. Since October 2011, BSEE has enforced these regulations and periodically updated the rules as the party responsible for comprehensive oversight, safety, and environmental protection of all offshore activities.

The original SEMS rule, under the Workplace Safety Rule, made mandatory the application of the following 13 elements of the American Petroleum Institute (API) Recommended Practice (RP) 75:

  • General provisions: for implementation, planning, and management review and approval of the SEMS program
  • Safety and environmental information: safety and environmental information needed for any facility (e.g., design data, facility process such as flow diagrams, mechanical components such as piping, and instrument diagrams)
  • Hazards analysis: a facility-level risk assessment
  • Management of change: program for addressing any facility or operational changes including management changes, shift changes, and contractor changes
  • Operating procedures: evaluation of operations and written procedures
  • Safe work practices: e.g., manuals, standards, rules of conduct
  • Training: safe work practices and technical training (includes contractors)
  • Assurance of quality and mechanical integrity of critical equipment: preventive-maintenance programs and quality control
  • Prestartup review: review of all systems
  • Emergency response and control: emergency-evacuation plans, oil-spill contingency plans, and others in place and validated by drill
  • Investigation of incidents: procedures for investigating incidents, implementing corrective action, and following up
  • Audit of safety- and environmental-management-program elements: strengthening API RP 75 provisions by requiring an initial audit within the first 2 years of implementation and additional audits in 3-year intervals
  • Records and documentation: documentation required that describes all elements of the SEMS program

Introduction of Safety Case for Operations in the GOM
Analogies Between Strengths and Weaknesses of SEMS Rule and Safety-Case Development. As communicated by BSEE, the four principal SEMS objectives are the following:

  • Focus attention on the influences that human error and poor organization have on accidents.
  • Continuously improve the offshore industry’s safety and environmental records.
  • Encourage the use of performance-based operating practices.
  • Collaborate with industry in efforts that promote the public interests of offshore worker safety and environmental protection.

SEMS is promoted as a nontraditional, performance-focused tool for integrating and managing offshore operations. However, the current US regulatory framework for offshore operations in the GOM does not provide adequate tools to focus on the specific risks associated with a facility. The development of the SEMS program is generally focused on the provision of the 13 elements required in API RP 75 rather than on a consistent narrative in which the operator demonstrates how effective the controls and management system in place are against the identified risks.

Fig. 2

Nevertheless, the 13 elements of API RP 75 could be seen as a skeleton for the development of the safety-case regime. The links between them are naturally identifiable, but significant efforts would be necessary to meet the safety-case philosophy and the ALARP concept in particular. Fig. 2 presents a correlation between the 13 elements of API RP 75 and the main steps of safety-case development.

As shown in Fig. 2, the elements of API RP 75 are genuinely components of safety-case development. However, as Fig. 2 also makes clear, critical pieces are missing, such as the ALARP process as part of the risk-reduction effort, an unambiguous strategy for the identification of SCEs, and the development of the associated performance standards. Moreover, the safety-case regime advocates a clear demonstration of how the decision process is based on the output of each development stage. Such a continuous link among the API RP 75 elements is missing.

The SEMS vulnerabilities are primarily related to the lack of targets (or how to define targets) as part of a performance-based approach.

Use of Safety Case for the Development of RM/AIM Plans
Asset integrity is widely considered a key to managing major accidents. It is an outcome of good design, construction, and operating practices. It is commonly accepted that the AIM process follows a standard continual-improvement cycle (the Deming cycle): plan, do, check, act.

As part of the first step, it is crucial to establish the objectives and processes necessary to deliver the expected results (plan). These different aspects cover factors outside the organization, such as the applicable legislation, codes, and standards, as well as key stakeholders, and internal factors, such as the company RM standards, processes, and targets or roles and responsibilities.

Once the plan is defined and the objectives are clearly stated, it is important to implement the plan—execute the process to deliver the results (do). This stage is based on a risk-assessment process from hazard identification to risk analysis, to provide a risk evaluation of the facility.

The actual results (measured and collected in the “do” stage) are then studied and compared against the expected results (the targets or goals from the “plan” stage) (check). This phase of risk treatment involves considering all feasible options and deciding on the optimal combination to minimize the residual risk as far as reasonably practicable.

Once the decisions are made, on the basis of an ALARP process, the solutions are implemented (act). It is also crucial to monitor and periodically review the approach taken.
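The plan-do-check-act loop described above can be sketched in a few lines of code. The sketch below is purely illustrative: the risk model, the tolerability threshold, and the control names are hypothetical assumptions, not anything prescribed by SEMS or the safety-case regime.

```python
# Illustrative sketch of one pass through the Deming (plan-do-check-act)
# cycle for asset-integrity management. All names and numeric values
# are hypothetical, for illustration only.

TARGET_RISK = 1e-4  # hypothetical tolerable annual risk target (plan)

def assess_risk(likelihood, consequence):
    """Do: estimate risk as likelihood x consequence."""
    return likelihood * consequence

def check(residual_risk, target=TARGET_RISK):
    """Check: compare the measured result against the planned target."""
    return residual_risk <= target

def act(residual_risk, controls, reduction_per_control=0.5):
    """Act: while the target is not met, add a control and reduce the
    residual risk, driving it toward an ALARP-style endpoint."""
    while not check(residual_risk):
        controls.append("additional barrier")
        residual_risk *= reduction_per_control
    return residual_risk, controls

risk = assess_risk(likelihood=2e-3, consequence=0.5)      # do
risk, controls = act(risk, controls=["existing barrier"])  # check + act
assert check(risk)
```

In practice the "act" step is a reasoned ALARP decision, not a mechanical loop; the sketch only shows how each stage of the cycle feeds the next.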

The safety-case process involves a similar development cycle; therefore, it is natural to promote the development of RM/AIM plans and the safety case in parallel.

For existing facilities, existing RM/AIM plans would be challenged and revised toward a continuous improvement of their effectiveness. Application of the safety-case regime for existing installations would highlight particular risks that may have been misjudged, taking into account the current state of the installations and the actual operational procedures in place. Output from verification activities would lead to the identification of corrective actions for existing assets. This type of revision could be seen as a significant effort, but it would actually help the operator to optimize its AIM strategy and spend its resources more effectively. This approach would also give the regulator a quantified picture of current operations in the GOM. Because all facilities would be evaluated against the same performance targets, it would be easier for the operator to prioritize the critical aspects of each facility.

For new facilities, the introduction of the safety-case regime early in the project would naturally lead to an optimized AIM philosophy, strategy, and plan. The operator would be able to anticipate the efforts to be deployed for the entire facility life cycle. The introduction of the safety-case regime at the early stages of design would ease the integration of the overall RM plan at each level of organization.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper OTC 25957, “Safety Case in Gulf of Mexico: Method and Benefits for Old and New Facilities,” by Julia Carval, SPE, and Bibek Das, SPE, Bureau Veritas North America, prepared for the 2015 Offshore Technology Conference, Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2015 Offshore Technology Conference. Reproduced by permission.


Safe Handling and Disposal of Nanostructured Materials

Nanostructured materials are substances that have at least one dimension in the nanometer-size regime and include nanoparticulate materials such as quantum dots, nanofibrous materials such as carbon nanotubes, and nanoporous materials such as activated carbon. Potential applications of these novel materials in the oil and gas industry include wastewater treatment, antimicrobial additives, and multifunctional coatings. These applications raise concerns regarding safe handling and disposal of the materials. This paper provides a first-hand perspective on the appropriate handling of nanomaterials in a laboratory setting.

Introduction
After several cycles of technological advances in fields such as polymers, electronics, and the energy sector, the world is currently undergoing a nano revolution, wherein materials with increasingly smaller dimensions are generating considerable interest in the interdisciplinary technology community. Such materials, known as nanomaterials or nanostructured materials, typically have at least one dimension in the nanometer range. These materials have been found to possess many useful properties, such as high strength, high surface area, abrasion resistance, and tunable chemical reactivity. They are currently being researched extensively or actively proposed for applications in critical realms (e.g., aerospace, defense, medicine) such as aircraft composites, electronic devices, biomedical sensors, and coatings. This trend makes it evident that nanomaterials and nanotechnology, the science and application of such material or the manipulation of material at molecular or atomic scales, are here to stay and will grow in popularity. A wide range of economic institutions worldwide currently estimate the global market for nano-related products and technologies at more than USD 1 trillion.

As with any new material or technology, there will be unknowns such as questions related to safety, economy of handling and processing, and effect on the environment. Therefore, the increasing use of nanomaterials in research laboratories and industries makes it essential to understand and address these questions better.

This paper focuses on prevention of possible safety issues related to nanomaterials through a review of current good practices and regulatory developments as applied to an industrial laboratory setting. As the saying goes, “Prevention is better than cure.” As with any material or activity associated with human endeavor, risks exist and can always be addressed by the judicious use of appropriate protective or preventive measures in the research-and-development phase and during manufacturing and commercialization.

Potential Risks of Occupational Exposure to Nanomaterials
Various types of nanomaterials have their own unique sets of physical, chemical, and biological properties. For example, nanoparticulate powders can be easy to aerosolize and disperse, even unintentionally. Because these particles are very small, even a small quantity of the material can be dispersed over a wide area. Liquids containing dispersed nanomaterials (nanofluids) can sometimes be less dispersible because, unless pressurized, they cannot be dispersed over large areas as easily as the dry particles. Pressurized aerosol containers of nanodispersions (in a liquid or gaseous carrier), on the other hand, are energized and potentially are even more dispersible than dry nanoparticles.

Given that nanomaterials are a new class of widely used materials, only sparse definitive data exist on their effects on human beings. A person can be exposed to these materials through several key routes: oral ingestion, inhalation, skin contact, and injection. The literature suggests that a person who comes into contact with finely dispersed particulate material can suffer mild or chronic symptoms (depending on the mode and duration of exposure). These range from respiratory discomfort and dermatitis to lung or eye damage (especially for prolonged exposure or exposure to high doses of the material). Several of these symptoms have been recorded in the literature for various micrometer-sized particles. Asbestos, which has been studied extensively, can provide an analog for the potential risks of exposure to nanomaterials.

Some common exposure routes and resultant consequences exist if precautions such as the use of personal protective equipment (PPE) are not taken. Initial damage arising from external exposure to nanomaterials (in the form of dispersions, aerosols, or powders) can translate into more complex and unpredictable consequences within the human body. Exposure to nanomaterials can be prevented easily with commonly used PPE such as safety glasses, laboratory coats, face masks, and gloves.

What Is Nanosafety?
Given the development of several new types of nanomaterials, the lack of definitive data on their harmful effects, and the availability of a wide range of preventive safety measures, approaches need to be developed to promote better safety when working with these materials. Such an endeavor results in safe working conditions for personnel, which can be termed “nanosafety.” Among the most common ways to promote nanosafety is prevention by the use of widely available and commonly used PPE and suitable engineering controls. A hazard-risk assessment usually helps identify opportunities for designing such controls. The use of PPE along with engineering controls effectively reduces external exposure and subsequent internalization of nanomaterials by personnel. One cannot emphasize enough the importance of these simple measures.

It must be noted that merely using PPE and engineering controls would not be sufficient to promote nanosafety. The authors of this paper consider nanosafety to be a philosophy and a responsibility to work with nanomaterials in a careful manner, guided by sound scientific principles and common sense.

Regulatory Activity: Emerging Trends and Challenges
Although general guidelines and regulations pertaining to the safe handling and disposal of chemical or hazardous wastes exist, the initiatives addressing the unique requirements related to nanomaterials are still in their infancy. Several regulatory organizations are looking into addressing these initiatives. In late 2014 and early 2015, the US Environmental Protection Agency (EPA) began requiring some basic information regarding nanomaterials from manufacturers under the Significant New Use Rule of the Toxic Substances Control Act (TSCA). Moreover, in the US, the Nanoscale Materials Stewardship Program introduced by the EPA under the auspices of the TSCA still regards nanomaterials as conventional chemicals, despite differences in their properties. The Registration, Evaluation, Authorization, and Restriction of Chemicals program rolled out in the EU tends to focus on bulk chemicals. Consequently, the smaller quantities of nanomaterials and their related wastes tend to “fall through the cracks.” While it is likely that not all nanomaterials are harmful, several categories of these materials will be capable of having a negative effect on human health and the environment, either in isolation or in a mixture with more-conventional materials and chemicals (e.g., polymer nanocomposites). Challenges regarding the effective evaluation of hazards pertaining to nanomaterials could contribute to these inadequacies, and the issues could potentially be addressed through a combination of improved toxicology-test protocols and computational methods. Any improvements to the current regulatory stipulations may take some time to be formulated and implemented. Meanwhile, one way to handle this challenge is to voluntarily adopt suitable good practices, coupled with existing regulations and intracompany policies. The key will be to err on the side of caution wherever possible.

Good Practices in Action
Until nanosafety regulations are in place, some voluntary good practices should be adopted, based on currently used laboratory and industrial safety protocols. On the basis of literature published by the National Institute for Occupational Safety and Health, suggested universal guidelines pertaining to nanosafety include the following:

  • By default, treat nanomaterials as hazardous chemicals, and learn about related technical literature before working with them.
  • Provide adequate training to employees who are new to the field.
  • Employers should work toward identifying tasks, processes, and equipment involved in handling nanomaterials, especially in their native forms (e.g., bulk powders). Workplace profiles of exposure to nanomaterials should be conducted regularly.
  • Ongoing education programs pertaining to nanosafety should be in place and inform employees periodically about the latest developments in this field.
  • Plan the experiment or process beforehand, and obtain the required amounts of nanomaterial; this reduces subsequent waste and disposal problems.
  • Be aware of neighboring personnel when working with nanomaterials, and always confine or restrict the workspace where nanomaterials are handled.
  • Use suitable engineering controls and proper PPE specific to the materials and processes in question.
  • Properly dispose of any waste.
  • Wash hands (even after removing gloves) with soap and water before handling food or working outside the laboratory.
  • Regularly monitor changes in the organization’s policies, industry practices, and emerging regulatory activity, and comply as required.

Fig. 1

Fig. 1 shows that the type and quantity of nanomaterial, the processes employed, the existing infrastructure, and (above all) the human factor all play a significant role. The flow chart must be customized for specific nanomaterial-related activities.

Conclusions
This paper attempts to present a detailed overview of safe handling of nanomaterials in an industry setting, from a laboratory practitioner’s viewpoint. Increased usage of nanomaterials leads to increasing amounts of related waste, also termed “nanowaste,” with as-yet-unknown ramifications.

Nanowaste is currently treated as a conventional hazardous chemical in academic and industrial entities working with these new materials, though not all nanomaterials are toxic or harmful. However, owing to size-dependent differentiation of the properties of materials, nanomaterials and related waste require certain unique additional safety measures. Moreover, nanomaterials can consist of various compositions and chemistries that must be addressed separately. Many good practices are based on current precautions used when handling hazardous chemicals and involve general common sense.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper OTC 25975, “Safe Handling and Disposal of Nanostructured Materials,” by Pavan M.V. Raja, SPE, Monica Huynh, and Valery N. Khabashesku, SPE, Baker Hughes, prepared for the 2015 Offshore Technology Conference, Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2015 Offshore Technology Conference. Reproduced by permission.


Managing Marine Geohazard Risks Throughout the Business Cycle

Today, the industry is faced with entry into frontier areas with little prior published understanding and potentially complex slope and deepwater settings. In such settings, early effort in the exploration-and-production cycle is required to allow appropriate data to be gathered and assessed. In order to address these issues, BP has adopted a methodology to manage geohazard risks over the life of the license.

Introduction
In 1964, the rig C.P. Baker was lost in the Gulf of Mexico in a shallow-gas blowout with the loss of 22 lives. That accident, and similar events in the industry around the same time, triggered the development of geophysical site investigation or geohazard methodologies to support safety in tophole drilling and field development through detailed assessment of seabed and near-surface geology. To this end, the Hazards Survey in North America and the Site Survey in Europe became the staple means for evaluating predrill or predevelopment conditions over the following 30 years.

The technologies used in these surveys have continued to be developed. These approaches have generally served the industry well for 50 years. However, as the industry has progressed from operations generally on the continental shelf out onto the continental slope and into ultradeep water, the geohazard issues that need to be addressed by the industry have grown in variety and complexity.

While the scope of possible sources of geohazards has expanded, so has the potential size of license areas to be studied.

If conditions across such blocks on the continental shelf or in ultradeep water were homogeneous, it may be acceptable to continue with the traditional approach of the site survey. However, the conditions in many large blocks are far from homogeneous, and, therefore, a site survey would deliver little understanding of the variability in geohazard conditions and processes that may have implications for the immediate safety of drilling.

The longevity of production operations now faced in a license or field has also been gradually extended through the implementation of improved-recovery techniques. BP’s Magnus field was discovered in the far north of the UK continental shelf in 1974. At the time of first oil in 1983, the projected field life was seen as being out to the mid-1990s. However, another phase of production drilling will be starting from the platform in 2015, and current projected field life is now seen out to the 2020s. However, the last high-resolution seismic data to have been acquired below the platform were acquired in 1984. Before restarting drilling, a prudent operator would ask the question, “What is the possibility that geohazard conditions may have changed over the last 30 years?” The prudent operator, therefore, needs to revisit geohazard risks and the validity of site-investigation data across the full life of the license, from entry to field abandonment, and to update geohazard understanding consistently across the whole time period.

Fig. 1

This paper, therefore, sets out an integrated approach to address management of geohazard risks across the life of a license (Fig. 1), an approach that seeks to consistently update understanding of what geohazards might be present, and, thus, where possible, seeks to avoid them directly or mitigate their presence.

License Entry
Upon entry to a new license area, existing seismic or published geoscience information upon which to build understanding of geohazard complexity may be sparse.

A consistent approach for the rapid evaluation of the potential degree of geohazard complexity before, or upon, entry to a new license area uses an evaluation of four fundamental geoscience attributes: evidence for the presence of shallow hydrocarbons, recent-deposition rate (over the last 1 million years), structural complexity, and underlying seismicity. A final attribute is the quality of the database available to review the area: The sparser or poorer the available data, the greater the interpretive uncertainty. Each of these five factors is scored with a consistent scoring mechanism, and the scores can be plotted on a pentagon diagram, where the greater the shaded area, the greater the fundamental level of underlying geohazard risk.
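The five-attribute pentagon lends itself to a simple numeric summary: once each attribute is scored on a common scale, the shaded area of the radar polygon can be computed directly. The sketch below illustrates this; the 0-to-5 scoring scale and the example scores are hypothetical assumptions, not values from the paper.

```python
import math

def radar_area(scores):
    """Area of the polygon formed by plotting each score along one of
    n equally spaced spokes (a radar/pentagon chart). Each triangle
    between adjacent spokes has area 0.5 * s_i * s_j * sin(2*pi/n)."""
    n = len(scores)
    wedge = math.sin(2 * math.pi / n) / 2
    return wedge * sum(scores[i] * scores[(i + 1) % n] for i in range(n))

# Hypothetical 0-5 scores for the five attributes: shallow hydrocarbons,
# recent-deposition rate, structural complexity, seismicity, data quality.
low_risk = [1, 1, 1, 1, 1]
high_risk = [4, 5, 3, 4, 5]

assert radar_area(high_risk) > radar_area(low_risk)
```

Because the area grows with the product of adjacent scores, a single high attribute contributes less than two high attributes together, which matches the intuition that compounding factors (e.g., shallow gas plus poor data) drive up the underlying geohazard risk.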

Geohazard Baseline Review
After initial fundamental evaluation of risk before or upon entry, it is normal to expect that licensewide exploration 3D data acquisition will be a first step to support the exploration effort—if this is not already in place.

Delivery of a geohazards or short-offset volume at this stage is a simple and effective byproduct. Indeed, in the case of wide-azimuth data acquisition, delivery of such a product may be a key intermediate quality-control output to delivery of the final product and may be of significantly greater value to the geohazards interpreter than the final volume used by the explorer.

Once processed, 3D data are available to produce a complete geohazards baseline review (GBR) of the delivered volume. Such baseline reviews need to be performed and communicated efficiently to the exploration team in a way that supports eventual prospect ranking and delivered early enough in the exploration cycle to affect choice of drilling location.

Production of a GBR provides the underlying framework for all later geohazard studies to be built and data requirements to be defined. The GBR, therefore, should be revisited and updated regularly.

Geohazard-Risk-Source Spreadsheet (GRSS)
A GRSS captures the individual sources of geohazards, the threat that each may pose to operations, and the effect of each on those operations. These feed a threefold semiquantitative evaluation of the interpretive confidence that a hazard is present, the likelihood of that geohazard event occurring, and the effect of that event, establishing an initial definition of the operational risk from each individual hazard source.
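A GRSS entry of this kind can be sketched as a simple record whose risk score combines the three evaluations. The 1-to-5 ordinal scales, the multiplicative scoring rule, and the example hazard below are hypothetical illustrations; the paper does not specify the actual GRSS format.

```python
# Sketch of a GRSS-style semiquantitative risk entry. The 1-5 ordinal
# scales and the example hazard are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class GeohazardSource:
    name: str
    confidence: int  # interpretive confidence the hazard is present (1-5)
    likelihood: int  # likelihood of the geohazard event occurring (1-5)
    impact: int      # effect of the event on operations (1-5)

    def risk_score(self) -> int:
        """Initial semiquantitative operational risk from this source."""
        return self.confidence * self.likelihood * self.impact

shallow_gas = GeohazardSource("shallow gas pocket",
                              confidence=4, likelihood=3, impact=5)
assert shallow_gas.risk_score() == 60
```

Scoring each entry this way lets the spreadsheet be sorted by initial risk, so later study effort can be directed at the highest-scoring hazard sources first.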

Exploratory Drilling
Once a prospect identified within the license is considered of sufficient value to commit to exploratory drilling, a location will need to be assessed for its safety for drilling.

Local regulatory requirements may establish specific constraints. Otherwise, the level of visible overburden complication may suggest, even in deep water, that site-specific high-resolution 3D-data acquisition is required to support either selection of a location clear of geohazards or accurate definition of the geohazards present to allow their mitigation in well design.

The key is that, outside of regulatory requirements, the operator, rather than applying a rote process to evaluation of a drilling location, should be designing a site-investigation program that specifically addresses the potential hazards faced at that location.

Appraisal: Toward Field Development
At this stage of the life cycle, direct operational experience of initial drilling activities should have been gathered and can be fed back directly into improving predictions of tophole appraisal drilling. Beyond this, however, the addition of potential location-specific site-investigation-survey data, combined with direct operational experiences from initial drilling, will allow a full revision of the GRSS contents. This review should focus on whether the GRSS contents either were too conservative or overlooked possible hazards sources.

Major-Project Delivery
At the onset of a field-development project, it is expected that all site-investigation-data needs have been met and plans have been put in place for data acquisition or that the data are already in hand. Ultimately, the different study strands defined in the project GRSS should be brought together into an integrated geological model.

Outputs from a completed integrated study allow proper risk avoidance in concept screening through choice of development layout, for example, or risk mitigation by engineering design.

Development-Project Execution Into Early Production
As a development project moves into the execute phase and the instigation of production drilling or facility installation, the refinement of geohazard understanding needs to continue.

Drilling requires the same screening as used for the exploratory-drilling phase. Experiences from drilling of the first wells from a location need to be captured either directly by presence of tophole witnesses on-site or indirectly by use of remote monitoring facilities. These experiences should be fed back into updated predictions of drilling conditions for ensuing project or production wells to allow appropriate and safe adjustment of drilling practices in accordance with actual conditions encountered. This process needs to be carried through the production phase after the initial development is complete. Variances should always be investigated and reconciled against pre-existing knowledge.

Drilling Renewal and Field Redevelopment
Before the restart of drilling or redevelopment operations, an operator should pause to capture previous operational lessons learned. Reviews of the ongoing integrity of the overburden should be held regularly throughout the life of the field, especially ahead of any engineering operations, and, as a result, the validity of overburden imagery should be considered regularly and carefully for renewal.

Abandonment
Ahead of the instigation of abandonment operations, a review of the potential for change in overburden, or geohazard, conditions should be undertaken. For a single suspended or partially abandoned subsea well, the period since the well was last worked over may have been considerable. The prudent operator will undertake a review of the original operation to understand the condition of the well. It is also prudent to undertake a simple survey of the seabed around the well to look for anomalies that may suggest a change in the integrity of conditions since temporary abandonment.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 173139, “Managing Marine Geohazard Risks Over the Full Business Cycle,” by Andrew W. Hill and Gareth A. Wood, BP America, prepared for the 2015 SPE/IADC Drilling Conference and Exhibition, London, 17–19 March. The paper has not been peer reviewed.

LULA Exercise Blends Surface and Subsea Responses to Simulated Deepwater Blowout

To test the improved blowout-response capabilities implemented following the Deepwater Horizon accident, Total organized and ran a large-scale exercise to check its ability to efficiently define, implement, and manage the response to a major oil spill resulting from a subsea blowout, including the mobilization of a new subsea-dispersant-injection (SSDI) device. After a year and a half of preparation, the exercise took place 13–15 November 2013.

Introduction
The oil-spill-response exercise, code-named LULA, considered a scenario in which a blowout at a water depth of 1,000 ft resulted in an uncontrolled release at 50,000 BOPD. The main objectives of the LULA exercise were

  • To mobilize all the emergency and crisis units in Luanda, Angola; offshore; and in Paris
  • To use all the techniques and technologies available to track an oil slick
  • To mobilize the SSDI kit from Norway to Angola and to deploy it close to the well
  • To deploy all the available oil-spill-response equipment of Total E&P Angola
  • To test the procurement of dispersant and the associated logistics
  • To test the onshore response, including coastal protection, onshore cleanup, oiled-wildlife management, and waste management

Subsea Response
During the Deepwater Horizon disaster, the injection of dispersant directly at the source of the oil leakage at seafloor level proved to be an effective technique. The technique required the deployment of an SSDI system.

After the Deepwater Horizon accident, Total was involved with a group of nine major oil and gas companies in the Subsea Well Response Project. As a result of the work of this group, two SSDI kits were manufactured and positioned in Stavanger. Total wanted to test its ability to mobilize and deploy the newly developed equipment in a timely manner, and Total E&P Angola was designated as responsible for the organization of the LULA exercise in collaboration with the Ministry of Petroleum of Angola. The SSDI kit, positioned in Norway, would be transported by air to Angola, sent offshore, and deployed.

The objective of dispersant spraying, at the surface or subsea directly at the wellhead, is to break down the oil slick or plume into microdroplets that can be degraded much more easily by microorganisms occurring naturally in the marine environment. Marine environments with a long history of natural oil seepage, such as Angolan waters, already host microorganisms well suited to biodegradation of hydrocarbons.

Fig. 1

The SSDI kit was loaded on a field support vessel (FSV) on 9–10 November 2013. The SSDI kit (Fig. 1) is composed of a coiled-tubing termination head (CTTH), a subsea dispersant manifold (SDM), dispersant-injection wands, and four hydraulic flying leads on racks (only three were mobilized).

The first step of the offshore operation, conducted by the FSV, consisted of installing the SSDI kit on the seabed and making the subsea connections of the various parts of the system by use of the vessel’s crane and a remotely operated vehicle (ROV).

The second step involved deploying the CTTH from the light-well-intervention vessel in open water by use of a coiled-tubing string.

The final step before starting to inject the dispersant was to connect the last hydraulic flying leads to the SSDI by use of two ROVs. Once the subsea layout of the SSDI kit was completed, the dispersant injection started at a low flow rate, set at 1/100 of the blowout rate.
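The starting injection rate implied by these figures is simple arithmetic. The sketch below (illustrative only; the barrel-to-cubic-metre factor is the standard conversion, not a number from the paper) works it out for the 50,000-BOPD scenario.

```python
# Initial subsea-dispersant-injection rate for the LULA scenario:
# injection starts at 1/100 of the blowout rate.
blowout_bopd = 50_000        # uncontrolled release rate in the exercise scenario
dispersant_ratio = 1 / 100   # low starting dose, per the exercise description

dispersant_bopd = blowout_bopd * dispersant_ratio
dispersant_m3_per_day = dispersant_bopd * 0.159  # 1 bbl is approximately 0.159 m3

print(dispersant_bopd)                  # 500.0 bbl of dispersant per day
print(round(dispersant_m3_per_day, 1))  # 79.5 m3 per day
```

Even this "low" starting rate amounts to roughly 80 m³ of dispersant per day, which is why the exercise also tested dispersant procurement and logistics.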

Surface Operations
For the LULA exercise, one of the main objectives was to test the mobilization and deployment of Total E&P Angola’s offshore oil-spill-response resources (e.g., dispersant spraying, containment, recovery) and the coordination of deployment of additional resources.

While the response to an instantaneous oil spill (e.g., a spill from a tanker following a collision) will involve deploying resources on a moving target (following drifting oil slicks), the strategy for the response to a blowout incident will focus primarily on the oil reaching the surface from the wellhead.

Fig. 2

The advantages of focusing on the surfacing oil include the fact that fresh oil can be dispersed more efficiently, whether by aircraft or by ships. If resources for containment and recovery are positioned adequately, the spreading of the oil will be limited, thus increasing the efficiency of such operations. The response invariably will involve the deployment of numerous response resources, all fighting for space. Therefore, it is critical to organize the operations by identifying areas dedicated to each component of the response (Fig. 2).

Although not fully implemented on-site during the exercise, the planning section of the emergency unit set the zoning of the response operations in cones and defined the following zones, starting from the well:

  • An exclusion zone: A no-go zone in the area of the surfacing oil, if needed, when volatile-organic-compound concentrations or other risks are too high to allow working safely
  • An area dedicated to the subsea response above and very close to the well (SSDI, capping of well, relief-well drilling)
  • Various areas for oil-spill response at the surface of the sea
    • Close to the area of the surfacing of the oil—dispersant spraying from ships and containment-and-recovery vessels
    • A second area dedicated to aerial application of dispersants
    • A third area for containment recovery of weathered scattered patches of oil
    • Coastal-area response (mainly recovery of patches of weathered oil coming close to the coast)

Shoreline Protection and Cleanup
Another major objective of the exercise was to mobilize and use simultaneously a variety of tools available to Total E&P Angola for monitoring and modeling oil slicks and to evaluate their scope of application and effectiveness. From an operational standpoint, the response efforts need to focus on the areas where the oil film is thickest within the rapidly spreading slicks. The effectiveness of the response relies extensively on the ability to guide and maintain the response resources on these thick oil patches.

The tools tested during the LULA exercise were used for tracking the oil slick and predicting its movement.

Soon after the release of crude oil into the sea, two drifting buoys were launched at the front edge of the oil slick. Their positions were tracked continuously by satellite and were visible online within 1 to 3 hours.

Helicopter surveys provide the greatest flexibility and the most-detailed information about the spread and behavior of oil slicks. Two helicopter flights took place during the LULA exercise. The survey reports were sent to the emergency units.

Fixed-wing aircraft were used to rapidly obtain an overall view of the oil slick. An airplane mobilized from Accra, Ghana, flew over the site on the second day of the exercise. It provided information about the oil slick in a report submitted to the emergency units.

On the basis of experience from a past incident, an observation balloon was developed. It was launched from a ship and used for the first 48 hours of the exercise. The balloon was tied to the boat approximately 150 m above sea level, and the camera fitted on it fed images (visible and infrared) to a station on the boat. The boat could then follow the oil slicks day and night, position the response vessels on the thickest parts of the slick, and start operations at sunrise.

Conclusion
The LULA exercise was conceived by the management of Total to test the capability of the company to initiate the response to a major deep-sea blowout. The exercise went far beyond the scope of classic large-scale exercises, including

  • 1.5 years of preparation
  • More than 500 people involved during the exercise and international experts mobilized in Angola
  • Mobilization from Norway and deployment of a newly designed SSDI system
  • Deployment of monitoring tools used on a controlled release of crude oil (e.g., observation balloon, observation aircraft mobilized from Ghana, satellite radar imagery)
  • Deployment of surface oil-spill-response resources from Total E&P Angola and from other oil operators in Angola
  • Mobilization of the emergency management organization of Total and Total E&P Angola and of the Angolan National Incident Command Center

The exercise highlighted the following main challenges and areas for improvement:

  • Responders and experts must be mobilized in-country to provide assistance not only for offshore operations but also for emergency management.
  • Sourcing, contracting, and mobilizing personnel, equipment, consumables, and logistical support must ensure sustainable and coordinated responses for a blowout situation, including subsea, surface, and onshore operations.
  • The emergency management organization of Total E&P Angola must interface with national authorities at strategic and tactical levels to facilitate the operations (e.g., involving customs, immigration, flights authorization, and links with local and provincial authorities).
  • Damage-assessment and -compensation mechanisms for affected communities and activities must be reinforced in case the oil comes ashore.
  • A comprehensive health, safety, and environment monitoring program must be set up during an incident to ensure safe response operating conditions (e.g., explosivity and volatile-organic-compound measurement of fresh surfacing oil), to assess the effectiveness of the response (e.g., efficiency of subsea and surface dispersant spraying), and to monitor the potential effects on the environment and its restoration.

LULA was a success. All the planned actions were carried out safely and effectively during the 3 days of exercise. Many lessons learned were identified and included in a set of recommendations that will help to improve Total’s capability to respond to a blowout situation. The findings of the exercise will also benefit the whole oil and gas industry, particularly companies operating in deepwater environments.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper IPTC 18215, “LULA Exercise: Testing the Oil-Spill Response to a Deep-Sea Blowout, With a Unique Combination of Surface and Subsea Response Techniques,” by C. Michel, L. Cazes, and C. Eygun, Total E&P Angola, and L. Page-Jones and J.-Y. Huet, OTRA, prepared for the 2014 International Petroleum Technology Conference, Kuala Lumpur, 10–12 December. The paper has not been peer reviewed. Copyright 2014 International Petroleum Technology Conference. Reproduced by permission.

Hydrogen Sulfide Measurement With Wireless Technology

Hydrogen sulfide represents a major hazard in oil and gas production, and the efficient and reliable detection of gas leaks is a critical safety aspect. Wireless-detection systems offer an opportunity to expand the measurement area. This paper reviews a specific application of wireless technology in gas detection and details the steps taken to assess the integrity of the wireless system and the considerations necessary to ensure the reliability and availability of the signal transmission.

Wireless-Sensor Networks (WSNs)
WSNs are an alternative to hard-wired systems where the cabling is replaced by radio-frequency (RF) transmission of the measured data into a host system. The network may be point-to-point or meshed transmission. Meshed transmission allows for multiple alternative routes and, therefore, offers potential improvements in the ability of the system to ensure that the data are delivered to the host system.

WSNs have been developed since 2003 on the basis of Institute of Electrical and Electronics Engineers (IEEE) Standard 802.15.4, which defines the operating frequency of 2.4 GHz and other aspects of the basic physical layer of communication. This standard is currently adopted by the process industry as the essential foundation for most wireless-measurement systems.

Subsequent to the definition of the physical layer for communication, the HART protocol, originally established by the HART Communication Foundation for serial data communication between cabled field devices, was extended to cover WSN technology. This WirelessHART technology was subsequently approved by the International Electrotechnical Commission (IEC) as IEC Standard 62591. A parallel development was undertaken by the International Society of Automation (ISA) in the US under the ISA100.11a standard in 2009. Each of these standards seeks to establish interoperability of equipment from different manufacturers, and it is important that this convergence be achieved to prevent development and adoption delays.

The development of battery technology is also an important aspect of WSNs. Significant advances in battery design, solar-cell charging, and energy harvesting are expected to play an active role in the future. In present systems, sophisticated software puts components to sleep and wakes them only when needed, to minimize power consumption. It is also imperative that monitoring of battery status be managed actively by the host system.

Reliability Considerations
Reliability may be defined as the ability of a system or component to perform its required function under stated conditions.

Fig. 1

The quantitative analysis of reliability is a well-established practice for point-to-point systems. One methodology useful for visualization is a decision tree. A simple example for a system whose top event is the loss of either of two signals is provided in Fig. 1. For purposes of demonstration, the reliability of each receiver is taken as 0.9, and the reliabilities of the two transmitter/sensor combinations as 0.85 and 0.8. The reliability of the wireless transmission is assumed to be 1.0 (direct line of sight over a short distance). The result of the decision-tree analysis is an overall reliability of 0.55.

Fig. 2

Wireless systems using multicasting provide an alternative communication route by enabling a failed receiver to be bypassed (Fig. 2). It is clear from the decision-tree analysis that multicasting in this case provides a significant improvement in reliability, with the probability of successful measurement of both inputs improving for this example from 0.55 to 0.67. It is also clear from these examples that the complexity of the decision tree increases significantly as the number of alternative routes increases.
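The two reliability figures quoted above follow from elementary series/parallel probability, assuming independent component failures as in the decision-tree example. The short sketch below reproduces them from the stated component reliabilities; it is a check on the arithmetic, not the paper's decision-tree analysis itself.

```python
# Reliability check for the two-signal example (independent failures assumed).
R_RX = 0.9          # reliability of each receiver
R_TX = (0.85, 0.8)  # reliabilities of the two transmitter/sensor combinations

# Point-to-point: each signal needs its own transmitter AND its own receiver,
# and the top event requires both signals, so all four blocks are in series.
point_to_point = R_TX[0] * R_RX * R_TX[1] * R_RX
print(round(point_to_point, 2))  # 0.55

# Multicasting: either receiver can accept either signal, so the two receivers
# act as a parallel (1-out-of-2) block in front of the two transmitters.
receivers_parallel = 1 - (1 - R_RX) ** 2
multicast = R_TX[0] * R_TX[1] * receivers_parallel
print(round(multicast, 2))  # 0.67
```

The improvement comes entirely from the receiver redundancy: the parallel pair has reliability 0.99 instead of 0.81 for the series pair.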

Extrapolating the decision-tree approach to include the wireless transmission in larger mesh systems (e.g., 2,000 points) introduces the problem of estimating reliability influenced by many factors, some of which are interdependent. These include the effect in mesh systems of the signal consolidation from many reflections at the receiver in addition to line of sight, the natural tendency of an RF signal to spread over a radial distance, and the limitations of statistical assumptions in the probability of reflection.

Accordingly, for large wireless mesh systems, decision trees and other conventional point-to-point methods are difficult to apply; they simply become too large. As a result, the mathematical development of modeling techniques for these types of multiple information flows has received significant attention in recent years, driven not only by reliability considerations but also by the need to identify the shortest routes to limit investment costs on large-scale communication systems and to identify limitations on the capacity of isolated sections of the network. Graph theory represents a suitable method for analysis of networks with multiple routes, but, again, solutions require complex extended algorithms and are difficult to visualize.
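Where exact enumeration becomes unwieldy, one practical graph-based alternative is Monte Carlo simulation over the connectivity graph: sample which links survive a trial, then test whether the sensor can still reach the gateway. The sketch below uses a small hypothetical five-node mesh with made-up link reliabilities; none of these numbers come from the paper.

```python
import random

# Hypothetical 5-node mesh: node 0 is the sensor, node 4 the gateway/host.
# Each undirected link is up independently with the probability given.
links = {(0, 1): 0.9, (0, 2): 0.9, (1, 3): 0.85,
         (2, 3): 0.85, (1, 4): 0.8, (3, 4): 0.9}

def connected(up_links, src=0, dst=4):
    """Graph search over the links that survived this trial."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        for a, b in up_links:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return dst in seen

def estimate_reliability(trials=20_000, seed=1):
    """Fraction of random-failure trials in which sensor and gateway stay connected."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        up = [link for link, p in links.items() if random.random() < p]
        ok += connected(up)
    return ok / trials

print(f"estimated sensor-to-gateway reliability: {estimate_reliability():.3f}")
```

The same loop applies unchanged to a 2,000-point network; the trial count trades runtime for precision, and the graph search dominates the cost. What it cannot capture, as the next paragraphs note, is an RF environment that departs from the assumed independent link probabilities.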

Many of these approaches to analysis concentrate on component reliability for the equipment (e.g., transmitters, receivers, batteries, sensors) and on generalized assumptions regarding the performance of the mesh design.

The sensitivity of reliability for a wireless network, however, is dominated by the RF environment, rather than component reliability. The assumption that the system will comply with standardized probability functions in particular may be ambitious, and specific planning of the network, testing of the network, and maintenance of the RF environment are imperative to ensure that the system will continue to work properly.

The Test Installation
The application reviewed in this paper was located at a fire training ground at Asab in the United Arab Emirates. A number of fire scenarios can be simulated, including gas leaks at flanges and tank fires.

Fig. 3

Safety at the training ground is focused on leak prevention; however, secondary risk mitigation is provided by gas detection. Gas detection is normally hardwired, and systems are available for detection of hydrogen sulfide and hydrocarbons. In addition to this hard-wired gas detection, a supplementary wireless gas-detection installation was put in place and investigations were conducted related to wireless aspects of the installation. The system consists of four gas detectors (hydrogen sulfide and hydrocarbon) transmitting to a receiver that converts the signals to the plant operator interface (Fig. 3). The system also has local alarm stations capable of receiving alarms from the various detectors.

The wireless system tested transmits at 2.4 GHz on the basis of the IEEE 802.15.4 standard and uses direct-sequence-spread-spectrum (DSSS) technology, which combines the transmitted signal with a broader spectrum of frequencies.

The transmitter power is limited to 100 mW to enable compliance with European and local statutory requirements for avoidance of interference with existing wireless facilities. The transmission of the signal is limited by reflections and spreading (i.e., the effect of radiating in a circular pattern). For the tests, a gas detector and transmitter were placed in a vehicle and driven away from the receiver over an area of 2-km radius and signal-transmission status was monitored to determine the extent of coverage. At various points within this area, gas detectors were tested with gas samples to ensure that full functionality was maintained.
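To put the 100-mW limit and the kilometre-scale test distances in perspective, the textbook free-space path-loss formula, FSPL(dB) = 20 log10(d) + 20 log10(f) − 147.55, gives a first-order link budget. This is an illustrative calculation, not one from the paper; real coverage is reduced by obstructions and terrain, as the tests show.

```python
import math

def fspl_db(distance_m, freq_hz=2.4e9):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

tx_dbm = 10 * math.log10(100)  # 100-mW transmitter is +20 dBm

# Normal operating distance (150 m) and the observed range limits (0.4-1.6 km).
for d in (150, 400, 1600):
    rx_dbm = tx_dbm - fspl_db(d)  # ignores antenna gains and all obstructions
    print(f"{d:>5} m: path loss {fspl_db(d):6.1f} dB, received {rx_dbm:6.1f} dBm")
```

Even at 1.6 km the free-space received level (about −84 dBm) is within reach of typical 802.15.4 receiver sensitivities (on the order of −85 to −95 dBm), which is consistent with the line-of-sight results reported; terrain and structures account for the loss of communication at the low end of the observed range.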

Test Results
The detectors normally operate at a distance of 150 m from the monitoring-system receiving antenna. For these tests, a gas detector with a battery power source and a wireless transmitter were transported around the area of the plant in various directions, and the distance from the receiving antenna was increased until communication with the host system was lost. As can be expected, the transmission is influenced significantly by the topography of the land and by building and process-equipment obstructions. The successful transmission distance varied over a range of 0.4–1.6 km. The analysis also shows that, whereas direct line of sight is optimal for transmission, it was possible to maintain coverage with transmission through structures or using reflection.

For the installation reviewed here, further field tests were conducted to determine the practical robustness of the system in resisting other sources of RF interference from various potential sources.

Conclusions

  • The technology applied in wireless systems in this application appears to be very effective in preventing typical sources of interference with process plants from affecting measurement reliability.
  • The use of hopping with mesh networks effectively extends the possible coverage, within the typical national statutory limits of 100 mW for transmission power.
  • The reliability of equipment may be considered to incorporate hardware and software, which includes the battery, sensor, transmitter, and receiver. This equipment reliability is, to an extent, deterministic and can be managed effectively. The transmission quality of the RF signal, however, is heavily dependent on the application (e.g., location, obstructions, topography) and less easily modeled in reliability assessment.
  • The reliability of the system transmission quality cannot be modeled easily with conventional point-to-point approaches, and the systems may not, in practice, be represented accurately by statistical models. As a result, it is necessary to manage the RF environment actively to support wireless-network systems.
  • Mesh designs that enable local alarm activation without depending on the remote monitoring facility offer particular advantages for gas detection by reducing the difficulty in managing a widespread RF environment while achieving the primary objective of announcing the hazard directly to personnel who may be at risk near the leak source.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 171720, “Hydrogen Sulfide Measurement Using Wireless Technology,” by P. Phelan, A.-R. Shames Khouri, and H.A. Wahed, Abu Dhabi Gas Industries, prepared for the 2014 Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, 10–13 November. The paper has not been peer reviewed.

Coral Relocation Mitigates Habitat Effects From Pipeline Construction Offshore Qatar

The Barzan Gas Project is a critical program to deliver natural gas to Qatar’s future industries. The project was expected to affect shallow coral communities during pipeline construction from Qatar’s North field to onshore. To partially meet the state’s environmental clearance for the project while supporting the state’s national vision, RasGas developed a project-specific coral-management, -relocation, and -monitoring plan that incorporated proven methodologies to relocate at-risk coral colonies to a suitable location.

Introduction
In addition to natural-gas reserves, coral-reef communities are regarded as a significant and highly productive natural resource in Qatar, providing refuge and nursery areas for many commercially important fish and shellfish species during portions of their life cycle. Corals off the coast of Qatar grow in one of the more thermally stressed environments in the world. Elevated sea temperature and other coastal pressures such as overfishing, port development, and construction have led to a decrease in local coral-reef communities. Recognizing the importance of these habitats, Qatar included measures in the Qatar National Development Strategy 2011–16 calling for the protection, conservation, and sustainable management of marine and coastal habitats and associated biodiversity.

Fig. 1

The RasGas Barzan Gas Project off eastern Qatar (Fig. 1) is a critical program for the state, delivering natural gas from Qatar’s North field to the onshore processing plant through export pipelines. As part of the construction phase, the Barzan project was expected to affect shallow coral communities through the direct physical removal of coral colonies from trenching activities and through sedimentation and a general deterioration of the habitat immediately adjacent to the trench.

To partially meet the state’s environmental clearance conditions for the project, RasGas developed a project-specific coral-management, -relocation, and -monitoring plan that incorporated proven methodologies to relocate at-risk coral colonies to a suitable location away from both present and future development to minimize potential harm.

Benthic Environmental Survey
In order to document the status of environmentally sensitive resources within the pipeline corridor and delineate coral and seagrass habitat, a benthic environmental survey was conducted along two predetermined parallel transects within the pipeline corridor from the shoreline (pipeline landfall) to 2 km offshore. Following the habitat delineation, quantitative data were also collected to estimate the number and species of corals within each habitat type.

Survey results showed there were four distinct areas of hard-coral habitat, differentiated by substrate type (e.g., sand, hard bottom) and coral density. By use of the areas of the four characterized hard-coral-habitat types and the estimated coral densities, it was determined that approximately 40,000 coral colonies with a diameter >10 cm were present within the hard-coral-habitat impact footprint.

Hard-Coral Recipient-Site Selection
In order to identify an acceptable recipient site for the hard-coral colonies to be relocated from the pipeline corridor, several areas offshore northeast of Qatar were surveyed to assess their suitability for reattaching hard-coral colonies. Sites were selected primarily on the basis of depth and of location outside areas of potential future pipeline construction, through a review of environmental-sensitivity maps provided by the Qatar Ministry of Environment and of satellite imagery along the northeast coast. Site surveys were conducted at 21 sites within two larger areas to assess their suitability on the basis of substrate type and topographic relief, dominant biota, coral presence/absence, and urchin presence/absence. Where hard corals were present, coral coverage was assessed qualitatively. An additional eight sites were assessed within an area closer to the project site for the potential deployment of limestone boulders to act as substrate for reattachment if a suitable natural-substrate site was not identified.

A review of the survey data indicated that only two natural hard-bottom sites were suitable, with the exception of water depth, which was shallower (<2 m) than the original depth of the corals to be relocated (7–8 m), potentially exposing the corals to extreme thermal change. Because of the lack of available hard substrate, it was recommended that native quarried limestone boulders of composition similar to that of the natural substrate be used to create exposed hard-bottom habitat.

Coral-Relocation Program
Approximately 550 limestone boulders, each nearly 1 m in diameter, were power washed to remove excessive sediment, transported from Ras Laffan, Qatar, and deployed into a predetermined recipient site that had been deemed suitable for habitat creation because of its proximity to a healthy reef, water depth, and distance from Ras Laffan. The relatively shallow sand veneer (≤11 cm) overlying a hard-bottom substrate indicated no risk of subsidence.

The rocks were deployed off the side deck of a barge, allowing for varying densities of rock patches and a configuration that would mimic naturally occurring rocky outcrops. The newly created habitats not only provided a suitable substrate for the reattachment of hard-coral colonies but also provided vertical and horizontal subsurfaces, interstitial spaces, crevices, and voids, creating a complex habitat for a wide range of other marine life.

Corals were removed from the areas of highest coral density within the pipeline corridor by divers using hammers and chisels to separate the coral from its substrate and lift it intact to the extent possible. Corals were transported carefully to the recipient site onboard a survey vessel and were temporarily cached in metal trays on the seabed directly adjacent to the boulders until they were ready for reattachment.

Monitoring of Relocated Hard Corals
In order to assess the relative success of the Barzan coral relocation, a monitoring program was designed to permit the detection of and response to significant changes in habitat and community structure because of external disturbances (e.g., thermal extremes). Monitoring surveys will be conducted twice yearly for a minimum of 5 years to

  • Evaluate the attachment status (presence/absence) of reattached hard corals
  • Evaluate relative health of reattached hard corals
  • Assess habitat features to evaluate temporal ecological trends
  • Conduct water-quality monitoring twice yearly
  • Acquire and log on-site-temperature data

Summary and Conclusions
In 2012, more than 1,600 hard-coral colonies were relocated into a newly created habitat of limestone boulders because of the lack of hard bottom. Baseline monitoring of the relocated corals was conducted 3 months post-relocation. Monitoring-survey results showed that the relocated corals exhibited health comparable to that of the reference communities and exhibited comparable signs of stress. Future monitoring surveys conducted twice yearly for a minimum of 5 years will provide data to evaluate the overall success of the project and for comparison with other coral-relocation projects in the region.

This paper presents the composite monitoring results from Surveys II (January 2013), III (July 2013), and IV (January 2014), which were assessed for reattached-colony bonding status, colony health, benthic characterization, reef-fish assemblage, sediment accumulation, sea-urchin density, and water-column data.

Reattached-Coral-Colony Bonding Status. The substrate-augmentation approach with quarried limestone boulders is deemed to be successful, with fewer coral-colony detachments at the reattachment site than reported during previous monitoring surveys.

Coral-Colony-Health Assessment. The number of coral colonies with more than 10% of the coral tissue affected by one or more conditions decreased at the reattachment and shallow reference sites from Survey III to Survey IV, indicating increased overall health at these sites.

Benthic Characterization. Low-profile filamentous benthic algae continued to account for the greatest benthic cover within the reattachment site. The algal cover increased not only on the limestone boulders but also on the surfaces of the coral colonies, resulting in a decrease in the percentage of coral tissue and an increase in the coral-health stress ranking.

Fig. 2

Reef-Fish Assemblage. Although the number of reef-fish observations decreased during Survey III compared with Survey II, it increased in Survey IV to the highest level of the monitoring period; the number of fish species, however, stayed the same for the last two surveys. The assemblage composition recorded during Survey IV was more similar to those of Surveys II and III than to that of Survey I. An analysis revealed that the differences were attributable to increased numbers of dory snappers, yellowfin seabream, and Persian cardinalfish recorded during the latter surveys relative to pearly goatfish, a numerical dominant during Survey I. Although not observed in high abundance during the first three surveys, the yellowstripe scad was recorded in high abundance during Survey IV. The Persian cardinalfish has continued to be an abundant member of the assemblage since Survey I. Overall, the assemblage was generally typical of the geographic region and habitat (Fig. 2).

Sea-Urchin Density. With the increase of algal cover, the presence of sea urchins may provide a means to reduce competition for space between the coral recruits and algae. It has been encouraging to observe an increased presence of sea urchins during Survey III compared with Survey II because these herbivores contribute positively to the dynamics of coral recruitment rates and potential survivorship in the reattachment site.

Water-Column Data. Sediment accumulation on and around the boulders has been negligible during Surveys II through IV, validating the selection of the coral-reattachment site. The hydrographic water-column profile data have been as expected in this portion of the Arabian Gulf, with anticipated temporal changes from seasonal fluctuations.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 170359, “Coral Relocation as Habitat Mitigation for Impacts From the Barzan Gas Project Pipeline Construction Offshore Eastern Qatar: Survey IV Update,” by Kaushik Deb, RasGas, and Anne McCarthy, CSA Ocean Sciences, prepared for the 2014 SPE Middle East Health, Safety, Environment, and Sustainable Development Conference and Exhibition, Doha, Qatar, 22–24 September. The paper has not been peer reviewed.

20 Aug 2015

Building the Foundations of Process Safety in Design

The maximum level of process-safety performance an operational asset can attain is set indelibly in the early stages of a project. This is why it is crucial to lay a solid foundation for process safety during design. Before embarking on front-end engineering design (FEED) of the Al Karaana petrochemical project, a behavior-based process-safety program was created with the aim of entrenching process safety in the hearts and minds of design engineers and positively influencing behaviors. The nine foundations of projects process safety form the cornerstone of the program.

Dangers Concealed in Design
Over the past several decades, industry attentiveness to asset integrity and process safety has progressed. The advancement of industry codes and standards and governmental regulations improving the prescriptive minimum requirements in facility design has served to elevate the general level of asset process-safety performance.

On an industrywide basis, during the 20th century, process-safety improvement was driven initially by fundamental regulations and subsequently by technology, management-system implementation, and more-refined codes and regulations.

However, little comfort can be taken from this general improvement; the continued occurrence of catastrophic incidents is a stark reminder that further strides need to be made.

What should give engineers and designers pause for thought is that many of these catastrophic incidents in the operational phase have root causes in design and could have been avoided with proper attention.

The contribution of design errors is significant: statistical analysis of accidents reported to the EU Major Accident Reporting System shows that design inadequacies are present in approximately 70% of them.

Fig. 1

The company has compiled data from recent capital projects on items or issues that could have adverse effects on safety, production, quality, or costs and could subsequently affect successful commissioning, startup, and first-cycle operation of new facilities. Of these flaws experienced in projects, it has been suggested that nearly half have their origins in design stages (Fig. 1). These have resulted in, among other consequences, process-safety incidents that could have been avoided with effective consideration in design.

A minor error or oversight in a design document can often be the source of a future process-safety incident. If the root cause of most incidents can be traced back to design, it is evident that a focus on the activities that take place in an engineering design office is imperative for delivery of a safely performing asset.

Project Method—Process-Safety-Behavior Program in Design
Building a strong process-safety culture in the project demands full cooperation from both the company and the contractors. This staunch commitment drives the effectiveness of all subsequent activities in FEED. The program requires leadership commitment and visibility; senior managers and team leads not only drive key messages through various workshops and activities but also are supportive and listen to feedback that comes from the work floor. The Al Karaana FEED process-safety-behavior program began with embedding this culture among project leadership and disseminating it within the entire project team.

The overall program for FEED contained elements of impactful culture building (the heart) as well as practical tools and techniques (the mind). To reinforce the key messages systematically, nine foundations of projects process safety were rolled out and interwoven into the fabric of all behavioral process-safety elements and activities.

To establish the tone of the program and introduce the nine foundations early in design, core activity workshops were carefully planned and initiated in combined contractor/company sessions at the outset of FEED. Supporting these discrete workshop events were various other program components, which built on the common theme of the nine foundations of projects process safety and continued throughout the duration of FEED.

These components were built off the structure applied at worksite behavioral-safety programs but were tailored to meet the needs of work performed in an engineering office and received by a design audience.

Nine Foundations of Projects Process Safety
Before FEED commenced, several health, safety, security, and environment managers investigated project-process-safety-audit findings and assessed incidents by linking root causes to inadequate design. The findings revealed common weaknesses and identified focus areas for process safety in design. A conceptual campaign was initiated that began as rules to stop leaks but evolved into the wider nine foundations of projects process safety.

To communicate effectively on process safety in the Al Karaana process-safety-behavior program, the project sought to simplify key messages, make it easier for project team members to relate to process safety, and help them to better understand how it relates to their work. The numerous concepts and detailed mantras that often constitute asset-integrity and process-safety-management packages were distilled into a set of simple, translatable tenets that address common weaknesses.

The result of this effort is called the nine foundations of projects process safety. They are

  • Process-safety leadership
  • Identify and assess risks
  • Identify and specify barriers
  • Standards and procedures
  • Quality
  • Right people in place
  • Manage change
  • Reviews
  • Action closeout

The nine foundations are meant for project leaders as well as front-line engineers and for company and contractor staff alike. They are applicable during FEED but also during all project phases, with varying emphasis.

The nine foundations are not new concepts; they simply distill process safety into effective key messages. These simple messages enable focus on changing behaviors in key areas of weakness typically experienced in projects, initially with design engineers during FEED but also in subsequent project phases. In an easily digestible format, the nine foundations of projects process safety create a consistent message and focus points so that the value of process safety in design is sustained throughout the team.

Each element within the nine foundations is important; missing the mark on any of them during design will compromise facility integrity.

The effective implementation of workshops and supporting activities of the FEED process-safety-behavior program hinged on making the nine foundations part of everyday individual belief. They were a readily accepted rallying point to which engineers could easily relate and against which they willingly held themselves and one another accountable.

Early in FEED, members of all levels from both company and contractor fully adopted each of the nine foundations, enabling engineers to champion process-safety causes with freedom and confidence.

Reflections on the Success of the Program
The ultimate assessment of the effectiveness of the process-safety-behavior program in design can be made only at the end of asset life looking back at its safety performance. However, it is possible now to reflect on the degree to which the program translated into positive behavioral shifts around two central aspects:

  • Did the resulting design confidently demonstrate mitigation of perceived threats to robust process safety in design?
  • Was the way of working during FEED noticeably different with respect to process safety in design?

The following evidence helps draw conclusions on questions related to these aspects:

  • Despite the fixed schedule with penalties for delay, key process-safety reviews were not allowed to commence (with full management support) until all preparatory activities had been completed in accordance with the terms of reference, even though it may have been possible to progress in parallel with final data gathering. Robust reviews constitute one of the nine foundations.
  • The preliminary design of a filter press in one of the derivative units met all required standards and current operational practices. However, one of the engineers in a remote design office was not comfortable with the level of residual exposure to toxic materials and spoke up, saying, “We can do better.” He proposed an alternative solution that was evaluated and adopted. Feedback from management was that this intervention was “not from the usual location and not from the usual level of the organization,” indicating a new sense of empowerment within all team members. The intervention demonstrated a real care for the future operator.
  • The fuel/flare-system design and the initial utility-steam-system design both met project requirements and standards. However, both were challenged proactively by engineers in the spirit of “What more can we do?” The two systems were re-engineered, significantly reducing complexity. Engineers were able to remove several higher grades of steam, thereby reducing operating temperatures in many parts of the plant and also the likelihood of trips. The fuel/flare integration improvement proposal was adopted, resulting in a more-straightforward process design that will yield benefits in safer operability.
  • The status quo was challenged. An engineer identified unclear language in a design standard that could have led to process-safety consequences if interpreted incorrectly. Another engineer, reviewing standard data sheets used on previous projects, was concerned that, if additional service specifications were not included, oversights in equipment procurement could be introduced. These interventions were in the spirit of the nine foundations, and the resulting revisions will reduce the likelihood of incorrect equipment ending up at the site.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 170442, “Building the Foundations of Process Safety in Design,” by James Jessup, SPE, Julian Barlow, Damian Peake, and Manoj Pillai, Shell Global Solutions, prepared for the 2014 SPE Middle East Health, Safety, Environment, and Sustainable Development Conference and Exhibition, Doha, Qatar, 22–24 September. The paper has not been peer reviewed.

20 Aug 2015

Airborne Remote-Sensing Technologies Detect, Quantify Hydrocarbon Releases

Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote-sensing technique used to generate 2D maps of surface properties over large areas. Two recent applications are particularly relevant to the needs of the oil and gas sector and government: quantification of surficial hydrocarbon thickness in aquatic environments and mapping atmospheric greenhouse-gas components. These techniques provide valuable capabilities for monitoring petroleum seepage and for detection and quantification of fugitive emissions.

Introduction
The Jet Propulsion Laboratory (JPL), a National Aeronautics and Space Administration (NASA) federally funded research-and-development center operated by the California Institute of Technology, has been a pioneer in optical remote sensing since the 1980s. JPL capabilities include expertise across all project phases, including sensor design and construction, airborne experiment execution, and data generation driven by science and customer needs. JPL has particular expertise in imaging spectroscopy, a passive method to interrogate objects or surfaces without physical contact. Such remote sensing has traditionally been applied to investigation of surface composition in terrestrial environments. These surface compositions are characterized by use of a spectral library that includes the surface-reflectance or emissivity fingerprints of constituent materials. Airborne imaging spectrometers provide a powerful method to survey wide spatial extents with high-performance surface characterization because of the wide contiguous spectral range at moderate spectral resolution. Novel quantitative methods have emerged recently for both atmospheric gases and surficial oil on water.

Imaging-Spectrometer Parameters

Fig. 1

Airborne pushbroom imaging spectrometers incorporate a 2D focal-plane array to collect data over a wide swath beneath the aircraft by use of a nadir-mounted sensor (Fig. 1). The areal coverage and spatial resolution depend on the sensor design characteristics and altitude. The crosstrack sensor characteristics include the sensor field of view (FOV), which determines the swath coverage as a function of altitude, and the instantaneous FOV (IFOV), which defines the across-track resolution, or pixel size as projected on the ground. A simple geometric relationship links these sensor characteristics to the crosstrack performance parameters.
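This geometry can be sketched in a few lines of Python. The flight parameters used below (34° FOV, 1.0-mrad IFOV, 1400 m above ground level) are illustrative assumptions, not values taken from the paper.

```python
import math

def swath_width_m(altitude_m: float, fov_deg: float) -> float:
    """Across-track ground swath covered by the full field of view."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def pixel_size_m(altitude_m: float, ifov_mrad: float) -> float:
    """Across-track ground pixel size (small-angle approximation)."""
    return altitude_m * ifov_mrad * 1.0e-3

# Illustrative (assumed) values: 34-deg FOV, 1.0-mrad IFOV, 1400 m AGL.
swath = swath_width_m(1400, 34)    # ground swath in meters (~0.86 km)
pixel = pixel_size_m(1400, 1.0)    # ground pixel in meters (~1.4 m)
```

Doubling the altitude roughly doubles both the swath and the pixel size, which is the coverage-versus-resolution trade-off the surveys described here must balance.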

Contemporary JPL pushbroom airborne imaging spectrometers include two major types: Offner and Dyson spectrometers. Offner spectrometers operate by collecting light through a narrow optical slit and, by use of a dispersive grating and multiple mirrors, focusing light onto the focal-plane array (FPA) with high spectral uniformity. Thus, during flight, pushbroom sensors simultaneously image pixels beneath the aircraft across the entire sensor swath width. The FPA images discrete spectral channels across the entire contiguous spectral range while crosstrack spatial information is captured across the second axis. Pushbroom approaches eliminate any moving optical subsystems by implementing a fixed optical train. In order to optimize sensor performance with respect to the signal/noise ratio, it helps to fly slowly with these systems (80–100 knots) to enhance oversampling. The second type of spectrometer in which JPL specializes is the Dyson spectrometer. The main difference in Dyson-spectrometer designs compared with Offner types is that the dispersion is accomplished by an arsenic-doped silicon block. These Dyson designs often result in a smaller form factor, particularly in the thermal infrared region of the spectrum, while still maintaining excellent spectral uniformity.

Application 1—Imaging-Spectrometer Applications for Investigation of Oil on Water
The Deepwater Horizon oil spill began on 20 April 2010. One of the NASA remote-sensing instruments was deployed less than a month later: the airborne visible/infrared imaging spectrometer (AVIRIS). The surveys were conducted from high altitudes (approximately 20 km) to maximize spatial coverage (i.e., 12.2-km swath width).

The results from these experiments revealed the suitability of optical remote sensing for oil-slick assessment in the visible (0.4–0.7 μm), near-infrared (1.2–1.7 μm), and shortwave infrared regions (2.3 μm). It was demonstrated through correlation with laboratory measurements that the depth of the 1.2-μm hydrocarbon absorption feature provided quantitative oil-thickness information.
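A common way to extract such a feature depth is continuum removal: fit a straight-line continuum across the absorption feature and measure how far the reflectance dips below it at the feature center. The sketch below illustrates the idea on a synthetic spectrum; the shoulder wavelengths (1.10 and 1.30 μm) and the Gaussian dip are assumptions for illustration, not the actual calibration, which in the paper came from correlation with laboratory measurements.

```python
import numpy as np

def band_depth(wl_um, reflectance, left, center, right):
    """Continuum-removed absorption depth at `center`: a straight-line
    continuum is drawn between the two shoulder wavelengths, and the
    depth is 1 - R(center) / R_continuum(center)."""
    r_left = np.interp(left, wl_um, reflectance)
    r_right = np.interp(right, wl_um, reflectance)
    r_cont = np.interp(center, [left, right], [r_left, r_right])
    return 1.0 - np.interp(center, wl_um, reflectance) / r_cont

# Synthetic spectrum: flat background with a Gaussian dip near 1.2 um.
wl = np.linspace(1.0, 1.4, 201)
refl = 0.5 - 0.1 * np.exp(-((wl - 1.2) / 0.02) ** 2)
depth = band_depth(wl, refl, left=1.10, center=1.20, right=1.30)
# A lab-derived lookup (hypothetical here) would map `depth` to oil thickness.
```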

The collection of these Deepwater Horizon data was the first time that optical imaging spectrometry demonstrated quantitative capability for oil-slick-thickness determination. Thus, the suitability of this technique for disaster response and estimates of net surface oil has been recognized.

Application 2—Imaging-Spectrometer Applications for Remote Sensing of Atmospheric Methane
Contemporary demonstrations of advanced NASA airborne imaging spectrometers for detection of fugitive methane emissions yield impressive results. These imaging techniques use sensors with wide spectral ranges in the visible to shortwave infrared (VSWIR) or the long wave infrared (LWIR). The NASA sensors offer much greater signal/noise ratios and greater spectral resolution than the few imaging spectrometers available commercially. Thus, these JPL applications reap the benefits of the most advanced imaging spectrometers in the VSWIR and LWIR regions that have been built. JPL and colleagues have begun flights over conventional oil fields and unconventional production areas to help constrain natural and anthropogenic methane emissions, including quantification of fugitive-emission sources by use of highly mature algorithms. These airborne spectrometers have demonstrated sensitivities at flux rates below 250 scf/hr when flown at low altitudes (approximately 1000 m) using VSWIR or LWIR sensors. These results were demonstrated with existing NASA spectrometers that were not designed specifically for methane detection.

Imaging spectrometers provide a unique solution for noninvasive investigation of large areas. The feasible spatial coverage for a daily survey at low altitude is on the order of hundreds of square kilometers (flight-plan dependent) while flying at relatively low altitudes (1–3 km).

One obstacle to wide adoption of imaging spectroscopy is that production of data products is typically labor intensive, resulting in significant delay in results because of the vast amount of data generated by these imaging spectrometers. One solution is to implement real-time algorithms as part of an onboard flight data system. A real-time detection system for methane point-source visualization currently exists as part of the AVIRIS onboard data system. This implementation provides real-time data analysis during collection and allows for an adaptive flight-planning approach using the heads-up display.

Imaging Spectroscopy in the Shortwave Infrared (SWIR) Using AVIRIS
The JPL next-generation AVIRIS is a passive imaging spectrometer that operates by collecting the upwelling (reflected) solar radiation in discrete bands across the range of the visible (0.4 μm) through the shortwave infrared (2.5 μm). Using this technique, characteristics of surface features can be diagnosed by use of the detected spectral signatures or fingerprints. AVIRIS provides high spectral resolution for a visible/infrared imaging spectrometer (5-nm bandwidth), exceeding that of other flight systems by at least a factor of two. Increased spectral resolution allows for more-detailed discrimination between surface features.
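As a rough check, the quoted range and sampling imply on the order of 420 contiguous spectral channels; this is an approximation, since the actual channel count depends on the sensor's exact band layout.

```python
# Approximate channel count implied by the quoted spectral range
# (0.4-2.5 um) and 5-nm sampling; illustrative only.
range_nm = (2.5 - 0.4) * 1000.0
channels = round(range_nm / 5.0)
print(channels)  # prints 420
```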

In September 2014, six AVIRIS scenes were acquired over Garfield County, Colorado, a region with considerable gas and oil extraction. Flights were made approximately 1.4 km above ground level, which resulted in images approximately 0.8 km wide and 8 km long, with a ground resolution of 1.3 m per pixel. Quantitative methane retrievals were performed on all images, and a number of plumes were clearly visible emanating from multiple well pads.

Fig. 2

Fig. 2 clearly indicates a plume consistent with the local wind direction (white arrow) that extends 200 m downwind of the emission source. Google Earth imagery obtained from June 2014 indicates that the likely source is tanks located on the edge of the well pad. Five wells are located at the center of this well pad, and all use horizontal drilling to produce mostly gas.

Conclusions and Path Forward
The results demonstrate the utility of existing advanced NASA imaging spectrometers for detection of oil on water and quantitative mapping of methane plumes. While existing data sets for both applications are currently quite small, future opportunities to demonstrate these capabilities further are a high priority for the program.

The optimal solution for wide adoption of methane monitoring is to build an imaging spectrometer sensor fit for purpose. None of the technologies used was designed specifically for quantitative methane detection; however, ­sensitivities in the range of 250 scf/hr remain impressive. A new sensor would improve the achievable sensitivity (<10 scf/hr) and increase specificity for small point-source emissions sources. This is the optimal solution from a science perspective to help understand the spatio-temporal variability of natural and anthropogenic methane emissions. The major improvements of this spectrometer design include a narrower spectral range with enhanced spectral resolution. These factors will increase the sensitivity, specificity, and spatial resolution, while virtually eliminating any false positives. This sensor has been designed to be accommodated on a fixed-wing aircraft or helicopter for more-flexible flight implementation.

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper OTC 25984, “Crosscutting Airborne Remote-Sensing Technologies for Oil and Gas and Earth Science Applications,” by A.D. Aubrey, C. Frankenberg, R.O. Green, M.L. Eastwood, and D.R. Thompson, National Aeronautics and Space Administration Jet Propulsion Laboratory, California Institute of Technology, and A.K. Thorpe, University of California, Santa Barbara, prepared for the 2015 Offshore Technology Conference, Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2015 Offshore Technology Conference. Reproduced by permission.

19 Aug 2015

Panel Event Considers Water Management in a Down Market

An SPE panel event on 26 August will examine the effects of low oil prices on the water management value chain and other aspects of water treatment in the industry.

Low oil prices are driving low utilization rates, resulting in downward pricing pressure across many key segments. Rig count is less than half the level of 2014 Q4 despite some recovery in oil prices in 2015 H1. According to PacWest, fracturing demand, well spuds, horizontal wells and stages fractured, and fracture pricing are projected to drop by an average of 35% in 2015. The US oilfield water management services market is expected to contract by 19% from USD 23.2 billion in 2014 to USD 18.9 billion in 2015.

Which segments of the water management value chain are being impacted the most and why? How are market conditions exacerbating requests for pricing concessions? Where has rig movement been concentrated? Can we expect to see consolidation in the market? How are operators adapting to the new market conditions, and what does it mean for service companies? Has the cost of water treatment declined, and, if so, which technologies are the most predominant and what are the trade-offs? How does new legislation and the threat of more regulatory pressure affect the water management landscape? Find out the answers to these questions and more in the panel session, Water Management in a Down Market, moderated by April Sharr and Jim Summers.

Speakers at the session will be Piers Wells, chief executive officer of Digital H2O, a software and data analytics company focused on digital oilfield water management; John James Tintera, a regulatory expert in all facets of upstream oil and gas exploration, production, and transportation and president of the Texas Water Recycling Association; Francesco Ciulla, managing director of consulting within the IHS Energy Insight group; Kristie S. McLin, water management project lead for ConocoPhillips’ Permian Unconventional asset; April Sharr, business development manager for water management at Baker Hughes; and Jim Summers, partner at Environmental Resources Management, currently leading both the Midstream Sector and Integrated Water Management Practice for the firm.

The event will be held from 1130 to 1330 on 26 August at the Petroleum Club, 35th Floor, Total Plaza, 1201 Louisiana Street, in Houston. The cost is USD 40 for preregistered SPE members, USD 50 for nonmembers and walk-ins, and USD 10 for students or unemployed or retired SPE members.

Read more about the event and register here.

14 Aug 2015

Latin American, Caribbean Conference Shines Light on Sustainability

In July 2015, SPE held the Latin American and Caribbean Health, Safety, Environment, and Sustainability Conference, in Bogota, Colombia. The conference had a theme of “Sharing Best HSE and Sustainability Practices for Balancing Economic Growth, Social Development, and Environmental Protection.” It presented state-of-the-art technologies and their effects on sustainability, addressing the outstanding challenges and experiences in health, safety, environment (HSE); corporate responsibility; and innovation from successful execution of oil and gas projects in increasingly demanding environments.

The conference was officially inaugurated by 2016 SPE President Nathan Meehan, who emphasized SPE’s mission: to collect, disseminate, and exchange technical knowledge concerning the exploration, development, and production of oil and gas resources and related technologies for the public benefit, and to provide opportunities for professionals to enhance their technical and professional competence. He also elaborated on Baker Hughes’ purpose (enabling safe, affordable energy, improving people’s lives), highlighted its Perfect HSE Day way of doing business, and promoted the idea that sustainability is the way of doing business for current and future oil and gas activities.

The conference was host to 147 professionals in management, operations, HSE, corporate responsibility, and sustainability, exploring and discussing what has worked and what has not worked and learning about innovative actions and practices where future technological innovations are required to strengthen the necessary social license to operate under sustainability principles. With professionals coming to Bogota from 15 countries and four continents, the conference also provided an opportunity to network, sharing and learning from successful industry players.

Two plenary sessions were held. The first one, titled Regulatory Framework in the Latin American and Caribbean Region, set the tone for the conference. High-level representatives from energy regulatory bodies of Ecuador, Colombia, and Mexico shared and discussed their efforts to create an adequate business environment for productive and socially responsible development. The panelists were Ulises Neri, commissioner with Mexico’s National Hydrocarbon Commission; Edgar E. Rodriguez, environmental leader with Colombia’s National Hydrocarbon Agency; and Yvonne Fabara, secretary of hydrocarbons at Ecuador’s Secretaría de Hidrocarburos.

The second plenary session, titled Social Challenges in E&P in the Latin American and Caribbean Region, presented industry challenges alongside the best practices that guarantee the development of efficient and sustainable exploration and production (E&P) activities while safeguarding the environment in regions of planned operations. The sector is, no doubt, marked by a high demand for effective social accountability, which can prevent or mitigate long-term effects on the social fabric and along the supply chain, generating shared value for all involved parties and furthering the development of the territory. The invited panelists were all Colombian: David Arce from Arce Rojas Consultants, Wayuu anthropologist Weildler Guerra, and Gustavo Bernal from the Ministry of Energy and Mines.

This conference marked the first time that SPE teamed with the Regional Association of Oil, Gas, and Biofuels Sector Companies in Latin America and the Caribbean (ARPEL) to organize a panel session on sustainability challenges for upstream development in Colombia. The panel was welcomed by local professionals because it offered discussion of important current topics in Colombia, such as the ways oil-revenue distribution affects relationships between operators and communities, as well as other complex social interactions, including those between ethnic minorities and fishermen.

Another highlight of the conference was the subject of the keynote luncheon: Fracking—History, Technology, and Socio-Environment Effects in its Application and Operation. The speaker was Luis A. Anaya from Fenix Oilfield Services. Anaya’s straightforward approach contributed to the audience’s general knowledge and helped demystify this currently controversial technique, which has been in use for years.

The conference was also the second time that SPE offered a formal course on sustainability—The Sustainability Imperative: Making the Case and Driving Change. Actions such as presenting this course demonstrate SPE’s commitment to this subject and are paramount to enhancing understanding and knowledge about balancing economic growth and social and environmental issues.

Attendees left the conference with the understanding that the industry must take on a responsibility that goes beyond current regulation and must proactively promote and provide timely input on technologies and practices directly applied for successful operations. Future similar events in the region include the 2015 SPE Latin American and Caribbean Petroleum Engineering Conference on 18–20 November in Quito, Ecuador, and the 2016 SPE Mexico Health, Safety, Security, Environment and Sustainability Symposium on 30–31 March, which will continue offering insight and learning opportunities.

29 Jul 2015

Communication, Education Key To Discussions About Radioactive E&P Waste

Naturally occurring radioactive material from deep in the Earth is sometimes a byproduct of exploration and production (E&P) in the oil and gas industry. Although some experts say currently produced levels are not a cause for concern, Alex Wagner of Buckhorn Energy and Robert Morris, a radiation safety expert with M.H. Chew and Associates, are working to stay ahead of the curve and educate the public about technologically enhanced, naturally occurring radioactive materials (TENORM).

Wagner and Morris have analyzed recent developments in the management of TENORM. “Right now is a very confusing time,” Wagner said. “The industry as a whole, and regulators, are just starting to learn how to manage this problem.”

“We need to be regulating smartly,” he added, pointing to the public’s generally limited understanding of radiation and the resulting fear, which he called “radiophobia.”

“It’s imperative that the public understand how their regulators with the help of the industry are managing this waste stream,” Wagner said. “We hope that we can address current and future concerns with fact-based discussions rather than rumor or misinformation.”

“The fracturing debate is a good analogy,” he said. “The public is up in arms over fracturing as a result of the lack of publicly available information regarding the process. The oil and gas industry’s attempt to address this has been met with mixed results, likely because it came too late.”

The Pennsylvania Department of Health recently reported preliminary findings of its nearly 2-year-long effort to understand the exposure potential for TENORM. The report can be read here. The report is being revised and will be reissued later this year.

One thousand samples were analyzed from drill cuttings, muds, proppants, sludges, soils, sediments, and flowback and produced water. Natural gas was sampled for radon, and exposure-rate surveys were made on equipment and at operator locations.

The results of the study show that wellsites have low worker-exposure potential but that the potential exists for environmental impact from spills. The majority of conclusions from the report say that more study is needed. “Science is paramount here,” Morris said. “You have to understand what you have before you can regulate it.”

North Dakota is making news for its proposed rulemaking that would allow landfill operators to apply for increased concentration limits for TENORM in existing E&P landfills. Currently, the North Dakota concentration limit is 5 pCi/g, including background. The proposed limit is 50 pCi/g. Argonne National Laboratory prepared a study for the state in order to establish the safety basis for the proposed limit, which can be found here.

A patchwork of regulations covers TENORM, Wagner and Morris said. With a few exceptions including hazardous material transportation, diffuse TENORM, the kind found in E&P waste, is not regulated by the US government because the authority is reserved for states under the Atomic Energy Act.

Because the regulatory, industrial, and public-interest factors vary, the regulations are different in each state. Many states, North Dakota among them, allow TENORM to be disposed of in landfills only if the concentration is less than 5 pCi/g. Montana has a similar rule except for specially permitted landfills that accept E&P waste with total radium concentrations of up to 30 pCi/g. A large fraction of the waste from the Bakken formation is being disposed of in eastern Montana.

Texas allows TENORM waste with total radium concentrations of up to 30 pCi/g to be disposed of in any E&P landfill. Since 1996, Michigan has allowed any landfill to accept TENORM of up to 50 pCi/g. The Michigan limit was reviewed by the state’s TENORM Disposal Advisory Panel, whose recent report endorsed the current limit and provided several other recommendations.

“Every state is taking a different approach and learning different things,” Morris said. “The problem is managed as a function of the hazard. In Pennsylvania, it’s a big deal. In Wyoming, it’s not.”

“The bottom line,” he said, “is that you cannot come up with a scenario that approaches a danger point for the public. There are a few workplace situations, especially during natural gas pipeline maintenance, in which workers could be exposed to significant levels of radioactive materials. Proper management is necessary to ensure TENORM does not create an environmental and long-term health hazard, and communication is a key to success. TENORM is not nuclear waste from reactors or weapons, and it is a mistake to let any misconception go unanswered.”

28 Jul 2015

SPE Adds Understanding Communities to Expanding Training Courses

SPE is expanding its course offerings in the Health, Safety, Security, Environment, and Social Responsibility (HSSE-SR) and Projects, Facilities, and Construction (PFC) disciplines in time for its Annual Technical Conference and Exhibition (ATCE). Two new courses will be offered at the conference, which will be held 28–30 September in Houston.

Within the HSSE-SR discipline, SPE is offering the course Understanding Communities, another in a series of courses under the Sustainability Professional Development Program. The objective of this course is to provide participants with tools and an approach to study and better understand communities in which they operate.

During the past few decades, the general public has become progressively more resistant to development projects in their backyards. This is particularly true today of oil and gas developments involving hydraulic fracturing.

Engineers tend to view this as an education problem, believing that an informed public will understand and align with the industry’s perspective. But there is another piece to this puzzle. Design and operations teams need to understand the communities in which they operate, yet nothing in a typical engineer’s education prepares them for the study of local communities. Effective engagement with local communities is now key to project success in onshore developments, especially where hydraulic fracturing is involved.

This course, developed and presented by social ecologists with decades of experience studying communities, provides a proven approach to understanding local communities.

The instructors will be James Kent, president of JKA Group and a global social ecologist and advocate for using culture-based strategies when introducing site/corridor projects to local communities, and Kevin Preister, director of the Center for Social Ecology and Public Policy of the JKA Group.

Within the PFC discipline, SPE is offering a course on Separator Design Considerations and Operations. This course will expose participants to the different types and functions of separators.

Separators are typically installed immediately downstream of the wellhead and are responsible for the initial, gross separation of well fluids. They are pressure vessels, often capable of handling fluids from high-pressure wells. The purpose of this course is to develop a working knowledge of production separation systems and the associated science, and attendees will gain a valuable understanding of the principles behind the separation of gas, oil, and water.
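As a flavor of the kind of sizing logic such a course touches on, the classic Souders-Brown relation limits gas velocity through a separator so that liquid droplets can settle out. The sketch below is illustrative only and is not course material; the K-factor value and the example fluid densities are assumptions for demonstration, and real designs rely on vendor and industry (e.g., GPSA) guidance.

```python
import math

def souders_brown_velocity(rho_liquid, rho_gas, k=0.107):
    """Maximum allowable gas velocity (m/s) from the Souders-Brown equation.

    rho_liquid, rho_gas: densities in kg/m^3.
    k: empirical capacity factor in m/s; ~0.107 m/s is a common starting
    point for vertical separators fitted with a mist extractor.
    """
    return k * math.sqrt((rho_liquid - rho_gas) / rho_gas)

def min_vessel_diameter(gas_flow_m3_s, v_max):
    """Minimum internal diameter (m) keeping gas velocity below v_max."""
    area = gas_flow_m3_s / v_max            # required cross-sectional area
    return math.sqrt(4.0 * area / math.pi)  # diameter of that circular area

# Illustrative numbers only: a light crude (850 kg/m^3) and a
# moderately dense separator gas (20 kg/m^3), handling 0.5 m^3/s of gas.
v = souders_brown_velocity(850.0, 20.0)
d = min_vessel_diameter(0.5, v)
print(f"max gas velocity = {v:.2f} m/s, min diameter = {d:.2f} m")
```

The same relation underlies why a separator sized for one fluid system can be undersized for another: a lighter gas or heavier liquid changes the allowable velocity and hence the required vessel diameter.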

The course instructor will be Valmore Rodriguez, director of curriculum of surface facilities and competency assessment champion with NExT-Schlumberger.

Both courses will be offered 26 and 27 September in conjunction with ATCE.

Read more about and register for the Understanding Communities course here: http://bit.ly/1MwjPc4

Read more about and register for the Separator Design Considerations and Operations course here: http://bit.ly/1MTC2gF