What’s Your Basis of Process Safety, and Is It Correct? A Pragmatic Safety Approach to Design

Why “Basis of Process Safety” Matters More Than People Think

Every project has a “hidden foundation” that holds the whole design up. It’s not the concrete, the steelwork, or the cable trays. It’s the set of safety assumptions everyone quietly agrees to, often without realising it. That’s your basis of safety.

And here’s the thing: once a basis of safety is set early, it spreads through everything like dye in water. It influences equipment selection, layout, control philosophy, hazardous area classification, maintenance regimes, and ultimately your CapEx and OpEx. If the basis is right, you’ve built a defendable, efficient design. If it’s wrong (or just untested), you can end up spending serious money solving a problem you don’t actually have.

The hidden cost of assumptions

Assumptions are useful. You can’t design anything without them. But they carry a price tag, sometimes immediately, sometimes years later.

  • CapEx creep: more expensive equipment, more complex installation, bigger footprints.
  • OpEx drag: specialist inspections, spares, training, downtime windows, paperwork-heavy modifications.
  • Risk of rework: late discovery that the “truth” is different from the early assumption.

Assumptions are like ordering the biggest winter coat “just in case” and then moving to Spain. You might still be safe… but you’ll sweat for years.

When “playing safe” becomes expensive (and risky in its own way)

Safety is non-negotiable. But “overly cautious” design based on weak assumptions can create operational risk through complexity. More safeguards, more interlocks, more special procedures—these can introduce extra failure modes and more reliance on perfect human behaviour.

So the real question isn’t “Are we being safe?”
It’s: Are we being correctly safe, for the actual hazard, in the real operating envelope?

What Do We Mean by a Pragmatic Safety Approach?

“Pragmatic” sometimes gets misunderstood as “loose” or “cutting corners.” That’s not what we mean.

A pragmatic safety approach is evidence-led, proportionate, and defendable. It’s choosing controls because they match the hazard and the risk—not because they feel familiar or overly conservative.

Pragmatic doesn’t mean relaxed

Pragmatic safety still expects:

  • proper hazard identification,
  • structured risk assessment,
  • compliance with relevant regulations,
  • and safeguards that genuinely reduce risk.

It just avoids “designing by default” when the hazard basis hasn’t been proven.

It means evidence-led, proportionate, and defendable

A pragmatic basis of safety is one you can explain clearly to:

  • operators and maintainers,
  • project leadership,
  • duty holders,
  • auditors and insurers,
  • and (where applicable) regulators.

If you can’t answer “How do you know?” with something better than “It’s always been done that way,” you’ve got a belief—not a basis.

The Inherent Assumptions That Commonly Distort Design

Let’s call out a few repeat offenders. These show up in projects across process industries, and they can quietly steer designs into expensive territory.

“It’s flammable” (without proof)

This is the big one. A material gets labelled flammable early, and the project instantly starts leaning toward ATEX-rated kit and hazardous area classification assumptions.

But “flammable” isn’t a vibe. It’s a property—under specific conditions—shaped by:

  • flash point,
  • vapour pressure,
  • temperature and pressure,
  • composition and impurities,
  • and credible release scenarios.

If you don’t know those, you don’t know the hazard.

“Worst-case all the time”

Worst-case thinking has a place in safety—especially for identifying credible major accident scenarios. But designing the entire plant as if every variable is at its absolute worst, permanently, can become a costly habit.

A better approach: credible worst cases, not imaginary ones. Safety decisions should be grounded in realistic scenarios, not a fear-based “what if everything goes wrong at once?”

“We’ve always done it this way”

This is a shortcut disguised as experience. Past designs might have been driven by site-specific constraints, different materials, different duty holder risk appetite, or simply rushed decisions that became “standard.”

Copy-paste safety is not safety. It’s inertia.

A Real Project Lesson: Challenging the “Flammable” Assumption

On a recent project, the early-stage design carried an assumption: the processed material should be treated as flammable. That single assumption shaped the whole concept and scoping approach.

The assumption shaped everything—fast

Once “flammable” enters the chat, you often see knock-on effects like:

  • discussions about ATEX motors and instruments,
  • bigger hazardous area zones,
  • more expensive installation standards,
  • heavier maintenance procedures and competence requirements,
  • and added complexity in modifications and operations.

Sometimes that’s exactly right. But sometimes it’s simply untested.

Then someone asked the basic questions again

The turning point wasn’t a fancy model. It was simple curiosity:

  • Do we actually know the flash point?
  • Will we operate anywhere near it?
  • Can we design to keep a safe margin below it?
  • What features keep that margin real, not theoretical?

Those “basic” questions can flip a design on its head.

The Four Questions That Can Change Your Design Direction

1) Do we even know the actual flash point of the fluid?

This sounds straightforward, but it’s where many projects stumble.

Why “SDS says so” might not be enough

Safety Data Sheets are helpful, but they can be generic—especially for blends, variable compositions, or proprietary mixtures. Some list ranges or conservative values without clear alignment to your specific conditions.

A pragmatic step is to confirm whether the SDS value truly represents:

  • your actual mixture,
  • the likely impurities/by-products,
  • and the test method and conditions.

If the property drives major design cost, it’s often worth validating properly—early.

2) Will the material be close to flash point in operation?

Flash point is only meaningful relative to operating temperature (and credible upsets). If your process operates comfortably below flash point with a defendable margin, the risk picture can change.

Don’t just ask “normal”—ask “credible abnormal”

You want to understand:

  • maximum credible operating temperature,
  • maximum credible upset temperature (loss of cooling, control failure, blocked flow, etc.),
  • how pressure impacts vapour formation,
  • whether composition drift lowers flash point,
  • and whether batch variability exists.

This is where engineering becomes practical: you’re defining the real envelope the plant will live in.
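As a rough illustration of pinning down that envelope (every scenario name and number below is invented for the sketch), the idea is simply to list the credible cases and find the governing one:

```python
# Hypothetical illustration: find the governing (highest credible) temperature
# across normal operation and credible upset cases. All values are invented.
credible_temperatures_c = {
    "normal operation": 45.0,
    "loss of cooling": 68.0,
    "control failure (heater full on)": 72.0,
    "blocked outlet / dead-headed pump": 60.0,
}

governing_case = max(credible_temperatures_c, key=credible_temperatures_c.get)
governing_temp = credible_temperatures_c[governing_case]

# The governing case, not the normal case, is what the hazard basis must survive.
print(f"Governing case: {governing_case} at {governing_temp} degC")
```

The point of the exercise isn't the arithmetic; it's forcing the team to write down which upset actually governs, because that's the temperature the flash-point margin has to beat.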

3) Can we maintain a safe margin below flash point, and what does that do for the design?

This is the heart of pragmatic safety. If you can maintain a safe margin below flash point, you may avoid certain hazardous area assumptions—but only if the margin is engineered and defendable.

What “safe margin” actually means

A margin isn’t a number you casually pick. It should reflect:

  • instrument accuracy and drift,
  • control stability,
  • thermal inertia and response times,
  • upset scenarios and safeguarding layers,
  • and operational discipline.

A margin has to survive real life: night shifts, maintenance realities, and imperfect days.
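One way to make that concrete (a sketch with invented numbers, not a prescribed method) is to build the margin as a budget of the error terms listed above, then test the operating maximum against flash point minus that budget:

```python
# Hypothetical margin budget in degC. Figures are illustrative only; real
# values come from instrument datasheets, upset studies, and site experience.
margin_components_c = {
    "sensor accuracy": 1.5,
    "calibration drift between proof tests": 1.0,
    "control band (normal oscillation)": 2.0,
    "credible upset allowance": 8.0,
}

required_margin = sum(margin_components_c.values())

flash_point_c = 61.0          # assumed validated value for the actual mixture
max_operating_temp_c = 45.0   # assumed maximum from the operating envelope

# The basis is defendable only if the operating maximum sits below
# flash point minus the full margin budget.
margin_ok = max_operating_temp_c <= flash_point_c - required_margin
print(f"Required margin: {required_margin} degC, basis holds: {margin_ok}")
```

Writing the margin as a budget makes it auditable: each component can be traced to a datasheet, a study, or an operating record, rather than to "a number we picked."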

4) What design features ensure we keep that margin?

If a non-ATEX approach (where justified) could save meaningful CapEx/OpEx, you’ll need design features that make the safety basis robust.

Examples of practical safeguards

Depending on the system, this might include:

  • tight temperature control with appropriate sensor placement,
  • high-high alarms and trips with tested response,
  • permissives that prevent heating above defined limits,
  • flow/level interlocks to prevent dry heating,
  • fail-safe cooling philosophy,
  • cause-and-effect clarity (what happens when X fails?),
  • and procedures plus competence arrangements that match the safety intent.

Think of it like keeping a car within speed limits: it’s easier when you have a speedometer, cruise control, and clear road signs—rather than just “good intentions.”
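A minimal sketch of how a couple of those safeguards combine (all limits and names are hypothetical; the real logic lives in the safety system and its cause-and-effect charts):

```python
# Hypothetical interlock sketch: a heating permissive plus a high-high trip,
# with a fail-safe bias - any sensor fault is treated as unsafe.
HIGH_ALARM_C = 50.0
HIGH_HIGH_TRIP_C = 55.0

def heating_permitted(temp_c, level_ok, temp_sensor_healthy):
    """Permissive: allow heating only with a healthy sensor, adequate level
    (no dry heating), and temperature below the high-high trip."""
    if not temp_sensor_healthy:
        return False  # fail-safe on instrument fault
    if not level_ok:
        return False  # level interlock against dry heating
    return temp_c < HIGH_HIGH_TRIP_C

def alarm_state(temp_c):
    """Cause-and-effect in miniature: what happens at each threshold."""
    if temp_c >= HIGH_HIGH_TRIP_C:
        return "TRIP"
    if temp_c >= HIGH_ALARM_C:
        return "HIGH ALARM"
    return "NORMAL"

print(heating_permitted(48.0, True, True))  # True
print(alarm_state(52.0))                    # HIGH ALARM
```

Even a toy version like this makes the cause-and-effect question visible: every branch is an answer to "what happens when X fails?"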

The Domino Effect: ATEX vs Non-ATEX Decisions

Once you confirm the real hazard basis, your design options often widen. The conversation becomes less emotional (“flammable = ATEX”) and more engineering-led (“what’s credible, and what’s proportionate?”).

CapEx impacts (the obvious ones)

Over-classification can quickly inflate:

  • equipment cost (Ex-rated motors, instruments, panels),
  • installation and verification effort,
  • ventilation or segregation requirements,
  • documentation burden,
  • and commissioning complexity.

OpEx impacts (the painful long-term ones)

OpEx is where early assumptions quietly charge you rent:

  • specialist inspection regimes,
  • longer planned shutdown tasks,
  • expensive spares and limited supplier options,
  • higher barriers to modifications and optimisations,
  • training and competence overhead.

If your plant runs for 10–25 years, this adds up fast.

The trade you must be honest about

A non-ATEX approach (where justified) can reduce equipment burden, but it may increase reliance on maintaining specific process conditions through control and operating discipline. That’s not a bad trade—if it’s realistic for your site and culture.

The “right” answer is the one that is:

  • correct for the hazard,
  • credible for operations,
  • and defendable for the duty holder.

How to Build a Defendable Basis of Safety (Without Slowing the Project)

Define the hazard basis early

At concept/FEED, write down:

  • material properties used and sources,
  • operating envelope (normal and credible abnormal),
  • key safeguards assumed,
  • and what needs validation before final decisions.

This prevents “basis drift,” where assumptions morph quietly as the project evolves.

Separate facts, assumptions, and known-unknowns

It’s perfectly fine to say, “We don’t know yet.” The only mistake is not capturing it.

A pragmatic basis of safety explicitly states:

  • what is confirmed,
  • what is assumed (temporarily),
  • what will be validated,
  • and what decision gates depend on validation.
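One lightweight way to capture that split (an illustrative structure, not a standard; all entries are invented) is a simple assumption register where each item carries a status and a flag for whether a decision waits on it:

```python
from dataclasses import dataclass

# Hypothetical register entry. Field names are invented for illustration;
# any project risk register or assumptions log format works equally well.
@dataclass
class BasisEntry:
    item: str
    status: str          # "confirmed" | "assumed" | "to-validate"
    source: str
    decision_gate: bool  # does a design decision wait on this?

register = [
    BasisEntry("Flash point 61 degC", "to-validate", "generic SDS value", True),
    BasisEntry("Max operating temp 45 degC", "confirmed", "process simulation", False),
    BasisEntry("Composition drift negligible", "assumed", "vendor statement", True),
]

# Anything gating a decision that isn't yet confirmed is an open action.
open_gates = [e.item for e in register if e.decision_gate and e.status != "confirmed"]
print("Decisions waiting on validation:", open_gates)
```

The value is in the last line: at any project gate you can list exactly which decisions still rest on unconfirmed assumptions.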

Use the right risk tools at the right time

  • HAZID: early hazards and risk themes
  • HAZOP: structured review once the design is mature enough
  • LOPA: when you need to test if safeguards provide enough risk reduction
  • DSEAR / hazardous area classification: where flammable atmospheres are credible
  • ALARP justification: demonstrating risk reduction is reasonably practicable

The aim is not paperwork. It’s clarity.

DSEAR, ATEX, and Hazardous Area Classification: Getting It “Just Right”

In the UK, DSEAR and ATEX considerations are often central in process facilities. The pragmatic goal is to avoid both extremes:

  • under-classification (unsafe),
  • over-classification (expensive and operationally heavy).

A defendable classification comes from:

  • credible release scenarios,
  • realistic ventilation assumptions,
  • and accurate material properties tied to your actual conditions.

“We Don’t Have All the Answers Yet” — That’s Normal

Engineering is decision-making under uncertainty. You rarely get perfect data at the perfect moment. The trick is to manage uncertainty without gold-plating everything.

A simple rule of thumb:

  • If an assumption drives major cost or classification, validate it early.
  • If it has high safety impact, treat it as a decision gate.
  • If it’s low impact, document it and move forward.
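The rule of thumb above can be written down as a tiny triage function (the categories are from the bullets; treating safety impact as the overriding test is a design choice in this sketch):

```python
# Hypothetical triage of a design assumption:
# high safety impact -> decision gate; drives major cost/classification ->
# validate early; otherwise document it and move forward.
def triage_assumption(drives_major_cost: bool, high_safety_impact: bool) -> str:
    if high_safety_impact:
        return "treat as decision gate"
    if drives_major_cost:
        return "validate early"
    return "document and move forward"

print(triage_assumption(True, False))   # validate early
print(triage_assumption(False, True))   # treat as decision gate
print(triage_assumption(False, False))  # document and move forward
```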

A Quick Self-Check: Is Your Basis of Safety Drifting?

Ask yourself:

  • Are we using real material properties, or generic ones?
  • Do we understand credible upsets, not just normal operation?
  • Are we designing for a hazard we haven’t proved?
  • Have we documented our key assumptions and validation actions?
  • Would we defend this basis confidently in front of an auditor?

If any answer feels shaky, that’s not a failure—it’s a prompt to tighten the foundation.

How IDEA Supports Pragmatic Safety in Design

At IDEA, we help teams set (and keep) a clear, defendable basis of safety across concept, FEED, detailed design, and commissioning. That typically includes:

  • clarifying hazard basis and operating envelopes,
  • facilitating HAZID/HAZOP and supporting LOPA where appropriate,
  • developing DSEAR-aligned strategies and hazardous area classification inputs,
  • translating safety intent into practical design features (controls, interlocks, alarms, procedures),
  • and producing documentation that stands up during handover and future modifications.

If you’re wondering whether a core assumption is inflating your design, we can help you pressure-test it—early—before it becomes expensive.

Conclusion

A basis of safety isn’t a box to tick. It’s the story your design tells about what could go wrong, how likely it is, and what you’ve done about it. If that story starts with an untested assumption—especially something as powerful as “it’s flammable”—you might be building a costly solution to the wrong problem. A pragmatic safety approach doesn’t reduce safety; it strengthens it by anchoring decisions to evidence, credible scenarios, and realistic operations. So, what’s your basis of safety—and is it actually correct? And if you’re not sure, shouldn’t you be challenging it?


FAQs

1) What exactly is a “basis of safety”?

A basis of safety is the set of safety-critical assumptions and design decisions that define the hazard scenario, operating envelope, and safeguards your design relies on.

2) Is challenging assumptions the same as reducing safety?

No. Challenging assumptions is about ensuring the safety basis is correct and defendable. It often improves safety by removing confusion and mismatched controls.

3) When should we confirm properties like flash point?

As early as possible—especially if the property affects classification, equipment selection, or major cost decisions. Early validation avoids expensive rework.

4) Can a non-ATEX approach ever be appropriate?

Potentially, yes—where justified by credible conditions and supported by engineered safeguards that maintain the safety basis. It must be defendable for the duty holder.

5) What’s the biggest risk of an overly conservative basis of safety?

It can drive unnecessary complexity, higher lifecycle cost, and operational burdens—sometimes creating new risks through increased failure modes and reliance on perfect human behaviour.
