
Every project has a “hidden foundation” that holds the whole design up. It’s not the concrete, the steelwork, or the cable trays. It’s the set of safety assumptions everyone quietly agrees to, often without realising it. That’s your basis of safety.
And here’s the thing: once a basis of safety is set early, it spreads through everything like dye in water. It influences equipment selection, layout, control philosophy, hazardous area classification, maintenance regimes, and ultimately your CapEx and OpEx. If the basis is right, you’ve built a defendable, efficient design. If it’s wrong (or just untested), you can end up spending serious money solving a problem you don’t actually have.
Assumptions are useful. You can’t design anything without them. But they carry a price tag, sometimes immediately, sometimes years later.
Assumptions are like ordering the biggest winter coat “just in case” and then moving to Spain. You might still be safe… but you’ll sweat for years.
Safety is non-negotiable. But “overly cautious” design based on weak assumptions can create operational risk through complexity. More safeguards, more interlocks, more special procedures—these can introduce extra failure modes and more reliance on perfect human behaviour.
So the real question isn’t “Are we being safe?”
It’s: Are we being correctly safe, for the actual hazard, in the real operating envelope?
“Pragmatic” sometimes gets misunderstood as “loose” or “cutting corners.” That’s not what we mean.
A pragmatic safety approach is evidence-led, proportionate, and defendable. It’s choosing controls because they match the hazard and the risk—not because they feel familiar or overly conservative.
Pragmatic safety still expects rigorous hazard identification, evidence-led risk assessment, and controls that are proportionate and defendable. It just avoids “designing by default” when the hazard basis hasn’t been proven.
A pragmatic basis of safety is one you can explain clearly to a regulator, to the operators who run the plant, and to the engineer who inherits the design.
If you can’t answer “How do you know?” with something better than “It’s always been done that way,” you’ve got a belief—not a basis.
Let’s call out a few repeat offenders. These show up in projects across process industries, and they can quietly steer designs into expensive territory.
This is the big one. A material gets labelled flammable early, and the project instantly starts leaning toward ATEX-rated kit and hazardous area classification assumptions.
But “flammable” isn’t a vibe. It’s a property—under specific conditions—shaped by flash point, composition, vapour concentration, and the temperatures and pressures the material actually sees.
If you don’t know those, you don’t know the hazard.
Worst-case thinking has a place in safety—especially for identifying credible major accident scenarios. But designing the entire plant as if every variable is at its absolute worst, permanently, can become a costly habit.
A better approach: credible worst cases, not imaginary ones. Safety decisions should be grounded in realistic scenarios, not a fear-based “what if everything goes wrong at once?”
This is a shortcut disguised as experience. Past designs might have been driven by site-specific constraints, different materials, different duty holder risk appetite, or simply rushed decisions that became “standard.”
Copy-paste safety is not safety. It’s inertia.
On a recent project, the early-stage design carried an assumption: the processed material should be treated as flammable. That single assumption shaped the whole concept and scoping approach.
Once “flammable” enters the chat, you often see knock-on effects like hazardous area classification, ATEX-rated equipment, specialist inspection and maintenance regimes, and extra procedural controls.
Sometimes that’s exactly right. But sometimes it’s simply untested.
The turning point wasn’t a fancy model. It was simple curiosity: Is this material actually flammable at our operating conditions? What’s the real flash point? How does it compare to the temperatures we’ll actually run at?
Those “basic” questions can flip a design on its head.
This sounds straightforward, but it’s where many projects stumble.
Safety Data Sheets are helpful, but they can be generic—especially for blends, variable compositions, or proprietary mixtures. Some list ranges or conservative values without clear alignment to your specific conditions.
A pragmatic step is to confirm whether the SDS value truly represents your actual material and composition, the test method behind the number, and the conditions your process will really see.
If the property drives major design cost, it’s often worth validating properly—early.
Flash point is only meaningful relative to operating temperature (and credible upsets). If your process operates comfortably below flash point with a defendable margin, the risk picture can change.
Don’t just ask “normal”—ask “credible abnormal”
You want to understand normal operating temperatures, credible upset temperatures, how long an excursion could last, and what would detect and stop it.
This is where engineering becomes practical: you’re defining the real envelope the plant will live in.
This is the heart of pragmatic safety. If you can maintain a safe margin below flash point, you may avoid certain hazardous area assumptions—but only if the margin is engineered and defendable.
A margin isn’t a number you casually pick. It should reflect measurement uncertainty, control system performance, credible upsets, and variability in the material itself.
A margin has to survive real life: night shifts, maintenance realities, and imperfect days.
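As a concrete illustration of what “engineered and defendable” can mean, here’s a minimal design-time margin check in Python. The flash point, operating temperatures, upset allowance, and required margin below are all invented for illustration, not values from any project:

```python
# Sketch: design-time check that operating temperature stays a defendable
# margin below flash point. All values are illustrative assumptions.

def margin_check(flash_point_c: float,
                 max_normal_temp_c: float,
                 credible_upset_c: float,
                 required_margin_k: float) -> bool:
    """Return True if the worst credible temperature still sits at least
    `required_margin_k` below the measured flash point."""
    worst_credible_temp = max_normal_temp_c + credible_upset_c
    actual_margin = flash_point_c - worst_credible_temp
    print(f"Worst credible temperature: {worst_credible_temp:.1f} °C")
    print(f"Margin below flash point:   {actual_margin:.1f} K "
          f"(required: {required_margin_k:.1f} K)")
    return actual_margin >= required_margin_k

# Illustrative numbers only: a measured flash point of 60 °C, normal
# operation at 35 °C, a credible 10 K upset, and a 15 K required margin.
if margin_check(flash_point_c=60.0, max_normal_temp_c=35.0,
                credible_upset_c=10.0, required_margin_k=15.0):
    print("Margin holds: a non-ATEX basis may be arguable.")
else:
    print("Margin fails: revisit the basis of safety.")
```

Note that the check is against the worst credible temperature, not the normal one: that’s the “credible abnormal” question from above, baked into the arithmetic.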
If a non-ATEX approach (where justified) could save meaningful CapEx/OpEx, you’ll need design features that make the safety basis robust.
Depending on the system, this might include continuous temperature monitoring, alarms and trips that hold the process below the flash-point margin, and clearly defined operating limits and procedures.
Think of it like keeping a car within speed limits: it’s easier when you have a speedometer, cruise control, and clear road signs—rather than just “good intentions.”
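Extending the analogy, the “speedometer and cruise control” part might look like simple alarm-and-trip logic that defends the margin in operation. This is a sketch only; the setpoints are invented, and a real plant would implement this in its control and safety systems, not a script:

```python
# Sketch: alarm/trip logic that defends a below-flash-point operating
# envelope. Setpoints and values are illustrative assumptions.

FLASH_POINT_C = 60.0      # assumed measured flash point
TRIP_SETPOINT_C = 45.0    # preserves the assumed 15 K margin
ALARM_SETPOINT_C = 40.0   # early warning before the trip

def evaluate(temperature_c: float) -> str:
    """Map a live temperature reading to an action, most severe first."""
    if temperature_c >= TRIP_SETPOINT_C:
        return "TRIP: isolate heat input, margin to flash point at risk"
    if temperature_c >= ALARM_SETPOINT_C:
        return "ALARM: operator intervention required"
    return "OK: inside the defended envelope"

for reading in (35.0, 41.5, 46.0):   # illustrative readings
    print(f"{reading:5.1f} °C -> {evaluate(reading)}")
```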
Once you confirm the real hazard basis, your design options often widen. The conversation becomes less emotional (“flammable = ATEX”) and more engineering-led (“what’s credible, and what’s proportionate?”).
Over-classification can quickly inflate CapEx: equipment cost, installation complexity, and the scope of inspection, certification, and documentation.
OpEx is where early assumptions quietly charge you rent: Ex-rated spares, specialist inspection and maintenance regimes, extra training, and slower, more constrained interventions.
If your plant runs for 10–25 years, this adds up fast.
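To see how fast it adds up, here’s a deliberately rough back-of-envelope comparison. Every figure is an invented placeholder, not benchmark data; the point is the arithmetic, not the numbers:

```python
# Sketch: 20-year cost comparison of two safety bases.
# Every figure here is an invented placeholder, not project data.

YEARS = 20

atex = {
    "capex": 500_000,         # Ex-rated equipment premium
    "opex_per_year": 60_000,  # Ex inspections, certified spares, training
}
non_atex = {
    "capex": 350_000,         # standard equipment plus extra instrumentation
    "opex_per_year": 35_000,  # calibration and proof-testing of safeguards
}

for name, basis in (("ATEX", atex), ("Non-ATEX", non_atex)):
    lifetime = basis["capex"] + YEARS * basis["opex_per_year"]
    print(f"{name:8s}: {lifetime:>12,} over {YEARS} years")
```

Even with made-up numbers, the shape of the result is the lesson: the recurring OpEx term dominates the one-off CapEx difference over a long plant life.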
A non-ATEX approach (where justified) can reduce equipment burden, but it may increase reliance on maintaining specific process conditions through control and operating discipline. That’s not a bad trade—if it’s realistic for your site and culture.
The “right” answer is the one that is technically justified, proportionate to the real hazard, and operable by the people and culture that will actually run the plant.
At concept/FEED, write down the safety-critical assumptions, the evidence (or gap) behind each one, who owns each, and what would trigger a review.
This prevents “basis drift,” where assumptions morph quietly as the project evolves.
It’s perfectly fine to say, “We don’t know yet.” The only mistake is not capturing it.
A pragmatic basis of safety explicitly states what the hazard is, what evidence supports that view, what the design relies on to stay safe, and what would invalidate the basis.
The aim is not paperwork. It’s clarity.
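One lightweight way to capture this is a simple assumptions register. Below is a minimal sketch in Python; the field names and the example entry are our own illustrative assumptions, not a prescribed format:

```python
# Sketch: a minimal assumptions register for concept/FEED.
# Fields and the example entry are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class SafetyAssumption:
    statement: str        # the assumption, stated plainly
    evidence: str         # what supports it ("none yet" is a valid answer)
    owner: str            # who is accountable for closing it out
    review_trigger: str   # what change forces a re-check

register = [
    SafetyAssumption(
        statement="Material treated as flammable at process conditions",
        evidence="Generic SDS only; flash point not yet tested for our blend",
        owner="Process safety lead",
        review_trigger="Flash point test results, or any composition change",
    ),
]

for a in register:
    flag = "VALIDATE EARLY" if "not yet" in a.evidence else "ok"
    print(f"[{flag}] {a.statement} (owner: {a.owner})")
```

The value isn’t the tooling; it’s that every assumption has an owner and a trigger, which is exactly what stops “basis drift”.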
In the UK, DSEAR and ATEX considerations are often central in process facilities. The pragmatic goal is to avoid both extremes: over-classification driven by untested assumptions, and under-classification driven by optimism.
A defendable classification comes from validated material properties, a defined operating envelope, and credible release scenarios.
Engineering is decision-making under uncertainty. You rarely get perfect data at the perfect moment. The trick is to manage uncertainty without gold-plating everything.
A simple rule of thumb: the more cost an assumption drives, the earlier it deserves validation. Ask yourself: How do we know? What happens if we’re wrong? What would it cost to find out now rather than later?
If any answer feels shaky, that’s not a failure—it’s a prompt to tighten the foundation.
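If it helps, that rule of thumb can be made mechanical. Here’s a hedged sketch that ranks assumptions by cost impact times uncertainty, so the biggest, least-proven ones get validated first; the scores and entries are invented for illustration:

```python
# Sketch: rank assumptions so the costliest, least-proven ones are
# validated first. Scores and entries are illustrative assumptions.

assumptions = [
    # (description, cost impact 1-5, uncertainty 1-5)
    ("Material is flammable at process conditions", 5, 4),
    ("Feed composition stays within SDS range",      3, 3),
    ("Ambient temperature never exceeds 30 °C",      2, 1),
]

# Highest (cost impact x uncertainty) first: validate these earliest.
for desc, cost, unc in sorted(assumptions, key=lambda a: a[1] * a[2],
                              reverse=True):
    print(f"priority {cost * unc:2d}: {desc}")
```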
At IDEA, we help teams set (and keep) a clear, defendable basis of safety across concept, FEED, detailed design, and commissioning. That typically includes validating the material properties that drive cost, defining the real operating envelope, supporting DSEAR/ATEX assessment and hazardous area classification, and documenting a basis of safety that survives scrutiny.
If you’re wondering whether a core assumption is inflating your design, we can help you pressure-test it—early—before it becomes expensive.
A basis of safety isn’t a box to tick. It’s the story your design tells about what could go wrong, how likely it is, and what you’ve done about it. If that story starts with an untested assumption—especially something as powerful as “it’s flammable”—you might be building a costly solution to the wrong problem. A pragmatic safety approach doesn’t reduce safety; it strengthens it by anchoring decisions to evidence, credible scenarios, and realistic operations. So, what’s your basis of safety—and is it actually correct? And if you’re not sure, shouldn’t you be challenging it?
What is a basis of safety? A basis of safety is the set of safety-critical assumptions and design decisions that define the hazard scenario, operating envelope, and safeguards your design relies on.
Does challenging safety assumptions reduce safety? No. Challenging assumptions is about ensuring the safety basis is correct and defendable. It often improves safety by removing confusion and mismatched controls.
When should a cost-driving material property be validated? As early as possible—especially if the property affects classification, equipment selection, or major cost decisions. Early validation avoids expensive rework.
Can a non-ATEX design be justified for a material labelled flammable? Potentially, yes—where justified by credible conditions and supported by engineered safeguards that maintain the safety basis. It must be defendable for the duty holder.
What’s the cost of over-classification? It can drive unnecessary complexity, higher lifecycle cost, and operational burdens—sometimes creating new risks through increased failure modes and reliance on perfect human behaviour.