Our AI Governance Framework

When we talk to operations managers about AI in their water treatment plants, one question comes up every time: "What happens if the AI makes a bad decision at 2am and we end up with an environmental incident?" It's the right question to ask. Here's how we've designed our system to address it.

The Core Principle: Human-in-the-Loop by Default

Our AI system is designed around a simple principle: the AI provides insights and recommendations; humans make decisions. This isn't a limitation—it's a deliberate design choice for industrial environments where the consequences of errors can be significant.

Out of the box, our AI extension will:

  • Detect anomalies and alert operators
  • Diagnose root causes when issues occur
  • Recommend actions based on analysis
  • Provide plain-English explanations of what's happening

Out of the box, it will not:

  • Adjust setpoints
  • Start or stop equipment
  • Modify dosing rates
  • Make any changes to plant operation

If you want the system to take autonomous actions, that capability exists—but it requires explicit configuration, and it comes with a full accountability framework.

The Automation Tiers

We've structured automation into four levels. Each site, and each action type, can be configured independently.

Level 1: Alert Only

System detects anomalies and notifies designated personnel. No analysis or recommendations provided.

Example: "High pH detected in EQ tank (8.4). Investigate."

Configuration: Default for all new deployments.

Level 2: Recommend + Alert

System detects issues, performs root cause analysis, and provides specific recommended actions. Human decides whether to implement.

Example: "High pH detected (8.4). Probable cause: Caustic carryover from CIP cycle completed 45min ago. Recommended: Increase acid dosing to 12L/hr for 30 minutes."

Configuration: Enabled via portal settings with appropriate user permissions.

Level 3: Act with Approval

System can implement its recommendation but requires explicit one-time approval from an authorised user before proceeding.

Example: "Primary pump #1 has failed. Recommended: Activate standby pump #2. [Approve] [Deny] [Investigate First]"

Configuration: Requires site administrator approval and documented risk assessment.

Level 4: Standing Approval

Pre-authorised action classes. System can act autonomously within defined parameters with full logging.

Example: "Primary pump #1 failed at 02:34. Activated standby pump #2 per standing approval SA-2024-003 (authorised by J. Smith, expires 30 Jun 2025)."

Configuration: Requires formal change request, risk assessment, and sign-off. Cannot be self-enabled.

⚠️ Level 4 Requires Deliberate Configuration

Standing approvals cannot be enabled through the standard interface. They require a documented change request, risk assessment, and sign-off from an authorised site representative. This friction is intentional.
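
To make the tiers concrete, here is a minimal sketch in Python of how per-site, per-action-type automation levels and a standing approval record might be represented. The names and structure are illustrative only, not our actual configuration schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class AutomationLevel(IntEnum):
    ALERT_ONLY = 1         # Level 1: detect and notify only
    RECOMMEND = 2          # Level 2: alert plus root cause and recommendation
    ACT_WITH_APPROVAL = 3  # Level 3: one-time human approval per action
    STANDING_APPROVAL = 4  # Level 4: pre-authorised action class

@dataclass
class StandingApproval:
    approval_id: str       # e.g. "SA-2024-003"
    action_class: str      # e.g. "pump_failover"
    authorised_by: str
    expires: date

    def is_active(self, today: date) -> bool:
        return today <= self.expires

# Levels are set per site *and* per action type, so pump failover can
# sit at Level 4 while dosing at the same site stays at Level 2.
automation_config = {
    ("plant_a", "pump_failover"): AutomationLevel.STANDING_APPROVAL,
    ("plant_a", "acid_dosing"): AutomationLevel.RECOMMEND,
    ("plant_b", "pump_failover"): AutomationLevel.ACT_WITH_APPROVAL,
}
```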

The Accountability Chain

Every action the system takes—at any level—is fully traceable. This matters for operational review, but it especially matters if you ever need to answer questions from regulators.

What Gets Logged

  1. Detection – What triggered the event? Sensor readings, threshold breaches, anomaly scores, and the data that led to the alert.
  2. Analysis – How did the system diagnose the issue? What correlations did it find? What was the confidence level?
  3. Recommendation – What action did the system propose? What alternatives were considered? What was the expected outcome?
  4. Decision – Who approved the action? When? Was it a one-time approval or standing approval? Who originally authorised it?
  5. Execution – What action was actually taken? When? What was the result? Did the issue resolve?

This complete chain is exportable. If an EPA auditor asks "why did your system increase chemical dosing at 3am on March 15th?", you can provide the full sequence: what triggered it, what the AI's reasoning was, who approved it, and what the outcome was.
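
For illustration, the five stages map naturally onto a single exportable record. The sketch below uses hypothetical field names, not our actual export format:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AuditRecord:
    # 1. Detection: what triggered the event
    trigger: str                    # e.g. "pH 8.4 in EQ tank, threshold 8.2"
    sensor_readings: dict
    anomaly_score: float
    # 2. Analysis: how the system diagnosed the issue
    diagnosis: str
    confidence: float
    # 3. Recommendation: what the system proposed
    recommended_action: str
    alternatives_considered: list = field(default_factory=list)
    # 4. Decision: who approved it, and how
    approved_by: str = ""
    approval_type: str = ""         # "one-time" or "standing"
    standing_approval_id: str = ""  # e.g. "SA-2024-003", if applicable
    # 5. Execution: what actually happened
    action_taken: str = ""
    executed_at: datetime | None = None
    outcome: str = ""
```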

Guardrails: What the AI Cannot Do

Regardless of configuration, there are hard limits the AI cannot exceed:

  • Dosing limits – Maximum chemical dosing rates are capped. The AI cannot recommend or execute dosing beyond these limits, period.
  • Safety interlocks – Critical safety functions remain with the PLC. The AI cannot override safety shutdowns.
  • Permission escalation – The AI cannot grant itself additional permissions. All automation levels require human configuration.
  • Scope boundaries – Actions are scoped to specific equipment and parameters. An approval to manage pump duty/standby does not extend to dosing control.

Why Guardrails Matter

These guardrails exist because we recognise that AI models can be wrong, that sensor data can be faulty, and that edge cases exist that no training data anticipated. The guardrails ensure that even if the AI makes a poor judgment, the consequences are bounded.
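
As a sketch of how hard limits like these are typically enforced in code (the action names, cap value, and function below are illustrative, not our implementation):

```python
MAX_DOSING_RATE_L_PER_HR = 15.0  # illustrative hard cap, set per site

def check_guardrails(action: str, equipment: str, value: float,
                     approved_scope: set[str]) -> None:
    """Reject any action outside its approved scope or above a hard cap."""
    # Scope boundary: an approval for one piece of equipment does not
    # extend to another.
    if equipment not in approved_scope:
        raise PermissionError(
            f"{action} on {equipment} is outside the approved scope")
    # Dosing limit: the AI can neither recommend nor execute beyond it.
    if action == "set_dosing_rate" and value > MAX_DOSING_RATE_L_PER_HR:
        raise ValueError(
            f"dosing rate {value} L/hr exceeds hard cap of "
            f"{MAX_DOSING_RATE_L_PER_HR} L/hr")
```

The design choice worth noting: a blocked action raises rather than clamps, so it fails loudly and escalates to a human instead of being silently adjusted.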

Regulatory Considerations

We get asked regularly: "How does this stand up to EPA scrutiny?"

The honest answer: AI in industrial control is still an evolving regulatory space. What we can tell you is how we've designed the system to support compliance:

  • Full audit trail – Every decision is logged and exportable. You can demonstrate exactly what happened, why, and who was responsible.
  • Human accountability – Autonomous actions trace back to a human who authorised that action class. There's always a person in the chain.
  • Conservative defaults – The system defaults to alert-only. Automation requires deliberate enablement.
  • Documentation – We provide risk assessment templates and standard operating procedures that can be incorporated into your site's existing compliance framework.

"We can't guarantee how any specific regulator will view AI-assisted operations. What we can guarantee is that you'll have complete documentation of what the system did and why—which is more than most manual operations can provide."

Overnight/Unattended Operations

This is often the goal: reduce callouts, let the system handle routine issues overnight, and have operators focus on complex problems during business hours.

Our recommended approach:

  1. Start with Level 2 – Let the system alert and recommend for 60-90 days. Review its recommendations. Did they make sense? Were they appropriate?
  2. Identify routine scenarios – Which recommendations did you approve every time without hesitation? Those are candidates for standing approval.
  3. Document the decision – For each standing approval, document the scenario, the authorised response, the boundaries, and the expiry date.
  4. Review regularly – Standing approvals should be reviewed quarterly. Circumstances change. What made sense six months ago might not be appropriate now.

This approach lets you build confidence gradually. You're not handing over control on day one—you're observing, validating, and then selectively automating the scenarios where it makes sense.
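
Step 4 is the easiest to let slip, so it is worth automating the reminder. A small sketch (again with hypothetical names) that flags standing approvals which have expired or are overdue for their quarterly review:

```python
from datetime import date, timedelta

def approvals_needing_review(approvals: list[dict], today: date,
                             review_interval_days: int = 90) -> list[str]:
    """Flag standing approvals that have expired or are overdue for review."""
    due = []
    for sa in approvals:
        expired = today > sa["expires"]
        overdue = today - sa["last_reviewed"] > timedelta(days=review_interval_days)
        if expired or overdue:
            due.append(sa["approval_id"])
    return due

# Example: last reviewed in January, checked in May -> flagged.
approvals = [{"approval_id": "SA-2024-003",
              "expires": date(2025, 6, 30),
              "last_reviewed": date(2025, 1, 15)}]
print(approvals_needing_review(approvals, today=date(2025, 5, 1)))
# ['SA-2024-003']
```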

Questions About Implementation?

Let's Talk

Every site is different. We’re happy to walk through specific scenarios and discuss how the governance framework applies to your operations.
