Our AI Governance Framework
When we talk to operations managers about AI in their water treatment plants, one question comes up every time: "What happens if the AI makes a bad decision at 2am and we end up with an environmental incident?" It's the right question to ask. Here's how we've designed our system to address it.
The Core Principle: Human-in-the-Loop by Default
Our AI system is designed around a simple principle: the AI provides insights and recommendations; humans make decisions. This isn't a limitation—it's a deliberate design choice for industrial environments where the consequences of errors can be significant.
Out of the box, our AI extension will:
- Detect anomalies and alert operators
- Diagnose root causes when issues occur
- Recommend actions based on analysis
- Provide plain-English explanations of what's happening
Out of the box, it will not:
- Adjust setpoints
- Start or stop equipment
- Modify dosing rates
- Make any changes to plant operation
If you want the system to take autonomous actions, that capability exists—but it requires explicit configuration, and it comes with a full accountability framework.
The Automation Tiers
We've structured automation into four levels. Each site, and each action type, can be configured independently.
Level 1: Alert Only
The system detects anomalies and notifies designated personnel. No analysis or recommendations are provided.
Configuration: Default for all new deployments.
Level 2: Alert and Recommend
The system detects issues, performs root cause analysis, and provides specific recommended actions. A human decides whether to implement them.
Configuration: Enabled via portal settings with appropriate user permissions.
Level 3: One-Time Approval
The system can implement its recommendation, but requires explicit one-time approval from an authorised user before proceeding.
Configuration: Requires site administrator approval and a documented risk assessment.
Level 4: Standing Approval
Pre-authorised action classes: the system can act autonomously within defined parameters, with full logging.
Configuration: Requires a formal change request, risk assessment, and sign-off. Cannot be self-enabled.
Standing approvals cannot be enabled through the standard interface. They require a documented change request, risk assessment, and sign-off from an authorised site representative. This friction is intentional.
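To illustrate how per-site, per-action configuration of these tiers might be represented, here is a minimal sketch. The names and structure are hypothetical, not our actual configuration schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The four automation tiers described above."""
    ALERT_ONLY = 1         # detect and notify; no recommendations
    ALERT_AND_RECOMMEND = 2  # adds root cause analysis and recommended actions
    ONE_TIME_APPROVAL = 3  # system acts only after explicit per-action approval
    STANDING_APPROVAL = 4  # pre-authorised action class; autonomous within bounds

@dataclass(frozen=True)
class ActionPolicy:
    """Automation level for one action type at one site."""
    site: str
    action_type: str
    level: AutomationLevel = AutomationLevel.ALERT_ONLY  # conservative default

# Each site and each action type is configured independently.
policies = [
    ActionPolicy("plant-a", "pump_duty_standby", AutomationLevel.STANDING_APPROVAL),
    ActionPolicy("plant-a", "dosing_rate", AutomationLevel.ALERT_AND_RECOMMEND),
    ActionPolicy("plant-b", "dosing_rate"),  # new deployment: alert-only default
]
```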
The Accountability Chain
Every action the system takes—at any level—is fully traceable. This matters for operational review, but it especially matters if you ever need to answer questions from regulators.
What Gets Logged
- The triggering condition and the sensor data behind it
- The AI's analysis and reasoning
- The recommendation made
- Who authorised the action, and under which approval
- The action executed and the measured outcome
This complete chain is exportable. If an EPA auditor asks "why did your system increase chemical dosing at 3am on March 15th?", you can provide the full sequence: what triggered it, what the AI's reasoning was, who approved it, and what the outcome was.
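To make that concrete, here is a sketch of what one exported record might contain. The field names and values are illustrative only, not the actual export schema:

```python
# Hypothetical shape of one exported audit record, using the 3am
# dosing example above. Field names are ours, purely illustrative.
audit_record = {
    "timestamp": "2025-03-15T03:02:41+10:00",
    "site": "plant-a",
    "trigger": "effluent turbidity exceeded alert threshold",
    "sensor_data": {"turbidity_ntu": 9.4, "threshold_ntu": 8.0},
    "ai_reasoning": "rising turbidity consistent with coagulant underdose",
    "recommendation": "increase coagulant dosing rate to 42 L/h",
    "authorisation": {
        "type": "standing_approval",
        "approved_by": "j.smith",          # the accountable human in the chain
        "change_request": "CR-2024-117",   # hypothetical reference
    },
    "action_taken": "dosing rate increased from 36 L/h to 42 L/h",
    "outcome": "turbidity returned below threshold within 40 minutes",
}
```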
Guardrails: What the AI Cannot Do
Regardless of configuration, there are hard limits the AI cannot exceed:
- Dosing limits – Maximum chemical dosing rates are capped. The AI cannot recommend or execute dosing beyond these limits, period.
- Safety interlocks – Critical safety functions remain with the PLC. The AI cannot override safety shutdowns.
- Permission escalation – The AI cannot grant itself additional permissions. All automation levels require human configuration.
- Scope boundaries – Actions are scoped to specific equipment and parameters. An approval to manage pump duty/standby does not extend to dosing control.
These guardrails exist because we recognise that AI models can be wrong, that sensor data can be faulty, and that edge cases exist that no training data anticipated. The guardrails ensure that even if the AI makes a poor judgment, the consequences are bounded.
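As a simplified sketch of the architecture (our own illustration, not the production code), the dosing caps and scope checks sit in an enforcement layer outside the AI model:

```python
# Hard caps configured by humans; the AI has no write access to these.
MAX_DOSING_LPH = {"coagulant_doser_1": 50.0}

def enforce_guardrails(equipment: str, requested_rate_lph: float,
                       approved_scope: frozenset[str]) -> float:
    """Validate an AI-proposed dosing change before it reaches the plant.

    These checks run outside the AI model: it can recommend anything,
    but nothing beyond the human-set envelope is ever executed.
    """
    if equipment not in approved_scope:
        # An approval for pump duty/standby does not extend to dosing.
        raise PermissionError(f"'{equipment}' is outside the approved scope")
    cap = MAX_DOSING_LPH[equipment]
    if requested_rate_lph > cap:
        raise ValueError(
            f"requested {requested_rate_lph} L/h exceeds hard cap of {cap} L/h")
    return requested_rate_lph
```

The design point is that the limits live in a layer the model cannot modify, so even a confidently wrong recommendation is rejected before it reaches the plant.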
Regulatory Considerations
We get asked regularly: "How does this stand up to EPA scrutiny?"
The honest answer: AI in industrial control is still an evolving regulatory space. What we can tell you is how we've designed the system to support compliance:
- Full audit trail – Every decision is logged and exportable. You can demonstrate exactly what happened, why, and who was responsible.
- Human accountability – Autonomous actions trace back to a human who authorised that action class. There's always a person in the chain.
- Conservative defaults – The system defaults to alert-only. Automation requires deliberate enablement.
- Documentation – We provide risk assessment templates and standard operating procedures that can be incorporated into your site's existing compliance framework.
Overnight/Unattended Operations
This is often the goal: reduce callouts, let the system handle routine issues overnight, and have operators focus on complex problems during business hours.
Our recommended approach:
- Start with Level 2 – Let the system alert and recommend for 60-90 days. Review its recommendations. Did they make sense? Were they appropriate?
- Identify routine scenarios – Which recommendations did you approve every time without hesitation? Those are candidates for standing approval.
- Document the decision – For each standing approval, document the scenario, the authorised response, the boundaries, and the expiry date (see the sketch after this list).
- Review regularly – Standing approvals should be reviewed quarterly. Circumstances change. What made sense six months ago might not be appropriate now.
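For example, a standing approval documented to that standard might look like the following sketch. The structure and names are ours, purely illustrative:

```python
from datetime import date

# Illustrative standing-approval record; the fields mirror the four items
# above: scenario, authorised response, boundaries, and expiry.
standing_approval = {
    "scenario": "duty pump trips on overload overnight",
    "authorised_response": "start the standby pump and notify the on-call operator",
    "boundaries": {
        "equipment": ["pump-01", "pump-02"],  # scope: these pumps only
        "max_actions_per_night": 1,
        "conditions": "wet well level within normal operating band",
    },
    "approved_by": "site representative, CR-2025-042",  # hypothetical reference
    "expires": date(2025, 9, 30),  # forces the quarterly review described above
}
```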
This approach lets you build confidence gradually. You're not handing over control on day one—you're observing, validating, and then selectively automating the scenarios where it makes sense.
Questions About Implementation?
Let’s Talk
Every site is different. We’re happy to walk through specific scenarios and discuss how the governance framework applies to your operations.