AI for EHS Leaders (Part 2/3): How Do I Do This Safely and Legally?
Data Governance & Security for AI Use in EHS
Welcome back to our series on navigating AI in the EHS landscape. In Part 1, we explored how AI technologies like Machine Learning and Computer Vision are already transforming safety, compliance, and environmental monitoring—helping us shift from reactive to proactive and predictive approaches.
Now, we face a practical question on many leaders' minds: How can we implement this technology both safely and legally? As EHS professionals responsible for worker safety, environmental integrity, and organizational reputation, we need to take governance seriously from the start.
In Part 2, we're going to examine the unique characteristics of EHS data, explore the four pillars of data governance for EHS applied to AI, and provide some actionable steps you can implement in your organization to get this right.
Beyond Technical Implementation: The Real Challenge
Let's face it: the biggest AI implementation hurdles aren't actually technical; they're structural and organizational. Recent surveys show over 90% of IT leaders are prioritizing AI security, and for good reason: more than 77% experienced a data breach last year. For those of us in EHS, ensuring safe and compliant AI adoption isn't just checking a box; it's essential risk management.
Poor implementation doesn't just hurt your ROI; it can expose your organization to serious legal liability, erode trust with your team, damage your reputation, and wipe out any benefits the technology might offer. Getting governance right from day one isn't optional. And the good news? You can and should start with systems you already have in place before attempting anything with AI.
Before diving into AI implementation, we need to understand what we're working with and establish a structured governance approach. Let's start by looking at the EHS data landscape and the regulations that already apply to your operations.
The Landscape of EHS Data: Understanding the Information Ecosystem
To effectively govern AI in EHS, we need to first get a handle on the breadth and sensitivity of the data involved. EHS data isn't just one thing; it spans multiple categories with varying sensitivity levels. This data can fuel powerful AI insights, but it also comes with important security and privacy considerations.
Common data source types in EHS include:
Environmental Data: GHG emissions, waste generation, water usage, energy consumption, compliance reports.
Health Data: Employee medical records, industrial hygiene monitoring, illness reports, mental health assessments, PPE usage tracking.
Safety Data: Incident reports, equipment safety logs, audit findings, OSHA compliance metrics.
Operational Data: Risk assessments, training records, inspection reports, emergency plans, chemical inventories.
Not all this data carries the same level of sensitivity. While aggregated environmental data might pose lower privacy risks, employee health records are extremely sensitive and heavily regulated. Even seemingly harmless operational data can become sensitive if it's linked to individuals or reveals proprietary processes. Understanding these distinctions helps you apply the right safeguards where they're truly needed.
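One practical way to capture these distinctions is a simple, machine-readable inventory that tags each data source with a sensitivity tier. The sketch below is illustrative only; the tier names and example sources are assumptions, not a prescribed taxonomy, so substitute your own data map.

```python
# A minimal, illustrative inventory of EHS data sources by sensitivity tier.
# Tier names and example sources are assumptions; substitute your own taxonomy.
SENSITIVITY_TIERS = ("public", "internal", "confidential", "restricted")

DATA_SOURCES = {
    "ghg_emissions_reports": {"category": "environmental", "sensitivity": "internal"},
    "incident_reports": {"category": "safety", "sensitivity": "confidential"},
    "employee_medical_records": {"category": "health", "sensitivity": "restricted"},
    "chemical_inventories": {"category": "operational", "sensitivity": "internal"},
    "training_records": {"category": "operational", "sensitivity": "confidential"},
}

def sources_at_or_above(min_tier: str) -> list[str]:
    """Return data sources at or above a given sensitivity tier."""
    threshold = SENSITIVITY_TIERS.index(min_tier)
    return [name for name, meta in DATA_SOURCES.items()
            if SENSITIVITY_TIERS.index(meta["sensitivity"]) >= threshold]

print(sources_at_or_above("confidential"))
# ['incident_reports', 'employee_medical_records', 'training_records']
```

Even a table this simple makes it obvious where the stronger safeguards belong before any AI project touches the data.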
The Four Pillars of Safe & Legal AI Implementation in EHS
EHS leaders have spent decades building robust frameworks to protect people, property, and the environment. Now, as AI transforms our operations, we're facing new challenges in securing these powerful tools. Instead of starting from scratch, this framework helps bridge the gap by connecting AI governance concepts to safety principles you already know and use every day.
We'll walk through four essential pillars of EHS AI security, using familiar safety concepts as our guide. By linking AI governance to established EHS principles like hierarchy of controls, management of change, and incident investigation, we've tried to create an approach that's easier to understand and apply within your organization—no matter where you are in your AI journey.
Each pillar addresses a specific aspect of responsible AI governance, but remember that they work as an interconnected system: a weakness in one area can undermine the entire structure.
Pillar 1: Data Security – Protecting the Foundation
Safety professionals understand the value of multiple layers of protection—and we can apply this same thinking to EHS data security.
Just as you'd secure hazardous materials with multiple controls, your incident reports, exposure data, and safety observations require layered protection. This means implementing encryption, robust access controls, and secure infrastructure to safeguard your safety intelligence.
The hierarchy of controls principle applies to data access too. Your frontline safety observers need different permissions than your corporate EHS directors analyzing incident trends. This becomes especially critical when handling confidential injury reports or personal health information.
Safety requires constant vigilance, and so does your EHS data. Monitoring systems should track who's accessing what information, providing the same level of oversight you'd give to your most safety-critical operations.
Effective data security mirrors our approach to physical hazards—creating multiple protective barriers, assigning the right access to the right roles, and maintaining vigilant oversight of our most valuable safety information. This foundation ensures your EHS data remains as protected as the people it helps keep safe.
Pillar 2: Data Privacy – Respecting Individuals
EHS professionals know that behind every incident report and health metric is a real person whose rights must be protected.
Just as we implement engineering controls to protect workers from physical harm, we need robust privacy frameworks for data that could impact someone's employment, insurance, or reputation. This means understanding the complex regulatory landscape spanning GDPR, HIPAA, and industry-specific requirements that govern your safety data.
The incident investigation principle applies here—transparency and consent are non-negotiable. Workers must understand what data you're collecting, how it's used, and who can access it. Just as you'd communicate hazards clearly, you need to communicate data practices openly to build trust in your safety AI initiatives.
In safety, we apply only the control measures a hazard actually requires; your data collection should follow the same discipline. Before gathering information through wearables or sensors, ask: "Is this the minimum viable dataset needed to solve our specific safety challenge?" Often, anonymized or aggregated data will suffice, with far fewer privacy implications.
Just as we protect workers from physical hazards, we must safeguard their personal information with the same care. By starting with clear safety objectives, collecting only what's necessary, maintaining transparency, and respecting individuals' rights, we build trust in our AI systems—the essential foundation for successful adoption and sustainable implementation.
Pillar 3: Algorithmic Integrity – Ensuring Fairness
Safety professionals recognize that a flawed inspection process leads to flawed conclusions—the same holds true for AI systems analyzing your EHS data.
Just as you'd calibrate gas monitors and test safety equipment, your AI systems require rigorous validation. Historical safety data often contains hidden biases—perhaps certain departments reported incidents more consistently than others. Without proper scrutiny, your AI could misinterpret relative risk levels across your operation.
The management of change principle applies powerfully here. Safety-critical AI systems should never be "black boxes." Just as you'd demand to understand the engineering behind a new safety control, you should expect appropriate explainability from your AI systems, especially those making critical safety assessments.
Safety professionals follow a risk-based approach to hazard control, and the same thinking applies to AI implementation. Begin with lower-risk applications like waste management prediction before advancing to computer vision for PPE compliance, and consider applications that use sensitive biometric data only after establishing robust protocols.
In practice, this means establishing clear thresholds for mandatory human review of AI recommendations, creating standardized documentation for each system describing its limitations and performance metrics, following a progressive implementation pathway based on risk levels, and maintaining appropriate human oversight in all safety-critical decisions.
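To make the human-review threshold idea concrete, here is a minimal sketch of a gate that routes AI recommendations to a person whenever model confidence falls below the threshold for that application's risk tier. The tier names, threshold values, and field names are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

# Illustrative risk tiers and review thresholds; these values are assumptions.
REVIEW_THRESHOLDS = {"low": 0.60, "medium": 0.80, "high": 1.01}  # high risk: always review

@dataclass
class AIRecommendation:
    description: str
    confidence: float   # model-reported confidence, 0.0 to 1.0
    risk_tier: str      # "low", "medium", or "high"

def requires_human_review(rec: AIRecommendation) -> bool:
    """Flag a recommendation for mandatory human review based on risk tier and confidence."""
    return rec.confidence < REVIEW_THRESHOLDS[rec.risk_tier]

rec = AIRecommendation("Reorder respirator cartridges for Line 3", confidence=0.72, risk_tier="medium")
print(requires_human_review(rec))  # True: 0.72 is below the 0.80 threshold for medium-risk uses
```

The exact numbers matter less than the fact that they are written down, agreed on, and applied consistently.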
Pillar 4: Governance Framework – Bringing It All Together
EHS professionals understand that even the best safety controls fail without proper management systems—the same applies to AI governance.
Just as you'd never implement a confined space program without clear roles and procedures, your AI initiatives require a comprehensive framework that brings together security, privacy, and integrity considerations. This means establishing cross-functional collaboration with IT, Legal, and Operations teams to address multifaceted challenges.
The document control principle is essential here. Just as you maintain chemical inventories and JSAs, your AI systems need detailed records of their components, dependencies, and risk profiles. This "AI Bill of Materials" should track model provenance, dependencies, version control, and systematic risk assessments.
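As one way to picture an "AI Bill of Materials," the sketch below records provenance, version, dependencies, and risk status for a single system. The fields and example values are assumptions for illustration; the real record should live in whatever document control system you already use.

```python
from dataclasses import dataclass, field

@dataclass
class AIBillOfMaterials:
    """One record per AI system; the fields here are illustrative, not a standard schema."""
    system_name: str
    model_provenance: str              # who built or trained it, and on what data
    version: str
    dependencies: list = field(default_factory=list)
    risk_rating: str = "unassessed"
    last_risk_assessment: str = "never"

ppe_detector = AIBillOfMaterials(
    system_name="PPE compliance camera analytics",
    model_provenance="Vendor-trained vision model; our site footage is not used for retraining",
    version="2.4.1",
    dependencies=["camera firmware 1.8", "edge gateway OS"],
    risk_rating="high",
    last_risk_assessment="2025-01-15",
)
```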
Safety professionals know the value of competent personnel—this extends to AI systems too. Everyone from developers to end users needs appropriate training on capabilities, limitations, and responsible use of your safety AI tools.
A comprehensive governance approach serves as the management system for your AI initiatives—much like how your safety management system coordinates physical hazard controls. By bringing together cross-functional expertise, establishing clear documentation practices, implementing change management protocols, and ensuring proper training, you create an environment where AI enhances rather than complicates your EHS mission of protecting people, property, and the environment.
Practical Data Governance for AI in EHS: Simple Steps to Getting Started
Now that we've explored the key concepts and pillars of AI implementation in EHS, let's translate them into actionable steps. This practical guide will help you begin implementing effective AI governance in your EHS initiatives.
Starting with Data Security
To build a strong security foundation for your EHS AI systems:
Map Your Data Environment:
Create a simple inventory of your key EHS data sources
Identify which systems contain sensitive information (health records, incident details)
Document how data flows between systems
Implement Basic Security Controls:
Apply role-based access using your existing identity management systems (a short sketch follows these steps)
Ensure encryption for sensitive EHS data (particularly health information)
Establish monitoring for unusual access patterns
Apply the Zero Trust Model:
Require authentication for each access to sensitive EHS systems
Implement stronger verification for high-risk functions
Create separation between those who modify AI systems and those who approve changes
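The sketch below shows what role-based, per-request access checks might look like in outline. The roles, permissions, and function are assumptions for illustration; in practice these rules would live in your existing identity management system, and every check would be logged.

```python
# Illustrative role-based access check for EHS data; roles and permissions are assumptions.
ROLE_PERMISSIONS = {
    "frontline_observer":  {"submit_observation", "view_own_reports"},
    "site_ehs_manager":    {"submit_observation", "view_site_incidents", "view_trends"},
    "corporate_ehs":       {"view_trends", "view_site_incidents"},
    "ai_system_developer": {"modify_model_config"},
    "ai_change_approver":  {"approve_model_change"},  # kept separate from developers on purpose
}

def is_allowed(role: str, action: str) -> bool:
    """Check every request against the role's permissions (no standing, unchecked access)."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("site_ehs_manager", "view_site_incidents")
assert not is_allowed("frontline_observer", "view_site_incidents")
assert not is_allowed("ai_system_developer", "approve_model_change")  # separation of duties
```

Behind your identity provider, checks like these also generate the "who accessed what" trail described under Pillar 1.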
Establishing Privacy Protections
To respect individual privacy rights while enabling AI innovation:
Define Data Collection Boundaries:
Link each data element to a specific safety or environmental purpose
Apply the minimum viable dataset principle to each initiative (see the sketch after these steps)
Create Transparent Processes:
Develop clear communications about how employee data is used in AI systems
Implement straightforward methods for consent and data access requests
Document data retention periods and deletion protocols
Conduct Simple Risk Assessments:
Before implementing new AI applications, ask key privacy questions:
What personal data is involved?
Who will have access?
What are the potential risks to individuals?
How will these risks be mitigated?
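To illustrate the minimum viable dataset idea from the steps above, the sketch below strips direct identifiers from incident records and keeps only aggregate counts by site, month, and incident type, which is often all a trend model needs. The record fields and values are invented for illustration.

```python
from collections import Counter

# Illustrative raw incident records; field names and values are invented.
raw_incidents = [
    {"employee_id": "E1042", "name": "A. Rivera", "site": "Plant 2", "month": "2025-03", "type": "slip"},
    {"employee_id": "E0031", "name": "B. Chen",   "site": "Plant 2", "month": "2025-03", "type": "laceration"},
    {"employee_id": "E0977", "name": "C. Okafor", "site": "Plant 1", "month": "2025-04", "type": "slip"},
]

# Minimum viable dataset for a site-level trend model: drop direct identifiers,
# keep only aggregated counts.
aggregated = Counter((r["site"], r["month"], r["type"]) for r in raw_incidents)

for (site, month, incident_type), count in sorted(aggregated.items()):
    print(site, month, incident_type, count)
```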
Ensuring Algorithmic Integrity
To build trustworthy AI systems that enhance rather than undermine EHS functions:
Start with Low-Risk Applications:
Begin your AI journey with applications like waste management prediction or admin process optimization
Build confidence and governance experience before tackling more sensitive applications
Only progress to high-risk implementations (like biometric monitoring) after establishing robust controls
Maintain Human Oversight:
Define clear thresholds for when AI recommendations require human review
Ensure safety professionals understand both the capabilities and limitations of AI systems
Establish clear responsibility for decisions influenced by AI outputs
Document AI Systems:
Create standardized "model cards" for each AI implementation (a minimal example follows these steps) that describe:
Intended purpose and limitations
Data sources and potential biases
Validation methods and performance metrics
Decision thresholds and confidence levels
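A minimal model card does not need special tooling; a structured record kept alongside the system is enough to start. The example below is a sketch with assumed field names and values, not a formal template.

```python
# Sketch of a minimal model card as a plain dictionary; fields and values are illustrative.
model_card = {
    "system": "Near-miss report classifier",
    "intended_purpose": "Triage free-text near-miss reports into hazard categories for review",
    "limitations": [
        "Trained only on English-language reports",
        "Not validated for process-safety events",
    ],
    "data_sources": ["Historical near-miss reports", "Site hazard taxonomy"],
    "known_bias_risks": ["Sites with stronger reporting cultures are over-represented"],
    "validation": "Holdout set reviewed by EHS staff; metrics recorded in the validation report",
    "decision_threshold": "Flag for human review below 0.80 classifier confidence",
    "owner": "Corporate EHS, with IT support",
    "next_review_date": "2025-06-30",
}
```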
Building Your Governance Framework
To bring these elements together in a cohesive approach:
Establish Basic Governance Roles:
Designate an executive sponsor for AI governance
Identify key stakeholders from EHS, IT, Legal, and Operations
Assign responsibility for oversight of AI implementations
Develop Essential Policies:
Create a tiered approval process based on risk level (a simple sketch appears at the end of this section)
Establish documentation requirements for different types of AI systems
Define change management procedures and training
Implement Vendor Management:
Develop a simple checklist for evaluating AI vendors that covers:
Security certifications and practices
Data ownership and processing terms
Transparency about algorithms and bias mitigation
Support for your compliance obligations
Download our free guide “The EHS Leader's AI Vendor Playbook: 10 Due Diligence Questions To Ask” here.
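As a sketch of the tiered approval process mentioned above, the function below maps an AI use case's risk level to the sign-offs required before go-live. The tiers and approver roles are assumptions; map them to roles that already exist in your organization.

```python
# Illustrative tiered approvals by risk level; tiers and approver roles are assumptions.
APPROVALS_BY_RISK = {
    "low":    ["EHS manager"],
    "medium": ["EHS manager", "IT security"],
    "high":   ["EHS manager", "IT security", "Legal", "AI governance sponsor"],
}

def required_approvals(use_case: str, risk_level: str) -> list[str]:
    """Return the sign-offs needed before an AI use case goes live."""
    if risk_level not in APPROVALS_BY_RISK:
        raise ValueError(f"Unknown risk level: {risk_level}")
    return APPROVALS_BY_RISK[risk_level]

print(required_approvals("Computer vision for PPE compliance", "high"))
```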
Scaling for Your Organization
Implementation of AI governance should be proportional to both your organization's size and the risk level of your AI applications:
For Smaller Organizations: Focus on security basics and essential documentation. Consider using existing committees (like safety) for oversight and start with one low-risk AI application to build experience.
For Mid-sized Organizations: Form a cross-functional working group rather than a formal committee. Create documentation templates and implement tiered approvals based on risk. Consider external expertise for high-risk applications.
For Large Enterprises: Establish a formal AI governance committee with executive sponsorship. Develop comprehensive policies aligned with existing frameworks and consider dedicated roles for AI ethics and compliance.
These scaling suggestions provide options that you can adapt to your specific context and maturity level, recognizing that governance should grow alongside your AI implementation.
This framework gives you a starting point, but effective AI and technology governance is an evolving journey rather than a destination. As you implement these measures, keep an eye on emerging trends that will shape the future landscape of AI in EHS.
The Future of AI Governance in EHS
The regulatory landscape for AI in EHS continues to evolve rapidly. While frameworks like GDPR and HIPAA address data protection, AI-specific legislation like the EU AI Act is emerging globally, with many EHS applications likely to be classified as "high-risk." Organizations should implement adaptable governance structures that can evolve with these requirements just as they would with other regulatory obligations.
Moving forward, success will depend on three key factors:
Industry Collaboration: Participate in the development of sector-specific standards and best practices through industry consortia and professional organizations.
Worker Engagement: Involve employees in AI implementation, from identifying use cases to providing feedback, building essential trust in safety-critical environments.
Balanced Governance: Strike the right balance between enabling innovation and ensuring responsible use. The most successful organizations will view governance not as a constraint but as an enabler of sustainable innovation.
As the AI landscape continues to evolve, so will our approaches to governance. However, the basic principles outlined here should provide a starting point and a foundation that can adapt to these changes.
In Part 3 of our series, we'll explore the ROI of AI for EHS – addressing how to justify the investment and demonstrate tangible value from these initiatives.
Stay tuned, and let's navigate the future of EHS together.
- Dan and Arianna