Research and Innovation

AI is changing how technical knowledge work gets done.

We are exploring some of the key questions.

Our work builds understanding and solutions to the real problems created when artificial intelligence meets high-stakes technical work. Our research tackles the gap between what AI promises and how technical professionals actually need to use it. This research informs our consulting and training services.

Here's what we're working on:

  • How will we develop and maintain good technical judgement in an AI Era?

  • How will early career technical professionals develop competence with AI? .

  • What is the level of shadow AI use in critical infrastructure? What are the associated risks and opportunities?

  • How do we conduct efficient and effective verification of GenAI output.

  • How can agentic AI application be safely applied in critical physical infrastructure and process industry environments?

We partner with organisations across regulation, industry and academia, delivering projects that translate ideas to results.

This isn't innovation for innovation's sake. It's driven by the purpose of improving technical practice and decisions.

Ready to help shape the future of technical work?

We partner with organisations serious about bridging the gap between AI capability and human judgement.

Technical Report:

SAFEHAZ: Safe Use of Artificial Intelligence in Process Safety Applications

Emlyn Square partnered with Discovering Safety (a programme of work delivered by the Health and Safety Executive) to deliver a research project funded by AI Security Institute (AISI) - ‘SAFEHAZ’.

The objectives:

To explore how artificial intelligence (AI) technologies, particularly Generative AI (GenAI) and Agentic AI, interact with process safety and systemic safety in major accident hazard environments.

Particularly:

  • Identify potential hazards, controls and mitigations associated with AI use across the interaction between technical, safety management and AI systems.

  • Assess whether AI could introduce or amplify systemic risks - those that propagate beyond individual systems to wider industry or societal levels.

  • Explore opportunities to potentially use AI safely and constructively in managing major accident hazards assets and identify further research to understand these in more detail.

  • Engage industry and regulatory stakeholders to develop shared understanding and practical approaches to AI safety.

The key findings :

  1. Shadow AI – the use of AI tools by staff or contractors outside formally approved organisational governance, IT, and assurance arrangements introduces unmanaged risks including poor data provenance, non-compliant outputs and loss of oversight.

  2. Human competence and de-skilling – overreliance on AI and reduction in human oversight could erode critical thinking, engineering judgement and early-career skill development.

  3. Decision-making and explainability – AI systems may provide highly plausible yet incorrect outputs (“silent failure”). Lack of explainability challenges auditability and regulatory compliance.

  4. Knowledge and standards management – AI models may use outdated or inaccessible standards and incomplete data, increasing the likelihood of safety-critical errors.

  5. Systemic risks – workshop analysis identified cascading risks across categories of oversight and control, competence, job security, and social impacts. Agentic systems may amplify errors through feedback loops, reduce diversity of thought, and concentrate accountability in fewer individuals.

  6. Security vulnerabilities – data leakage, prompt injection (modifying inputs to AI systems to make the system behave incorrectly) and potential malicious targeting of AI systems in critical infrastructure were recognised as significant cross-cutting risks.

  7. Hazard evaluation techniques – the adaptation of conventional methods shows promise for AI-related safety assessment; however, methods designed for sociotechnical systems with high interactive complexity may prove better suited and should be evaluated alongside HAZID. This may include testing techniques such as STPA (System-Theoretic Process Analysis) and the SOTEC framework (Structural, Organisational, Technological, Epistemic and Cultural).

  8. Opportunities as well as risks – safety, operations and engineering specialists correctly focus on the potential threats of AI use. However, it is important that opportunities to improve safety via use of AI are not discounted. Such use cases include situations where targeted AI use could augment conventional human-based safety actions by undertaking tasks which humans cannot feasibly or efficiently complete. Such opportunities would benefit from further research and evaluation of case studies.

Previous Work

Thank you for your interest…please consider signing up for our news letter…

Technical Gains

Upgrade your technical knowledge intelligence in three minutes per week.