Research and Innovation
GenAI and agentic AI are changing how technical professionals work.
We are exploring some of the key questions.
We seek understanding and solutions to the real problems created when artificial intelligence meets high-stakes technical work. Our research tackles the gap between what AI promises and how technical professionals actually need to use it.
This research informs our advisory, consulting and coaching.
Here's what we're working on:
Technical Judgement in an AI Era.
Early career technical professional competence development with AI.
Shadow AI use in critical infrastructure and associated risks.
Efficient and effective verification of GenAI Output.
Agentic AI application in high-hazard industrial environments.
We partner with organisations across regulation, industry and academia, delivering projects that translate ideas to results.
This isn't innovation for innovation's sake. It's driven by the purpose of improving technical practice and decisions.
Ready to help shape the future of technical work? We partner with organisations serious about bridging the gap between AI capability and human judgement.
The future of your profession is being written now. Help us write it well.
Our Latest Work:
The quiet AI shift making big changes to technical work: your opportunity or risk?
In June 2025 we asked: how are engineers and technical professionals really using AI?
The survey was anonymous, and we said we’d share the results openly.
A huge Thank You! to all those who supported, reposted and completed. And to the team at AISI for their input along the way.
Our summary pdf (recommended) is [here - coming soon]
↓↓↓ The summary and outputs for each question are below…except Q8, where a settings issue affected the results ↓↓↓
Executive Summary
Technical professionals aren’t experimenting; they’re integrating AI into daily work. 73% use AI tools multiple times per week.
But here's what you might not expect: 70% use ChatGPT for work tasks, and half combine multiple AI tools because approved systems aren't meeting their needs.
Trust Matters
Their use of AI is smart, not blind. They generally trust it for research (44%) and report drafting (38%), but only 6% trust it for complex technical problems. They know where AI helps and where it doesn't. The survey identified the ‘verification tax’: time spent double-checking AI output against trusted sources and personal expertise. This isn't a problem, it's exactly the behaviour needed, but it comes at a cost.
The Safety and Data Security Reality
Technical professionals are building their own AI workflows because approved tools aren't cutting it. They're being resourceful. But 65% worry about data confidentiality, and 41% don't trust AI's technical knowledge.
A Strategic Opportunity
Early-career engineers are building AI-native workflows from day one. While experienced engineers adapt existing processes, the next generation is writing an entirely new script.
Prompting the questions:
a) Are organisations ready for engineers who work fundamentally differently?
b) What is the best way to support smart, resourceful engineers making difficult decisions while keeping everyone safe?
So What?
The shift isn't from "no AI" to "AI"; it's from informal AI adoption to systematic AI competence.
Organisations that figure out how to support existing AI adoption while building systematic competence will leave everyone else behind.
The opportunity: Turn individual innovation into a competitive advantage. Help engineers get really good at AI while maintaining the technical rigour that matters in high-stakes work.
What Leaders Need to Do
1. Stop Resisting the Inevitable
Technical professionals are already using AI. The question isn't how to stop them; it's how to help them use it competently, manage the risks, and build technical judgement with AI.
The shift? From "don't use that" to "here's how to use it safely."
2. Get Specific About Technical Work
ChatGPT is great, but it doesn't understand technical domains. Technical professionals need AI tools built for domain-specific technical work, not just clever writing.
3. Make It a Team Sport
Right now, AI adoption is happening in isolation. One person figures something out and keeps it to themselves. What if everyone learned together?
4. Trust the Healthy Scepticism
Technical professionals don't blindly trust AI output, which is exactly what's needed. Build on that scepticism to create real competence.
Questions Technical Leaders Should Ask:
What happens to technical judgement and deep expertise when AI does the heavy lifting: are your people getting sharper or just faster?
What would it look like if you treated your technical professionals' shadow AI use as R&D intelligence rather than a compliance failure?
How do you build verification protocols that are fast enough to keep up with AI adoption but rigorous enough to maintain safety?
If early-career engineers are building AI-native workflows from day one, is your organisation and leadership prepared to enable a workforce that fundamentally works differently than you do?
Engineers Are Already Using AI (And Getting 10x Results)
Engineers and technical professionals are the quiet early adopters using AI in their daily work. Our research shows:
73% use AI tools multiple times per week.
64% believe their colleagues do the same.
This isn’t experimentation. It’s integration.
Yet, while organisations invest in approved tools, real-world usage looks different:
70% use ChatGPT for work (44% on the free version).
50% combine multiple AI tools to fit their own workflows.
When Do Engineers Turn to AI?
Exploring new technical concepts - 58%
Facing tight deadlines - 47%
When approved tools aren't available - 50%
The reasons for their choice?
Speed of results - 79%
Ease of access - 73%
Convenience - 76%
Enhanced capabilities beyond standard tools - 50%
What They’re Saying:
Productivity: “How much more productive[?]..x10 x20”
Fear of being left behind: “Do you think you will be behind and slower than your colleagues (and generally in a disadvantaged position) if you don't use or don’t want to use AI?”
Internal AI Tools vs. Actual Use
Corporate AI Tools Are Falling Behind
You may have felt this too: your organisation's AI tools just don't measure up to what you can access yourself.
As one respondent put it: “Corporate AI tools feel very limited / dated versus the market leading tools”
Despite 38% having access to company-provided AI tools, e.g. Microsoft Copilot, over 50% of respondents use multiple tools to meet their needs.
The gap between internal tools and the latest externally available AI tools, e.g. the newest large language models and agentic AI, is widening.
What do engineers and technical professionals need from an organisation's tools?
The ask for better integration with existing workflows (68%) points to a need to support technical professionals in developing the AI competence and confidence to deploy appropriate tools in their workflows, and to spot errors and risks when external AI is used.
Half say they need greater specialisation for technical tasks. Sound familiar? You're toggling between three different AI tools because none of them does everything you need. Just under half (47%) raised the need for more powerful capabilities.
Some highlighted the time taken to learn new tools (“The time to implement is currently long due to learning curves”) and a need for training on effective use (45%), indicating a need for support to rapidly and regularly upskill as AI tools develop.
The Trust Paradox
Frequent use does not equal full trust. Respondents show strong awareness of AI’s limitations. They rely on it for research and learning, but hesitate to use it for critical decisions and complex problems.
As one engineer put it: “I use a lot but don't trust”.
What's Really Happening with AI Trust?
Look at this pattern: Engineers trust AI most for industry research (44%) and report drafting (38%). But when it comes to solving complex technical problems? Only 6% trust it most of the time.
What does this tell us?
Most technical professionals using AI are being curious and taking initiative. They instinctively know where AI adds value and where it doesn't…yet.
So, why do 94% of engineers withhold full trust in AI for complex problems, while nearly half trust it for research?
Low trust drives the Verification Tax.
Even as AI becomes more capable, the need to check its output evolves. Let's be honest: we all double-check AI output, because while it's fast, it's not always right. And in technical domains, 'not always right' isn't good enough. So checking is happening, but it takes both time and deep domain knowledge.
Verification Methods:
• Cross-checking with trusted sources - 71%
• Applying personal expertise - 67%
• Comparing against previous similar work - 44%
Verification is currently a cost, and for some a barrier to use. This burden represents a competitive opportunity for the organisations and technical professionals who master efficient verification.
The Safety and Security Question
When Mistakes Matter
Many technical professionals work in environments where getting it wrong isn't just embarrassing, it's dangerous. A miscalculation or wrong assumption doesn't just miss a deadline, it could cause an outage or worse.
So when it comes to AI, the question isn't just "does this work?"
It’s “can I be confident enough in its accuracy and robustness to use it in decision making?”
Technical professionals are building their own AI workflows because the approved tools aren't cutting it. Smart? Absolutely. Risk-free? Not quite.
What's keeping engineers up at night?
• 65% worry about data confidentiality
• 41% don't trust AI's technical knowledge
• 38% find approved tools too limited
Note: Cyber security wasn’t part of the research, but it is a growing risk that also needs to be considered here.
The Challenge
Technical professionals are being resourceful with AI tools that weren't built for their world. But this is creating risk. How can organisations turn individual innovation into competitive advantage, enabling engineers to get really good at AI while maintaining technical rigour and managing risk?
The Next Generation Are AI Native
Only 5% of engineers in our research were early career, but here's why they matter most.
These engineers aren't learning AI as an add-on. They're building their careers with AI from day one.
While experienced engineers adapt existing workflows, early-career engineers are creating entirely new ways of working. They're not unlearning old habits, they're writing the script.
What does this mean for your organisation?
Consider the new graduate joining the team. They'll expect AI to be part of every technical task. They'll know how to spot AI mistakes because they've been doing it since university, but they'll need to build technical competence to spot technical errors.
Prompting the questions:
a) Are organisations ready for engineers who work fundamentally differently?
b) What is the best way to support smart, resourceful engineers making difficult decisions while keeping everyone safe?
Here's the opportunity: instead of trying to control how AI is used, what if early-career professionals were enabled to get really good at it? What if the organisation they chose became the place where they learn to use AI responsibly and effectively?
The technical professionals defining the next 30 years of technical work are already here. The question is whether you'll shape that future with them.
About This Research
We surveyed 34 engineers across different technical fields in 2025. Most were practising engineers (53%) or team leaders (42%), with a few early-career professionals (5%).
What's Next?
This research sparked more questions than it answered. We're curious about early-career engineers building AI competence, how shadow AI use plays out in critical industries, and what it takes to verify AI output effectively.
If these questions interest you too, let's collaborate. And if you'd like to sponsor next year's research, we'd love to hear from you.
About Emlyn Square
Most organisations teach engineers to use AI. We do something different.
We help technical professionals think with AI, building judgement.
When you work with us, you don't just get faster at technical work. You get better at technical judgment. You build the competence to spot when AI gets it wrong and the confidence to know when it gets it right.
How we help:
We deliver consulting and training that builds competence for better judgement.
We assess AI risks in your specific technical environment.
We run innovation projects that show what's actually possible.
We research what's coming next.
The result? Technical Professionals who deliver safer, better outcomes with GenAI and Agentic AI.
Results
The following results are direct extracts from the survey. What have we missed? What is your takeaway?
Note: Q8 is excluded, as a settings issue was found to have impacted its results.
You made it to the end! Thank you for your interest…please consider signing up for our newsletter…
Technical Gains
Upgrade your technical intelligence in three minutes per week.