New risk scenarios in the age of AI: When machines learn to deceive

In early 2024, the British engineering giant Arup, known globally for its work on the Sydney Opera House, fell victim to a meticulously orchestrated deepfake attack. A finance employee in the firm's Hong Kong office joined a video call in which a senior company executive appeared to take part. The faked video and audio were so convincing that the employee transferred the equivalent of 25 million USD to fraudsters. The incident highlights a growing challenge: even highly sophisticated firms struggle to protect themselves from AI-powered deception.

According to the ISACA report "Taking the Pulse of AI – European Edition 2025", more than half (56%) of organizations already use AI to increase productivity and generate written content. Another 42% automate repetitive tasks, while 39% use AI for large-scale data analysis. Yet only 31% have a formal AI policy in place, and just 28% offer comprehensive AI training to all employees.

Alarmingly, 63% of organizations allow the use of generative AI, but many fail to supervise its use effectively. Employees are frequently using these tools in uncontrolled or even unauthorized ways. This lack of oversight creates dangerous blind spots, opening the door to data leaks, compliance failures – and sophisticated fraud scenarios like the one at Arup.

Risks Are Known – But Not Prioritized

ISACA's data clearly shows that organizations are aware of AI risks but often deprioritize them:

  • 85% fear disinformation and deepfakes,
  • 68% worry about data privacy violations,
  • 62% are concerned about social engineering,
  • 58% fear loss of intellectual property.

Yet only 43% of respondents consider AI risk a current priority. More alarmingly, just 26% feel confident in their organization's ability to detect AI-generated disinformation. This highlights a significant vulnerability in corporate risk readiness.

Deepfake Threat Rising – But Tools Still Lacking

71% of professionals expect deepfake-based threats to grow in the next 12 months. However, only 18% of organizations are actively investing in detection technologies. This reluctance is dangerous, especially given real-world incidents like the Arup case and others across the financial and service sectors, where impersonation fraud is no longer a hypothetical risk but a proven tactic.

Workforce Unprepared for the AI Shift

71% of digital trust professionals say AI knowledge is "very" or "extremely" important to their field. Yet 61% rate their own AI expertise at only beginner to intermediate levels. One-third of respondents believe they will need additional AI training within the next six months just to maintain their current job. As AI usage spreads, this skills gap is becoming a major operational and security liability.

Conclusion: Strengthening Governance, Technology, and Training is Critical

The Arup case sends a clear signal: Artificial intelligence is not just a productivity enabler – it is a serious risk factor. Companies must act on several fronts:

  1. Establish AI governance structures: 69% still lack a defined framework.
  2. Invest in deepfake detection tools: Only 18% have done so – far too few.
  3. Provide training across all departments: AI literacy cannot remain a niche skill.
  4. Prioritize ethics and compliance: AI systems must be embedded in risk audits, contracts, and decision-making processes.

Only by embedding AI strategy within the broader frameworks of governance and risk management can businesses reap its benefits without falling victim to its darker side. Because in an AI-driven world, the most dangerous threat is not the algorithm – but the human who believes it.
