Report reveals most firms deploying AI have experienced problematic incidents

Most organisations that deploy artificial intelligence have experienced problematic incidents related to the technology, and many affected businesses say the damage was severe.
This is according to a new report from Infosys, which polled more than 1,500 business executives and interviewed over 40 senior decision-makers in Australia, France, Germany, the UK, the US, and New Zealand.
It found that 95% of C-suite and director-level executives had experienced at least one type of problematic incident arising from their use of AI.
"In fact, on average, executives claimed to have experienced around 2.5 different types of AI incidents," the report read.
The most commonly reported incidents were privacy violations and systemic failures, each cited by 33% of respondents.
Nearly a third also said they had experienced inaccurate or harmful predictions (32%), ethical violations (32%), and a lack of explainability (30%).
The impact of these incidents varied, with 77% reporting financial losses, such as lost revenue and increased costs.
More than half of respondents (53%) reported that the damage was reputational, while nearly half (46%) indicated it was legal, resulting in fines and settlements.
According to the report, 72% of respondents said the damage was at least substantial, including 39% who described it as "extremely severe."
The report, however, underscored that most AI initiatives in organisations are still in the early stages of development.
"While respondents commonly rate the damage from problematic incidents as serious, the scope of these deployments is often quite limited — and in many cases so is the actual impact," the report read.
Responsible AI adoption remains low
Despite the widespread damage reported from problematic AI incidents, the report found that only 2% of organisations have adequate Responsible AI (RAI) controls in place to protect themselves from risks.
This is based on Infosys' own set of RAI best practices, called the RAISE BAR, which covers trust, risk mitigation, data and AI governance, and sustainability.
The report found that only 25 of the more than 1,500 executives surveyed met the RAISE BAR.
Even so, 78% of respondents acknowledged that RAI can help drive revenue growth, while 83% said future AI regulations would boost the number of AI initiatives.
"Our research clearly shows that while many are recognising the importance of Responsible AI, there's a substantial gap in practical implementation," said Jeff Kavanaugh, Head of Infosys Knowledge Institute, in a statement.
"Companies that prioritise robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses, but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era."