Leaders told to establish frameworks, safeguards to support responsible AI use

Oversight of artificial intelligence use in the workplace remains "significantly lacking" amid employees' widespread use of the technology, according to a new poll from the Information Systems Audit and Control Association (ISACA).
The poll, which surveyed 3,029 digital trust professionals worldwide, found that only 28% of organisations have implemented a comprehensive AI policy.
While this is an improvement on last year's 15%, the report found it still falls short, as 81% of respondents believe employees in their organisation use AI. Reported uses include:
- Creating written content (52%)
- Increasing productivity (51%)
- Automating repetitive tasks (40%)
- Analysing large amounts of data (38%)
- Providing customer service (33%)
"AI is already embedded in daily workflows, but ISACA's poll confirms governance, policy, and risk oversight are significantly lacking," said Jamie Norton, Board Director, ISACA, in a statement. "Leaders must act now to establish the frameworks, safeguards, and training needed to support responsible AI use."
Lack of training
Policy isn't the only area where employers are falling short: the report found that only 22% of organisations provide AI training to all staff, while 32% said no training is provided to any employees.
Jason Lau, board director at ISACA, said organisations need to foster a "culture of continuous learning" around AI to ensure that employees are equipped with the expertise to use the technology responsibly and effectively.
Lau further warned that threat actors are already utilising AI to exploit vulnerable organisations.
In fact, previous research from Gartner warned of AI-generated deepfake job applicants that are being used by threat actors to infiltrate organisations.
Two in three respondents (66%) to ISACA's poll said they expect deepfake cyberthreats to become more sophisticated and widespread in the next 12 months.
However, only 21% said their organisations are actively investing in tools to detect and mitigate deepfake threats.
"It is just as important for organisations to make a deliberate shift to integrate AI into their security strategies — threat actors already are doing so, and failing to keep pace will expose organisations to escalating risks," Lau warned.