Tighter controls over AI systems needed to stop data exposure

Organisations need to control which AI tools are used inside company networks

Organisations worldwide are being urged to introduce guardrails on artificial intelligence use at work amid the rising risk of shadow AI.

Shadow AI refers to the use of artificial intelligence tools for work without the approval of the company's IT department.

A TELUS Digital survey conducted earlier this year found that 68% of enterprise employees who use generative AI at work access publicly available assistants, such as ChatGPT, Microsoft Copilot, or Google Gemini, through their personal accounts.

More than half of them (57%) also admitted to entering sensitive information into these GenAI tools.

Menlo Security, a browser security firm, warned that the growth of shadow AI in the workplace puts organisations at risk of data loss and data leakage.

"While data loss is a legitimate concern in the enterprise, data leakage, in which sensitive information is inadvertently exposed, can be a bigger issue, particularly when it comes to GenAI," its report read.

"Users may well not intend to transfer sensitive information in their attempt to summarise or reword content, but it happens."

The warning comes as web traffic to GenAI sites grew by 50% to 10.53 billion visits in January 2025, with 80% of that access occurring via browsers.

Introducing AI guardrails

Devin Ertel, Chief Information Security Officer at Menlo Security, underscored the need to introduce clear governance on AI use at work.

"Governance is about providing employees with safe, secure, and responsible ways to use GenAI, and ensuring that sensitive corporate data isn't inadvertently exposed or lost," Ertel said in a statement.

"Our report highlights the need for organisations to move past fear and start enabling AI use responsibly, with the right guardrails in place."

But simply informing users of corporate policy on AI will not provide protection, the report added.

"In order to eliminate shadow AI, enterprises must select sanctioned AI systems or tools that they trust—and mandate their sole use," it said.

The report noted that while organisations can control which AI tools are used on corporate systems, they cannot control the tools employees access on their own devices at work.

"If organisations cannot control which AI tools are used outside the network, they will have to control what is allowed inside. Malware must be stopped at the door," it said.