Shadow AI puts travel agencies at risk

Unregulated AI use has become so widespread in the travel trade that agencies are being forced to implement official company policies to avoid data leaks, compliance breaches and reputational damage, industry experts say.

Franz von Wielligh, Head of Innovation and Member Support at XL Travel Head Office, said: “Agents often use AI for administrative and support tasks such as summarising itineraries, generating marketing content, analysing travel trends, responding to client queries, and managing repetitive back-office duties. This allows them to focus more on providing personalised service and strategic travel advice to clients.”

However, much of this AI usage goes unreported, and such unsanctioned use is prevalent in 90% of US-based companies, according to MIT’s ‘State of AI in Business 2025’ report.

Mladen Lukic, GM of Travel Counsellors South Africa, warned that any AI interaction outside a company’s secure environment carried the same risks.

“Shadow AI is a very broad term and the terminology can create the misunderstanding that there are ‘good’ AI tools and ‘shadow’ AI tools, but every AI capability that is not controlled by a company’s security protocols has exactly the same risks and the same potential consequences.”

The dangers

Lukic explained that, when an agent used any company or client data in prompts to Large Language Models (LLMs), the platform behind the model could retain that information, which might later surface in responses to other users.

“Agents may unknowingly share sensitive client or corporate information with external AI platforms that do not guarantee data privacy,” said Von Wielligh. “This could lead to data leaks, breaches of confidentiality, or even non-compliance with data protection regulations. In addition, unverified AI outputs can result in misinformation or operational errors that affect client trust and service quality.”

Steph Reinstein, Digital Marketing and Innovation Director at Big Ambitions, told Travel News that agents could be held liable for these data leaks or for misinformation.

“One agency told me they only discovered their agents were using AI when a client complained about getting advice that seemed 'copied from somewhere'. That's not how you want to find out about shadow AI in your business,” said Reinstein.

"Implementing official AI adoption gives you audit trails, data protection, and quality control. Shadow AI use just gives you blind spots and sleepless nights.

Adoption is essential

“Most travel advisers are already using AI,” said Reinstein. “The question isn't whether to allow it, but whether you want control over how it's used.”

He explained that, when agencies officially adopted AI, they could use enterprise-grade tools that protected client data. These tools also allowed agencies to switch off data storage, control who saw what, and keep proper records of what had been processed.

Von Wielligh explained that embracing AI proactively allowed agencies to set clear frameworks for responsible and secure use.

“By developing official AI policies, agencies can ensure that staff benefit from the efficiency and productivity gains AI offers while maintaining compliance with data protection and ethical standards.

“Formal adoption also allows agencies to train staff effectively, monitor usage, and integrate AI tools safely into existing workflows rather than risk fragmented or unauthorised use.”

How to formalise policy

However, Lasse Vinther, MD of Automation Architects, an AI consultancy, explained that this was not always straightforward.

Besides providing bespoke agentic AI software, Vinther said, Automation Architects educated companies on how to establish roadmaps towards official adoption and AI policy development.

“We are encouraging companies not to sanction people who are using tools that they do not support,” said Vinther. “Right now, everyone is a little bit scared. They are not sure if they are allowed to use AI but they just want to get their jobs done.”

He said it was essential for companies to be open-minded, to find out what tools their employees were using and why, and to create an incentive structure that improved transparency.

“For example, companies can allow for a grace period for people to report how they are actually using AI without any sort of sanctions, so the company can really see what the true adoption rate is.

“These are some of the conversations that can assist the development of a roadmap to incorporate the best AI tactics, already used by some employees in the organisation, into an official policy.”

Vinther said AI adoption was inevitable, but that having these conversations and developing these policies reduced the risk of data breaches or non-compliance.