Photo by Growtika on Unsplash

For UK sales leaders, artificial intelligence has arrived with a promise that is hard to ignore. More calls. Better targeting. Faster pipelines. But as AI accelerates outbound sales activity, it is also quietly magnifying compliance risk. According to Gerry Hill, Vice President of Customer Strategy at TitanX, a leading phone sales data intelligence platform, many UK organisations are not running into trouble because of the technology itself, but because they are applying old compliance habits to a radically new operating model.

AI does not just make sales teams more efficient. It changes the scale, the speed and the visibility of outbound activity. In the UK, that matters. Regulators care deeply about how personal data is used and how people are contacted. When volume increases without a corresponding redesign of compliance systems, mistakes stop being isolated and start looking systemic.

The most common failures Hill sees are not subtle. They are basic, repeated and increasingly indefensible.

One of the biggest is misunderstanding the Telephone Preference Service (TPS) and its corporate counterpart, the CTPS. Many teams still believe that business to business outreach is exempt. It is not. Sole traders and certain partnerships are fully covered by the TPS, and corporate subscribers can register with the CTPS. Companies are required to screen calling lists against the registers no more than 28 days before making a call. Some organisations do not screen at all. Others do it sporadically or rely on outdated CSV uploads. When AI multiplies daily dial volumes, those gaps quickly turn into dozens of violations rather than one or two.

Another frequent error is failing to disclose the role of AI. Under the Privacy and Electronic Communications Regulations (PECR), fully automated calls require explicit prior consent. Hill says teams often blur the line between live calls supported by AI and calls that are effectively automated. The distinction is critical. A human-led call where AI assists with notes or prompts sits in a very different legal category from an AI-driven call where no human meaningfully participates. Treating those as interchangeable is a fast route to enforcement action.

Volume itself is the third and most dangerous problem. AI can take a team from 50 dials a day to 500 almost overnight. Existing compliance controls are rarely designed for that scale. Abandoned call rates creep above Ofcom's three percent threshold. TPS violations increase. Complaint patterns become visible to regulators. The technology has not failed. The system around it has.
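
To make the arithmetic concrete, here is a minimal sketch in Python of the per-campaign check a dialler might run over a 24-hour window. The function and the example figures are illustrative assumptions; only the three percent ratio comes from Ofcom's rule.

```python
def abandoned_call_rate(abandoned: int, live_calls: int) -> float:
    """Abandoned calls as a fraction of live calls in a 24-hour window."""
    return abandoned / live_calls if live_calls else 0.0

OFCOM_THRESHOLD = 0.03  # three percent of live calls, per campaign, per 24 hours

# Illustrative figures: 340 live connects, 14 calls dropped by the dialler
rate = abandoned_call_rate(abandoned=14, live_calls=340)
if rate > OFCOM_THRESHOLD:
    print(f"Abandoned rate {rate:.1%} breaches the 3% threshold: throttle the dialler")
```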

Hill stresses that UK GDPR and PECR are often misunderstood as competing frameworks when in reality they work together. GDPR governs how data is collected, processed and protected. It defines lawful bases such as legitimate interest, and sets out transparency obligations and individual rights. PECR governs whether and how you are allowed to make the call in the first place. Following GDPR does not exempt an organisation from PECR. A sales team can be fully compliant with GDPR and still break the law by calling a number that should not be called.

This misunderstanding fuels confusion around legitimate interest. In a business to business context, legitimate interest can often support live calls to corporate subscribers, particularly when outreach is relevant and proportionate. But legitimate interest does not override PECR restrictions. If a number is on TPS or CTPS and there is no prior relationship, consent is required. Automated calls always require explicit consent regardless of the lawful basis used under GDPR. Regulated sectors such as financial services face even stricter expectations.

So what does compliant AI-assisted calling actually look like in practice?

Hill argues that it starts with architecture, not scripts. TPS and CTPS screening must be automated and real-time, ideally via API integration rather than manual uploads. Internal 'do not call' lists must be enforced alongside external registries. Screening should happen continuously, not monthly.
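
As a sketch of what that architecture implies, the gate below checks each number against the internal suppression list and then against TPS/CTPS before a dial is queued. The tps_client object and its is_registered call are hypothetical stand-ins for whichever screening API an organisation uses; the 28-day validity window is the one noted above.

```python
from datetime import datetime, timedelta

SCREEN_VALIDITY = timedelta(days=28)  # a TPS screening result goes stale after 28 days

def may_dial(number: str, tps_client, internal_dnc: set[str],
             last_cleared: dict[str, datetime]) -> bool:
    """Gate every outbound dial through layered suppression checks."""
    # Layer 1: the internal 'do not call' list always wins
    if number in internal_dnc:
        return False
    # Layer 2: re-screen against TPS/CTPS when the cached clearance is stale
    cleared_at = last_cleared.get(number)
    if cleared_at is None or datetime.utcnow() - cleared_at > SCREEN_VALIDITY:
        if tps_client.is_registered(number):  # hypothetical real-time API call
            return False
        last_cleared[number] = datetime.utcnow()
    return True
```

Because the check sits in the dial path itself, a number added to either register today is blocked today, rather than surviving until the next monthly batch upload.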

Human involvement is non-negotiable. AI should suggest and inform, not decide and execute. A real person must retain control over who is called and what is said. This keeps calls firmly in the live category and reduces regulatory exposure.

Transparency also matters more than many teams realise. If AI is being used in ways a prospect would not reasonably expect, such as transcription, sentiment analysis or behavioural scoring, that should be disclosed. Hill recommends aligning AI disclosure with call recording notices that prospects already understand. A simple explanation builds trust and meets GDPR fairness requirements.

At scale, a Data Protection Impact Assessment is no longer optional. If AI is used for profiling or decision-making across large datasets, organisations must document risks, including bias, and show how those risks are mitigated through human review and system controls.

Monitoring behaviour is the final pillar. AI should not just drive productivity. It should detect risk. High opt-out rates, repeated call attempts, spikes in complaints and unusually short calls are all signals that something is wrong. Left unchecked, AI will amplify bad behaviour faster than managers can intervene.
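
A minimal sketch of that kind of behavioural monitoring follows. Every threshold here is an illustrative assumption, not a regulatory figure; the point is that the signals Hill lists can be computed and flagged daily.

```python
from dataclasses import dataclass

@dataclass
class DailyRepStats:
    dials: int
    opt_outs: int
    complaints: int
    repeat_attempts: int  # calls to numbers already dialled this week
    short_calls: int      # connects lasting only a few seconds

def risk_flags(s: DailyRepStats) -> list[str]:
    """Flag behavioural signals worth a manager's attention.
    All thresholds are illustrative assumptions, not regulatory limits."""
    flags = []
    if s.dials and s.opt_outs / s.dials > 0.05:
        flags.append("high opt-out rate")
    if s.complaints > 0:
        flags.append("complaint received")
    if s.dials and s.repeat_attempts / s.dials > 0.20:
        flags.append("excessive repeat attempts")
    if s.dials and s.short_calls / s.dials > 0.30:
        flags.append("unusually short calls")
    return flags
```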

TPS compliance becomes particularly challenging as volume increases. Hill describes it as a maths problem. At 100 calls a day, a small TPS hit rate is manageable: a two percent rate means two unlawful calls. At 1,000 calls a day, the same percentage means twenty unlawful calls a day, hundreds a month, and an investigation. The only viable response is layered prevention. External registries, internal suppression lists and behavioural rules must work together. Velocity limits matter. So does documentation. When the ICO investigates, it looks for evidence of systems and controls, not good intentions.

There is also a human side to this shift. AI can push the wrong incentives in the wrong direction. A poorly performing rep making 50 aggressive calls generates a few complaints. The same rep empowered by AI making 500 calls generates regulatory attention. Hill advises hard limits on call attempts, enforced rest periods between calls and a rethink of incentives. Rewarding volume over quality is no longer just ineffective. It is dangerous.
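
Those limits are simple to enforce in software. The sketch below applies a hard per-number daily cap and a minimum rest period between attempts; both numbers are illustrative assumptions rather than regulatory requirements.

```python
from datetime import datetime, timedelta
from collections import defaultdict

MAX_ATTEMPTS_PER_DAY = 2          # illustrative hard cap per number
REST_PERIOD = timedelta(hours=4)  # illustrative minimum gap between attempts

attempts: dict[str, list[datetime]] = defaultdict(list)

def allow_attempt(number: str, now: datetime) -> bool:
    """Apply per-number velocity limits before the dialler places a call."""
    recent = [t for t in attempts[number] if now - t < timedelta(days=1)]
    if len(recent) >= MAX_ATTEMPTS_PER_DAY:
        return False
    if recent and now - max(recent) < REST_PERIOD:
        return False
    attempts[number] = recent + [now]
    return True
```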

Bias is another emerging risk. AI systems can unintentionally skew who gets contacted and how they are treated. UK GDPR requires organisations to address this head-on. That means regular fairness audits, explainable scoring models and training data that avoids demographic inference. Decisions should be based on behaviour and engagement, not assumptions about age, gender or affluence.

Looking ahead, Hill expects UK regulators to set the tone globally. The ICO's forthcoming AI Code of Practice is likely to carry statutory weight. Enforcement activity is already increasing, particularly around automated calling. Models built on parallel dialling and abandonment are unlikely to survive sustained scrutiny.

A clear divide is forming in the market. One path leans into automation, volume and minimal disclosure. The other focuses on precision, human-led conversations supported by AI intelligence. Buyers are already signalling which they prefer.

For UK companies, the message is simple. AI does not remove responsibility. It increases it. Those who design compliance into their systems from the start will not just avoid fines. They will build trust in a market that is growing increasingly sceptical of how technology is used. As Hill puts it, winning in this environment is not about having more conversations. It is about having better ones.