Petter Neby, Founder & CEO at Punkt., suggests that privacy-first devices, designed without always-listening assistants, could offer the UK a more secure digital future.
A friend once told me their phone must be eavesdropping. They had casually mentioned a trip they were on, and soon after saw adverts for holidays in their feed. Coincidence? Perhaps, but stories like this resonate because they highlight a real concern that our devices and platforms know more about us than we think.
The same is true when people sign up to platforms like TikTok or Instagram. With a single click of ‘accept’, users grant permission, often buried in the terms and conditions, for these apps to track their location, monitor activity across other apps, and harvest behavioural data. Without realising it, people hand over a wealth of personal information, which in turn fuels the targeted adverts that follow them around based on where they’ve been or what they’ve done.
At the same time, our smartphones are becoming AI-powered hubs for work, banking, and social life. Yet this new level of convenience brings with it an uncomfortable truth: the very assistants meant to help us (Siri, Google Bard, Alexa, etc.) are creating new vulnerabilities.
Smartphones listen by design. To respond instantly when we say “Hey Siri” or “OK Google,” their microphones are kept on permanent standby. In theory, nothing is recorded until the wake word is heard. In practice, things often go awry. Apple recently settled a lawsuit after allegations that Siri had recorded private conversations without being prompted. Google has faced similar allegations, and Bard has already leaked fragments of private conversations into search results. Even when unintentional, these incidents highlight how fragile the safeguards really are.
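To make the point concrete, here is a minimal, purely illustrative sketch of that standby loop. Everything in it is hypothetical (the wake phrase, the buffer size, the detector), and real assistants use on-device neural wake-word models rather than string matching; the sketch only shows why a rolling audio buffer is always open and how a single false positive captures speech that was never meant for the assistant.

```python
# Toy illustration of an "always-listening" loop. All names and values are
# hypothetical; this is not how any real assistant is implemented.
from collections import deque

BUFFER_SECONDS = 2          # audio kept in memory *before* any wake word fires
WAKE_PHRASE = "hey device"  # placeholder wake phrase

def detect_wake_word(audio_chunk: str) -> bool:
    """Stand-in for an on-device wake-word detector.

    A real detector scores raw audio; any false positive means the
    surrounding speech is captured and may leave the device.
    """
    return WAKE_PHRASE in audio_chunk.lower()

def listening_loop(audio_stream):
    # The microphone feed is consumed continuously, wake word or not.
    rolling_buffer = deque(maxlen=BUFFER_SECONDS)
    for chunk in audio_stream:
        rolling_buffer.append(chunk)   # private speech sits here briefly
        if detect_wake_word(chunk):
            # On a (possibly mistaken) activation, the buffered audio plus
            # whatever follows is handed off for processing.
            return list(rolling_buffer)
    return []

if __name__ == "__main__":
    # Simulated stream: a near-match accidentally triggers capture,
    # and the earlier confidential remark is swept up with it.
    stream = ["discussing the funding round", "hey devices, set a timer"]
    print(listening_loop(stream))
```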
For SMEs, these vulnerabilities pose significant risks. Imagine discussing a confidential funding round, client pitch, or HR issue near your phone, only to have fragments of that conversation captured or stored. Each AI-driven feature adds complexity, expanding what security experts call the “attack surface.” Hackers no longer need to compromise multiple apps if they can break into the assistant that touches everything. Researchers have shown that assistants can even be tricked by light beams aimed at microphones, a surreal but very real illustration of how new capabilities bring new risks. The convenience of always-listening devices has effectively created a permanent entry point into our digital lives.
It’s no surprise, then, that many business users are now asking how to switch this constant profiling off. Some would even pay for the assurance that their devices aren’t harvesting every interaction. This growing demand for a privacy-first approach reflects a clear shift in consumer priorities from convenience to control. Regulators are paying attention. Under GDPR, which still applies in the UK, voice recordings count as personal data and sometimes even biometric data. Collecting them without clear and informed consent is unlawful. Several years ago, European authorities forced Apple and Google to rein in how they handled voice recordings after whistleblowers revealed that contractors were listening to snippets of users’ private lives.
Today, regulation is going much further. The EU AI Act, the world’s first comprehensive attempt to regulate AI across the board, has already come into force. While its most demanding rules will be phased in over the next few years, the framework is now in place and beginning to reshape how AI is deployed in Europe. Even if the UK chooses its own route post-Brexit, these standards will set the tone globally. The Act requires providers to assess risks, guarantee transparency, and place limits on certain uses of AI. If a service interacts directly with humans, users must be told clearly that they are speaking to a machine. Manipulative or exploitative uses will be banned outright. Together with GDPR, this creates a powerful framework that will penalise companies that fail to put privacy at the centre of design. It is a positive sign that the days of rushing AI features into phones without considering their implications are over.
So, what does this mean for SMEs? A different approach is needed, one that places privacy and security above novelty. Practical steps include selecting smartphones whose assistants are activated manually rather than always listening, preferring devices that process data locally rather than in the cloud, and ensuring clear settings that allow employees to control their own data. All of these are achievable today.
The UK is well placed to take a lead here. Consumers and businesses alike are showing growing concern about digital privacy. Those businesses that integrate privacy as part of their brand will be able to differentiate themselves in a crowded market.
AI on our phones can be useful, even transformative, but SMEs should not accept a trade-off where convenience means sacrificing control. Businesses that recognise privacy as a competitive advantage will find themselves on the right side of both customer trust and regulatory scrutiny. Policymakers have a role too, setting clear rules and insisting that protections are built into devices rather than bolted on afterwards.
The assistants in our phones were designed to serve us. It is time we make sure they do exactly that, and nothing more.
By Petter Neby, Founder & CEO at Punkt.