AI May Bring Unprecedented Employee Surveillance
You can buy a mouse jiggler on Amazon for about fifteen pounds. It’s a small device that moves your cursor at random intervals, defeating the idle-detection software that many employers use to monitor remote workers. It exists because of a gap: your employer can technically see everything you do on your computer, but nobody actually reviews most of it. The data exists, but analysing it at scale is prohibitively expensive. So employees learned that appearing active is enough, even if they’re not actually working.
That gap is about to close, and what comes next has no historical precedent.
The cost of comprehension
Workplace surveillance is not new. Employers have been able to monitor keystrokes, emails, screen activity, and browser history for decades. What they haven’t been able to do is understand what all that data means at scale. A manager could theoretically read every Slack message their team sends, but nobody does, because it would take hours every day. The mouse jiggler exploits this: it mimics activity because activity is all the software can measure, while the substance of what you’re doing remains illegible to anyone unwilling to invest serious time reviewing it.
Large language models change the economics entirely. An LLM can read every Slack message, every email, every document you produce, and summarise it in seconds. More than that, it can score it: was this email professional? Did this sales call hit the right talking points? Did this code review demonstrate sufficient technical depth? Was this meeting contribution substantive or was the employee just nodding along? The technology to answer these questions across every employee, every day, now exists and costs a few pennies per analysis. When the marginal cost of analysing an hour of video calls drops from several pounds of human attention to a fraction of a penny of compute, the calculus changes completely.
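To make that calculus concrete, here is a back-of-envelope version of the comparison in Python. Every figure is an illustrative assumption (a rough speaking rate, a budget-tier token price, a nominal reviewer wage), not a quote from any vendor or wage survey.

```python
# Rough cost comparison: one hour of recorded calls, reviewed by a human
# versus transcribed and scored by an LLM. All constants are assumptions.

WORDS_PER_HOUR_OF_SPEECH = 9_000      # ~150 words per minute of speech
TOKENS_PER_WORD = 1.3                 # rough tokeniser average for English
LLM_PRICE_PER_MILLION_TOKENS = 0.15   # assumed budget-tier model price, GBP
HUMAN_REVIEW_WAGE_PER_HOUR = 20.0     # assumed reviewer cost, GBP

tokens = WORDS_PER_HOUR_OF_SPEECH * TOKENS_PER_WORD
llm_cost = tokens / 1_000_000 * LLM_PRICE_PER_MILLION_TOKENS
human_cost = HUMAN_REVIEW_WAGE_PER_HOUR  # a human needs the full hour

print(f"LLM review of one hour of calls: £{llm_cost:.4f}")
print(f"Human review of the same hour:   £{human_cost:.2f}")
print(f"Ratio: roughly {human_cost / llm_cost:,.0f}:1")
```

Move any of these assumptions by an order of magnitude in either direction and the conclusion survives: the gap between human and machine review is roughly four orders of magnitude wide.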
This is the unprecedented part. Previous surveillance technologies could collect data but not interpret it cheaply. Recording phone calls was easy; paying humans to listen to them was expensive. Logging emails was trivial; reading them at scale required an army of compliance officers. Workers retained meaningful privacy not because of legal protections or employer benevolence, but because comprehensive surveillance didn’t make economic sense. AI removes that constraint entirely. Surveillance that was theoretically possible but practically infeasible becomes practically trivial.
A day in the surveilled office
Consider what this looks like in practice. You join your morning standup and an LLM transcribes the call, analyses your tone for enthusiasm, notes that you spoke for only ninety seconds compared to your colleagues’ average of two minutes, and flags that you seemed hesitant when discussing your progress. Your commits are analysed not just for quantity but for complexity; the system notes that your code this week looks structurally similar to code you wrote last month, suggesting you might be coasting. Your Slack messages are assessed for sentiment, and your response time to your manager’s messages is logged against team averages.
In a client call, the AI scores your performance: eye contact 73% (below the recommended 80%), twelve filler words (above target), explanation of the product roadmap rated “adequate” rather than “compelling.” By the end of the day, a dashboard displays your productivity score, collaboration score, and trajectory compared to last week. Your manager receives a summary recommending a “coaching conversation.”
None of this is science fiction. Every component exists today. The only question is assembly.
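To illustrate how little glue that assembly requires, here is a minimal sketch of the loop: the day’s artifacts go through a rubric-scoring call and roll up into a dashboard row. Everything in it is hypothetical; `score_with_llm` is a stand-in for whichever model endpoint an employer might wire in, with a canned return value so the sketch runs as-is.

```python
# A minimal sketch of the "assembly": per-artifact rubric scoring
# rolled up into a daily per-employee dashboard. Hypothetical throughout.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Artifact:
    kind: str   # "standup", "client_call", "slack", ...
    text: str   # transcript, message, or diff summary

# Illustrative rubrics; a real deployment would tune these prompts.
RUBRICS = {
    "standup": "Rate the enthusiasm and substance of this update, 0 to 1.",
    "client_call": "Rate the clarity and persuasiveness of this pitch, 0 to 1.",
    "slack": "Rate the professionalism of this message, 0 to 1.",
}

def score_with_llm(rubric: str, artifact: Artifact) -> float:
    """Stand-in for a real model call; returns a deterministic dummy score.

    A deployment would send `rubric` and `artifact.text` to an LLM and
    parse a number out of the reply. This stub ignores the rubric.
    """
    return 0.5 + 0.1 * (len(artifact.text) % 5)

def daily_dashboard(artifacts: list[Artifact]) -> dict[str, float]:
    """Average each artifact kind's scores into one dashboard row."""
    scores: dict[str, list[float]] = {}
    for a in artifacts:
        scores.setdefault(a.kind, []).append(score_with_llm(RUBRICS[a.kind], a))
    return {kind: round(mean(vals), 2) for kind, vals in scores.items()}

day = [
    Artifact("standup", "Still blocked on the migration; should finish today."),
    Artifact("client_call", "The roadmap covers your integration needs by Q3."),
    Artifact("slack", "On it, the fix will be up within the hour."),
]
print(daily_dashboard(day))  # one averaged score per artifact kind
```

The scoring prompt, the artifact feed, and the roll-up are each a few lines; the hard parts (transcription, chat export, commit logs) are already standard enterprise plumbing.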
The labour market problem
The standard objection to workplace surveillance is that workers won’t tolerate it: companies that monitor aggressively will struggle to attract talent. This argument assumes workers have meaningful bargaining power, and there are reasons to believe AI will weaken that power considerably. If AI displaces even a fraction of knowledge workers over the coming decade, the resulting slack in the labour market will shift leverage toward employers. Workers accept conditions they would otherwise reject when the alternative is unemployment, and “we monitor everything but at least you have a job” becomes compelling when jobs are scarce.
You do not need mass unemployment for surveillance to become normalised. You need enough displacement to make workers nervous, and enough nervousness to make resistance seem costly.
The normalisation ratchet
New workplace technologies follow a predictable pattern. A few early adopters implement the technology, framed in neutral terms: not “surveillance” but “performance analytics,” not “monitoring” but “coaching tools.” HR vendors bundle these capabilities into standard enterprise software, making adoption a matter of ticking a box rather than a deliberate choice. Boards ask why they aren’t using the same tools as competitors. Within a few years, what seemed intrusive becomes standard.
This ratchet moves in one direction. Once comprehensive surveillance is normalised in a handful of major employers, opting out starts to look naive. The window during which AI surveillance seems excessive will likely be short, and by the time most people notice what has happened, the new equilibrium will already be established. Technology adoption operates on a timescale of months; political response operates on a timescale of years. The damage, if there is damage, happens in that gap.
What we lose
The natural objection is that honest workers have nothing to fear. If you’re doing your job well, why care whether an AI is watching?
This misunderstands what surveillance does to behaviour. People do not perform identically when observed and unobserved. Constant monitoring changes behaviour in ways that are subtle but cumulatively significant. Workers stop taking risks because failed experiments look bad on a dashboard. They stop admitting uncertainty because hedging is flagged as low confidence. They stop helping colleagues in ways that aren’t visible to metrics because only measurable contributions count. They optimise for what the system can see at the expense of what actually matters.
The mouse jiggler exists because people need some space that isn’t measured, some room to have a bad morning without it being logged and scored. Comprehensive surveillance eliminates that space entirely. Every moment becomes a performance, every interaction a data point, every day a test you might be failing without knowing it.
I do not know whether this future will arrive everywhere. Some employers will resist, some jurisdictions may regulate, some workers may retain enough power to demand limits. What I do know is that the tools now exist, the incentives are aligning, and the usual constraints are weaker than people assume. For the first time in history, your employer can not only watch everything you do but also understand it. That is unprecedented, and it is worth noticing before it becomes normal.

