Trust Me, I’m a Machine: Why We’re Building Surveillance Tools We Can No Longer Believe
The arguable genius of modern oversight is that it watches everything — and can be rewritten to show anything. Welcome to the era where proof is performative, not truthful.
“I like being watched. It’s efficient. I barely have to pretend to be productive — the software tallies my keystrokes, the webcam scores my facial enthusiasm, and some polite cloud of analytics gives my manager a digital elbow when my ‘engagement’ drops below 87%.”
Said nobody, ever.
The idea is that it’s simple: tidy data, tidy workforce, tidy outcomes.
The problem — and it’s a gorgeously existential one — is that we’re building entire regimes of technological trust on the assumption that the things watching us are themselves beyond reproach. Meanwhile, the very same digital sorcery that powers surveillance is now the primary means of corrupting it.
Let’s be blunt: we rely on machines to authenticate reality, then hand those machines the tools to rewrite it.
We have CCTV that records speech and infers heart rates. We have keyloggers that capture every letter we type. We have workplace “productivity platforms” that listen, watch, track, and mood-score their human livestock. And now, slinking in like a villain in a lab coat, come the deepfakes, metadata editors, and AI-generated “reconstructions” — technologies that can make those very same records say whatever someone with motive and a GPU wants them to.
Two realities now exist side by side: one is the world surveillance systems claim to document; the other is the world manipulation systems can manufacture. We trust the first because we built it; we should fear the second because it’s learning to build us.
A few examples to ruin your faith in evidence:
Keystroke logging on corporate machines.
Employers install keyloggers to monitor activity — and sometimes, as Brookings found, those loggers accidentally expose workers’ personal passwords and private accounts. The tech is sold as a productivity tool; in practice, it’s a blunt digital cudgel that turns privacy into a vulnerability spreadsheet.
Webcam and facial-analysis monitoring.
Since remote work exploded, companies have adopted software that uses webcams to analyse eye movements, expressions, and micro-twitches to prove attentiveness. It’s intrusive, stress-inducing, and hilariously unreliable — a point made by the BBC (see links below) and multiple researchers. When your software accuses you of slacking because you blinked too thoughtfully, something’s gone wrong.
Ubiquitous workplace sensors.
CCTV isn’t just cameras anymore. It’s microphones, location trackers, environmental sensors — even heart-rate inference from video. The US Government Accountability Office and The Stack have catalogued how employers now monitor staff like zoo exhibits, often targeting the most precarious workers.
Biometric time clocks and wearable trackers.
Fingerprints, facial scans, smartwatches — all wrapped in cheerful corporate wellness language. The data is “secure,” allegedly, until it’s leaked, spoofed, or repurposed for disciplinary analytics. The issue isn’t just privacy; it’s the creeping absurdity of treating your own pulse as intellectual property.
Productivity analytics platforms.
Tools like ActivTrak turn time-on-apps and screenshot samples into “productivity scores.” Great for colourful dashboards; terrible for understanding creativity, downtime, or actual human thought. Vendors call it “insight.” Critics call it “bossware.”
Meanwhile, the manipulative technologies are quietly unpacking their bags:
Deepfakes and synthetic media.
AI can now convincingly fake faces, voices, and entire events. A video confession, a “caught-on-camera” scandal — all now potentially high-budget fanfiction.
Metadata editing and forensic evasion.
Timestamps, GPS coordinates, and file fingerprints can be scrubbed or rewritten. The digital breadcrumbs that once grounded reality are now editable footnotes (a short sketch of how cheap this is follows this list).
Malware that rewrites logs.
Compromise a surveillance system, and you don’t just spy — you edit history. The log becomes a narrative, the narrative becomes optional.
AI-driven reconstructions.
Generative models “restore” faces, voices, and scenes with impressive confidence — and wildly varying accuracy. The result is plausible alternate realities dressed as recovered truth.
When the proof lies
So what happens when the very technologies we rely on to prove truth are also the ones most capable of undermining it?
We step into an epistemological bear trap.
Proof used to be physical — a signature, a photo, a witness. Now “proof” is a stack of digital artefacts, each one mutable in principle and often entirely editable in practice.
The Four Horsemen of the Digital Trust Apocalypse
Authentication is broken at scale.
Once fakes hit a certain fidelity, humans can’t tell the difference. We used to say “seeing is believing.” Now “seeing” requires a footnote and a forensic audit. If CCTV can be deepfaked and keystrokes rewritten, what does evidence even mean?
Surveillance is rewarded; security is not.
Companies buy monitoring tools because they look scientific and managerial. They rarely invest in the boring, independent systems that make surveillance tamper-evident. You can buy a dashboard cheaply; you can’t buy certainty.
The social contract corrodes.
Surveilled workers don’t create — they perform. Every action becomes theatre for the algorithm. A surveillance-first workplace treats humans not as moral agents, but as faulty sensors needing calibration.
Provenance is an afterthought.
We’ve built global surveillance empires without building equivalent systems for verifying the footage. We rely on platform lock-in and vendor assurances, as if “trust us” were a cybersecurity protocol.
The philosophical hangover
Trust is both personal and institutional. We once expected courts, journalists, and auditors to establish truth. Now we quietly outsource that role to machines. Cameras seem impartial. Algorithms seem objective. But impartiality isn’t neutrality — it’s automation wearing that very same lab coat.
An algorithm reflects its training data; a camera reflects its operator; a recorded file reflects its editor. None of these are “truth.” They’re just well-formatted interpretations.
So what would sanity look like?
• Design for tamper-evidence, not just tamper-resistance.
Use cryptographic proofs, decentralised ledgers, or hardware attestations — so that evidence carries its own receipts (a minimal sketch follows this list).
• Demand independent audits.
If a system can judge a human, it should be judged in return — publicly, transparently, and regularly.
• Monitor only what’s necessary.
“Because we can” is not an ethical defence. Surveillance should have to justify itself, not just sell itself.
• Treat all digital evidence as potentially doctored.
Assume manipulation. Require corroboration. Trust but verify — then verify the verifier.
• Reintroduce friction.
Sometimes authenticity requires bureaucracy: countersignatures, multi-factor attestations, or multiple sources for high-stakes claims.
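As a hedged illustration of the first point, here is a minimal Python sketch of a tamper-evident (not tamper-proof) log: each entry commits to the hash of the one before it, so quietly rewriting history breaks the chain. The event strings are made up, and a real deployment would also anchor the latest hash somewhere the log’s operator cannot reach (a signed timestamp, a second organisation, a public ledger).

```python
import hashlib
import json
import time

def append_entry(chain: list, event: str) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; an edit anywhere breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"ts": entry["ts"], "event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False  # someone edited history
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "badge=4411 door=server_room GRANTED")
append_entry(log, "camera_03 stream started")
print(verify_chain(log))   # True

log[0]["event"] = "badge=4411 door=server_room DENIED"  # quiet rewrite
print(verify_chain(log))   # False: the receipts no longer add up
```

The point isn’t the twenty lines of code; it’s the posture. Evidence that can prove it hasn’t been edited is categorically different from evidence that merely asserts it.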
This isn’t Luddism. It’s not a call to smash cameras or hurl your smartwatch into a lake (though that does sound cleansing). It’s a plea for humility. We’ve built extraordinary tools that amplify sight and memory — and now we need matching tools for verification and restraint.
Otherwise, we’ll end up in a future where “evidence” is performance art: curated, polished, and designed to convince.
There’s a bitter comedy to it all. We spend billions building systems to make us safer and more efficient — while the cheapest hack can flip the entire script. The tech industry, in its infinite irony, acts like an arsonist who sticks around to help the fire brigade plan the evacuation.
We marvel at automation’s accuracy, then panic when the automated records turn out to be as flexible as a Wikipedia entry on a celebrity scandal.
We live, for now, in the in-between: a world where surveillance is omnipresent and certainty is fragile. The absurdity of trusting the same cleverness that forges our evidence to also police it should do more than amuse — it should alarm.
Until then, smile for the camera. Just know that one day, it might smile back with someone else’s face and someone else’s words.
Thank you for reading! If you liked this article, please remember it was free to read and, as a consequence, Substack won’t push it as much as paid content. Please take a second to hit the Share button above, and if you’d like to see more content like this, please click the Subscribe button below to receive all future and past free content.
If you’re interested in supporting my work, one-off donations on Ko-fi can be as little as the price of a coffee. There is also a Tip Jar button below where you can tip whatever you’d like. For those looking to provide long-term support, annual subscriptions are available through the Substack subscription page; there is literally now an option to pay just $1.00 a month! Every contribution, no matter how small, is greatly appreciated! Otherwise, I’d still love that Sub & Share, which costs absolutely nothing!
Many thanks, James.
References and further reading:
How employers use technology to surveil employees — Brookings, 2021
https://www.brookings.edu/research/how-employers-use-technology-to-surveil-employees/
How worker surveillance is backfiring on employers — BBC, 30 Jan 2023
https://www.bbc.com/news/technology-64429147
Big boss is watching you: The growth of workplace surveillance technology — The Stack, 12 Sep 2024
https://www.thestack.technology/big-boss-is-watching-you/
Types of Employee Monitoring — ActivTrak
https://www.activtrak.com/
Redefining Productivity in the Age of Workplace Surveillance — Human Rights Research Center, 2025
https://www.humanrightsresearch.org/
The Glass Narc: How Your iPhone Is Becoming a Warden in Your Pocket:
Apple’s on-device processing isn’t just creepy — it quietly breaks encryption, misunderstands context, and could land you in serious trouble in the wrong regime.





