AI Should Work For Your Team, Not On Them
- Arun Rao

I’ve been thinking a lot about digital twins of employees lately. Not because the technology isn’t impressive; it absolutely is. The promise is that they can simulate workflows and preserve institutional knowledge. But I keep seeing a familiar, counterproductive trend: powerful tools built to work against people instead of for them. The critical distinction is whether a technology is designed to enhance people or merely to constrain their behavior.
The Mobile-Phone Lesson
When mobile phones revolutionized work, they didn’t do so by tracking our every move. They made us productive by putting powerful capabilities in our hands—communication, information, computation—whenever and wherever we needed them. They enhanced human potential.
The difference? Autonomy versus surveillance.
The Surveillance Paradox
Here’s what recent research reveals about workplace AI monitoring:
- A study by researchers at Cornell University found that organizations using AI to monitor employee behavior and productivity reported more complaints, lower productivity, and higher quit rates, especially when monitoring replaced or significantly undermined human managerial judgment.
- In a Pew Research Center survey, most workers said they would feel uneasy if employers used AI to monitor or evaluate them, including tracking movements, desk time, or tone of voice, and many worried the data would be misused.
- A policy primer warns that expanding data-collection and productivity-scoring tools risks turning workplaces into experimental zones with weak oversight, undermining human rights, morale, and job quality.
These findings align with what I’ve observed: when surveillance erodes human agency, trust falls away and performance often suffers.
What Gets Measured Gets … Hidden
The goal of digital twins is noble: preserve institutional knowledge, enable async collaboration, prevent expertise from walking out the door.
But here’s the paradox: when people know their communications and behaviors are being captured and can be queried by anyone, the important work often moves offline. Strategic conversations happen in hallways. Critical decisions are made verbally. Real innovation occurs in spaces that can’t be tracked, because the most valuable work often looks like “wasting time” to a surveillance system.
You end up training your “corporate AI” on performance theater, while the real insight lives in handwritten notes, undocumented hallway discussions and informal networks.
Research on human-digital twin frameworks in industrial/office settings confirms that while the technology holds promise, its benefit depends heavily on governance, transparency and human-in-the-loop design.
The Question That Matters
Before deploying any workplace AI, the question is: Does this technology enhance human capability or constrain human behavior?
Mobile phones: Enhancement—they gave us superpowers.
Keystroke loggers: Constraint—they make us perform.
The pattern? Technology should make people more capable—not more compliant.
With Project Rampart, our focus was defending AI systems—protecting them from attacks while preserving transparency and trust. With Samvid, our goal is giving teams visibility into contracts and pricing—information that helps them make better decisions, not systems that second-guess their judgment.
The 5 Principles of Empowering AI
1. Transparency first: people know what is being captured, and why.
2. Voluntary adoption: tools should create value so compelling that the team wants to use them, making mandates unnecessary.
3. Capability enhancement: tools that make your job easier, not just make you more visible.
4. Human agency: AI that supports decisions but doesn’t replace the final human decision-maker.
5. Purpose limitation: built for a specific value-add, not mission-creep surveillance.
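To make these principles concrete, here is a minimal sketch of how a team might encode them as a deployment checklist that gets reviewed before any workplace AI feature ships. The `DeploymentPolicy` record, its field names, and the `review` helper are illustrative inventions, not a real framework:

```python
from dataclasses import dataclass

@dataclass
class DeploymentPolicy:
    """Illustrative policy record for one workplace AI feature."""
    feature: str
    data_captured: list[str]        # Transparency: what is collected...
    stated_purpose: str             # ...and why (purpose limitation)
    opt_in: bool                    # Voluntary adoption, not a mandate
    user_facing_benefit: str        # Capability enhancement for the user
    human_makes_final_call: bool    # Human agency on consequential calls

def review(policy: DeploymentPolicy) -> list[str]:
    """Return principle violations; an empty list means the feature passes."""
    violations = []
    if not policy.opt_in:
        violations.append("Adoption is mandatory, not voluntary.")
    if not policy.human_makes_final_call:
        violations.append("AI replaces the final human decision-maker.")
    if not policy.user_facing_benefit:
        violations.append("No direct benefit to the person being measured.")
    if "everything" in (d.lower() for d in policy.data_captured):
        violations.append("Open-ended capture: purpose limitation fails.")
    return violations

# Example: a contract-visibility feature in the spirit of Samvid.
policy = DeploymentPolicy(
    feature="contract-pricing-insights",
    data_captured=["executed contracts", "list prices"],
    stated_purpose="help teams negotiate better terms",
    opt_in=True,
    user_facing_benefit="faster answers about existing contract terms",
    human_makes_final_call=True,
)
print(review(policy) or "Passes the empowerment review.")
```

The point isn’t the code; it’s that each principle becomes a named, reviewable field rather than an afterthought.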
The AI Surveillance Red Flag Test 🚩
If you’re a buyer, here’s a quick litmus test for identifying surveillance disguised as productivity:
| 🚩 Red Flag (if the vendor boasts this...) | ➡️ It's Probably Surveillance |
| --- | --- |
| "It calculates a 'Productivity Score'." | Focuses on constraint, not capability. |
| "It tracks time at the desk." | Measures compliance, not output or value. |
| "It gives managers a dashboard of hourly activity." | Promotes surveillance, not coaching or support. |
A Challenge to Builders and Buyers
We are at a crucial inflection point. The AI systems we fund, build, and deploy in the next few years will set the default expectations for how people work for decades to come.
We have the capacity to build tools that genuinely enhance human potential—systems that handle the tedious so people can focus on the strategic, technology that surfaces insights so teams can make truly better decisions, and platforms that preserve knowledge because people willingly share it.
The alternative, surveillance dressed up as productivity, is a self-defeating strategy: in knowledge-intensive, collaborative environments, the Aspen Institute and others have found that productivity surveillance reduces performance and drives away top talent.
The choice seems clear:
The future of work isn't about perfectly measurable compliance; it's about empowered human capability.
I’m curious: for those of you building or buying workplace AI, how do you ensure your technology clears this high bar? What specific mechanisms do you use to guarantee your systems enhance rather than constrain?
A Critical Legal Note: If you are considering systems that ingest all company communications (emails, chats, documents) for the purpose of creating a digital twin, you are entering an untested and potentially dangerous legal area regarding AI and attorney-client privilege. Talk to your general counsel immediately. Don't let your company be the test case that establishes the legal precedent for destroying privilege via algorithm.
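As a purely technical complement to that conversation with counsel, ingestion pipelines can at least default to excluding anything that might be privileged. The sketch below assumes a hypothetical `Message` record and a counsel-address list maintained by your legal team; deciding what actually counts as privileged is a legal question, which is why the filter errs on the side of dropping too much:

```python
from dataclasses import dataclass

# Hypothetical list, supplied and maintained by legal, not engineering.
COUNSEL_ADDRESSES = {"gc@example.com", "outside.counsel@example.com"}

@dataclass
class Message:
    sender: str
    recipients: list[str]
    body: str

def may_be_privileged(msg: Message) -> bool:
    """Conservatively flag any message that touches legal counsel."""
    participants = {msg.sender, *msg.recipients}
    return bool(participants & COUNSEL_ADDRESSES)

def ingestible(corpus: list[Message]) -> list[Message]:
    """Default-deny filter: drop anything that might be privileged."""
    return [m for m in corpus if not may_be_privileged(m)]

msgs = [
    Message("ceo@example.com", ["gc@example.com"], "About the pending suit..."),
    Message("pm@example.com", ["eng@example.com"], "Sprint notes attached."),
]
print(len(ingestible(msgs)))  # 1 -- the counsel thread never enters the twin
```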
